
Getting started with AVR and C

Started by Robert Roland November 24, 2012
In comp.lang.c David Brown <david@westcontrol.removethisbit.com> wrote:

(snip)
>> Now it gets interesting. When were the int_32_t and int_16_t added to C?
> The types were officially added with C99, but they existed in practice
> before that as "long int" and "short int" on most compilers (some
> targets don't support 16-bit types, and thus have "short int" as 32-bit
> and no int16_t. And the standards allow compilers with a "short" of
> 64-bit or more, in which case neither "int32_t" nor "int16_t" would
> exist - but I have never heard of such a beast).
>> Seems to me that compilers only claiming a version of the standard
>> before they were added wouldn't have to use the same rules.
> The rules haven't changed (again, sorry for my mistaken post). Types
> such as "int32_t" are just typedef's for "normal" C types.
Most likely, but as they aren't in the C89 standard, unless the user typedef's them, seems to me the compiler is free to implement them in any way desired.

-- glen
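For readers who came in late: before C99, fixed-width names like these were typically supplied by the user or by a project header, roughly along the lines of the sketch below. The underlying types shown are an assumption for one particular target (16-bit short, 32-bit long); they would differ elsewhere, and such a header must not be combined with a C99 <stdint.h>.

/* pre-C99 style: project-supplied fixed-width typedefs; the correct
 * underlying types depend entirely on the target and compiler */
typedef signed char    int8_t;
typedef short int      int16_t;
typedef long int       int32_t;
typedef unsigned char  uint8_t;
typedef unsigned short uint16_t;
typedef unsigned long  uint32_t;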
In comp.lang.c Phil Carmody <thefatphil_demunged@yahoo.co.uk> wrote:

(snip, someone wrote)

>>>> Shouldn't casting just one of the 16 bit values work the same as
>>>> casting both of them?
> i.e. that there was one cast to 32 bits already. Therefore ANSI C
> says that there would be a second conversion to 32 bits of the other operand.
> Or at least the resulting code should behave *as if* that had happened.
> (I'm pretty sure I've seen an architecture with shorter*longer->longer
> opcodes.)
Maybe, but many have N*N-->2N multiply. Some compilers figure out that if you cast one (or both) operands from a shorter length, they can use such a multiply on the shorter length. This is especially important when the size is large enough that the hardware doesn't support it. Many 32 bit machines have a 32*32 --> 64 multiply, and a 64 bit (long long) type. If you cast one (or both) 32 bit ints to 64 bit (long long), the compiler knows it can use the 32*32 --> 64 multiply.
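A minimal sketch of the idiom glen describes, assuming a 32-bit target where long long is 64 bits and the hardware has a 32x32 --> 64 multiply; the function names are illustrative only:

#include <stdint.h>

/* The cast on one operand says the full 64-bit product is wanted; the
 * other operand is converted to 64 bits as well.  A compiler for
 * hardware with a 32x32 -> 64 multiply can typically emit that single
 * instruction here rather than a full 64x64 multiply. */
int64_t mul32x32_64(int32_t a, int32_t b)
{
    return (int64_t)a * b;
}

/* Without the cast, the multiply is carried out in 32 bits, and a
 * product needing more than 32 bits is lost (strictly, signed overflow
 * is undefined behaviour) before the result is widened for return. */
int64_t mul32_truncated(int32_t a, int32_t b)
{
    return a * b;
}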
> Initially I thought what Tim wrote was in error, but upon unravelling
> the thread, I worked out that he had gone forward from the premises
> correctly, and others hadn't. Without those premises - confusion ensues.
-- glen
On 29.11.2012 09:37, David Brown wrote:
> On 29/11/2012 03:31, glen herrmannsfeldt wrote:
>> Consider that a compiler might have a int_128_t that it could add and
>> subtract, but not multiply or divide. Maybe it can generate a 128 bit
>> product from two 64 bit operands. Does the standard prohibit a
>> compiler from offering those operations?
> The "int128_t" here is either a typedef for an existing C type (which
> could include "long long int" in C99), in which case it would have to
> support all integral operations, or it is purely a compiler extension,
> in which case it is non-standard.
Since we were nit-picking anyway: not quite. As of C99 the standard explicitly foresees the possible need to have more than the usual 10 different integer types ({signed|unsigned} {char|short|int|long|long long}) in a target. That's why they included a provision for "extended integer types". These types don't have standardized names (because they can't), but their behaviour is still covered by the standard. So the type behind int128_t need not be an "existing C type" (as in: something that was already defined before), nor is it allowed to be a pure compiler extension (which the standard would have no say over at all). If it's an extension, it has to be a standard extension, so its behaviour is ruled by the standard.

> But I believe the standards say that
> /if/ a type of this form "int128_t" is defined in standard headers for
> the compiler, then it must act as a full integral type of that size.
Yes.
On 29/11/2012 12:31, glen herrmannsfeldt wrote:
> In comp.lang.c David Brown <david@westcontrol.removethisbit.com> wrote:
>
> (snip)
>>> Now it gets interesting. When were the int_32_t and int_16_t added to C?
>
>> The types were officially added with C99, but they existed in practice
>> before that as "long int" and "short int" on most compilers (some
>> targets don't support 16-bit types, and thus have "short int" as 32-bit
>> and no int16_t. And the standards allow compilers with a "short" of
>> 64-bit or more, in which case neither "int32_t" nor "int16_t" would
>> exist - but I have never heard of such a beast).
>
>>> Seems to me that compilers only claiming a version of the standard
>>> before they were added wouldn't have to use the same rules.
>
>> The rules haven't changed (again, sorry for my mistaken post). Types
>> such as "int32_t" are just typedef's for "normal" C types.
>
> Most likely, but as they aren't in the C89 standard, unless the
> user typedef's them, seems to me the compiler is free to implement
> them in any way desired.
That is correct - but I would be very surprised to see a compiler that did have a type with a name like that, and did not implement it the obvious way. It might be /legal/ under C89 rules for the compiler to have a type called "int32_t" with different behaviour, but I can't imagine it actually being the case. I'm sure Tim Wescott can think of an exception, however!
On 11/29/2012 01:59 AM, Tim Wescott wrote:
> On Wed, 28 Nov 2012 23:39:52 -0500, James Kuyper wrote:
>
>> On 11/28/2012 07:29 PM, Tim Wescott wrote:
...
>> Sort-of, but not quite. When he said "Nope", he wasn't referring to your
>> expectation that gcc-avr was ANSI compliant. He was referring to your
>> expectation that it would promote 16-bit integers to 32 bits. On a
>> conforming implementation of C with 16-bit ints, promotion of integer
>> types halts at 16 bits, and goes no further.
>
> How can you possibly know? Do you read his mind? Have an uncited
> conversation with him? Is he your sock-puppet?
I can read and understand English, and in particular, the specialized dialect of it which is sometimes called "standardese". I understood precisely what he was talking about. In particular, I understand what "promotion" means in the context of the C standard, and know that you used the term incorrectly, something which you still do not seem to have understood - nothing in your comments indicates any awareness that this is the issue we're both talking about.
>> It was that first quote from you that I'm correcting. Not any other
>> statement. Specifically:
>>
>>> I would expect that gcc would be ANSI compliant, and would therefore
>>> promote both 16-bit integers to 32-bit before doing the multiply.
>
> Oh Christ. READ THE CONTEXT. That statement was made in reply to a
> question asking about what would happen if you cast one of the operands
> to 32 bit! And you're replying to a post that told you that it was
> misleading without its context, and again taking it out of context.
A conforming implementation of C will promote integer values to 32 bits only if 'int' is exactly 32 bits. Do you believe that the context I've missed changed 'int' to a 32-bit type? If not, your use of "promote" to describe that conversion is incorrect, though your expectation that there would be such a conversion is accurate.

-- James Kuyper
On 11/29/2012 05:03 AM, Phil Carmody wrote:
> James Kuyper <jameskuyper@verizon.net> writes:
...
>> Sort-of, but not quite. When he said "Nope", he wasn't referring to your
>> expectation that gcc-avr was ANSI compliant. He was referring to your
>> expectation that it would promote 16-bit integers to 32 bits. On a
>> conforming implementation of C with 16-bit ints, promotion of integer
>> types halts at 16 bits, and goes no further.
>
> The context was
>
>>>> Shouldn't casting just one of the 16 bit values work the same as
>>>> casting both of them?
>
> i.e. that there was one cast to 32 bits already. Therefore ANSI C
> says that there would be a second conversion to 32 bits of the other operand.
Of course. I knew that context, and knew that the conclusion you describe was the correct result. However, that's not the conclusion Tim Wescott reached - the conversion you describe is part of the usual arithmetic conversions (6.3.1.8p1) but is NOT an integer promotion (6.3.1.1p2).
> Initially I thought what Tim wrote was in error, but upon unravelling
> the thread, I worked out that he had gone forward from the premises
> correctly, and others hadn't. Without those premises - confusion ensues.
Those premises had nothing to do with the confusion, which is about the meaning of the word "promote" in the context of the C standard.

-- James Kuyper
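On the AVR itself, where avr-gcc has a 16-bit int, the distinction being argued can be seen in a short sketch. The function names are illustrative only, and the first comment applies only on a 16-bit-int target:

#include <stdint.h>

int32_t mul_narrow(int16_t a, int16_t b)
{
    /* With a 16-bit int, the integer promotions leave a and b at
     * 16 bits, so the multiply is done in 16 bits; a product needing
     * more than 16 bits is lost (strictly, signed overflow is
     * undefined behaviour) before the result is widened for return. */
    return a * b;
}

int32_t mul_wide(int16_t a, int16_t b)
{
    /* Casting just one operand is enough: the usual arithmetic
     * conversions (6.3.1.8) -- not the integer promotions -- then
     * convert the other operand to 32 bits as well, and the multiply
     * is carried out in 32 bits. */
    return (int32_t)a * b;
}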
David Brown <david@westcontrol.removethisbit.com> writes:

> On 29/11/2012 03:31, glen herrmannsfeldt wrote:
>> In comp.lang.c David Brown <david.brown@removethis.hesbynett.no> wrote:
>>
>> (snip, someone wrote)
>>>>> I would expect that gcc would be ANSI compliant, and would therefore
>>>>> promote both 16-bit integers to 32-bit before doing the multiply.
>>
>> (snip, then I wrote)
>>>> Maybe I am missing something here, but are there versions of gcc for 16
>>>> bit processors, with 16 bit int? If so, then promotion to int won't
>>>> promote to 32 bits without a cast.
>>
>>> The correct behaviour for C standards compliance is that when you
>>> multiply two operands of different int size, the smaller one is promoted
>>> to the size of the larger one. Then the multiply is carried out modulo
>>> the size of the larger one. Then the result is truncated or extended as
>>> needed to fit the target variable.
>>
>> I haven't read the standard so recently, but I thought that was only
>> after the default promotions. Values smaller than int would be promoted
>> to int, then the size of the multiply (and product) determined.
>
> As pointed out by Hans-Bernhard, you are correct here. I'm sorry for
> causing confusion by posting while half asleep.
>
> Default "int" promotions are done first for each operand. They are
> promoted to "signed int", "unsigned int", "signed long int" or
> "unsigned long int" (and "long long" for newer C standards), stopping
> at the first type that covers the entire range. In practice, this
> means anything smaller than an "int" will get promoted to a "signed
> int".
That's not quite right, though I'm not exactly sure what you mean. As you say, the integer promotions are done first. That can produce either an int or an unsigned int, or it may have no effect at all if the type is already "bigger" than an int. Then one operand, but sometimes both operands, are further converted (not promoted) to get a common type.

You can read the rules at www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf (sec. 6.3.1.8) but the conversion does not stop at the first type that covers the entire range (at least according to how I interpret that phrase). For example, in (unsigned)1 * (signed)-1 the signed operand is converted to the type of the unsigned one even though that type can't cover the entire range of the operand or operands.

Almost any summary of the rules is going to be wrong; if an accurate summary can be written it should go into the language standard -- it would be a great boon -- but I don't think that's possible. For example I had to put "bigger" in quotes because the rule is based on a technical term called the conversion rank of the type and not on its size. (The integer promotions have no effect on a long int even on systems where a long int is no bigger than an int).

<snip>

-- Ben.
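Ben's (unsigned)1 * (signed)-1 example can be checked directly. On a typical hosted platform the small program below prints UINT_MAX twice; the exact value depends on the width of int:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* Usual arithmetic conversions: the int operand -1 is converted to
     * unsigned int, which cannot represent it, so it wraps to UINT_MAX. */
    unsigned int result = (unsigned)1 * (signed)-1;

    printf("%u\n", UINT_MAX);
    printf("%u\n", result);
    return 0;
}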
Hans-Bernhard Bröker <HBBroeker@t-online.de> writes:
<snip>
> Since we were nit-picking anyway: not quite. As of C99 the standard
> explicitly foresees the possible need to have more than the usual 10
> different integer types ({signed|unsigned} {char|short|int|long|long
> long}) in a target.
These may be the usual ones, but there are 11 "standard integer types" because they include _Bool. (Well, you did say we are nit-picking!)

-- Ben.
On 11/29/2012 03:37 AM, David Brown wrote:
...
> Default "int" promotions are done first for each operand. They are
> promoted to "signed int", "unsigned int", "signed long int" or "unsigned
> long int" (and "long long" for newer C standards), stopping at the first
> type that covers the entire range. In practice, this means anything
> smaller than an "int" will get promoted to a "signed int".
Not quite. The integer promotions never change anything to any type other than 'int' or 'unsigned int':

"If an int can represent all values of the original type (as restricted by the width, for a bit-field), the value is converted to an int; otherwise, it is converted to an unsigned int. These are called the integer promotions.58) All other types are unchanged by the integer promotions." (6.3.1.1p2)

The first use of "integer promotions" in that clause is italicized, which is an ISO convention indicating that the sentence containing that phrase serves as the definition of the phrase.

...
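As a concrete illustration of that definition, assuming a common hosted platform with 32-bit int (the last value printed depends on the width of long):

#include <stdio.h>

int main(void)
{
    unsigned char  uc = 0xFF;
    unsigned short us = 0xFFFF;
    unsigned long  ul = 0xFFFFFFFFUL;

    /* uc and us promote to (signed) int, since int can represent all
     * of their values, so negating them yields ordinary negative ints.
     * ul is "unchanged by the integer promotions", so -ul wraps modulo
     * ULONG_MAX + 1 instead. */
    printf("%d %d %lu\n", -uc, -us, -ul);
    return 0;
}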
>> Now it gets interesting. When were the int_32_t and int_16_t added to C?
>
> The types were officially added with C99, but they existed in practice
> before that as "long int" and "short int" on most compilers (some
> targets don't support 16-bit types, and thus have "short int" as 32-bit
> and no int16_t. And the standards allow compilers with a "short" of
> 64-bit or more, in which case neither "int32_t" nor "int16_t" would
> exist - but I have never heard of such a beast).
I've heard of machines with 32-bit short, but not 64-bit. Note that while int32_t and int16_t could not be provided by <stdint.h> for such a compiler, int_least32_t and int_fast32_t (and similarly for 16) must be.
>> Seems to me that compilers only claiming a version of the standard
>> before they were added wouldn't have to use the same rules.
>
> The rules haven't changed (again, sorry for my mistaken post). ...
Yes they have. In C99 and later, <stdint.h> and <inttypes.h> are standard headers, and if #included, the identifiers they define must meet certain well-specified requirements. In C90, there were no such standard headers, no guarantees on what a header file with that name would contain if you successfully #included it, and no corresponding restrictions on how user code could use those identifiers after #including those header files.
> ... Types
> such as "int32_t" are just typedef's for "normal" C types.
That depends upon what you mean by 'normal'. The C99 standard distinguishes between standard and extended integer types. The standard integer types have names specified by the C standard; extended types are implementation-defined, and may have other names. There are many standard typedefs that are required to have either arithmetic or integer type; but there are none that are restricted to standard integer types. Would you consider __extended_integer_type to be a "normal" C type? ...
> The "int128_t" here is either a typedef for an existing C type (which
> could include "long long int" in C99), in which case it would have to
> support all integral operations, or it is purely a compiler extension,
> in which case it is non-standard.
No, supporting int128_t would not be a non-standard extension; it's just providing an optional feature of standard C. The key difference is that if an implementation chooses to support an optional feature, it must support it in precisely the manner specified by the standard for that feature; extensions give an implementation a lot more freedom.

In C2011, there are a lot of optional features. The only size-named types that a conforming implementation of <stdint.h> must provide are [u]int_leastN_t and [u]int_fastN_t for N = 8, 16, 32, and 64. For all other values of N, and for [u]intN_t for all values of N, the typedefs are optional. You can determine precisely which of the optional <stdint.h> types are supported by #ifdef of the corresponding *_MAX macro. If that macro is #defined, you can use the corresponding type in full confidence that it behaves precisely as specified by the standard.

-- James Kuyper
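A small sketch of that detection idiom; the typedef name is purely illustrative:

#include <stdint.h>

/* INT32_MAX is #defined exactly when the optional exact-width type
 * int32_t exists, so its presence can be tested at compile time.
 * The least-width fallback is always available in C99 and later. */
#ifdef INT32_MAX
typedef int32_t sample_t;
#else
typedef int_least32_t sample_t;
#endif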
On 11/29/2012 09:07 AM, James Kuyper wrote:
...
> "If an int can represent all values of the original type (as restricted
> by the width, for a bit-field), the value is converted to an int;
> otherwise, it is converted to an unsigned int. These are called the
> integer promotions.58) All other types are unchanged by the integer
> promotions." (6.3.1.1p2) The first use of "integer promotions" in that
> clause is italicized, which is an ISO convention indicating that the
> sentence containing that phrase serves as the definition of the phrase.
I just realized that the meaning of the phrase "All other types" is not clear without the preceding part of that clause which I snipped:
> The following may be used in an expression wherever an int or unsigned int may
> be used:
> — An object or expression with an integer type (other than int or unsigned int)
>   whose integer conversion rank is less than or equal to the rank of int and
>   unsigned int.
> — A bit-field of type _Bool, int, signed int, or unsigned int.
-- James Kuyper
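A small illustration of the bit-field case in that list (the struct and field names are illustrative):

#include <stdio.h>

struct flags {
    unsigned int ready : 1;   /* a bit-field of type unsigned int */
};

int main(void)
{
    struct flags f = { 1 };
    /* An int can represent every value of a 1-bit unsigned bit-field,
     * so f.ready is promoted to (signed) int, not to unsigned int, and
     * the negation below is an ordinary int with value -1. */
    printf("%d\n", -f.ready);
    return 0;
}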
