
Getting started with AVR and C

Started by Robert Roland November 24, 2012
In comp.lang.c David Brown <david.brown@removethis.hesbynett.no> wrote:

(snip, someone wrote)
>>> I would expect that gcc would be ANSI compliant, and would therefore
>>> promote both 16-bit integers to 32-bit before doing the multiply.
(snip, then I wrote)
>> Maybe I am missing something here, but are there versions of gcc for 16
>> bit processors, with 16 bit int? If so, then promotion to int won't
>> promote to 32 bits without a cast.
> The correct behaviour for C standards compliance is that when you
> multiply two operands of different int size, the smaller one is promoted
> to the size of the larger one. Then the multiply is carried out modulo
> the size of the larger one. Then the result is truncated or extended as
> needed to fit the target variable.
I haven't read the standard so recently, but I thought that was only after the default promotions. Values smaller than int would be promoted to int, then the size of the multiply (and product) determined. If you multiply two 8 bit unsigned char values, is the product modulo 256? I don't think so.
> So the bit-size of the processor, and the bit-size of "int" on that
> particular target, is irrelevant. And the size of the result variable
> is also irrelevant (this catches out some newbies).
> Given:
> int16_t a, b;
> int32_t c;
> c = (int32_t)a * b;
> Then b is cast to int32_t, the 32-bit multiplication is carried out, and
> the result assigned to c.
Maybe not completely irrelevant, consider a system with a 64 bit int.
> If you write just "c = a * b", then the multiplication is carried out at
> 16-bit, then promoted to 32-bit. This applies regardless of the
> bit-size of the target - you will get the same effect on a 64-bit cpu as
> on the 8-bit AVR.
Regardless of the target size, but not of int size.
> If your compiler does 16-bit multiplications when you have "c =
> (int32_t) a * b", and requires two "int32_t" casts to do 32-bit
> multiplication, then your compiler is very badly broken. As Tim says,
> badly broken compilers /do/ exist, so if you have to use them, then you
> need to use two casts. But I personally don't think you need to write
> your code to work with broken toolchains unless you actually have to.
Now it gets interesting. When were the int_32_t and int_16_t added to C? Seems to me that compilers only claiming a version of the standard before they were added wouldn't have to use the same rules.

Consider that a compiler might have an int_128_t that it could add and subtract, but not multiply or divide. Maybe it can generate a 128 bit product from two 64 bit operands. Does the standard prohibit a compiler from offering those operations?

-- glen
glen herrmannsfeldt <gah@ugcs.caltech.edu> writes:

> In comp.lang.c David Brown <david.brown@removethis.hesbynett.no> wrote:
>
> (snip, someone wrote)
>>>> I would expect that gcc would be ANSI compliant, and would therefore
>>>> promote both 16-bit integers to 32-bit before doing the multiply.
>
> (snip, then I wrote)
>>> Maybe I am missing something here, but are there versions of gcc for 16
>>> bit processors, with 16 bit int? If so, then promotion to int won't
>>> promote to 32 bits without a cast.
>
>> The correct behaviour for C standards compliance is that when you
>> multiply two operands of different int size, the smaller one is promoted
>> to the size of the larger one. Then the multiply is carried out modulo
>> the size of the larger one. Then the result is truncated or extended as
>> needed to fit the target variable.
>
> I haven't read the standard so recently, but I thought that was only
> after the default promotions. Values smaller than int would be promoted
> to int, then the size of the multiply (and product) determined.
Yes, it's a two-stage process. In case it helps, here is the terminology as used by the C standard:

"integer promotions"
These are the conversions that often occur prior to performing an arithmetic operation. They form part of the:

"usual arithmetic conversions"
which is how a common type is arrived at for arithmetic operations that require it. Unless complex or floating types are involved, the integer promotions are performed on both operands and then a set of rules is used to determine the common type. For integer types it is usually simply the widest of the two, even if that is an unsigned type and the other is a signed type.

"default argument promotions"
apply to function calls in the absence of a prototype. These are the integer promotions augmented by a conversion of float to double.

The standard never refers to a conversion as a cast (a cast is an operator that performs an explicit conversion), and it uses the term "promotion" only in the context of the implicit conversions described above. A conversion of one type to a wider one in some other context is not called a promotion.
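[A minimal sketch of those two stages, assuming a hosted compiler where int is 32 bits wide; the function name is just for illustration:]

void conversions_sketch(void)
{
    unsigned char a = 200, b = 200;

    /* Integer promotions: both unsigned chars become int, so the multiply
       is done at int width and yields 40000, not 40000 % 256. */
    int product = a * b;

    /* Usual arithmetic conversions: int meets unsigned int, the common
       type is unsigned int, so -1 is converted to UINT_MAX before the add. */
    unsigned int u = 1;
    int s = -1;
    unsigned int sum = u + s;   /* 0, computed as (1 + UINT_MAX) modulo 2^32 */

    (void)product;
    (void)sum;
}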
>
> If you multiply two 8 bit unsigned char values, is the product
> modulo 256? I don't think so.
No, it's not. <snip>
> Now it gets interesting. When were the int_32_t and int_16_t added to
> C?
1999. The types are intN_t and uintN_t (no extra _) for various N. They are optional, but must be defined if the implementation has suitable types (basically 2's complement and no padding bits). Other, similar, types like int_leastN_t and int_fastN_t are required in all implementations. For example, int_least32_t is the smallest type that has at least 32 (value) bits.
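[A short sketch of those type families, assuming a C99 <stdint.h> and <inttypes.h>:]

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int32_t       z = 100000;   /* optional: exactly 32 bits, 2's complement, no padding */
    int_least32_t x = 100000;   /* required: smallest type with at least 32 value bits */
    int_fast16_t  y = 1234;     /* required: a "fast" type with at least 16 bits */

    printf("%" PRId32 " %" PRIdLEAST32 " %" PRIdFAST16 "\n", z, x, y);
    return 0;
}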
> Seems to me that compilers only claiming a version of the standard
> before they were added wouldn't have to use the same rules.
True.
> Consider that a compiler might have an int_128_t that it could add and
> subtract, but not multiply or divide. Maybe it can generate a 128 bit
> product from two 64 bit operands. Does the standard prohibit a
> compiler from offering those operations?
I don't think it could define int128_t unless it could multiply them.
-- 
Ben.
In comp.lang.c Ben Bacarisse <ben.usenet@bsb.me.uk> wrote:

(snip, I wrote)

>> If you multiply two 8 bit unsigned char values, is the product
>> modulo 256? I don't think so.
> No, it's not.
For comparison purposes, I believe that Fortran does not have this rule. If you add, subtract, multiply, or (I believe) divide 8 bit integers the result is, generally, eight bits. One should, at least, not be surprised if the result is computed modulo some small value. I like the C rules better.

PL/I generally tries to keep the bits until it reaches the implementation maximum. That is complicated when scaled fixed point (non-integer) values are used, where it keeps the appropriate digits (binary or decimal) to the right of the radix point, possibly truncating on the left.

-- glen
On 11/28/2012 07:29 PM, Tim Wescott wrote:
...
> Grant did not include all of the context, so you need to read back a bit.
>
> The original statement was that (a) int16_t * int16_t coughs up a 16-bit
> result, unless (b) one of the int16_t numbers is cast to 32 bit.
>
> Then I pointed out that (c) there are some older, non-compliant compilers
> where you have to cast _both_ 16-bit operands to 32 bits to get a 32 bit
> result, and (d) that I trusted that the gcc compiler was ANSI C
> compliant. Statement (c) is important for the embedded space (which is
> the group that I am replying from -- you must be from comp.lang.c)
> because one does not always have the luxury of using a compliant tool
> chain in embedded.
>
> Then Grant came in, and if I'm correctly reading what he said, stated
> that (e) the gnu-avr compiler is not ANSI-C compliant because it has 16
> bit integers.
Sort-of, but not quite. When he said "Nope", he wasn't referring to your expectation that gcc-avr was ANSI compliant. He was referring to your expectation that it would promote 16-bit integers to 32 bits. On a conforming implementation of C with 16-bit ints, promotion of integer types halts at 16 bits, and goes no further.
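[A small sketch of the difference, assuming a target such as the AVR where int is 16 bits wide; the function name is just for illustration:]

#include <stdint.h>

void avr_mult_sketch(void)
{
    int16_t a = 1000, b = 1000;
    int32_t c;

    /* int is only 16 bits here, so the promotions stop at 16 bits: the
       multiply is done at 16-bit width and overflows (undefined behaviour
       for signed types) before the result is widened for the store. */
    c = a * b;

    /* Converting one operand first makes int32_t the common type, so the
       other operand is converted too and the multiply is done at 32 bits. */
    c = (int32_t)a * b;     /* 1000000, as intended */

    (void)c;
}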
> So you are correcting statement -- uh, (0), because no one made it (the
> first quote from me refers to statement (b), and appears in its native
> habitat two or three posts up in the thread).
It was that first quote from you that I'm correcting. Not any other statement. Specifically:
> I would expect that gcc would be ANSI compliant, and would therefore
> promote both 16-bit integers to 32-bit before doing the multiply.
-- James Kuyper
On 11/28/2012 08:19 PM, Tim Wescott wrote:
...
> Just for clarification, since what I said above seems to be easy to
> misread unless you pay close attention to context: take "promote both
> 16-bit integers to 32-bit" and add in the context (about casting one or
> more of the 16 bit values); the result reads "promote both 16-bit
> integers to 32-bit _if you cast just one 16-bit integer to 32 bit_".
There is a conversion to 32-bits, but it is NOT a promotion. See 6.3.1.1p2 for a definition of the integer promotions. -- James Kuyper
On Wed, 28 Nov 2012 23:39:52 -0500, James Kuyper wrote:

> On 11/28/2012 07:29 PM, Tim Wescott wrote:
> ...
>> Grant did not include all of the context, so you need to read back a
>> bit.
>>
>> The original statement was that (a) int16_t * int16_t coughs up a
>> 16-bit result, unless (b) one of the int16_t numbers is cast to 32 bit.
>>
>> Then I pointed out that (c) there are some older, non-compliant
>> compilers where you have to cast _both_ 16-bit operands to 32 bits to
>> get a 32 bit result, and (d) that I trusted that the gcc compiler was
>> ANSI C compliant. Statement (c) is important for the embedded space
>> (which is the group that I am replying from -- you must be from
>> comp.lang.c) because one does not always have the luxury of using a
>> compliant tool chain in embedded.
>>
>> Then Grant came in, and if I'm correctly reading what he said, stated
>> that (e) the gnu-avr compiler is not ANSI-C compliant because it has 16
>> bit integers.
>
> Sort-of, but not quite. When he said "Nope", he wasn't referring to your
> expectation that gcc-avr was ANSI compliant. He was referring to your
> expectation that it would promote 16-bit integers to 32 bits. On a
> conforming implementation of C with 16-bit ints, promotion of integer
> types halts at 16 bits, and goes no further.
How can you possibly know? Do you read his mind? Have an uncited conversation with him? Is he your sock-puppet?
>> So you are correcting statement -- uh, (0), because no one made it (the
>> first quote from me refers to statement (b), and appears in its native
>> habitat two or three posts up in the thread).
>
> It was that first quote from you that I'm correcting. Not any other
> statement. Specifically:
>
>> I would expect that gcc would be ANSI compliant, and would therefore
>> promote both 16-bit integers to 32-bit before doing the multiply.
Oh Christ. READ THE CONTEXT. That statement was made in reply to a question asking about what would happen if you cast one of the operands to 32 bit!

And you're replying to a post that told you that it was misleading without its context, and again taking it out of context.

Missing the context the first time is understandable -- that statement came about after two previous postings, and you do have to follow the conversation. But I have just told you to READ THE CONTEXT. So where do you get off with repeating a statement of mine, out of context, which YOU'VE DAMNED WELL BEEN TOLD is misleading when taken out of context, then criticizing that false meaning of it?

That's shooting straight through "rude" and getting right into "dishonest".

-- 
Tim Wescott
Control system and signal processing consulting
www.wescottdesign.com
On Thu, 29 Nov 2012 03:55:43 +0000 (UTC)
glen herrmannsfeldt <gah@ugcs.caltech.edu> wrote:

> In comp.lang.c Ben Bacarisse <ben.usenet@bsb.me.uk> wrote:
>
> (snip, I wrote)
>
>>> If you multiply two 8 bit unsigned char values, is the product
>>> modulo 256? I don't think so.
>
>> No, it's not.
>
> For comparison purposes, I believe that Fortran does not have this
> rule.
>
> If you add, subtract, multiply, or (I believe) divide 8 bit integers
> the result is, generally, eight bits. One should, at least, not be
> surprised if the result is computed modulo some small value.
>
> I like the C rules better.
>
> PL/I generally tries to keep the bits until it reaches the
> implementation maximum. That is complicated when scaled fixed
> point (non-integer) values are used, where it keeps the appropriate
> digits (binary or decimal) to the right of the radix point,
> possibly truncating on the left.
Remember the "as if" clause. In reality, if you add, subtract, multiply, or divide two uint8_ts and store the result in another uint8_t on a CPU where 8-bit arithmetic operations are faster than larger operations (or do the same with at least some of those operations for signed numbers which happen, implementation-specifically, to be implemented in twos-complement), any sane compiler *will* perform the operations at 8-bit width because the results in all cases are provably equivalent to the same operations performed with promotion. Chris
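[A sketch of that point: because the result is reduced to 8 bits anyway, the formal promotion to int changes nothing observable, so an 8-bit add may be emitted. The function name is just for illustration.]

#include <stdint.h>

uint8_t wrap_add(uint8_t a, uint8_t b)
{
    /* Formally: a and b are promoted to int, added at int width, and the
       result is converted back to uint8_t, i.e. reduced modulo 256. That
       is observably identical to a plain 8-bit add, so under the as-if
       rule an AVR compiler is free to generate the single-byte ADD. */
    return (uint8_t)(a + b);
}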
On 29/11/2012 01:47, Hans-Bernhard Bröker wrote:
> On 29.11.2012 01:35, David Brown wrote:
>
>> The correct behaviour for C standards compliance is that when you
>> multiply two operands of different int size, the smaller one is promoted
>> to the size of the larger one.
>
> Close, but not cigar. You forgot about types smaller than the
> platform's "int". Those will be converted up to either signed or
> unsigned int anyway, i.e. even if both operands are of the same size.
>
>> So the bit-size of the processor, and the bit-size of "int" on that
>> particular target, is irrelevant.
>
> Incorrect. It is very relevant as soon as either of the operands' types
> is smaller than "int" on the particular target.
>
> The rule to remember is that C never does arithmetic on anything smaller
> than an 'int'.
>
>> Given:
>>
>> int16_t a, b;
>> int32_t c;
>>
>> If you write just "c = a * b", then the multiplication is carried out at
>> 16-bit, then promoted to 32-bit.
>
> Not if you're on a 32-bit target it isn't. Default conversion to 32-bit
> int takes place first, so both operands are first converted to 32-bit,
> then a 32 x 32 --> 32 bit multiply is carried out. At least in
> principle (that is: modulo the "as-if rule").
Of course you are correct here. I should not be posting so late - or I should have drunk more coffee first, because I know this stuff (as long as my brain is functioning correctly!). Apologies if I've added to the confusion here, and thanks for the correction.
On 29/11/2012 03:31, glen herrmannsfeldt wrote:
> In comp.lang.c David Brown <david.brown@removethis.hesbynett.no> wrote:
>
> (snip, someone wrote)
>>>> I would expect that gcc would be ANSI compliant, and would therefore
>>>> promote both 16-bit integers to 32-bit before doing the multiply.
>
> (snip, then I wrote)
>>> Maybe I am missing something here, but are there versions of gcc for 16
>>> bit processors, with 16 bit int? If so, then promotion to int won't
>>> promote to 32 bits without a cast.
>
>> The correct behaviour for C standards compliance is that when you
>> multiply two operands of different int size, the smaller one is promoted
>> to the size of the larger one. Then the multiply is carried out modulo
>> the size of the larger one. Then the result is truncated or extended as
>> needed to fit the target variable.
>
> I haven't read the standard so recently, but I thought that was only
> after the default promotions. Values smaller than int would be promoted
> to int, then the size of the multiply (and product) determined.
As pointed out by Hans-Bernhard, you are correct here. I'm sorry for causing confusion by posting while half asleep. Default "int" promotions are done first for each operand. They are promoted to "signed int", "unsigned int", "signed long int" or "unsigned long int" (and "long long" for newer C standards), stopping at the first type that covers the entire range. In practice, this means anything smaller than an "int" will get promoted to a "signed int".
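[Restating the corrected rule with the earlier example, as a sketch assuming a target where int is 32 bits wide:]

#include <stdint.h>

void promotion_on_32bit_int(void)
{
    int16_t a = 1000, b = 1000;
    int32_t c;

    /* Each int16_t operand is first promoted to (32-bit) int, so the
       multiply is done at 32 bits and c gets 1000000 without any cast.
       On a 16-bit-int target such as the AVR, the same line overflows
       at 16 bits. */
    c = a * b;

    (void)c;
}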
>
> If you multiply two 8 bit unsigned char values, is the product
> modulo 256? I don't think so.
>
>> So the bit-size of the processor, and the bit-size of "int" on that
>> particular target, is irrelevant. And the size of the result variable
>> is also irrelevant (this catches out some newbies).
>
>> Given:
>
>> int16_t a, b;
>> int32_t c;
>
>> c = (int32_t)a * b;
>
>> Then b is cast to int32_t, the 32-bit multiplication is carried out, and
>> the result assigned to c.
>
> Maybe not completely irrelevant, consider a system with a 64 bit int.
>
>> If you write just "c = a * b", then the multiplication is carried out at
>> 16-bit, then promoted to 32-bit. This applies regardless of the
>> bit-size of the target - you will get the same effect on a 64-bit cpu as
>> on the 8-bit AVR.
>
> Regardless of the target size, but not of int size.
>
>> If your compiler does 16-bit multiplications when you have "c =
>> (int32_t) a * b", and requires two "int32_t" casts to do 32-bit
>> multiplication, then your compiler is very badly broken. As Tim says,
>> badly broken compilers /do/ exist, so if you have to use them, then you
>> need to use two casts. But I personally don't think you need to write
>> your code to work with broken toolchains unless you actually have to.
>
> Now it gets interesting. When were the int_32_t and int_16_t added to C?
>
The types were officially added with C99, but they existed in practice before that as "long int" and "short int" on most compilers (some targets don't support 16-bit types, and thus have "short int" as 32-bit and no int16_t. And the standards allow compilers with a "short" of 64-bit or more, in which case neither "int32_t" nor "int16_t" would exist - but I have never heard of such a beast).
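[One way to cope with the optional exact-width types (a sketch: the standard defines the INT16_MAX limit macro exactly when int16_t exists, so it can be tested in the preprocessor; the typedef name is just for illustration):]

#include <stdint.h>

#ifdef INT16_MAX
typedef int16_t sample_t;          /* exact 16-bit type, if the target has one */
#else
typedef int_least16_t sample_t;    /* otherwise settle for "at least 16 bits" */
#endif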
> Seems to me that compilers only claiming a version of the standard
> before they were added wouldn't have to use the same rules.
The rules haven't changed (again, sorry for my mistaken post). Types such as "int32_t" are just typedef's for "normal" C types.
>
> Consider that a compiler might have an int_128_t that it could add and
> subtract, but not multiply or divide. Maybe it can generate a 128 bit
> product from two 64 bit operands. Does the standard prohibit a
> compiler from offering those operations?
>
The "int128_t" here is either a typedef for an existing C type (which could include "long long int" in C99), in which case it would have to support all integral operations, or it is purely a compiler extension, in which case it is non-standard. But I believe the standards say that /if/ a type of this form "int128_t" is defined in standard headers for the compiler, then it must act as a full integral type of that size.
James Kuyper <jameskuyper@verizon.net> writes:
> On 11/28/2012 07:29 PM, Tim Wescott wrote:
> ...
> > Grant did not include all of the context, so you need to read back a bit.
> >
> > The original statement was that (a) int16_t * int16_t coughs up a 16-bit
> > result, unless (b) one of the int16_t numbers is cast to 32 bit.
> >
> > Then I pointed out that (c) there are some older, non-compliant compilers
> > where you have to cast _both_ 16-bit operands to 32 bits to get a 32 bit
> > result, and (d) that I trusted that the gcc compiler was ANSI C
> > compliant. Statement (c) is important for the embedded space (which is
> > the group that I am replying from -- you must be from comp.lang.c)
> > because one does not always have the luxury of using a compliant tool
> > chain in embedded.
> >
> > Then Grant came in, and if I'm correctly reading what he said, stated
> > that (e) the gnu-avr compiler is not ANSI-C compliant because it has 16
> > bit integers.
>
> Sort-of, but not quite. When he said "Nope", he wasn't referring to your
> expectation that gcc-avr was ANSI compliant. He was referring to your
> expectation that it would promote 16-bit integers to 32 bits. On a
> conforming implementation of C with 16-bit ints, promotion of integer
> types halts at 16 bits, and goes no further.
The context was
>>> Shouldn't casting just one of the 16 bit values work the same as
>>> casting both of them?
i.e. that there was one cast to 32 bits already. Therefore ANSI C says that there would be a second conversion to 32 bits of the other operand. Or at least the resulting code should behave *as if* that had happened. (I'm pretty sure I've seen an architecture with shorter*longer->longer opcodes.)

Initially I thought what Tim wrote was in error, but upon unravelling the thread, I worked out that he had gone forward from the premises correctly, and others hadn't. Without those premises - confusion ensues.

Phil
-- 
Regarding TSA regulations: How are four small bottles of liquid different from one large bottle? Because four bottles can hold the components of a binary liquid explosive, whereas one big bottle can't.
   -- camperdave responding to MacAndrew on /.
