
Getting started with AVR and C

Started by Robert Roland November 24, 2012
On 2012-11-28, Tim Wescott <tim@seemywebsite.com> wrote:
> On Wed, 28 Nov 2012 18:05:06 +0000, Grant Edwards wrote:
>>> And you have to be careful about how/when any expansions occur.  For
>>> example with gcc-avr, if you want
>>>
>>>    int32_t = int16_t * int16_t
>>>
>>> (the full 32 bit result of a 16x16 bit multiply), you have to cast each
>>> of the 16-bit operands to 32 bits.
>>
>> Shouldn't casting just one of the 16 bit values work the same as casting
>> both of them?
>
> Yes.  But that's if you take "should" as indicating a moral direction,
> rather than as an indication of what you can reasonably expect from every
> tool chain.
>
> I would expect that gcc would be ANSI compliant, and would therefore
> promote both 16-bit integers to 32-bit before doing the multiply.
Nope. On the target in question (AVR), an "int" is 16 bits (at least by default). Same for msp430 (and maybe for some of the H8 targets). I think there is a command-line option for some 16-bit targets to tell gcc to use 32-bit representations for "int" instead of the default 16 bits.
> But I've worked with compilers in the past that didn't do this, so
> when writing code that may be used in multiple places, I up-cast the
> same way one votes in Chicago: early and often.
If, like AVR and msp430, an "int" is 16 bits, then you must cast at least one of the two operands to a 32 bit integer type if you want a 16x16=>32 multiply.

--
Grant Edwards                   grant.b.edwards
                                at
Yow! Mr and Mrs PED, can I      gmail.com
borrow 26.7% of the RAYON
TEXTILE production of the
INDONESIAN archipelago?
On Wed, 28 Nov 2012 22:26:20 +0000, Grant Edwards wrote:

> On 2012-11-28, Tim Wescott <tim@seemywebsite.com> wrote:
>> On Wed, 28 Nov 2012 18:05:06 +0000, Grant Edwards wrote:
>
>>>> And you have to be careful about how/when any expansions occur.  For
>>>> example with gcc-avr, if you want
>>>>
>>>>    int32_t = int16_t * int16_t
>>>>
>>>> (the full 32 bit result of a 16x16 bit multiply), you have to cast
>>>> each of the 16-bit operands to 32 bits.
>>>
>>> Shouldn't casting just one of the 16 bit values work the same as
>>> casting both of them?
>>
>> Yes.  But that's if you take "should" as indicating a moral direction,
>> rather than as an indication of what you can reasonably expect from
>> every tool chain.
>>
>> I would expect that gcc would be ANSI compliant, and would therefore
>> promote both 16-bit integers to 32-bit before doing the multiply.
>
> Nope.  On the target in question (AVR), an "int" is 16 bits (at least by
> default).  Same for msp430 (and maybe for some of the H8 targets).  I
> think there is a command-line option for some 16-bit targets to tell gcc
> to use 32-bit representations for "int" instead of the default 16 bits.
You mean "Absolutely", not "Nope". At least you do if you're referring to a 16-bit int as being conformant to ANSI C. Per ANSI C99 (http://www.open-std.org/jtc1/sc22/WG14/www/docs/n1256.pdf), page 34, the minimum allowable value of INT_MAX is 32767. That fits nicely inside a 16-bit signed number.
>> But I've worked with compilers in the past that didn't do this, so when
>> writing code that may be used in multiple places, I up-cast the same
>> way one votes in Chicago: early and often.
>
> If, like AVR and msp430, an "int" is 16 bits, then you must cast at
> least one of the two operands to a 32 bit integer type if you want a
> 16x16=>32 multiply.
Yes. And if you're using a broken, not-quite-compliant compiler that needs to see _both_ numbers as 32-bit before it'll do a 32-bit operation, then you need to cast _both_. (I'm pretty sure it was Intel's C compiler for the '196.)

--
My liberal friends think I'm a conservative kook.
My conservative friends think I'm a liberal kook.
Why am I not happy that they have found common ground?

Tim Wescott, Communications, Control, Circuits & Software
http://www.wescottdesign.com
On 11/28/2012 06:32 PM, Tim Wescott wrote:
> On Wed, 28 Nov 2012 22:26:20 +0000, Grant Edwards wrote:
>
>> On 2012-11-28, Tim Wescott <tim@seemywebsite.com> wrote:
...
>>> I would expect that gcc would be ANSI compliant, and would therefore
>>> promote both 16-bit integers to 32-bit before doing the multiply.
>>
>> Nope.  On the target in question (AVR), an "int" is 16 bits (at least by
>> default).  Same for msp430 (and maybe for some of the H8 targets).  I
>> think there is a command-line option for some 16-bit targets to tell gcc
>> to use 32-bit representations for "int" instead of the default 16 bits.
>
> You mean "Absolutely", not "Nope".  At least you do if you're referring
> to a 16-bit int as being conformant to ANSI C.
>
> Per ANSI C99 (http://www.open-std.org/jtc1/sc22/WG14/www/docs/n1256.pdf),
> page 34, the minimum allowable value of INT_MAX is 32767.  That fits
> nicely inside a 16-bit signed number.
I'm not sure what point you're making here. The type resulting from the integer promotions is always either an 'int' or an 'unsigned int', and they occur only for values of other integer types whose entire range can be represented in the promoted type. Therefore, if int is 16 bits, int16_t operands will not be promoted at all, much less promoted to a 32-bit int, in conflict with what you said you expected of an ANSI compliant compiler. That's what his "Nope" was referring to.

On such a platform, the usual arithmetic conversions will cause one of the operands to be converted implicitly to a 32-bit int if the other one is explicitly converted to a 32-bit int. However, section 6.3.1.1p2 defines what an "integer promotion" is, and that definition doesn't include those conversions.
On Wed, 28 Nov 2012 19:02:32 -0500, James Kuyper wrote:

> On 11/28/2012 06:32 PM, Tim Wescott wrote:
>> On Wed, 28 Nov 2012 22:26:20 +0000, Grant Edwards wrote:
>>
>>> On 2012-11-28, Tim Wescott <tim@seemywebsite.com> wrote:
> ...
>>>> I would expect that gcc would be ANSI compliant, and would therefore
>>>> promote both 16-bit integers to 32-bit before doing the multiply.
>>>
>>> Nope.  On the target in question (AVR), an "int" is 16 bits (at least
>>> by default).  Same for msp430 (and maybe for some of the H8 targets).
>>> I think there is a command-line option for some 16-bit targets to tell
>>> gcc to use 32-bit representations for "int" instead of the default 16
>>> bits.
>>
>> You mean "Absolutely", not "Nope".  At least you do if you're referring
>> to a 16-bit int as being conformant to ANSI C.
>>
>> Per ANSI C99
>> (http://www.open-std.org/jtc1/sc22/WG14/www/docs/n1256.pdf), page 34,
>> the minimum allowable value of INT_MAX is 32767.  That fits nicely
>> inside a 16-bit signed number.
>
> I'm not sure the point you're making here.  The type resulting from
> integer promotions is always either an 'int' or an 'unsigned int', and
> they occur only for values of other integer types whose entire range can
> be represented in the promoted type.  Therefore, if int is 16 bits,
> int16_t operands will not be promoted at all, much less promoted to a
> 32-bit int, in conflict with what you said you expected of an ANSI
> compliant compiler.  That's what his "Nope" was referring to.
>
> On such a platform, the usual arithmetic conversions will cause one of
> the operands to be converted implicitly to a 32-bit int if the other one
> is explicitly converted to a 32-bit int.  However, section 6.3.1.1p2
> defines what an "integer promotion" is, and that definition doesn't
> include those conversions.
Grant did not include all of the context, so you need to read back a bit. The original statement was that (a) int16_t * int16_t coughs up a 16-bit result, unless (b) one of the int16_t numbers is cast to 32 bit. Then I pointed out that (c) there are some older, non-compliant compilers where you have to cast _both_ 16-bit operands to 32 bits to get a 32 bit result, and (d) that I trusted that the gcc compiler was ANSI C compliant.

Statement (c) is important for the embedded space (which is the group that I am replying from -- you must be from comp.lang.c) because one does not always have the luxury of using a compliant tool chain in embedded.

Then Grant came in, and if I'm correctly reading what he said, stated that (e) the gnu-avr compiler is not ANSI-C compliant because it has 16 bit integers. So I disagreed with (e), and pointed out where in the ANSI specification type 'int' is, indeed, allowed to be 16 bit (and one's complement or sign-magnitude, if you've got a perverse processor).

So you are correcting statement -- uh, (0), because no one made it (the first quote from me refers to statement (b), and appears in its native habitat two or three posts up in the thread). You are correct that statement (0) is not true, however you are not correct in thinking that it was said. I assume that you inferred it because you did not pick up the context that Grant trimmed out.

--
My liberal friends think I'm a conservative kook.
My conservative friends think I'm a liberal kook.
Why am I not happy that they have found common ground?

Tim Wescott, Communications, Control, Circuits & Software
http://www.wescottdesign.com
On 28/11/12 22:45, glen herrmannsfeldt wrote:
> In comp.lang.c Tim Wescott <tim@seemywebsite.com> wrote:
>> On Wed, 28 Nov 2012 18:05:06 +0000, Grant Edwards wrote:
>
>>> On 2012-11-28, Frank Miles <fpm@u.washington.edu> wrote:
>
>>>> And you have to be careful about how/when any expansions occur.  For
>>>> example with gcc-avr, if you want
>
>>>>    int32_t = int16_t * int16_t
>
>>>> (the full 32 bit result of a 16x16 bit multiply), you have to cast each
>>>> of the 16-bit operands to 32 bits.
>
>>> Shouldn't casting just one of the 16 bit values work the same as casting
>>> both of them?
>
>> Yes.  But that's if you take "should" as indicating a moral direction,
>> rather than as an indication of what you can reasonably expect from every
>> tool chain.
>
>> I would expect that gcc would be ANSI compliant, and would therefore
>> promote both 16-bit integers to 32-bit before doing the multiply.
>
> Maybe I am missing something here, but are there versions of gcc for 16
> bit processors, with 16 bit int?  If so, then promotion to int won't
> promote to 32 bits without a cast.
The correct behaviour for C standards compliance is that when you multiply two operands of different int size, the smaller one is promoted to the size of the larger one. Then the multiply is carried out modulo the size of the larger one. Then the result is truncated or extended as needed to fit the target variable.

So the bit-size of the processor, and the bit-size of "int" on that particular target, is irrelevant. And the size of the result variable is also irrelevant (this catches out some newbies).

Given:

   int16_t a, b;
   int32_t c;

   c = (int32_t)a * b;

Then b is converted to int32_t, the 32-bit multiplication is carried out, and the result assigned to c. If you write just "c = a * b", then the multiplication is carried out at 16-bit, then promoted to 32-bit. This applies regardless of the bit-size of the target - you will get the same effect on a 64-bit cpu as on the 8-bit AVR.

If your compiler does 16-bit multiplications when you have "c = (int32_t) a * b", and requires two "int32_t" casts to do 32-bit multiplication, then your compiler is very badly broken. As Tim says, badly broken compilers /do/ exist, so if you have to use them, then you need to use two casts. But I personally don't think you need to write your code to work with broken toolchains unless you actually have to.
On 29.11.2012 01:35, David Brown wrote:

> The correct behaviour for C standards compliance is that when you
> multiply two operands of different int size, the smaller one is promoted
> to the size of the larger one.
Close, but no cigar. You forgot about types smaller than the platform's "int". Those will be converted up to either signed or unsigned int anyway, i.e. even if both operands are of the same size.
> So the bit-size of the processor, and the bit-size of "int" on that
> particular target, is irrelevant.
Incorrect. It is very relevant as soon as either of the operands' types is smaller than "int" on the particular target. The rule to remember is that C never does arithmetic on anything smaller than an 'int'.
> Given:
>
>    int16_t a, b;
>    int32_t c;
>
> If you write just "c = a * b", then the multiplication is carried out at
> 16-bit, then promoted to 32-bit.
Not if you're on a 32-bit target it isn't. Default conversion to 32-bit int takes place first, so both operands are first converted to 32-bit, then a 32 x 32 --> 32 bit multiply is carried out. At least in principle (that is: modulo the "as-if rule").
David Brown <david.brown@removethis.hesbynett.no> writes:
<snip>
> The correct behaviour for C standards compliance is that when you
> multiply two operands of different int size, the smaller one is
> promoted to the size of the larger one.
Not exactly, no, though there is some confusion because you talk of different int sizes. int is one C type so there is only one int size, but I'm assuming you meant "integer types of different size". If that's what you meant, it's not quite right because multiplying a short by a char (for example) will involve promoting both operands to int. Other more outlandish examples include multiplying a char by a _Bool and many cases involving bit fields.
> Then the multiply is carried
> out modulo the size of the larger one.
That's one commonly observed behaviour but it is not "the correct behaviour". If the common type arrived at by the arithmetic conversions is a signed type, the multiplication may overflow and anything at all can happen (i.e. what happens is undefined by the C standard). Unsigned integer arithmetic does not overflow.
> Then the result is truncated
> or extended as needed to fit the target variable.
Again, not quite. The result is converted to type of the object it is being assigned to, and a great deal of leeway is given to implementations when the target type is a signed int. If the result can't be represented in the target type, either the result is implementation defined or an implementation defined signal is raised. For unsigned types, the behaviour is entirely defined by the C standard (conversion modulo 2^width which is, as you say, truncation).
>> So the bit-size of the processor, and the bit-size of "int" on that
>> particular target, is irrelevant.  And the size of the result variable
>> is also irrelevant (this catches out some newbies).
>>
>> Given:
>>
>>    int16_t a, b;
>>    int32_t c;
>>
>>    c = (int32_t)a * b;
>>
>> Then b is cast to int32_t, the 32-bit multiplication is carried out,
>> and the result assigned to c.
(unless int happens to be wider than 32 bits)
> If you write just "c = a * b", then the multiplication is carried out
> at 16-bit, then promoted to 32-bit.  This applies regardless of the
> bit-size of the target - you will get the same effect on a 64-bit cpu
> as on the 8-bit AVR.
The machine bit-size is not really the thing that matters. What matters is the sizes assigned to the various types by the C implementation. What you say is roughly correct for an implementation with a 16 bit int type ("roughly" because of the possibility of overflow). The size given to int is often the natural one (or one of the natural ones) for the machine in question. When this is the case, the bit size of the target does matter, but only because of the differing int sizes.
> If your compiler does 16-bit multiplications when you have "c =
> (int32_t) a * b", and requires two "int32_t" casts to do 32-bit
> multiplication, then your compiler is very badly broken.  As Tim says,
> badly broken compilers /do/ exist, so if you have to use them, then
> you need to use two casts.  But I personally don't think you need to
> write your code to work with broken toolchains unless you actually
> have to.
It leads to a special kind of hell! When you can't ever shake off the idea that x, y or z once went wrong on compiler p, q or r, you end up having to fold every trick you ever used to get your code past bad compilers into every program.

--
Ben.
On Wed, 28 Nov 2012 12:56:37 -0600, Tim Wescott wrote:

> On Wed, 28 Nov 2012 18:05:06 +0000, Grant Edwards wrote:
>
>> On 2012-11-28, Frank Miles <fpm@u.washington.edu> wrote:
>>>
>>>> Generally, the result is as wide as the widest of the operands,
>>>> or "int", if no operand is wider than "int".  The result then may
>>>> be truncated on function application or assignment.  (For
>>>> instance, i += j is 0 if i is 1, j is 255, and both are declared
>>>> to be of the 8-bit unsigned integer uint8_t type.)
>>>>
>>>>
>>> And you have to be careful about how/when any expansions occur.  For
>>> example with gcc-avr, if you want
>>>
>>>    int32_t = int16_t * int16_t
>>>
>>> (the full 32 bit result of a 16x16 bit multiply), you have to cast
>>> each of the 16-bit operands to 32 bits.
>>
>> Shouldn't casting just one of the 16 bit values work the same as
>> casting both of them?
>
> Yes.  But that's if you take "should" as indicating a moral direction,
> rather than as an indication of what you can reasonably expect from
> every tool chain.
>
> I would expect that gcc would be ANSI compliant, and would therefore
> promote both 16-bit integers to 32-bit before doing the multiply.  But
> I've worked with compilers in the past that didn't do this, so when
> writing code that may be used in multiple places, I up-cast the same way
> one votes in Chicago: early and often.
Just for clarification, since what I said above seems to be easy to misread unless you pay close attention to context: take "promote both 16-bit integers to 32-bit" and add in the context (about casting one or more of the 16 bit values); the result reads "promote both 16-bit integers to 32-bit _if you cast just one 16-bit integer to 32 bit_".

The primary intent of my comment above was to point out that while an ANSI C compliant compiler will convert both operands and the result to 32 bits if you cast just one operand to 32 bits, there are compilers out there that won't. I did not mean to say -- and indeed it is not the case -- that a compiler with 16-bit integers will automatically promote them to 32 bits just because you want them to, or even just because the result is getting stuck into a 32-bit number.

For a nice ANSI-C compliant compiler you only have to tell it once (by casting one of the operands). For at least one compiler that I have used in the past, you had to tell it so over and over again (by casting everything to 32 bits, oh joy).

--
My liberal friends think I'm a conservative kook.
My conservative friends think I'm a liberal kook.
Why am I not happy that they have found common ground?

Tim Wescott, Communications, Control, Circuits & Software
http://www.wescottdesign.com
On Wed, 28 Nov 2012 19:17:18 +0100, Arlet Ottens wrote:

> On 11/28/2012 07:15 PM, James Kuyper wrote:
>> On 11/28/2012 12:53 PM, Frank Miles wrote:
> ...
>>> And you have to be careful about how/when any expansions occur.  For
>>> example with gcc-avr, if you want
>>>
>>>    int32_t = int16_t * int16_t
>>>
>>> (the full 32 bit result of a 16x16 bit multiply), you have to cast
>>> each of the 16-bit operands to 32 bits.
>>
>> I'm not familiar with gcc-avr.  That constitutes a significant deviation
>> from standard C, where casting either operand would be sufficient to
>> guarantee implicit conversion of the other operand, in accordance with
>> the "usual arithmetic conversions".  What is the reason for this
>> difference?
>
> There's no difference.  For gcc-avr it also suffices to cast just one
> operand.
It's certainly what I would expect from gcc-avr. There's no reason you can't make a beautifully compliant, reasonably efficient compiler that works well on the AVR.

--
My liberal friends think I'm a conservative kook.
My conservative friends think I'm a liberal kook.
Why am I not happy that they have found common ground?

Tim Wescott, Communications, Control, Circuits & Software
http://www.wescottdesign.com
On Thu, 29 Nov 2012 01:18:42 +0000, Ben Bacarisse wrote:

> David Brown <david.brown@removethis.hesbynett.no> writes:
> <snip>
>> The correct behaviour for C standards compliance is that when you
>> multiply two operands of different int size, the smaller one is
>> promoted to the size of the larger one.
>
> Not exactly, no, though there is some confusion because you talk of
> different int sizes.  int is one C type so there is only one int size,
> but I'm assuming you meant "integer types of different size".
>
> If that's what you meant, it's not quite right because multiplying a
> short by a char (for example) will involve promoting both operands to
> int.  Other more outlandish examples include multiplying a char by a
> _Bool and many cases involving bit fields.
>
>> Then the multiply is carried
>> out modulo the size of the larger one.
>
> That's one commonly observed behaviour but it is not "the correct
> behaviour".  If the common type arrived at by the arithmetic conversions
> is a signed type, the multiplication may overflow and anything at all
> can happen (i.e. what happens is undefined by the C standard).  Unsigned
> integer arithmetic does not overflow.
>
>> Then the result is truncated
>> or extended as needed to fit the target variable.
>
> Again, not quite.  The result is converted to type of the object it is
> being assigned to, and a great deal of leeway is given to
> implementations when the target type is a signed int.  If the result
> can't be represented in the target type, either the result is
> implementation defined or an implementation defined signal is raised.
>
> For unsigned types, the behaviour is entirely defined by the C standard
> (conversion modulo 2^width which is, as you say, truncation).
>
>> So the bit-size of the processor, and the bit-size of "int" on that
>> particular target, is irrelevant.  And the size of the result variable
>> is also irrelevant (this catches out some newbies).
>>
>> Given:
>>
>>    int16_t a, b;
>>    int32_t c;
>>
>>    c = (int32_t)a * b;
>>
>> Then b is cast to int32_t, the 32-bit multiplication is carried out,
>> and the result assigned to c.
>
> (unless int happens to be wider than 32 bits)
>
>> If you write just "c = a * b", then the multiplication is carried out at
>> 16-bit, then promoted to 32-bit.  This applies regardless of the
>> bit-size of the target - you will get the same effect on a 64-bit cpu
>> as on the 8-bit AVR.
>
> The machine bit-size is not really the thing that matters.  What matters
> is the sizes assigned to the various types by the C implementation.  What
> you say is roughly correct for an implementation with a 16 bit int type
> ("roughly" because of the possibility of overflow).
>
> The size given to int is often the natural one (or one of the natural
> ones) for the machine in question.  When this is the case, the bit size
> of the target does matter, but only because of the differing int sizes.
>
>> If your compiler does 16-bit multiplications when you have "c =
>> (int32_t) a * b", and requires two "int32_t" casts to do 32-bit
>> multiplication, then your compiler is very badly broken.  As Tim says,
>> badly broken compilers /do/ exist, so if you have to use them, then you
>> need to use two casts.  But I personally don't think you need to write
>> your code to work with broken toolchains unless you actually have to.
>
> It leads to a special kind of hell!  When you can't ever shake off the
> idea that x, y or z once went wrong on compiler p, q or r, you end up
> having to fold every trick you ever used to get your code past bad
> compilers into every program.
Me, I just try to remember that x, y or z went wrong on some compiler some time, so that if I see symptoms again those problems are on my short list, and maybe even a fix or two. Like Texas Instruments' Code Composter for the TMS320F2812, which has a 32-bit "double". #$%@.

--
My liberal friends think I'm a conservative kook.
My conservative friends think I'm a liberal kook.
Why am I not happy that they have found common ground?

Tim Wescott, Communications, Control, Circuits & Software
http://www.wescottdesign.com