EmbeddedRelated.com
Forums

Delay Routine: Fully-portable C89 if possible

Started by Martin Wells October 9, 2007
Martin Wells <warint@eircom.net> writes:

> David:
>
>> You can contest all you want.  At best, you can argue that it is
>> *possible* to write embedded code (portable or otherwise) without having
>> specific sized types, but there is no doubt that people write better
>> code by taking advantage of these types.
>
> The C language is described in such a way that "int" should be the
> most efficient type (or at the very least tied for first place).
Tends not to be true for smaller micros.
> If I want to store a number, I use unsigned.
I tend to use "int" for all "numbers" (integers), unless:

- I need the specific features of unsigned arithmetic (overflow behaviour)
- I am interested in the bit pattern, rather than the numeric value
- I absolutely need the extra 1 bit of range and don't want to use "long"
> If the number can be negative, I use int.
> If the number needs more than 32-Bits, I use long.
16 bits! You are really undermining your argument here... :)
> If I need to conserve memory, I use char if it has enough bits,
> otherwise short.
> If I *really* need to conserve memory, I make an array of raw bytes
> and do bit-shifting.
>
> There's nothing wrong with the likes of uint_atleast_8, it's just that
> they're not portable C89.  I've heard there's a fairly efficient fully-
> portable C89 stdint.h header file going around, so maybe that would be
> useful.
[...]

-- 
John Devereux
Martin Wells wrote:
> John:
>
>> Now it might be that if I used int_fast16_t,
>> then the code could run microscopically more efficiently on the
>> ARM.
>
> "int_fast16_t" shouldn't be anything other than plain old int.
First off, that's not correct (consider the original Motorola 68000 - it was a 32-bit processor, so "int" has a natural size of 32 bits on that architecture, and yet it processed 16-bit data faster). Secondly, even if it were correct, there is still plenty of use for "int_fast16_t" as it says exactly what you want it to do, and is consistent with other types such as "int_fast8_t" which will *not* be the same as "int" on many architectures.
Martin Wells wrote:
> David:
>
>> You can contest all you want.  At best, you can argue that it is
>> *possible* to write embedded code (portable or otherwise) without having
>> specific sized types, but there is no doubt that people write better
>> code by taking advantage of these types.
>
> The C language is described in such a way that "int" should be the
> most efficient type (or at the very least tied for first place).
That's only the case for some architectures - in particular, for 8-bit micros, it is far from true.
> If I want to store a number, I use unsigned.
> If the number can be negative, I use int.
> If the number needs more than 32-Bits, I use long.
> If I need to conserve memory, I use char if it has enough bits,
> otherwise short.
> If I *really* need to conserve memory, I make an array of raw bytes
> and do bit-shifting.
>
> There's nothing wrong with the likes of uint_atleast_8, it's just that
> they're not portable C89.  I've heard there's a fairly efficient fully-
> portable C89 stdint.h header file going around, so maybe that would be
> useful.
When writing code for small embedded systems, you are often interested
in getting the code to do exactly what you ask - no more, and no less.
You care at a level of detail unfamiliar to those used to programming
on big systems - things like exact type sizes are important.

For bigger systems, the tradeoffs for development are different - you
don't have to worry so much about the minor details, and can afford to
be sloppy about implementation efficiency in the name of developer
efficiency (for example, you might use higher-level interpreted
languages).  For small systems, you are looking at something with
similar levels of control as for assembly programming, but faster
development.  Thus you should be aware of things like type sizes,
library implementations, and the strengths and weaknesses of your
target cpu.  Development for larger embedded systems falls somewhere
in between these two.
>> You are clearly new to embedded development, at least on small micros
>> (judging from your original post in particular) - those of us who have
>> been developing on a wide range of cpus for years understand the
>> benefits of size-specific types.  That's why <stdint.h> was introduced,
>> that's why the number 1 hit on lists of shortcomings of C is its
>> inconsistent type sizes, and that's why embedded programmers always
>> like to know the exact size of the int types on their target.  Sure,
>> it's possible to get away without it - just as it's possible to do
>> embedded programming without C - but why *not* use size-specific types
>> to your advantage?
>
> Again I don't see much use for them.  If you want efficiency, go with
> int.  If you wanna save memory, go with char if possible, otherwise
> short.  If you've got big numbers, go with long.
>
>>>> Certainly many of the situations where size specifics are important
>>>> are in hardware-dependent and non-portable code - and thus the only
>>>> issue is that the code in question is clear.
>>>
>>> The microcontroller I'm using currently has 6-Bit ports but 8-Bit
>>> registers and memory chunks... still no problem though.
>>
>> You'll find that these are logically 8-bit registers, with only 6 bits
>> implemented.
>
> Indeed the highest two bits are ignored when outputting to the pins.
>
> Martin
David Brown wrote:
>
... snip ...
> And if you need 2^32 as a constant (which you don't for 32-bit
> modulo addition and subtraction), the most practical way is to
> write 0x100000000ull, or perhaps (1ull << 32), since any realistic
> embedded development compiler that supports more than 32-bit
> integers will support long long ints.
long long is a C99 feature.  Most C compilers today do not implement
that.  However, they do implement C89/C90/C95, which does provide long
and unsigned long.  These, in turn, are guaranteed to provide at least
32 bits.  So change the 'ull' above to 'ul' and things will probably
work, provided the compiler is C89 or later compliant.

-- 
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>
-- Posted via a free Usenet account from http://www.teranews.com
David Brown wrote:
>
... snip ...
> I don't know what you mean by this.  If I want the smallest type
> that holds 16-bit signed integers, I use a sint16_t - that's
> portable, and gives me exactly what I want.
There is no such thing as a 'sint16_t', except as a non-standard
extension in some system or other.  The following extracts are from
N869:

  7.8 Format conversion of integer types <inttypes.h>

  [#1] The header <inttypes.h> includes the header <stdint.h> and
  extends it with additional facilities provided by hosted
  implementations.

  [#2] It declares four functions for converting numeric character
  strings to greatest-width integers and, for each type declared in
  <stdint.h>, it defines corresponding macros for conversion
  specifiers for use with the formatted input/output functions.

  Forward references: integer types <stdint.h> (7.18).

and

  7.18.1.1 Exact-width integer types

  [#1] The typedef name intN_t designates a signed integer type with
  width N.  Thus, int8_t denotes a signed integer type with a width
  of exactly 8 bits.

  [#2] The typedef name uintN_t designates an unsigned integer type
  with width N.  Thus, uint24_t denotes an unsigned integer type
  with a width of exactly 24 bits.

  [#3] These types are optional.  However, if an implementation
  provides integer types with widths of 8, 16, 32, or 64 bits, it
  shall define the corresponding typedef names.

Notice the above paragraph 3, stating these types are _optional_.
Also note that all this is not present in the C90 (or C95) standard.
However the thing that is universally known, and not optional, is
that an int (and a short int) has a minimum size of 16 bits, and that
a long has a minimum size of 32 bits.  If a compiler fails to meet
these values it is non-compliant, and does not deserve to be called a
C compiler.

-- 
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>
Martin Wells wrote:
>
... snip ...
> The C language is described in such a way that "int" should be the
> most efficient type (or at the very least tied for first place).
>
> If I want to store a number, I use unsigned.
> If the number can be negative, I use int.
> If the number needs more than 32-Bits, I use long.
No such guarantee exists.  You need to check the various values in
limits.h.  If the C99 long long type exists, it is guaranteed to be
at least 64 bits.
> If I need to conserve memory, I use char if it has enough bits,
> otherwise short.
In general you use unsigned to avoid undefined behaviour (or
implementation-defined behaviour) on overflow.  Remember you can
always cast an int to unsigned int, but you cannot reliably do the
reverse with a cast.

I suggest you read the standard.  A useful text version of N869
exists, bzip2 compressed, at:

   <http://cbfalconer.home.att.net/download/n869_txt.bz2>

You can also get a completely up to date free draft at:

   <http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf>

but that is in PDF format, and not nearly as useful.

The point of reading the standard is that you will then know where
(if anywhere) your particular compiler and library system fails to
meet it, and thus where to take especial care.  You can also choose
to use fully portable code as far as possible, with much less fuss
over future porting.

-- 
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>
John Devereux wrote:

> If a "short" is bigger than 16 bit, surely the int_least16_t will be
> too?  They both have the same rationale AFAIK.
That may no longer be the case once you come across your first 128-bit CPU. short at 32 bits, int at 64, and long int at 128 might just be the right choice --- with int_least16_t still having 16 bits. There are more things than meet the eye here. One of them is that the classical short/int/long arrangement runs out of steam sooner or later, as CPUs keep getting wider. We already got "long long" because of this.
> I confess, I would probably just go ahead with the exact same code but
> using unsigned char as a synonym for uint8_t.
... and doing so, you would be introducing the exact mistake that uint8_t was invented to avoid. uint8_t is exactly 8 bits, or it doesn't exist. unsigned char is _not_ synonymous to that --- there's no particular reason it couldn't be 11 bits wide.
> They are always the same on the machines I am familiar with.  (And if
> they were not, there would probably not be a uint8_t at all!)
So you'd rather bet your code's future on would-be probabilities and what you happen to be familiar with today, than get it right once and for all.
Hans-Bernhard Br&ouml;ker <HBBroeker@t-online.de> writes:

> John Devereux wrote:
>
>> If a "short" is bigger than 16 bit, surely the int_least16_t will be
>> too?  They both have the same rationale AFAIK.
>
> That may no longer be the case once you come across your first 128-bit
> CPU.  short at 32 bits, int at 64, and long int at 128 might just be
> the right choice --- with int_least16_t still having 16 bits.
>
> There are more things than meet the eye here.  One of them is that the
> classical short/int/long arrangement runs out of steam sooner or
> later, as CPUs keep getting wider.  We already got "long long" because
> of this.
>
>> I confess, I would probably just go ahead with the exact same code but
>> using unsigned char as a synonym for uint8_t.
>
> ... and doing so, you would be introducing the exact mistake that
> uint8_t was invented to avoid.  uint8_t is exactly 8 bits, or it
> doesn't exist.  unsigned char is _not_ synonymous to that --- there's
> no particular reason it couldn't be 11 bits wide.
I know... but are there actually any current embedded processors that have uint8_t, but do not have 8 bit chars?
>> They are always the same on the machines I am familiar with.  (And if
>> they were not, there would probably not be a uint8_t at all!)
>
> So you'd rather bet your code's future on would-be probabilities and
> what you happen to be familiar with today, than get it right once and
> for all.
It is far more likely that I will want to use a routine on a system
without uint8_t, than one without 8 bit chars.  YMMV.

-- 
John Devereux
David:

>> "int_fast16_t" shouldn't be anything other than plain old int.
>
> First off, that's not correct (consider the original Motorola 68000 - it
> was a 32-bit processor, so "int" has a natural size of 32 bits on that
> architecture, and yet it processed 16-bit data faster).
Sounds exactly like a 16-Bit CPU to me. I have a 32-Bit CPU that can do 64-Bit arithmetic at a slower rate... should I start calling it a 64-Bit CPU? Martin
David:

>> The C language is described in such a way that "int" should be the
>> most efficient type (or at the very least tied for first place).
>
> That's only the case for some architectures - in particular, for 8-bit
> micros, it is far from true.
Are you talking about machines that can do 8-Bit arithmetic faster than 16-Bit arithmetic...? I hadn't considered that on these particular systems, char would be faster than int. I think I'll bring this topic up in comp.lang.c. Martin