EmbeddedRelated.com
Forums

Delay Routine: Fully-portable C89 if possible

Started by Martin Wells October 9, 2007
Martin Wells wrote:
> David:
>
>>> "int_fast16_t" shouldn't be anything other than plain old int.
>>
>> First off, that's not correct (consider the original Motorola 68000 - it
>> was a 32-bit processor, so "int" has a natural size of 32 bits on that
>> architecture, and yet it processed 16-bit data faster).
>
> Sounds exactly like a 16-Bit CPU to me.
No, it was a 32-bit CPU that had a 16-bit ALU. All the registers were 32-bit, all instructions could operate on 8-bit, 16-bit or 32-bit data, and its internal address space was 32-bit. It uses the same 32-bit ISA as modern ColdFire devices. The only limitations were a 16-bit ALU (to save space and money), which made 32-bit ALU operations take twice as long as 16-bit ones, and a 16-bit external databus. There are modern derivatives such as the 68332 that have 16-bit external databuses (but a 32-bit ALU), and are thus also faster at working with 16-bit data when it is external to the CPU. The 68k architecture is nonetheless a full 32-bit architecture, and its "int" is 32 bits.
> I have a 32-Bit CPU that can do 64-Bit arithmetic at a slower rate...
> should I start calling it a 64-Bit CPU?
No, you should consider the cpu's "width" to be that of its internal general purpose integer registers and datapaths. If it makes it easier for you, the width of the cpu is the width of its "int".
> Martin
Martin Wells wrote:
> David:
>
>>> The C language is described in such a way that "int" should be the
>>> most efficient type (or at the very least tied for first place).
>>
>> That's only the case for some architectures - in particular, for 8-bit
>> micros, it is far from true.
>
> Are you talking about machines that can do 8-Bit arithmetic faster
> than 16-Bit arithmetic... ? I hadn't considered that on these
> particular systems, char will be faster than int. I think I'll bring
> this topic up in comp.lang.c.
Of course an 8-bit "char" will be faster than a 16-bit "int" on an 8-bit architecture! At the very least, 16-bit data takes twice as much code and twice as much time for basic arithmetic. The C language's rule of "promote everything to int" is a royal PITA for users and implementers of compilers for 8-bit micros, and requires a lot of optimisation to produce good code.

For many small micros, the fastest type is specifically an "unsigned char" - some operations, such as compares, are faster done unsigned if the architecture does not have an overflow flag.

What microcontrollers have you worked with (or are planning to work with), and what compilers? And have you tried looking at the generated assembly for the code you compile?
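To make the point concrete, here is a small sketch of the usual workaround: let <stdint.h> pick the fast type for you. The function and its behaviour are illustrative, not from the thread; uint_fast8_t is typically unsigned char on an 8-bit micro and plain int on a 32-bit target.

```c
#include <stdint.h>
#include <stddef.h>

/* Sum an array using uint_fast8_t for the counter.  On an 8-bit
 * micro this typedef is usually "unsigned char", so the counter fits
 * one register; on a 32-bit CPU it is a plain int.  The algorithm
 * itself does not depend on which width the compiler chooses. */
uint16_t sum_bytes(const uint8_t *buf, uint_fast8_t len)
{
    uint16_t total = 0;
    for (uint_fast8_t i = 0; i < len; i++)
        total = (uint16_t)(total + buf[i]);
    return total;
}
```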
On 2007-10-16, Martin Wells <warint@eircom.net> wrote:
> David:
>
>>> "int_fast16_t" shouldn't be anything other than plain old int.
>>
>> First off, that's not correct (consider the original Motorola 68000 - it
>> was a 32-bit processor, so "int" has a natural size of 32 bits on that
>> architecture, and yet it processed 16-bit data faster).
>
> Sounds exactly like a 16-Bit CPU to me.
It was a 32-bit CPU that came packaged with bus interfaces of various widths. Using a type that was larger than the bus width was slower than using a type that was less than or equal to the bus width.
> I have a 32-Bit CPU that can do 64-Bit arithmetic at a slower rate...
> should I start calling it a 64-Bit CPU?
The 68K was a 32-bit CPU.

--
Grant Edwards                   grante at visi.com
Yow! I'm meditating on the FORMALDEHYDE and the ASBESTOS leaking into my PERSONAL SPACE!!
John Devereux wrote:
> Hans-Bernhard Bröker <HBBroeker@t-online.de> writes:
>> ... and doing so, you would be introducing the exact mistake that
>> uint8_t was invented to avoid. uint8_t is exactly 8 bits, or it
>> doesn't exist. unsigned char is _not_ synonymous to that --- there's
>> no particular reason it couldn't be 11 bits wide.
> I know... but are there actually any current embedded processors that
> have uint8_t, but do not have 8 bit chars?
I'm afraid you're still not getting it. You bet on assumptions about current processors, when the primary purpose of uint8_t is that it removes both the need to assume anything and the restriction to currently existing hardware.
> It is far more likely that I will want to use a routine on a system
> without uint8_t, than one without 8 bit chars. YMMV.
Fixing a missing uint8_t in that case is a one-liner (fix or create a <stdint.h> for that system). Fixing the code is an open-ended job.
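The "one-liner" fix might look something like the following. This is a sketch, not from the thread: a minimal stand-in header for a pre-C99 toolchain that lacks <stdint.h>, with a guard (the macro name is my own invention) and a compile-time check so it can only be used where unsigned char really is 8 bits.

```c
/* portable_stdint.h -- minimal stand-in for <stdint.h> on an older
 * toolchain that lacks it.  Only valid where unsigned char is exactly
 * 8 bits; the #error enforces that, matching the C99 rule that
 * uint8_t exists only on such implementations. */
#ifndef PORTABLE_STDINT_H
#define PORTABLE_STDINT_H

#include <limits.h>

#if CHAR_BIT != 8
#error "this system has no 8-bit type: uint8_t cannot be provided"
#endif

typedef unsigned char uint8_t;
typedef signed char   int8_t;

#endif /* PORTABLE_STDINT_H */
```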

John Devereux wrote:

> I know... but are there actually any current embedded processors that
> have uint8_t, but do not have 8 bit chars?
uint8_t is not the same as an 8-bit char. A char may be implemented as signed or unsigned, and compilers often have switches to change the default. With C99, uint8_t is always 8 bits and unsigned, independent of the implementation or compiler switches. (Thanks Misra/C99 for size-specific types)

w..
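A small illustration of that difference (the function names are mine, for demonstration only): the value a plain char gives back is implementation-defined, while uint8_t behaves the same everywhere it exists.

```c
#include <stdint.h>

/* Whether plain char sign-extends is implementation-defined: some
 * compilers return -1 here, others 255, and many have a switch to
 * flip the default. */
int plain_char_value(void)
{
    char c = (char)0xFF;
    return c;                 /* -1 or 255, depending on the compiler */
}

/* uint8_t removes the ambiguity: always unsigned, always 8 bits. */
int uint8_value(void)
{
    uint8_t u = (uint8_t)0xFF;
    return u;                 /* 255 on every conforming implementation */
}
```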
Hans-Bernhard Bröker <HBBroeker@t-online.de> writes:

> John Devereux wrote:
>> Hans-Bernhard Bröker <HBBroeker@t-online.de> writes:
>
>>> ... and doing so, you would be introducing the exact mistake that
>>> uint8_t was invented to avoid. uint8_t is exactly 8 bits, or it
>>> doesn't exist. unsigned char is _not_ synonymous to that --- there's
>>> no particular reason it couldn't be 11 bits wide.
>
>> I know... but are there actually any current embedded processors that
>> have uint8_t, but do not have 8 bit chars?
>
> I'm afraid you're still not getting it.
So that's a "no" then? :)
> You bet on assumptions about current processors, when the primary
> purpose of uint8_t is that it removes both the need to assume
> anything and the restriction to currently existing hardware.
It still assumes the existence of an 8 bit type, which would likely not exist on a machine where "unsigned char" was not 8 bits. I.e., if I am worried about portability to that extent, I should use a mask instead (as discussed upthread).
>> It is far more likely that I will want to use a routine on a system
>> without uint8_t, than one without 8 bit chars. YMMV.
>
> Fixing a missing uint8_t in that case is a one-liner (fix or create a
> <stdint.h> for that system). Fixing the code is an open-ended job.
-- John Devereux
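The masking approach mentioned upthread can be sketched like this (an illustrative function of my own, not from the thread): mask to the low eight bits explicitly rather than relying on an exactly-8-bit type.

```c
/* Portability without assuming CHAR_BIT == 8: mask off the low eight
 * bits explicitly.  This compiles and behaves identically whether
 * unsigned char is 8, 9 or 16 bits wide, whereas code written in
 * terms of uint8_t would simply fail to compile on wider machines. */
unsigned char low_octet(unsigned int x)
{
    return (unsigned char)(x & 0xFFu);
}
```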
John Devereux wrote:
> It still assumes the existence of an 8 bit type, which would likely > not exist on a machine where "unsigned char" was not 8 bit.
I'm curious about how a storage type smaller than char could exist without knock-on effects. For instance, what would sizeof(uint8) return when a char contained more than 8 bits? After all, by definition, sizeof(char) == 1.

-- Paul
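The definitions behind the question can be shown directly (illustrative helpers, names my own): sizeof counts in units of char, so sizeof(char) is 1 no matter how many bits a char holds; CHAR_BIT reports the width.

```c
#include <limits.h>
#include <stddef.h>

/* sizeof is measured in chars, so sizeof(char) is 1 by definition,
 * whatever CHAR_BIT is.  On a DSP with 16-bit chars, sizeof(char)
 * is still 1 and CHAR_BIT is 16 -- there is no addressable storage
 * unit smaller than char for sizeof to report. */
size_t char_size(void)   { return sizeof(char); }   /* always 1 */
int    bits_per_char(void) { return CHAR_BIT; }     /* >= 8; 8 on most hosts */
```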
John Devereux wrote:
> Hans-Bernhard Bröker <HBBroeker@t-online.de> writes:
>> You bet on assumptions about current processors, when the primary
>> purpose of uint8_t is that it removes both the need to assume
>> anything and the restriction to currently existing hardware.
> It still assumes the existence of an 8 bit type, which would likely
> not exist on a machine where "unsigned char" was not 8 bit.
Likeliness is irrelevant. Either uint8_t exists, or it doesn't. Period. A routine should use uint8_t if, and only if, the algorithm needs those variables to be exactly 8 bits wide and unsigned. So if uint8_t isn't available, the code won't compile --- but that's a good thing, since it wouldn't work even if compiled. If it is available, the code will compile and work exactly as designed.
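A sketch of the kind of routine that genuinely needs exactly-8-bit unsigned arithmetic (the checksum function is my own example, not from the thread): it relies on uint8_t wrapping modulo 256, so failing to compile where uint8_t doesn't exist is the correct outcome.

```c
#include <stdint.h>
#include <stddef.h>

/* A checksum that *needs* arithmetic modulo 256: uint8_t guarantees
 * the wraparound.  On a machine with no 8-bit type this will not
 * compile -- which is right, because it could not work there. */
uint8_t checksum8(const uint8_t *p, size_t n)
{
    uint8_t sum = 0;
    while (n--)
        sum = (uint8_t)(sum + *p++);   /* wraps mod 256 by definition */
    return sum;
}
```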
Paul Black wrote:
> John Devereux wrote:
>> It still assumes the existence of an 8 bit type, which would likely
>> not exist on a machine where "unsigned char" was not 8 bit.
>
> I'm curious about how a storage type smaller than char could exist
> without knock-on effects.
>
> For instance, what would sizeof(uint8) return when a char contained more
> than 8 bits? After all, by definition, sizeof(char) == 1.
I should imagine that if a compiler supports a "uint4_t", for example, then sizeof(uint4_t) will give an error, just as applying sizeof to a bitfield does. Storage types smaller than a "char" (other than bit-fields) are not required or specified by the standards. Thus any such feature is an extra, and the compiler writer can quite reasonably have restrictions such as not allowing sizeof, or making it impossible to take the address of such a type.
CBFalconer wrote:
> David Brown wrote:
> ... snip ...
>> And if you need 2^32 as a constant (which you don't for 32-bit
>> modulo addition and subtraction), the most practical way is to
>> write 0x100000000ull, or perhaps (1ull << 32), since any realistic
>> embedded development compiler that supports more than 32-bit
>> integers will support long long ints.
>
> long long is a C99 feature. Most C compilers today do not
> implement that. However, they do implement C89/C90/C95, which do
> provide long and unsigned long. These, in turn, are guaranteed to
> provide at least 32 bits. So change the 'ull' above to 'ul' and
> things will probably work, provided the compiler is C89 or later
> compliant.
2^32 requires 33 bits (it's a one with 32 zeros), which is why I specifically wrote ull. The only way a compiler could support 2^32 without supporting long long ints is if it is a 64-bit compiler with 64-bit long ints. There are very few 64-bit cpus in the embedded arena - MIPS and perhaps PPC are the only ones I know of, excluding amd64 cpus, and any practical compiler for these devices will support long long ints.
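The arithmetic can be checked in a couple of lines (a sketch, assuming a C99 compiler with long long): 2^32 is a one followed by 32 zero bits, so it needs a 33-bit type, and the two spellings from the thread name the same value.

```c
/* 2^32 needs 33 bits, so the constant must be (unsigned) long long:
 * 0x100000000ull and (1ull << 32) denote the same value.  With a
 * 32-bit unsigned long, 1ul << 32 would shift by the full width of
 * the type, which is undefined behaviour -- hence 'ull', not 'ul'. */
unsigned long long two_to_32(void)
{
    return 1ull << 32;
}
```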