Reply by David Brown October 22, 2007
CBFalconer wrote:
> David Brown wrote:
> ... snip ...
>> I don't know what you mean by this.  If I want the smallest type
>> that holds 16-bit signed integers, I use a sint16_t - that's
>> portable, and gives me exactly what I want.
>
> There is no such thing as a 'sint16_t', except as a non-standard
> extension in some system or other.  The following extracts are from
> N869.
>
>    7.8  Format conversion of integer types <inttypes.h>
>
>    [#1] The header <inttypes.h> includes the header <stdint.h>
>    and extends it with additional facilities provided by hosted
>    implementations.
>
>    [#2] It declares four functions for converting numeric
>    character strings to greatest-width integers and, for each
>    type declared in <stdint.h>, it defines corresponding macros
>    for conversion specifiers for use with the formatted
>    input/output functions.170)
>
>    Forward references: integer types <stdint.h> (7.18).
>
> and
>
>    7.18.1.1  Exact-width integer types
>
>    [#1] The typedef name intN_t designates a signed integer
>    type with width N.  Thus, int8_t denotes a signed integer
>    type with a width of exactly 8 bits.
>
>    [#2] The typedef name uintN_t designates an unsigned integer
>    type with width N.  Thus, uint24_t denotes an unsigned
>    integer type with a width of exactly 24 bits.
>
>    [#3] These types are optional.  However, if an
>    implementation provides integer types with widths of 8, 16,
>    32, or 64 bits, it shall define the corresponding typedef
>    names.
>
> Notice the above paragraph 3, stating these types are _optional_.
> Also note that all this is not present in the C90 (or C95)
> standard.
>
> However the thing that is universally known, and not optional, is
> that an int (and a short int) has a minimum size of 16 bits, and
> that a long has a minimum size of 32 bits.  If a compiler fails to
> meet these values it is non-compliant, and does not deserve to be
> called a C compiler.
The "sint16_t" was a typo - it should of course be "int16_t".  For a C99 compiler, "int16_t" is *not* optional unless the implementation has no 16-bit integer types at all.  Thus int16_t is standard and portable on modern compilers (and many non-C99 compilers provide "cheap" C99 features, such as // comments and <stdint.h>).  You are perfectly correct in saying that a compiler whose "int" is less than 16 bits does not implement the C language correctly (never mind any of the standards).  However, the fact remains that in the embedded market there are "C" compilers that target 8-bit cpus, have 8-bit ints (at least as an option), and are useful and productive tools despite not technically being "C".
Reply by David Brown October 22, 2007
CBFalconer wrote:
> David Brown wrote:
> ... snip ...
>> And if you need 2^32 as a constant (which you don't for 32-bit
>> modulo addition and subtraction), the most practical way is to
>> write 0x100000000ull, or perhaps (1ull << 32), since any realistic
>> embedded development compiler that supports more than 32-bit
>> integers will support long long ints.
>
> long long is a C99 feature.  Most C compilers today do not
> implement that.  However, they do implement C89/C90/C95, which does
> provide long and unsigned long.  These, in turn, are guaranteed to
> provide at least 32 bits.  So change the 'ull' above to 'ul' and
> things will probably work, provided the compiler is C89 or later
> compliant.
2^32 requires 33 bits (it is a one followed by 32 zero bits), which is why I specifically wrote ull.  The only way a compiler could support a 2^32 constant without supporting long long ints is if it is a 64-bit compiler with 64-bit long ints.  There are very few 64-bit cpus in the embedded arena - MIPS and perhaps PPC are the only ones I know of, excluding amd64 cpus, and any practical compiler for these devices will support long long ints.
Reply by David Brown October 18, 2007
Paul Black wrote:
> John Devereux wrote:
>> It still assumes the existence of an 8 bit type, which would likely
>> not exist on a machine where "unsigned char" was not 8 bit.
>
> I'm curious about how a storage type smaller than char could exist
> without knock-on effects.
>
> For instance, what would sizeof(uint8) return when a char contained
> more than 8 bits?  After all, by definition, sizeof(char) == 1.
I should imagine that if a compiler supports a "uint4_t", for example, then sizeof(uint4_t) will give an error, just as applying sizeof to a bitfield does. Storage types smaller than a "char" (other than bit-fields) are not required or specified by the standards. Thus any such feature is an extra, and the compiler writer can quite reasonably have restrictions such as not allowing sizeof, or making it impossible to take the address of such a type.
Reply by Hans-Bernhard Bröker October 17, 2007
John Devereux wrote:
> Hans-Bernhard Bröker <HBBroeker@t-online.de> writes:
>> You bet on assumptions about current processors, when the primary
>> purpose of uint8_t is that it removes both the need to assume
>> anything, and the restriction to currently existing hardware.
> It still assumes the existence of an 8 bit type, which would likely
> not exist on a machine where "unsigned char" was not 8 bit.
Likeliness is irrelevant. Either uint8_t exists, or it doesn't. Period. A routine should use uint8_t if, and only if, the algorithm needs those variables to be exactly 8 bits wide and unsigned. So if uint8_t isn't available, the code won't compile --- but that's a good thing, since it wouldn't work even if compiled. If it is available, the code will compile and work exactly as designed.
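This "compiles exactly where it works" property can be made explicit.  The sketch below uses `UINT8_MAX`, which <stdint.h> defines only when uint8_t exists, so the #error fires on any implementation lacking an exact 8-bit type; the checksum function is a hypothetical example of code that genuinely needs mod-256 wraparound.

```c
/* Sketch: code that requires exactly-8-bit unsigned arithmetic.
   On an implementation without uint8_t, UINT8_MAX is not defined
   and this translation unit refuses to compile -- by design. */
#include <stdint.h>

#ifndef UINT8_MAX
#error "This algorithm requires an exact 8-bit unsigned type"
#endif

/* Hypothetical example: a mod-256 checksum that relies on uint8_t
   wrapping at exactly 256. */
uint8_t checksum(const uint8_t *p, unsigned n)
{
    uint8_t sum = 0;
    while (n--)
        sum = (uint8_t)(sum + *p++);   /* wraps mod 256 exactly */
    return sum;
}
```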
Reply by Paul Black October 17, 2007
John Devereux wrote:
> It still assumes the existence of an 8 bit type, which would likely
> not exist on a machine where "unsigned char" was not 8 bit.
I'm curious about how a storage type smaller than char could exist without knock-on effects.  For instance, what would sizeof(uint8) return when a char contained more than 8 bits?  After all, by definition, sizeof(char) == 1.

-- 
Paul
Reply by John Devereux October 16, 2007
Hans-Bernhard Bröker <HBBroeker@t-online.de> writes:

> John Devereux wrote:
>> Hans-Bernhard Bröker <HBBroeker@t-online.de> writes:
>
>>> ... and doing so, you would be introducing the exact mistake that
>>> uint8_t was invented to avoid.  uint8_t is exactly 8 bits, or it
>>> doesn't exist.  unsigned char is _not_ synonymous to that --- there's
>>> no particular reason it couldn't be 11 bits wide.
>
>> I know... but are there actually any current embedded processors that
>> have uint8_t, but do not have 8 bit chars?
>
> I'm afraid you're still not getting it.
So that's a "no" then? :)
> You bet on assumptions about current processors, when the primary
> purpose of uint8_t is that it removes both the need to assume
> anything, and the restriction to currently existing hardware.
It still assumes the existence of an 8 bit type, which would likely not exist on a machine where "unsigned char" was not 8 bit.  I.e., if I am worried about portability to that extent, I should use a mask instead (as discussed upthread).
>> It is far more likely that I will want to use a routine on a system
>> without uint8_t, than one without 8 bit chars.  YMMV.
>
> Fixing a missing uint8_t in that case is a one-liner (fix or create a
> <stdint.h> for that system).  Fixing the code is an open-ended job.
-- 
John Devereux
Reply by Walter Banks October 16, 2007

John Devereux wrote:

> I know... but are there actually any current embedded processors that
> have uint8_t, but do not have 8 bit chars?
uint8_t is not the same as an 8 bit char.  Char may be implemented as signed or unsigned, and compilers often have switches to change the default.  With C99, uint8_t is always 8 bits unsigned, independent of implementation or compiler switches.  (Thanks MISRA/C99 for size-specific types)

w..
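The signedness difference can be demonstrated directly.  In the sketch below, the result of converting 0xFF to plain char is implementation-defined (-1 on the common signed-char compilers, 255 where char is unsigned), while the uint8_t conversion is 255 everywhere:

```c
/* Sketch: plain char's signedness is implementation-defined;
   uint8_t is always unsigned. */
#include <stdint.h>

int char_value(void)
{
    return (int)(char)0xFF;    /* -1 or 255, depending on the compiler */
}

int uint8_value(void)
{
    return (int)(uint8_t)0xFF; /* always 255 */
}
```

Code that compares raw bytes against values above 127 behaves differently under the two types, which is exactly why MISRA-style rules mandate the explicit-width names.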
Reply by Hans-Bernhard Bröker October 16, 2007
John Devereux wrote:
> Hans-Bernhard Bröker <HBBroeker@t-online.de> writes:
>> ... and doing so, you would be introducing the exact mistake that
>> uint8_t was invented to avoid.  uint8_t is exactly 8 bits, or it
>> doesn't exist.  unsigned char is _not_ synonymous to that --- there's
>> no particular reason it couldn't be 11 bits wide.
> I know... but are there actually any current embedded processors that
> have uint8_t, but do not have 8 bit chars?
I'm afraid you're still not getting it.  You bet on assumptions about current processors, when the primary purpose of uint8_t is that it removes both the need to assume anything, and the restriction to currently existing hardware.
> It is far more likely that I will want to use a routine on a system
> without uint8_t, than one without 8 bit chars.  YMMV.
Fixing a missing uint8_t in that case is a one-liner (fix or create a <stdint.h> for that system). Fixing the code is an open-ended job.
Reply by Grant Edwards October 16, 2007
On 2007-10-16, Martin Wells <warint@eircom.net> wrote:
> David:
>
>>> "int_fast16_t" shouldn't be anything other than plain old int.
>>
>> First off, that's not correct (consider the original Motorola 68000 - it
>> was a 32-bit processor, so "int" has a natural size of 32 bits on that
>> architecture, and yet it processed 16-bit data faster).
>
> Sounds exactly like a 16-Bit CPU to me.
It was a 32-bit CPU that came packaged with bus interfaces of various widths.  Using a type that was larger than the bus width was slower than using a type that was less than or equal to the bus width.
> I have a 32-Bit CPU that can do 64-Bit arithmetic at a slower rate... > should I start calling it a 64-Bit CPU?
The 68K was a 32-bit CPU.

-- 
Grant Edwards                   grante at visi.com

  Yow! I'm meditating on the FORMALDEHYDE and the
  ASBESTOS leaking into my PERSONAL SPACE!!
Reply by David Brown October 16, 2007
Martin Wells wrote:
> David: > >>> The C language is described in such a way that "int" should be the >>> most efficient type (or at the very least tied for first place). >> That's only the case for some architectures - in particular, for 8-bit >> micros, it is far from true. > > > Are you talking about machines that can do 8-Bit arithmetic faster > than 16-Bit arithmetic... ? I hadn't considered that on these > particular systems, char will be faster than int. I think I'll bring > this topic up in comp.lang.c. >
Of course an 8-bit "char" will be faster than a 16-bit "int" on an 8-bit architecture! At the very least, 16-bit data takes twice as much code and twice as much time for basic arithmetic. The C language's rule of "promote everything to int" is a royal PITA for users and implementers of compilers for 8-bit micros, and requires a lot of optimisation to produce good code. For many small micros, the fastest type is specifically an "unsigned char" - some operations, such as compares, are faster done unsigned if the architecture does not have an overflow flag. What microcontrollers have you worked with (or are planning to work with), and what compilers? And have you tried looking at the generated assembly for the code you compile?