John Devereux wrote:
> David Brown <david@westcontrol.removethisbit.com> writes:
>
> [...]
>
>> Secondly, I was suggesting that if you want portable code, you have to use size-specific integer types. Using <stdint.h> is an easy way to get that - otherwise, a common format header file that is adapted for the compiler/target in question is a useful method. It doesn't really matter whether you use "uint32_t" from <stdint.h>, or have a "typedef unsigned long int uint32_t" in a common header file - nor does it matter if you give the type your own name. But it *does* matter that you have such types available in your code.
>>
>> Certainly many of the situations where size specifics are important are hardware-dependent and non-portable - and thus the only issue is that the code in question is clear.
>>
>> But there are many cases where you need a minimum range which may not be satisfied by "int" on every platform, and also where you want the fastest implementation. If you have a function delayMicrosecs(unsigned int n), then the useful range is wildly different on a 32-bit target and a 16-bit (or 8-bit, with 16-bit int) target. On the other hand, if it is declared with "uint32_t n", it is portable from 8-bit to 64-bit processors. Since the OP was asking for portable code in an embedded newsgroup, there's no way he can make assumptions about the size of "int".
>
> If I wanted a minimum range of 32 bits, I would use "unsigned long". As you know this is already guaranteed to be at least 32 bits on all platforms, so I don't think there is any portability problem with 8, 16 and 32 bit processors.

"unsigned long int" will, as you say, work for when you need at least 32 bits, and "unsigned int" will work for when you need at least 16 bits. I prefer to be more specific and explicit - I find I often want my types to be of a given width, neither more nor less than I specify. I'll grant you that this is somewhat a matter of style - but I am not alone in this (it takes a big demand to get something like <stdint.h> into the standards).

> Now I admit that these are the only cases I think about when writing my own embedded code. But I suppose you could argue that on some hypothetical 64-bit embedded processor, I would then be using values that were longer than needed. But,

The 64-bit MIPS processors are non-hypothetical embedded processors (although they are not common in this newsgroup).

> - with 64-bit CPUs, is it not true that the compilers still tend to have 32-bit longs, and use "long long" for 64 bits?

That varies. "long long int" is always at least 64 bits, but there are different models for whether "long int" is 32-bit or 64-bit. In particular, 64-bit Windows uses 32-bit long ints, while 64-bit Linux uses 64-bit long ints. It is also perfectly possible for both "int" and "long int" to be 64-bit, but I think that is uncommon.

> - If longs *are* 64 bits, it could be because 32-bit operations are *slower* on that processor. Strictly, I think there might not even *be* a 32-bit type - unlikely, I agree.

As you say, possible but unlikely. I don't agree with your thoughts as to why "long int" might be 32-bit or 64-bit - the trouble is, different implementations of the same instruction set might have different balances (the first Intel chips to support amd64 instructions were, IIRC, quite a bit slower at 64-bit arithmetic, while the AMD chips were faster in 64-bit mode).

> - On a 64-bit system, it is very likely that I would want to take advantage of the greater word size and use 64-bit arguments in any case.

For most of my work, I use 8 and 16-bit systems, but I sometimes use 32-bit CPUs (some with 16-bit external buses, making 16-bit ints faster for some purposes) - thus I have the same situation when using 32-bit CPUs as you describe for 64-bit CPUs.
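The two options described in this exchange - <stdint.h> on a C99 toolchain, or an equivalent project-local header on C89 - can be sketched as below. The C89 fallback widths (8-bit char, 16-bit short, 32-bit long) are assumptions that must be verified against the actual compiler and target; this is a sketch, not a drop-in header.

```c
#include <limits.h>

/* Sketch of the "common format header" approach: on a C99 toolchain,
 * <stdint.h> provides these names directly; on C89, the project header
 * supplies equivalent typedefs.  The fallback widths below are
 * ASSUMPTIONS to be checked per compiler/target pairing. */
#if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
#include <stdint.h>
#else
typedef unsigned char  uint8_t;
typedef unsigned short uint16_t;
typedef unsigned long  uint32_t;
#endif

/* The portable interface from the discussion: declaring the parameter
 * as uint32_t fixes the useful range on every target, 8-bit to 64-bit. */
void delayMicrosecs(uint32_t n);
```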
Delay Routine: Fully-portable C89 if possible
Started by ●October 9, 2007
Reply by ●October 10, 2007
Reply by ●October 10, 2007
David Brown <david@westcontrol.removethisbit.com> writes:
> John Devereux wrote:
>> David Brown <david@westcontrol.removethisbit.com> writes:
>>
>> [...]
>
> [...]
>
>>> But there are many cases where you need a minimum range which may not be satisfied by "int" on every platform, and also where you want the fastest implementation. If you have a function delayMicrosecs(unsigned int n), then the useful range is wildly different on a 32-bit target and a 16-bit (or 8-bit, with 16-bit int) target. On the other hand, if it is declared with "uint32_t n", it is portable from 8-bit to 64-bit processors. Since the OP was asking for portable code in an embedded newsgroup, there's no way he can make assumptions about the size of "int".
>>
>> If I wanted a minimum range of 32 bits, I would use "unsigned long". As you know this is already guaranteed to be at least 32 bits on all platforms, so I don't think there is any portability problem with 8, 16 and 32 bit processors.
>
> "unsigned long int" will, as you say, work for when you need at least 32 bits, and "unsigned int" will work for when you need at least 16 bits. I prefer to be more specific and explicit - I find I often want my types to be of a given width, neither more nor less than I specify. I'll grant you that this is somewhat a matter of style - but I am not alone in this (it takes a big demand to get something like <stdint.h> into the standards).
>
>> Now I admit that these are the only cases I think about when writing my own embedded code. But I suppose you could argue that on some hypothetical 64-bit embedded processor, I would then be using values that were longer than needed. But,
>
> The 64-bit MIPS processors are non-hypothetical embedded processors (although they are not common in this newsgroup).
>
>> - with 64-bit CPUs, is it not true that the compilers still tend to have 32-bit longs, and use "long long" for 64 bits?
>
> That varies. "long long int" is always at least 64 bits, but there are different models for whether "long int" is 32-bit or 64-bit. In particular, 64-bit Windows uses 32-bit long ints, while 64-bit Linux uses 64-bit long ints. It is also perfectly possible for both "int" and "long int" to be 64-bit, but I think that is uncommon.
>
>> - If longs *are* 64 bits, it could be because 32-bit operations are *slower* on that processor. Strictly, I think there might not even *be* a 32-bit type - unlikely, I agree.
>
> As you say, possible but unlikely. I don't agree with your thoughts as to why "long int" might be 32-bit or 64-bit - the trouble is, different implementations of the same instruction set might have different balances (the first Intel chips to support amd64 instructions were, IIRC, quite a bit slower at 64-bit arithmetic, while the AMD chips were faster in 64-bit mode).
>
>> - On a 64-bit system, it is very likely that I would want to take advantage of the greater word size and use 64-bit arguments in any case.
>
> For most of my work, I use 8 and 16-bit systems, but I sometimes use 32-bit CPUs (some with 16-bit external buses, making 16-bit ints faster for some purposes) - thus I have the same situation when using 32-bit CPUs as you describe for 64-bit CPUs.

OK. I think I do see the rationale, but I don't find it convincing enough to want to expunge the native types and "pollute" my code with *int*_t everywhere.

One other point I did not mention (perhaps someone else did?) would be interfacing to standard C functions. E.g. what happens when you call printf with a uint32_t, or an int_atleast_32_and_fast_please_t? Doesn't that imply a whole new set of things to worry about?

-- 
John Devereux
Reply by ●October 10, 2007
John Devereux wrote:
> David Brown <david@westcontrol.removethisbit.com> writes:
>
>> John Devereux wrote:
>>> David Brown <david@westcontrol.removethisbit.com> writes:
>>>
>>> [...]
>>
>> [...]
>>
>>>> But there are many cases where you need a minimum range which may not be satisfied by "int" on every platform, and also where you want the fastest implementation. If you have a function delayMicrosecs(unsigned int n), then the useful range is wildly different on a 32-bit target and a 16-bit (or 8-bit, with 16-bit int) target. On the other hand, if it is declared with "uint32_t n", it is portable from 8-bit to 64-bit processors. Since the OP was asking for portable code in an embedded newsgroup, there's no way he can make assumptions about the size of "int".
>>>
>>> If I wanted a minimum range of 32 bits, I would use "unsigned long". As you know this is already guaranteed to be at least 32 bits on all platforms, so I don't think there is any portability problem with 8, 16 and 32 bit processors.
>>
>> "unsigned long int" will, as you say, work for when you need at least 32 bits, and "unsigned int" will work for when you need at least 16 bits. I prefer to be more specific and explicit - I find I often want my types to be of a given width, neither more nor less than I specify. I'll grant you that this is somewhat a matter of style - but I am not alone in this (it takes a big demand to get something like <stdint.h> into the standards).
>>
>>> Now I admit that these are the only cases I think about when writing my own embedded code. But I suppose you could argue that on some hypothetical 64-bit embedded processor, I would then be using values that were longer than needed. But,
>>
>> The 64-bit MIPS processors are non-hypothetical embedded processors (although they are not common in this newsgroup).
>>
>>> - with 64-bit CPUs, is it not true that the compilers still tend to have 32-bit longs, and use "long long" for 64 bits?
>>
>> That varies. "long long int" is always at least 64 bits, but there are different models for whether "long int" is 32-bit or 64-bit. In particular, 64-bit Windows uses 32-bit long ints, while 64-bit Linux uses 64-bit long ints. It is also perfectly possible for both "int" and "long int" to be 64-bit, but I think that is uncommon.
>>
>>> - If longs *are* 64 bits, it could be because 32-bit operations are *slower* on that processor. Strictly, I think there might not even *be* a 32-bit type - unlikely, I agree.
>>
>> As you say, possible but unlikely. I don't agree with your thoughts as to why "long int" might be 32-bit or 64-bit - the trouble is, different implementations of the same instruction set might have different balances (the first Intel chips to support amd64 instructions were, IIRC, quite a bit slower at 64-bit arithmetic, while the AMD chips were faster in 64-bit mode).
>>
>>> - On a 64-bit system, it is very likely that I would want to take advantage of the greater word size and use 64-bit arguments in any case.
>>
>> For most of my work, I use 8 and 16-bit systems, but I sometimes use 32-bit CPUs (some with 16-bit external buses, making 16-bit ints faster for some purposes) - thus I have the same situation when using 32-bit CPUs as you describe for 64-bit CPUs.
>
> OK. I think I do see the rationale, but I don't find it convincing enough to want to expunge the native types and "pollute" my code with *int*_t everywhere.

These things are a matter of personal style, but I think it's important to have concrete, specific-sized types available when you need them. In my code, a lot of local variables end up as "int" for convenience (although not on 8-bit targets), but for many exported functions, types, and data, I use size-specific types. I also use them for structs that will be used in arrays - on small micros, it can make a big difference if the size of such structs makes it easy to calculate addresses.

In practice, I make more use of my own typedef'ed types like "byte" and "word" (16-bit, regardless of the CPU), simply because I've been using them for over a decade. But whereas previously my common include file (target/compiler specific) might have "typedef unsigned short int word;", it would now have "typedef uint16_t word;".

> One other point I did not mention (perhaps someone else did?) would be interfacing to standard C functions. E.g. What happens when you call printf with a uint32_t, or a int_atleast_32_and_fast_please_t?

First off, I don't use printf or friends very often (I write small embedded systems). Secondly, if I *do* use printf (more likely snprintf), I use gcc, which will type-check the parameters against the format so that any mistakes are caught - although with any variable-parameter function, you've lost much of C's already limited type checking. Thirdly, I occasionally have to cast the parameters explicitly so that I can be sure there are no mistakes.

> Doesn't that imply a whole new set of things to worry about?

I would not say so, no. On the other hand, if I were writing a C program on a PC or an embedded Linux box, I'd expect to use the fundamental C types a lot more often - because then parameters like the size of an "int" are known and fixed, "int" is generally the fastest type (which is not always the case in embedded systems), and there are far fewer demands in trying to use the smallest possible type in order to save memory.

mvh.,

David
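The typedef migration described here - keeping project-local names like "byte" and "word" but redefining them on top of <stdint.h> - might look like the sketch below. The struct is an invented illustration of the struct-array point, not code from the post.

```c
#include <stdint.h>

/* Project-local names: the code keeps using "byte" and "word", only
 * the definitions behind them change. */
typedef uint8_t  byte;   /* was: typedef unsigned char byte;      */
typedef uint16_t word;   /* was: typedef unsigned short int word; */

/* Size-specific members keep struct elements the same size on every
 * target, which keeps array address calculations cheap on small
 * micros.  (Hypothetical example struct.) */
struct led_channel {
    byte pin;          /* 1 byte                 */
    byte flags;        /* 1 byte                 */
    word period_ms;    /* 2 bytes, on any target */
};
```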
Reply by ●October 10, 2007
On 2007-10-10, John Devereux <jdREMOVE@THISdevereux.me.uk> wrote:
> If I wanted a minimum range of 32 bits, I would use "unsigned long".

You'd be better off using uint_least32_t. It _may_ mean the same thing to the compiler, but it expresses your intent more clearly to the reader (and that's what really matters). It also allows the compiler to use a longer type if that would result in faster or smaller code.

> As you know this is already guaranteed to be at least 32 bits on all platforms, so I don't think there is any portability problem with 8, 16 and 32 bit processors.
>
> Now I admit that these are the only cases I think about when writing my own embedded code. But I suppose you could argue that on some hypothetical 64-bit embedded processor, I would then be using values that were longer than needed. But,
>
> - with 64-bit CPUs, is it not true that the compilers still tend to have 32-bit longs, and use "long long" for 64 bits?

Why worry about it? Use uint32_t if you want exactly 32 bits. Use uint_least32_t if you want at least 32 bits. Use uint64_t if you want 64 bits.

-- 
Grant Edwards                   grante             Yow! I just went below the
                                  at               poverty line!
                               visi.com
Reply by ●October 10, 2007
"John Devereux" <jdREMOVE@THISdevereux.me.uk> skrev i meddelandet news:87hcl0q9e6.fsf@cordelia.devereux.me.uk...
> Martin Wells <warint@eircom.net> writes:
>
>> David:
>>
>>> Most importantly, it gives you "stdint.h" and types like "uint32_t" so that you can avoid unspecific non-standardised types like "long unsigned" (which should always be written "long unsigned int").
>>
>> "long unsigned int" is a part of C89.
>>
>> Perhaps you were on about "long long unsigned int"? (which is a part of C99 but not C89)
>>
>> As far as any compliant C89 compiler is concerned, "long unsigned" and "long unsigned int" are the same thing. If the compiler doesn't accept it, then it isn't a C89 compiler.
>>
>> Even if it were worth switching to C99 (which I don't think it is), I still wouldn't because it's so poorly implemented today.
>
> No, he is saying that he prefers being able to use uint32_t, instead of, for example, long unsigned.
>
> I personally don't agree though. At least in my own work I have not found a real-life situation where these types are an improvement. Basically, if you are worried about the exact widths of types, then that part of the program is likely non-portable anyway, so the new types don't help much.
>
> For example, in C99 I could define a 32-bit hardware register like this:
>
> #include <stdint.h>
> #define PORT (*(volatile uint32_t *)(0x1FFF0000))
>
> But in fact this code will likely be useless on some hypothetical other CPU anyway. I can just as easily rely on ints being 32 bit on my platform, and do
>
> #define PORT (*(volatile unsigned long *)(0x1FFF0000))

It's a terribly good example to try to access H/W functionality when we are talking about portability - not. The main problem is likely to be "int", which is quite often 16-bit on a small micro and 32-bit on a larger micro. Plain chars are sometimes set to unsigned char on small micros, and to be able to guarantee signedness regardless of compiler options is an improvement.

-- 
Best Regards,
Ulf Samuelsson
This is intended to be my personal opinion which may, or may not, be shared by my employer Atmel Nordic AB

> --
>
> John Devereux
Reply by ●October 10, 2007
On Tue, 09 Oct 2007 05:49:09 -0700, Martin Wells wrote:
> I'm doing an embedded systems project which consists of taking input from the user via simple buttons, and giving output in the form of lighting LEDs. So far, I've written the program as fully-portable C89, and I intend to keep it that way as much as possible. Obviously, I'll have microcontroller-specific parts to it such as:
>
> void SetPinHigh(unsigned)
> {
>     /* Must call microcontroller-specific library functions or something here */
> }
>
> , but the rest of my program calls these "wrapper" functions so I can keep the bulk of it fully-portable.
>
> Anyway, I've come to a point where I need to introduce delays, and I again want this to be fully portable. The delays will be in the region of milliseconds (typically 250 ms, e.g. for flashing LEDs).
>
> I had considered using a macro which indicates the "Floating Point Operations per Second" for the given hardware setup, and then doing something like:
>
> void Delay(unsigned const fraction_of_full_second)
> {
>     long unsigned amount_flop = FLOPS / fraction_of_full_second;
>
>     float x = 56.3;
>
>     do x /= x;
>     while (--amount_flop);
> }
>
> (I realise I'll have to take into account the delays of looping and so forth)
>
> Is this a bad idea? How should I go about introducing a delay? Must this part of my program be non-portable?
>
> Martin

Many embedded systems need to do multiple tasks, either with an RTOS kernel or with a task loop. These systems will _not_ look favorably on some "portable code" that steals the processor for a delay loop. To make your code compatible with both a task loop and an OS you need to put everything into a state machine that gets run when the application programmer calls your 'update' function (don't just name it 'update', of course).

The best way (to my mind) to enforce timing is to require the user to hook you up with a system time function that returns your choice of time units (I find myself using milliseconds quite often). Then you can keep track of when you want to come alive, and immediately return from your update function if sufficient time has not passed.

The second best way to enforce timing, and always a good adjunct to the first way, is to have your 'update' function return a delay that you want to be externally implemented. So if you run your update function and decide that the next time you should be called is in 250 ms, you can return a number that means 250 ms, and leave it up to the application programmer to wait that long.

I have done this with good success in portable code that needs to run in both sorts of environments.

-- 
Tim Wescott
Control systems and communications consulting
http://www.wescottdesign.com

Need to learn how to apply control theory in your embedded system?
"Applied Control Theory for Embedded Systems" by Tim Wescott
Elsevier/Newnes, http://www.wescottdesign.com/actfes/actfes.html
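The first scheme above - check a system time, return immediately if not due - can be sketched as a small state machine. Every name here is illustrative (the post prescribes the approach, not this code), and the time source is shown as a parameter rather than a hooked-up function pointer to keep the sketch short:

```c
#include <stdint.h>

/* Hypothetical non-blocking LED blinker: its update function returns
 * immediately unless the interval has elapsed, so it never steals the
 * processor from a task loop or RTOS. */
struct blinker {
    uint32_t next_due;    /* tick at which the next toggle is due */
    uint32_t period_ms;   /* e.g. 250 ms for a flashing LED       */
    int      led_on;
};

void blinker_init(struct blinker *b, uint32_t now_ms, uint32_t period_ms)
{
    b->period_ms = period_ms;
    b->next_due  = now_ms + period_ms;
    b->led_on    = 0;
}

/* Call as often as you like from the task loop, passing the current
 * system time in milliseconds (supplied by the application, as the
 * post suggests).  Unsigned subtraction keeps the comparison correct
 * across tick-counter wrap-around. */
void blinker_update(struct blinker *b, uint32_t now_ms)
{
    if ((uint32_t)(now_ms - b->next_due) > UINT32_C(0x7FFFFFFF))
        return;                       /* not due yet */
    b->led_on = !b->led_on;           /* SetPinHigh()/SetPinLow() here */
    b->next_due += b->period_ms;
}
```

The wrap-around trick relies on `now_ms - next_due` being small and positive once the deadline has passed, and huge (top bit set) while it has not.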
Reply by ●October 10, 2007
"Ulf Samuelsson" <ulf@a-t-m-e-l.com> writes:
> "John Devereux" <jdREMOVE@THISdevereux.me.uk> skrev i meddelandet news:87hcl0q9e6.fsf@cordelia.devereux.me.uk...
>> Martin Wells <warint@eircom.net> writes:
>>
>>> David:
>>>
>>>> Most importantly, it gives you "stdint.h" and types like "uint32_t" so that you can avoid unspecific non-standardised types like "long unsigned" (which should always be written "long unsigned int").
>>>
>>> "long unsigned int" is a part of C89.
>>>
>>> Perhaps you were on about "long long unsigned int"? (which is a part of C99 but not C89)
>>>
>>> As far as any compliant C89 compiler is concerned, "long unsigned" and "long unsigned int" are the same thing. If the compiler doesn't accept it, then it isn't a C89 compiler.
>>>
>>> Even if it were worth switching to C99 (which I don't think it is), I still wouldn't because it's so poorly implemented today.
>>
>> No, he is saying that he prefers being able to use uint32_t, instead of, for example, long unsigned.
>>
>> I personally don't agree though. At least in my own work I have not found a real-life situation where these types are an improvement. Basically, if you are worried about the exact widths of types, then that part of the program is likely non-portable anyway, so the new types don't help much.
>>
>> For example, in C99 I could define a 32-bit hardware register like this:
>>
>> #include <stdint.h>
>> #define PORT (*(volatile uint32_t *)(0x1FFF0000))
>>
>> But in fact this code will likely be useless on some hypothetical other CPU anyway. I can just as easily rely on ints being 32 bit on my platform, and do
>>
>> #define PORT (*(volatile unsigned long *)(0x1FFF0000))
>
> Its a terribly good example to try to access H/W functionality when we are talking about portability - not.

That is kind of what I was saying! Most cases where I want to use exact widths are not portable anyway, so the "exact-width" types are no help.

> Main problem is likely to be the int, which is quite often 16 bit on a small micro and 32 bit on a larger micro. char's are sometimes set to unsigned char on small micros, and to be able to guarantee signedness regardless of compiler options is an improvement.

You can already guarantee signedness using "signed" or "unsigned" as needed. I.e., I don't see what is wrong with using:

  "signed char"   when you need signed
  "unsigned char" when you need unsigned
  "char"          when you don't care

If you start using e.g. uint8_t everywhere then you get into trouble with the library functions that expect plain char.

-- 
John Devereux
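The library friction mentioned in this post - standard functions expecting plain char - looks like this in practice. An illustrative sketch (the helper name is invented):

```c
#include <stdint.h>
#include <string.h>

/* strlen() and friends traffic in plain char, so a uint8_t buffer
 * needs a cast at every such call site.  On platforms where uint8_t
 * is unsigned char (nearly all of them), the cast is safe; it is
 * just noise. */
size_t message_length(const uint8_t *msg)
{
    return strlen((const char *)msg);
}
```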
Reply by ●October 10, 2007
John Devereux wrote:
> OK. I think I do see the rationale, but I don't find it convincing enough to want to expunge the native types and "pollute" my code with *int*_t everywhere.

Who said anything about expunging native types? The reason for uint16_t and friends is that there are situations (e.g. if you want to control wrap-around behaviour in an expression) where you need a type of exactly that size, or the code won't work. Now, of course "unsigned int" might work just the same, too --- on the platform the code is aimed at now. But the next controller you want to run it on may be a 32-bit one. So you'll have to go over the *entire* code and decide, based on design documentation (lots of it, hopefully), comments and the occasional guess, which of those "unsigned int"s was actually meant to be exactly 16-bit unsigned, and which wasn't. Better to spell it out right there on the first shot, and be done with it.

Ultimately, the closer you look, the less useful the traditional, "native" integer types turn out to be.

> One other point I did not mention (perhaps someone else did?) would be interfacing to standard C functions. E.g. What happens when you call printf with a uint32_t, or a int_atleast_32_and_fast_please_t?

You learn about <inttypes.h>. And you make sure the tools (compiler, lint) are up to the task of helping you with these, like they hopefully already do with the traditional integer types. The ability to have tools help you with these is actually a major reason why they should be standardized. Lint shouldn't have to learn everybody's and their grandma's private re-invention of uint16_t.
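Concretely, <inttypes.h> pairs every fixed-width type with a printf format macro, so there is no guessing whether uint32_t needs "%u" or "%lu" on a given target. A sketch (the helper name is invented):

```c
#include <inttypes.h>
#include <stdio.h>

/* PRIu32 expands to the correct conversion specifier for uint32_t on
 * the current platform ("u" where int is 32-bit, "lu" where long is),
 * so the format string stays portable. */
int format_ticks(char *buf, size_t n, uint32_t ticks)
{
    return snprintf(buf, n, "ticks=%" PRIu32, ticks);
}
```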
Reply by ●October 10, 2007
Martin Wells wrote:
> John:
>> For example, in C99 I could define a 32-bit hardware register like this:
>>
>> #include <stdint.h>
>> #define PORT (*(volatile uint32_t *)(0x1FFF0000))
>
> This will only work on implementations that actually have an unsigned integer type that has exactly 32 value representation bits.

I suspect you'll find this hard to believe, but: that's actually a good thing. A platform that has no such type can't run that code as designed, so there's no point for it to compile on that platform. It should fail.

> On other implementations, it won't compile. Best to use uint_least32_t (at least 32 bits).

Not in this particular case. If the platform doesn't have 32-bit integers, it can't have 32-bit hardware registers, so it shouldn't compile this code.

> unsigned long is guaranteed to be at least 32-bit, so that's fine. If you wanted to go easy on space consumption, you could still use C89 and use macros:
>
> #if VALUE_BITS(char unsigned) >= 32
> typedef char unsigned uint_least32_t;

It would be rather nice if a VALUE_BITS like that were a C89 standard functionality, wouldn't it? Well, sorry, it's not. And not only that: it can't even be implemented in C89. There's only CHAR_BIT (in <limits.h>), and sizeof() --- but the latter doesn't work in preprocessor conditionals. There are reasons <stdint.h> was made part of the standard. One of them is that it's quite hard to implement its functionality unless you're the compiler implementor.
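What *is* expressible in C89 is a minimum-width test, because <limits.h> exposes each type's maximum value to the preprocessor. A sketch of picking an at-least-32-bit type without <stdint.h> (the typedef name is invented, to avoid clashing with C99's):

```c
#include <limits.h>

/* Pick the narrowest standard unsigned type with at least 32 value
 * bits, using only C89 facilities.  The final #else is unreachable on
 * a conforming compiler (C89 already requires ULONG_MAX >= 2^32 - 1);
 * it is kept as a defensive guard. */
#if UINT_MAX >= 0xFFFFFFFFUL
typedef unsigned int  my_uint_least32;
#elif ULONG_MAX >= 0xFFFFFFFFUL
typedef unsigned long my_uint_least32;
#else
#error "no unsigned type with at least 32 value bits"
#endif
```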
Reply by ●October 10, 2007
Chris Hills wrote:
> In message <470b8d0c$0$3221$8404b019@news.wineasy.se>, David Brown <david@westcontrol.removethisbit.com> writes
>> Also, if you are going to try and stick to a standard, C99 makes more sense (even though few compilers support it completely).
>
> Absolutely NOT All the embedded compilers are based on C95.

The above is critically unclear due to a typo. I'll assume what you meant is:

> Absolutely NOT. All the embedded compilers are based on C95.

And I find that an absolutely stunning statement. So stunning that I find it impossible to believe. "All the embedded compilers", you say? As in each and every single one of them, and you're sure of that?