EmbeddedRelated.com Forums

C++ problem

Started by tim... March 1, 2016
"tim..." <tims_new_home@yahoo.com> writes:
> I like the "C" rule. I like the fact that consts are identified by
> them being in Caps, it is useful. I hate the fact that in C++ they
> look like normal variables...
Nothing stops you from naming consts in caps: static const uint32_t XYZZY = 3; works just fine.
> I questioned the C++ rule, and because I am less experienced in C++
> than the person who oversees the rules I get told to "take a hike".
Well, that's not very good manners on the other person's part, but C++ has such a wide range of possible styles that (more than in other languages IMHO) project leadership has to enforce a level of uniformity. They should at least be nice about it though.
> a type mismatch has never ever caused a problem which (once
> discovered) took more than a little thought to find the source and fix.
Yes, the idea is that it's better to prevent the problem in the first place than require you to discover it (often at the worst possible time) after the code is already running.
> OTOH there have been multiple occasions when some different type of
> error that the compiler didn't spot has taken weeks to resolve.
I do have the impression that C static analyzers have gotten a lot better lately, but it's been a while since I wrote very much C.
On 04.3.2016 г. 11:16, Philip Lantz wrote:
> Dimiter_Popoff wrote:
>>
>> On 03.3.2016 г. 11:26, Paul Rubin wrote:
>>> Dimiter_Popoff <dp@tgi-sci.com> writes:
>>>> The product of SIGNED multiplication (as you repeatedly repeated) is
>>>> a SIGNED number and $ffff0001 being a SIGNED number is -65535.
>>>
>>> You can't multiply a positive number by a positive number and get a
>>> negative number, except through overflow. And in C, overflow on signed
>>> ints results in UB.
>>>
>>>> Then in NO working compiler would the product of a signed 16 bit
>>>> being -1 ($ffff) and an unsigned $ffff (65535) - being converted to
>>>> signed prior to multiplication
>>>
>>> Where did signed $ffff come from? I thought we were talking about
>>> unsigned $ffff being multiplied by itself.
>>
>> It was there from the very start of the thread and is present
>> throughout.
>>
>> Dimiter
>
> I think you'll find that no one but you has mentioned a signed 16
> bit number in this thread. This is the reason everyone has been
> disagreeing with your comments.
>
From the post initiating the thread:

"Today, I discovered that if you use:

const unsigned int xxx = value; (which in this case happens to be = 0)

and then in the code try and compare it with a signed int yyy "

"Everyone" amounts to one person really - who disagreed with obvious
numeric statements I made, probably out of being hasty to know better
without having seriously read them.

I had misread his explanation that both operands get promoted from 16
bits whatever to 32 bit signed (he had stated they got promoted to
unsigned). In the former case there is no way to get an overflow; in
the latter there is only one way to get an overflow, and that is if,
after promoting both to unsigned by some compiler/language error, a
signed compare is attempted on the two - pretty far-fetched, but
perhaps everyday life for C users. This would be one of the less
significant shortcomings of C, though it would be indicative of the
mess it is (besides being a clunky language - but that is another
matter on which perhaps "everybody" here will indeed disagree with me,
"everybody" having never had anything better at their disposal).

Either way my posts are clear enough and arithmetically correct;
figures like that are not subject to agreement or disagreement.

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/
On 03.3.2016 г. 14:09, George Neuner wrote:
> On Thu, 3 Mar 2016 11:39:16 +0200, Dimiter_Popoff <dp@tgi-sci.com>
> wrote:
>
>> ... I would be seriously stunned if
>> it turns out that in C (which I do not use, I use my vpa
>> obviously) multiplying these two constants, 16 bit $ffff representing
>> a signed number (i.e. being -1) and 16 bit $ffff unsigned (i.e.
>> 65535) into a 32 bit result would not work.
>
> It's not just a question of whether it works ... it's also a question
> of _why_ it works. This behavior works de facto due to a quirk[*] of
> implementation, not because the C standard says it should work.
>
> [*] a ubiquitous quirk, but a quirk nonetheless.
>
> George
>
Well, I would expect it to work - why a quirk? (If I understand "quirk"
correctly here, it means some weird circumstance leading to the
result.) I would expect a 32 bit signed multiply to work on any values
fitting within 16 bits in two 32 bit registers. Now, on the 68K say,
one just cannot do 16-bit signed * 16-bit unsigned -> 32-bit in a
single instruction; it is either muls.w or mulu.w. It must be the same
on other processors under similar circumstances - maybe it has to do
with that?

Dimiter
On 04.3.2016 г. 11:21, Philip Lantz wrote:
> Dimiter_Popoff wrote:
>> Paul Rubin wrote:
>>> Regarding "no working compiler": we're discussing the dark corners of C,
>>> and you have to define C as "what the standard says" rather than "what
>>> compilers you've looked at happen to do". If the standard says UB then
>>> the code is invalid. Real world C programs are full of such bugs by
>>> the way:
>>
>> I know programs are full of bugs and I know legacy may keep wrong
>> operations alive for decades but I would be seriously stunned if
>> it turns out that in C (which I do not use, I use my vpa
>> obviously) multiplying these two constants, 16 bit $ffff representing
>> a signed number (i.e. being -1) and 16 bit $ffff unsigned (i.e.
>> 65535) into a 32 bit result would not work. Then may be it would
>> not work indeed if they do not extend the operands to 32 bit prior
>> to the multiplication, a compiler writer beginners error ...
>
> In C, assuming int is 32 bits, as it has been throughout this
> thread, the compiler is *required* to extend the 16-bit numbers
> to 32 bits before the multiplication.
Sign extend or zero extend? (hopefully you understand what I am asking).
On 04/03/16 10:02, tim... wrote:
> > "David Brown" <david.brown@hesbynett.no> wrote in message > news:nba61c$vc5$1@dont-email.me... >> On 03/03/16 19:53, tim... wrote: >>> >>> "David Brown" <david.brown@hesbynett.no> wrote in message >>> news:nb7l4g$5m3$1@dont-email.me... >>>> On 02/03/16 18:58, tim... wrote: >> >>>>> >>>>> If I was working in C I wouldn't have the a typed const variable at >>>>> all >>>>> I'd be using an untyped define >>>> >>>> Why? >>> >>> I dunno >>> >> >> The first step to progress is to question conventional wisdom! > > I have: > > I like the "C" rule. I like that fact that consts are identified by > them being in Caps, it is useful. >
That's interesting. Personally, I hate the fact that people often use all caps for constants (whether it be #define, const, or enum values). I can accept all-caps for "dangerous" macros, but using them for simple constants is just ugly and unnecessary IMHO. Of course, if you /want/ to use all-caps for your constants, there is nothing to stop you writing "static const int COUNT = 100;".
> I hate the fact that in C++ they look like normal variables, so I
> questioned the C++ rule, and because I am less experienced in C++ than
> the person who oversees the rules I get told to "take a hike".
That doesn't sound particularly helpful! Even "it's tradition" or "someone else said it was a good idea" would have been better. And again, personally I /like/ that constants look like normal variables. I don't see a reason to distinguish them.
>
> You may think that the stronger typing helps you get to working code.
> My experience is different. In 35 years of (mostly) embedded
> development, a type mismatch has never ever caused a problem which (once
> discovered) took more than a little thought to find the source and fix.
That is not the point of stronger typing. It is not about preventing type mismatches as an end in themselves - rather, stronger typing uses type-mismatch compile-time errors to spot logical errors in the code.
>
> OTOH there have been multiple occasions when some different type of
> error that the compiler didn't spot has taken weeks to resolve.
One day, perhaps, compilers will be omniscient and omnipotent, and such things will no longer be a problem. But then we will be out of a job as programmers (and will be free to join the resistance to fight against Skynet...).
>
>>
>>> But I've only ever come across the rule to do it this way in C++
>>>
>>> No C shop I have ever worked in has said "never use defines for
>>> constant (numeric) values", all of the C++ ones have
>>>
>>
>> A good many "rules" for C, or coding standards, have a long heritage -
>> stretching back to when ANSI C was new, and "const" was a strange,
>> newfangled concept with poor compiler support. And for a long time,
>> many embedded C compilers have been rather limited or inefficient - if
>> you made your fixed values as "static const int" the compiler would
>> put that value into flash (or even ram) and read it from there, rather
>> than optimising the value at compile time. Needless to say, this
>> could be very inefficient. While most compilers these days are better
>> than that, the myth lives on.
>>
>> There is also the issue that in C, unlike in C++, a "static const" is
>> not a constant expression - you can't use it as a switch label, or for
>> giving the size of an array.
>>
>> So while I fully agree with you that C++ guidelines commonly emphasise
>> const's over #define's, and C guidelines commonly do not, I recommend
>> using static const in C code too.
>
> I shall feel free to ignore that advice
>
That's fine - it's your choice. You could also consider using enums for your constants.
On 04/03/16 10:44, Dimiter_Popoff wrote:
> On 04.3.2016 г. 11:16, Philip Lantz wrote:
>> Dimiter_Popoff wrote:
>>>
>>> On 03.3.2016 г. 11:26, Paul Rubin wrote:
>>>> Dimiter_Popoff <dp@tgi-sci.com> writes:
>>>>> The product of SIGNED multiplication (as you repeatedly repeated) is
>>>>> a SIGNED number and $ffff0001 being a SIGNED number is -65535.
>>>>
>>>> You can't multiply a positive number by a positive number and get a
>>>> negative number, except through overflow. And in C, overflow on signed
>>>> ints results in UB.
>>>>
>>>>> Then in NO working compiler would the product of a signed 16 bit
>>>>> being -1 ($ffff) and an unsigned $ffff (65535) - being converted to
>>>>> signed prior to multiplication
>>>>
>>>> Where did signed $ffff come from? I thought we were talking about
>>>> unsigned $ffff being multiplied by itself.
>>>
>>> It was there from the very start of the thread and is present
>>> throughout.
>>>
>>> Dimiter
>>
>> I think you'll find that no one but you has mentioned a signed 16
>> bit number in this thread. This is the reason everyone has been
>> disagreeing with your comments.
>>
>
> From the post initiating the thread:
>
> "Today, I discovered that if you use:
>
> const unsigned int xxx = value; (which in this case happens to be = 0)
>
> and then in the code try and compare it with a signed int yyy "
>
> "Everyone" amounts to one person really - who disagreed with obvious
> numeric statements i made, probably out of being hasty to know better
> without having seriously read them.
You have mixed up two different things here. First, the original poster talked about a comparison between an unsigned int (32-bit) and a signed int (32-bit). Then the discussion (on at least one branch) moved to more general differences between signed and unsigned types, overflows, and undefined behaviour. Then I made a post about some of the hidden and unexpected undefined behaviours you can get from unsigned types, which are often assumed to have fully defined overflow behaviour. It was in that context that multiplication was introduced, and it was very specifically about multiplying two 16-bit unsigned values.

You got those two parts mixed up. That's okay - these things happen. It is unfortunate that no one spotted the mixup until a number of posts had gone back and forth, with you talking about int16_t * uint16_t, and others talking about uint16_t * uint16_t.

Now that the mixup has been seen, we can leave it all behind us. My hope is that some people will have learned a little about some surprising features of C's integer promotion rules, and that may save someone from bugs or confusion in the future.
On 04/03/16 10:54, Dimiter_Popoff wrote:
> On 04.3.2016 г. 11:21, Philip Lantz wrote:
>> Dimiter_Popoff wrote:
>>> Paul Rubin wrote:
>>>> Regarding "no working compiler": we're discussing the dark corners
>>>> of C,
>>>> and you have to define C as "what the standard says" rather than "what
>>>> compilers you've looked at happen to do". If the standard says UB then
>>>> the code is invalid. Real world C programs are full of such bugs by
>>>> the way:
>>>
>>> I know programs are full of bugs and I know legacy may keep wrong
>>> operations alive for decades but I would be seriously stunned if
>>> it turns out that in C (which I do not use, I use my vpa
>>> obviously) multiplying these two constants, 16 bit $ffff representing
>>> a signed number (i.e. being -1) and 16 bit $ffff unsigned (i.e.
>>> 65535) into a 32 bit result would not work. Then may be it would
>>> not work indeed if they do not extend the operands to 32 bit prior
>>> to the multiplication, a compiler writer beginners error ...
>>
>> In C, assuming int is 32 bits, as it has been throughout this
>> thread, the compiler is *required* to extend the 16-bit numbers
>> to 32 bits before the multiplication.
>
>
> Sign extend or zero extend? (hopefully you understand what I am
> asking).
>
Zero-extended to a signed 32-bit int if it was an unsigned type, or sign-extended to a signed 32-bit int if it was a signed type.

In other words, the 16-bit (or 8-bit, if that was the original size) value gets turned into a standard C "int" type without any change of value. If the original value of 0xffff was an /unsigned/ 16-bit integer, then that represents +65535, and the promoted 32-bit signed int is +65535. If the original value of 0xffff was a /signed/ 16-bit integer (assuming two's complement), then that represents -1 and the promoted 32-bit signed int is -1.

The weird bit, and arguably a bad choice in the C standards, is that the 16-bit unsigned integer gets promoted to a 32-bit /signed/ integer.
On 04.3.2016 г. 13:51, David Brown wrote:
> On 04/03/16 10:54, Dimiter_Popoff wrote:
>> On 04.3.2016 г. 11:21, Philip Lantz wrote:
>>> Dimiter_Popoff wrote:
>>>> Paul Rubin wrote:
>>>>> Regarding "no working compiler": we're discussing the dark corners
>>>>> of C,
>>>>> and you have to define C as "what the standard says" rather than "what
>>>>> compilers you've looked at happen to do". If the standard says UB then
>>>>> the code is invalid. Real world C programs are full of such bugs by
>>>>> the way:
>>>>
>>>> I know programs are full of bugs and I know legacy may keep wrong
>>>> operations alive for decades but I would be seriously stunned if
>>>> it turns out that in C (which I do not use, I use my vpa
>>>> obviously) multiplying these two constants, 16 bit $ffff representing
>>>> a signed number (i.e. being -1) and 16 bit $ffff unsigned (i.e.
>>>> 65535) into a 32 bit result would not work. Then may be it would
>>>> not work indeed if they do not extend the operands to 32 bit prior
>>>> to the multiplication, a compiler writer beginners error ...
>>>
>>> In C, assuming int is 32 bits, as it has been throughout this
>>> thread, the compiler is *required* to extend the 16-bit numbers
>>> to 32 bits before the multiplication.
>>
>>
>> Sign extend or zero extend? (hopefully you understand what I am
>> asking).
>>
>
> Zero-extended to a signed 32-bit int if it was an unsigned type, or
> sign-extended to a signed 32-bit int if it was a signed type.
>
> In other words, the 16-bit (or 8-bit, if that was the original size)
> gets turned into a standard C "int" type without any change of value.
> If the original value of 0xffff was an /unsigned/ 16-bit integer, then
> that represents +65535, and the promoted 32-bit signed int is +65535.
> If the original value of 0xffff was a /signed/ 16-bit integer (assuming
> two's complement), then that represents -1 and the promoted 32-bit
> signed int is -1.
>
> The weird bit, and arguably a bad choice in the C standards, is that the
> 16-bit unsigned integer gets promoted to a 32-bit /signed/ integer.
>
Well, if the 16 bit unsigned is correctly extended to a 32 bit signed - i.e. zero extended, to $0000ffff - this is OK and a valid choice. Expression solvers have to deal with this all the time: e.g. when you interpret a complex expression with multiple parentheses which includes both logical and arithmetic operations, you just have to pick what type the final result should default to, when to treat intermediate results as signed or unsigned, etc. There are plenty of choices to make, many of them valid while not necessarily compatible with each other.

This particular one would result in no error or overflow, as we would have to multiply two signed 32 bit numbers (-1 and 65535).

I got into the thread when there was talk exactly about 16 bit integers, one signed and one unsigned, no misunderstanding about this (to your other post). What I objected to was ... well, it can be seen in the thread, I did make myself quite clear.

Dimiter
On 04/03/16 13:06, Dimiter_Popoff wrote:
> On 04.3.2016 г. 13:51, David Brown wrote:
>> On 04/03/16 10:54, Dimiter_Popoff wrote:
>>> On 04.3.2016 г. 11:21, Philip Lantz wrote:
>>>> Dimiter_Popoff wrote:
>>>>> Paul Rubin wrote:
>>>>>> Regarding "no working compiler": we're discussing the dark corners
>>>>>> of C,
>>>>>> and you have to define C as "what the standard says" rather than
>>>>>> "what
>>>>>> compilers you've looked at happen to do". If the standard says UB
>>>>>> then
>>>>>> the code is invalid. Real world C programs are full of such bugs by
>>>>>> the way:
>>>>>
>>>>> I know programs are full of bugs and I know legacy may keep wrong
>>>>> operations alive for decades but I would be seriously stunned if
>>>>> it turns out that in C (which I do not use, I use my vpa
>>>>> obviously) multiplying these two constants, 16 bit $ffff representing
>>>>> a signed number (i.e. being -1) and 16 bit $ffff unsigned (i.e.
>>>>> 65535) into a 32 bit result would not work. Then may be it would
>>>>> not work indeed if they do not extend the operands to 32 bit prior
>>>>> to the multiplication, a compiler writer beginners error ...
>>>>
>>>> In C, assuming int is 32 bits, as it has been throughout this
>>>> thread, the compiler is *required* to extend the 16-bit numbers
>>>> to 32 bits before the multiplication.
>>>
>>>
>>> Sign extend or zero extend? (hopefully you understand what I am
>>> asking).
>>>
>>
>> Zero-extended to a signed 32-bit int if it was an unsigned type, or
>> sign-extended to a signed 32-bit int if it was a signed type.
>>
>> In other words, the 16-bit (or 8-bit, if that was the original size)
>> gets turned into a standard C "int" type without any change of value.
>> If the original value of 0xffff was an /unsigned/ 16-bit integer, then
>> that represents +65535, and the promoted 32-bit signed int is +65535.
>> If the original value of 0xffff was a /signed/ 16-bit integer (assuming
>> two's complement), then that represents -1 and the promoted 32-bit
>> signed int is -1.
>>
>> The weird bit, and arguably a bad choice in the C standards, is that the
>> 16-bit unsigned integer gets promoted to a 32-bit /signed/ integer.
>>
>
> Well if the 16 bit unsigned is correctly extended to a 16 bit
> signed - i.e. zero extended, to $0000ffff this is OK and a valid
> choice, expression solvers have to deal with this all the time
> (e.g. when you interprete a complex expression with multiple parenthesis
> which includes both logical and arithmetic operations.... you
> just have to pick to what type the final result should default,
> when to treat intermediate results as signed or unsigned etc.,
> there are plenty of choices to make, many of them valid while
> not necessarily compatible with each other).
>
> This particular one would result in no error or overflow as we would
> have to multiply two signed 32 bit numbers (-1 and 65535).
In the case of one signed 16-bit and one unsigned 16-bit, I fully agree. In the case of two unsigned 16-bit numbers, that means multiplying two signed 32-bit numbers 65535 and 65535 - and that overflows. And in C, even if you are going to truncate to 16-bit unsigned (or convert to 32-bit unsigned), the damage is done - you have to convert your 16-bit unsigned data to 32-bit unsigned before doing the multiplication.
>
> I got into the thread when there was a talk exactly about
> 16 bit integers, one signed and one unsigned, no misunderstanding
> about this (to your other post).
>
> What I objected to was ... well, it can be seen in the thread,
> I did make myself quite clear.
I was also very clear, but somehow we avoided understanding each other.
On 04-Mar-16 at 12:35 PM, David Brown wrote:
> On 04/03/16 10:02, tim... wrote:
>>
>> "David Brown" <david.brown@hesbynett.no> wrote in message
>> news:nba61c$vc5$1@dont-email.me...
>>> On 03/03/16 19:53, tim... wrote:
>>>>
>>>> "David Brown" <david.brown@hesbynett.no> wrote in message
>>>> news:nb7l4g$5m3$1@dont-email.me...
>>>>> On 02/03/16 18:58, tim... wrote:
>>>
>>>>>>
>>>>>> If I was working in C I wouldn't have the a typed const variable at
>>>>>> all
>>>>>> I'd be using an untyped define
>>>>>
>>>>> Why?
>>>>
>>>> I dunno
>>>>
>>>
>>> The first step to progress is to question conventional wisdom!
>>
>> I have:
>>
>> I like the "C" rule. I like that fact that consts are identified by
>> them being in Caps, it is useful.
>>
>
> That's interesting. Personally, I hate the fact that people often use
> all caps for constants (whether it be #define, const, or enum values).
> I can accept all-caps for "dangerous" macros, but using them for simple
> constants is just ugly and unnecessary IMHO.
That is exactly my rule: ALL_CAPS_FOR_AN_IDENTIFIER is an alarm flag that indicates something special. If it behaves just like a normal object it should look like one, even_if_it_is_implemented_as_a_macro.

Whether something is a constant is a limitation on the use of that thing: it can't be changed by *that part of the code*. As such, a constant is nothing special, and might be a modifiable variable for other parts of the code.

Wouter van Ooijen