
Code metrics

Started by Don Y March 7, 2015
On 3/10/2015 3:46 AM, Jacob Sparre Andersen wrote:

>> Then you get to the second compiler, and ugh...
>
> Yes. :-(
>
> So far I usually don't have the pleasure of developing projects for
> compilation with multiple compilers, but it is definitely something that
> may make life "interesting" - even if it technically shouldn't be much
> worse than handling warnings from various tools (besides the compiler).
Oh, you're in for *loads* of fun, then! :> First thing you'll learn is how all those delightful, COMPILER-SPECIFIC enhancements now are *liabilities*! Sort of like buying a new car and discovering the steering wheel is... IN THE BACK SEAT!
Reinhardt Behm wrote:
> glen herrmannsfeldt wrote:
>
>> Don Y <this@is.not.me.com> wrote:
>>
>> (snip on warnings, compilers, and debugging)
>>
>>>> Yes. But it is possible to encode it in the source. The problem is
>>>> to do it robustly - and in a way that doesn't annoy the programmer.
>>
>>> I think both of those criteria may prove to be elusive!
>>> You can't count on pragmas (portably).
>>
>>> You *could* use comments -- but how do you tie a particular comment
>>> to a particular line of code? What if a line throws multiple warnings?
>>> Esp if you want to explain why a warning is unnecessary! (if you're
>>> going to go to that length, why not just fix the code??)
>>
>> But what if the code isn't broken? Just because some compiler
>> writer thought that you shouldn't do something doesn't mean that
>> it is wrong.
>>
>> Note that Java requires the compiler to figure out that an assignment
>> is made to a scalar variable (not arrays, though) before the value is
>> used. Compilers are getting better, but there are still some cases
>> that the compiler can't figure out. I then put an initializer on
>> it with a comment like
>>
>> int i=0; /* the compiler didn't figure this out!!! */
>>
>> Yes, if the code is right, I would rather go through the extra work
>> to explain it, than "fix" it.
>>
>> -- glen
>
> That is one reason why I am also very picky when choosing new CPUs. If there
> are only adequate tools like compilers I might not use the newest fancy CPU.
> It is just too much effort.
>
> For example I found a compiler from a CPU vendor which complained about a
> function defined with an uint8_t parameter and being called with a constant
> 1. It saw the constant as an int
Yes, all 'C' compilers do...
> and warned about an int being truncated to
> a uint8_t.
Not all do that. Were there any options to turn that warning off?
> After more of such nonsense and no other compiler available the
> CPU was out.
I find that with ARM and PIC, no such problems.

-- Les Cargill
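For readers following along, the warning Reinhardt describes is easy to reproduce. A minimal sketch (the function name `set_mode` is illustrative, not from the original posts):

```c
#include <stdint.h>

/* Hypothetical function with a uint8_t parameter, as in the example above. */
uint8_t set_mode(uint8_t mode)
{
    return mode;
}

/* The literal 1 has type int, so a picky compiler may warn that an int
 * is being truncated to uint8_t -- even though the value 1 fits with
 * room to spare. */
uint8_t call_plain(void)
{
    return set_mode(1);
}

/* An explicit cast silences such a warning, at the cost of visual noise. */
uint8_t call_with_cast(void)
{
    return set_mode((uint8_t)1);
}
```

Both calls produce identical code; the only difference is whether the compiler is told, explicitly, that the narrowing is intentional.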
Les Cargill wrote:

> Reinhardt Behm wrote:
>> glen herrmannsfeldt wrote:
>>
>>> (snip on warnings, compilers, and debugging)
>>>
>>> But what if the code isn't broken? Just because some compiler
>>> writer thought that you shouldn't do something doesn't mean that
>>> it is wrong.
>>>
>>> (snip)
>>
>> That is one reason why I am also very picky when choosing new CPUs. If
>> there are only adequate tools like compilers I might not use the newest
>> fancy CPU. It is just too much effort.
>>
>> For example I found a compiler from a CPU vendor which complained about a
>> function defined with an uint8_t parameter and being called with a
>> constant 1. It saw the constant as an int
>
> Yes, all 'C' compilers do...
>
>> and warned about an int being truncated to
>> a uint8_t.
>
> Not all do that. Were there any options to turn that warning off?
Not really. It only had several "warning levels" that could be selected. By lowering the level other useful warnings also went away.
>> After more of such nonsense and no other compiler available the
>> CPU was out.
>
> I find that with ARM and PIC, no such problems.
-- Reinhardt
Hi Glen,

On 3/10/2015 4:35 PM, glen herrmannsfeldt wrote:
> Don Y <this@is.not.me.com> wrote:
>
>> You *could* use comments -- but how do you tie a particular comment
>> to a particular line of code? What if a line throws multiple warnings?
>> Esp if you want to explain why a warning is unnecessary! (if you're
>> going to go to that length, why not just fix the code??)
>
> But what if the code isn't broken? Just because some compiler
> writer thought that you shouldn't do something doesn't mean that
> it is wrong.
Well, it's not as if the compiler writer is trying to enforce "coding guidelines" -- that mechanism should be part of a different HIGHLY CONFIGURABLE tool. Rather, it's (usually) trying to alert you to subtle behaviors of which many programmers may not be cognizant *or* particularly vigilant.
> Note that Java requires the compiler to figure out that an assignment
> is made to a scalar variable (not arrays, though) before the value is
> used. Compilers are getting better, but there are still some cases
> that the compiler can't figure out. I then put an initializer on
> it with a comment like
>
> int i=0; /* the compiler didn't figure this out!!! */
>
> Yes, if the code is right, I would rather go through the extra work
> to explain it, than "fix" it.
The question becomes, "why couldn't you have come up with an appropriate initializer?" Or, why is your program logic so "unpredictable" that the compiler can't sort this out with static analysis? (i.e., if the compiler can't sort it out, are you sure *people* will be much better at the task?)

E.g., I *don't* like the UNIFORM practice of initializing variables at their declaration (this isn't even possible in all languages!). Instead, I prefer to initialize them closer to their first use.

It's tedious to have to scroll back to the top of a function to discover where the variable was declared & initialized. OTOH, if you initialize it "when it becomes of interest", then you are more likely to see that initialization in the "local" code.

Gratuitously adding extra nested blocks just so you can declare/define the variable more "locally" OUT OF HABIT clutters up the code. And, for folks not accustomed to this, it can confuse, esp. if you reuse an identifier in this nested scope.

The real problem lies in the squishy nature of C inherently being at odds with the idea of "portability" (which requires CONSISTENCY across platforms, etc.)
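The two styles being contrasted above can be put side by side in a small sketch. Both hypothetical functions compute the same sum; only the placement of the initialization differs:

```c
/* Style 1: initialize at the declaration, possibly far from first use. */
int sum_at_declaration(int n)
{
    int sum = 0;              /* initialized here... */
    /* ...an arbitrary amount of unrelated code may intervene... */
    for (int i = 1; i <= n; i++)
        sum += i;             /* ...before sum is actually used */
    return sum;
}

/* Style 2: initialize where the variable "becomes of interest". */
int sum_near_first_use(int n)
{
    int sum;                  /* declared only */
    /* unrelated work can go here without touching sum */
    sum = 0;                  /* initialized right where the story needs it */
    for (int i = 1; i <= n; i++)
        sum += i;
    return sum;
}
```

The trade-off is exactly the one debated in the thread: style 1 guarantees the variable is never read uninitialized, while style 2 keeps the initialization visible in the "local" code.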
On 3/10/2015 7:33 PM, Reinhardt Behm wrote:
> glen herrmannsfeldt wrote:
>> Yes, if the code is right, I would rather go through the extra work
>> to explain it, than "fix" it.
>
> That is one reason why I am also very picky when choosing new CPUs. If there
> are only adequate tools like compilers I might not use the newest fancy CPU.
> It is just too much effort.
+42

The same holds true when walking backwards through time: e.g., trying to support old hardware with old tools. You may not have a choice as to the capabilities of the compiler, etc.
> For example I found a compiler from a CPU vendor which complained about a
> function defined with an uint8_t parameter and being called with a constant
> 1. It saw the constant as an int and warned about an int being truncated to
> a uint8_t. After more of such nonsense and no other compiler available the
> CPU was out.
But an explicit cast should have fixed that.

Would you complain if the compiler prodded you when you tried:

float x;
x = sqrt(6);

(two "warnings", there)
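For the curious, here is a sketch of what those two diagnostics would typically be about, with casts acknowledging each conversion (the function name is made up for this example):

```c
#include <math.h>   /* omit this and older compilers add a third complaint:
                       "implicit declaration of sqrt" */

float sqrt_of_six(void)
{
    float x;
    /* Two likely complaints about the bare form `x = sqrt(6);`:
     *   1. the int literal 6 is implicitly converted to double;
     *   2. sqrt() returns double, which is then narrowed to float. */
    x = (float)sqrt(6.0);
    return x;
}
```

Whether either deserves a warning is exactly the judgment call the thread is arguing about: no value is misrepresented here, yet both conversions are real.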
On 3/10/2015 11:24 PM, Don Y wrote:
> No. *My* solution is to just not accept warnings. It means that
> each time I port code to a different platform, toolchain, etc. I
> end up having to tweak the code a bit.
That's no solution. That's wishful thinking.

The basic problem is that C compilers are officially and explicitly allowed to "warn" about whatever they damn well please, and they make ample use of that leeway. Trying to keep a lid on that can of worms is an exercise in futility. For all you know, tomorrow's compiler update might add a "warning: no C code should be compiled on a Friday the 13th".

There's just no way to ensure that the same source will compile without warnings on more than one compiler at a time. The closest approach that might appear to work would be to turn warnings off for all the compilers you use ... but even setting aside that this would entirely defeat the purpose of the "no warnings!" approach, it may not even work: there's no requirement for compilers to offer an "all disabled" mode.

So you face a choice: either you completely give up on re-using source code verbatim from one project to the next (so: parallel maintenance of multiple versions of your "code library", forever!), or you give up on that strict "no warnings!" plan.

Well, actually an "accept no warnings" policy can be feasible, but not regarding compiler warnings. You'll have to pick a _different_ source analysis tool whose warnings you're going to actually care about, instead of the compiler's: one whose warnings can be controlled precisely and consistently for all target platforms. One sensible choice for such a tool, in my experience, is Gimpel's lint. Others apparently like QA-C for this.
> The *language* needs to be fixed instead of inventing kludges to
> bolt on capabilities that the compiler should have inherently.
This is not at all a question of capabilities (or lack thereof). It's a question of what you do if, in the only available compiler for the platform someone else has decided you'll be using, there's simply no combination of compiler switches that still yields some truly necessary warnings (e.g. "function called without a prototype declaration"), but keeps quiet about perfectly sane constructs (e.g. "global pointer initialized with address of a file-scope function/variable"). And of course the other compiler will have "warnings" of the exact opposite meaning ("object defined with excessively wide scope").

And that's before we begin looking at different compilers' predilections about how to flag a function argument as unused (void cast? self-assignment? none?), the "right" amount of parentheses, or actual differences caused by different native data types on different platforms.
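The competing "unused argument" idioms mentioned above can be made concrete. Both hypothetical handlers below behave identically; they differ only in how they pacify the compiler:

```c
/* Idiom 1: cast to void. Widely accepted (gcc, clang, and many
 * embedded compilers treat this as "deliberately unused"). */
int handler_void_cast(int event, int context)
{
    (void)context;
    return event * 2;
}

/* Idiom 2: self-assignment. Quiets some compilers -- while OTHERS then
 * warn about the pointless assignment itself, which is exactly the
 * cross-compiler tension described in the post above. */
int handler_self_assign(int event, int context)
{
    context = context;
    return event * 2;
}
```

Neither idiom is blessed by the C standard itself, which is why no single spelling satisfies every toolchain.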
Reinhardt Behm <rbehm@hushmail.com> wrote:

(snip)
>> But what if the code isn't broken? Just because some compiler
>> writer thought that you shouldn't do something doesn't mean that
>> it is wrong.
(snip)
>> Yes, if the code is right, I would rather go through the extra work
>> to explain it, than "fix" it.
(snip)
> That is one reason why I am also very picky when choosing new CPUs. If there
> are only adequate tools like compilers I might not use the newest fancy CPU.
> It is just too much effort.
> For example I found a compiler from a CPU vendor which complained about a
> function defined with an uint8_t parameter and being called with a constant
> 1. It saw the constant as an int and warned about an int being truncated to
> a uint8_t. After more of such nonsense and no other compiler available the
> CPU was out.
Another thing that Java requires is casts for narrowing conversions. Too late to add to C, but it doesn't seem a bad idea to me.

-- glen
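glen's point fits in a two-line sketch: C accepts the narrowing silently (or with, at most, a warning), whereas Java would reject the equivalent assignment until a cast is written. The function name is illustrative:

```c
#include <stdint.h>

/* C permits implicit narrowing; the cast here merely makes the
 * truncation explicit -- the way Java would force it to be. */
uint8_t narrow_to_u8(int v)
{
    return (uint8_t)v;   /* keeps only the low 8 bits of v */
}
```

For values that fit, nothing is lost; for values that don't, the cast at least documents that wraparound (e.g. 257 becoming 1) was a conscious decision.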
Don Y wrote:
> Hi Glen,
>
> On 3/10/2015 4:35 PM, glen herrmannsfeldt wrote:
>> Don Y <this@is.not.me.com> wrote:
>>
>>> You *could* use comments -- but how do you tie a particular comment
>>> to a particular line of code? What if a line throws multiple warnings?
>>> Esp if you want to explain why a warning is unnecessary! (if you're
>>> going to go to that length, why not just fix the code??)
>>
>> But what if the code isn't broken? Just because some compiler
>> writer thought that you shouldn't do something doesn't mean that
>> it is wrong.
>
> Well, it's not as if the compiler writer is trying to enforce
> "coding guidelines"
But they are. "There are rules" - Walter Sobchak. Some of the more popular compilers just shaddup already about it unless the constraint violation is significant.
> -- that mechanism should be part of a different
> HIGHLY CONFIGURABLE tool. Rather, it's (usually) trying to alert you
> to subtle behaviors of which many programmers may not be cognizant
> *or* particularly vigilant.
>
>> Note that Java requires the compiler to figure out that an assignment
>> is made to a scalar variable (not arrays, though) before the value is
>> used. Compilers are getting better, but there are still some cases
>> that the compiler can't figure out. I then put an initializer on
>> it with a comment like
>>
>> int i=0; /* the compiler didn't figure this out!!! */
>>
>> Yes, if the code is right, I would rather go through the extra work
>> to explain it, than "fix" it.
>
> The question becomes, "why couldn't you have come up with an
> appropriate initializer?" Or, why is your program logic so
> "unpredictable" that the compiler can't sort this out with
> static analysis? (i.e., if the compiler can't sort it out,
> are you sure *people* will be much better at the task?)
The compiler is, at its core, dumb. This is a good thing.
> E.g., I *don't* like the UNIFORM practice of initializing
> variables at their declaration (this isn't even possible in
> all languages!). Instead, I prefer to initialize them closer
> to their first use.
Oh, I think initialized variables are wonderful. But then again, I had to *implement* mutable initialized variables once - which is not nearly as cool as it sounds. The toolchain simply located initialized variables to PROM, and uninitialized variables to RAM. So we made our own segments, then copied from the initialized segment to a peer of BSS in the startup. There was more to it than just that, but that's the gist.

It's funny - I see this as an extension of the RAII principle, and then people go off on the exact history behind RAII in C++. Yes, but ...
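The startup copy Les describes is the same job a C runtime's .data initialization performs. A sketch of the idea, with a const array standing in for the PROM segment (real startup code walks linker-defined symbols instead; all names here are illustrative):

```c
#include <stddef.h>

/* "PROM" image of the initial values (const, so a toolchain like the
 * one described would place it in ROM). */
static const unsigned char rom_image[4] = { 1, 2, 3, 4 };

/* The variables' run-time home in RAM; garbage at reset. */
static unsigned char ram_vars[4];

/* Called once from the startup code, before main() runs. */
void copy_initialized_data(void)
{
    for (size_t i = 0; i < sizeof ram_vars; i++)
        ram_vars[i] = rom_image[i];
}

/* Accessor so the copied values can be inspected. */
unsigned char ram_var(size_t i)
{
    return ram_vars[i];
}
```

After the copy, the RAM variables are mutable yet start life with their declared initial values -- which is all "initialized variables" means on a system that boots from ROM.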
> It's tedious to have to scroll back to the top of a function to
> discover where the variable was declared & initialized. OTOH,
> if you initialize it "when it becomes of interest", then you
> are more likely to see that initialization in the "local" code.
Modern 'C' means never having to do this. Declare a block for all the temp variables surrounding a thing:

double tane = 0.0;
double angle = 42.0;
const double xepsi = ( 0.00000...1 ); // yadda yadda

// at the end of this, tane is completed.
{
    const double sinX = sin(angle);
    const double cosX = cos(angle);
    tane = (fabs(cosX) > xepsi) ? sinX/cosX : TOOBIG; // or whatever ...
}

*NOT* doing this leads to all sorts of pernicious headaches.
> Gratuitously adding extra nested blocks just so you can declare/define
> the variable more "locally" OUT OF HABIT clutters up the code.
I vehemently, fundamentally and absolutely disagree. It organizes the code. The temps all go away at the bottom of the block. You will not get a sore wrist from a little scrolling. But having all references to an identifier be in the same general region *works*.

For stuff that's globalish state there is 'find . -name "*.[ch]" | xargs grep -n ...'. You need that anyway. No, your favorite IDE doesn't do that right either. :)

Code tells a story. Sequentially organized little paragraphs are as natural as reading. Having to find the subroutine is harder than this. Of course, subroutines are the next logical step - if it's more than a few lines, the declarations are not the major point of it and/or it needs to be reused.
> And, for folks not accustomed to this, can confuse esp if you
> reuse an identifier in this nested scope.
It should take seconds to get used to.
> The real problem lies in the squishy nature of C inherently being
> at odds with the idea of "portability" (which requires CONSISTENCY
> across platforms, etc.)
Nah. The travails of this are Vastly overrated. People get too excited and overdo it. Again - the declarations are the key.

-- Les Cargill
Hans-Bernhard Bröker wrote:
> On 3/10/2015 11:24 PM, Don Y wrote:
>> No. *My* solution is to just not accept warnings. It means that
>> each time I port code to a different platform, toolchain, etc. I
>> end up having to tweak the code a bit.
>
> That's no solution. That's wishful thinking.
>
> The basic problem is that C compilers are officially and explicitly
> allowed to "warn" about whatever they damn well please, and they make
> ample use of that leeway. Trying to keep a lid on that can of worms is
> an exercise in futility. For all you know, tomorrow's compiler update
> might add a "warning: no C code should be compiled on a Friday the 13th".
I assume a Lebowski reference will work here: "There are rules" - Walter Sobchak. In fact, there is The 'C' Standard and if you use 'C' you should at least become partially familiar with the music of it. It's not exactly Human but it'll help.
> There's just no way to ensure that the same source will compile without
> warnings on more than one compiler at a time. The closest approach that
> might appear to work would be to attempt to turn warnings off for all
> compilers you use ... but even setting aside that this would entirely
> defeat the purpose of the "no warnings!" approach, it may not even work:
> there's no requirement for compilers to offer an "all disabled" mode.
Nope.
> So you face a choice: either you completely give up on re-using source
> code verbatim from one project to the next (so: parallel maintenance of
> multiple versions of your "code library", forever!), or you give up on
> that strict "no warnings!" plan.
Or just do the things that are necessary to make the warnings go away. This will involve understanding the promotion and constraint rules. If you don't keep those in mind, you *will* make *ugly* mistakes that will fail at the worst possible times.
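One classic instance of the promotion rules Les is pointing at, as a sketch: arithmetic on uint8_t operands is performed in int, so the "same" sum differs depending on where it is stored (the function name is illustrative):

```c
#include <stdint.h>

/* Returns 1 when the widened and the narrowed sum agree, 0 otherwise. */
int sums_agree(uint8_t a, uint8_t b)
{
    int wide = a + b;                   /* operands promoted to int: the
                                           sum can exceed 255 here       */
    uint8_t narrow = (uint8_t)(a + b);  /* same sum, then truncated to
                                           8 bits on storage             */
    return wide == narrow;
}
```

For example, sums_agree(200, 100) yields 0: the int holds 300, but the uint8_t wraps to 44. Forgetting this is exactly the sort of mistake that "fails at the worst possible times".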
> Well, actually an "accept no warnings" policy can be feasible, but not
> regarding compiler warnings. You'll have to pick a _different_ source
> analysis tool whose warnings you're going to actually care about,
> instead of the compiler's: one whose warnings can be controlled
> precisely and consistently for all target platforms. One sensible
> choice for such a tool, in my experience, is Gimpel's lint. Others
> apparently like QA-C for this.
Nobody said it was easy.
>> The *language* needs to be fixed instead of inventing kludges to
>> bolt on capabilities that the compiler should have inherently.
>
> This is not at all a question of capabilities (or lack thereof). It's a
> question of what you do if, in the only available compiler for the
> platform someone else has decided you'll be using, there's simply no
> combination of compiler switches that still yields some truly necessary
> warnings (e.g. "function called without a prototype declaration"),
You really should have prototype declarations on general principle.
> but
> keeps quiet about perfectly sane constructs (e.g. "global pointer
> initialized with address of a file-scope function/variable").
That's because it cannot know any better. A "lint" tool that does more thorough probing might.
> And of
> course the other compiler will have "warnings" of the exact opposite
> meaning ("object defined with excessively wide scope").
>
> And that's before we begin looking at different compilers' predilections
> about how to flag a function argument as unused (void cast?
> self-assignment? none?), the "right" amount of parentheses, or actual
> differences caused by different native data types on different platforms.
It depends. This goes back to the mores of project management and is extremely local.

-- Les Cargill
Don Y wrote:

> On 3/10/2015 7:33 PM, Reinhardt Behm wrote:
>> glen herrmannsfeldt wrote:
>
>>> Yes, if the code is right, I would rather go through the extra work
>>> to explain it, than "fix" it.
>>
>> That is one reason why I am also very picky when choosing new CPUs. If
>> there are only adequate tools like compilers I might not use the newest
>> fancy CPU. It is just too much effort.
>
> +42
>
> The same holds true when walking backwards through time: e.g., trying to
> support old hardware with old tools. You may not have a choice as to the
> capabilities of the compiler, etc.
That reminds me of compilers that were at least error free. For example, I used C/80 for the Z80 intensively from '83 until about 2000 and did not find any error. And it did not warn much. ;-) OK, the compiler was just 40k, so there was just no space for errors. ;-)

In contrast to some compiler for another CPU I used around 2000: it was just a collection of bugs. We had to replace about 500 OTP chips because of a bug inserted by the compiler. It was from a company whose owner has several times in the past insisted in this group that open source is not reliable but his tools are thoroughly tested and verified.
>> For example I found a compiler from a CPU vendor which complained about a
>> function defined with an uint8_t parameter and being called with a
>> constant 1. It saw the constant as an int and warned about an int being
>> truncated to a uint8_t. After more of such nonsense and no other compiler
>> available the CPU was out.
>
> But an explicit cast should have fixed that.
func((uint8_t)1); looks nasty. True, but at that point the compiler could have deduced that there is no loss by truncation.
> Would you complain if the compiler prodded you when you tried:
>
> float x;
> x = sqrt(6);
>
> (two "warnings", there)
-- Reinhardt