
Code metrics

Started by Don Y March 7, 2015
Hi Glen,

On 3/8/2015 6:15 PM, glen herrmannsfeldt wrote:
> Don Y <this@is.not.me.com> wrote:
>>> Then add "No Child Left Behind", a plan from a C student president to
>>> make sure that all students are C students.
>
>> *Any* and *every* plan to "measure" the education system (its components,
>> its results, etc.) is a farce.  Too many "special interests" -- teachers,
>> administrators, parents, etc.  And, the system is uncharacterized.  Like
>> trying to control a loop for which you have no idea of the extents of lag
>> present, etc.
>
> I don't think we will get completely away from testing, or for that
> matter, grades, but yes they are never perfect.
Again, there is nothing inherently wrong with the grade/score/metric. It is how it is *used* that begs attention.
>> So, instead, we wait until Johnny is responsible for administering
>> that IV drug upon which your health/life depends.  When he screws *that*
>> up, we fire Johnny (let him move on to some other profession... like
>> teaching!), give the patient (or the patient's family) our condolences,
>> pay off some lawyers and lament how the system failed "Johnny" (with
>> no mention of the *patient*!)
>
> There was a case not so many years ago, where a nurse gave the wrong
> dose of some medicine to a patient.  It required a complicated
> calculation to determine the right dose, and it seems that she got
> it wrong.  She was immediately fired, and not so many days later,
> committed suicide.
>
> Now, certainly we expect nurses to always get it right, but on the
> other hand, what did the hospital expect her to do?  She had gone to
> school for many years, and then had many years of experience as a
> nurse.  Most likely, no other hospital would hire her.
There are *lots* of "screwups" that happen EVERY DAY in hospitals. SWMBO at one time sat in on the "take no notes" meetings where this sort of stuff was discussed. Some of the horror stories would have you perform your own surgery rather than risk going into a hospital! We forget that "it's just a job" -- to *all* of these people: doctors, cops, nurses, etc. Expecting them to never make mistakes is wishful thinking.

SWMBO was in for some out-patient surgery. I accompanied her to the recovery room (she was still sedated). Nurse came over and gave her some meds. I, of course, asked "what's that?" and "what's it for?". As the answers seemed sensible -- and didn't conflict with any of her known allergies (amazing how often medical professionals fail to read the big letters on your chart warning against these items!) -- I acquiesced to her being dosed.

Several minutes later, nurse (same one) came over to give her some meds. "What's that?" I then told her "you already gave it to her, 10 minutes ago!". Nurse got belligerent. "No, I didn't!" "Isn't it used for..." and then I recited what she had told me previously when I had asked the first time. Now she's in a box: what are the chances that Joe Offthestreet happens to KNOW the indications for a particular *odd* pharmaceutical? And, he *claims* it had already been dosed. So, he'd be likely to offer "reliable" testimony on that fact...

"Well, it's not written down on her chart!"

[Hmmm... three people here: one is unconscious. Another is a lay person/visitor. Third is a paid healthcare professional CHARGED with running this (4 bed) recovery room. Which of us *should* be responsible for making that notation on the chart??]
> It seems reasonable to me that if something is that critical that two
> nurses should do the computation and verify that they agree.  (That
> doesn't eliminate the problem, but maybe reduces it enough.)
Sure. Or, have it predispensed by the hospital's pharmacy. Or, an "app" for it. No guarantee that a second individual will be willing to contradict/correct the first when the calculation is in error. (it is amusing to see how easily people can be coerced into going along with <something>) If one of the professionals is a *doctor*, then all bets are off. Nurses routinely claim that doctors don't take kindly to criticism and tend to be bullies -- as well as making mistakes that the nurses have to catch or correct. And, of course, *two* professionals only increases the cost of that care!
> To get back to coding, I hope that there are strict standards for > those writing control programs for nuclear (or any) power plants. > Most likely, as noted above, with more than one person involved.
Most of these things rely on good practices in place and "lots of eyes". But, the "eyes" have to be motivated to be critical. If they just go through the motions, they're just "excess overhead".
>> To be fair, how would *you* measure teachers? It would be like measuring >> *your* coding performance based on a starting point of some other developer(s)' >> code upon which you've built. WITHOUT even giving you the choice of whose >> codebase you will be supporting! > > I believe that there is a system to measure teachers based on the change > in test scores. That is, from the end of the previous year (and teacher) > to the end of the current year. That should work, but has a lot of > statistical uncertainty.
How do you calibrate that system? E.g., when students get to a "rebellious" age, I imagine there are more "other issues" that interfere with performance. I.e., change from K->1 and 8->9 can be very different sorts of differences.
> As I understand it, some in Washington now have the principal make > decisions based on all data, including test scores, but not with > a fixed proportion. It seems that isn't good enough for NCLB.
No idea. School (system, facilities, curriculum, funding, etc.) has changed a lot since I was a kid. I'm glad that *I* don't have to solve that problem. From talking with educators, they see schools as giant "playgrounds" for politicians, parent groups, etc. to *experiment* in. (Cripes, how many different "initiatives" have there been in this area??)
>> Building on Frank's comment.  We all (think!) we know a good/bad teacher
>> when we see him/her.  But, the verdict is never "in" until long after the
>> student has moved beyond.  And, can never truly be isolated and identified
>> as the *cause* of the student's success/failure.
>
> Well, yes, but when a teacher has had bad reports for 10 years or so,
> and nothing changes, then parents get mad.  But then the teacher has
> tenure and can't be fired.
Or, concerned parents speak out ON BEHALF OF *their* CHILD and "fix" the problem from *their* perspective (leaving the rest of the kids in that class to deal with the substandard teacher). How do you "prove" there is a problem with the teacher? :-/

Thankfully, my school district was well funded and had lots of very capable teachers. Most opened doors for me (opportunity) and then stepped out of the way so I wouldn't be hindered by the "regular curriculum". Or, fought for funding for extracurricular "geek" activities that weren't available in the district at that time. I can't imagine what it would be like with teachers who considered it "just a job"...
>> (Personally, I don't understand the big hullabaloo over testing. I can >> remember taking "big tests" throughout my primary school education. And, >> I'm sure there was *some* effort to guide my studies in such a way that >> I would fare well enough on those. Without drawing attention to the >> fact that this is what was actually being done!) > > Well, I remember tests maybe once every four years. It seems that now > they have two or three tests a year.
Yes, they were infrequent. But, I recall entire days set aside for The Test, etc. And, of course, in JHS & HS you have midterms and finals in each class... (plus weekly quizzes, typically).
Don Y wrote:

> I believe virtually all "rules" are mistakes when it comes to software > development. They should be considered "guidelines".
Agreed.
> Tools should provide *guidance*; the developer should evaluate that > guidance in the context of the problem at hand. > > E.g., I target "no warnings" in my compiles. But, that's because there > isn't a language feature that allows me to insert: > #acknowledge The 'missing cast' error can be ignored, here
I have considered keeping a log of the expected warnings (with documentation of why the warnings are expected) and then making it an error to have a different set of warnings than exactly the documented set. Unfortunately I haven't gotten around to trying this idea out in practice, so I'm also (still) in the *no warnings* camp.

The Ada compiler I use allows me to identify some warnings (unreferenced parameters and objects) as expected, but the *why* has to be a comment.

But yes, it would be nice to be able to document all expected warnings with a required explanation of why they are expected. Keeping a log of expected warnings and checking compilation and tool results against that list is of course a solution, but it feels too much like a hack.

Greetings,

Jacob
--
Infinite loop: n., see loop, infinite.
Loop, infinite: n., see infinite loop.
Hi Jacob,

On 3/9/2015 3:35 AM, Jacob Sparre Andersen wrote:
> Don Y wrote: > >> I believe virtually all "rules" are mistakes when it comes to software >> development. They should be considered "guidelines". > > Agreed.
In light of that, you want to keep as many warnings ("advisories") enabled as possible! But, in practice, this can generate a lot of "advisory output" that, once you've checked everything "the first time", you really want to be able to IGNORE (without turning them "off").
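(The closest existing mechanism I know of in mainstream C compilers is the GCC/Clang diagnostic pragmas, which let you acknowledge a warning at one specific spot without disabling it for the rest of the translation unit. A minimal sketch, assuming GCC or Clang -- the function and the -Wconversion flag are just illustrative:)

    #include <stdint.h>

    uint8_t narrow(uint32_t value)
    {
    #pragma GCC diagnostic push
    #pragma GCC diagnostic ignored "-Wconversion"
        /* Deliberate truncation; only the low byte is wanted here. */
        uint8_t low = value;
    #pragma GCC diagnostic pop
        return low;
    }

(Clang also accepts its own "#pragma clang diagnostic" spelling, and MSVC has "#pragma warning(push/disable/pop)". But note that these only *suppress*: none of them complain if the acknowledged warning never materializes, which is exactly the half of the feature that's missing.)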
>> Tools should provide *guidance*; the developer should evaluate that >> guidance in the context of the problem at hand. >> >> E.g., I target "no warnings" in my compiles. But, that's because there >> isn't a language feature that allows me to insert: >> #acknowledge The 'missing cast' error can be ignored, here > > I have considered keeping a log of the expected warnings (with > documentation of why the warnings are expected) and then making it an > error to have a different set of warnings than exactly the documented > set.
You could capture the output to a file and then diff that against future output. But, as line numbers can change, this just turns one problem (verifying the exact same warnings persist) into another problem (verifying the warning on line X is really the same warning that is now reported on line Y). In a GUI IDE, one could conceivably tag (click) each warning's corresponding source and have the IDE remember "this warning is OK, here". But, unless you can encode that in the source itself, it's not portable to other tools.
> Unfortunately I haven't gotten around to try this idea out in practice, > so I'm also (still) in the *no warnings* camp. > > The Ada compiler I use allows me to identify some warnings (unreferenced > parameters and objects) as expected, but the *why* has to be a comment.
I'd like something akin to bison's %expect/%expect-rr capability. I.e., if the expected warning WOULD BE generated, it is suppressed. And, if the expected warning (or, a *different* warning) would be generated, an ERROR is signaled. Of course, in a yacc (bison) grammar, you can put this sort of directive anywhere in the "source" file to achieve the desired result; there's no need to tie it to a specific *line* number -- or statement! In most languages, that would be sort of useless: "expect 'missing cast'" (yeah, sure. Like *which* one and how *many*??)
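(For readers who haven't used it: %expect takes a count, and bison reports a hard error if the number of shift/reduce conflicts differs from that count; %expect-rr does the same for reduce/reduce conflicts in GLR grammars. A toy illustration using the textbook dangling-else grammar, which has exactly one shift/reduce conflict -- the grammar itself is just an example:)

    /* If a later edit adds or removes conflicts, bison errors out
       instead of merely warning. */
    %expect 1

    %token IF THEN ELSE EXPR OTHER

    %%

    stmt : IF EXPR THEN stmt
         | IF EXPR THEN stmt ELSE stmt
         | OTHER
         ;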
> But yes, it would be nice to be able to document all expected warnings > with a required explanation of why they are expected.
And, when the warning fails to materialize, have the compiler *complain*! ("Hey, I know you expected this warning, here. But, it didn't happen. Are you sure? At the very least, your documentation claiming it *would* be here is faulty!")
> Keeping a log of expected warnings and checking compilation and tool > results against that list is of course a solution, but it feels too much > like a hack.
Agreed. And, too "manual"/disciplined.

I believe people are inherently lazy. If you *require* them to perform some action, there is a good chance that they will (eventually) fail to do so. E.g., remove the comment alerting the developer to the warning (hence, make that a hard error).

OTOH, taking the approach of "treat all warnings as errors" (compiler flag) ends up getting folks to just insert <whatever> to placate the compiler. Without considering the nature of the warning and *why* (if?) their action should compensate, logically.

E.g., when my IDL compiler writes a client-side stub, it litters the source with ("superfluous"?) casts as it marshals the arguments (which may be complex types/structs) and prepares to push them down the wire as "octets". Had someone manually written that stub code, he/she would *probably* get lazy and omit all those casts and perhaps not bother thinking about whether a simple cast *would* solve the problem warned about -- or, if there is a bigger issue that is being glossed over (e.g., endian-ness, network/host byte order, data type encoding variations in a heterogeneous environment, etc.)

"Feh. Damn compiler is always warning about that sort of thing. Just ignore it."
On 3/9/2015 9:59 AM, Don Y wrote:

> I'd like something akin to bison's %expect/%expect-rr capability. > I.e., if the expected warning WOULD BE generated, it is suppressed. > And, if the expected warning (or, a *different* warning) would be > generated, an ERROR is signaled.
Grrr... s/would be/would NOT be/
Jacob Sparre Andersen wrote:
> Don Y wrote: >>E.g., I target "no warnings" in my compiles. But, that's because there >>isn't a language feature that allows me to insert: >> #acknowledge The 'missing cast' error can be ignored, here > > I have considered keeping a log of the expected warnings (with > documentation of why the warnings are expected) and then making it an > error to have a different set of warnings than exactly the documented > set.
Some static checking tools (Klocwork, Coverity?) have a database and you can set their warnings to "ignore" there. However, such a tool is too clunky for an edit/compile/test cycle for my taste. And not seeing the forest for the trees in compile output doesn't help too much either, even if the compiler warnings are declared nonexistent by a later step.

Stefan
On 10.03.2015 06:27, Stefan Reuther wrote:
> Jacob Sparre Andersen wrote: >> Don Y wrote: >>> E.g., I target "no warnings" in my compiles. But, that's because there >>> isn't a language feature that allows me to insert: >>> #acknowledge The 'missing cast' error can be ignored, here >> >> I have considered keeping a log of the expected warnings (with >> documentation of why the warnings are expected) and then making it an >> error to have a different set of warnings than exactly the documented >> set. > > Some static checking tools (Klocwork, Coverity?) have a database and you > can set their warnings to "ignore" there. > > However, such a tool is too clunky for a edit/compile/test cycle for my > taste. And not seeing the forest for the trees in compile output doesn't > help too much either, even if the compiler warnings are declared > nonexistant by a later step. > > > Stefan
I have a simple rule: There may be no warnings. Simple things like a warning about a missing cast will be fixed. Others in my experience are most often the result of bad programming style and _have_ to be fixed. Possibly by not removing the warning but the programmer.

--
Reinhardt
Don Y wrote:

> Hi Paul, > > On 3/8/2015 4:53 PM, Paul E Bennett wrote: >> Don Y wrote: >> >>> I'd like to set up an environment where much of this can be measured >>> automatically and track "performance", over time (drawing any potential >>> "conclusions" at the finish). Unfortunately, I can't see how to >>> instrument the "time" aspect of the effort. It's misleading to note >>> the time between checkout and subsequent check-in as representative >>> (even loosely!) of the time spent working on/staring at a particular >>> piece of code. Even if you could measure the time during which the >>> code was "active" in an editor/IDE, there is no guarantee that the >>> developer is *looking* at it! Or, *thinking* about it! >> >> My own development process fits both waterfall and spiral methods of >> development. In the manner of good "Project Management" there is a >> significant portion of Up-Front work in getting the specs right (these >> documents are "Components of the System" as well and are kept under tight >> version control and change management throughout). >> >> The core element of my development process leaves an audit trail >> automatically (whether you operate it on paper or software tool aided). >> Within this audit trail you will find your metrics (on how many times a >> component has been round its action, review, change loop). How many >> issues were raised in the reviews against it. How many problem reports >> involved the component. How long it took to get an approved component to >> pass the review. When even the act of getting a good specification is >> under such a development regime you can ensure it meets the 6 C's >> criteria for a good requirements specification. > > I can tell you how and when an object is touched, what was done to it, > etc.
That information is recorded as well. My process documentation runs with four forms: a Review Record Form (where the issues with a component are recorded and the people involved in the review), a Change Proposal Form (to determine what should be changed, which is also reviewed before progressing), a Work Instruction Form (which details the specific change permitted to be made and the designated insertion point), and a Problem Report Form (that captures any remaining problems that escape notice before delivery). There is a Project Register which records the events of these activities, so there is some semblance of effort time as an upper bound. However, for actual timing expended, there is a reliance on the individual engineer's journal if they happen to remember to record such information.
> What I *can't* tell is how much *effort* goes into "creating/changing" it. > (effort is measured in man-hours)
As I am at a Conference (on Provably Correct Software) at present I am away from the metrics record. However, I provide an aggregate effort measure for the Inspection, Functional and Limits Testing per software component based on the cyclomatic complexity of the component under examination. That works out at:-

For a component of cyclomatic complexity <3, about 1 hour.
For a component of cyclomatic complexity >3 and <7, about 2 hours.
For a component of cyclomatic complexity >7 and <10, about 8 hours.
For a component of cyclomatic complexity >10, it is a best guess on how many days/weeks it may take.

I only remember this because I was recently reviewing the recorded metrics for effort expended in this regard. Inspection and Test are a form of review and are covered by a Review Record Form.
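(Restated as code for quick reference -- a sketch only. The post gives bands of <3, 3..7, 7..10 and >10 without pinning down which side the exact boundary values fall on, so the cut-offs chosen below are an assumption:)

    /* Rough review/test effort estimate per Paul's rule of thumb.
     * Returns hours, or a negative value meaning "estimate in days or
     * weeks, case by case".  Boundary handling (3, 7, 10) is assumed. */
    static int review_effort_hours(int cyclomatic_complexity)
    {
        if (cyclomatic_complexity < 3)
            return 1;
        if (cyclomatic_complexity <= 7)
            return 2;
        if (cyclomatic_complexity <= 10)
            return 8;
        return -1;   /* >10: best guess in days/weeks */
    }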
> E.g., how long (hours of labor) did it take to create a spec? > How long (hours of labor) to create the hardware/software to reify > that spec?
Getting to a final requirements specification (that meets the 6 C's criteria) can utilise up to 60% of the project life-time. However, this upfront effort has a benefit in giving the designers and developers a solid basis from which to work and a big reduction in the number of errors that arise in the initial specification stage.
> While you may have "checked out" an object at a particular time and > checked in the next version at some *later* time, the difference > (in - out) isn't truly representative of the time required to > "do whatever you did" to create that new version from its predecessor. > It just acts as an upper limit on the "time required". > > Did you check it out, work on it (actively) for 10 minutes and then > go on vacation for 2 weeks before checking it back in on your return? > > Did you check it out along with several other objects and work > on other things along with it before checking it back in? > > Did you try several different, unsatisfactory variations of the > revision and only check in the "final" attempt?
You have to look at your metrics regime and design the data collection steps that are important to you. If it is important to you, then you have to work out how you collect it. Time spent on developing an individual component is not that much of a concern for me. I get a reasonably good average figure from my own development rate but that is aggregate data from my own journals and the development process metrics. It takes a bit of effort to extract that data and I do so from time to time. However, my most important figure that takes my focus is the number of errors released to the client (which is satisfyingly low).
> I.e., I want to be able to put a number on the *effort* required > (beyond counting effective keystrokes). I can check out a version > and spend a lot of effort refactoring it into an *equivalent* > version with very similar metrics. How is that *effort* measured > and accounted?
I guess only you will be able to answer why you need that data and how important that figure will be for you.

--
********************************************************************
Paul E. Bennett IEng MIET.....<email://Paul_E.Bennett@topmail.co.uk>
Forth based HIDECS Consultancy.............<http://www.hidecs.co.uk>
Mob: +44 (0)7811-639972 Tel: +44 (0)1392-426688
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
********************************************************************
Hi Paul,

On 3/9/2015 5:57 PM, Paul E Bennett wrote:

8<

>> I can tell you how and when an object is touched, what was done to it, >> etc. > > That information is recorded as well. My process documentation runs with > four forms. A Review Record form (where the issues with a component are > recorded and the people involved in the review), A Change Proposal Form, to > determine what should be changed (which is also reviewed before > progressing), a Work Instruction Form (which details the specific change > permitted to be made and the designated insertion point), and a Problem > Report Form (that captures any remaining problems that escape notice before > deliver). There is a Project Register which records the events of these > activities, so there is some semblance of effort time as an upper bound. > However, for actual timing expended, there is a reliance on the individual > engineers journal if they happen to remember to record such information.
... and there's the rub. Reliance on "self-reporting" leaves too much slop in the measurements. E.g., when you're working a 9-to-5 and have to account for your time, you typically jot down all the meetings that you attended in the past week (as those are usually defined durations; plus some "travel time" getting to/from the conference room, bullshitting, etc.). Some amount of time for reading mail (magazines, etc.) and email, phone calls, etc. And, any other "notable events" that come to mind (birthday party for Fred, etc.). The balance you lump in the <whatever-project-I've-been-working-on> account.

Most firms don't finely differentiate time *within* that account: writing specs, writing code, designing hardware, troubleshooting hardware, testing code, etc. They really aren't interested (able?) in knowing how much of a project's cost lies in software, hardware, marketing, etc. They'll just assume ALL time charged by a "software jock" is software related, hardware jock is hardware related, etc. (*if* they even parse it at that level of detail!)

I want much finer data. How much time went into writing/rewriting *this* specification. And, implementing the code it describes. And testing it against that spec. And fixing bugs uncovered. etc. So, you know what a particular "component" cost you, some indication of how reliable it is LIKELY to be (are there *more* undiscovered bugs? components of similar cost/complexity suggest...)

Then, aggregate these costs and use them to project/rationalize higher level uses based on those components/services. I.e., "This component cost X but allowed this other subsystem to be more robust/cheaper/etc."
>> What I *can't* tell is how much *effort* goes into "creating/changing" it.
>> (effort is measured in man-hours)
>
> As I am at a Conference (on Provably Correct Software) at present I am away
Cool!
> from the metrics record. However, I provide an aggregate effort measure for
> the Inspection, Functional and Limits Testing per software component based
> on the cyclomatic complexity of the component under examination. That works
> out at:-
>
> For a component of cyclomatic complexity <3, about 1 hour.
> For a component of cyclomatic complexity >3 and <7, about 2 hours.
> For a component of cyclomatic complexity >7 and <10, about 8 hours.
> For a component of cyclomatic complexity >10, it is a best guess on how
> many days/weeks it may take.
>
> I only remember this because I was recently reviewing the recorded metrics
> for effort expended in this regard. Inspection and Test are a form of review
> and are covered by a Review Record Form.
Thanks, I'll make a note of it!
>> E.g., how long (hours of labor) did it take to create a spec? >> How long (hours of labor) to create the hardware/software to reify >> that spec? > > Getting to a final requirements specification (that meets the 6 C'S > criteria). can utilise up to 60% of the project life-time. However, this > upfront effort has a benefit in giving the designers and developers a solid > basis from which to work and a big reduction in the number of errors that > arise in the initial specification stage.
Sorry, my questions were intended to be rhetorical -- the sorts of queries I'd like to be able to answer *after* collecting hard data. Why did this component require so much *more* effort than this other? Why is this component so much more robust than the others? etc.
> You have to look at your metrics regime and design the data collection steps > that are important to you. If it is important to you, then you have to work > out how you collect it.
Yes, that was my point. I can measure complexity in code, size, etc. It can be done after-the-fact. And, in many different ways (so you can see which measures correlate most closely with <whatever> you are trying to deduce).

But, measuring the *time*/effort required for a task must be done *while* the effort is being expended. You can't "look back" on past efforts (unless you've videotaped them, etc.). So, I need to come up with a way of *deciding* "what constitutes effort" (having an editor open into a source file doesn't mean you're DOING anything with it! Nor does the fact that you are NOT actively typing mean that you AREN'T expending effort on it!) And, in a way that doesn't lend itself to cheating (subconsciously or not) or "reporting error" (did you forget that you spent 2 hours looking at components for the *next* design today? I.e., your reporting of today's effort is high by two hours. And, the next project will probably be LOW by two hours!)
> Time spent on developing an individual component are > not that much of a concern for me. I get a reasonably good average figure > from my own development rate but that is aggregate data from my own journals > and the development process metrics. It takes a bit of effort to extract > that data and I do from time to time. However, my most important figure that > takes my focus is the number of errors released to the client (which is > satisfyingly low). > >> I.e., I want to be able to put a number on the *effort* required >> (beyond counting effective keystrokes). I can check out a version >> and spend a lot of effort refactoring it into an *equivalent* >> version with very similar metrics. How is that *effort* measured >> and accounted? > > I guess only you will be able to answer why you need that data and how > important that figure will be for you.
There are (always) many ways to solve a problem. But, seldom many *measures* by which you can evaluate the cost of each solution style alongside its benefits.

I have a rather elaborate distributed RT system. Over time, I expect others to add "modules" (hardware/drivers/software) to extend it to suit their particular needs (e.g., if you have a motorized skylight, you -- or someone else -- would have to design a hardware interface to that mechanism and a hardware/software interface to the rest of my system). I can't FORCE people to do things the same way that I have. OTOH, if I can show (from hard data) the benefits of continuing along that trend, then that could potentially *entice* folks to do as I've done -- for a more consistent implementation (which, in turn, makes it easier for others to build on *their* work -- instead of having to learn 25 different ways of interfacing to my common platform).

E.g., I have a doorbell function. Invoke a sensor, activate an annunciator. Piece of cake. Newbie designer would probably do it entirely in hardware. And, be *stuck* with that implementation (how do I know when someone has stopped by in my absence? how does a deaf person "hear" the bell? what if I don't like the noise you've chosen to make? etc.) A newbie *programmer* would write a tight little loop that polled the button and activated the annunciator. Over time, he might add controls to allow the type of annunciator to be "adjusted". Or, the duration of the "ring". etc. Someone a bit more skilled might realize that moving the code into an ISR would allow the machine to do something *else* along with ringing the doorbell. Beyond that, an MTOS might provide even *more* flexibility -- while still allowing the bell to be "serviced". An RTOS implementation could provide timeliness guarantees that the MTOS can't.

[See my point? Each enhancement comes with added cost, risk, reliability, etc. But, nowhere are those costs spelled out -- to offset the flexibility provided]

The model I've adopted has the "sensor/button polling" handled, by necessity, in a driver running on the hardware that interfaces directly *to* that sensor. Similarly, the annunciator is handled by a driver running on hardware that interfaces to that annunciator (whatever it may be!). The *application* then reads one and, conditionally, writes to the other. Conceptually the same as the "tight loop" described previously. However, in my scheme, the application can run <wherever> -- as long as it can access the two (software) mechanisms described above. I make this possible with lots of "mechanism" that sits between the application and the implementation. All at some *cost*. But, the application can then be as trivial as:

    input = open("/sensor", "r")
    if (!input) {
        // gasp!  doorbutton hardware is offline!  Or, not available for read
        die("miserably")
    }

    output = open("/actuator", "w")
    if (!output) {
        // gasp!  actuator hardware is offline!  Or, not available for write
        die("miserably")
    }

    while (FOREVER) {
        if (read(input) == "pressed") {
            write(output, "dingdong")
        }
    }

which is conceptually as complex as a tight loop reading an input port and writing an output port. Though a boatload of other code is hiding behind the scenes to make it possible. How do you justify this (bloated?) approach to a future developer?
One way is to show him similarly "simple" apps:

    input = open("/sensor", "r")
    if (!input) {
        die("miserably")
    }

    while (FOREVER) {
        if (read(input) == "pressed") {
            case (doorbell_handling) {
            "DVR" =>
                output = open("/DVR", "w")
                if (!output) {
                    die("miserably")
                }
                if (!write(output, "select front_door_camera")) {
                    die("miserably")
                }
                write(output, "record 5 seconds")
            "bell" =>
                output = open("/actuator", "w")
                if (!output) {
                    die("miserably")
                }
                write(output, "dingdong")
            "sms" =>
                output = open("/SMS", "w")
                if (!output) {
                    die("miserably")
                }
                write(output, "send to Don; Someone just knocked on the door")
            "silent" or nil =>
                visitors++
            }
        }
    }

and challenge him to implement them with his "other" approach! ("Yeah, you can record video with just 2 lines of code! But, only if you buy into this approach! Otherwise, you'll have to write your own interface to the video subsystem and hope you don't break it for the other apps that are using it!")

How do you *informatively* tell him the magnitude of the effort that will be required on his part if he chooses to adopt a compatible approach? Ans: empirical data. Show him "this" cost "that" to implement. And, let him make a value judgement as to whether he wants access to these other capabilities *for* that "investment".
Reinhardt Behm <rbehm@hushmail.com> wrote:

(snip)

> I have a simple rule: There may be not warnings.
> Simple things like a warning about a missing cast will be fixed. > Others in my experience are most often the result of bad programming > style and _have_ to be fixed. Possibly by not removing the warning but > the programmer.
Compiler writers keep adding more warnings, no matter how rare the condition warned about. At some point, the warnings take more time to check than the conditions being warned about.

-- glen
Don Y wrote:

> In light of that, you want to keep as many warnings ("advisories") > enabled as is possible! But, in practice, this can generate a lot of > "advisory output" that, once you've checked everything "the first > time", you really want to be able to IGNORE (without turning them > "off")
Exactly.
>> I have considered keeping a log of the expected warnings (with >> documentation of why the warnings are expected) and then making it an >> error to have a different set of warnings than exactly the documented >> set. > > You could capture the output to a file and then diff that against > future output. But, as line numbers can change, this just turns one > problem (verifying the exact same warnings persist) into another > problem (verifying the warning on line X is really the same warning > that is now reported on line Y).
My thoughts too.
> In a GUI IDE, one could conceivably tag (click) each warning's > corresponding source and have the IDE remember "this warning is OK, > here". But, unless you can encode that in the source itself, its not > portable to other tools.
Yes. But it is possible to encode it in the source. The problem is to do it robustly - and in a way that doesn't annoy the programmer.
>> The Ada compiler I use allows me to identify some warnings >> (unreferenced parameters and objects) as expected, but the *why* has >> to be a comment. > > I'd like something akin to bison's %expect/%expect-rr capability. > I.e., if the expected warning WOULD BE generated, it is suppressed. > And, if the expected warning (or, a *different* warning) would [not?] > be generated, an ERROR is signaled.
This sounds like how "my" Ada compiler works:

    procedure Warnings is
       Object : constant Boolean := True;
       pragma Unreferenced (Object);
    begin
       if Object then -- "warnings.adb:4"
          null;
       end if;
    end Warnings;

Compiling:

    warnings.adb:4:07: warning: pragma Unreferenced given for "Object"
> And, when the warning fails to materialize, have the compiler *complain*! > ("Hey, I know you expected this warning, here. But, it didn't happen. > Are you sure? At the very least, your documentation claiming it *would* > be here is faulty!")
Definitely!
>> Keeping a log of expected warnings and checking compilation and tool >> results against that list is of course a solution, but it feels too >> much like a hack. > > Agreed. And, too "manual"/disciplined.
What I've considered is to keep the "log" of expected warnings as specially formatted comments in the source, and then write a tool which correlates the compiler and tool output with the expected warning markers in the source files. It is still an extra step to do, but it would be easy to integrate it in my existing build and test framework, once the tool was written.
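(One possible shape for such a tool, sketched in C rather than Ada. The marker syntax -- "EXPECT-WARNING:" in a comment on the line before the one that triggers the warning -- and the "file:line: warning: text" log format are my assumptions, purely for illustration; it flags both unexpected warnings and expected warnings that never materialized:)

    /* check_expected.c -- sketch of a checker that correlates compiler
     * warnings (read from stdin) with EXPECT-WARNING markers in sources.
     * Usage:  cc ... 2>&1 | ./check_expected file1.adb file2.adb ...
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define MAX_ITEMS 4096

    struct item {
        char file[256];
        int  line;
        char text[512];
        int  matched;
    };

    static struct item expected[MAX_ITEMS]; static int n_expected;
    static struct item warned[MAX_ITEMS];   static int n_warned;

    /* Collect EXPECT-WARNING markers from one source file. */
    static void scan_source(const char *path)
    {
        FILE *f = fopen(path, "r");
        char buf[1024];
        int lineno = 0;

        if (f == NULL) { perror(path); exit(2); }
        while (fgets(buf, sizeof buf, f) != NULL && n_expected < MAX_ITEMS) {
            char *mark;
            lineno++;
            mark = strstr(buf, "EXPECT-WARNING:");
            if (mark != NULL) {
                struct item *it = &expected[n_expected++];
                snprintf(it->file, sizeof it->file, "%s", path);
                it->line = lineno + 1;       /* marker covers the next line */
                snprintf(it->text, sizeof it->text, "%s",
                         mark + strlen("EXPECT-WARNING:"));
                it->text[strcspn(it->text, "\r\n")] = '\0';
            }
        }
        fclose(f);
    }

    /* Parse "file:line: warning: text" records from the log on stdin. */
    static void read_log(void)
    {
        char buf[1024];

        while (fgets(buf, sizeof buf, stdin) != NULL && n_warned < MAX_ITEMS) {
            char *tag   = strstr(buf, ": warning: ");
            char *colon = strchr(buf, ':');
            if (tag == NULL || colon == NULL)
                continue;                    /* not a warning record */
            struct item *it = &warned[n_warned++];
            snprintf(it->file, sizeof it->file, "%.*s",
                     (int)(colon - buf), buf);
            it->line = atoi(colon + 1);
            snprintf(it->text, sizeof it->text, "%s",
                     tag + strlen(": warning: "));
            it->text[strcspn(it->text, "\r\n")] = '\0';
        }
    }

    int main(int argc, char **argv)
    {
        int i, j, status = 0;

        for (i = 1; i < argc; i++)
            scan_source(argv[i]);
        read_log();

        /* Pair each warning with a marker for the same file and line
         * (paths must match the spelling the compiler uses). */
        for (i = 0; i < n_warned; i++) {
            for (j = 0; j < n_expected; j++) {
                if (!expected[j].matched &&
                    expected[j].line == warned[i].line &&
                    strcmp(expected[j].file, warned[i].file) == 0) {
                    expected[j].matched = warned[i].matched = 1;
                    break;
                }
            }
            if (!warned[i].matched) {
                printf("UNEXPECTED %s:%d: %s\n",
                       warned[i].file, warned[i].line, warned[i].text);
                status = 1;
            }
        }
        /* An expected warning that never showed up is also an error. */
        for (j = 0; j < n_expected; j++) {
            if (!expected[j].matched) {
                printf("MISSING    %s:%d: expected \"%s\"\n",
                       expected[j].file, expected[j].line, expected[j].text);
                status = 1;
            }
        }
        return status;
    }

(Run as an extra step after the build; a non-zero exit status fails the build, which gets both halves of the behaviour wished for above: unexpected warnings are errors, and so are expectations that no longer hold.)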
> I believe people are inherently lazy. If you *require* them to > perform some action, there is a good chance that they will > (eventually) fail to do so. E.g., remove the comment alerting the > developer to the warning (hence, make that a hard error). > > OTOH, taking the approach of "treat all warnings as errors" (compiler > flag) ends up getting folks to just insert <whatever> to placate the > compiler. Without considering the nature of the warning and *why* > (if?) their action should compensate, logically.
Exactly. But what is the solution then? To accept warnings as *warnings* until the reason can be peer-reviewed?

Greetings,

Jacob
--
"Can we feel bad for the universe later?"