
Modern debuggers cause bad code quality

Started by Oliver Betz December 2, 2014
Hi All,

of course, the subject is just a rant to make you read and comment on
this.

Did developers two decades ago think better before they started
coding?

In the early days of embedded computing, most embedded developers
could use a TTY interface at best and instrumented the code with some
print statements if something went wrong.

A build and test cycle took several minutes because erasing and
programming EPROMs took so long.

ICEs were extremely expensive and didn't even provide the capabilities
of modern tools.

Today, you can get some kind of "background debug interface" nearly
for free, and build and upload new code in seconds.

At the ESE Kongress in Sindelfingen today, Jack Ganssle lamented in
his keynote that developers spend 50% of their time on debugging.

Could it be that today's sophisticated tools lead to more "trial and
error" and less thinking before doing?

Oliver
-- 
Oliver Betz, Munich http://oliverbetz.de/
On Tue, 02 Dec 2014 16:31:52 +0100, Oliver Betz
<OBetz@despammed.com> wrote:
> Could it be that today's sophisticated tools lead to more "trial and
> error" and less thinking before doing?
Don't blame the debuggers; by far most developers don't even use high-end
"sophisticated" debuggers which can make full use of the on-chip debug logic
and provide reliable CPU trace. Today's hardware and software are way more
complex, however.

Some cynical remarks:

We use hardware with badly documented and/or broken peripherals, which
requires debugging.

We use libraries with badly documented and/or broken APIs, which requires
debugging.

We use developers who can't RTFM and/or perform proper problem analysis,
because the good ones were taken by those with (government) funding.

-- 
(Remove the obvious prefix to reply privately.)
Made with Opera's e-mail client: http://www.opera.com/mail/
On 02/12/14 15:31, Oliver Betz wrote:
(snip)
Similar points have been made for the past 35 years, to my certain knowledge.
On 12/2/2014 8:31 AM, Oliver Betz wrote:
> Did developers two decades ago think better before they started coding?
I think it depends a lot on the developer. Some like to do their homework
"up front". Others start writing code before the marketing folks have even
finished describing their fantasies...
> In the early days of embedded computing, most embedded developers could use
> a TTY interface at best and instrumented the code with some print statements
> if something went wrong.
>
> A build and test cycle took several minutes because erasing and programming
> EPROMs took so long.
My first commercial project had a build cycle of almost *4* hours! Three
developers sharing a codebase on 8" floppies, with code that had to fit in
12KB of EPROM (that's *K*B) spread across six devices, each of which took
~20 minutes to program.

Access to the hardware (development tools, prototype, etc.) forced you to
spend a lot of time with "hard copy" -- *planning* how you would verify the
code's execution *and* how you would get feedback from the system (no
"debugger", no serial port, etc., just a collection of digital I/Os that you
could try to repurpose for debugging use -- if that use wouldn't then render
the *intended* use inoperable!).

Lots of discipline, cooperation and communication so your individual pieces
of code weren't routinely stomping on each other's progress. Tying up a few
hundred dollars of EPROM for each turn of the crank meant you didn't keep
old versions lying around to re-evaluate: plug them in, observe the results,
then move them under the UV light so they'll be ready to burn when the
*other* set are done!

Before that, punching cards and submitting jobs "once a day" -- only to find
you had a JCL card missing/out of place and the whole job ABEND'ed.

Of necessity (i.e., if you wanted to get *any* work done in a given amount
of time given the restrictions on the hardware available), you thought more
to make every effort "yield results".
> ICEs were extremely expensive and didn't even provide the capabilities of
> modern tools.
The last ICE I bought was ~$25,000. However, it was pretty capable given the *lack* of debug support in the silicon of that era.
> Today, you can get some kind of "background debug interface" nearly for
> free, and build and upload new code in seconds.
Today, you can prototype code *without* hardware! I now write most of my code before I've even formalized schematics!
> At the ESE Kongress in Sindelfingen today, Jack Ganssle lamented in his
> keynote that developers spend 50% of their time on debugging.
I budget 40% of a project for specification/design; 20% for coding; and 40% for testing/verification. To me, this is intuitive: Figure out what you want to do IN ALL CASES and be able to verify that it behaves as intended in each of those cases. The coding is just the boring "middle work".
> Could it be that today's sophisticated tools lead to more "trial and error"
> and less thinking before doing?
I know friends who are always looking for faster machines to shrink their code-compile-debug cycle time -- but, they also tend to be the sorts who just stumble on something that *looks* like it may be a bug, change it, rebuild and use the "resulting performance" (which may coincidentally be misleading!) to "do their thinking"... i.e., if it now (appears) to work, then that *must* have been the problem, right??
On Tue, 02 Dec 2014 16:31:52 +0100, Oliver Betz wrote:

(snip)
Are you thumping your cane on the floor as you complain about kids these
days, and how the world is degenerating into crap because of them? I think
if you look, you'll probably find similar complaints in the Bible, or
perhaps written in hieroglyphics on stelae in Egypt someplace.

Times change. The details of the screwups change. The nature of the thinking
leading to the screwups doesn't change, nor do the basic solutions.

On a related note, I've been taking more and more to test driven development
over the last decade, because it seems to help my development a lot. In the
last two years I've gotten much more strict about doing everything I
possibly can under TDD, even if I have to add hardware abstraction layers to
do it.

My code has gotten better as a consequence. Not because so much more of my
code is -- perforce -- unit tested, but because the level of detail of
testing that TDD calls upon you to do forces you to think about what you're
doing in much greater detail, and to verify that your head is screwed on
straight as you did the thinking.

-- 
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
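As an illustration of the technique, a hardware abstraction layer that lets
embedded logic be unit tested on a host under TDD might look like the
following minimal C sketch (the gpio_ops struct, pin numbers and fake
implementation are made-up names for the example, not taken from any real
project): the application logic talks to a table of function pointers
instead of touching registers, so a host-side test can substitute a fake
that just records calls.

    /* Minimal HAL sketch: the target build plugs in register-level
     * implementations, the unit test plugs in fakes. Names are illustrative. */
    #include <stdbool.h>
    #include <assert.h>

    struct gpio_ops {
        void (*set_pin)(int pin, bool level);  /* target build: write the port register */
        bool (*get_pin)(int pin);              /* target build: read the port register  */
    };

    /* Application logic under test: drive a fault LED from a sensor input. */
    static void update_fault_led(const struct gpio_ops *io, int sensor_pin, int led_pin)
    {
        io->set_pin(led_pin, !io->get_pin(sensor_pin));  /* LED on when the sensor reads low */
    }

    /* Host-side fake used only by the unit test. */
    static bool fake_pins[8];
    static void fake_set(int pin, bool level) { fake_pins[pin] = level; }
    static bool fake_get(int pin)             { return fake_pins[pin]; }

    int main(void)
    {
        const struct gpio_ops fake = { fake_set, fake_get };

        fake_pins[0] = false;              /* sensor low...             */
        update_fault_led(&fake, 0, 1);
        assert(fake_pins[1] == true);      /* ...so the LED must be on  */

        fake_pins[0] = true;               /* sensor high...            */
        update_fault_led(&fake, 0, 1);
        assert(fake_pins[1] == false);     /* ...so the LED must be off */

        return 0;
    }

The same update_fault_led() compiles unchanged for the target once a
register-level gpio_ops is supplied, which is the point of the abstraction.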
On 03/12/14 02:31, Oliver Betz wrote:
> At the ESE Kongress in Sindelfingen today, Jack Ganssle lamented in his
> keynote that developers spend 50% of their time on debugging.
Before modern tools, much more time was spent putting the bugs *into* the
code.

I can only recall a couple of instances of embedded devices (out of the
thousands I've ever encountered) that had a human interface element, which
did not have flaws and quirks in the interface. HP and Sony seem to be able
to mostly avoid this, but most embedded programmers are hopeless. How hard
can it be to code a microwave oven timer to work in a sane and correct
fashion? Yet I've never seen a single one.

Eventually the embedded world will catch up to the leading edge of the web
world, which uses TDD/BDD to code end-user expectations as tests that run
automatically and quickly, which (as well as actually causing thought about
UI state transitions) avoids most debugging.
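To make the idea concrete, coding an end-user expectation as a test for
something like that microwave timer could be as small as the following C
sketch (the oven struct, states and key handlers are hypothetical, invented
purely for this example): the expectation "START with no time entered must
not start cooking" becomes an assertion that runs on every build.

    #include <assert.h>

    enum oven_state { IDLE, TIME_SET, COOKING };

    struct oven {
        enum oven_state st;
        int seconds;
    };

    /* Keypad handlers for the hypothetical timer. */
    static void press_digit(struct oven *o, int d)
    {
        o->seconds = o->seconds * 10 + d;
        o->st = TIME_SET;
    }

    static void press_start(struct oven *o)
    {
        /* The user expectation under test: START is ignored unless time was entered. */
        if (o->st == TIME_SET && o->seconds > 0)
            o->st = COOKING;
    }

    int main(void)
    {
        struct oven o = { IDLE, 0 };

        press_start(&o);              /* START with no time entered... */
        assert(o.st != COOKING);      /* ...must not start cooking     */

        press_digit(&o, 3);
        press_digit(&o, 0);           /* user keys in "30"             */
        press_start(&o);
        assert(o.st == COOKING);      /* now it should run             */

        return 0;
    }

Writing the test first forces the state transitions to be spelled out before
any display or relay code exists, which is where most of the quirks would
otherwise creep in.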
"Oliver Betz" <OBetz@despammed.com> wrote in message 
news:d6jr7a9pnh62vqs58stulv9qdegjt637a8@4ax.com...
> A build and test cycle took several minutes because erasing and programming
> EPROMs took so long.
OMG, what did you work on? Back in '87, which is not even the early days of
embedded, a build cycle could take hours.

Tim
On 12/2/2014 2:09 PM, Clifford Heath wrote:
> I can only recall a couple of instances of embedded devices (out of the
> thousands I've ever encountered) that had a human interface element, which
> did not have flaws and quirks in the interface.
Most embedded devices (until recently, with the abundance of RELATIVELY
inexpensive graphical displays) have tightly constrained hardware. And, a
design mentality ("from above") that discourages adding any recurring costs
that don't have direct ($$) benefits.

E.g., our washer/dryer still relies on 7 segment displays -- and a few
"indicators". As a result, you get silly "messages" like "nF", "dO", etc.
instead of something more informative and HELPFUL like "No Fill. Are you
sure the water valves have been opened?"

This is *so* 1970's....

C'mon... how many millions of these things (different models, etc.) are they
selling WORLDWIDE? Yet, couldn't afford even a set of 10 segment displays?
Or, just *two* "digits"-worth? Or, indicators for each of these conditions?

"Ah, but what happens when we want to convey an error/condition that we
haven't yet ANTICIPATED??"  CTL-ALT-DEL??
> HP and Sony seem to be able to mostly avoid this, but most embedded
> programmers are hopeless. How hard can it be to code a microwave oven timer
> to work in a sane and correct fashion? Yet I've never seen a single one.
I've encountered two "camps" regarding how users are considered in the
design process.

One essentially ignores them and concentrates on trying to make the product
work. I.e., as if just getting ANYTHING out the door will be a major
accomplishment -- "worry about the details (users), later".

The other tries to understand the user's needs and thinking. Then, adopts
features and mechanisms that "fit" with this understanding. While this
*seems* better (at least they are considering the user as part of the
"system"), it often results in what I call "The Accountant Mentality": where
the user is expected to "perform" in a fixed, "anticipated" manner. There is
(allegedly) some logic to The Interface and it's just a question of making
it as easy as possible for the user to *accept*/adapt to that logic.

"Power Level, 9; Time, 1 0 0; START"

For the past several projects, I've pursued a different approach: try to let
the user do what he wants and *infer* what he *intends*. I.e., encode
*minimal* prerequisites that allow the application to guide the user along.
E.g., let earlier actions refine the constraints on later ones...

I have found there is a LARGE class of users that are VERY uncomfortable
with this sort of approach! They want a scripted interface: do this, then
this, then that. The freedom I present leaves them uncertain of every action
they take -- despite demonstrating the fact that they won't be allowed to
"screw up" (if you forgot something, you'll be reminded WHEN YOU TRY TO
CONTINUE PAST THE POINT WHERE IT IS REFERENCED).
> Eventually the embedded world will catch up to the leading edge of the web
> world, which uses TDD/BDD to code end-user expectations as tests that run
> automatically and quickly, which (as well as actually causing thought about
> UI state transitions) avoids most debugging.
I've seen a lot of brain-dead web apps that force you to take steps in a
very specific sequence -- even when there is no logical reason for doing so.
Or, walk you through a series of "screens" (pages) only to discover you
want/need to go back and change something on screen #2... but, the only way
to do that is to quit and start over!

"Why do I have to fill in my name, address, etc. before I can even get a
GUESSTIMATE of the total cost of this item? Beyond a ZIP code, what more do
you need to determine what local tax rates apply and shipping costs?? Heck,
you should be able to give me a TYPICAL RANGE of shipping costs (with a
footnote that qualifies that estimate) so I can see if it is likely to apply
to me BEFORE I've even provided a ZIP code. I mean, how many gigabytes and
MIPS do you have running this little app??"

Give me a hundred MIPS or so and even a few MB, a graphic display, etc. and
you'll be surprised how elegant that microwave oven interface can be!  :>

Conversely, let your web app have a few KB and a few KIPS and a few
7-segment displays and tell me how delightful *that* experience will be!

Horses for courses.
Don Y <this@is.not.me.com> wrote:

(snip)

> Most embedded devices (until recently, with the abundance of RELATIVELY
> inexpensive graphical displays) have tightly constrained hardware. And, a
> design mentality ("from above") that discourages adding any recurring costs
> that don't have direct ($$) benefits.
>
> E.g., our washer/dryer still relies on 7 segment displays -- and a few
> "indicators". As a result, you get silly "messages" like "nF", "dO", etc.
> instead of something more informative and HELPFUL like "No Fill. Are you
> sure the water valves have been opened?"
>
> This is *so* 1970's....
And designers of appliances should test actual users to see how they
respond?

I was just noting in another newsgroup a CSE seminar,
https://www.cs.washington.edu/events/colloquia/details?id=2594
"The Programming Language Wars", on how little testing is done to see how
users use programming language features.

One test he did was to compare an existing language, using actual people to
do the test, with a similar language using random ASCII characters in place
of keywords. See: http://dl.acm.org/authorize?6968137 (It should work, even
without an ACM subscription.)

If that doesn't work, or you want to see other papers:
http://web.cs.unlv.edu/stefika/Papers.php
> C'mon... how many millions of these things (different models, etc.) are
> they selling WORLDWIDE? Yet, couldn't afford even a set of 10 segment
> displays? Or, just *two* "digits"-worth?
>
> Or, indicators for each of these conditions?
Well, I might think that when no water comes out people figure to check the
valves, but you never know. But for cost-sensitive items, every cent counts.

(snip)
> I've encountered two "camps" regarding how users are considered in the
> design process.
>
> One essentially ignores them and concentrates on trying to make the product
> work. I.e., as if just getting ANYTHING out the door will be a major
> accomplishment -- "worry about the details (users), later"
I have thought before about how many products come out before the theory is
well enough understood. So, yes, in the beginning it is often true that
getting anything out is a major accomplishment.

(snip)
> "Power Level, 9; Time, 1 0 0; START"
I still have a microwave with knobs. I always forget which order to enter
the power and time when using digital versions. I can change the power level
while it is running; just turn a knob.
> For the past several projects, I've pursued a different approach: try to
> let the user do what he wants and *infer* what he *intends*. I.e., encode
> *minimal* prerequisites that allow the application to guide the user along.
> E.g., let earlier actions refine the constraints on later ones...
The above link also has a paper comparing static typing vs. dynamic typing for programming languages. The latter might correspond to your *infer* case. It seems that users do better with static typing.
> I have found there is a LARGE class of users that are VERY uncomfortable
> with this sort of approach! They want a scripted interface: do this, then
> this, then that. The freedom I present leaves them uncertain of every
> action they take -- despite demonstrating the fact that they won't be
> allowed to "screw up" (if you forgot something, you'll be reminded WHEN YOU
> TRY TO CONTINUE PAST THE POINT WHERE IT IS REFERENCED)
(snip)

-- 
glen
Oliver Betz wrote:

(snip)
Talk about cats amongst pigeons. Like Jack, some of us have learnt the right
way to approach any project and do, indeed, do a lot of up-front thinking.
It is a trait I have tried to instil in all the apprentices and graduate
student intake at the companies I have been involved in over the years.

Noting that Don budgets 40% of his time to up-front specification resolution
and project approach planning, I consider this to be on the light side. My
own figure is closer to 60% of the total project time spent getting the spec
right (including testing and debugging the spec). During this period the
spec can change quite dramatically as problems are highlighted, identifying
requirements that could lead to potentially unsafe operations (I am, after
all, in the High Integrity Systems market). The benefit of all this up-front
work is that the design task becomes much more straightforward and I can
then produce decent certifiable electronics hardware and software.

Within the resolution of the requirements specification I will have invested
some significant portion of "play-time". This "play-time" is the exploration
of small aspects of the requirements with the aim of improving the
requirements specification. Any prototype code or design produced at this
stage is milked for information only for the purpose of requirement
specification improvement. It is then scrapped.

Of the remaining 40% of the project timetable, we test as we build as much
as is practicable. The test specifications will have come out of the 60%
block, and satisfying those tests completely means your design is fulfilling
the specifications. In this latter period we might see a few (usually minor)
gotchas but then, no development process will be absolutely perfect.

Errors that creep into projects are quite language and technology agnostic.
44% of a project's errors will be inserted within the specification stage
(see "Out of Control" by the UK Health and Safety Executive). This is why it
makes sense to remove those errors before you start the design effort.

Of course, in order to remain in control of the project effort and ensure
that the team are moving to the same overall plan, you need to have a
decently robust Development Process in place. CMMI level 3 is the bare
minimum your process should support. Higher, though, is better. Correct by
Construction is the best (and is an improvement beyond CMMI level 5).

-- 
********************************************************************
Paul E. Bennett IEng MIET.....<email://Paul_E.Bennett@topmail.co.uk>
Forth based HIDECS Consultancy.............<http://www.hidecs.co.uk>
Mob: +44 (0)7811-639972
Tel: +44 (0)1235-510979
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
********************************************************************