
Modern debuggers cause bad code quality

Started by Oliver Betz December 2, 2014
Don Y wrote:
> On 12/4/2014 6:30 AM, Les Cargill wrote:
>> Sometimes the overhead of TTY output ... Heisenbugs the system, but
>> these days just hitting a breakpoint probably means a full system
>> restart.
>
> That depends on what you are debugging and whether or not the balance
> of the system can tolerate pausing the component under test. In
> my systems, the debugger is integrated with the "OS" so that it
> effectively suspends the process/task/thread that is being tested.
> As such, the rest of the system can be allowed to run -- albeit
> "waiting" on the suspended element(s).
That's good if it works out. But an FPGA waits for no man :)
>> I'm biased because most of the bugs I chase these days are in released
>> product and all the easy debug strategies have been used and failed.
>> I spend most of my time developing resources to *reproduce* bugs; some
>> are pretty obscure.
>
> Black boxes are cheap (unless you are in a severely resource constrained
> environment -- even there, you can glob on extra resources for the BB
> that need not be present in production).
Absolutely.
> I find them invaluable to
> troubleshoot real-time activities (where the cost of gathering data
> can disrupt the activity that is being profiled). They are lightweight
> and can usually be independently sized/resized as you deem fit.
>
>> Debuggers tend to encourage people to run through it once, then
>> mentally line through that function. This, of course, varies. It's
>> nice to have options.
-- Les Cargill
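[Editor's aside: Don Y's integrated debugger above suspends just the task under test while the rest of the system keeps running. Something similar can be approximated even without OS support; the following C/pthreads sketch is purely illustrative -- the names and the cooperative, flag-based approach are assumptions, not how Don's system actually works:

    #include <pthread.h>
    #include <stdbool.h>

    /* One suspend point per task under test. The task polls it at safe
     * points; a monitor/debugger thread flips the flag. No other thread
     * ever blocks on it, so the rest of the system keeps running. */
    typedef struct {
        pthread_mutex_t lock;
        pthread_cond_t  resume;
        bool            suspended;
    } suspend_point_t;

    static suspend_point_t sp = {
        PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, false
    };

    /* Called by the task under test at instrumented "safe points". */
    void suspend_point_check(suspend_point_t *p)
    {
        pthread_mutex_lock(&p->lock);
        while (p->suspended)
            pthread_cond_wait(&p->resume, &p->lock);
        pthread_mutex_unlock(&p->lock);
    }

    /* Called from the monitor/debugger side. */
    void task_suspend(suspend_point_t *p)
    {
        pthread_mutex_lock(&p->lock);
        p->suspended = true;
        pthread_mutex_unlock(&p->lock);
    }

    void task_resume(suspend_point_t *p)
    {
        pthread_mutex_lock(&p->lock);
        p->suspended = false;
        pthread_cond_broadcast(&p->resume);
        pthread_mutex_unlock(&p->lock);
    }

The task under test calls suspend_point_check(&sp) wherever it is safe to pause; everything else runs on, merely "waiting" on the suspended element, as described above.]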
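[Editor's aside: the "black boxes" mentioned above are equally cheap to sketch: a fixed-size ring of timestamped records, costing a few stores per event and read out after the fact. All names, sizes, and the timestamp source below are illustrative assumptions:

    #include <stdint.h>

    #define BB_SLOTS 256u       /* power of two, so the index wraps with a mask */

    typedef struct {
        uint32_t timestamp;     /* e.g. a free-running hardware counter */
        uint16_t event;         /* event identifier */
        uint16_t arg;           /* small payload */
    } bb_record_t;

    static bb_record_t bb_ring[BB_SLOTS];
    static volatile uint32_t bb_head;

    /* hypothetical timestamp source; substitute a real timer register */
    extern uint32_t cycle_counter(void);

    /* A few stores per event: cheap enough not to disturb the
     * real-time activity being profiled. */
    void bb_log(uint16_t event, uint16_t arg)
    {
        uint32_t i = bb_head++ & (BB_SLOTS - 1u);
        bb_ring[i].timestamp = cycle_counter();
        bb_ring[i].event     = event;
        bb_ring[i].arg       = arg;
    }

Because BB_SLOTS is a power of two, the index wraps with a mask rather than a divide, keeping the logging cost predictable.]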
On Tue, 02 Dec 2014 10:37:40 -0700, Don Y <this@is.not.me.com> wrote:

> On 12/2/2014 8:31 AM, Oliver Betz wrote:
>> Did developers two decades ago think better before they started
>> coding?
>
> I think it depends a lot on the developer. Some like to do their
> homework "up front". Others start writing code before the marketing
> folks have even finished describing their fantasies...
>
>> In the early days of embedded computing, most embedded developers
>> could use a TTY interface at best and instrumented the code with some
>> print statements if something went wrong.
>>
>> A build and test cycle took several minutes because erasing and
>> programming EPROMs took so long.
>
> My first commercial project had a build cycle of almost *4* hours!
> Three developers sharing a codebase on 8" floppies that had to fit
> in 12KB of EPROM (that's *K*B), each of which (QTY 6) took ~20 minutes
> to program.
That must have been quite a slow programmer, since those small EPROMs could usually be programmed in 5 minutes.

I used jump tables at the beginning of each EPROM, so a modification of the code within that EPROM only required burning that EPROM, not the whole EPROM set.

By allocating some unprogrammed areas at the end of each EPROM, you could insert a jump instruction in front of the code to be replaced and put the modified code at the end of the EPROM. Of course, you must find a few bytes with a suitable bit pattern to reburn a jump instruction. Reburning a few bytes from 1 (usually also the unprogrammed state) to 0 took only a few seconds instead of several minutes.
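[Editor's aside: the patching trick works because EPROM programming can only turn 1 bits into 0 bits; getting back to 1 takes a UV erase. So a patch can be burned in place only if every new byte clears bits relative to the old contents (0xFF, the erased state, is patchable to anything). A quick pre-check, as a hypothetical C sketch rather than anyone's actual tool:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Returns true if new_bytes differs from old_bytes only by
     * clearing bits, i.e. the patch can be burned without erasing. */
    bool patchable_in_place(const uint8_t *old_bytes,
                            const uint8_t *new_bytes, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            if ((old_bytes[i] & new_bytes[i]) != new_bytes[i])
                return false;
        }
        return true;
    }
]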
Paul E Bennett wrote:

[...]

>>> Errors that creep into projects are quite language and technology
>>> agnostic.
>>
>> Ganssle presented numbers: 50..100 errors/KLOC in C, 5..10 in ADA,
>> zero with SPARK.
>>
>> Of course, this doesn't disagree with your next statement:
>>
>>> 44% of a project's errors will be inserted within the specification stage
>>> (see "Out of Control" by the UK Health and Safety Executive). This is why
>>> it makes sense to remove those errors before you start the design effort.
>
> I think I know some of the places where Jack might have obtained his numbers
> (and I do not necessarily disagree with them). The languages are not used in
> isolation from a development process so you have to look at the overall
> package for a proper comparison.
I agree. The comparison is misleading, especially taken in isolation, and Ganssle's talk had a focus on processes. Nevertheless, the slides showed the numbers above without any reference to "process".
> If you look at the development environments where each of those languages
> are used you will find that the reason for the differences are more to do
> with the development process users of those languages go through. The SPARK
Of course. BTW, the comparison was against "C without static analysis", because he also stated that <10% of C programmers use static analysis.

I'm sure that "C with properly used Lint" gives much better results, not least because using Lint correlates with a better development process.

Oliver
-- Oliver Betz, Munich http://oliverbetz.de/
rickman wrote:

[...]

>>>> In the early days of embedded computing, most embedded developers
>>>> could use a TTY interface at best and instrumented the code with some
>>>> print statements if something went wrong.
>>>
>>> What do you mean "early days"? That still works for me.
>>
>> It's often (not always) inefficient compared to on-chip debugging.
>
> Define inefficient. I can do the TTY thing with the absolute minimum of
> hardware and nearly no supporting software. How exactly is that
SWD / JTAG / BDM / whatever debugging is usually "for free" if you also use this interface for production programming. Otherwise, it has the same hardware cost as TTY. And it gives you extensive access to your system without the need to instrument your code.

Consider also automated testing with the original binaries -- no instrumentation.

Oliver
-- Oliver Betz, Munich http://oliverbetz.de/
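[Editor's aside: for comparison, the TTY side really is minimal: a polled transmit routine is a handful of lines of C. A sketch with made-up register addresses and status bit -- substitute the values from your part's datasheet:

    #include <stdint.h>

    /* Hypothetical memory-mapped UART registers (illustrative only). */
    #define UART_STATUS (*(volatile uint8_t *)0x40001000u)
    #define UART_DATA   (*(volatile uint8_t *)0x40001004u)
    #define TX_READY    0x01u

    static void tty_putc(char c)
    {
        while (!(UART_STATUS & TX_READY))
            ;                   /* busy-wait until the transmitter is free */
        UART_DATA = (uint8_t)c;
    }

    static void tty_puts(const char *s)
    {
        while (*s)
            tty_putc(*s++);
    }
]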
On 12/4/2014 11:39 PM, upsidedown@downunder.com wrote:
> On Tue, 02 Dec 2014 10:37:40 -0700, Don Y <this@is.not.me.com> wrote:
>
>> On 12/2/2014 8:31 AM, Oliver Betz wrote:
>>> Did developers two decades ago think better before they started
>>> coding?
>>
>> I think it depends a lot on the developer. Some like to do their
>> homework "up front". Others start writing code before the marketing
>> folks have even finished describing their fantasies...
>>
>>> In the early days of embedded computing, most embedded developers
>>> could use a TTY interface at best and instrumented the code with some
>>> print statements if something went wrong.
>>>
>>> A build and test cycle took several minutes because erasing and
>>> programming EPROMs took so long.
>>
>> My first commercial project had a build cycle of almost *4* hours!
>> Three developers sharing a codebase on 8" floppies that had to fit
>> in 12KB of EPROM (that's *K*B), each of which (QTY 6) took ~20 minutes
>> to program.
>
> That must have been quite a slow programmer, since those small
> EPROMs could usually be programmed in 5 minutes.
I can't recall if we were using an Intellec8 or had already "upgraded" to the MDS800. The programmer ("UPP") was a dog. I think there was a small squirrel cage inside that (big!) box with a hamster running inside it! (Remember, the "development system" is running off 8" floppies so *nothing* is fast -- not even "file not found"! I think they were like 800KB? Six-letter identifiers? Everything in uppercase?? <groan>) OTOH, it was light-years better than "hand assembling" i4004 code! :-/

You loaded individual "object" files (prepared elsewhere) into memory and then used a monitor, of sorts, to command the programmer to move those bytes into the device being programmed. Then you separately entered a corresponding "compare" command. If you were unlucky enough to have a device that hadn't completely erased -- or had developed a stuck bit (that wasn't stuck the way you needed it to be) -- you started over again.

[Eventually, we started discarding the EPROMs that caused us repeated problems; we'd put tick marks on a device each time it screwed one of us so we could keep track of how flaky it was. Get screwed and see lots of tick marks? Toss it out. The boss wasn't keen on $50 devices (at one point) going into the trash!]

Prior to that, the 1702s were even more of a nuisance to program!
> I used jump tables at the beginning of each EPROM, so a modification
> of the code within that EPROM only required burning that EPROM, not
> the whole EPROM set.
That only works if you have room to spare. We actually wrote a small utility to grep(1) our sources and tabulate the number of each specific "CALL" instruction. The 7 most frequent ones were then assigned to the 7 restart vectors to allow us to save 2 bytes of EPROM for each of those invocations (i.e., if you "CALL foo" in 100 different places, you can save 200 bytes in the image -- handy when you have things like FADD, FSUB, FMUL, etc. littering your codebase!)
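[Editor's aside: that counting utility is easy to re-imagine in modern terms. The sketch below reads assembler source on stdin and tallies CALL targets; since the original was described only as a grep-based tool, the fixed-size table and the output format here are illustrative assumptions:

    #include <ctype.h>
    #include <stdio.h>
    #include <string.h>

    #define MAX_TARGETS 512
    #define NAME_LEN    32

    static char names[MAX_TARGETS][NAME_LEN];
    static int  counts[MAX_TARGETS];
    static int  ntargets;

    static void tally(const char *name)
    {
        for (int i = 0; i < ntargets; i++)
            if (strcmp(names[i], name) == 0) { counts[i]++; return; }
        if (ntargets < MAX_TARGETS) {
            strncpy(names[ntargets], name, NAME_LEN - 1);
            counts[ntargets++] = 1;
        }
    }

    int main(void)
    {
        char line[256];

        while (fgets(line, sizeof line, stdin)) {
            char *p = strstr(line, "CALL");   /* crude: no comment handling */
            if (!p)
                continue;
            p += 4;
            while (isspace((unsigned char)*p))
                p++;
            char name[NAME_LEN];
            int n = 0;
            while (*p && !isspace((unsigned char)*p) && n < NAME_LEN - 1)
                name[n++] = *p++;
            name[n] = '\0';
            if (n > 0)
                tally(name);
        }

        /* selection sort by count; the top 7 lines are the RST candidates */
        for (int i = 0; i < ntargets; i++) {
            int best = i;
            for (int j = i + 1; j < ntargets; j++)
                if (counts[j] > counts[best])
                    best = j;
            if (best != i) {
                int c = counts[i]; counts[i] = counts[best]; counts[best] = c;
                char tmp[NAME_LEN];
                strcpy(tmp, names[i]);
                strcpy(names[i], names[best]);
                strcpy(names[best], tmp);
            }
            printf("%6d  %s\n", counts[i], names[i]);
        }
        return 0;
    }

Feeding the sources through this and taking the top 7 lines gives the candidates for the restart vectors.]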
> By allocating some unprogrammed areas at the end of each EPROM, you
> could insert a jump instruction in front of the code to be replaced
> and put the modified code at the end of the EPROM. Of course, you must
> find a few bytes with a suitable bit pattern to reburn a jump
> instruction. Reburning a few bytes from 1 (usually also the unprogrammed
> state) to 0 took only a few seconds instead of several minutes.
We each maintained our own set of sources so that we had some control over what was present in our links. E.g., when working on one subsystem, I might choose to elide huge portions of the user interface and just stub all of its actions. That gives me a bit more elbow room, frees me from having to deal with any of the bugs in that code (someone else's responsibility) AND lets me access a few extra bytes of RAM that I would otherwise have to *share* with that other subsystem.

With *planning*, you could insert things like:

        TRY THIS
        JP IT_WORKED
        TRY THAT        ; dead code
        JP IT_WORKED
        TRY AGAIN       ; dead code
    IT_WORKED:

Then, when the code was running, if "THIS" didn't work as expected, you could overwrite the bytes that it occupied PLUS the "JP IT_WORKED" that immediately followed with 0x00 (NOP) and get another shot at some other aspect of the problem without having to build a new image AND burn a new set of EPROMs. I.e., you kept a "listing" of your code handy with absolute addresses penciled in for those key locations (obtained from the linkage editor's map).

Things were different, then. Software was seen as a special kind of hardware. We described our algorithms AS IF they were implemented with actual dedicated hardware (actually, this was the only practical way of dealing with the patentability-of-software issue that was just being addressed at the time). So, you looked at debugging as you would debugging a piece of hardwired hardware: I'll try this on this portion of the design; and something else on some other portion; etc. -- before reworking the prototype (hardware) to accommodate the successful changes and elide the unsuccessful ones.
On 05/12/14 08:11, Oliver Betz wrote:
> I'm sure that "C with properly used Lint" gives much better results,
> not least because using Lint correlates with a better development
> process.
The old saying was that cc is half a compiler; the other half is lint.
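[Editor's aside: for instance, two classic bugs that compile cleanly but that lint (or any modern static analyser) flags immediately -- an illustrative example, not one from the thread:

    #include <stdio.h>

    int main(void)
    {
        int ready = 0;
        unsigned u = 10;
        int i = -1;

        if (ready = 1)      /* assignment where a comparison was meant */
            printf("always taken\n");

        if (i < u)          /* -1 converts to a huge unsigned value: never true */
            printf("never printed\n");

        return 0;
    }
]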
Oliver Betz wrote:
> Paul E Bennett wrote:
>
> [...]
>
>>>> Errors that creep into projects are quite language and technology
>>>> agnostic.
>>>
>>> Ganssle presented numbers: 50..100 errors/KLOC in C, 5..10 in ADA,
>>> zero with SPARK.
>>>
>>> Of course, this doesn't disagree with your next statement:
>>>
>>>> 44% of a project's errors will be inserted within the specification stage
>>>> (see "Out of Control" by the UK Health and Safety Executive). This is why
>>>> it makes sense to remove those errors before you start the design effort.
>>
>> I think I know some of the places where Jack might have obtained his numbers
>> (and I do not necessarily disagree with them). The languages are not used in
>> isolation from a development process so you have to look at the overall
>> package for a proper comparison.
>
> I agree. The comparison is misleading, especially taken in isolation,
> and Ganssle's talk had a focus on processes. Nevertheless, the slides
> showed the numbers above without any reference to "process".
>
>> If you look at the development environments where each of those languages
>> are used you will find that the reason for the differences are more to do
>> with the development process users of those languages go through. The SPARK
>
> Of course. BTW, the comparison was against "C without static
> analysis", because he also stated that <10% of C programmers use
> static analysis.
>
> I'm sure that "C with properly used Lint" gives much better results,
> not least because using Lint correlates with a better development
> process.
>
> Oliver
I am not sure we know all that well where north is w.r.t. development process. There is much to be humble about.

-- Les Cargill
On 04.12.2014 12:51, Oliver Betz wrote:
> Paul E Bennett wrote:
>
> [...]
>
>>> Could it be that today's sophisticated tools lead to more "trial and
>>> error", less thinking before doing?
>>
>> Talk about cats amongst pigeons.
>
> causing the foreseeable defensiveness.
>
> [...]
>
>> Errors that creep into projects are quite language and technology agnostic.
>
> Ganssle presented numbers: 50..100 errors/KLOC in C, 5..10 in ADA,
> zero with SPARK.
It is the language, not the rest of the toolchain. "C" is the major contributor to the decline in software quality (where there was some quality to decline, of course).

Nowadays people have no clue where the machine stack is, write IRQ handlers in C, etc. -- in a way not dissimilar to writing novels in a language for which they need a phrasebook. The thing is, their novels get sold simply because the general public can't even use a phrasebook.

And this happened mainly because x86 entered the scene widely and made assembly programming impractical with its messy programming model.

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI http://www.tgi-sci.com
------------------------------------------------------
https://www.flickr.com/photos/didi_tgi/
Dimiter_Popoff wrote:
> On 04.12.2014 12:51, Oliver Betz wrote:
>> Paul E Bennett wrote:
>>
>> [...]
>>
>>>> Could it be that today's sophisticated tools lead to more "trial and
>>>> error", less thinking before doing?
>>>
>>> Talk about cats amongst pigeons.
>>
>> causing the foreseeable defensiveness.
>>
>> [...]
>>
>>> Errors that creep into projects are quite language and technology
>>> agnostic.
>>
>> Ganssle presented numbers: 50..100 errors/KLOC in C, 5..10 in ADA,
>> zero with SPARK.
>
> It is the language, not the rest of the toolchain.
> "C" is the major contributor to the decline in software quality (where
> there was some quality to decline, of course).
That's odd, since 'C' has been there since... well, the start. How can a thing-that-has-not-changed be the cause of decline? Some massive lag? Changes in the populations of practitioners?

I have always been a fan of C.A.R. "Tony" Hoare, but the online video of his talk about the billion-dollar mistake is perfect because someone stands up and notes that Haskell is perfectly safe until you invoke side effects -- like the I/O monad.
> Nowadays people have no clue where the machine stack is, write
> IRQ handlers in C, etc. -- in a way not dissimilar to writing
> novels in a language for which they need a phrasebook.
We all need phrasebooks.
> The thing is, their novels get sold simply because the general
> public can't even use a phrasebook.
> And this happened mainly because x86 entered the scene widely and
> made assembly programming impractical with its messy programming
> model.
I wrote more assembly language in x86 than in any other architecture. You want something to wreck things? Try assembly.
-- Les Cargill
On 05.12.2014 15:41, Les Cargill wrote:
> Dimiter_Popoff wrote:
>> On 04.12.2014 12:51, Oliver Betz wrote:
>>> Paul E Bennett wrote:
>>>
>>> [...]
>>>
>>>>> Could it be that today's sophisticated tools lead to more "trial and
>>>>> error", less thinking before doing?
>>>>
>>>> Talk about cats amongst pigeons.
>>>
>>> causing the foreseeable defensiveness.
>>>
>>> [...]
>>>
>>>> Errors that creep into projects are quite language and technology
>>>> agnostic.
>>>
>>> Ganssle presented numbers: 50..100 errors/KLOC in C, 5..10 in ADA,
>>> zero with SPARK.
>>
>> It is the language, not the rest of the toolchain.
>> "C" is the major contributor to the decline in software quality (where
>> there was some quality to decline, of course).
>
> That's odd, since 'C' has been there since... well, the start. How
> can a thing-that-has-not-changed be the cause of decline? Some
> massive lag? Changes in the populations of practitioners?
It is the popularity growth, not the birth date. And C does not prevent one from writing decent software; it only makes that more difficult -- and makes it much easier to write messy software. People who knew what their compiler does -- i.e., those who wrote the compiler -- must have been able to write some good code using it.
>> Nowadays people have no clue where the machine stack is, write
>> IRQ handlers in C, etc. -- in a way not dissimilar to writing
>> novels in a language for which they need a phrasebook.
>
> We all need phrasebooks.
Not all of us. I don't, for example.
>> The thing is, their novels get sold simply because the general
>> public can't even use a phrasebook.
>> And this happened mainly because x86 entered the scene widely and
>> made assembly programming impractical with its messy programming
>> model.
>
> I wrote more assembly language in x86 than in any other architecture.
> You want something to wreck things? Try assembly.
This explains why you see assembly as something impractical. There is no such thing as "assembly" language, really; there are worlds of difference between this or that "assembly".

And then there is my VPA (Virtual Processor Assembly), which makes me more efficient by at least an order of magnitude than anyone who uses C when it comes to projects which take more than a month to program (before you ask: my code is in the millions of lines, >50M sources over the past 20 years).

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/