
What's more important, optimisations or debugging?

Started by Unknown May 30, 2007
Colin Paul Gloster wrote:
> In news:1180571919.025883.219930@i13g2000prf.googlegroups.com
> timestamped 30 May 2007 17:38:39 -0700, Ryan H <rhapgood@gmail.com>
> posted:
> "On May 31, 9:21 am, BubbaGump <BubbaGump@localhost> wrote:
> >
> > I don't understand the question. Debugging features in a development
> > tool or extra debugging information in executable code?
>
> The question refers to debugging features in a toolsuite. [..]
>
> [..]"
>
> Has anyone experience or impressions of debuggers which allow stepping
> backwards in time through program flow, such as apparently provided
> for desktops/workstations by UndoDB ( WWW.Undo-Software.com ) and Java
> (Virtual Machine?) debuggers? If so, for which processors and with
> which tools? I imagine it would be possible to pay Undo Limited to
> port a version of its debugger which would be compatible with any of
> the targets supported by the GNU DeBugger GDB, as UndoDB is a wrapper
> for GDB.
>
> Curious,
> Colin Paul Gloster
Many debuggers let you look back in time to some extent by examining the call stack - this shows what called your current function, and the state of local variables in the caller (and its callers, and so on). But to get true backwards stepping, you need very sophisticated trace buffers. Some processors, combined with expensive hardware debuggers, can give you limited traces (such as indications of program flow), but a full trace requires capturing the data bus and all internal data flows. It's easy to do in a simulation, and possible in a full hardware emulator, but impossible with modern JTAG-type debugging.
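For targets without hardware trace, a crude software substitute is sometimes wired in by hand. A minimal sketch in C (the event encoding and buffer depth here are arbitrary illustrative choices, not from any particular tool):

#include <stdint.h>

#define TRACE_DEPTH 64              /* must be a power of two */

/* volatile so the debugger always sees the current contents */
static volatile uint16_t trace_buf[TRACE_DEPTH];
static volatile uint8_t  trace_head;

/* Record one event into a circular buffer of recent history. */
static inline void trace(uint16_t event_id)
{
    trace_buf[trace_head] = event_id;
    trace_head = (trace_head + 1) & (TRACE_DEPTH - 1);
}

/* Usage: sprinkle trace(__LINE__) at interesting points; after a
 * fault, dump trace_buf via the debugger to reconstruct a coarse
 * record of recent program flow. */

It is no substitute for real trace hardware, but it costs only a few cycles per event and works over any JTAG connection.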
"David Brown" <david@westcontrol.removethisbit.com> wrote in message 
news:465e6f47$0$15298$8404b019@news.wineasy.se...
> Dave Hansen wrote:
>> On May 30, 5:15 pm, rhapg...@gmail.com wrote:
>>> I'm trying to get a feel for what people now consider more important,
>>> optimisations or debugging ability. In the past with such tight memory
>>> limits I would have said optimisations but now with expanded-memory
>>> parts becoming cheaper I would think that debugging ability is more
>>> important in a development tool. It's not exactly a black and white
>>> question, debug or optimised, but more a ratio. eg. 50% debug/50%
>>> optimised or 70% debug/30% optimised, etc.
>>
>> The rule is "Make it right, _then_ make it fast." Fast enough is fast
>> enough. If the optimizer makes your code undebuggable, and you need
>> the debugger, don't use the optimizer.
>
> Remember Knuth's golden rules about optimisation:
>
> 1. Don't do it.
> 2. (For experts only) Don't do it yet.
I don't agree with this. For small programs it is easy to implement an efficient algorithm immediately rather than start with an inefficient one. It's hard to improve badly written code, so rewriting it from scratch would be better than trying to fix it.

For large programs it is essential that you select the best architecture and algorithms beforehand, as it is usually impossible to change them later. The bottlenecks are typically caused by badly designed interfaces adding too much overhead.

In my experience well designed code is both efficient and easy to understand, so it wouldn't need optimization (apart from fine tuning). In other words, if you *need* to optimise an application, you got it wrong.
> That applies to hand-tuning of the source code, rather than automatic
> optimisations in a compiler, but it's important to remember that the
> speed of the code is irrelevant if it does not work.
>
>> That said, I generally set my compiler to optimize for space. It
>> hasn't really caused me any debugging troubles in at least 5 or 10
>> years. Of course, most of my debug activity resembles inserting
>> printf statements rather than stepping through code in an emulator.
>> YMMV.
Yes, a debugger is really only required if you have a nasty pointer bug overwriting memory etc.
> In my experience, it is often much easier to debug code when you have
> at least some optimising enabled on the compiler. Code generated with
> all optimisations off is often hard to read (for example, local
> variables may end up on a stack, while register-based variables can
> be easier to understand).
Indeed, turning off all optimization makes things impossible to debug on some compilers. I prefer leaving most optimizations on as well.

Wilco
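The printf-style debugging mentioned above is commonly wrapped in a macro so that release builds carry no trace code at all. A minimal sketch, assuming a C99 compiler (all names here are illustrative):

#include <stdio.h>

/* Compile with -DNDEBUG to strip every trace call from the build. */
#ifdef NDEBUG
#define DEBUG_PRINT(...) ((void)0)
#else
#define DEBUG_PRINT(...) fprintf(stderr, __VA_ARGS__)
#endif

int checksum(const unsigned char *buf, int len)
{
    int sum = 0;
    for (int i = 0; i < len; i++)
        sum += buf[i];
    DEBUG_PRINT("checksum: len=%d sum=%d\n", len, sum);
    return sum;
}

On a target without stdio, the fprintf can be swapped for a UART write or the trace-buffer approach shown earlier; the point is that the debug scaffolding vanishes completely when NDEBUG is defined.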
On Thu, 31 May 2007 12:00:56 +0000, Wilco Dijkstra wrote:


>> Remember Knuth's golden rules about optimisation:
>>
>> 1. Don't do it.
>> 2. (For experts only) Don't do it yet.
>
> I don't agree with this. For small programs it is easy to implement an
> efficient algorithm immediately rather than start with an inefficient
> one. It's hard to improve badly written code, so rewriting it from
> scratch would be better than trying to fix it.
>
> For large programs it is essential that you select the best
> architecture and algorithms beforehand, as it is usually impossible to
> change them later. The bottlenecks are typically caused by badly
> designed interfaces adding too much overhead.
>
> In my experience well designed code is both efficient and easy to
> understand, so it wouldn't need optimization (apart from fine tuning).
> In other words, if you *need* to optimise an application, you got it
> wrong.
What you are describing is what I would imagine all experienced software engineers do. But then what do you do if performance isn't good enough, i.e. with respect to the design you got it wrong? I suspect that's when Knuth's golden rules kick in. I could of course be barking up the wrong tree - I haven't read Knuth.

Regards,
Paul.
Ryan H wrote:

> The question refers to debugging features in a toolsuite. When
> choosing a set of tools for a particular project, would you place more
> emphasis on finding a compiler/IDE combination that makes for easy,
> accurate debugging, or finding one [compiler] that can produce the
> most efficient code?
As with all such questions, the answer is that it depends. In this case, it depends on the particular project. Is it a quick-n-dirty hack on a platform you've never worked on before, but can assume is amply powerful enough for the job, so optimal code doesn't make a difference, but getting the job done quickly would? Or is it a tight squeeze of a hard problem into a small controller you already know, where you need all the help you can possibly get to make it fast, deadlines be damned?

As any handyman could tell you, it's primarily the job that decides what tools you need, with personal preferences a distant second. The best hammer money can buy won't help you turn a screw. And then of course, there's always the remote chance that you could get a toolchain that's damn near perfect both at debugging *and* at optimization.
> I know it's more complicated than it seems but sometimes you have to > step away from the details and just look at the big picture, that's > what I'm trying to do here.
The only big picture to be had here is that there is no such thing as a big picture. The world of engineering consists entirely of small pictures.
David Brown wrote:

> Remember Knuth's golden rules about optimisation:
>
> 1. Don't do it.
> 2. (For experts only) Don't do it yet.
I remember there being a third:

3. Before you do it, measure.
> That applies to hand-tuning of the source code, rather than automatic
> optimisations in a compiler, but it's important to remember that the
> speed of the code is irrelevant if it does not work.
While that latter statement applies rather widely, let's keep in mind that this is the embedded programming newsgroup after all, where real-time constraints are a regular old fact of life. That means speed and correctness may not be separable just like that. Sometimes, if code is slow, that alone means it does not work.
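A minimal sketch of that third rule in standard C. The routine being timed is a stand-in for a suspected hotspot, and clock() is coarse, hence the repetition loop:

#include <stdio.h>
#include <time.h>

/* Placeholder for the candidate hotspot under measurement. */
static void function_under_test(void)
{
    volatile int x = 0;
    for (int i = 0; i < 1000; i++)
        x += i;
}

int main(void)
{
    enum { ITERATIONS = 100000 };

    clock_t start = clock();
    for (int i = 0; i < ITERATIONS; i++)
        function_under_test();
    clock_t end = clock();

    /* Average cost per call, in microseconds. */
    printf("avg: %.3f us per call\n",
           1e6 * (double)(end - start) / CLOCKS_PER_SEC / ITERATIONS);
    return 0;
}

On a target with a free-running hardware timer, the same idea applies with the timer register in place of clock(); the essential point is to have a number before and after any change.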
On May 30, 6:15 pm, rhapg...@gmail.com wrote:
> I'm trying to get a feel for what people now consider more important,
> optimisations or debugging ability. In the past with such tight memory
> limits I would have said optimisations but now with expanded-memory
> parts becoming cheaper I would think that debugging ability is more
> important in a development tool. It's not exactly a black and white
> question, debug or optimised, but more a ratio. eg. 50% debug/50%
> optimised or 70% debug/30% optimised, etc.
>
> Any feedback would be greatly appreciated.
1. Debug
2. Debug
3. Debug

If it's slow you will have a chance to fix it. If it's broken, most customers will have moved on. But the choice is yours.

gm
"Paul Taylor" <paul_ng_pls_rem@tiscali.co.uk> wrote in message 
news:pan.2007.05.31.16.35.41.638016@tiscali.co.uk...
> On Thu, 31 May 2007 12:00:56 +0000, Wilco Dijkstra wrote:
>
>>> Remember Knuth's golden rules about optimisation:
>>>
>>> 1. Don't do it.
>>> 2. (For experts only) Don't do it yet.
>>
>> I don't agree with this. For small programs it is easy to implement an
>> efficient algorithm immediately rather than start with an inefficient
>> one. It's hard to improve badly written code, so rewriting it from
>> scratch would be better than trying to fix it.
>>
>> For large programs it is essential that you select the best
>> architecture and algorithms beforehand, as it is usually impossible to
>> change them later. The bottlenecks are typically caused by badly
>> designed interfaces adding too much overhead.
>>
>> In my experience well designed code is both efficient and easy to
>> understand, so it wouldn't need optimization (apart from fine tuning).
>> In other words, if you *need* to optimise an application, you got it
>> wrong.
>
> What you are describing is what I would imagine all experienced
> software engineers do. But then what do you do if performance isn't
> good enough, i.e. with respect to the design you got it wrong? I
> suspect that's when Knuth's golden rules kick in. I could of course be
> barking up the wrong tree - I haven't read Knuth.
The above rules refer to premature optimization, which is allowing efficiency considerations to affect the design (in a presumed negative way). A similar quote is "premature optimization is the root of all evil". Neither is Knuth's: the golden rules are Jackson's, and the other is Hoare's. However, my point is precisely that you have to design for efficiency, as it is not something you can add at a later stage. And efficiency matters a lot in the embedded world.

Back to your question of what to do when things go wrong. At that point you've got no choice but to optimize in every possible way. I "optimized" a 600K line application by compiling it with the best compiler money can buy and carefully choosing the optimal set of compilation options. There is not much else one can do once all hotspots have been removed.

Wilco
What about compiler/assembler optimisations that the toolsuite
performs? Some compiler optimisations can make things quite difficult
for the debugger (variable watching, stack tracing, etc.). Is it worth
turning these optimisations on if debug accuracy and ability are
compromised?

I'm noticing there almost seems to be a generational gap: some more
experienced developers never had access to great debug tools and have
learnt to live without, while some newer developers expect flexible
debugging facilities. This could be because they are developing on a
PC (features galore) before targeting more restrictive embedded
devices. Interesting.
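One possible middle ground, assuming a GCC-style toolchain (the optimize attribute below is GCC-specific, available from GCC 4.4 onwards, and the function name is a placeholder): leave the build optimised as a whole, but pin the one function under scrutiny at -O0.

/* GCC only: this function is compiled at -O0 so its locals stay in
 * memory and single-stepping behaves predictably, while the rest of
 * the translation unit keeps the normal optimisation level. */
__attribute__((optimize("O0")))
void function_being_debugged(void)
{
    int watched = 42;   /* visible to watchpoints, never folded away */
    (void)watched;
}

This avoids the all-or-nothing choice: the shipped image and the debugged image stay almost identical, differing only in the routine being examined.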

Ryan H wrote:
> What about compiler/assembler optimisations that the toolsuite
> performs? Some compiler optimisations can make things quite difficult
> for the debugger (variable watching, stack tracing, etc.). Is it worth
> turning these optimisations on if debug accuracy and ability are
> compromised?
Same answer as before: it depends entirely on what the critical aspect of the project is. If developing quickly matters more than the utmost speed of the generated code, by all means disable any optimizations that get in the way of debugging. If you need the code to run faster no matter what, screw elegance in debugging and suit up for some hard, dirty work. If all else fails, debug by inspecting the generated machine code, then fix the result of that effort in place (i.e. replace the critical parts by a known-good assembler subroutine).

And that's before we consider coding rules such as NASA's standing rule: debug what you fly, nothing else. Or the fact that the focus can differ between development speed and runtime speed even within a single project.
> I'm noticing there almost seems to be a generational gap: some more
> experienced developers never had access to great debug tools and have
> learnt to live without, while some newer developers expect flexible
> debugging facilities.
Of course there is. That generational gap is as old as time. Within years of the bow and arrow being invented, experienced hunters almost certainly felt the same way about pampered youngsters who thought of themselves as "hunters" even though they had never 'properly' learned to hunt equipped with nothing but a sharpened stone.

The real problem is not what those newer developers expect. It's that some of them actually *rely* on such comfortable tools. That will bite them in the private parts rather badly if they ever have to work in a more restricted environment.
On 31 May 2007 15:54:57 -0700, Ryan H <rhapgood@gmail.com> wrote:

>What about compiler/assembler optimisations that the toolsuite
>performs? Some compiler optimisations can make things quite difficult
>for the debugger (variable watching, stack tracing, etc.). Is it worth
>turning these optimisations on if debug accuracy and ability are
>compromised?
IMHO, any embedded designer should be sufficiently fluent in the actual assembler language of the target system in order to be able to put the breakpoints in the correct positions and interpret the results.

Paul
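One way to guarantee a breakpoint lands exactly where intended, even in heavily optimised code, is to plant it in the source. A sketch assuming an ARM target (the BKPT instruction halts the core under a JTAG/SWD debugger; other architectures have equivalents, such as x86's int3):

/* Plant a hard breakpoint at this exact point in the program.
 * ARM/Thumb only; a no-op elsewhere so the code still builds. */
static inline void debug_break(void)
{
#if defined(__arm__) || defined(__thumb__)
    __asm__ volatile ("bkpt #0");
#endif
}

Because the instruction is part of the generated code, the optimiser cannot move the stop point away from the statement you care about, which sidesteps the source-line-to-address guesswork entirely.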
