EmbeddedRelated.com
Forums

What's more important, optimisations or debugging?

Started by Unknown May 30, 2007

Chris Hills wrote:

>>>> I'm trying to get a feel for what people now consider more important,
>>>> optimisations or debugging ability.
> The choice was optimising or debugging.

Maybe the debugging of optimization would do?

VLV
Chris Hills <chris@phaedsys.org> wrote:
> In article <f3u75d$3q3$1@oravannahka.helsinki.fi>, ammonton@cc.full.stop.helsinki.fi writes
>> Chris Hills <chris@phaedsys.org> wrote:
>>> I did. The choice was optimising or debugging. Sacrificing debugging is a loss of quality.
>> There are several kinds of debugging, from pulsing an unused pin to HLL source-level debuggers with data visualizers, edit-and-continue and other whiz-bang features. Would you prefer your development system had more of the latter, even if it meant having to use more expensive parts?
> I usually used a decent ICE for debugging, also unit and system test.
Is that supposed to mean you favour higher-end debugging capabilities? -a
In article <f3umq2$gln$1@oravannahka.helsinki.fi>, 
ammonton@cc.full.stop.helsinki.fi writes
> Chris Hills <chris@phaedsys.org> wrote:
>> In article <f3u75d$3q3$1@oravannahka.helsinki.fi>, ammonton@cc.full.stop.helsinki.fi writes
>>> Chris Hills <chris@phaedsys.org> wrote:
>>>> I did. The choice was optimising or debugging. Sacrificing debugging is a loss of quality.
>>> There are several kinds of debugging, from pulsing an unused pin to HLL source-level debuggers with data visualizers, edit-and-continue and other whiz-bang features. Would you prefer your development system had more of the latter, even if it meant having to use more expensive parts?
>> I usually used a decent ICE for debugging, also unit and system test.
>
> Is that supposed to mean you favour higher-end debugging capabilities?
Debug or test? I have used simulators for unit test and some system tests, but I prefer an ICE. After you have done the testing (white box, black box, etc.) you can debug...

It depends on the system whether optimisation is permitted or not. Usually I add the optimisation (speed or size) as it is needed. Sometimes there is no need for either. In any event you need to test well and then, if required, debug or modify the system.

-- 
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
/\/\/ chris@phaedsys.org      www.phaedsys.org \/\/\
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
On Jun 4, 12:02 am, Thad Smith <ThadSm...@acm.org> wrote:

> The query was ambiguous, but seemed to be either a question about importance of skills for selling one's services or importance of features within a development tool suite. I took it as the latter.
The latter is what I was trying to convey, but obviously that wasn't as clear as I first thought (it never is).

The responses have been fantastic and I have to thank everyone for their help. I guess I'll provide some context for the question now. I'm a developer for an embedded tools company (no need for name dropping) and will soon be focussing on improving our compilers to support higher level debugging features. As has been mentioned, some optimisations can make this quite difficult, and whilst I will be aiming to make high level debugging as accurate and complete as possible, I was interested in where people's preferences lie.

Thanks again,
Ryan.
In article <1180910826.150146.269500@i38g2000prf.googlegroups.com>, Ryan 
H <rhapgood@gmail.com> writes
> On Jun 4, 12:02 am, Thad Smith <ThadSm...@acm.org> wrote:
>> The query was ambiguous, but seemed to be either a question about importance of skills for selling one's services or importance of features within a development tool suite. I took it as the latter.
>
> The latter is what I was trying to convey but obviously that wasn't as clear as I first thought (it never is).
>
> The responses have been fantastic and I have to thank everyone for their help, I guess I'll provide some context for the question now. I'm a developer for an embedded tools company (no need for name dropping) and will soon be focussing on improving our compilers to support higher level debugging features. As it has been mentioned some optimisations can make this quite difficult and whilst I will be aiming to make high level debugging as accurate and complete as possible I was interested in where people's preferences are.
>
> Thanks again, Ryan.
Test
Debug
Then optimise
Test again.

Note some safety-critical systems do not permit optimisations.
Wilco Dijkstra wrote:
> "David Brown" <david@westcontrol.removethisbit.com> wrote in message news:465e6f47$0$15298$8404b019@news.wineasy.se...
>> Dave Hansen wrote:
>>> On May 30, 5:15 pm, rhapg...@gmail.com wrote:
>>>> I'm trying to get a feel for what people now consider more important, optimisations or debugging ability. In the past with such tight memory limits I would have said optimisations but now with expanded-memory parts becoming cheaper I would think that debugging ability is more important in a development tool. It's not exactly a black and white question, debug or smallused, but more a ratio, e.g. 50% debug/50% optimised or 70% debug/30% optimised, etc.
>>> The rule is "Make it right, _then_ make it fast." Fast enough is fast enough. If the optimizer makes your code undebuggable, and you need the debugger, don't use the optimizer.
>> Remember Knuth's golden rules about optimisation:
>>
>> 1. Don't do it.
>> 2. (For experts only) Don't do it yet.
(Thanks for correcting my source on this quotation, by the way.)
> I don't agree with this. For small programs it is easy to implement an efficient algorithm immediately rather than start with an inefficient one. It's hard to improve badly written code, so rewriting it from scratch would be better than trying to fix it.
As I noted below, and as you already knew, the context of these rules is for optimisation of the implementation, not choice of algorithm - the whole point is that choosing a better algorithm will make a bigger difference to the result than a finely tuned poor algorithm. Any time spent fine-tuning your bubble-sort implementation is time wasted - switch to a heap sort or quicksort.
> For large programs it is essential that you select the most optimal architecture and algorithms beforehand, as it is usually impossible to change them later. The bottlenecks are typically caused by badly designed interfaces adding too much overhead.
As you say, it's hard to improve badly written code - but it's far from impossible to improve well-written code. If your code involves sorting (a nice example of a problem with many different algorithms), then it is perfectly reasonable to use a thrown-together bubble sort in your first prototypes. What's important is that when you start to need something faster, you switch to a faster algorithm rather than trying to optimise the poor algorithm.

And as for the hardware, it's important that you do *not* try to pick the most optimal target at the start. You start your development on hardware that can do more than you need, so that you have room for testing and debugging, and enough overhead that you are not caught cold by unexpected complexities. If you think your program will take 12K, you start with a 32K device in prototyping and testing - and cut it down to 16K for production. "Premature optimisation is the root of all evil" applies to the hardware too.
> In my experience well designed code is both efficient and easy to understand, so it wouldn't need optimization (apart from fine tuning). In other words, if you *need* to optimise an application, you got it wrong.
The need for hand-optimising the source code (or writing critical sections in assembly) is very much less now than it used to be, as compilers have got better. Well designed code clearly helps too.
>> That applies to hand-tuning of the source code, rather than automatic optimisations in a compiler, but it's important to remember that the speed of the code is irrelevant if it does not work.
>>> That said, I generally set my compiler to optimize for space. It hasn't really caused me any debugging troubles in at least 5 or 10 years. Of course, most of my debug activity resembles inserting printf statements rather than stepping through code in an emulator. YMMV.
>
> Yes, a debugger is really only required if you have a nasty pointer bug overwriting memory etc.
The need for debugging tools varies enormously between different types of projects, and a decent hardware debugger can be useful for many things besides the obvious software debugging.
>> In my experience, it is often much easier to debug code when you have at least some optimising enabled on the compiler. Code generated with all optimisations off is often hard to read (for example, local variables may end up on a stack, while register-based variables can be easier to understand).
>
> Indeed turning off all optimization makes things impossible to debug on some compilers. I prefer leaving on most optimizations as well.
>
> Wilco
Hans-Bernhard Bröker wrote:
> David Brown wrote:
>> Remember Knuth's golden rules about optimisation:
>>
>> 1. Don't do it.
>> 2. (For experts only) Don't do it yet.
>
> I remember there being a third:
>
> 3. Before you do it, measure.
>
>> That applies to hand-tuning of the source code, rather than automatic optimisations in a compiler, but it's important to remember that the speed of the code is irrelevant if it does not work.
>
> While that latter statement applies rather widely, let's keep in mind that this is the embedded programming newsgroup after all, where real-time constraints are a regular old fact of life. That means speed and correctness may not be separable just like that. Sometimes, if code is slow, that alone means it does not work.
Time limits can certainly be part of the requirements in an embedded system, and are thus part of the "correctness" of some code. And even when it is not strictly required, speed is often important - fast code allows slower oscillator speeds and/or longer periods of CPU sleep, leading to lower costs and lower power. But your prime concern is always to write code that works according to the requirements.
On Sun, 03 Jun 2007 15:47:06 -0700, Ryan H <rhapgood@gmail.com> wrote:

> I'm a developer for an embedded tools company (no need for name dropping) and will soon be focussing on improving our compilers to support higher level debugging features. As it has been mentioned some optimisations can make this quite difficult and whilst I will be aiming to make high level debugging as accurate and complete as possible I was interested in where people's preferences are.
I know I'm a boring old fart on this topic, but I've been coding embedded systems for 30+ years.

The essential issue for me is that the best tools for hardware bring-up are not the same as for embedded software development on working hardware with drivers. When I'm bringing up a new chip, what I need are one or two LEDs and test points, an oscilloscope, plus a Flash programmer and/or a JTAG unit. Once I've got as far as proving chip startup code, a serial port and an interrupt-driven timer ticker, a more software-oriented debug chain is appropriate.

But in either case, barring chip and compiler bugs, the bugs are in the app's source code. Finding them is just application of the formal scientific method:

   while bugs are known to exist
      observe this bug
      form a hypothesis
      derive experiment and test
      until this bug is fixed
   repeat

Note that there are two loops, the inner one being crucial. The faster you get round the inner loop, the faster you debug. The key part is teaching people to observe, and then teaching them to design experiments with yes/no answers.

All the numbers I've seen about software costs indicate that people debug code for two or three times as long as it takes them to write it. I rate debugging as far more important than compiler optimisation. But the people issue is in turn far more important than the tool chain. Paul Bennett's comment is relevant: some people can debug with their hands in their pockets; others just never learn to debug efficiently.

Stephen
-- 
Stephen Pelc, stephenXXX@mpeforth.com
MicroProcessor Engineering Ltd - More Real, Less Time
133 Hill Lane, Southampton SO15 5AF, England
tel: +44 (0)23 8063 1441, fax: +44 (0)23 8033 9691
web: http://www.mpeforth.com - free VFX Forth downloads
David Brown wrote:
> Wilco Dijkstra wrote:
>> "David Brown" <david@westcontrol.removethisbit.com> wrote
>>> Dave Hansen wrote:
>>>> [snip]
>>> Remember Knuth's golden rules about optimisation:
>>>
>>> 1. Don't do it.
>>> 2. (For experts only) Don't do it yet.
>
> (Thanks for correcting my source on this quotation, by the way.)
>
>> I don't agree with this. For small programs it is easy to implement an efficient algorithm immediately rather than start with an inefficient one. It's hard to improve badly written code, so rewriting it from scratch would be better than trying to fix it.
>
> As I noted below, and as you already knew, the context of these rules is for optimisation of the implementation, not choice of algorithm - the whole point is that choosing a better algorithm will make a bigger difference to the result than a finely tuned poor algorithm. Any time spent fine-tuning your bubble-sort implementation is time wasted - switch to a heap sort or quicksort.
Here you are missing an option. Also consider mergesort, combined with a malloced list. For an example of this see wdfreq.c, a part of the hashlib distribution. See:

<http://cbfalconer.home.att.net/download/>

for the full package.

-- 
cbfalconer at maineline dot net
CBFalconer wrote:
> David Brown wrote:
>> Wilco Dijkstra wrote:
>>> [snip]
>> As I noted below, and as you already knew, the context of these rules is for optimisation of the implementation, not choice of algorithm - the whole point is that choosing a better algorithm will make a bigger difference to the result than a finely tuned poor algorithm. Any time spent fine-tuning your bubble-sort implementation is time wasted - switch to a heap sort or quicksort.
>
> Here you are missing an option. Also consider mergesort, combined with a malloced list. For an example of this see wdfreq.c, a part of the hashlib distribution. See:
>
> <http://cbfalconer.home.att.net/download/>
>
> for the full package.
I was not intending to give a list of possible sorting algorithms - there are many, many more than the four mentioned here, and there are many things to consider if you are looking for an "optimal" solution. Sorting algorithms are fun to work with, since it is easy to understand and specify the problem, while offering a wide variety of algorithms, so a thread on the subject (or algorithm design in general) might be of general interest in c.a.e. - but it's off-topic for this thread.