
Gnu tools for ARM Cortex development

Started by Tim Wescott May 4, 2010
In article <4BFBC92D.9019CE15@bytecraft.com>,
Walter Banks  <walter@bytecraft.com> wrote:
<SNIP>
> The paper touches on source code ways to improve the quality
> of source level debugging information. Source level debugging is
> important but in many fundamental ways this is one of the major
> aggravating factors in gcc. One of the fundamental ways to ship
> reliable code is to ship the code that was debugged and tested.
> Code motion and other simple optimizations leave GCC's
> source level debug information significantly broken, forcing
> many developers to debug applications with much of the
> optimization off, then recompile later with optimization on but
> the code largely untested.
Tanenbaum once said in a lecture: "Global optimisers and symbolic debuggers are each other's arch enemies." A moment of thought should be enough to convince oneself of the truth of this.

I fail to see how this situation is different for GCC than for any compiler.

By the way:

- The very best code is tested but never debugged, because there is no need. (Chuck Moore, the inventor of Forth, reportedly never debugs. He checks his code and it works. Mostly his subprograms are one line. That makes it easier, of course.)
- I always run tests on shipped code. Don't you?
- If you expect the outcomes of different optimisation levels to be different, you're living a dangerous life, because apparently you don't trust your code not to have undefined behaviour.
Groetjes Albert
--
Albert van der Horst, UTRECHT, THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
albert@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst
Walter Banks wrote:
> David Brown wrote:
>
>> On 25/05/2010 14:57, Walter Banks wrote:
>>>
>>> Przemek Klosowski wrote:
>>>
>>>> On Fri, 21 May 2010 09:43:57 +0100, Chris H wrote:
>>>>
>>>>> There is no need except for amusement. GCC is a LONG way behind the main
>>>>> commercial compilers.
>>>>
>>>> Well, I am sure that some commercial compilers, especially those written
>>>> by smart guys like Walter, and the CPU designers like ARM, will beat GCC.
>>>> At the same time, here's an example of how x86 GCC does quite well in a
>>>> contest against Intel, Sun, Microsoft and LLVM compilers:
>>>>
>>>> http://www.linux-kongress.org/2009/slides/compiler_survey_felix_von_leitner.pdf
>>>>
>>>> It's an interesting paper in several ways---he points out that compilers
>>>> are often so good that tactical optimizations often don't make sense.
>>>
>>> The paper deals with a dozen or so optimizations and shows
>>> the variation in the generated code, quite useful. What is missing
>>> from the paper is any form of analysis of when the compiler should
>>> utilize a specific optimization and how each of the compilers
>>> made that choice.
>>>
>> That wasn't really the point of the paper. I believe the author was
>> aiming to show that it is better to write logical, legible code rather
>> than "smart" code, because it makes the code easier to read, easier to
>> debug, and gives the compiler a better chance to generate good code.
>
> He made that point, and I agree.
>
>> There was a time when you had to "hand optimize" your C code to get the
>> best results - the paper is just showing that this is no longer the
>> case, whether you are using gcc or another compiler (for the x86 or
>> amd64 targets at least). It was also showing that gcc is at least as
>> smart, and often smarter, than the other compilers tested for these
>> cases.
>
> Not really. The author used very simple examples that for the most
> part can be implemented with little more than peep-hole optimizers.
> He also didn't claim otherwise.
I think we agree here - the paper was not aiming for an in-depth comparison of optimisation techniques (though I don't think vectorisation counts as a simple peep-hole).
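To make the distinction concrete, here is a minimal sketch (my own example, not from the paper) of the kind of loop that gcc's auto-vectoriser handles at -O3 on targets with SIMD support - well beyond what a peep-hole pass can do:

    /* gcc -O3 will typically turn this loop into SSE or NEON vector
       instructions where the target supports them.  The 'restrict'
       qualifiers tell the compiler the arrays cannot overlap, so no
       run-time aliasing check is needed. */
    void add_arrays(float *restrict dst, const float *restrict a,
                    const float *restrict b, int n)
    {
        for (int i = 0; i < n; i++)
            dst[i] = a[i] + b[i];
    }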
>> But I did not see it as any kind of general analysis of the
>> optimisations and code quality of gcc or other compilers - it does not
>> make any claims about which compiler is "better". It only claims that
>> the compiler knows more about code generation than the programmer.
>
> Agreed, that has been true for quite a while in practically all compilers.
It's been true for a while with many compilers, but far from "practically all" compilers. It's true for high-end compilers, and it's true for gcc, but it is not always the case for the "cheap and cheerful" market of development tools. These don't have the budget for advanced compiler technology, nor can they get it "for free" like gcc (where small ports like avr-gcc or msp-gcc can benefit from the work done on the more mainstream ports like x86). There are a great many compilers available for a great many different processors which /do/ need "hand optimised C code" to generate the best object code. But the author's main point is that when targeting gcc or other sophisticated compilers, you want to write clear and simple code and let the compiler do the work - there are many developers who want to get the fastest possible code, but don't understand how to work with the compiler to get it.
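As a hypothetical illustration of "hand optimised C" versus clear C: with a sophisticated compiler at -O2, both functions below compile to essentially the same object code, so the pointer-walking version buys nothing except harder reading and debugging:

    /* "Hand optimised" style: manual pointer arithmetic. */
    long sum_fast(const int *p, int n)
    {
        long s = 0;
        const int *end = p + n;
        while (p != end)
            s += *p++;
        return s;
    }

    /* Clear style: gcc at -O2 generates essentially the same loop. */
    long sum_clear(const int *a, int n)
    {
        long s = 0;
        for (int i = 0; i < n; i++)
            s += a[i];
        return s;
    }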
>>> The paper touches on source code ways to improve the quality
>>> of source level debugging information. Source level debugging is
>>> important but in many fundamental ways this is one of the major
>>> aggravating factors in gcc. One of the fundamental ways to ship
>>> reliable code is to ship the code that was debugged and tested.
>>> Code motion and other simple optimizations leave GCC's
>>> source level debug information significantly broken, forcing
>>> many developers to debug applications with much of the
>>> optimization off, then recompile later with optimization on but
>>> the code largely untested.
>>>
>> I don't really agree with you here. There are three points to remember
>> here. One is that /all/ compilers that generate tight code will
>> re-arrange and manipulate the code. This includes constant folding,
>> strength reduction, inlining, dead-code elimination, etc., as well as
>> re-ordering code for maximum pipeline throughput and cache effects
>> (that applies more to bigger processors than small ones). You can't
>> generate optimal code and expect to be able to step through your code
>> line by line in logical order, or view (and change) all local
>> variables. Top-range debuggers will be able to fake some of this based
>> on debugging information from the compiler, but it will be faked.
>
> We can expect debugging information to tie the code being executed
> to the original statement. Inline code may have multiple links
> to the source. Code motion may execute code out of source order.
>
>> . . .
>>
>> Secondly, gcc can generate useful debugging information even when fully
>> optimising, without affecting the quality of the generated code. Many
>> commercial compilers I have seen give you a choice between no debug
>> information and fast code, or good debug information and slower code.
>
> This isn't true in the commercial compilers I am familiar with.
>
>> . . .
>>
>> Thirdly, there are several types of testing and several types of
>> debugging. When you are debugging your algorithms, you want to have
>> easy and clear debugging, with little regard to the speed. You then
>> use low optimisation settings, avoid inlining functions, use extra
>> "volatile" variables, etc. When your algorithm works, you can then
>> compile it at full speed for testing - at this point, you don't need
>> the same kind of line-by-line debugging. But that does not mean your
>> full-speed version is not debugged or tested!
>
> gcc and gcc-based compilers (the ones with the copyright filed off)
If you know of any gcc-based compilers with the copyrights filed off, I'm sure the FSF would be very happy to hear about it - just as they would tell you if they knew that your copyrights had been violated.
> often recommend the approach you suggest. It is the change of optimization
> levels that high reliability folks avoid. It has been a big problem for
> our customers who also use gcc.
High reliability folks will aim to write code that is correct and works regardless of optimisation levels and other settings, as well as being as independent as possible of compiler versions and other variables. Then they will fix these at a particular setup and only ever qualify their resulting program for a given build setup. Code that works differently on different optimisation settings, other than in terms of speed, is broken code (or very occasionally, a broken compiler).

Of course, knowing that your code is correct and verifying and qualifying it for high reliability requirements are very different things, which is why you do your heavy-duty testing with the same settings as shipping versions. But that does not mean you can't use different settings during development! Suggesting that you don't change compiler and debugger settings during development is like suggesting you don't distinguish between prototype card designs and production card designs.
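As a hypothetical example of code that "works" at -O0 but not at -O2 (and is therefore broken code, not a broken compiler): the overflow test below relies on signed overflow, which is undefined behaviour in C, so gcc is entitled to assume it never happens and delete the check at higher optimisation levels:

    #include <limits.h>

    /* Broken: signed overflow is undefined behaviour, so at -O2 gcc
       may assume 'x + 1 > x' always holds and fold this to 0.  It
       often appears to work at -O0. */
    int will_overflow(int x)
    {
        return x + 1 < x;
    }

    /* Correct: test against the limit before doing the arithmetic. */
    int will_overflow_ok(int x)
    {
        return x == INT_MAX;
    }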
Albert van der Horst wrote:
> In article <4BFBC92D.9019CE15@bytecraft.com>,
> Walter Banks <walter@bytecraft.com> wrote:
> <SNIP>
>> The paper touches on source code ways to improve the quality
>> of source level debugging information. Source level debugging is
>> important but in many fundamental ways this is one of the major
>> aggravating factors in gcc. One of the fundamental ways to ship
>> reliable code is to ship the code that was debugged and tested.
>> Code motion and other simple optimizations leave GCC's
>> source level debug information significantly broken, forcing
>> many developers to debug applications with much of the
>> optimization off, then recompile later with optimization on but
>> the code largely untested.
>
> Tanenbaum once said in a lecture: "Global optimisers and symbolic
> debuggers are each other's arch enemies." A moment of thought should
> be enough to convince oneself of the truth of this.
>
> I fail to see how this situation is different for GCC than for
> any compiler.
>
> By the way:
> - The very best code is tested but never debugged, because there is
>   no need. (Chuck Moore, the inventor of Forth, reportedly never
>   debugs. He checks his code and it works. Mostly his subprograms
>   are one line. That makes it easier, of course.)
> - I always run tests on shipped code. Don't you?
> - If you expect the outcomes of different optimisation levels to be
>   different, you're living a dangerous life, because apparently you
>   don't trust your code not to have undefined behaviour.
I agree on all of the above. A few more relevant quotations:

Knuth: "Beware of bugs in the above code; I have only proved it correct, not tried it."

(I can't remember who said the following - I think it was either K or R of K&R C fame.) "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."
On Tue, 25 May 2010 14:24:00 +0000 (UTC), Grant Edwards
<invalid@invalid.invalid> wrote:

>> http://www.linux-kongress.org/2009/slides/compiler_survey_felix_von_leitner.pdf
>>
>> It's an interesting paper in several ways
>
> Is the paper available somewhere?
I entered the URL in firefox and got it. What is your problem?

--
42Bastian
Do not email to bastian42@yahoo.com, it's a spam-only account :-)
Use <same-name>@monlynx.de instead !
On Tue, 25 May 2010 08:57:17 -0400, Walter Banks
<walter@bytecraft.com> wrote:


> Code motion and other simple optimizations leave GCC's
> source level debug information significantly broken, forcing
> many developers to debug applications with much of the
> optimization off, then recompile later with optimization on but
> the code largely untested.
I don't see why "broken debug information" is an excuse for not testing the final version. In an ideal world, there should be no need to debug the final version ;-)

And if optimization breaks your code, it is likely your code was broken before (e.g. missing 'volatile').

--
42Bastian
Do not email to bastian42@yahoo.com, it's a spam-only account :-)
Use <same-name>@monlynx.de instead !
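A minimal sketch of the classic missing-'volatile' case (hypothetical code, with a made-up interrupt handler name): the loop below "works" unoptimised, but at -O2 the compiler may read the flag once, keep it in a register, and spin forever:

    /* Broken: 'done' should be declared 'static volatile int done;'
       so that every read goes to memory and the ISR's update is seen. */
    static int done;

    void uart_rx_isr(void)      /* hypothetical interrupt handler */
    {
        done = 1;
    }

    void wait_for_uart(void)
    {
        while (!done)
            ;                   /* at -O2 this may never terminate */
    }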
On 26/05/2010 08:04, 42Bastian Schick wrote:
> On Tue, 25 May 2010 14:24:00 +0000 (UTC), Grant Edwards
> <invalid@invalid.invalid> wrote:
>
>>> http://www.linux-kongress.org/2009/slides/compiler_survey_felix_von_leitner.pdf
>>>
>>> It's an interesting paper in several ways
>>
>> Is the paper available somewhere?
>
> I entered the URL in firefox and got it. What is your problem?
I suspect he was hoping to find a full text paper, with the transcript of the talk, rather than just the slides.
On 2010-05-26, 42Bastian Schick <bastian42@yahoo.com> wrote:
> On Tue, 25 May 2010 14:24:00 +0000 (UTC), Grant Edwards
> <invalid@invalid.invalid> wrote:
>
>>> http://www.linux-kongress.org/2009/slides/compiler_survey_felix_von_leitner.pdf
>>>
>>> It's an interesting paper in several ways
>>
>> Is the paper available somewhere?
>
> I entered the URL in firefox and got it. What is your problem?
I didn't find a paper. All I could find were "powerpoint" slides.

--
Grant Edwards
grant.b.edwards at gmail.com
Yow! Bo Derek ruined my life!
On 2010-05-26, David Brown <david@westcontrol.removethisbit.com> wrote:
> On 26/05/2010 08:04, 42Bastian Schick wrote:
>> On Tue, 25 May 2010 14:24:00 +0000 (UTC), Grant Edwards
>> <invalid@invalid.invalid> wrote:
>>
>>>> http://www.linux-kongress.org/2009/slides/compiler_survey_felix_von_leitner.pdf
>>>>
>>>> It's an interesting paper in several ways
>>>
>>> Is the paper available somewhere?
>>
>> I entered the URL in firefox and got it. What is your problem?
>
> I suspect he was hoping to find a full text paper, with the
> transcript of the talk, rather than just the slides.
I don't really care about a transcript of the talk (nor the slides that accompanied the talk), I was just hoping to read the actual paper.

--
Grant Edwards
grant.b.edwards at gmail.com
Yow! And then we could sit on the hoods of cars at stop lights!
On Wed, 26 May 2010 06:09:00 GMT, bastian42@yahoo.com (42Bastian
Schick) wrote:

> On Tue, 25 May 2010 08:57:17 -0400, Walter Banks
> <walter@bytecraft.com> wrote:
>
>> Code motion and other simple optimizations leave GCC's
>> source level debug information significantly broken, forcing
>> many developers to debug applications with much of the
>> optimization off, then recompile later with optimization on but
>> the code largely untested.
>
> I don't see why "broken debug information" is an excuse for not
> testing the final version. In an ideal world, there should be no need
> to debug the final version ;-)
>
> And if optimization breaks your code, it is likely your code was
> broken before (e.g. missing 'volatile').
That isn't true ... optimizations frequently don't play well together and many combinations are impossible to reconcile on a given chip.

GCC isn't a terribly good compiler and its high optimization modes are notoriously unstable. A lot of perfectly good code is known to break under -O3, and even -O2 is dangerous in certain situations.

George
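For what it's worth, one well-known pattern behind "breaks at -O2" reports is a strict-aliasing violation rather than a compiler bug. A hypothetical example of the pattern:

    #include <stdint.h>
    #include <string.h>

    /* Broken: reading a uint32_t through a float pointer violates C's
       aliasing rules.  At -O2 gcc enables -fstrict-aliasing and may
       reorder or cache the accesses, changing the result. */
    float bits_to_float_broken(uint32_t u)
    {
        return *(float *)&u;
    }

    /* Well-defined alternative: copy the representation with memcpy,
       which gcc compiles to a single register move at any
       optimisation level. */
    float bits_to_float(uint32_t u)
    {
        float f;
        memcpy(&f, &u, sizeof f);
        return f;
    }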
On Tue, 25 May 2010 08:57:17 -0400, Walter Banks
<walter@bytecraft.com> wrote:

> Code motion and other simple optimizations leave GCC's
> source level debug information significantly broken ...
GCC isn't a terribly good compiler. Nonetheless I think it is misleading to lump code motion with "simple" optimizations. The dependency analyses required to safely move any but the simplest straight-line code are quite involved.

George
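To illustrate the dependency question (my own sketch): before a compiler can hoist the loop-invariant load of '*scale' below, it must prove that the writes through 'dst' never modify '*scale'. Without that proof the load has to stay inside the loop:

    /* 'dst' might alias '*scale', so a conservative compiler reloads
       '*scale' on every iteration. */
    void scale_all(float *dst, const float *scale, int n)
    {
        for (int i = 0; i < n; i++)
            dst[i] *= *scale;
    }

    /* With 'restrict' the dependency is resolved, and the compiler can
       hoist the load, effectively rewriting the body as:
           float s = *scale;
           for (int i = 0; i < n; i++) dst[i] *= s;              */
    void scale_all_hoisted(float *restrict dst,
                           const float *restrict scale, int n)
    {
        for (int i = 0; i < n; i++)
            dst[i] *= *scale;
    }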