
Gnu tools for ARM Cortex development

Started by Tim Wescott May 4, 2010
Chris H wrote:

[...]

> There is no need except for amusement. GCC is a LONG way behind the main
> commercial compilers.

Which compiler(s) would you recommend for the Coldfire and CM3?

Oliver
--
Oliver Betz, Munich
despammed.com might be broken, use Reply-To:
On 21/05/2010 13:39, Oliver Betz wrote:
> David Brown wrote:
>
> [...]
>
>> In my experience, gcc produces very good code for general C (and C++,
>> Ada, Fortran, etc.) for the main 32-bit targets, such as m68k/Coldfire,
>> ARM, MIPS, x86, and PPC as well as the 64-bit targets PPC, MIPS, and amd64.
>
> what I have seen in my tests till now looked good, besides a strange
> re-ordering of instructions making the generated code not faster but
> unreadable (e.g. in the debugger).
>
> And it could be that the re-ordering affects performance when
> accessing slow Coldfire V2 peripherals (consecutive accesses to
> peripherals cost more wait states), but I didn't investigate this yet.
Re-ordering is done for many reasons - confusing the debugger is not an aim, but it is a side-effect! When you want accurate stepping during debugging, it can be useful to reduce optimisation to -O1 to avoid a fair amount of the re-ordering.

How much re-ordering affects the running code depends on the target. For some cpus, pipelining of instructions is important for speed, so the compiler does a lot of re-arranging. Typically you have a latency between starting an instruction and the resulting value being available in a register. If you can fit an unrelated instruction in between the first instruction and the code using that result, you can make use of that "dead" time.
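To illustrate (a minimal sketch only - the actual scheduling depends entirely on the target and the optimisation flags):

/* Illustrative only: on a pipelined target the compiler may schedule
   the independent multiply between the load of *p and the use of its
   result, so the load latency is hidden instead of stalling. */
int hide_load_latency(const int *p, int a, int b)
{
    int loaded = *p;    /* result not needed for a cycle or two */
    int other = a * b;  /* unrelated work fills the "dead" time */
    return loaded + other;
}

In the generated code the multiply may well land between the load and the final add, even though it comes "later" in the source - exactly the kind of re-ordering that confuses a debugger.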
> [...]
>
>> One area in which gcc has been poor compared to commercial compilers
>> is whole-program optimisation (a.k.a. link time optimisation, inter
>> module optimisation, omniscient code generation, etc.). For several
>
> since this affects mainly code size, this is no problem for me. My
> applications are small and time critical, so I need only speed.
It affects speed too, depending on how your code is structured. In particular, with LTO the compiler is able to inline functions across modules, which is a speed gain. gcc 4.5 is able to do even more fun things - if you have a function that is called often as "foo(1, x)" and "foo(2, x)", but never anything else for the first parameter, it can effectively re-factor your code into "foo1(x)" and "foo2(x)" as two functions with constant values. These constant values can then be used to optimise the implementation of the two copies of foo(). Typically (though not always), this leads to extra code space, but it can speed up some types of code.
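As a hypothetical sketch of what that cloning looks like at source level (foo() and its callers are invented for illustration; if I remember the option name correctly, the relevant switch is -fipa-cp-clone):

/* foo() is only ever called with 1 or 2 as its first argument, so the
   compiler can emit two specialised clones - effectively foo1() and
   foo2() - with the mode test folded away in each copy. */
static int foo(int mode, int x)
{
    if (mode == 1)
        return 3 * x;
    return x + 100;
}

int bar(int x) { return foo(1, x); }  /* becomes a call to the "foo1" clone */
int baz(int x) { return foo(2, x); }  /* becomes a call to the "foo2" clone */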
> [...]
>
>>> General code generation problems or library quality?
>>
>> Libraries also vary a lot in quality. There are also balances to be
>> made - libraries aimed at desktop use will put more effort into
>> flexibility and standards compliance (such as full IEEE floating point
>> support), while those aimed at embedded systems emphasise size and speed.
>
> newlib, uClibc? IMO still bloated for small applications.
Yes, these are aimed at larger systems (for example, with code space of 0.5 MB to 16 MB) or at embedded Linux systems.
>> This is an area where the various commercial gcc vendors differentiate
>> their products.
>
> At least CodeSourcery doesn't tell much about specific advantages of
> their libraries.
There is a fair amount of information in the documentation - perhaps there's not much in the marketing details. But you can download the documentation if you want - you can also download the free version of the tools as well as getting an evaluation license. I don't actually make much use of the standard C library in small systems, so I can't tell you much about CodeSourcery's implementation.
> And since the libraries have to cover a broad range of applications,
> it might be necessary to compile them with specific settings - who
> provides sources?
CodeSourcery gives you the sources, depending on the version of the license that you buy.
> Oliver
David Brown wrote:

[...]

>>> In my experience, gcc produces very good code for general C (and C++,
>>> Ada, Fortran, etc.) for the main 32-bit targets, such as m68k/Coldfire,
>>> ARM, MIPS, x86, and PPC as well as the 64-bit targets PPC, MIPS, and amd64.
>>
>> what I have seen in my tests till now looked good, besides a strange
>> re-ordering of instructions making the generated code not faster but
>> unreadable (e.g. in the debugger).
>>
>> And it could be that the re-ordering affects performance when
>> accessing slow Coldfire V2 peripherals (consecutive accesses to
>> peripherals cost more wait states), but I didn't investigate this yet.
>
> Re-ordering is done for many reasons - confusing the debugger is not an
> aim, but it is a side-effect! When you want accurate stepping during
> debugging, it can be useful to reduce optimisation to -O1 to avoid a
> fair amount of the re-ordering.
>
> How much re-ordering affects the running code depends on the target.
> For some cpus, pipelining of instructions is important for speed, so the
AFAIK not very important for the Coldfire V2, because...
> compiler does a lot of re-arranging. Typically you have a latency
> between starting an instruction and the resulting value being available
> in a register.
...this doesn't happen.
> If you can fit an unrelated instruction in between the
> first instruction and the code using that result, you can make use of
> that "dead" time.
This applies to accesses to chip peripherals in Coldfire microcontrollers: after a write access, subsequent write accesses are delayed for a certain time. If the compiler "collects" these writes back to back (which can happen - being volatile, each write must still be performed), the result is much slower than issuing them immediately, and therefore spread out in time. But as I wrote, I haven't yet tried to construct such cases.
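To make it concrete, something like this (the register names and addresses are invented for illustration, not real Coldfire registers):

/* Sketch only. The volatile accesses must all be performed and must
   stay in order, but if the compiler schedules them back to back,
   each write after the first may pay the extra peripheral wait states. */
#define TIMER_LOAD (*(volatile unsigned short *)0x40000104)
#define TIMER_CTRL (*(volatile unsigned short *)0x40000100)

void start_timer(unsigned short ticks)
{
    TIMER_LOAD = ticks; /* first peripheral write */
    TIMER_CTRL = 1;     /* immediately following write - may be delayed
                           by the bus wait states */
}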
>> [...]
>>
>>> One area in which gcc has been poor compared to commercial compilers
>>> is whole-program optimisation (a.k.a. link time optimisation, inter
>>> module optimisation, omniscient code generation, etc.). For several
>>
>> since this affects mainly code size, this is no problem for me. My
>> applications are small and time critical, so I need only speed.
>
> It affects speed too, depending on how your code is structured. In
> particular, with LTO the compiler is able to inline functions across
> modules, which is a speed gain. gcc 4.5 is able to do even more fun
> things - if you have a function that is called often as "foo(1, x)" and
> "foo(2, x)", but never anything else for the first parameter, it can
> effectively re-factor your code into "foo1(x)" and "foo2(x)" as two
> functions with constant values. These constant values can then be used
I see. Until now I have done this manually for relevant functions. Of course, it would be cleaner to have the compiler do the optimization. [...]
>> At least CodeSourcery doesn't tell much about specific advantages of
>> their libraries.
>
> There is a fair amount of information in the documentation - perhaps
> there's not much in the marketing details. But you can download the
> documentation if you want - you can also download the free version of
> the tools as well as getting an evaluation license.
I did so, but the evaluation version contains the same documentation as newlib (!). The "Getting Started" document tells me: "CSLIBC is derived from Newlib but has been optimized for smaller code size on embedded targets. Additional performance improvements will be added in future releases."

Well, I had a brief look at newlib. IMO the "one for all" approach, and the attempt to be compatible with every non-standard environment as well, lead to rather convoluted code.
> I don't actually make much use of the standard C library in small
> systems, so I can't tell you much about CodeSourcery's implementation.
This seems to be the more efficient approach. Likely I can implement the needed (trivial) functions in less time than I would need to fiddle with newlib (and its descendants), uClibc, etc.
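For example, the kind of trivial replacement I mean - no alignment or word-size tricks, just small and obvious (a sketch for a freestanding build that replaces the library's version):

#include <stddef.h>

/* Bare-bones memcpy: byte by byte, adequate for a small application
   that only copies short buffers. */
void *memcpy(void *dst, const void *src, size_t n)
{
    unsigned char *d = dst;
    const unsigned char *s = src;
    while (n--)
        *d++ = *s++;
    return dst;
}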
>> And since the libraries have to cover a broad range of applications,
>> it might be necessary to compile them with specific settings - who
>> provides sources?
>
> CodeSourcery gives you the sources, depending on the version of the
> license that you buy.
I'm not sure about this (see earlier in this thread), but I haven't asked them yet.

Oliver
--
Oliver Betz, Munich
despammed.com might be broken, use Reply-To:
In article <9O2dndwkDrE9h2vWnZ2dnUVZ8vydnZ2d@lyse.net>,
David Brown  <david.brown@hesbynett.removethisbit.no> wrote:

<SNIP>

> If this particular story is referring to software development, then it's
> a different matter. Trying to make use of existing open source software
> in the development of your own products can be a legal minefield,
> especially if you want to mix and match code with different licenses.
> And in this context, people often consider the GPL to be very
> restrictive, especially compared to BSD licenses.
Is this a deliberate misrepresentation?

Incorporating open source software in your own software, especially if you want to circumvent the spirit of the license, can be tricky. "Making use of existing open source software" - like, for instance, using Linux and a gcc compiler to develop software for an embedded system - has in general fewer restrictions, is easier, involves less risk of breaking license counts (this happens sometimes despite due vigilance), etc. etc.

I once ported an embedded 68K system to gcc. 1]

I ended up scrapping the (supposedly high quality) SUN C compiler we bought a license for in order to build the 68K cross-compiler. (In order not to hamper other developments we wanted an extra license.) This was because we already had a legal SUN compiler, but it was less pain to install a gcc SUN compiler than to get even the license manager working properly on a cluster with those two licenses. Now who is laying down a "legal minefield"? Building gcc took a fraction of the time and effort it took to get even a service engineer on the phone.

The resulting gcc 68K cross-compiler generated the exact same code, but not as fast. Even if a total build took 10 minutes instead of 5, who cares? (That would be a dramatic influence of a compiler on a total build process.)

Regarding quality: the 68K gcc was a dramatic improvement, and plenty good enough to shrink the code by 30%, which meant new features could be added to EPROM-restricted hardware. A difference of 10% in gcc's "performance" (read: speed) is a big deal in advertisements, but much less so in practice. (In this project we didn't care and didn't measure performance. Mechanics determined the speed.)

Groetjes Albert

1] There were no other options than gcc. I needed to change the C compiler to adapt to existing assembler libraries, so only a source license would do. Oh, and I investigated how to get a source license for the original compiler. I gave up because I didn't even manage to establish who owned the rights to it. Talking of "legal minefields", sheesh!

--
Albert van der Horst, UTRECHT, THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
albert@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst
Albert van der Horst wrote:
> In article <9O2dndwkDrE9h2vWnZ2dnUVZ8vydnZ2d@lyse.net>,
> David Brown <david.brown@hesbynett.removethisbit.no> wrote:
>
> <SNIP>
>
>> If this particular story is referring to software development, then it's
>> a different matter. Trying to make use of existing open source software
>> in the development of your own products can be a legal minefield,
>> especially if you want to mix and match code with different licenses.
>> And in this context, people often consider the GPL to be very
>> restrictive, especially compared to BSD licenses.
>
> Is this a deliberate misrepresentation?
I didn't think it was a misrepresentation, and if I was unclear then it certainly wasn't deliberate. Re-reading what I wrote, I can see how it could be misinterpreted, and I thank you for clarifying it. I was trying to say the same thing as you do below.
> Incorporating open source software in your own software, especially
> if you want to circumvent the spirit of the license, can be tricky.
Yes, that's correct. It is clearly /possible/ to incorporate open source software in your own software - you just need to follow the license requirements. And many pieces of open source software aimed at embedded targets come with very developer-friendly licences to make it easier. However, some are more awkward - you have to check carefully.

Of course, the same thing applies when you are incorporating closed source software in your own software. While commercially licensed libraries and code seldom have restrictions on the license for code that links to them (unlike the GPL, for example), and seldom require prominent copyright notices (unlike some BSD licenses), they all have a license with legal requirements and restrictions. This might, for example, restrict you from selling the software on to third parties as part of another library, or perhaps restrict the countries you can export your product to. There might be complicated requirements for royalties, auditing, developer PC node locking, etc. The issues are different from those of open source software, but there are issues nonetheless.
> "Making use of existing open source software" like for instance > using Linux and a gcc compiler to develop software for an > embedded system has in general fewer restrictions, is easier, > involves less risks of breaking license counts (this happens > sometimes despite due vigilance) etc. etc. >
Use of open source developer programs like gcc is very seldom a problem (unless you have company management with bizarre company rules, of course). It is very difficult to violate typical open source licenses by simply /using/ the programs. As in your example below, this is in contrast to commercial programs, some of which can be particularly awkward to use legally and correctly within their licenses.
> I once ported an embedded 68K system to gcc. 1]
>
> I ended up scrapping the (supposedly high quality) SUN C compiler
> we bought a license for in order to build the 68K cross-compiler.
> (In order not to hamper other developments we wanted an extra
> license.) This was because we already had a legal SUN compiler, but
> it was less pain to install a gcc SUN compiler than to get even the
> license manager working properly on a cluster with those two licenses.
> Now who is laying down a "legal minefield"? Building gcc took a
> fraction of the time and effort it took to get even a service
> engineer on the phone.
>
> The resulting gcc 68K cross-compiler generated the exact same code,
> but not as fast. Even if a total build took 10 minutes instead
> of 5, who cares? (That would be a dramatic influence of a compiler
> on a total build process.)
I've seen similar cases where using gcc was simply much faster and easier than using a commercial compiler. I've also seen cases where using gcc has taken more time and effort than getting a commercial tool into action. All one can say for sure is that there is no easy way to judge which would be the best tool for a given job - a high price is no indication of quality or time-saving, just as a price of zero is no indication of low real-world costs.
> Regarding quality: the 68K gcc was a dramatic improvement, and
> plenty good enough to shrink the code by 30%, which meant new
> features could be added to EPROM-restricted hardware.
> A difference of 10% in gcc's "performance" (read: speed)
> is a big deal in advertisements, but much less so in practice.
> (In this project we didn't care and didn't measure performance.
> Mechanics determined the speed.)
My own experience with gcc for the 68k is similar - it's of similar code generation quality to the modern big-name commercial compiler I've compared it to (and much better than the older big-name commercial compiler I used previously). The balance between code size and code speed varies a little, and the techniques for squeezing the very best out of the code are compiler dependent, but certainly gcc is a fully appropriate compiler for good code on the 68k.

Slower run-time performance for the compiler itself doesn't come as a big surprise. gcc is built up of parts rather than being a single speed-optimised tool. Part of this is from its *nix heritage - if you use gcc on a Windows machine it can be noticeably slower than on a *nix machine, simply because process creation and communication are slower on Windows.
> Groetjes Albert
>
> 1] There were no other options than gcc. I needed to change the
> C compiler to adapt to existing assembler libraries, so only a
> source license would do. Oh, and I investigated how to get a
> source license for the original compiler. I gave up because I
> didn't even manage to establish who owned the rights to it.
> Talking of "legal minefields", sheesh!
>
> --
On Fri, 21 May 2010 09:43:57 +0100, Chris H wrote:

> There is no need except for amusement. GCC is a LONG way behind the main
> commercial compilers.
Well, I am sure that some commercial compilers, especially those written by smart guys like Walter, and the CPU designers like ARM, will beat GCC. At the same time, here's an example of how x86 GCC does quite well in a contest against Intel, Sun, Microsoft and LLVM compilers:

http://www.linux-kongress.org/2009/slides/compiler_survey_felix_von_leitner.pdf

It's an interesting paper in several ways - he points out that compilers are often so good that tactical optimizations don't make sense.

Przemek Klosowski wrote:

> On Fri, 21 May 2010 09:43:57 +0100, Chris H wrote:
>
> > There is no need except for amusement. GCC is a LONG way behind the main
> > commercial compilers.
>
> Well, I am sure that some commercial compilers, especially those written
> by smart guys like Walter, and the CPU designers like ARM, will beat GCC.
> At the same time, here's an example of how x86 GCC does quite well in a
> contest against Intel, Sun, Microsoft and LLVM compilers:
>
> http://www.linux-kongress.org/2009/slides/compiler_survey_felix_von_leitner.pdf
>
> It's an interesting paper in several ways - he points out that compilers
> are often so good that tactical optimizations don't make sense.
The paper deals with a dozen or so optimizations and shows the variation in the generated code - quite useful. What is missing from the paper is any analysis of when the compiler should utilize a specific optimization, and of how each of the compilers made that choice.

The paper touches on source-code techniques for improving the quality of source level debugging information. Source level debugging is important, but in many fundamental ways this is one of the major aggravating factors in gcc. One of the fundamental ways to ship reliable code is to ship the code that was debugged and tested. Code motion and other simple optimizations leave GCC's source level debug information significantly broken, forcing many developers to debug applications with much of the optimization off, then recompile later with optimization on but with the code largely untested.

Regards, Walter..
--
Walter Banks
Byte Craft Limited
http://www.bytecraft.com
On 25/05/2010 14:57, Walter Banks wrote:
> Przemek Klosowski wrote:
>
>> On Fri, 21 May 2010 09:43:57 +0100, Chris H wrote:
>>
>>> There is no need except for amusement. GCC is a LONG way behind the main
>>> commercial compilers.
>>
>> Well, I am sure that some commercial compilers, especially those written
>> by smart guys like Walter, and the CPU designers like ARM, will beat GCC.
>> At the same time, here's an example of how x86 GCC does quite well in a
>> contest against Intel, Sun, Microsoft and LLVM compilers:
>>
>> http://www.linux-kongress.org/2009/slides/compiler_survey_felix_von_leitner.pdf
>>
>> It's an interesting paper in several ways - he points out that compilers
>> are often so good that tactical optimizations don't make sense.
>
> The paper deals with a dozen or so optimizations and shows
> the variation in the generated code - quite useful. What is missing
> from the paper is any analysis of when the compiler should
> utilize a specific optimization, and of how each of the compilers
> made that choice.
That wasn't really the point of the paper. I believe the author was aiming to show that it is better to write logical, legible code rather than "smart" code, because it makes the code easier to read, easier to debug, and gives the compiler a better chance of generating good code. There was a time when you had to "hand optimize" your C code to get the best results - the paper is just showing that this is no longer the case, whether you are using gcc or another compiler (for the x86 or amd64 targets at least). It was also showing that gcc is at least as smart as, and often smarter than, the other compilers tested for these cases.

But I did not see it as any kind of general analysis of the optimisations and code quality of gcc or other compilers - it does not make any claims about which compiler is "better". It only claims that the compiler knows more about code generation than the programmer.
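The classic kind of case the paper measures looks like this (illustrative only - see the paper itself for the real examples and measurements):

/* Any of the compilers tested turns both of these into the same shift
   instruction, so the "clever" version buys nothing but lost
   readability. */
unsigned div4_clear(unsigned x)  { return x / 4; }
unsigned div4_clever(unsigned x) { return x >> 2; }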
> The paper touches on source-code techniques for improving the quality
> of source level debugging information. Source level debugging is
> important, but in many fundamental ways this is one of the major
> aggravating factors in gcc. One of the fundamental ways to ship
> reliable code is to ship the code that was debugged and tested.
> Code motion and other simple optimizations leave GCC's
> source level debug information significantly broken, forcing
> many developers to debug applications with much of the
> optimization off, then recompile later with optimization on but
> with the code largely untested.
I don't really agree with you here. There are three points to remember.

One is that /all/ compilers that generate tight code will re-arrange and manipulate the code. This includes constant folding, strength reduction, inlining, dead-code elimination, etc., as well as re-ordering code for maximum pipeline throughput and cache effects (that applies more to bigger processors than small ones). You can't generate optimal code and expect to be able to step through your code line by line in logical order, or view (and change) all local variables. Top range debuggers will be able to fake some of this based on debugging information from the compiler, but it will be faked. I make no claims that gdb is such a "top range" debugger, and it is definitely the case that while many pre-packaged gcc toolchains include the latest and greatest compiler version, they are often lax about using newer and more powerful gdb versions. Add to that the fact that many people use a simple "-g" flag with gcc to generate debugging information, rather than flags giving more detailed debugging information (gcc can even include macro definitions in the debugging information if you ask it nicely), and you can see that people often don't use as powerful debugging tools as they might with gcc. That's a failing in the way gcc is often packaged and configured, rather than a failing in gcc or gdb.

Secondly, gcc can generate useful debugging information even when fully optimising, without affecting the quality of the generated code. Many commercial compilers I have seen give you a choice between no debug information and fast code, or good debug information and slower code. gcc gives you the additional option of reasonable debug information and fast code. I can't generalise as to how this compares to other commercial compilers - it may be that the ones I used were poor in this regard.

Thirdly, there are several types of testing and several types of debugging. When you are debugging your algorithms, you want to have easy and clear debugging, with little regard to the speed. You then use low optimisation settings, avoid inlining functions, use extra "volatile" variables, etc. When your algorithm works, you can then compile it at full speed for testing - at this point, you don't need the same kind of line-by-line debugging. But that does not mean your full-speed version is not debugged or tested! Thus you do some of your development work with a "debug" build at "-O1 -g" or even "-O0 -g", and some with a "release" build at "-Os -g" or "-O3 -g".
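As a small sketch of the "extra volatile" trick (DEBUG_BUILD is just an invented flag here - use whatever your build system defines):

/* Making the intermediate volatile in debug builds keeps it alive and
   visible in the debugger even when the optimiser would otherwise
   fold it away. */
#ifdef DEBUG_BUILD
#define DBG_LOCAL volatile
#else
#define DBG_LOCAL
#endif

int filter_step(int sample, int state)
{
    DBG_LOCAL int err = sample - state; /* inspectable in the debug build */
    return state + (err >> 3);
}

mvh., David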

David Brown wrote:

> On 25/05/2010 14:57, Walter Banks wrote:
> >
> > Przemek Klosowski wrote:
> >
> >> On Fri, 21 May 2010 09:43:57 +0100, Chris H wrote:
> >>
> >>> There is no need except for amusement. GCC is a LONG way behind the main
> >>> commercial compilers.
> >>
> >> Well, I am sure that some commercial compilers, especially those written
> >> by smart guys like Walter, and the CPU designers like ARM, will beat GCC.
> >> At the same time, here's an example of how x86 GCC does quite well in a
> >> contest against Intel, Sun, Microsoft and LLVM compilers:
> >>
> >> http://www.linux-kongress.org/2009/slides/compiler_survey_felix_von_leitner.pdf
> >>
> >> It's an interesting paper in several ways - he points out that compilers
> >> are often so good that tactical optimizations don't make sense.
> >
> > The paper deals with a dozen or so optimizations and shows
> > the variation in the generated code - quite useful. What is missing
> > from the paper is any analysis of when the compiler should
> > utilize a specific optimization, and of how each of the compilers
> > made that choice.
>
> That wasn't really the point of the paper. I believe the author was
> aiming to show that it is better to write logical, legible code rather
> than "smart" code, because it makes the code easier to read, easier to
> debug, and gives the compiler a better chance of generating good code.
He made that point, and I agree.
> There was a time when you had to "hand optimize" your C code to get the
> best results - the paper is just showing that this is no longer the
> case, whether you are using gcc or another compiler (for the x86 or
> amd64 targets at least). It was also showing that gcc is at least as
> smart as, and often smarter than, the other compilers tested for these
> cases.
Not really. The author used very simple examples that for the most part can be implemented with little more than peephole optimizers. He also didn't claim otherwise.
> But I did not see it as any kind of general analysis of the
> optimisations and code quality of gcc or other compilers - it does not
> make any claims about which compiler is "better". It only claims that
> the compiler knows more about code generation than the programmer.
Agreed - that has been true for quite a while in practically all compilers.
> > The paper touches on source-code techniques for improving the quality
> > of source level debugging information. Source level debugging is
> > important, but in many fundamental ways this is one of the major
> > aggravating factors in gcc. One of the fundamental ways to ship
> > reliable code is to ship the code that was debugged and tested.
> > Code motion and other simple optimizations leave GCC's
> > source level debug information significantly broken, forcing
> > many developers to debug applications with much of the
> > optimization off, then recompile later with optimization on but
> > with the code largely untested.
>
> I don't really agree with you here. There are three points to remember.
>
> One is that /all/ compilers that generate tight code will
> re-arrange and manipulate the code. This includes constant folding,
> strength reduction, inlining, dead-code elimination, etc., as well as
> re-ordering code for maximum pipeline throughput and cache effects (that
> applies more to bigger processors than small ones). You can't generate
> optimal code and expect to be able to step through your code line by
> line in logical order, or view (and change) all local variables. Top
> range debuggers will be able to fake some of this based on debugging
> information from the compiler, but it will be faked.
We can expect debugging information to tie the code being executed to the original statement. Inline code may have multiple links to the source, and code motion may execute code out of source order.
> . . .
>
> Secondly, gcc can generate useful debugging information even when fully
> optimising, without affecting the quality of the generated code. Many
> commercial compilers I have seen give you a choice between no debug
> information and fast code, or good debug information and slower code.
This isn't true in the commercial compilers I am familiar with.
> . . .
>
> Thirdly, there are several types of testing and several types of
> debugging. When you are debugging your algorithms, you want to have
> easy and clear debugging, with little regard to the speed. You then
> use low optimisation settings, avoid inlining functions, use extra
> "volatile" variables, etc. When your algorithm works, you can then
> compile it at full speed for testing - at this point, you don't need the
> same kind of line-by-line debugging. But that does not mean your
> full-speed version is not debugged or tested!
gcc and gcc-based compilers (the ones with the copyright filed off) often recommend the approach you suggest. It is the change of optimization levels that the high-reliability folks avoid. It has been a big problem for our customers who also use gcc.

Regards, Walter..
--
Walter Banks
Byte Craft Limited
http://www.bytecraft.com
On 2010-05-25, Przemek Klosowski <przemek@tux.dot.org> wrote:
> On Fri, 21 May 2010 09:43:57 +0100, Chris H wrote:
>
>> There is no need except for amusement. GCC is a LONG way behind the main
>> commercial compilers.
>
> Well, I am sure that some commercial compilers, especially those written
> by smart guys like Walter, and the CPU designers like ARM, will beat GCC.
> At the same time, here's an example of how x86 GCC does quite well in a
> contest against Intel, Sun, Microsoft and LLVM compilers:
>
> http://www.linux-kongress.org/2009/slides/compiler_survey_felix_von_leitner.pdf
>
> It's an interesting paper in several ways
Is the paper available somewhere?

--
Grant Edwards               grant.b.edwards
Yow! I am NOT a nut....     at gmail.com