EmbeddedRelated.com
Forums

GCC compiler for ARM7-TDMI

Started by news.inet.tele.dk July 6, 2006
gjgowey@gmail.com wrote:
> I've never used their products, but Green Hills also makes compilers
> that compile for the ARM architecture.
They are excellent, but too expensive for many users. Leon
In article <v9Nrg.3132$YI3.1256@amstwist00>, Dr Justice
<sorry@no.spam.wanted> writes
>"Chris Hills" <chris@phaedsys.org> wrote in message
>news:EMRzg+Bqg5rEFASb@phaedsys.demon.co.uk...
>> In article <6qMrg.3130$YI3.1323@amstwist00>, Dr Justice
>> <sorry@no.spam.wanted> writes
>> >"Chris Hills" <chris@phaedsys.org> wrote in message
>> >news:B9JisABJq4rEFAB1@phaedsys.demon.co.uk...
>
>> Do you want a technical discussion or just to make personal attacks?
>
>I want a technical discussion, as I'm sure the OP does. I'm not making any
>attacks. However, I had to defend myself against assertions like:
>
>"Interesting that the GCC fans only like the benchmarks which show GCC as
>one of the better compilers and ignore all the rest."
>
>"There are quite a few but none you would accept as they don't show the
>GCC to be the best."
>
>"Now you say the libraries are poor."
>
>These are all plainly untrue,
The first two are true in my experience. For example, in this thread you say the only "trustworthy" benchmark is one from a GCC supplier which puts GCC as the best compiler.
>and none are something that I've ever stated.
Re the libraries, you said:

"Especially considering that it takes the real world into account and uses a slimmed-down printf() for gcc, putting it on an equal footing benchmarking-wise with the commercially developed /embedded/ offerings. They also measure 'pure' code space and size (not measuring libraries). I've seen some vendors' sites where gcc is made to look like it generally uses twice the memory of their product. Yes, out of the box a dedicated embedded compiler may look better than a general one, but that can be (and has been) fixed by using appropriate libraries."

Other commercial ARM compiler libraries use the full Dinkumware, which is not a slimmed-down anything. The only compiler I know of that uses a slimmed-down printf is the old Keil printf. So you are not comparing like with like, and some of the GCC libraries are not that good.
>It is difficult to discuss when your party is making up attributions like
>these.
I was making nothing up.
> I'm sure you're a good bloke, and that this twist of the thread was
>just an unfortunate incident. Let's just drop it and not pollute c.s.a.e,
>please.
As you say, let's just drop it. I would have replied off-line, but you have a fake email address.

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England  /\/\/\/\/
/\/\/ chris@phaedsys.org  www.phaedsys.org \/\/\
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Chris Hills <chris@phaedsys.org> wrote:

> BTW if the GCC libraries are not that good why should the rest of it be
> better?
What GCC libraries? The compiler doesn't come with a C library; you must provide that yourself.

-a
"Chris Hills" <chris@phaedsys.org> wrote in message
news:T$qGcEDFZ6rEFACU@phaedsys.demon.co.uk...
> In article <v9Nrg.3132$YI3.1256@amstwist00>, Dr Justice
> <sorry@no.spam.wanted> writes
It may be that I shouldn't be writing this, but: [snip]
> ...You say the only "trustworthy" benchmark is one from a GCC supplier
> which puts GCC as the best compiler.
[snip]

Perhaps I'm being unclear(?) I specifically said it was the one that I had found and trusted, and I have said that there may be better ones that I do not know about. I have not claimed that it is the only trustworthy benchmark that exists. Furthermore, the conclusion of the app note was more or less that ARM's compiler was best overall, that GCC could just about hang with IAR, and that Keil fell somewhat behind. That is not the mark of a GCC fan, as you say, but of a GCC critic/sceptic.

I note that you run a company that deals in non-GCC-based compilers. That's fine; I've stated what my impression of GCC is and you have stated yours. The irony is that I'm not in disagreement with you as such: yes, GCC is not as good as the better commercial ones. Still, I reckon it's just fine for many people and projects. I do not have any extensive opinions on the various libraries.
> As you say, let's just drop it. I would have replied off-line, but you have
> a fake email address.

Yes, I normally conceal real addresses on usenet. If you want to, you're welcome to mail to aleistad xat chello ydot no.

DJ
--
"Dr Justice" <sorry@no.spam.wanted> wrote in message 
news:EQarg.3061$YI3.555@amstwist00...
> "news.inet.tele.dk" <mj@iadataFJERNMIG.dk> wrote in message
> news:44ad19c5$0$14018$edfadb0f@dread15.news.tele.dk...
> My impression is that GCC generally does pretty well; not as good as the
> best commercial ones but better than some, depending on the code.
GCC is certainly better than the cheaper compilers, however it is significantly behind the best commercial compilers. On large benchmarks the difference is around 20-30% on both codesize and performance. This includes code that has been written for GCC, like the Linux kernel.
> The only benchmark I've found and trust is this comparison of KEIL, IAR,
> GCC and ARM - it may interest you too:
> http://www.raisonance.com/products/STR7/benchmark.php.
>
> I suppose one could be forgiven for thinking that the benchmarks at some
> compiler vendors sites seem 'not quite right' ;)
That is very true - and that includes the above benchmark! Most of the benchmarking efforts I've seen are fatally flawed in many respects. Some obvious flaws in this one:

1. What is codesize? Compilers have different code generation strategies, and some compilers inline more data in code than others. It's unfortunate that there is no standard way to measure codesize from ELF images, but it's essential that it is measured correctly. I use "ROM size", i.e. everything that ends up in flash, including code, literal pools, switch tables, strings, constant data and RW data initializers. This counts more than pure code, but it is the only reliable way to measure codesize.

2. It's tiny: the total size without libraries is about 22KB of Thumb code. That is at least two orders of magnitude too small for a codesize benchmark. It is dangerous to draw conclusions from something so small, as generated code varies a lot depending on the source code.

3. Measuring codesize of performance benchmarks. Benchmark code is typically small, badly written (*) and optimised for performance, so measuring the codesize of performance benchmarks is not representative. For example, table 2 shows the ARM compiler generating on average 14% smaller ARM code than GCC, yet it only manages to win 4 of the 8 benchmarks. There are large variations in codesize because the code is very repetitive, so a single optimization can make a huge difference.

(*) I keep being amazed by how much people rely on in-house benchmarks when deciding on multi-million dollar deals. The code is typically tiny, written by someone with no programming experience (let alone experience in writing efficient code), so the score can often be improved significantly with the right compiler options, small source code changes or trivial compiler tweaks...

4. Interworking: it appears some compilers have interworking on and some don't. Even though it is claimed the difference is small, it can have a significant effect on both codesize and performance. To be fair, it should have been turned off (or on) in all compilers tested.

5. Inlining: for dubious reasons, inlining is disabled in the ARM compiler but not in any of the other compilers. The ARM compiler contains a finely tuned inliner (which helps performance a lot), so turning it off while allowing GCC to inline isn't exactly fair...

Benchmarking is seriously difficult; fair benchmarking is next to impossible...

Wilco
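The "ROM size" metric in point 1 can be made concrete. Here is a minimal sketch (not Wilco's actual tooling; the input format mimics `arm-none-eabi-size -A`-style section listings, and the set of flash-resident section names is an assumption) that sums everything ending up in flash:

```python
# Sketch of the "ROM size" metric: sum everything that ends up in flash.
# Assumes output in the style of `arm-none-eabi-size -A image.elf`.
# The set of flash-resident sections is an assumption - real linker
# scripts vary, so adjust it to match your memory map.
FLASH_SECTIONS = {".text", ".rodata", ".data"}  # .data counts: its initializers live in flash

def rom_size(size_listing: str) -> int:
    total = 0
    for line in size_listing.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] in FLASH_SECTIONS:
            total += int(parts[1])
    return total

sample = """\
section      size        addr
.text       18432           0
.rodata      2048       18432
.data         512   536870912
.bss         1024   536871424
"""
print(rom_size(sample))  # 20992 = 18432 + 2048 + 512
```

Note that `.bss` is deliberately excluded: zero-initialized data occupies RAM but no flash, which is exactly the distinction between "ROM size" and total image size.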
"Wilco Dijkstra" <Wilco_dot_Dijkstra@ntlworld.com> wrote in message
news:_q5sg.41385$Z61.2297@newsfe4-win.ntli.net...

Some good comments there.

> Benchmarking is seriously difficult, fair benchmarking
> is next to impossible...
You are probably right. And of course there will always be biases with compiler vendors testing compilers. Still, IMO there are degrees of trustworthiness here. E.g. this:
http://www.keil.com/benchmks/carm_v0code.htm
I find to be rather implausible (never mind the Keil buy-out). Contrast with this, mentioning Keil at the end:
http://www.iar.com/index.php?show=43943_ENG&&page_anchor=http://www.iar.com/p43943/p43943_eng.php
Not sure what to make of that, or indeed the two together. For reference, here's IAR's:
http://www.iar.com/index.php?show=43943_ENG&&page_anchor=http://www.iar.com/p43943/p43943_eng.php
And here's ImageCraft's take on benchmarking:
http://www.imagecraft.com/software/FUD.html

The Raisonance benchmark was the most detailed I could find, not least in its description of the benchmarking premises. The just-a-bunch-of-numbers-on-a-webpage "benchmarks", sometimes with 'too strange' results, I find harder to put as much trust in. These are the things that are readily available to us mere mortals to judge from, in addition to comments e.g. here in c.s.a.e. It's not easy to know what to think. As asked for previously - if anybody has pointers to good benchmarks, now would be a good time to post them.

DJ
--
"Dr Justice" <sorry@no.spam.wanted> wrote in message 
news:5w9sg.3167$YI3.2881@amstwist00...
> "Wilco Dijkstra" <Wilco_dot_Dijkstra@ntlworld.com> wrote in message
> news:_q5sg.41385$Z61.2297@newsfe4-win.ntli.net...
>
> Some good comments there.
>
>> Benchmarking is seriously difficult, fair benchmarking
>> is next to impossible...
>
> You are probably right. And of course there will always be biases with
> compiler vendors testing compilers.
Of course. There are many tricks one can pull. And then there is how you present it which provides even more possibilities to misrepresent the opposition...
> Still, IMO there are degrees of trustworthiness here. E.g. this:
> http://www.keil.com/benchmks/carm_v0code.htm
> I find to be rather implausible (never mind the Keil buy-out). Contrast
> with this, mentioning Keil at the end.
Yes, it's totally bogus indeed. I'm surprised that page is still up since ARM replaced the old Keil compiler. The obvious flaws are: 5-year-old compilers from the competition vs an unreleased compiler, codesize methodology not explained, compiler options not listed, and the (modified) sources not available.

It also forgets to mention that 95% of the codesize and performance is from/spent in the C library. So they are really comparing 4 libraries rather than 4 compilers. Since GCC and ARM use optimised ARM assembler in the floating point libraries, much of the code is actually ARM, not Thumb.

It is essential that the 128-bit flash interface in the LPC2294 was enabled, or ADS and GNU are heavily penalized. My guess is it wasn't, or ADS would have won on Whetstone. Finally, I believe CARM doesn't do double and uses float instead, which is an unfair advantage.
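The float-versus-double point is not only about speed: the two types do not compute the same results, so single-precision scores are not comparable with double-precision ones. A minimal sketch (simulating IEEE-754 single precision by round-tripping through `struct`, since Python floats are doubles; the loop is an invented Whetstone-flavoured accumulation, not actual benchmark code) shows the drift:

```python
import struct

def to_f32(x: float) -> float:
    # Round a Python float (a double) to IEEE-754 single precision.
    return struct.unpack("f", struct.pack("f", x))[0]

# The same accumulation loop, run once in double precision and once
# with every intermediate result rounded to single precision.
x64 = 1.0
x32 = 1.0
for _ in range(10000):
    x64 = (x64 + 1e-7) * 1.0000001
    x32 = to_f32((x32 + to_f32(1e-7)) * to_f32(1.0000001))

print(abs(x64 - x32))  # the gap between the double and single-precision results
```

Each rounding step loses information (1e-7 is already below single-precision resolution near 1.0), so a compiler that silently substitutes float for double is running a different computation, not a faster version of the same one.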
> http://www.iar.com/index.php?show=43943_ENG&&page_anchor=http://www.iar.com/p43943/p43943_eng.php
> Not sure what to make of that, or indeed the two together.
Indeed. Apart from the flaws mentioned above, the benchmarks appear to be hand-picked from a much larger set, as there is only one case where ADS wins and none where GCC wins (contrast that with the Raisonance benchmarks, which show much more variation). Also the totals graph is obviously wrong (the sum of the benchmarks should be around 500KBytes, not 350KB for Thumb), and the normalized result doesn't seem to be represented correctly in the totals graph (which I presume was chosen on purpose to be as unclear as possible).

The same issues with math libraries and printf are at play here: much of the benchmark code consists of libraries. Even though some benchmarks are a bit larger, most are still small performance benchmarks which are measured for codesize, and the average size is pretty small (around 20KBytes with libraries, so perhaps 10KBytes without). The codesize benchmarking I do involves an average application size of around 250KBytes, with the largest application (actual mobile phone code) being 6 MBytes of Thumb code.

One issue not mentioned yet is accuracy. The C standard - continuing its great tradition of not standardizing much at all - doesn't set a standard here either, so the accuracy of math functions like sin can vary greatly between implementations. I know the IAR math functions (and I presume the Keil ones too) are not very accurate. Of course an inaccurate version is much smaller and faster...
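The accuracy trade-off is easy to demonstrate. A hypothetical "small and fast" sin built from a 3-term Taylor polynomial (a stand-in for a cut-down library routine, not any vendor's actual implementation) looks fine near zero but falls apart as the argument grows:

```python
import math

def small_sin(x: float) -> float:
    # 3-term Taylor polynomial: tiny and fast, but no range reduction
    # and limited precision - the trade-off a cut-down library makes.
    return x - x**3 / 6.0 + x**5 / 120.0

# Near zero the approximation is excellent...
worst_near_zero = max(abs(small_sin(i / 100) - math.sin(i / 100)) for i in range(51))
print(worst_near_zero)  # on the order of 1e-6 over [0, 0.5]

# ...but the error explodes toward pi.
print(abs(small_sin(3.0) - math.sin(3.0)))  # roughly 0.38 - unusable
```

Two libraries can therefore both "implement sin" yet be incomparable in a benchmark: the inaccurate one wins on size and speed precisely because it does less work.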
> And here's Imagecrafts take on benchmarking: > http://www.imagecraft.com/software/FUD.html
Interesting. He is complaining about similar flaws and tricks...
> The raisonance benchmark was the most detailed I could find, not least in
> its description of the benchmarking premises. The
> just-a-bunch-of-numbers-on-a-webpage "benchmarks", sometimes with 'too
> strange' results, I find harder to put as much trust in.
I completely agree. You don't often find this: all the source code and options used are included. It doesn't mean it is 100% correct, but at least you can do your own measurements if you want to. So you're right that the Raisonance benchmarks are more trustworthy than all the others together.
> These are the things that are readily available to us mere mortals to
> judge from, in addition to comments e.g. here in c.s.a.e. It's not easy
> to know what to think. As asked for previously - if anybody has pointers
> to good benchmarks, now would be a good time to post them.
We don't need any more benchmarks; there are already enough bad ones available! You simply can't trust most benchmark results even when they were done in good faith - flaws are likely due to incompetence or to simply letting marketing do the benchmarking. The solution: don't let anyone who can't explain the advantages and disadvantages of the geometric mean anywhere near a benchmark! Official benchmarking consortiums aren't much better either - SPEC attracts a lot of complaints of benchmarks being gamed, and the quality of the EEMBC benchmarks is unbelievably bad. It would be better to use Dhrystone as a standard (http://www.arm.com/pdfs/Dhrystone.pdf).

My advice is that the best benchmark is the application you're planning to run. People put far too much emphasis on tiny benchmarks when they should simply test their existing code. Benchmarks are never going to be representative, even if you find one that matches the area you are interested in. Most compilers have free evaluation periods, so you can try your code on various compilers before deciding which to use.

Wilco
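The geometric-mean remark deserves a concrete illustration. With per-benchmark speedup ratios against a reference compiler (the numbers below are invented for the example), the arithmetic mean rewards a large win even when it is exactly cancelled by an equally large loss; the geometric mean does not:

```python
import math

def arith_mean(xs):
    return sum(xs) / len(xs)

def geo_mean(xs):
    # exp of the mean of logs; symmetric under taking reciprocals,
    # which is why it is the right mean for speedup ratios.
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Per-benchmark speedups vs a reference compiler (invented data):
# a 2x win on one benchmark, a 2x loss on another, parity on the third.
ratios = [2.0, 0.5, 1.0]

print(arith_mean(ratios))  # 1.1666... - looks like a 17% overall win
print(geo_mean(ratios))    # ~1.0 - correctly reports a wash
```

The disadvantage is equally worth knowing: the geometric mean says nothing about absolute sizes or times, only about ratios, and a single near-zero score can drag it arbitrarily low - which is exactly the kind of trade-off a competent benchmarker should be able to explain.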
A correction to my own previous post:

The second URL should be
  http://www.mct.net/basics/dhry.html

And why not have a look at this too for fun:
  http://www.compuphase.com/dhrystone.htm

"Wilco Dijkstra" <Wilco_dot_Dijkstra@ntlworld.com> wrote in message
news:Nzdsg.6710$FR.2885@newsfe6-gui.ntli.net...

> We don't need any more benchmarks, there are already enough
> bad ones available!
:-)) That may just be so. The next thing would be to benchmark libraries instead, since there may be non-negligible evidence indicating that they are a big performance factor and partly the culprit in current compiler benchmarking... (maybe someone somewhere is doing this).
> My advice is that the best benchmark is the application you're
> planning to run.
Yes. Although for some it may not be so quick and easy to collect all the compilers, get their real-life projects compiled on the possibly code-size-crippled eval versions, then gather and interpret the stats. That is what benchmarks were meant to cover for, if I'm not mistaken.

DJ
--
Dr Justice <sorry@no.spam.wanted> wrote:
> Yes. Although for some it may not be so quick and easy to collect all the
> compilers, get their real-life projects compiled on the possibly
It's also usually a violation of the license agreements on commercial compilers to publish benchmark results.

-p
--
Gotch, n. A corpulent beer-jug of some strong ware.
Gotch, v. To surprise with a remark that negates or usurps a remark that has just been made.
--------------------------------------------------------------------
In article <00q*qDglr@news.chiark.greenend.org.uk>, Paul Gotch
<paulg@at-cantab-dot.net> writes
>Dr Justice <sorry@no.spam.wanted> wrote:
>> Yes. Although for some it may not be so quick and easy to collect all the
>> compilers, get their real-life projects compiled on the possibly
>
>It's also usually a violation of the license agreements on commercial
>compilers to publish benchmark results.
>
Only some of them; quite a few don't have that restriction. As noted in the IAR set, Green Hills do have it, but the others (Keil, IAR, ARM etc.) do not.

--
Chris Hills