Reply by Chris Hills July 12, 2006
In article <1152680207.460766.23810@s13g2000cwa.googlegroups.com>,
david.fowler@gmail.com writes
>Seems like this Green Hills tactic would significantly reduce their
>sales.

It doesn't.

>Is their product so good that potential customers are not
>concerned about this restriction?

Yes, it is. Have you never come across the GHS tools?
>
>David
>
>www.uCHobby.com
>Microcontrollers for Hobbyists
>
>Chris Hills wrote:
>> In article <00q*qDglr@news.chiark.greenend.org.uk>, Paul Gotch
>> <paulg@at-cantab-dot.net> writes
>> >Dr Justice <sorry@no.spam.wanted> wrote:
>> >> Yes. Although for some it may not be so quick and easy to collect all the
>> >> compilers, get their real-life projects compiled on the possibly
>> >
>> >It's also usually in violation of the license agreements on commercial
>> >compilers to publish benchmark results.
>> >
>>
>> Only some of them. Quite a few don't have the restriction. As noted in
>> the IAR set, GreenHills do have that restriction, but the others - Keil,
>> IAR, ARM etc. - do not.
Reply by David July 12, 2006
Seems like this Green Hills tactic would significantly reduce their
sales. Is their product so good that potential customers are not
concerned about this restriction?

David

www.uCHobby.com
Microcontrollers for Hobbyists

Chris Hills wrote:
> In article <00q*qDglr@news.chiark.greenend.org.uk>, Paul Gotch
> <paulg@at-cantab-dot.net> writes
> >Dr Justice <sorry@no.spam.wanted> wrote:
> >> Yes. Although for some it may not be so quick and easy to collect all the
> >> compilers, get their real-life projects compiled on the possibly
> >
> >It's also usually in violation of the license agreements on commercial
> >compilers to publish benchmark results.
> >
>
> Only some of them. Quite a few don't have the restriction. As noted in
> the IAR set, GreenHills do have that restriction, but the others - Keil,
> IAR, ARM etc. - do not.
Reply by Anton Erasmus July 11, 2006
On Tue, 11 Jul 2006 17:36:09 +0100, Chris Hills <chris@phaedsys.org>
wrote:

>In article <mbi7b2p7fas93q91lkgsapg2o0ri1p1nbs@4ax.com>, Anton Erasmus
><nobody@spam.prevent.net> writes
>>On Sun, 09 Jul 2006 20:22:05 GMT, "Wilco Dijkstra"
>><Wilco_dot_Dijkstra@ntlworld.com> wrote:
>>
>>[Snipped]
>>>
>>>My advice is that the best benchmark is the application you're planning
>>>to run. People put far too much emphasis on tiny benchmarks when they
>>>should simply test their existing code. Benchmarks are never going to
>>>be representative, even if you find one that matches the area you are
>>>interested in. Most compilers have free evaluation periods, so you can
>>>try your code on various compilers before deciding which to use.
>>
>>I agree totally. Another advantage of actually testing one's own code
>>is to see how easy it will be to port as well.
>
>This is why you should use the eval versions of the commercial
>compilers.
Of course one should use all the available compilers. This is a good way
to test the reasonably priced compilers against the megabucks-priced
commercial ones.
>
>>Lots of code relies on
>>hidden assumptions, which break very quickly under high optimisation
>>levels.
>
>That depends on the compiler. The more standard the code, the less this
>should happen. Obviously when you get close to the HW, all compilers
>have non-standard extensions.
>
>>If it was an assumption which is true for GCC, then GCC might
>>be the best option even "IF" the commercial compilers are better.
>
>Why? I don't follow this logic?
What do you mean? Not all commercial compilers are better than GCC. GCC is
being improved at a higher rate than most commercial compilers. Some
commercial compilers are currently better - mostly in the libraries they
provide. For some projects one needs the "best", whatever that is at the
moment. For most, "good enough" is good enough.
>
>>The
>>first time I had to port my own code from one compiler to another
>>(exactly the same hardware), I was amazed by how many things broke.
>
>What sort of thing? I am curious.
It has been a while, so it is a bit difficult to make a list, but I will try:

1. Comparisons between unsigned and signed integers.
2. Using unions to access bytes inside a long.
3. Assuming structure members would be aligned on 32-bit boundaries on a
   32-bit architecture. (This could be fixed with a compiler option.)
4. Assuming that all compilers would apply the casts and perform the
   operators in the same order in an expression such as
       long a, d;  short b, c;  a = d + b + c;
   (This is a simplified expression and might be a bad example. In the
   original code one compiler did the operation between the shorts first,
   then the cast, and then the final operation. The other did the casts
   first.)

Most of the errors were the typical type of errors lint would have warned
about, especially relying on implied casts.

Regards
  Anton Erasmus
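To make the list concrete, here is a minimal, self-contained C sketch of
the four categories (an illustration only, not the code from the port
described above). Compile it with warnings enabled (e.g. -Wall -Wextra on
GCC) and on more than one compiler; lint flags most of these as well:

    #include <stdio.h>

    union long_bytes {
        long          value;
        unsigned char bytes[sizeof(long)];  /* byte order is implementation-defined */
    };

    struct maybe_padded {
        char tag;
        long payload;                       /* offset depends on alignment rules */
    };

    int main(void)
    {
        /* 1. Signed/unsigned comparison: i is converted to unsigned,
         *    so -1 compares as a huge value and the branch is taken. */
        unsigned int u = 10;
        int          i = -1;
        if (i > u)
            puts("-1 compares bigger than 10u");

        /* 2. Type punning through a union: which array element holds the
         *    most significant byte depends on endianness. */
        union long_bytes lb;
        lb.value = 0x12345678L;
        printf("first byte: 0x%02X\n", lb.bytes[0]);

        /* 3. Structure layout: the amount of padding differs between
         *    compilers and compiler options. */
        printf("sizeof(struct maybe_padded) = %lu\n",
               (unsigned long)sizeof(struct maybe_padded));

        /* 4. Mixed-width arithmetic: the standard groups this as (d + b) + c,
         *    all done in long, but a compiler that adds the shorts first (as
         *    in the case described above) can overflow the intermediate. */
        short b = 30000, c = 30000;
        long  d = 100000L;
        long  a = d + b + c;
        printf("a = %ld\n", a);

        return 0;
    }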
Reply by Chris Hills July 11, 2006
In article <mbi7b2p7fas93q91lkgsapg2o0ri1p1nbs@4ax.com>, Anton Erasmus
<nobody@spam.prevent.net> writes
>On Sun, 09 Jul 2006 20:22:05 GMT, "Wilco Dijkstra"
><Wilco_dot_Dijkstra@ntlworld.com> wrote:
>
>[Snipped]
>>
>>My advice is that the best benchmark is the application you're planning
>>to run. People put far too much emphasis on tiny benchmarks when they
>>should simply test their existing code. Benchmarks are never going to
>>be representative, even if you find one that matches the area you are
>>interested in. Most compilers have free evaluation periods, so you can
>>try your code on various compilers before deciding which to use.
>
>I agree totally. Another advantage of actually testing one's own code
>is to see how easy it will be to port as well.
This is why you should use the eval versions of the commercial compilers.
>Lots of code relies on
>hidden assumptions, which break very quickly under high optimisation
>levels.
That depends on the compiler. The more standard the code, the less this
should happen. Obviously when you get close to the HW, all compilers
have non-standard extensions.
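For example, a memory-mapped register is typically declared either with
the portable pointer-cast idiom or with a vendor-specific placement
extension. A minimal sketch (the register name and address are made up;
the IAR syntax is quoted from memory, so check the individual manuals):

    #include <stdint.h>

    /* Portable idiom: cast a fixed address (hypothetical UART data register)
     * to a pointer to a volatile object. Accepted by GCC, ARM, Keil, IAR. */
    #define UART0_DR  (*(volatile uint32_t *)0x4000C000u)

    /* Vendor-specific ways of doing the same job, e.g. IAR's '@' placement:
     *     __no_init volatile uint32_t UART0_DR @ 0x4000C000;
     * Such code has to be rewritten (or wrapped in #ifdefs) when porting. */

    void uart0_send(uint8_t byte)
    {
        UART0_DR = byte;   /* the write goes straight to the hardware register */
    }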
>If it was an assumption which is true for GCC, then GCC might
>be the best option even "IF" the commercial compilers are better.
Why? I don't follow this logic?
>The
>first time I had to port my own code from one compiler to another
>(exactly the same hardware), I was amazed by how many things broke.
What sort of thing? I am curious.
Reply by Anton Erasmus July 11, 2006
On Sun, 09 Jul 2006 20:22:05 GMT, "Wilco Dijkstra"
<Wilco_dot_Dijkstra@ntlworld.com> wrote:

[Snipped]
>
>My advice is that the best benchmark is the application you're planning
>to run. People put far too much emphasis on tiny benchmarks when they
>should simply test their existing code. Benchmarks are never going to
>be representative, even if you find one that matches the area you are
>interested in. Most compilers have free evaluation periods, so you can
>try your code on various compilers before deciding which to use.
I agree totally. Another advantage of actually testing one's own code
is to see how easy it will be to port as well. Lots of code relies on
hidden assumptions, which break very quickly under high optimisation
levels. If it was an assumption which is true for GCC, then GCC might
be the best option even "IF" the commercial compilers are better.

The first time I had to port my own code from one compiler to another
(exactly the same hardware), I was amazed by how many things broke.
Since then I have been MUCH more aware of things one should not assume.

Regards
  Anton Erasmus
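The textbook example of such a hidden assumption - one that only shows up
at higher optimisation levels - is polling a flag written by an interrupt
handler without declaring it volatile. A minimal sketch (an illustration
only; the ISR name and its wiring to the UART interrupt are hypothetical):

    #include <stdint.h>

    static uint8_t tx_done = 0;      /* BUG: should be "volatile uint8_t" */

    void uart_tx_complete_isr(void)  /* assumed to be attached to the UART IRQ */
    {
        tx_done = 1;
    }

    void wait_for_tx(void)
    {
        /* At -O0 this works: tx_done is re-read on every iteration.
         * At -O2 the compiler may load tx_done once, see nothing in the loop
         * that can change it, and turn the loop into "while (1);".
         * Declaring the flag volatile restores the per-iteration read. */
        while (!tx_done)
            ;
    }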
Reply by Chris Hills July 10, 2006
In article <00q*qDglr@news.chiark.greenend.org.uk>, Paul Gotch
<paulg@at-cantab-dot.net> writes
>Dr Justice <sorry@no.spam.wanted> wrote:
>> Yes. Although for some it may not be so quick and easy to collect all the
>> compilers, get their real-life projects compiled on the possibly
>
>It's also usually in violation of the license agreements on commercial
>compilers to publish benchmark results.
>
Only some of them. Quite a few don't have the restriction. As noted in
the IAR set, GreenHills do have that restriction, but the others - Keil,
IAR, ARM etc. - do not.
Reply by Paul Gotch July 9, 2006
Dr Justice <sorry@no.spam.wanted> wrote:
> Yes. Although for some it may not be so quick and easy to collect all the
> compilers, get their real-life projects compiled on the possibly
It's also usually in violation of the license agreements on commercial
compilers to publish benchmark results.

-p

--
Gotch, n. A corpulent beer-jug of some strong ware.
Gotch, v. To surprise with a remark that negates or usurps a remark that
has just been made.
Reply by Dr Justice July 9, 2006
A correction to my own previous post:

The second URL should be
  http://www.mct.net/basics/dhry.html

And why not have a look at this too for fun:
  http://www.compuphase.com/dhrystone.htm

"Wilco Dijkstra" <Wilco_dot_Dijkstra@ntlworld.com> wrote in message
news:Nzdsg.6710$FR.2885@newsfe6-gui.ntli.net...

> We don't need any more benchmarks, there are already enough
> bad ones available!
:-)) That may just be so. The next thing would be to benchmark libraries
instead, since there is non-negligible evidence that they are a big
performance factor and in part the culprit in current compiler
benchmarking... (maybe someone somewhere is doing this).
> My advice is that the best benchmark is the application you're
> planning to run.
Yes. Although for some it may not be so quick and easy to collect all the
compilers, get their real-life projects compiled on the possibly
code-size-crippled eval versions, then gather and interpret the stats.
That is what benchmarks were meant to cover, if I'm not mistaken.

DJ
--
Reply by Wilco Dijkstra July 9, 2006
"Dr Justice" <sorry@no.spam.wanted> wrote in message 
news:5w9sg.3167$YI3.2881@amstwist00...
> "Wilco Dijkstra" <Wilco_dot_Dijkstra@ntlworld.com> wrote in message > news:_q5sg.41385$Z61.2297@newsfe4-win.ntli.net... > > Some good comments there. > >> Benchmarking is seriously difficult, fair benchmarking >> is next to impossible... > > You are probably right. And of course there will always be biases with > compiler vendors testing compilers.
Of course. There are many tricks one can pull. And then there is how you
present it, which provides even more possibilities to misrepresent the
opposition...
> Still, IMO there are degrees of trustworthiness here. E.g. this:
> http://www.keil.com/benchmks/carm_v0code.htm
> I find to be rather implausible (never mind the KEIL buy-out). Contrast
> with this, mentioning Keil at the end:
Yes, it's totally bogus indeed. I'm surprised that page is still up since
ARM replaced the old Keil compiler. The obvious flaws are: 5-year-old
compilers from the competition compared against an unreleased compiler,
a codesize methodology that is not explained, compiler options that are
not listed, and (modified) sources that are not available.

It also forgets to mention that 95% of the codesize and performance is
from/spent in the C library, so they are really comparing 4 libraries
rather than 4 compilers. Since GCC and ARM use optimised ARM assembler in
the floating-point libraries, much of the code is actually ARM, not Thumb.

It is essential that the 128-bit flash interface in the LPC2294 was
enabled, otherwise ADS and GNU are heavily penalized. My guess is it
wasn't, or ADS would have won on Whetstone. Finally, I believe CARM
doesn't do double and uses float instead, which is an unfair advantage.
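To see why the double-versus-float point matters for a Whetstone-style
score, here is a small sketch (not taken from any of the benchmarks
discussed; the loop and constant are merely Whetstone-flavoured). A
compiler that silently maps double to float is effectively running the
second function while claiming to run the first: roughly half the
arithmetic width plus the cheaper single-precision library calls.

    #include <math.h>
    #include <stdio.h>

    double whet_like_double(int n)
    {
        double x = 1.0;
        for (int i = 0; i < n; i++)
            x = sin(x) + cos(x) + 0.499975;     /* all double precision */
        return x;
    }

    float whet_like_float(int n)
    {
        float x = 1.0f;
        for (int i = 0; i < n; i++)
            x = sinf(x) + cosf(x) + 0.499975f;  /* single precision: faster on a
                                                   softfloat ARM7, less accurate */
        return x;
    }

    int main(void)
    {
        printf("double: %.10f\n", whet_like_double(1000));
        printf("float : %.10f\n", (double)whet_like_float(1000));
        return 0;
    }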
> http://www.iar.com/index.php?show=43943_ENG&&page_anchor=http://www.iar.com/p43943/p43943_eng.php
> Not sure what to make of that, or indeed the two together.
Indeed. Apart from the flaws mentioned above, the benchmarks appear to be
hand-picked from a much larger set, as there is only one case where ADS
wins and none where GCC wins (contrast that with the Raisonance benchmarks,
which show much more variation). Also, the totals graph is obviously wrong
(the sum of the benchmarks should be around 500 KBytes, not 350 KB for
Thumb), and the normalized result doesn't seem to be represented correctly
in the totals graph (which I presume was chosen on purpose to be as unclear
as possible).

The same library issues with math libraries and printf are at play here.
Much of the benchmark code consists of libraries; even though some
benchmarks are a bit larger, most are still small performance benchmarks
which are measured for codesize, and the average size is pretty small
(around 20 KBytes with libraries, so perhaps 10 KBytes without). The
codesize benchmarking I do involves an average application size of around
250 KBytes, with the largest application (actual mobile phone code) being
6 MBytes of Thumb code.

One issue not mentioned yet is that of accuracy. The C standard -
continuing its great tradition of not standardizing much at all - doesn't
set a standard here either, so the accuracy of math functions like sin can
vary greatly between implementations. I know the IAR math functions (and I
presume the Keil ones too) are not very accurate. Of course an inaccurate
version is much smaller and faster...
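A minimal way to check the accuracy point for yourself (a sketch, not code
from any of the benchmarks above): sweep one period and compare the
toolchain's single-precision sinf() against its own double-precision sin()
at exactly the same input, then run the same harness on each toolchain
under comparison.

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double max_err = 0.0;
        float  worst_x = 0.0f;

        for (double x = 0.0; x < 6.2831853; x += 0.001) {
            float  xf  = (float)x;
            double ref = sin((double)xf);   /* double-precision reference */
            double got = (double)sinf(xf);  /* routine under test */
            double err = fabs(got - ref);
            if (err > max_err) {
                max_err = err;
                worst_x = xf;
            }
        }
        printf("worst absolute error %.3e at x = %f\n", max_err, (double)worst_x);
        return 0;
    }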
> And here's Imagecraft's take on benchmarking:
> http://www.imagecraft.com/software/FUD.html
Interesting. He is complaining about similar flaws and tricks...
> The Raisonance benchmark was the most detailed I could find, not least in
> its description of the benchmarking premises. The
> just-a-bunch-of-numbers-on-a-webpage "benchmarks", sometimes with 'too
> strange' results, I find harder to put as much trust in.
I completely agree. You don't find this often, including all the source code and options used. It doesn't mean it is 100% correct, but at least you can do your own measurement if you want to. So you're right the Raisonance benchmarks are more trustworthy than all the others together.
> These are the things that are readily available to us mere mortals to
> judge from, in addition to comments e.g. here in c.s.a.e. It's not easy
> to know what to think. As asked for previously - if anybody has pointers
> to good benchmarks, now would be a good time to post them.
We don't need any more benchmarks, there are already enough bad ones
available! You simply can't trust most benchmark results even if they were
done in good faith - flaws are likely due to incompetence or simply letting
marketing do the benchmarking. The solution: don't let anyone who can't
explain the advantages and disadvantages of geometric mean anywhere near a
benchmark!

Official benchmarking consortiums aren't much better either - SPEC attracts
a lot of complaints of benchmarks being gamed, and the quality of EEMBC
benchmarks is unbelievably bad. It would be better to use Dhrystone as a
standard (http://www.arm.com/pdfs/Dhrystone.pdf).

My advice is that the best benchmark is the application you're planning
to run. People put far too much emphasis on tiny benchmarks when they
should simply test their existing code. Benchmarks are never going to
be representative, even if you find one that matches the area you are
interested in. Most compilers have free evaluation periods, so you can
try your code on various compilers before deciding which to use.

Wilco
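On the geometric-mean remark: a short, self-contained sketch (with made-up
numbers) of why normalised benchmark ratios should be summarised with the
geometric mean rather than the arithmetic mean. With the arithmetic mean,
each compiler can appear "faster on average" depending on which one is
picked as the baseline; the geometric mean is baseline-independent.

    #include <math.h>
    #include <stdio.h>

    static double arith_mean(const double *r, int n)
    {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            s += r[i];
        return s / n;
    }

    static double geo_mean(const double *r, int n)
    {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            s += log(r[i]);
        return exp(s / n);
    }

    int main(void)
    {
        /* Compiler A is 2x faster on one benchmark, 2x slower on the other. */
        const double a_over_b[] = { 2.0, 0.5 };
        const double b_over_a[] = { 0.5, 2.0 };

        /* Arithmetic mean: 1.25 both ways - each side looks 25% "faster". */
        printf("arith  A/B: %.3f   B/A: %.3f\n",
               arith_mean(a_over_b, 2), arith_mean(b_over_a, 2));

        /* Geometric mean: 1.000 both ways - consistent either way round. */
        printf("geo    A/B: %.3f   B/A: %.3f\n",
               geo_mean(a_over_b, 2), geo_mean(b_over_a, 2));
        return 0;
    }

The downside, of course, is that a geometric mean of ratios says nothing
about absolute run time or code size, which is why the raw numbers should
be published alongside it.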
Reply by Dr Justice July 9, 2006
"Wilco Dijkstra" <Wilco_dot_Dijkstra@ntlworld.com> wrote in message
news:_q5sg.41385$Z61.2297@newsfe4-win.ntli.net...

Some good comments there.

> Benchmarking is seriously difficult, fair benchmarking
> is next to impossible...
You are probably right. And of course there will always be biases with
compiler vendors testing compilers.

Still, IMO there are degrees of trustworthiness here. E.g. this:
http://www.keil.com/benchmks/carm_v0code.htm
I find to be rather implausible (never mind the KEIL buy-out). Contrast
with this, mentioning Keil at the end:
http://www.iar.com/index.php?show=43943_ENG&&page_anchor=http://www.iar.com/p43943/p43943_eng.php
Not sure what to make of that, or indeed the two together.

For reference, here's IAR's:
http://www.iar.com/index.php?show=43943_ENG&&page_anchor=http://www.iar.com/p43943/p43943_eng.php

And here's Imagecraft's take on benchmarking:
http://www.imagecraft.com/software/FUD.html

The Raisonance benchmark was the most detailed I could find, not least in
its description of the benchmarking premises. The
just-a-bunch-of-numbers-on-a-webpage "benchmarks", sometimes with 'too
strange' results, I find harder to put as much trust in.

These are the things that are readily available to us mere mortals to
judge from, in addition to comments e.g. here in c.s.a.e. It's not easy
to know what to think. As asked for previously - if anybody has pointers
to good benchmarks, now would be a good time to post them.

DJ
--