EmbeddedRelated.com
Forums

ARM IDE

Started by flash011 November 12, 2008
On Wed, 12 Nov 2008 20:47:56 +0000, Chris H <chris@phaedsys.org>
wrote:

>In message <rtdmh41gin8f7fi4l3e2u8bvdanan6h5gb@4ax.com>, Anton Erasmus
><nobody@spam.prevent.net> writes
>>On Wed, 12 Nov 2008 12:35:00 -0600, "flash011"
>><sylvainlarive@gmail.com> wrote:
>>
>>>Greetings all,
>>>
>>>I posted a while back concerning help choosing an MCU/DSP and I greatly
>>>appreciated the input you guys gave me so I'm back with more questions.
>>>
>>>I've decided to go the ARM9 way. More specifically, I went with ST's STR9
>>>series since it had the correct processing and ADC sampling rates.
>>>
>>>Anyhow, I ordered a kit from IAR to test things out and so far so good.
>>>My question is should I stick with IAR or go with something like EMBEST
>>>(they look like they have a good prices for their dev kits) or GNU.
>>>
>>>Looks like GCC is pretty well regarded but seems like it is more complex
>>>to install / use than other solutions.
>>>
>>
>>Download the free RIDE Ide from Raisonance. It uses gnuarm as the
>>compiler. Very easy to get going with the STR9. The free support
>>library provided by ST is fully integrated within the Raisonance
>>environment,
>
>I thought EVERYONE has integrated free the ST libraries in with their
>compilers?
Anyone is allowed to use the free libraries. Many people have great difficulty in getting started unless there is some "Wizard" where one can click on an option, and the library is automatically added to the link command, and the library headers are added to the include path. RIDE has such a wizard. The actual libraries seem to support Keil, IAR and GCC, so as far as the compilers I know of go, this is far from everyone. How easy it is to use with a particular compiler depends on the person's skill level and on the user-interface support. Anton Erasmus
Chris H wrote:
> David Brown <david@westcontrol.removethisbit.com> writes: >
... snip ...
> >> I don't know what results they have, or certifications, or what >> validation they have for the different parts of their toolkits - >> that's more in the realms of "professional" support. > > It is quite difficult to do for GCC. Also it would only be for a > specific version and build. As so as you change anything you > have to re-test. Also it only applies to the binary. If you > release the source for some one lese to build it is not covered. > (Because any one could change the source or build it with a > different compiler.
I am only speaking from my understanding of the gcc organization. It is divided into various phases, terminating in a code generation (and possible optimization) phase. This is the only phase that requires adjustment in porting. So, if fooling with the syntactical areas is avoided, the port should be relatively easy. Note that a single port covers all the languages handled by gcc, which include at least Ada, C, C++, Fortran. Gnu publishes the validation tests it runs, which should verify all. -- [mail]: Chuck F (cbfalconer at maineline dot net) [page]: <http://cbfalconer.home.att.net> Try the download section.
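A minimal sketch, in the style of the DejaGnu-driven tests that gcc publishes in its testsuite, of how a fixed bug typically becomes a permanent regression test. The "dg-" directives are standard gcc testsuite conventions; the reproducer itself is hypothetical.

/* { dg-do run } */
/* { dg-options "-O2" } */

extern void abort(void);

/* Hypothetical minimal reproducer of a previously fixed optimiser bug. */
int twice_halved(int x)
{
    return (x * 2) / 2;
}

int main(void)
{
    if (twice_halved(21) != 21)
        abort();
    return 0;
}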
David Brown wrote:

> CodeSourcery releases compiler builds - you download the pre-packaged > binary and install it just as you would for any closed source tool. They > release new versions about twice a year (with faster updates for paying > customers), just like for closed source tools. They run all their > internal tests and validation (whatever these may be) on these builds, > just like for closed source tools.
Yep. And all the guys who want to compile for the ARMv7 architecture still use the 2007Q3 version (i.e. a one-year-old build). All the later versions fail in one way or another; some of the builds are so broken that they can't compile a simple byte-copy loop. I'm not complaining - I haven't paid anything for their GCC build. However, after your rant and praise for the CodeSourcery packages I can't resist pointing out that their compilers are not perfect either. A bit more internal quality assurance certainly would not hurt. Nils
> IAR compilers have been validated for Safety Critical use at SIL3. You > can't say that about GCC
Actually, I have used GCC in a SIL3 app. No point trying to validate the compiler, see http://www.nxtbook.com/nxtbooks/cmp/esd-europe0607/index.php?startpage=32 -- Regards, Richard. + http://www.FreeRTOS.org & http://www.FreeRTOS.org/shop 17 official architecture ports, more than 6000 downloads per month. + http://www.SafeRTOS.com Certified by TÜV as meeting the requirements for safety related systems.
In message <491c4570$0$20349$8404b019@news.wineasy.se>, David Brown 
<david@westcontrol.removethisbit.com> writes
> >>> They are also happy to run Plum Hall and other such validation >>>suites on their tools. >> Why don't they? Plum Hall or Perennial > > For example, IAR's web site says they test with Plum Hall and >Perennial - they don't give any results for these tests for their >compilers. I'm sure they would tell me if I ask, especially if I'm >offering lots of money - and I'm sure the same applies to CodeSourcery.
Actually they can't publish the results due to the licensing for both test suites.
>Personally, I am not interested in such big-name test suites.
Fine but they are the only recognised ones.
> I have no a priori reason to think that an expensive closed-source >test suite is any better than an open source test suite,
You missed out the "of similar standing". Is there an Open Source compiler test suite of similar standing to Plum Hall or Perennial?
>and plenty of reason to think that open source test suites are better >in some ways (for example, if a bug is found in gcc, then a test can be >added to the regression test suite to ensure that the bug is not >repeated in future versions).
You are confusing a build test suite and a language test suite. Most compiler companies have test suites for checking the build. Plum Hall and Perennial are language test suites.
>Certainly there are times when it is legally important to have >certifications from independent well-known third parties
Quite
>- but I don't think it is likely to make any realistic difference to >the reliability of the end product (it is *far* more likely that any >bugs are do to *my* programming, not the compiler).
And you are prepared to stand up in court on a corporate manslaughter charge with that argument? The law in the UK changed on the 6th April 2008 and has a bearing on SW development.
>>>validation they have for the different parts of their toolkits - >>>that's more in the realms of "professional" support. >> It is quite difficult to do for GCC. Also it would only be for a >>specific version and build. As so as you change anything you have to >>re-test. Also it only applies to the binary. If you release the >>source for some one lese to build it is not covered. (Because any one >>could change the source or build it with a different compiler. >> > >CodeSourcery releases compiler builds - you download the pre-packaged >binary and install it just as you would for any closed source tool. >They release new versions about twice a year (with faster updates for >paying customers), just like for closed source tools. They run all >their internal tests and validation (whatever these may be) on these >builds, just like for closed source tools.
Fair enough. These binaries can be tested and validated.
>Have a look at this post - it explains pretty well why you don't see >many gcc Plum Hall results published: > ><http://gcc.gnu.org/ml/gcc/2003-02/msg00652.html> ><http://gcc.gnu.org/ml/gcc/2003-02/msg01206.html> >
I know that. However..... as I have said it ONLY applies to the specific binary you test it with. Not the compiler per se. -- \/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\ \/\/\/\/\ Chris Hills Staffs England /\/\/\/\/ \/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Chris H wrote:
> In message <491c4570$0$20349$8404b019@news.wineasy.se>, David Brown > <david@westcontrol.removethisbit.com> writes >> >>>> They are also happy to run Plum Hall and other such validation >>>> suites on their tools. >>> Why don't they? Plum Hall or Perennial >> >> For example, IAR's web site says they test with Plum Hall and >> Perennial - they don't give any results for these tests for their >> compilers. I'm sure they would tell me if I ask, especially if I'm >> offering lots of money - and I'm sure the same applies to CodeSourcery. > > Actually they can't publish the results due to the licensing for both > test suites . >
Yes, I've had a little look around with Google - it seems there is not much anyone can say except that they "test with Plum Hall". I guess Plum Hall wants interested parties to buy their own license and test themselves.
>> Personally, I am not interested in such big-name test suites. > > Fine but they are the only recognised ones. > >> I have no a priori reason to think that an expensive closed-source >> test suite is any better than an open source test suite, > > You missed out the "of similar standing". Is there an Open Source > compiler test suite of similar standing to Pum-Hall or Perennial? > >> and plenty of reason to think that open source test suites are better >> in some ways (for example, if a bug is found in gcc, then a test can >> be added to the regression test suite to ensure that the bug is not >> repeated in future versions). > > You are confusing a build test suite and a language test suite. Most > compiler companies have test suites for checking the build. Pull-Hall > and Perennial are language test suites. >
Yes, that was my mistake. I asked CodeSourcery about their testing, and they made this point as well. They actively use Plum Hall to test for language conformance, and have found and fixed issues as a result. Because of licensing issues, they can't give out details, of course. Many of the issues that will be found using something like Plum Hall will be for unusual language uses - things that don't occur in normal real-world programming, but are nonetheless part of the language standards. That's why I don't feel these tests are of direct interest to me - if a flaw is so obscure that it is only found by such complete language tests rather than common test suites and common usage, then that flaw will not be triggered by *my* code, because I don't write obfuscated code.
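A sketch (not taken from Plum Hall or Perennial themselves) of the kind of legal but unusual ISO C that a full conformance suite exercises and day-to-day application code rarely does:

#include <stdio.h>

int main(void)
{
    char msg[] = "obscure";
    printf("%c\n", 3[msg]);      /* a[i] is *(a + i), so i[a] is legal; prints 'c' */

    int a<:2:> = <%1, 2%>;       /* digraphs for [ ] and { }, standard since C95 */
    printf("%d\n", a<:1:>);      /* prints 2 */
    return 0;
}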
>> Certainly there are times when it is legally important to have >> certifications from independent well-known third parties > > Quite > >> - but I don't think it is likely to make any realistic difference to >> the reliability of the end product (it is *far* more likely that any >> bugs are do to *my* programming, not the compiler). > > And you are prepared to stand up in court on a corporate manslaughter > charge with that argument? >
If I make a system that contains a bug that leads to death, am I less responsible if I can claim that the compiler used passes Plum Hall tests? If the Plum Hall tests are considered proof that the compiler is correct, that only increases the evidence that it was *my* code that caused the failure! The only legal benefit from having the Plum Hall certification is if the fault really was in the compiler - I could claim that I didn't need to check the compiler because Plum Hall said it was OK.
> The law in the UK changed on the 6th April 2008 and has a bearing on SW > development, > >>>> validation they have for the different parts of their toolkits - >>>> that's more in the realms of "professional" support. >>> It is quite difficult to do for GCC. Also it would only be for a >>> specific version and build. As so as you change anything you have to >>> re-test. Also it only applies to the binary. If you release the >>> source for some one lese to build it is not covered. (Because any >>> one could change the source or build it with a different compiler. >>> >> >> CodeSourcery releases compiler builds - you download the pre-packaged >> binary and install it just as you would for any closed source tool. >> They release new versions about twice a year (with faster updates for >> paying customers), just like for closed source tools. They run all >> their internal tests and validation (whatever these may be) on these >> builds, just like for closed source tools. > > Fair enough. These binaries can be tested and validated. > >> Have a look at this post - it explains pretty well why you don't see >> many gcc Plum Hall results published: >> >> <http://gcc.gnu.org/ml/gcc/2003-02/msg00652.html> >> <http://gcc.gnu.org/ml/gcc/2003-02/msg01206.html> >> > > I know that. However..... as I have said it ONLY applies to the specific > binary you test it with. Not the compiler per say.
I agree on that (although where do you stop? Does it only apply when the compiler binary is run on the same kind of processor as when it was tested?)
In message <2ZmdnVQyYdTzab_UnZ2dnUVZ8qjinZ2d@lyse.net>, David Brown 
<david.brown@hesbynett.removethisbit.no> writes
>Chris H wrote: >> In message <491c4570$0$20349$8404b019@news.wineasy.se>, David Brown >><david@westcontrol.removethisbit.com> writes >>> >>>>> They are also happy to run Plum Hall and other such validation >>>>>suites on their tools. >>>> Why don't they? Plum Hall or Perennial >>> >>> For example, IAR's web site says they test with Plum Hall and >>>Perennial - they don't give any results for these tests for their >>>compilers. I'm sure they would tell me if I ask, especially if I'm >>>offering lots of money - and I'm sure the same applies to CodeSourcery. >> Actually they can't publish the results due to the licensing for >>both test suites . >> > >Yes, I've had a little look around with Google - it seems there is not >much anyone can say except that they "test with Plum Hall". I guess >Plum Hall wants interested parties to buy their own license and test >themselves.
Quite. They need to eat too :-) Time and effort costs. These test suites are not small or insignificant. The current Perennial C test suite has over 68,000 tests and the C++ suite has 124,000 (C and C++ are different languages that parted company back in the 1980s).
>>> and plenty of reason to think that open source test suites are >>>better in some ways (for example, if a bug is found in gcc, then a >>>test can be added to the regression test suite to ensure that the bug >>>repeated in future versions). >> You are confusing a build test suite and a language test suite. >>Most compiler companies have test suites for checking the build. >>Pull-Hall and Perennial are language test suites. >> > >Yes, I that was my mistake. I asked CodeSourcery about their testing, >and they made this point as well.
It does get confusing. You need to check both that the thing was built right and that we built the right thing (verification and validation). Then you get on to testing the maths libraries, the assembler etc. :-)
> They actively use Plum Hall to test for language conformance, and have >found and fixed issues as a result. Because of licensing issues, they >can't give out details, of course.
Quite... BUT it only applies to that specific binary that was tested under a set of specific conditions. When we tested an ARM binary we did 30 sets of tests on the one compiler for 30 different ARM targets.
>Many of the issues that will be found using something like Plum Hall >will be for unusual language uses - things that don't occur in normal >real-world programming, but are nonetheless part of the language >standards.
This is the problem with "don't normally occur": different parts of the world each use a different 30% of the language, and between them they use about 99% of it. More to the point, there are a lot of things people don't know they use. How well do you know the internal workings of the library for your compiler? Do you know what the library uses? Probably best not to in some cases :-) The problem is you either test or you don't. There is no halfway house. If you test, you test all of it or not at all. If you test all of it you can list AND DOCUMENT the areas where you do not meet the standard. This is quite common with embedded compilers (and virtually all C99 compilers :-) - we meet the standard here BUT in the following places we do something different, and more importantly this is what we do where we don't meet the standard.
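A small sketch of that point, assuming a hypothetical log_msg wrapper of the kind vendor libraries use internally: even a "plain" logging call drags the variadic-argument machinery and the vendor's printf implementation into the surface that needs testing.

#include <stdarg.h>
#include <stdio.h>

/* Hypothetical wrapper, as a vendor library might provide internally. */
static int log_msg(const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    int n = vfprintf(stderr, fmt, ap);   /* variadic forwarding via va_list */
    va_end(ap);
    return n;
}

int main(void)
{
    log_msg("value = %d\n", 42);
    return 0;
}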
> That's why I don't feel these tests are of direct interest for me - >if a flaw is so obscure that it is only found by such complete language >tests rather than common test suites and common usage, then that flaw >will not be triggered by *my* code, because I don't write obfuscated code.
Not true...... It depends on how the compiler works internally and how it optimises. Also it depends what you are doing. There are parts of C99 that the majority have not implemented. However they were put in because small pressure groups got them in and you can bet that they use them in the one or two compilers that implemented them. However what does your compiler do when it meets those constructs?
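A sketch of some C99 features that many embedded compilers never implemented; which of these a given toolchain accepts, and what it does when it meets them, is exactly the question raised above.

#include <complex.h>
#include <stdio.h>

static void demo(int n)
{
    double buf[n];                              /* variable-length array (C99) */
    double _Complex z = 1.0 + 2.0 * I;          /* complex arithmetic (C99) */
    struct point { int x, y; } p = { .y = 4 };  /* designated initialiser (C99) */

    printf("%zu %f %d\n", sizeof buf, creal(z), p.y);
}

int main(void)
{
    demo(8);
    return 0;
}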
>>> - but I don't think it is likely to make any realistic difference to >>>the reliability of the end product (it is *far* more likely that any >>>bugs are do to *my* programming, not the compiler). >> And you are prepared to stand up in court on a corporate >>manslaughter charge with that argument? > >If I make a system that contains a bug that leads to death, am I less >responsible if I can claim that the compiler used passes Plum Hall >tests?
It depends on the accident. However it does show that you have taken reasonable care to ensure the compiler meets the specification of an ISO C compiler.
> If the Plum Hall tests are considered proof that the compiler is >correct,
Not at all. :-) It is simply that, given a set of inputs as described in the standard, you get a set of outputs as specified in the standard. Or not, and if not, what it does instead.
> that only increases the evidence that it was *my* code that caused the >failure!
I am not a lawyer. Yes. However, if you are not using a tested compiler, or you roll your own from source, then you are more likely to be seen as liable for having used untested or unsuitable tools. How do we know they are "unsuitable" tools? Because they are not validated or tested. Do you want to be hanged for one crime or guillotined for the other? :-)
>The only legal benefit from having the Plum Hall certification is if >the fault really was in the compiler - I could claim that I didn't need >to check the compiler because Plum Hall said it was OK.
Yes. However, for safety-critical use you have to show due diligence on the tools. Plum Hall (or Perennial) is only part of a validation of a compiler. However, having taken reasonable steps, if the fault is in the compiler I would think (not being a lawyer) that it would lessen your liability somewhat. BTW the new Corporate Manslaughter Act that came into force this year provides for fines of up to 15% of *TURNOVER* (not profit) and jail sentences for directors and responsible managers.
>> I know that. However..... as I have said it ONLY applies to the >>specific binary you test it with. Not the compiler per say. > >I agree on that (although where do you stop? Does it only apply when >the compiler binary is run on the same kind of processor as when it was >tested?)
Not sure what you mean here. Normally you will run the compiler binary on, for example, a Windows platform and test it. If that compiler binary is distributed on various versions of Windows then there should be no problem. For Macs I assume you would test (at least we would) on both the PPC and the Intel platforms. If you have a compiler that you distribute on PowerPC, SPARC, Intel, Alpha, MIPS etc. then you build and test on EACH of those. For the embedded ARM compiler we test on 30 different targets, though in all cases the compiler binary runs on the same Windows host (XP SP2 so far). -- \/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\ \/\/\/\/\ Chris Hills Staffs England /\/\/\/\/ \/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
> If I make a system that contains a bug that leads to death, am I less > responsible if I can claim that the compiler used passes Plum Hall tests?
You are liable only if you have been negligent. To not be negligent you have to show that you have used due diligence to identify and mitigate hazards. The compiler is a potential source of problems, so if you fail to identify this I suppose you could be considered negligent. Once you have identified the compiler you have to do what is necessary to remove it as a potential problem. Running the compiler through a test suite is one way of showing that you are taking care, but there are others (as per my previous post in this thread) and test suites have their limitations. Somebody very high up in the FAA once said to me - "it's impossible to validate a compiler as there are an infinite number of inputs" [BTW: I don't buy that statement myself]. I have spoken with people who have attempted a formal (i.e. mathematical) proof of a compiler, but this is too big a task to be viable unless you seriously restrict the inputs.
> If the Plum Hall tests are considered proof that the compiler is correct,
It's not proof, but it is using the "state of the art" and doing all that is practical *to show language compliance*, and therefore a worthwhile exercise if you are worried *about language compliance*. If you test your code fully from requirements to object code then the compiler can be completely non-"standard" and your code can still be shown to be completely conformant to its specified behaviour. On the assumption that, if your code has the potential to cause death, you are going to test it pretty damn well, the compiler makes little difference. Bum code generation will be picked up when a test fails.
> that only increases the evidence that it was *my* code that caused the > failure!
If somebody is dead, then an investigation will find the source of the problem whether you try to hide behind somebody else's unpublished results or not. At least this is the case in risk-averse, knee-jerk-reaction British society.
> The only legal benefit from having the Plum Hall certification is if the > fault really was in the compiler - I could claim that I didn't need to > check the compiler because Plum Hall said it was OK.
Hmm. Out of interest, I have seen most bugs in generated code where non-standard features are used (like __interrupt qualifiers, etc.) and particularly subtle stack frame problems that can be very reliant on particular events occurring just at the wrong moment (temporal effects) - for example an interrupt being taken just as another interrupt is being exited. The interrupt entry/exit code is very hardware specific. Do the Plum Hall tests pick up on that sort of thing? -- Regards, Richard. + http://www.FreeRTOS.org & http://www.FreeRTOS.org/shop 17 official architecture ports, more than 6000 downloads per month. + http://www.SafeRTOS.com Certified by TÜV as meeting the requirements for safety related systems.
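A hedged sketch of the construct described above: the __interrupt keyword is a vendor extension rather than ISO C, so a language conformance suite never exercises it. The keyword spelling varies between toolchains, and the empty define below is only there so the sketch compiles on a host compiler.

#ifndef __interrupt
#define __interrupt   /* placeholder on a host compiler; a real target toolchain defines its own */
#endif

volatile unsigned int tick_count;

__interrupt void timer_isr(void)
{
    /* The compiler, not the programmer, emits the register save/restore
     * prologue and epilogue here; a bug in that generated code typically
     * only shows up when another interrupt arrives at exactly the wrong
     * moment - the temporal effect mentioned above. */
    tick_count++;
}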
Chris H wrote:
> In message <2ZmdnVQyYdTzab_UnZ2dnUVZ8qjinZ2d@lyse.net>, David Brown > <david.brown@hesbynett.removethisbit.no> writes >> Chris H wrote: >>> In message <491c4570$0$20349$8404b019@news.wineasy.se>, David Brown >>> <david@westcontrol.removethisbit.com> writes
I'm snipping a lot of this because it's getting a bit unwieldy - you can assume that I basically agree with your comments if I've snipped them.
> >> Many of the issues that will be found using something like Plum Hall >> will be for unusual language uses - things that don't occur in normal >> real-world programming, but are nonetheless part of the language >> standards. > > This is a problem... "don't normally occur" the world has a differing > 30% of the language they use. Overall they use about 99% of it. More > to the point there are a lot of things they don't know they use. How > well do you know the internal workings of the library for your compiler? > Do you know what the library uses? Probably best not to in some cases :-) > > The problem is you either test or you don't. There is no halfway house. > > If you test you test all of it or not at all. If you test all you can > list AND DOCUMENT the areas you do not meet. This is quite common with > embedded compilers (and virtually all C99 compilers :-) We meet the > standard here BUT in the following places we do something different. > More importantly this is what we do where we don't meet the standard. > >> That's why I don't feel these tests are of direct interest for me - >> if a flaw is so obscure that it is only found by such complete >> language tests rather than common test suites and common usage, then >> that flaw will not be triggered by *my* code, because I don't write >> obfuscated code. > > Not true...... It depends on how the compiler works internally and how > it optimises. Also it depends what you are doing. There are parts of > C99 that the majority have not implemented. However they were put in > because small pressure groups got them in and you can bet that they use > them in the one or two compilers that implemented them. However what > does your compiler do when it meets those constructs? >
First off, I don't care how my compiler reacts to these few language features put in for a small pressure group - I'm not in one of these groups, and I don't use those features. More generally, if you code to a particular standard (say, MISRA), and you have tests and code reviews in place that enforce those standards, then you don't need to consider how your compiler deals with code outside those standards. Consider the clichéd car analogy. If you buy a car in Britain, and only ever drive it in Britain, then you don't care if it has been tested in outside temperatures of over 45 C or under -25 C. As you won't be using it outside these parameters, you don't have to worry about them. If you know that you are always careful about checking the oil levels, then you don't care how the car reacts to a lack of oil - that corner-case situation does not apply to you. I'm sure the car manufacturer will do more testing - but *you* don't care about such tests. <snip>
>>> I know that. However..... as I have said it ONLY applies to the >>> specific binary you test it with. Not the compiler per say. >> >> I agree on that (although where do you stop? Does it only apply when >> the compiler binary is run on the same kind of processor as when it >> was tested?) > > Not sure what you mean here. Normally you will run the compiler binary > on for example a windows platform and test it. If that compiler binary > is distributed on various versions of windows then there should be no > problem. > > For MAC's I assume you would test (at least we would) on both the PPC > and the Intel platforms. > > If you have a compiler that you distribute on Power PC, Sparc, Intel, > Apha, MIPS etc then you build and test on EACH of those. > > For the Embedded Arm compiler we test on 30 different targets. Though > in all cases the compiler binary runs on the same Windows host (XP SP2 > so far. >
Sometimes processors have bugs (like the Pentium FDIV bug), or perhaps differences in the way they handle apparently identical instructions. Do you test your compiler for compliance on all possible processors, or do you assume it works the same on each one? Sometimes the various versions of windows have differences that might affect the compiler behaviour (it's unlikely, of course, but can you be sure? Different system libraries might give different results in odd cases). Do you test them all? What about less usual circumstances, like running the windows binary under Wine, or using some sort of virtualisation software? Perhaps the compiler has a bug in its __DATE__ macro that is only apparent on 29th February - do you have to test the compiler on each day? Libraries need to be validated too - if you have different libraries, do you need to validate each compiler/library combination individually? What about compiler switches - do they also need to be considered? My point is that testing a binary, or testing a binary/target combination, is an arbitrary boundary (albeit a reasonable choice). You could also argue that you should not consider a compiler Plum Hall validated unless you have run Plum Hall on the compiler running on *your* development PC. Or you could argue that it is fine to validate it for a particular source code / configuration combination (as long as it is then compiled with a validated compiler, of course).
In message <dbadnUQoMPAdzL7UnZ2dneKdnZydnZ2d@lyse.net>, David Brown 
<david.brown@hesbynett.removethisbit.no> writes
>Chris H wrote: >> In message <2ZmdnVQyYdTzab_UnZ2dnUVZ8qjinZ2d@lyse.net>, David Brown >><david.brown@hesbynett.removethisbit.no> writes >>> Chris H wrote: >>>> In message <491c4570$0$20349$8404b019@news.wineasy.se>, David Brown >>>><david@westcontrol.removethisbit.com> writes > >I'm snipping a lot of this because it's getting a bit unwieldy - you >can assume that I basically agree with your comments if I've snipped >them.
OK... I was getting a similar problem :-)
>>> Many of the issues that will be found using something like Plum Hall >>>will be for unusual language uses - things that don't occur in normal >>>real-world programming, but are nonetheless part of the language >>>standards. >> This is a problem... "don't normally occur" the world has a >>differing 30% of the language they use. Overall they use about 99% of >>it. More to the point there are a lot of things they don't know they >>use. How well do you know the internal workings of the library for >>your compiler? Do you know what the library uses? Probably best not >>to in some cases :-) >> The problem is you either test or you don't. There is no halfway >>house. >> If you test you test all of it or not at all. If you test all you >>can list AND DOCUMENT the areas you do not meet. This is quite common >>with embedded compilers (and virtually all C99 compilers :-) We meet >>standard here BUT in the following places we do something different. >>More importantly this is what we do where we don't meet the standard. >> >>> That's why I don't feel these tests are of direct interest for me - >>>if a flaw is so obscure that it is only found by such complete >>>language tests rather than common test suites and common usage, then >>>that flaw will not be triggered by *my* code, because I don't write >>>obfuscated code. >> Not true...... It depends on how the compiler works internally and >>how it optimises. Also it depends what you are doing. There are parts >>C99 that the majority have not implemented. However they were put in >>because small pressure groups got them in and you can bet that they >>use them in the one or two compilers that implemented them. However >>what does your compiler do when it meets those constructs? >> > >First off, I don't care how my compiler reacts to these few language >features put in for a small pressure group - I'm not in one of these >groups, and I don't use those features.
How do you know that? It may use some of those constructs indirectly in the library. Which features? Actually there are two sets of testing, one for hosted and one for freestanding compilers; the differences are specified in the standard. These correspond to compilers that target a system that uses an OS and ones that target a system without an OS (the majority of cases).
>More generally, if you code to a particular standard (say, MISRA),
MISRA is neither a standard nor a full subset.
> and you have tests and code reviews in place that enforce those >standards, then you don't need to consider how your compiler deals with >code outside those standards.
This is not true. It depends.... The whole point of compiler testing is that you remove the "it depends". Also, I am willing to bet that you use many unspecified and implementation-defined parts of the standard. For example, there are three char types: the two integer types signed char and unsigned char, and then plain char, used for characters. Is plain char signed or unsigned? That depends on your implementation, and it has an effect across the whole standard library.
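A minimal demonstration of the plain-char point: its signedness is implementation-defined, so the same conformant source can behave differently under two conformant compilers.

#include <limits.h>
#include <stdio.h>

int main(void)
{
    char c = (char)0xFF;
    printf("CHAR_MIN = %d, (int)c = %d\n", CHAR_MIN, (int)c);
    /* Typically prints "-128, -1" where plain char is signed
       and "0, 255" where it is unsigned. */
    return 0;
}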
>Consider the clichéd car analogy. If you buy a car in Britain, and >only ever drive it in Britain, then you don't care if it has been >tested in outside temperatures of over 45 C or under -25 C. As you >won't be using it outside these parameters, you don't have to worry >about them.
Yes you will. You cannot guarantee that the UK temperature will never go outside those limits. More to the point, you cannot guarantee that someone will not drive it to the Munich Oktoberfest. It gets below -25 there.
> If you know that you are always careful about checking the oil >levels, then you don't care how the car reacts to a lack of oil - that >corner-case situation does not apply to you.
Correct. However, should you knock the sump on a speed hump (and you can't miss them these days) and lose oil, then no matter how carefully you checked the oil before you travelled you will be driving the car without oil. Assuming the world is perfect means you don't have to test at all. However, you are talking about fault conditions, not whether the car performs as specified. Compiler testing is about whether it performs as specified, not what happens if I break parts of it.
>I'm sure the car manufacturer will do more testing - but *you* don't >care about such tests.
So you have a set of tests for a car to be driven in the UK and a set of tests for a car to be driven in Germany... What if I only want to drive it in Surrey on Tuesdays? For many years people could get (unofficially and illegally) death-trap cars MOT'd because they were only using them locally, etc. That is what you are arguing for. Of course the real world, other vehicles and other people intruded into this local world.
>>>> I know that. However..... as I have said it ONLY applies to the >>>>specific binary you test it with. Not the compiler per say. >>> >>> I agree on that (although where do you stop? Does it only apply >>>when the compiler binary is run on the same kind of processor as when >>>was tested?) >> Not sure what you mean here. Normally you will run the compiler >>binary on for example a windows platform and test it. If that >>compiler binary is distributed on various versions of windows then >>there should be no problem. >> For MAC's I assume you would test (at least we would) on both the >>PPC and the Intel platforms. >> If you have a compiler that you distribute on Power PC, Sparc, >>Intel, Apha, MIPS etc then you build and test on EACH of those. >> For the Embedded Arm compiler we test on 30 different targets. >>Though in all cases the compiler binary runs on the same Windows host >>(XP SP2 so far. >> > >Sometimes processors have bugs (like the Pentium FDIV bug), or perhaps >differences in the way they handle apparently identical instructions. >Do you test your compiler for compliance on all possible processors, or >do you assume it works the same on each one?
Interesting point. Do you mean as a host or as a target? If for targets then yes we test on multiple targets. The compiler targeted for ARM processors is tested on about 30 different Arm targets.
>Sometimes the various versions of windows have differences that might >affect the compiler behaviour (it's unlikely, of course, but can you be >sure?
No, we specify the host OS - normally Windows XP in our case. The compilers only run on Windows.
> Different system libraries might give different results in odd cases). >Do you test them all?
No. You are thinking of GCC again. The compilers we test are supplied with a single standard library, and we test that. However, it does raise the point that with GCC you have multiple libraries from many sources. This makes it even more difficult to test GCC.
> What about less usual circumstances, like running the windows binary >under Wine, or using some sort of virtualisation software?
We do not test it under those hosts. We state a specific host OS. Emulations and virtual systems are up to you, and you would need to test on those systems for higher SIL. Again it makes the use of these systems far more difficult for safety-critical systems. Hence the reason why it is damned difficult to validate GCC systems.
> Perhaps the compiler has a bug in its __DATE__ macro that is only >apparent on 29th February - do you have to test the compiler on each >day?
Not every day, no. I will see if I can find out what the testing for the __DATE__ macro is since you raise it.
>Libraries need to be validated too - if you have different libraries, >do you need to validate each compiler/library combination individually?
Yes. However most commercial compilers come with a single system library. If you are using additional libraries you have to validate them too. However that would not be part of the compiler validation.
> What about compiler switches - do they also need to be considered?
Definitely. Tests are run for different target memory configurations etc. This is why for C there are over 68000 tests and you run them multiple times for different configurations.
>My point is that testing a binary, or testing a binary/target >combination, is an arbitrary boundary (albeit a reasonable choice).
Not at all.
> You could also argue that you should not consider a compiler Plum >Hall validated unless you have run Plum Hall on the compiler running on >*your* development PC.
For SIL4 that is exactly what you do. For SIL 1-3 you can use a reference platform, i.e. WinXP SP2, as long as you are using a Windows host for development.
> Or you could argue that it is fine to validate it for a particular >source code / configuration combination (as long as it is then compiled >with a validated compiler, of course).
No. You validate the binary because the source can be altered. -- \/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\ \/\/\/\/\ Chris Hills Staffs England /\/\/\/\/ \/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/