EmbeddedRelated.com

ARM IDE

Started by flash011 November 12, 2008
Chris H wrote:
> In message <dbadnUQoMPAdzL7UnZ2dneKdnZydnZ2d@lyse.net>, David Brown
> <david.brown@hesbynett.removethisbit.no> writes
>> Chris H wrote:
>>> In message <2ZmdnVQyYdTzab_UnZ2dnUVZ8qjinZ2d@lyse.net>, David Brown
>>> <david.brown@hesbynett.removethisbit.no> writes
>>>> Chris H wrote:
>>>>> In message <491c4570$0$20349$8404b019@news.wineasy.se>, David Brown
>>>>> <david@westcontrol.removethisbit.com> writes
>>
>> I'm snipping a lot of this because it's getting a bit unwieldy - you
>> can assume that I basically agree with your comments if I've snipped
>> them.
>
> OK... I was getting a similar problem :-)
>
>>>> Many of the issues that will be found using something like Plum Hall
>>>> will be for unusual language uses - things that don't occur in
>>>> normal real-world programming, but are nonetheless part of the
>>>> language standards.
>>> This is a problem... "don't normally occur" the world has a
>>> differing 30% of the language they use. Overall they use about 99%
>>> of it. More to the point there are a lot of things they don't know
>>> they use. How well do you know the internal workings of the library
>>> for your compiler? Do you know what the library uses? Probably best
>>> not to in some cases :-)
>>> The problem is you either test or you don't. There is no halfway
>>> house.
>>> If you test you test all of it or not at all. If you test all you
>>> can list AND DOCUMENT the areas you do not meet. This is quite
>>> common with embedded compilers (and virtually all C99 compilers :-)
>>> We meet standard here BUT in the following places we do something
>>> different. More importantly this is what we do where we don't meet
>>> the standard.
>>>
>>>> That's why I don't feel these tests are of direct interest for me -
>>>> if a flaw is so obscure that it is only found by such complete
>>>> language tests rather than common test suites and common usage, then
>>>> that flaw will not be triggered by *my* code, because I don't write
>>>> obfuscated code.
>>> Not true...... It depends on how the compiler works internally and
>>> how it optimises. Also it depends what you are doing. There are
>>> parts of C99 that the majority have not implemented. However they were
>>> put in because small pressure groups got them in and you can bet that
>>> they use them in the one or two compilers that implemented them.
>>> However what does your compiler do when it meets those constructs?
>>>
>>
>> First off, I don't care how my compiler reacts to these few language
>> features put in for a small pressure group - I'm not in one of these
>> groups, and I don't use those features.
>
> How do you know that? It may use some of those constructs indirectly
> in the library. Which features?
>
Any constructs used in common libraries automatically fall under the label of commonly used and well-tested features. Incorrect behaviour (either because the compiler writers did not properly interpret the standards, or because the compiler does not implement their interpretation) will be quickly spotted and handled by the build test suites.
> Actually there are two sets of testing, one for hosted and one for
> free-standing compilers. However the differences are specified in the
> standard. This is for compilers that target a system that uses an OS and
> for systems that target a system without an OS (the majority of cases).
>
>> More generally, if you code to a particular standard (say, MISRA),
>
> MISRA is neither a standard nor a full subset.
>
>> and you have tests and code reviews in place that enforce those
>> standards, then you don't need to consider how your compiler deals
>> with code outside those standards.
>
> This is not true. It depends.... The whole point of compiler testing is
> that you remove the "it depends". Also I am willing to bet that you use
> many unspecified and undefined parts of the standard... these are the
> implementation defined parts.
>
My point is that there is no need for Plum Hall validation for common features and common language constructs - the compiler's standard test suites should cover all these features. Don't get me wrong - I am still glad that my compiler suppliers use Plum Hall to test and improve the tools. More testing, especially testing in different and independent ways, is always good. But I don't see any benefit for *me* in knowing the details of the Plum Hall validation tests.

Add to this mix the problem that most embedded compilers have some non-standard additions or extensions that are essential for their use and effectively invalidate independent tests. If a compiler has an extra "flash" keyword, for example, then you must either test with the keyword disabled (which means testing with different settings from when you use the compiler), or with the keyword enabled (which means the compiler is no longer compliant, as you can't use "flash" as an identifier). Even if there is a middle ground such as "__flash" as the keyword, the Plum Hall tests will not cover this feature, which will be heavily used in real programs, meaning that their worth as an independent test tool is greatly diminished.
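To make the "flash" keyword point concrete, here is a minimal sketch. The `__VENDOR_CC__` macro and the `__flash` qualifier are hypothetical stand-ins for whatever a particular vendor provides; the point is only that a bare `flash` keyword would clash with conforming code, while a reserved-name spelling hidden behind a macro does not.

```c
/* Hypothetical vendor extension: __flash places an object in flash
 * memory.  Guarded behind a macro so the code still compiles as
 * standard C when the extension is absent. */
#ifdef __VENDOR_CC__             /* assumed vendor-specific macro */
#  define FLASH __flash
#else
#  define FLASH                  /* plain C: the qualifier vanishes */
#endif

static FLASH const unsigned char lookup[4] = { 1, 2, 4, 8 };

/* In ISO C "flash" is an ordinary identifier, so a compiler that made
 * it a bare keyword would reject perfectly conforming code like this: */
int flash = 0;

unsigned char get_bit(int i)
{
    return lookup[i & 3];
}
```

A Plum Hall style suite, testing pure ISO C, exercises neither branch of that `#ifdef` in the vendor's extended mode - which is exactly the gap being described.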
> For example there are three types of char.
> The two integer types signed char and unsigned char.
> Then there is the plain char. Used for characters. Is it signed or
> unsigned? This depends on your implementation. This has an effect
> across the whole standard library.
>
Well-written libraries will function identically regardless of the sign of plain "char". But it is certainly important to remember that there are parts of the C standards that are not fully specified, and are implementation-dependent. Plum Hall cannot validate these (although perhaps it can report on them in some way) - with multiple correct answers, there is no pass/fail test. So source code that depends on these features may work on one Plum Hall validated compiler, and fail on another. The only way to get around this is to avoid constructs that have such dependencies (for example, by using "signed char" or "unsigned char" explicitly when it is relevant). That's much the same thing as avoiding obscure (but standards-compliant) language constructs that are unlikely to be well reviewed and tested by the compiler's standard test suites.
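The plain-char point can be shown in a few lines. This is a sketch, not any particular compiler's behaviour: `CHAR_MIN` reveals the implementation's choice, and naming the sign explicitly sidesteps the question.

```c
#include <limits.h>

/* Whether plain "char" is signed is implementation-defined (C99 6.2.5);
 * CHAR_MIN tells you which choice this implementation made. */
int plain_char_is_signed(void)
{
    return CHAR_MIN < 0;
}

/* Portable code names the sign explicitly: unsigned char for raw bytes
 * (0..255), signed char when a negative range is needed.  Only plain
 * char varies between implementations:
 *
 *     char c = (char)0x80;   // -128 on signed-char targets, 128 otherwise
 */
unsigned char raw_byte(void)  { return 0x80; }   /* always 128 */
signed char   small_neg(void) { return -1;  }    /* always negative */
```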
>> Consider the clichéd car analogy. If you buy a car in Britain, and
>> only ever drive it in Britain, then you don't care if it has been
>> tested in outside temperatures of over 45 C or under -25 C. As you
>> won't be using it outside these parameters, you don't have to worry
>> about them.
>
> Yes you will. You can not guarantee that the UK temperature will never
> go outside those limits. More to the point you can not guarantee that
> some one will not drive it to the Munich Oktober Fest. It gets below -25
> there.
>
>> If you know that you are always careful about checking the oil
>> levels, then you don't care how the car reacts to a lack of oil - that
>> corner-case situation does not apply to you.
>
> Correct. However should you knock the sump on a speed hump (and you
> can't miss them these days) and lose oil, no matter how careful you are
> checking the oil before you travel you will be driving the car without oil.
>
> Assuming the world is perfect means you don't have to test at all.
> However you are talking about fault conditions, not does the car perform
> as specified.
>
> Compiler testing is does it perform as specified, not what happens if I
> break parts of it.
>
Perhaps I'm not making myself clear. As a compiler user, or car driver, I am concerned that the tools work as I expect them to do, when *I* use them. If I don't drive in temperatures under -25 C, then I am not concerned about how the car reacts in those circumstances. That's all there is to it - you can't start adding fantasies about how I *might* theoretically be able to go outside my assumptions. A car that explodes when you start the engine at -26 C is still within my required specifications. A C++ compiler that generates subtle bugs for a six-layer deep class hierarchy with multiple virtual inheritance and overloaded virtual "friend" operators is also within my required specifications for a compiler - it's not code that I would use or consider safe.
>> I'm sure the car manufacturer will do more testing - but *you* don't
>> care about such tests.
>
> So you have a set of tests for a car to be driven in the UK and a set of
> tests for a car to be driven in Germany...
>
> What if I only want to drive it in Surrey on Tuesdays? For many years
> people could get (unofficially and illegally) death trap cars MOT'ed
> because they were only using them locally etc.
>
> That is what you are arguing for.
>
> Of course the real world and other vehicles and people intruded into
> this local world.
>
Again, you are misunderstanding me. I'm in favour of compiler manufacturers using Plum Hall (as CodeSourcery and other gcc testers do), and doing as much testing as is reasonably practical - I just don't care about validations or certifications that are not relevant to my use of the tools. A car manufacturer will use the same tests for the UK and Germany (although they might distinguish between Norway and Oman markets), but I don't care about the results outside my area of interest.
>>>>> I know that. However..... as I have said it ONLY applies to the
>>>>> specific binary you test it with. Not the compiler per se.
>>>>
>>>> I agree on that (although where do you stop? Does it only apply
>>>> when the compiler binary is run on the same kind of processor as
>>>> when it was tested?)
>>> Not sure what you mean here. Normally you will run the compiler
>>> binary on for example a Windows platform and test it. If that
>>> compiler binary is distributed on various versions of Windows then
>>> there should be no problem.
>>> For Macs I assume you would test (at least we would) on both the
>>> PPC and the Intel platforms.
>>> If you have a compiler that you distribute on PowerPC, Sparc,
>>> Intel, Alpha, MIPS etc then you build and test on EACH of those.
>>> For the embedded ARM compiler we test on 30 different targets.
>>> Though in all cases the compiler binary runs on the same Windows
>>> host (XP SP2 so far).
>>>
>>
>> Sometimes processors have bugs (like the Pentium FDIV bug), or perhaps
>> differences in the way they handle apparently identical instructions.
>> Do you test your compiler for compliance on all possible processors,
>> or do you assume it works the same on each one?
>
> Interesting point. Do you mean as a host or as a target? If for
> targets then yes we test on multiple targets. The compiler targeted for
> ARM processors is tested on about 30 different ARM targets.
>
I was thinking of the host here (as you already talked about different targets). That's what's relevant when you decide that it is the compiler binary that's important.
>> Sometimes the various versions of windows have differences that might
>> affect the compiler behaviour (it's unlikely, of course, but can you
>> be sure?
>
> No, we specify the host OS. Normally Windows XP in our case. The
> compilers only run on Windows.
>
There are dozens of variants of XP (different service packs, different languages, different choice of additional software that can give different versions of system libraries). And what about compilers that run under different OS's? There are plenty of commercial tools that run under a variety of *nix's.
>> Different system libraries might give different results in odd cases).
>> Do you test them all?
>
> No. You are thinking GCC again. The compilers we test are supplied with
> a single standard library. We test that. However it does raise the
> point that with GCC you have multiple libraries from many sources.
> This makes it even more difficult to test GCC.
>
Perhaps the compilers you sell are limited in this way. Other tools may come with multiple libraries, or multiple variants of their libraries (perhaps maths libraries designed for small size, high speed, or high accuracy). If you add in support for different operating systems, you get a whole new set of library issues - even the basic C libraries can come in variants for different OS's, and with or without support for things like multi-threading.

I think it is fair to say that if you want a compiler that can claim to be third-party "validated" or "certified" in some way, you are talking about a *very* restricted set of circumstances - bare-bones with no operating system, a specific small library, specific target processors, specific compiler settings, and specific host environments (including OS, processor, and installed additional software). It's like Windows NT's famous "C2" security levels - they are only valid on a machine that is so locked-down that it is barely usable, and don't apply in the real world.

If my company felt that Plum Hall validation was relevant to our work, there would be no possible choice except to buy the test suite and run it ourselves on our workstations. Of course, it would be useful to know that our compiler suppliers had run the tests themselves - then we would know what to expect. But only our own results would have any real weight.
>> What about less usual circumstances, like running the windows binary
>> under Wine, or using some sort of virtualisation software?
>
> We do not test it under those hosts. We state a specific host OS.
> Emulations and virtual systems are up to you and you would need to test
> on those systems for higher SIL.
>
> Again it makes the use of these systems far more difficult for safety
> critical systems. Hence the reason why it is damned difficult to
> validate GCC systems.
>
Validating a software design process and a software project for SIL is *always* difficult. I haven't dealt with higher SIL levels, but I did work on a project with lower SIL levels (the software was written in assembly) - continuous and extensive functional testing and a design and development methodology that avoids or detects flaws, are the key elements. Choice of tools is obviously important - but tool validation by Plum Hall is neither necessary nor sufficient.
>> Perhaps the compiler has a bug in its __DATE__ macro that is only
>> apparent on 29th February - do you have to test the compiler on each day?
>
> Not every day, no. I will see if I can find out what the testing for the
> __DATE__ macro is since you raise it.
>
Don't worry too much about it - it was just a random example.
>> Libraries need to be validated too - if you have different libraries,
>> do you need to validate each compiler/library combination individually?
>
> Yes. However most commercial compilers come with a single system
> library. If you are using additional libraries you have to validate
> them too. However that would not be part of the compiler validation.
>
>> What about compiler switches - do they also need to be considered?
>
> Definitely. Tests are run for different target memory configurations etc.
>
> This is why for C there are over 68000 tests and you run them multiple
> times for different configurations.
>
>> My point is that testing a binary, or testing a binary/target
>> combination, is an arbitrary boundary (albeit a reasonable choice).
>
> Not at all.
>
>> You could also argue that you should not consider a compiler Plum
>> Hall validated unless you have run Plum Hall on the compiler running
>> on *your* development PC.
>
> For SIL4 that is exactly what you do. For SIL 1-3 you can use a
> reference platform, i.e. WinXP SP2, as long as you are using a Windows
> host for development.
>
>> Or you could argue that it is fine to validate it for a particular
>> source code / configuration combination (as long as it is then
>> compiled with a validated compiler, of course).
>
> No. You validate the binary because the source can be altered.
>
So can the binary if you try hard enough (or have a virus on your windows machine, as many do). Clearly any testing or validation on a source code bundle will only be valid as long as the source code is not modified.
In message <4923d0c8$0$2063$8404b019@news.wineasy.se>, David Brown 
<david@westcontrol.removethisbit.com> writes
>Chris H wrote:
>>>> However what does your compiler do when it meets those constructs?
>>>>
>>>
>>> First off, I don't care how my compiler reacts to these few language
>>> features put in for a small pressure group - I'm not in one of these
>>> groups, and I don't use those features.
>> How do you know that? It may use some of those constructs
>> indirectly in the library. Which features?
>>
>
>Any constructs used in common libraries automatically fall under the
>label of commonly used and well-tested features.
Not all are implemented or commonly used. Complex maths?
> Incorrect behaviour (either because the compiler writers did not
> properly interpret the standards, or because the compiler does not
> implement their interpretation) will be quickly spotted and handled by
> the build test suites.
Only if you do the full tests. And document all the unspecified, undefined and implementation-specific behaviour the standard permits.
> My point is that there is no need for Plum Hall validation for common
> features and common language constructs - the compiler's standard test
> suites should cover all these features.
Not at all. Also the Plum-Hall and Perennial are independent tests. Writing your own tests proves little.
> Don't get me wrong - I am still glad that my compiler suppliers use
> Plum Hall to test and improve the tools. More testing, especially
> testing in different and independent ways, is always good. But I don't
> see any benefit for *me* to know the details of the Plum Hall
> validation tests.
Correct. There is generally no need for users doing no safety-critical or mission-critical work to know the test results. You just need to know that the compiler manufacturer is taking all reasonable steps to produce a good compiler. When you get to safety-critical work the burden of checking suitability is on you, not the tool company.
> Add to this mix, you have the problem that most embedded compilers have
> some non-standard additions or extensions that are essential for their
> use and effectively invalidate independent tests.
Certainly not.
> If a compiler has an extra "flash" keyword, for example, then you must
> either test with the keyword disabled (which means testing with
> different settings from when you use the compiler), or with the keyword
> enabled (which means the compiler is no longer compliant, as you can't
> use "flash" as an identifier). Even if there is a middle ground such
> as "__flash" as the keyword, the Plum Hall tests will not cover this
> feature, which will be heavily used in real programs, meaning that
> their worth as an independent test tool is greatly diminished.
Plum-Hall and Perennial test standard C. Then you document the differences between the compiler and standard C. There are also other test suites internal to the compiler company, such as build tests, that you look at. Validating a compiler usually requires an NDA with the compiler manufacturer to look at their other tests, development and build procedures. This is not required for the majority, but it is for safety-critical work. If there is an error in the system, people get killed and you end up in court.
>> For example there are three types of char.
>> The two integer types signed char and unsigned char.
>> Then there is the plain char. Used for characters. Is it signed or
>> unsigned? This depends on your implementation. This has an effect
>> across the whole standard library.
>
> Well-written libraries will function identically independently of the
> sign of plain "char".
They should... but is [plain] char signed or unsigned? It's not just the libraries. It is YOUR code as well. Do you pass a signed char or unsigned char to the libraries? This is a problem if you use MISRA-C and/or static analysis. It *may* also change if you change libraries.
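One place where passing plain char to the library really does bite is `<ctype.h>`: those functions require an argument representable as unsigned char (or EOF), so a plain char holding a high-bit value is undefined behaviour on a signed-char implementation. A small sketch of the safe idiom:

```c
#include <ctype.h>

/* On a signed-char implementation, *s may be negative for bytes above
 * 0x7F (e.g. 0xE9, 'é' in Latin-1), and passing a negative value other
 * than EOF to isalpha() is undefined behaviour.  Casting through
 * unsigned char makes the call correct on every implementation,
 * whichever sign plain char has. */
int count_alpha(const char *s)
{
    int n = 0;
    for (; *s != '\0'; s++)
        if (isalpha((unsigned char)*s))
            n++;
    return n;
}
```

Code that omits the cast can pass every test on an unsigned-char compiler and misbehave when rebuilt with a signed-char one - which is exactly the portability trap being discussed.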
> But it is certainly important to remember that there are parts of the C
> standards that are not fully specified, and are implementation
> dependent.
Yes.
> Plum Hall cannot validate these (although perhaps it can report on
> them in some way) - with multiple correct answers, there is no
> pass/fail test.
Quite.
> So source code that is dependent on these features may work on one Plum
> Hall validated compiler, and fail on another.
Yes. There is more to it than just the one test suite.
> The only way to get around this is to avoid using constructs that have
> such dependencies (for example, using "signed char" or "unsigned char"
> explicitly when it is relevant).
Yes. However the libraries use [plain] char. This is something we have debated several times in the MISRA meetings. The problem is the ISO C teams do not want to enforce it one way or the other, as it will break many compilers and much source code.
> That's much the same thing as avoiding obscure (but
> standards-compliant) language constructs
I agree. Best to avoid some things.
> that are unlikely to be well reviewed and tested by the compiler's
> standard test suites.
No. If it is specified, it will be well tested.
> Perhaps I'm not making myself clear. As a compiler user, or car
> driver, I am concerned that the tools work as I expect them to do, when
> *I* use them.
Yes, but there are several thousand "I"s who all want to use the compiler differently.
> If I don't drive in temperatures under -25 C, then I am not concerned
> about how the car reacts in those circumstances.
But what about those who do, and never drive it in conditions over +2? The problem (and I have seen this many times) is that what is "normal" for one user is abnormal for another. There is no "normal" that 95% of users agree on. Otherwise we would ditch the hosted versions of C. The vast majority of C development is on embedded systems.....
> Again, you are misunderstanding me. I'm in favour of compiler
> manufacturers using Plum Hall (as CodeSourcery and other gcc testers
> do), and doing as much testing as reasonably practical - I just don't
> care about validations or certifications that are not relevant in my
> use of the tools.
You don't need them. Only those doing safety critical work do.
> A car manufacturer will use the same tests for the UK and Germany
> (although they might distinguish between Norway and Oman markets), but
> I don't care about the results outside my area of interest.
You don't, but others do. The compiler company needs to test for all users. The validation for critical use also has to test the whole tool.
> I was thinking of the host here (as you already talked about different
> targets). That's what's relevant when you decide that it is the
> compiler binary that's important.
You can only test the binary. The source is as everyone points out modifiable by the user.
> There are dozens of variants of XP (different service packs, different
> languages, different choice of additional software that can give
> different versions of system libraries).
Standard compiler system libraries?
> And what about compilers that run under different OS's? There are
> plenty of commercial tools that run under a variety of *nix's.
Then you specify which you test on.
>>> Different system libraries might give different results in odd
>>> cases). Do you test them all?
>> No. You are thinking GCC again. The compilers we test are supplied
>> with a single standard library. We test that. However it does raise
>> the point that with GCC you have multiple libraries from many
>> sources. This makes it even more difficult to test GCC.
>>
>
> Perhaps the compilers you sell are limited in this way. Other tools
> may come with multiple libraries, or multiple variants of their
> libraries (perhaps maths libraries designed for small size, high speed,
> or high accuracy).
Then you test all of them, if that is what is supplied. BTW we do; I misunderstood what you meant.
> If you add in support for different operating systems, you get a whole
> new set of library issues - even the basic C libraries can come in
> variants for different OS's, and with or without support for things
> like multi-threading.
Yes. That is why compiler Validation is not a simple thing.
> I think it is fair to say that if you want a compiler that can claim to
> be third-party "validated" or "certified" in some way, you are talking
> about a *very* restricted set of circumstances - bare-bones with no
> operating system, a specific small library, specific target processors,
> specific compiler settings, and specific host environments (including
> OS, processor, and installed additional software).
No. Not VERY restricted. For example we test the IAR compilers. They run on Windows. We use XP for that. We test with all variants of their standard library. Usually on multiple targets. This can take a couple of days.
> It's like Windows NT's famous "C2" security levels - they are only
> valid on a machine that is so locked-down that it is barely usable, and
> don't apply in the real world.
No idea.
> If my company felt that Plum Hall validation was relevant to our work,
> there would be no possible choice except to buy the test suite and run
> it ourselves on our workstations.
For SIL4 you need to do that. For SIL0-3 you can use independent validation on a reference platform similar to the one you use.
> Of course, it would be useful to know that our compiler suppliers had
> run the tests themselves - then we would know what to expect. But only
> our own results would have any real weight.
Yes.
> Validating a software design process and a software project for SIL is
> *always* difficult.
Yes. Not simple either. For the tools you are not validating the process used to build the tools, just that for the following set of legal inputs you get the following legal outputs. However, looking at the compiler manufacturer's system does increase confidence and due diligence.
> I haven't dealt with higher SIL levels, but I did work on a project
> with lower SIL levels (the software was written in assembly) -
> continuous and extensive functional testing and a design and
> development methodology that avoids or detects flaws, are the key
> elements. Choice of tools is obviously important - but tool validation
> by Plum Hall is neither necessary nor sufficient.
Plum Hall does not do assembler testing AFAIK. Neither does Perennial.
>>> Perhaps the compiler has a bug in its __DATE__ macro that is only
>>> apparent on 29th February - do you have to test the compiler on each
>>> day?
>>
>> Not every day, no. I will see if I can find out what the testing for
>> the __DATE__ macro is since you raise it.
>
> Don't worry too much about it - it was just a random example.
Of course I will worry about it... I am curious, now you have mentioned it :-)

-- 
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England    /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/

David Brown wrote:

> With commercial closed source tools, it's a rarity that you can get in
> touch with the developers directly. Often you deal with dedicated
> support staff who know less about the tools and target than expert users
The compiler business is small and it is generally quite possible to contact the developers directly. Support staff deals with the day to day stuff but developers are generally very much involved in dealing with real issues. Unlike open source it is generally easy to determine who actually is responsible for specific design and implementation decisions.
> Personally, I am not interested in such big-name test suites. I have no
> a priori reason to think that an expensive closed-source test suite is
> any better than an open source test suite, and plenty of reason to think
> that open source test suites are better in some ways (for example, if a
> bug is found in gcc, then a test can be added to the regression test
> suite to ensure that the bug is not repeated in future versions). Of
> course, it is always better to test with as many testsuites as
> conveniently possible (none of them will cover everything).
"Big-name test suites" offer organized, methodical language testing to a specific standard. There are no similar FOSS test suites available that I know of. GCC's regression tests, the last time I looked, were a small scattered collection of random code fragments.

One of the biggest problems facing compiler testing is the ability to deal with testing multiple code generation sequences that come from syntactically identical sources - code generation that deals with variable placement context, for example. Outside commercial tools very little serious work has been done to address this problem.

Regards
-- 
Walter Banks
Byte Craft Limited
http://www.bytecraft.com
In message <492418D3.51E02ED@bytecraft.com>, Walter Banks 
<walter@bytecraft.com> writes
>David Brown wrote:
>
>> With commercial closed source tools, it's a rarity that you can get in
>> touch with the developers directly. Often you deal with dedicated
>> support staff who know less about the tools and target than expert users
>
> The compiler business is small and it is generally quite
> possible to contact the developers directly. Support staff
> deals with the day to day stuff but developers are generally
> very much involved in dealing with real issues. Unlike
> open source it is generally easy to determine who actually
> is responsible for specific design and implementation decisions.
Quite so. I can directly get in touch with all the developers and project managers for any of the compilers I sell. Then again, I have NDAs with them as a distributor. So far I have not had any legitimate question from a user that we have not been able to answer. More to the point, with the complete version control they have, they can tell me EXACTLY who did which source file when, with the full history. Also they have all the supporting documentation. I doubt you can do this with any version of GCC.
>> Personally, I am not interested in such big-name test suites. I have no
>> a priori reason to think that an expensive closed-source test suite is
>> any better than an open source test suite, and plenty of reason to think
>> that open source test suites are better in some ways (for example, if a
>> bug is found in gcc, then a test can be added to the regression test
>> suite to ensure that the bug is not repeated in future versions). Of
>> course, it is always better to test with as many testsuites as
>> conveniently possible (none of them will cover everything).
>
> "big-name test suites" offer organized methodical language testing to
> a specific standard.
Quite. The Big Name test suites are developed to a standard as high as safety critical software. Full histories and documentation with formal procedures.
> There are no similar FOSS test suites
> available that I know of. GCC's regression tests the last time I looked
> were a small scattered collection of random code fragments.
Quite. The problem is that whilst the big-name suites have some provenance - the authors are also on the C and C++ standards bodies and helped shape the standard they are writing a test suite for - the GCC test suite does not have that same standing.
Chris H wrote:
> In message <4923d0c8$0$2063$8404b019@news.wineasy.se>, David Brown
>> My point is that there is no need for Plum Hall validation for common
>> features and common language constructs - the compiler's standard test
>> suites should cover all these features.
>
> Not at all. Also the Plum-Hall and Perennial are independent tests.
> Writing your own tests proves little.
Of course it does. Neither Plum Hall nor Perennial is perfect. During the last five years, I've found two or three code generation bugs in a commercial compiler claiming to be verified by those two, with embarrassingly simple pieces of code (like a backwards for loop). Of course I hope my failing simple code snippets have since been added to the compiler maker's test suite, to make sure this doesn't happen again.

Ironically, my score on gcc bugs is just one in 10 years. No, I don't want to draw any statistically significant conclusion from that, but I wouldn't say a compiler is significantly better than another just because the first's test suite has a "Plum Hall" label on it.
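For readers wondering what "a backwards for loop" looks like as a test case, here is a sketch of the sort of snippet described (the exact failing code is not given in the post, so this is only illustrative). It uses the count-down idiom that also avoids the classic source-level trap of writing `i >= 0` with an unsigned counter, which never terminates:

```c
/* Counts down from n-1 to 0.  "i-- > 0" tests before decrementing, so
 * the loop body sees n-1, n-2, ..., 0 and then stops cleanly - even
 * though i is unsigned.  (With "i >= 0" the condition would always be
 * true for an unsigned i, giving an infinite loop.) */
unsigned sum_below(unsigned n)
{
    unsigned sum = 0;
    for (unsigned i = n; i-- > 0; )
        sum += i;
    return sum;
}
```

Once a snippet like this has exposed a code generation bug, adding it to the regression suite is exactly the open-source practice the quoted post praised: the bug cannot silently reappear in a later release.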
> Validating a compiler usually requires an NDA with the compiler > manufacturer to look at their other tests, development and build > procedures. > > This is not required for the majority but it is for Safety Critical. If > there is an error in the system people get killed and you end up in court.
Define Safety Critical. We're doing automotive stuff, and view almost everything as safety-critical. Nobody is instantly killed if the MPEG decoder in your car stereo crashes, the RDS interpreter tunes to a wrong frequency, or the display goes black. But it may irritate the driver (especially if the bug is accompanied by loud noise), distract him from traffic, and *boom*. Still we hope to be able to avoid having to validate the compiler. We're doing classic testing, with all sorts of black-box and white-box tests, to get the chance of a software failure low enough. After all, software isn't the only thing that can fail: compiler and software can be perfect, yet the CPU can have a bug, or a resistor can fall off the PCB. Having a validated compiler just increases the probability that the problem was not introduced in the ".c"-to-".o" conversion; it does not eliminate it.
>> There are dozens of variants of XP (different service packs, different >> languages, different choice of additional software that can give >> different versions of system libraries). > > Standard compiler system libraries?
Of course a compiler will use things like kernel32.dll, which might behave differently on different versions of XP. One advance beta version of the linker we use behaved differently on XP and 2000. Stefan
Chris H wrote:
> In message <492418D3.51E02ED@bytecraft.com>, Walter Banks >> David Brown wrote: >>> With commercial closed source tools, it's a rarity that you can get in >>> touch with the developers directly. Often you deal with dedicated >>> support staff who know less about the tools and target than expert users >> >> The compiler business is small and it is generally quite >> possible to contact the developers directly. Support staff >> deals with the day to day stuff but developers are generally >> very much involved in dealing with real issues. Unlike >> open source it is generally easy to determine who actually >> is responsible for specific design and implementation decisions. > > Quite so I can directly get in touch with all the developers and project > managers for any of the compilers I sell. Then again I have NDAs with > them as a distributor. So far I have not had any legitimate question > from a user that we have not been able to answer.
Well, you *think* you've answered their questions satisfactorily. Maybe they just gave up trying to explain the problem. Multi-level support has its disadvantages when you want to go into deep details. Say you want to know whether __mulli3 has been designed with CPU erratum #1009823 and application note #512 in mind, and whether you can safely call it from inline assembly (a completely made-up example). If all you have is a first-level support contact, you'll have a hard time trying to explain to them what you want. Here, some phone companies implement the last mile using VoIP and DSL. Try calling them to find out whether they use G.711 or G.729: with one, fax will work; with the other, it will not. Now explain that to a support agent who is trained to tell people to check that their devices are correctly plugged in.
> More to the point with the complete version control they have they can > tell me EXACTLY who did which source file when and the full history. > Also they have all the supporting documentation. I doubt you can do this > with any version of GCC
As far as I know, most open-source projects (including gcc) have public, world-readable version control systems. But if I have a problem I don't care who *did* X, I want to know who can *fix* X. This isn't in the VCS. Stefan
In message <gg1o0s.to.1@stefan.msgid.phost.de>, Stefan Reuther 
<stefan.news@arcor.de> writes
>Chris H wrote: >> Quite so I can directly get in touch with all the developers and project >> managers for any of the compilers I sell. Then again I have NDAs with >> them as a distributor. So far I have not had any legitimate question >> from a user that we have not been able to answer. > >Well, you *think* you've answered their questions satisfactorily. Maybe >they just gave up trying to explain the problem.
No.
>Multi-level support has its disadvantages when you want to go into deep >details. Like, you want to know whether __mulli3 has been designed with >CPU erratum #1009823 and application note #512 in mind, and whether you >can safely call it from inline assembly (completely made-up example). If >all you have is a first-level support contact, you'll have a hard time >trying to explain them what you want.
Why?
Chris H wrote:
> In message <492418D3.51E02ED@bytecraft.com>, Walter Banks > <walter@bytecraft.com> writes >> >> >> David Brown wrote: >> >>> With commercial closed source tools, it's a rarity that you can get in >>> touch with the developers directly. Often you deal with dedicated >>> support staff who know less about the tools and target than expert users >> >> The compiler business is small and it is generally quite >> possible to contact the developers directly. Support staff >> deals with the day to day stuff but developers are generally >> very much involved in dealing with real issues. Unlike >> open source it is generally easy to determine who actually >> is responsible for specific design and implementation decisions. > > Quite so I can directly get in touch with all the developers and project > managers for any of the compilers I sell. Then again I have NDAs with > them as a distributor. So far I have not had any legitimate question > from a user that we have not been able to answer. >
An end-user does not normally have these NDAs or this sort of influence with the compiler companies - getting developer contact can be hard or impossible if you are a small company with only one or a few licenses. Going through a distributor (such as yourself) is one way to get contact with the experts - assuming that all steps in the chain are happy to help. Of course, that's a generalisation and there are plenty of exceptions. I'm sure Bytecraft will give excellent support directly from the developers, if we can judge by Walter Banks' contributions to this public newsgroup. I have also had excellent technical support from ImageCraft - not exactly a big player, but famous for their support. On the other hand, I have had technical issues with a big name compiler in which I found a bug in their library. I even sent them a fix for it. I was basically told that they don't care about the problem, and that they were not going to fix it. Technical support quality is definitely a widely varying priority for different suppliers - part of the job of a distributor is to know and recommend suppliers on such criteria.
> More to the point with the complete version control they have they can > tell me EXACTLY who did which source file when and the full history. > Also they have all the supporting documentation. I doubt you can do this > with any version of GCC >
I don't need to ask - I can simply look at the publicly available source code repository: http://gcc.gnu.org/viewcvs/
In message <qu-dnVzkTpHOwbnUnZ2dnUVZ8vKdnZ2d@lyse.net>, David Brown 
<david.brown@hesbynett.removethisbit.no> writes
>Chris H wrote: >>> is responsible for specific design and implementation decisions. >> Quite so I can directly get in touch with all the developers and >>project managers for any of the compilers I sell. Then again I have >>NDAs with them as a distributor. So far I have not had any legitimate >>question from a user that we have not been able to answer. >An end-user does not normally have these NDAs or this sort of influence >with the compiler companies - getting developer contact can be hard or >impossible if you are a small company with only one or a few licenses. >Going through a distributor (such as yourself) is one way to get >contact with the experts - assuming that all steps in the chain are >happy to help.
Yes. That is how it works. However, the problem is that some questions (and I have seen these questions sent simultaneously to several distis and the compiler company) are answered on page 2 of the manual. In many cases the person asking the question has very limited experience with the language and the target, and is asking the wrong question completely. This is why compiler companies use first-line support and distis to weed out the idiots, simpletons and those who seem unable to read the manual. The other point is that, with an NDA in place, I can discuss the problem with the end user and the developer in more depth. It helps that I am a qualified and experienced SW engineer. The problem is that once a user has a developer's email address, they think they have a personal friend and help line for life: the first port of call (before the manual) is the developer. Where I get any support from inside a company I always remove any direct email addresses to the developers before passing on the information. Getting direct access to the developers doesn't happen in any other industry. BTW, can the users of your company's products talk DIRECTLY to you about the software functionality of your company's products?
>Of course, that's a generalisation and there are plenty of exceptions. >I'm sure Bytecraft will give excellent support directly from the >developers, if we can judge by Walter Banks' contributions to this >public newsgroup
Where required they do.
>. I have also had excellent technical support from ImageCraft - not >exactly a big player, but famous for their support.
Quite so.
>On the other hand, I have had technical issues with a big name compiler >in which I found a bug in their library. I even sent them a fix for >it. I was basically told that they don't care about the problem, and >that they were not going to fix it.
Interesting. Love to know who that was....
>Technical support quality is definitely a widely varying priority for >different suppliers - part of the job of a distributor is to know and >recommend suppliers on such criteria.
Distributors recommend specific tools on many criteria. (BTW, there is no universal "best" compiler for anything... well, that's not strictly true: for some of the arcane MCUs there is sometimes "the best", but in other cases there are many factors to weigh up. None of the factors is the size of the margin, before you ask!)
>> More to the point with the complete version control they have they >>can tell me EXACTLY who did which source file when and the full >>history. Also they have all the supporting documentation. I doubt you >>can do this with any version of GCC >I don't need to ask - I can simply look at the publicly available >source code repository: >http://gcc.gnu.org/viewcvs/
For EVERY version of GCC? You know what all the code does and why it is there?
Chris H wrote:
> In message <gg1o0s.to.1@stefan.msgid.phost.de>, Stefan Reuther >> Chris H wrote: >>> Quite so I can directly get in touch with all the developers and project >>> managers for any of the compilers I sell. Then again I have NDAs with >>> them as a distributor. So far I have not had any legitimate question >>> from a user that we have not been able to answer. >> >> Well, you *think* you've answered their questions satisfactorily. Maybe >> they just gave up trying to explain the problem. > > No.
How do you know?
>> Multi-level support has its disadvantages when you want to go into deep >> details. Like, you want to know whether __mulli3 has been designed with >> CPU erratum #1009823 and application note #512 in mind, and whether you >> can safely call it from inline assembly (completely made-up example). If >> all you have is a first-level support contact, you'll have a hard time >> trying to explain them what you want. > > Why?
Granted, developer support is a *lot* more helpful than phone company support, but I have already got bug reports back for formal reasons without anyone having looked at the problem. One that I remember is that a filesystem library was unmarshalling stuff using 'char', not 'unsigned char'. Everyone sees that this is wrong by looking at the code; I sent the file name and line number. The answer I got back was "please send a self-contained test case including an image of the failing file system". Had I been talking to a developer familiar with the code, not a support guy, he would probably have fixed that in a second. (Various events led to the situation that we don't use that library; hence I have not dug much further.) Stefan
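As a made-up illustration of that bug class (function names and byte values are invented; this is not the actual library code), a sketch of plain-`char` versus `unsigned char` unmarshalling:

```c
/* Sketch of the plain-'char' unmarshalling bug class described above.
   Whether plain char is signed is implementation-defined; on targets
   where it is signed, a byte >= 0x80 sign-extends when widened to
   int, corrupting the decoded value. */
#include <stdint.h>

/* Buggy: if char is signed, p[1] holding byte 0xFF widens to -1, and
   the OR smears 1-bits over the whole result. */
uint32_t read_u16_buggy(const char *p)
{
    return (uint32_t)((p[0] << 8) | p[1]);
}

/* Fixed: unsigned char keeps every byte in the range 0..255. */
uint32_t read_u16_fixed(const unsigned char *p)
{
    return ((uint32_t)p[0] << 8) | (uint32_t)p[1];
}
```

On a signed-char target, `read_u16_buggy` applied to the bytes { 0x12, 0xFF } yields 0xFFFFFFFF instead of 0x12FF; the fixed version returns 0x12FF regardless of the signedness of plain char.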
