Reply by CBFalconer July 15, 2005
Lanarcam wrote:
>
... snip ...
> This is true for linear calls of functions, but when
> you have asynchronous execution this is not sufficient.
> For instance when you have interrupts or preemptive tasks.
Those can generally be characterized with loop invariants, or just plain invariants. A producer/consumer relationship is typical. The real bear is when you have to guarantee worst case delays.
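A minimal sketch of such a producer/consumer invariant, assuming a hypothetical byte queue filled by an ISR and drained by the main loop (the names and sizes are illustrative only):

#include <assert.h>
#include <stdint.h>

#define BUF_SIZE 16u

static volatile uint8_t buf[BUF_SIZE];
static volatile uint8_t head;  /* written only by the producer (ISR)  */
static volatile uint8_t tail;  /* written only by the consumer (main) */

/* Items currently queued; the invariant is 0 <= count() < BUF_SIZE. */
static uint8_t count(void)
{
    return (uint8_t)((head + BUF_SIZE - tail) % BUF_SIZE);
}

void producer_isr(uint8_t byte)        /* called from the interrupt */
{
    if (count() < BUF_SIZE - 1u) {     /* never overwrite the tail  */
        buf[head] = byte;
        head = (uint8_t)((head + 1u) % BUF_SIZE);
    }
}

int consumer_poll(uint8_t *out)        /* called from the main loop */
{
    assert(count() < BUF_SIZE);        /* the invariant under test  */
    if (count() == 0u) {
        return 0;                      /* nothing to consume        */
    }
    *out = buf[tail];
    tail = (uint8_t)((tail + 1u) % BUF_SIZE);
    return 1;
}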
> It is true however that what you suggest should be used
> for safety critical systems. For these systems you have
> generally a main loop that executes functions cyclically
> and no interrupts. It is much easier to prove the
> correctness of such designs.
You may note I cited Per Brinch Hansen earlier. He and his staff could write a complete multiprocessing operating system and be virtually convinced of correctness out of the gate. He had (has) a genius for this sort of organization and simplification. Yet the overall system can be highly complex and deal with many interrupts and processes.
Reply by Lanarcam July 15, 2005

CBFalconer wrote:
> Not Really Me wrote:
> >
> ... snip ...
> >
> > You are correct. You cannot prove correctness, but you must try
> > to prove incorrectness.
> >
> > We use this adage, "A developer's job is to make their software
> > work. A tester's job is to try to prove that it doesn't!"
>
> I consider that fundamentally flawed. A developer should write
> code that is obviously correct. That means short routines that can
> be described in a few sentences, with close type checking on
> parameters. Brinch Hansen was a master at this.
I tend to agree with the idea that testing is about looking for errors, not verifying correctness. But this is more of a philosophical debate and I should perhaps stick to facts.

I agree that what you suggest is sound advice, and if programmers respected it there would be fewer bugs. But even then you can never guarantee that you have checked every condition, except in functions that are trivial. You could do that, but you would have to use formal methods, and they are not practical in general. I know some who used them in rail transport systems, but they were gurus.

Even if you have proved the correctness of functions, when you assemble the whole thing you can find unexpected errors. If the design is simple they should be caught easily. But if you have several interrelated tasks of some complexity, you can't assume you have no bugs simply because you have written your code carefully. There are what are called real-time bugs, caused by unexpected events that no one had suspected. And the problem is: when do you know you have corrected the last one?
> This leaves the tester checking boundary conditions, and for proper
> usage of those routines.
This is true for linear calls of functions, but when you have asynchronous execution this is not sufficient. For instance when you have interrupts or preemptive tasks. It is true however that what you suggest should be used for safety critical systems. For these systems you have generally a main loop that executes functions cyclically and no interrupts. It is much easier to prove the correctness of such designs. But for other types of systems this is not always practical.
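A minimal sketch of that cyclic style, with hypothetical function names; the whole behaviour is one fixed sequence driven from a single main loop, with no interrupts:

#include <stdbool.h>

static bool timer_tick_elapsed(void) { return true; /* poll a timer flag */ }

static void read_inputs(void)     { /* sample sensors into module data  */ }
static void compute_outputs(void) { /* pure computation on that data    */ }
static void write_outputs(void)   { /* drive actuators from the results */ }

int main(void)
{
    for (;;) {                        /* one pass = one fixed-length cycle */
        read_inputs();
        compute_outputs();
        write_outputs();
        while (!timer_tick_elapsed()) {
            ;                         /* busy-wait on the cycle period     */
        }
    }
}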
Reply by CBFalconer July 15, 2005
Not Really Me wrote:
>
... snip ...
> You are correct. You cannot prove correctness, but you must try
> to prove incorrectness.
>
> We use this adage, "A developer's job is to make their software
> work. A tester's job is to try to prove that it doesn't!"
I consider that fundamentally flawed. A developer should write code that is obviously correct. That means short routines that can be described in a few sentences, with close type checking on parameters. Brinch Hansen was a master at this. This leaves the tester checking boundary conditions, and for proper usage of those routines.
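A sketch of the style being advocated, using a hypothetical conversion routine: short enough to describe in one sentence, with its parameter domain checked at the boundary.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Converts a raw 10-bit ADC reading to tenths of a degree Celsius.
   Returns false (and leaves *out untouched) for out-of-domain inputs. */
bool adc_to_decicelsius(uint16_t raw, int16_t *out)
{
    if (out == NULL || raw > 1023u) {
        return false;                          /* reject bad parameters    */
    }
    *out = (int16_t)((int32_t)raw * 5 - 500);  /* hypothetical calibration */
    return true;
}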
Reply by Lanarcam July 15, 2005

Not Really Me wrote:
> "Arno Nuehm" <arno@localhost> wrote in message > news:42d77d27_2@news.arcor-ip.de... > > In article <1121374174.402532.73020@g44g2000cwa.googlegroups.com>, > > "Lanarcam" <lanarcam1@yahoo.fr> writes: > >> > <snip> > > Right (except that (please excuse my nitpicking...) functional > > correctness cannot be *proven* by testing) > > > > Thanks for your help > > > > Arno > > You are correct. You cannot prove correctness, but you must try to prove > incorrectness. > > We use this adage, "A developers job is to make their software work. A > testers job is to try to prove that it doesn't!"
This is right; my use of the verb "prove" was sloppy ;) This leads to the nightmare of the conscientious programmer who can never sleep well while his creation is in the wild. You sometimes stop testing because of the project schedule or because you run short of funds. In that case you can never be sure that all bugs have been removed. When you write code that will be part of a certified system, you at least have the approval of the official certification body. A colleague said that a piece of software is free of bugs only when the customer has got rid of it ;)
Reply by Not Really Me July 15, 2005
"Arno Nuehm" <arno@localhost> wrote in message 
news:db62to$cue$00$1@news.t-online.com...
> Hi Scott,
>
> In article <3jnbdmFquvd9U1@individual.net>,
> "Not Really Me" <scott@exoXYZtech.com> writes:
>>
<snip>
> If you could shed some light on the process that was used to define those
> tests for MicroC/OS-II, it would probably help me a lot. But I can
> see pretty well that you can't do that as it's probably the basis of
> your company's business.
>
> Thanks
>
> Arno
The response by Lanarcam says it pretty well. Even though it is not a safety-critical app, I highly recommend that you get copies of the RTCA specs DO-178B and DO-248. While not aimed directly at security, they do identify the steps that you will need to follow.

Simply put, the rule of thumb is "have a plan and test everything". If your company doesn't have them, generate configuration management procedures and plans, software QA procedures and plans, specs for requirements, designs and tests, test plans, and keep everything under source/document control.

Test the requirements - are they complete and correct?
Test the design - does it match the requirements?
Test the code - does it match the design?
Test the white box tests - do they do an adequate job, and do they test all the low-level (design) requirements?
Test the build and test procedures - can you repeat everything if necessary?
Test the black box tests - do they test every requirement? Argh!

Generate a traceability matrix. This is a matrix that correlates the requirements with the design, with the code, and with the tests.

Oh, and good luck. (Contact us if you need help/advice on any of the above)

Scott
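As an illustrative sketch (all IDs and names below are invented), a traceability matrix can be as simple as a table kept under the same configuration control as the code:

/* One row per requirement: requirement -> design element -> code unit
   -> test case. Gaps in any column point at untraced or untested items. */
struct trace_row {
    const char *req_id;     /* requirement identifier in the SRS          */
    const char *design_id;  /* design element that implements it          */
    const char *code_unit;  /* source file / function                     */
    const char *test_id;    /* black box or white box test that covers it */
};

static const struct trace_row trace_matrix[] = {
    { "SRS-012", "SDD-4.2", "uart.c:uart_send", "TC-BB-031" },
    { "SRS-013", "SDD-4.3", "uart.c:uart_recv", "TC-BB-032" },
    /* ... one row for every requirement ... */
};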
Reply by Not Really Me July 15, 2005
"Arno Nuehm" <arno@localhost> wrote in message 
news:42d77d27_2@news.arcor-ip.de...
> In article <1121374174.402532.73020@g44g2000cwa.googlegroups.com>,
> "Lanarcam" <lanarcam1@yahoo.fr> writes:
>>
<snip>
> Right (except that (please excuse my nitpicking...) functional
> correctness cannot be *proven* by testing)
>
> Thanks for your help
>
> Arno
You are correct. You cannot prove correctness, but you must try to prove incorrectness.

We use this adage, "A developer's job is to make their software work. A tester's job is to try to prove that it doesn't!"

Scott
Reply by Arno Nuehm July 15, 2005
In article <1121374174.402532.73020@g44g2000cwa.googlegroups.com>,
	"Lanarcam" <lanarcam1@yahoo.fr> writes:
> Black box testing is not testing individual C functions
> one by one. You must test functionalities.
.. and functionalities should be stated as requirements in the specification. I think I'm beginning to understand now. The problem I was facing is that my specification has some requirements stated along with the functions they correspond to, but there are also requirements which do not correspond directly to any single function. So, the obvious approach of using the list of functions as the main structure for the test document won't work here. I'll have to start from the list of requirements.
> If your system exhibits a state, you must test all
> C functions in every accessible state.
OK. I could treat the state affecting a function's behavior as if it were another "virtual" parameter to the function. Sounds good.
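A sketch of that idea against a toy module (the mode, the functions, and the expected values are all hypothetical): each test vector names the state to establish, the real argument, and the result required in that state.

#include <assert.h>

enum mode { MODE_IDLE, MODE_RUN, MODE_FAULT };

/* Toy stand-ins for the real module under test. */
static enum mode current_mode = MODE_IDLE;
static void set_mode(enum mode m) { current_mode = m; }
static int command(int arg)
{
    switch (current_mode) {
    case MODE_RUN:   return arg;   /* accepted while running      */
    case MODE_FAULT: return -1;    /* rejected with an error code */
    default:         return 0;     /* silently ignored while idle */
    }
}

/* The state is the "virtual" first parameter of every test vector. */
struct vector { enum mode state; int arg; int expected; };

static const struct vector vectors[] = {
    { MODE_IDLE,  7, 0 },
    { MODE_RUN,   7, 7 },
    { MODE_FAULT, 7, -1 },
};

int main(void)
{
    for (unsigned i = 0; i < sizeof vectors / sizeof vectors[0]; i++) {
        set_mode(vectors[i].state);              /* establish the state */
        assert(command(vectors[i].arg) == vectors[i].expected);
    }
    return 0;
}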
> This seems like a lot of work, but you can't avoid it if you
> need to prove the functional correctness.
Right (except that (please excuse my nitpicking...) functional correctness cannot be *proven* by testing)

Thanks for your help

Arno
Reply by Lanarcam July 14, 2005

Arno Nuehm wrote:
> In article <1121359563.096610.231190@g14g2000cwa.googlegroups.com>,
> "Lanarcam" <lanarcam1@yahoo.fr> writes:
> >
> > Software tests are performed as black box tests
> > and/or white box tests.
> >
>
> I see white box tests as a supplemental thing, i.e. you first do
> a requirements-based black-box testing and then, to ensure that
> nothing was missed, you do a coverage analysis. If the coverage
> analysis finds any white spots, it means that either the code in
> question can not be made to execute, which means that it is
> superfluous and can be removed, or there must be some requirement
> that this code fulfils and that is missing from the specification
> or has been overlooked.
>
> IMHO, it is important to do it this way around (i.e. black box
> followed by white box test). IOW, I think it is plain wrong to
> define test cases with the sole purpose of achieving code coverage.
>
> > You usually perform black box tests first where you consider
> > the system under test as a set of functions with external
> > access. You are interested only in externally visible
> > behaviour, you don't explicitly test internal features.
> > You must ensure that every function call is tested,
> > that all possible combinations of parameters are used,
> > you define ranges of values for this, and if state diagrams
> > are known from the outside, you must ensure that every
> > combination of states and conditions is tested.
> > Black box testing is, in short, a test against external
> > requirements.
>
> My problem is that there are certain functions that manipulate
> the system's state, and, depending on that state, some *other*
> functions change in behavior. Therefore, I suspect that I will
> not catch all necessary test cases by just exercising every single
> function on its own. I was hoping for some advice on how to deal
> with such situations.
Black box testing is not testing individual C functions one by one. You must test functionalities. If your system exhibits a state, you must test all C functions in every accessible state. This seems like a lot of work, but you can't avoid it if you need to prove the functional correctness. If you can show that some functions behave independently of the various states and that they don't modify the state, you can exclude them from the set. But in order to do so you must perform some analysis of the code and prove it.
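A sketch of the resulting cross product, with hypothetical names: enter_state() would drive the system into each accessible state through its normal interface, and each check exercises one externally visible call in that state (here the checks only print; a real harness would assert on the results).

#include <stddef.h>
#include <stdio.h>

enum state { ST_INIT, ST_READY, ST_ACTIVE, ST_ERROR, ST_COUNT };

static void enter_state(enum state s)
{
    /* In a real harness this drives the system into state s through its
       normal external interface (init, configure, inject a fault, ...). */
    (void)s;
}

static void check_start(enum state s)  { printf("start()  in state %d\n", s); }
static void check_stop(enum state s)   { printf("stop()   in state %d\n", s); }
static void check_status(enum state s) { printf("status() in state %d\n", s); }

/* Every externally visible call gets exercised in every accessible state. */
static void (*const checks[])(enum state) = { check_start, check_stop, check_status };

int main(void)
{
    for (int s = 0; s < ST_COUNT; s++) {
        for (size_t f = 0; f < sizeof checks / sizeof checks[0]; f++) {
            enter_state((enum state)s);   /* re-establish the state per case */
            checks[f]((enum state)s);
        }
    }
    return 0;
}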
Reply by gooch July 14, 2005

> You usually perform black box tests first where you consider
> the system under test as a set of functions with external
> access. You are interested only in externally visible
> behaviour, you don't explicitly test internal features.
> You must ensure that every function call is tested,
> that all possible combinations of parameters are used,
> you define ranges of values for this, and if state diagrams
> are known from the outside, you must ensure that every
> combination of states and conditions is tested.
> Black box testing is, in short, a test against external
> requirements.
>
> White box testing is used to ensure that every path in
> the code is taken. You must exercise all output branches
> for every condition. For loops, depending on the criticality
> of the system, you test one or several passes. There are
> tools available for testing code coverage.
This is generally correct, except for the fact that you are almost never going to be able to test all possible combinations of parameters. In almost all cases this would take many more years than you, or even your great-grandchildren, are going to be alive. You need to select an appropriate subset of possible combinations that has a reasonable likelihood of discovering any problems in the SW.

You generally want to keep good track of the number of defects you are finding along the way and perform regression tests as needed. The number of defects being found should begin to approach zero as you progress. At some point, usually dictated more by contractual schedule constraints than anything else, you decide that you have reached a point of diminishing returns. In other words, you get to a point where you are putting in a lot of effort to locate a relatively small number of defects that are very unlikely to occur in a real life situation, and at this point you stop.

As far as how to go about developing test cases, that is pretty dependent on the system in question, and without seeing it I don't know what real advice anyone here is going to be able to give. Generally you want to define some set of incremental builds that expand upon each other, adding new components and functionalities to the system as you go. If there are interdependencies between components, the order of integration becomes a key consideration: you must fully test a component, then add the new one and, if needed, change the original component in such a way as to test the new one. This becomes difficult if there are large numbers of circular dependencies, which sounds like what you are describing. In general the best approach is to remove the circular dependencies at design time so they will not become an issue when you test. This is one reason why a test engineer or someone similar should be involved in the process early on. If problems with testability can be caught early they are easier and cheaper to fix.
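As a sketch of what an appropriate subset can look like (the routine, its ranges, and the pass criterion are all invented): boundary values plus one representative per equivalence class reduce the case count from hundreds of thousands of combinations to twenty.

#include <assert.h>
#include <stdint.h>

/* Toy stand-in for the unit under test: valid setpoints are 0..1023. */
static int control_step(uint16_t setpoint, uint8_t gain)
{
    if (setpoint > 1023u) {
        return -1;                       /* out-of-range request rejected */
    }
    return (int)(((uint32_t)setpoint * gain) >> 8);
}

/* min, min+1, a nominal value, max-1, max ... */
static const uint16_t setpoints[] = { 0u, 1u, 512u, 1022u, 1023u };
/* ... and one representative per gain class plus its boundaries. */
static const uint8_t  gains[]     = { 0u, 1u, 128u, 255u };

int main(void)
{
    /* 5 x 4 = 20 cases instead of 1024 x 256 = 262,144 combinations. */
    for (unsigned i = 0; i < sizeof setpoints / sizeof setpoints[0]; i++) {
        for (unsigned j = 0; j < sizeof gains / sizeof gains[0]; j++) {
            assert(control_step(setpoints[i], gains[j]) >= 0);
        }
    }
    return 0;
}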
Reply by Arno Nuehm July 14, 2005
In article <1121359563.096610.231190@g14g2000cwa.googlegroups.com>,
	"Lanarcam" <lanarcam1@yahoo.fr> writes:
>
> Software tests are performed as black box tests
> and/or white box tests.
>
I see white box tests as a supplemental thing, i.e. you first do a requirements-based black-box testing and then, to ensure that nothing was missed, you do a coverage analysis. If the coverage analysis finds any white spots, it means that either the code in question can not be made to execute, which means that it is superfluous and can be removed, or there must be some requirement that this code fulfils and that is missing from the specification or has been overlooked.

IMHO, it is important to do it this way around (i.e. black box followed by white box test). IOW, I think it is plain wrong to define test cases with the sole purpose of achieving code coverage.
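An illustration of the white-spot case, with an invented function and requirement IDs: requirements-based tests cover the three specified ranges, and a coverage run then flags the final branch as never executed, forcing exactly the question above - dead code, or a missing robustness requirement?

#include <stdint.h>

/* Classifies a 12-bit sensor reading into a range index. */
int select_range(uint16_t value)
{
    if (value < 100u)  return 0;   /* exercised by the tests for REQ-7 */
    if (value < 1000u) return 1;   /* exercised by the tests for REQ-8 */
    if (value < 4096u) return 2;   /* exercised by the tests for REQ-9 */
    return -1;                     /* never executed by requirements-based
                                      tests, because the specified input is
                                      12-bit: dead code, or an unstated
                                      robustness requirement? */
}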
> You usually perform black box tests first where you consider
> the system under test as a set of functions with external
> access. You are interested only in externally visible
> behaviour, you don't explicitly test internal features.
> You must ensure that every function call is tested,
> that all possible combinations of parameters are used,
> you define ranges of values for this, and if state diagrams
> are known from the outside, you must ensure that every
> combination of states and conditions is tested.
> Black box testing is, in short, a test against external
> requirements.
My problem is that there are certain functions that manipulate the system's state, and, depending on that state, some *other* functions change in behavior. Therefore, I suspect that I will not catch all necessary test cases by just exercising every single function on its own. I was hoping for some advice on how to deal with such situations.

Cheers

Arno