
Help devising software tests

Started by Unknown July 14, 2005
Hi all,

I have to write a test specification to define tests for a piece of
software (a small kernel for an embedded system) that is currently
under development. Given is an API specification (basically function
calls specified as C prototypes, along with textual descriptions of
what these calls are supposed to do).

Now I have to devise a strategy to define tests for this software.

One obvious procedure is to go through the API functions one by
one, identify requirements from the C prototypes and the textual
descriptions and define a test for every single requirement.

But then there are also "cross cutting" concepts, i.e. concepts
that manifest themselves at different places throughout the API.
As an example, there is the concept of a set of rights that is
attached to an object (a task, in this special case). The API
has functions to manipulate this set of rights, and, depending on the
current settings of these rights, the object may or may not be entitled
to access certain other API functions.
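
To make this concrete, a rough sketch of what such an API might look
like (all names are invented for illustration and are not the actual
interface):

/* Hypothetical rights API -- a sketch, not the real kernel interface. */
typedef unsigned int task_rights_t;

#define RIGHT_CREATE_TASK (1u << 0)  /* may call task_create() */
#define RIGHT_SEND_MSG    (1u << 1)  /* may call msg_send()    */

/* Manipulate the set of rights attached to a task object. */
int task_rights_set(int task_id, task_rights_t rights);
int task_rights_get(int task_id, task_rights_t *rights);

/* Expected to fail with an error code unless the calling task
 * currently holds RIGHT_CREATE_TASK. */
int task_create(const char *name, void (*entry)(void));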

I feel that by just looking at each API function one by one I would
miss these "cross cutting" effects, and that consequently, my test
spec would be incomplete.

I wonder if people here can give me some insights or recommend
any books/online documents worth reading.

Thanks in advance for any help!

Arno
"Arno Nuehm" <arno@localhost> wrote in message 
news:db59jv$l0o$00$1@news.t-online.com...
> <snip original post>
If your goal is a tested kernel, my company www.validatedsoftware.com
offers a complete set of tests for the MicroC/OS-II RTOS. It might save
you a lot of time and effort over developing a new kernel.

Scott
On Thu, 14 Jul 2005 08:33:30 -0600, "Not Really Me"
<scott@exoXYZtech.com> wrote:

>"Arno Nuehm" <arno@localhost> wrote in message >news:db59jv$l0o$00$1@news.t-online.com... >> >> I have to write a test specification to define tests for a piece of >> software (a small kernel for an embedded system) that is currently >> under development. Given is an API specification (basically function >> calls specified as C prototypes, along with textual descriptions of >> what these calls are supposed to do). >> >> Now I have to devise a strategy to define tests for this software. >> >> One obvious procedure is to go through the API functions one by >> one, identify requirements from the C prototypes and the textual >> descriptions and define a test for every single requirement. >> >> But then there are also "cross cutting" concepts, i.e. concepts >> that manifest themselves at different places throughout the API. >> As an example, there is the concept of a set of rights that is >> attached to an object (a task, in this special case). The API >> has functions to manipulate this set of rights, and, depending on the >> current settings of these rights, the object may or may not be entitled >> to access certain other API functions. >> >> I feel that by just looking at each API function one by one I would >> miss these "cross cutting" effects, and that consequently, my test >> spec would be incomplete. >> >> I wonder if people here can give me some insights or recommend >> any books/online documents worth reading. > >If your goal is a tested kernel, my company www.validatedsoftware.com offers >a complete set of tests for the MicroC/OS-II RTOS. It might save you a lot >of time and effort over developing a new kernel.
I agree, Scott, that it may be just the thing for them. So I'm glad you
posted this option. Still, at the risk of you and me talking at
cross-purposes to the OP's need, I'm motivated to add a few thoughts
stimulated by the idea of using a 3rd party kernel in an embedded
application.

...

(1) If the product is for some critical area, such as medical use, it
may not be less effort to source a 3rd party 'kernel.' The OP did say
"small kernel", and the kind of exhaustive testing required for some
medical purposes (such as where every conditional branch edge must be
exercised and tested for impact) would make this *much* easier to do
for a small, narrowly focused fragment of code designed and written for
the specific system than it would be for something targeted at a vague,
non-specific marketplace. Making an honest effort at testing every
branch edge in every part of the object code of an O/S would require
understanding an often-much-larger-than-needed body of code. I suppose
that if the O/S were carefully crafted so that, at compile time,
exactly and only those parts needed would be included, then this might
not be so bad. But otherwise, it adds a LOT of extra and unnecessary
work on the testing side.

(2) I just recently wrote from scratch a small, cooperative kernel with
thread priorities, an accurate delta-time sleep queue with a guaranteed
start time relative to the clock edge, non-synchronized thread
messages, the ability to support exception handling local to the thread
stack, and a variety of support functions, between 9AM and 2PM on the
same day. In the interim, it has undergone two walkthroughs with
skilled programmers, and it has been running for two months without a
line of code change since that day, other than adding support for
thread-local heap storage and changing the hardware timer source. It
does exactly what is required for the application, and no more.

Writing a small kernel tailored for embedded applications should be a
personal skill, almost as unconsciously applied to a job as not having
to look at the keys as one types out program code. Certainly not
something thought of as so big and so fearful that one must buy it from
a 3rd party who really doesn't and can't have any idea what's important
and what's not important for the application at hand, and who must
instead struggle to broaden their own market by growing feature sets
(and, of course, expanding the documentation required to understand it
all).

...

I wrote the above essentially because the OP mentioned a "small
kernel." Some folks *want* tested TCP/IP and networking support, tested
FAT file support, tested wear-leveled flash file systems, tested
<<insert your big, complex feature set here>> support. But that isn't
what the OP started out saying. Small was the guide. Or, at least, that
is how I first read the OP.

But on looking back again, I see more clearly the OP talking about the
rights of tasks to the API. And this smells of a larger-than-small
kernel. Worse, it isn't something I'd add to a kernel for critical
applications -- there would be no clear point in doing so. The code
should be right, regardless of a task's rights to the API. So I see no
point in providing this feature, which itself only pointlessly adds
extra testing requirements, if it were a critical application like
medical. So maybe you are right to suggest a well-tested 3rd party
system.

Jon
On Thu, 14 Jul 2005 11:00:47 +0200, arno@localhost (Arno Nuehm) wrote:

><snip>
>I feel that by just looking at each API function one by one I would
>miss these "cross cutting" effects, and that consequently, my test
>spec would be incomplete.
>
>I wonder if people here can give me some insights or recommend
>any books/online documents worth reading.
Is this a medical or similarly critical application?

Jon
Hi Scott,

In article <3jnbdmFquvd9U1@individual.net>,
	"Not Really Me" <scott@exoXYZtech.com> writes:
>
> If your goal is a tested kernel, my company www.validatedsoftware.com
> offers a complete set of tests for the MicroC/OS-II RTOS. It might save
> you a lot of time and effort over developing a new kernel.
Thanks for the offer. Unfortunately, using MicroC/OS-II is not an
option for this project (it simply doesn't fit the requirements).
Besides, my personal part in this project is not to develop a kernel,
but to define tests for a given kernel. So, I guess what I'm looking
for is more a methodology than a result produced by it.

If you could shed some light on the process that was used to define
those tests for MicroC/OS-II, it would probably help me a lot. But I
can see pretty well that you can't do that, as it's probably the basis
of your company's business.

Thanks
Arno
Hi Jonathan,

In article <fv0dd19per5qd393lveuqcona7av7ugkav@4ax.com>,
	Jonathan Kirwan <jkirwan@easystreet.com> writes:
> On Thu, 14 Jul 2005 08:33:30 -0600, "Not Really Me"
> <scott@exoXYZtech.com> wrote:
>
> ......
[snip good discussion about why to use a custom kernel rather than a
3rd party one, most of which hits the nail on the head -- Thanks!]
> I wrote the above essentially because the OP mentioned a "small
> kernel." Some folks *want* tested TCP/IP and networking support, tested
> FAT file support, tested wear-leveled flash file systems, tested
> <<insert your big, complex feature set here>> support. But that isn't
> what the OP started out saying. Small was the guide. Or, at least, that
> is how I first read the OP.
>
> But on looking back again, I see more clearly the OP talking about the
> rights of tasks to the API. And this smells of a larger-than-small
> kernel. Worse, it isn't something I'd add to a kernel for critical
> applications -- there would be no clear point in doing so.
Believe me, there is quite some point in doing so....
> The code
> should be right, regardless of a task's rights to the API.
Well, the kernel's main (only) job is to implement tasks and to provide
secure execution environments (i.e. isolate the tasks from each other).
The idea is to allow programs with different trust levels to coexist
(i.e. potentially malicious, non-trusted programs along with trusted
ones). I've seen this concept being referred to as a "separation
kernel", though I'm more inclined to just call it a "microkernel". So,
yes, it does implement tasks, but no, the kernel is definitely not
"larger-than-small".
> So I see
> no point in providing this feature, which itself only pointlessly adds
> extra testing requirements, if it were a critical application like
> medical.
BTW, I would prefer not to name the application, but it is not medical,
and it is security (as opposed to safety) related.

Thanks
Arno

Arno Nuehm wrote:
> <snip original post>
Software tests are performed as black box tests and/or white box tests.

You usually perform black box tests first, where you consider the
system under test as a set of functions with external access. You are
interested only in externally visible behaviour; you don't explicitly
test internal features. You must ensure that every function call is
tested and that all possible combinations of parameters are used (you
define ranges of values for this), and if state diagrams are known
from the outside, you must ensure that every combination of states and
conditions is tested. Black box testing is, in short, testing against
external requirements.

White box testing is used to ensure that every path in the code is
taken. You must exercise every output branch of every condition. For
loops, depending on the criticality of the system, you test one or
several passes. There are tools available for measuring code coverage.
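
A minimal sketch of such a black box test in C, assuming a hypothetical
task_set_priority() call with a documented valid range of 0..15 (the
name, the range, and the stub are invented for illustration):

#include <stdio.h>

/* Stub of the hypothetical call under test, so the sketch compiles;
   a real test would link against the actual kernel. */
#define PRIO_MAX 15
static int task_set_priority(int task_id, int prio)
{
    (void)task_id;
    if (prio < 0 || prio > PRIO_MAX)
        return -1;                  /* spec: reject out-of-range */
    return 0;                       /* spec: accept 0..PRIO_MAX  */
}

int main(void)
{
    /* Boundary values plus one representative per equivalence class:
       below range, lower bound, interior, upper bound, above range. */
    const struct { int prio; int expect; } cases[] = {
        { -1, -1 }, { 0, 0 }, { 7, 0 }, { 15, 0 }, { 16, -1 },
    };
    int failures = 0;

    for (unsigned i = 0; i < sizeof cases / sizeof cases[0]; i++) {
        int got = task_set_priority(1, cases[i].prio);
        if (got != cases[i].expect) {
            printf("FAIL: prio=%d got=%d want=%d\n",
                   cases[i].prio, got, cases[i].expect);
            failures++;
        }
    }
    printf("%d failure(s)\n", failures);
    return failures != 0;
}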
In article <1121359563.096610.231190@g14g2000cwa.googlegroups.com>,
	"Lanarcam" <lanarcam1@yahoo.fr> writes:
> Software tests are performed as black box tests
> and/or white box tests.
I see white box tests as a supplemental thing, i.e. you first do
requirements-based black box testing and then, to ensure that nothing
was missed, you do a coverage analysis. If the coverage analysis finds
any uncovered spots, it means that either the code in question cannot
be made to execute, which means that it is superfluous and can be
removed, or there must be some requirement that this code fulfils and
that is missing from the specification or has been overlooked.

IMHO, it is important to do it this way around (i.e. black box
followed by white box test). IOW, I think it is plain wrong to define
test cases with the sole purpose of achieving code coverage.
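
To illustrate with toy code (invented for this example): if the
spec-derived tests never reach the second branch below, the coverage
gap flags either dead code or a requirement missing from the spec.

static int enqueue(int v)        { (void)v; return 0; }  /* stubs */
static int enqueue_urgent(int v) { (void)v; return 0; }

int queue_put(int value, unsigned flags)
{
    if (flags == 0)
        return enqueue(value);     /* exercised by spec-based tests */
    return enqueue_urgent(value);  /* uncovered: dead code, or an
                                      undocumented requirement? */
}
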
> You usually perform black box tests first, where you consider
> the system under test as a set of functions with external
> access. You are interested only in externally visible
> behaviour; you don't explicitly test internal features.
> You must ensure that every function call is tested and that
> all possible combinations of parameters are used (you define
> ranges of values for this), and if state diagrams are known
> from the outside, you must ensure that every combination of
> states and conditions is tested. Black box testing is, in
> short, testing against external requirements.
My problem is that there are certain functions that manipulate the
system's state, and, depending on that state, some *other* functions
change their behavior. Therefore, I suspect that I will not catch all
necessary test cases by just exercising every single function on its
own. I was hoping for some advice on how to deal with such situations.

Cheers
Arno

> You usually perform black box tests first, where you consider
> the system under test as a set of functions with external
> access. You are interested only in externally visible
> behaviour; you don't explicitly test internal features.
> You must ensure that every function call is tested and that
> all possible combinations of parameters are used (you define
> ranges of values for this), and if state diagrams are known
> from the outside, you must ensure that every combination of
> states and conditions is tested. Black box testing is, in
> short, testing against external requirements.
>
> White box testing is used to ensure that every path in the code is
> taken. You must exercise every output branch of every condition. For
> loops, depending on the criticality of the system, you test one or
> several passes. There are tools available for measuring code coverage.
This is generally correct, except for the fact that you are almost
never going to be able to test all possible combinations of parameters.
In almost all cases this would take many more years than you, or even
your great-grandchildren, are going to be alive. You need to select an
appropriate subset of possible combinations that has a reasonable
likelihood of discovering any problems in the SW.

You generally want to keep good track of the number of defects you are
finding along the way and perform regression tests as needed. The
number of defects being found should begin to approach zero as you
progress. At some point, usually dictated more by contractual schedule
constraints than anything else, you decide that you have reached a
point of diminishing returns. In other words, you get to a point where
you are putting in a lot of effort to locate a relatively small number
of defects that are very unlikely to occur in a real-life situation; at
this point you stop.

As far as how to go about developing test cases, that is pretty
dependent on the system in question, and without seeing it I don't know
what real advice anyone here is going to be able to give. Generally you
want to define some set of incremental builds that expand upon each
other, adding new components and functionalities to the system as you
go. If there are interdependencies between components, the order of
integration becomes a key consideration: you must fully test a
component, then add the new one and, if needed, change the original
component in such a way as to test the new one. This becomes difficult
if there are large numbers of circular dependencies, which sounds like
what you are describing.

In general the best approach is to remove the circular dependencies at
design time so they will not become an issue when you test. This is one
reason why a test engineer or someone similar should be involved in the
process early on. If problems with testability are caught early, they
are easier and cheaper to fix.
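
A regression suite along these lines can be as simple as a table of
test functions that is re-run after every change. A minimal sketch
(test names invented for illustration):

#include <stdio.h>

/* Each test returns 0 on pass, nonzero on failure. */
typedef int (*test_fn)(void);

static int test_create_task(void)    { return 0; /* placeholder */ }
static int test_rights_default(void) { return 0; /* placeholder */ }

static const struct { const char *name; test_fn fn; } tests[] = {
    { "create_task",    test_create_task    },
    { "rights_default", test_rights_default },
};

int main(void)
{
    int failures = 0;
    for (unsigned i = 0; i < sizeof tests / sizeof tests[0]; i++) {
        int rc = tests[i].fn();
        printf("%-20s %s\n", tests[i].name, rc ? "FAIL" : "pass");
        failures += (rc != 0);
    }
    /* Tracked over time, this count should trend toward zero. */
    printf("%d failure(s)\n", failures);
    return failures != 0;
}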

Arno Nuehm wrote:
> <snip>
>
> My problem is that there are certain functions that manipulate the
> system's state, and, depending on that state, some *other* functions
> change their behavior. Therefore, I suspect that I will not catch all
> necessary test cases by just exercising every single function on its
> own. I was hoping for some advice on how to deal with such situations.
Black box testing is not about testing individual C functions one by
one; you must test functionalities. If your system exhibits state, you
must test all C functions in every accessible state. This seems like a
lot of work, but you can't avoid it if you need to prove functional
correctness. If you can show that some functions behave independently
of the various states and that they don't modify the state, you can
exclude them from the set. But in order to do so you must perform some
analysis of the code and prove it.
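
One way to organize the state-dependent part is a state/function
matrix: drive the system into each reachable state, invoke every API
call, and compare against the expected result for that (state, call)
pair. A rough, compilable sketch, with all names invented for
illustration:

/* State x function test matrix -- a sketch, not real kernel code. */
enum state { ST_NO_RIGHTS, ST_ALL_RIGHTS, ST_COUNT };

static enum state current;                    /* simulated system state */
static void enter_state(enum state s) { current = s; }

/* Placeholder wrappers; in real tests these would invoke the kernel
   API. Here they simulate rights checking so the sketch runs alone. */
static int call_task_create(void) { return current == ST_ALL_RIGHTS ? 0 : -1; }
static int call_msg_send(void)    { return current == ST_ALL_RIGHTS ? 0 : -1; }

typedef int (*api_call)(void);
static const api_call calls[] = { call_task_create, call_msg_send };

/* expected[s][f]: expected return code of call f in state s. */
static const int expected[ST_COUNT][2] = {
    [ST_NO_RIGHTS]  = { -1, -1 },  /* every call should be refused */
    [ST_ALL_RIGHTS] = {  0,  0 },  /* every call should succeed    */
};

int main(void)
{
    int failures = 0;
    for (int s = 0; s < ST_COUNT; s++) {
        enter_state((enum state)s);
        for (unsigned f = 0; f < sizeof calls / sizeof calls[0]; f++)
            failures += (calls[f]() != expected[s][f]);
    }
    return failures != 0;
}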
