EmbeddedRelated.com
Forums

Embedded Software Regression Testing

Started by EL November 28, 2006
Hi:

I need some advice about how to perform unit testing and regression
testing on embedded software.

- Do you perform unit testing on the host (PC) using a different compiler,
or on the target with the actual compiler? Do you think an "on host" test
is useful at all?

- In the case of "on target" tests, how do you interact with the system?
Do you modify the code to perform only the function under test? Do you
pre-load the test cases before compiling, or use a tool (emulator,
debugger) to modify the variables and parameters at run time?

- How do you define the "unit" to test? Do you test functions
independently, or test them in groups of functions of a given feature?

- How do you set the test cases? Do you vary the variables and
parameters through the whole range according to their types?

- How do you set the PASS/FAIL criteria if there are no documents with
the specific function characteristics, only the whole feature?

- What about automating the tests? Do you use code-analysis software to
collect data about each function (global and local variables, parameters,
called functions, etc.), or do it manually?

- What about regression tests? Does anybody have experience with those?

Thank you.

EL

EL wrote:
> I need some advice about how to perform unit testing and regression
> testing on embedded software.
> [snip]
Are you testing software or a device? The key is to start with a spec:
what does it do? You cannot give a pass/fail verdict if you do not know
what it is supposed to do. (It explodes: iPod = FAIL, bomb = PASS.) Then
come up with test cases to prove it works, and do not forget to provide
bad cases to prove its error checking works.

Automation is good, and better if you have to run the test several times.
You have to ask whether it will save time to automate, or whether to just
run the test. Modifying code to prove functions is not as good: it is
labor-intensive and does not show true operation. It does have its place,
though; sometimes built-in debugger-type functions can be handy.
In article <1164758617.088668.160060@j72g2000cwa.googlegroups.com>, EL 
<enrique.lizarraga@gmail.com> writes
> Hi:
>
> I need some advice about how to perform unit testing and regression
> testing on embedded software.
Is this homework?
> - Do you perform unit testing on the host (PC) using a different
> compiler, or on the target with the actual compiler? Do you think an
> "on host" test is useful at all?
This is strange... usually the unit testing is done on the PC using the
target compiler; most come with a simulator. In some cases, where the
target and the host are the same size, i.e. both 32-bit, it is possible
to test on the PC using a PC compiler.
> - In the case of "on target" tests, how do you interact with the
> system? Do you modify the code to perform only the function under
> test?
Modifying the code defeats the object of testing. You should be writing the code so that you can load only the module under test.
> Do you pre-load the test cases before compiling, or use a tool
> (emulator, debugger) to modify the variables and parameters at run time?
Usually an ICE (in-circuit emulator) is used.
> - How do you define the "unit" to test? Do you test functions
> independently, or test them in groups of functions of a given feature?
Good question. There is no single answer.
> - How do you set the test cases? Do you vary the variables and
> parameters through the whole range according to their types?
Elucidate.
> - How do you set the PASS/FAIL criteria if there are no documents with
> the specific function characteristics, only the whole feature?
There are pass/fail criteria; that is why you are doing the test. Look at
the test spec that was written before the code was written.
> - What about automating the tests?
Good idea.
> Do you use code-analysis software to collect data about each function
> (global and local variables, parameters, called functions, etc.), or do
> it manually?
Use static checkers and code analysis tools.
> - What about regression tests? Does anybody have experience with those?
Yes.

--
Chris Hills, Staffs, England
chris@phaedsys.org  www.phaedsys.org
EL wrote:

> I need some advice about how to perform unit testing and regression
> testing on embedded software.
I had, like Chris, the feeling that this might be a homework question.
However, in this particular instance, I think it may be better to provide
some response that should help guide the OP to find his own words to
answer the questions. Excuse me for shuffling the questions around a bit,
but the order of answering might be helpful.
> - How do you set the PASS/FAIL criteria if there are no documents with
> the specific function characteristics, only the whole feature?
If the development process is properly formulated, the answer would be
very obvious. However, the embedded world is far from perfect, so you
will find that resolving PASS/FAIL criteria will be a bit harder to do
without decent structured documentation of the design.

If, as you indicate, the whole feature is documented but the component
functions that make up the feature are not, you have to determine the
PASS/FAIL criteria on the whole feature (whether or not it meets its
published specification). If there is not enough information in the
specification to do this, then you will need to seek further
clarification from the spec writers. This costs extra money and time;
better to have a decent structured specification to start with.
> - Do you perform unit testing on the host (PC) using a different
> compiler, or on the target with the actual compiler? Do you think an
> "on host" test is useful at all?
This can depend on the level of the component. In a lot of cases an
underlying operational model of the product can be simulated on the host.
So long as you have a very clear specification of the interfaces of the
simulation model, and these have been verified to conform to the actual
target, you should have little problem. For the very low-level
components, and for final performance confirmation, testing on the target
is often more useful.
> - In the case of "on target" tests, how do you interact with the
> system? Do you modify the code to perform only the function under
> test? Do you pre-load the test cases before compiling, or use a tool
> (emulator, debugger) to modify the variables and parameters at run
> time?
[%X]
> - What about regression tests? Does anybody have experience with those?
[%X]
> - What about automating the tests?
I'll answer the above three together. As I run Forth, I quite often have
"umbilical" or "host to target" communication links which enable me to
load new code and test on the target on a function-by-function basis. No
special software is needed, and it runs at full operating speed. The link
traffic is monitored and logged to a file to ease generation of automated
scripts for future regression testing. In my development process,
regression testing may be only minutes away from being needed, or a few
days.
> - How do you define the "unit" to test? Do you test functions
> independently, or test them in groups of functions of a given feature?
Defining a "unit to test" should be based on the units described in the
specification. A specification should also give you some clues about the
overall system architectural structures and be of assistance in
determining what constitutes a unit. Units can exist at various levels;
it is a matter of drawing the boundaries.
> - How do you set the test cases? Do you vary the variables and
> parameters through the whole range according to their types?
Do you have "clear box" knowledge of the unit, or can you only treat it
as a "black box"? Whether you apply a full range of values for the
parameters, or just the close-to-limits values (straddling the limits),
will depend on how good a view you have of the unit.
> - Do you use code-analysis software to collect data about each function
> (global and local variables, parameters, called functions, etc.), or do
> it manually?
My Forth functions are so simple that manual techniques are all that are
required. All code undergoes a visual code inspection, a function test
and limits tests.

The visual inspection ensures that the functional intention implemented
in the code is as stated in the specification for the code. Functional
testing ensures that the function is carried out and that all logical
paths through the code are executable. The limits test is an attempt to
cause incorrect operation of the code by providing out-of-bounds
parameters. In other programming languages/environments mileage may vary.

--
Paul E. Bennett <peb@amleth.demon.co.uk>
Forth based HIDECS Consultancy  http://www.amleth.demon.co.uk/
Mob: +44 (0)7811-639972  Tel: +44 (0)1235-811095
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk
Paul E. Bennett wrote:

> EL wrote:
>> I need some advice about how to perform unit testing and regression
>> testing on embedded software.
>
> If, as you indicate, the whole feature is documented but the component
> functions that make up the feature are not, you have to determine the
> PASS/FAIL criteria on the whole feature (whether or not it meets its
> published specification). If there is not enough information in the
> specification to do this, then you will need to seek further
> clarification from the spec writers. This costs extra money and time;
> better to have a decent structured specification to start with.
[snip]

The OP appears to be very new to the business. Yes, "structured
documentation to start with" sounds great. The reality may be a bit
different. A project usually starts with a few wishes: "could we ...?".
An iterative process of "could we ...?" and trials to actually do it
drags on. Whatever the concept, whatever the structure in the program, it
sooner or later turns to spaghetti. The product is sold long before the
last of the spaghetti is untangled, and the wishes are still coming.
Wait, the wishes weren't really wishes, rather requirements. Just before
you finally decide to rewrite everything from scratch, the lot has to be
shipped, overdue. Testing ... oh, it does what it should.

OK, that was the worst case.

Rene
EL wrote:
> I need some advice about how to perform unit testing and regression
> testing on embedded software.
> [snip]
Assuming that the modules under test have a well-defined call interface,
parameter ranges and return values / status codes, it can be quite easy
to write short test-harness code to exercise the module with both valid
and invalid data. Part of this is a design-stage problem, though: testing
is made more difficult if it is not part of the initial design process.

Chris
Rene Tschaggelar wrote:

> Paul E. Bennett wrote:
[%X]
> The OP appears to be very new to the business. Yes, "structured
> documentation to start with" sounds great. The reality may be a bit
> different. A project usually starts with a few wishes: "could we ...?".
> [snip]
Part of the art of developing any system is managing the customer and his
wish-list.

--
Paul E. Bennett