Tests in embedded

Started by pozz May 16, 2020
I'm reading the TDD book by Grenning, but I don't know whether it is 
really up to date.

Do you use unit tests, automatic testing and/or TDD approach in your 
embedded projects? Which tools do you really use?

Writing tests for isolated units is very simple; they can be manually 
written with some asserts. When the unit is connected to other units 
(i.e., it uses services from another module or controls a driver), this 
gets much more complex, and I understand you need to write some mocks 
for the tests. In this case, manually writing mocks without any tools 
can be cumbersome.

What do you think of Ceedling, that is, Unity + CMock + CException?

Do you usually run tests on simulator/emulator on development machine or 
directly on the target board?

Could you suggest books (or online resources) on unit testing in 
general, not strictly related to embedded?

Another question. Usually tests are software that checks other software 
(the final application). They can be run automatically if the build 
system is well organized.
What about when you need to test the firmware/hardware? For example, how 
do you test a SPI GPIO expander driver? The only way to test whether the 
driver can pull a pin on the expander high or low is to inspect the 
voltage with a voltmeter. Of course this isn't automatic.


pozz <pozzugno@gmail.com> writes:

> Another question. Usually tests are software that checks other
> software (the final application). They can be run automatically if the
> build system is well organized.
> What about when you need to test the firmware/hardware? For example,
> how do you test a SPI GPIO expander driver? The only way to test if
> the driver is able to pull high or low a pin on the expander is to
> inspect the voltage with a voltmeter. Of course this isn't automatic.
Hardware-in-the-loop testing takes more work than pure software testing, 
but it can certainly be done. You can set up a laptop or something like 
a Raspberry Pi to program and communicate with the target, then run 
external test code on that host. Eventually you might end up with a 
whole rack full of these.

The simplest case is testing that drivers work and can access 
peripherals that somehow identify themselves and possibly even report 
their state. In my view the next level is having something that 
simulates the 'world' around the target, or even real sensors and 
actuators. If you're doing micropower stuff, you can add a supply 
current measurement to the test host so you can see whether the code 
stays within its energy budget. Then you can start adding external 
measurements and stimuli controlled by the host. Sometimes I've used a 
modified target board acting as an I/O simulator, too. LabJacks are also 
great for this. Sometimes I use GPIB/LXI-controlled test gear as well, 
but that can get expensive if you want to store the whole test setup for 
future use.

I have not seen many good frameworks for this, but I also haven't been 
looking for a few years. A typical setup is building the code in the 
usual CI pipelines (I'm using Bitbucket and GitLab), running the 
software tests there, and then SSHing to the HIL test host to run the 
hardware-connected tests. The tests are usually written in Python.

The design companies I've been working with have their own in-house test 
systems, which they use for production testers - I've stolen a few good 
ideas from them on how to organize the test code structure. Some of the 
tests can then be run in the production testers, too!

-- mikko
Mikko OH2HVJ <mikko.syrjalahti@nospam.fi> writes:
> Having hardware in the loop testing takes more work than pure software
> testing, but can certainly be done.
It helps if you simulate everything you can in the test system. That lets you test the main code on a normal computer with no special hardware. Of course that's not always feasible, and it doesn't help much with testing the hardware you are simulating. In general, yes, test automation is a huge win and is basically mandatory these days.
Paul Rubin <no.email@nospam.invalid> writes:

> Mikko OH2HVJ <mikko.syrjalahti@nospam.fi> writes:
>> Having hardware in the loop testing takes more work than pure software
>> testing, but can certainly be done.
>
> It helps if you simulate everything you can in the test system. That
> lets you test the main code on a normal computer with no special
> hardware. Of course that's not always feasible, and it doesn't help
> much with testing the hardware you are simulating.
Right, whatever can be tested software-only is usually much easier that 
way. Device drivers, the energy budget, peripheral interactions, silicon 
revisions... I've been wondering whether MAME could be used for some of 
these!
> In general, yes, test automation is a huge win and is basically
> mandatory these days.
And repeatable builds. Once I set up Docker-based builds for some things 
there's no going back. Now I can compile a bit-perfect copy and start 
any troubleshooting or new versions from that.

-- mikko
On 16/05/2020 19:08, Mikko OH2HVJ wrote:
[...]
> And repeatable builds. Once I set up Docker-based builds for some things
> there's no going back. Now I can compile a bit-perfect copy and start
> any troubleshooting or new versions from that.
Docker-based builds... interesting, could you explain?
On 17/5/20 4:51 am, pozz wrote:
> On 16/05/2020 19:08, Mikko OH2HVJ wrote:
> [...]
>> And repeatable builds. Once I set up Docker-based builds for some things
>> there's no going back. Now I can compile a bit-perfect copy and start
>> any troubleshooting or new versions from that.
>
> Docker-based builds... interesting, could you explain?
We used to do the same thing by archiving entire machine snapshots 
(ghost images), and later by using virtual machines. Docker just makes 
that more storage-efficient.

CH
pozz <pozzugno@gmail.com> writes:

> On 16/05/2020 19:08, Mikko OH2HVJ wrote:
> [...]
>> And repeatable builds. Once I set up Docker-based builds for some things
>> there's no going back. Now I can compile a bit-perfect copy and start
>> any troubleshooting or new versions from that.
>
> Docker-based builds... interesting, could you explain?
If the compiler is available for Linux, you can create a Docker image, 
which is basically a stored disk image. Then you attach the source 
directory as a directory inside that image and run your compile there. 
Since Docker uses a layered filesystem, any changes made inside the 
container only affect the top layer. After the compile you start again 
from the original build image, which is exactly the same.

Here's an example command I'm using to compile one STM32 project from a 
Makefile in the src/ directory:

  docker run -it --mount type=bind,source="$(pwd)"/src,target=/work/src \
    stronglytyped/arm-none-eabi-gcc:latest /bin/bash -c "cd /work/src; \
    make clean-all"

That runs the image "stronglytyped/arm-none-eabi-gcc:latest", which 
contains an ARM GCC environment, in interactive mode (-it). The src/ 
directory is attached as /work/src inside the container, and bash is 
used to run "cd /work/src; make clean-all" there. The makefile does the 
build and the tests and outputs the binaries and programming files for 
me under /work/src/dist, which I can see as src/dist outside Docker.

I'm actually running a Mac and using Docker to do the builds. Exactly 
the same Docker image and source code are used on the CI/CD/test server 
when I push my changes, so the binary compiled by CI/CD is exactly what 
I had while developing on my Mac. The CI/CD server does the same compile 
and in addition runs HIL tests, signs the code, delivers the package to 
the staging area, etc.

This is how almost all modern (cloud) software is done, and there's a 
lot to apply in the embedded world. Of course, the integration/HIL 
testing and software delivery are usually very different!

(To do production builds, use a version-controlled build image, not 
:latest.)

-- mikko
On Saturday, May 16, 2020 at 5:04:51 AM UTC-4, Paul Rubin wrote:
> It helps if you simulate everything you can in the test system. That
> lets you test the main code on a normal computer with no special
> hardware. Of course that's not always feasible, and it doesn't help
> much with testing the hardware you are simulating.
Absolutely. Here's an example from a small project I finished recently: http://www.nadler.com/embedded/20200530_ButterflySimulator.mp4
> In general, yes, test automation is a huge win and is basically
> mandatory these days.
Right, what the above example is missing, and what I hardly ever see, is:

- record/playback for capturing and replaying tests
- visualization/comparison tools for checking and verifying results

That's more on the TDD side. I'm thinking about building a framework for 
the above. Any thoughts?

Examples are covered in this course (a long one, good stuff near the end):
https://www.youtube.com/watch?v=mOKpyXtO-3k&t=5103s