## Assert() If You Test Pre/Post Conditions


I'm trying out test driven development (TDD) and would appreciate some advice. If you are an engineer who practices TDD or unit testing, I would like to know if you also test your function pre and postconditions or class/struct invariants (i.e. contracts).

I was watching a very interesting talk by John Lakos about how his team at Bloomberg treats out-of-contract use as undefined behavior, but internally they verify their defensive checks catch pre and postcondition violations as a matter of design rigor. Here is part 1 and part 2 of the talk.

Reading online, there seem to be various opinions about testing for contract violations. Some argue that calling a function out of contract is a programming error (I agree), but that the purpose of a unit test is to evaluate only well-defined (in-contract) function behavior. It seems to me, however, that these kinds of defensive checks are important - especially when the code is a library - and that not verifying the checks makes them less valuable.

What do you think? Are you using defensive checks in your libraries or main logic? Do you check pre or post conditions? How do you verify these checks are correct when running tests?

Thank you,

Daniel

---

I'm always glad to see discussion about assertions in embedded code, because, as Bertrand Meyer (the inventor of Design by Contract) said: "... this puts the whole discussion of software errors in a completely new light...".

Regarding the relationship between TDD and assertions, I've heard from the TDD community that they don't use assertions that much, "because TDD produces only *tested* code, which does not need assertions". Consequently, most unit testing frameworks don't allow you to test assertion failures. By this I mean writing unit tests that intentionally violate assertions, so a test succeeds when an assertion fails. The ability to test for assertion failures is also closely related to the notion of resetting the CUT (code under test), because any tests following a failing assertion typically make no sense unless the system is brought back to a known-good state. Alas, most unit testing frameworks don't allow you to cleanly reset the CUT (the setup()/teardown() functions are not good enough).

Regarding the belief that "the purpose of a unit test is to evaluate only well defined (or in contract) behavior", I think it's counterproductive. The purpose of unit testing is to thoroughly *test* the CUT, including the out-of-contract situations.

Finally, the really interesting question for me (and I'm not sure if John Lakos' presentation made it clear) is whether to leave the assertions in the production code. I've written and blogged about this on numerous occasions (e.g., see "A nail for a fuse" or "An Exception or a Bug").

---

Hi Daniel,

TDD is an excellent tool to add to your engineering skill set. I personally love it.

I think for most consumer grade firmware/software projects with typical calendar and budget constraints, authoring unit tests specifically for asserts/DBC is probably not delivering the best bang for the buck. That being said, I would expect safety related code to consider unit testing the asserts/contracts.

> Are you using defensive checks in your libraries or main logic?

Yes.

> Do you check pre or post conditions?

Personally I focus mainly on preconditions. Most of my projects are consumer grade, so that tends to deliver the needed value/quality.

> How do you verify these checks are correct when running tests?

I generally have not. Obviously they are being tested by unit tests for the "in contract" or "non-assert" case. That being said, on a few minor occasions, I have written tests specifically for the assert (or exception). Make sure your unit testing framework is able to handle testing for asserts.

And, of course, make sure that when the firmware does hit an assert, that something useful (and safe) follows. One project I worked on a few years ago accidentally switched the debug/production build flags, and the production version ended up hitting a debugger breakpoint instead of performing a proper reset. Ugh.

All that being said, DBC/asserts are a great way to add another bug prevention layer to your projects. A few more ideas (including asserts) I recently cataloged are here:

Hope that helps,

Matthew

---

I do mission-critical systems and heavily use asserts. In my experience, 70..80% (a guess, but roughly speaking) are from data structures not being set up properly, or not walking them correctly. I agree with what Maguire advocates (for the most part).

Writing Solid Code (20th Anniversary 2nd Edition), Steve Maguire, 2013: https://www.amazon.com/Writing-Solid-Code-20th-Ann...

For example:

```c
#include <assert.h>
#include <stdint.h>

#define TYPE_FOO   (0xDEAD)
#define TYPE_BAR   (0xBEEF)

typedef struct
{
    uint16_t  type;     // always TYPE_BAR
    /* (body) */
} BAR;

typedef struct
{
    uint16_t  type;     // always TYPE_FOO
    BAR      *bar;
    /* (body) */
} FOO;

void use_foo( FOO *foo )
{
    BAR *bar;

    assert( foo );                      // non-null
    assert( foo->type == TYPE_FOO );    // and what I expected
    bar = foo->bar;
    assert( bar );
    assert( bar->type == TYPE_BAR );
    /* (body) */
}
```

This follows through: notice the check on bar

Example case: I was writing some code for a demo program && thought "ah - I will ignore the whole assert data struct thing." Chased my tail for 3 days, went in && put the checks in - found the bug in a few minutes - I was walking the structure tree wrong && had some setup wrong.

The place I disagree with Maguire is: he advocates the sequence:

• compile with the asserts in
• run your test suite to success
• compile with asserts off

I just leave the asserts in.

NOTE: the values you choose for the TYPE tags should be large, because small numbers are the norm in most systems. By choosing large numbers (e.g., 0xDEnn) you minimize the probability that an ordinary program value will masquerade as a valid TYPE.

Case in point: I had a gig for a T1 clock sync box. I talked to a colleague at the client a couple of years later. They were adding some functionality and they banged into one of my asserts. Without it, they would have had a subtle bug.

Now: if you want to test out-of-contract cases, you can define assert() any way you want. Instead of the default action, you can have it do something that fits the framework. Then, after the tests, define it in a way that is useful to the system.

For example, I defined it on one system to log the assert in EEPROM && then reboot the system.

There is another aspect to asserts. I first used it on a gig for a public safety system. It was baked into the client's existing code and ethos. They had a large code base, which assumed a number of project-specific tables. The process ended up run/assert/fix/run/..  Once you made it all the way through, you had a high degree of confidence the code was correct. You found all of the internal tables && bad assumptions. (They also did a complete compile on a daily basis, starting at 2am.)

(BTW, on that gig, we had an hour-long discussion with 4..5 core developers if we really needed to use a ***ptr in a particular case. It was decided it was the cleanest way to do it. That line of code had 40..50 lines of comments describing what & why.)

---

Thank you, Matthew, Miro, and mr_bandit, for your advice and the new resources to check out. As a follow-up, are there any testing frameworks you would recommend? I've been looking at CppUTest - which James Grenning uses in *Test-Driven Development for Embedded C* - and Catch2, which has a nice API (but would likely never fit on a target device). If there's a framework you use that can play nicely with assertions, I'd appreciate a recommendation.

Thank you,

Daniel

---

> If there's a framework you use that can play nicely with assertions, I'd appreciate a recommendation.

Yes. If you are willing to go a bit off the beaten path, I would recommend checking out the QUTest unit testing framework (in full disclosure, QUTest is provided by my company, Quantum Leaps).

As described in the QUTest documentation, this framework breaks with the xUnit tradition (virtually all frameworks, including CppUTest and Unity, are based on xUnit). Instead, working with QUTest is similar to "debugging with printf" (or sprintf or similar), where you instrument the code to output information about its execution. You then run the code with a controlled set of inputs and examine the produced output to determine whether the code under test operates correctly. The main differences from plain printfs are (1) that the much more efficient QP/Spy output mechanism is used instead, and (2) that both generating the inputs and checking the test outputs are automated.

This testing strategy allows QUTest to cleanly separate exercising the CUT (the test fixture) from checking its correctness (the test script). This means the test scripts can be developed in a different language than the CUT (C or C++). In QUTest, the test scripts are written in Python, and to my knowledge this is the only unit testing framework for embedded systems where Python is such an integral part of unit testing.

Specifically for testing assertions, QUTest has been designed to support testing Design by Contract (assertions in C or C++, not to be confused with "test assertions"). QUTest also supports resetting the Target for each individual test, if needed. This goes far beyond the setup() and teardown() functions that other testing frameworks offer (and which, of course, QUTest supports as well).

---

I don't have a specific recommendation on assert-friendly frameworks. Chances are you might need a build option to replace the standard assert with a macro that notifies the framework and ensures no code after the assert is executed, when appropriate.

That being said, I've used all of the frameworks you mention in commercial software projects and the framework mentioned by Samek (QL). Here are some thoughts.

My favorite for most embedded/firmware projects is CppUTest, where the tests run on the host PC, not on the target. I've used this on several projects and am generally happy with the results.

I've also used Catch2, which was nice and easy. However, it was/is missing a mocking framework, which I think will be needed in most embedded software projects. In my case the code being written/tested was closer to PC-style code than embedded, so it worked nicely.

I've also used QUTest on a few projects. I generally do not recommend on-target frameworks, but only because I'm a TDD advocate, which means I'm running the associated unit tests frequently. Any on-target framework, like QUTest, will be substantially slower because, at a minimum, each test cycle incurs the cost of delivering the test build to the target, which tends to be slow. In the projects I worked on, I replaced various QUTest-based tests with CppUTest - exact same logic being tested - and went from one-minute test cycles to three seconds or less. That being said, on those projects we ended up with a blend: all high-level and mid-level modules exclusively used CppUTest and therefore host-PC-based unit testing, while our lowest-level modules (drivers) continued to use QUTest. We felt this was the right balance and it was worth the effort to support both approaches. BTW: I love Samek's books and software, I just don't prefer the on-target testing approach.

The other benefit to host based unit testing is that you can easily include all of those tests into standard continuous integration build pipelines, ensuring all available tests are executed with each commit. Very nice. This is much harder to do with on-target test frameworks.

I make these points and more in my last presentation on this matter, available here:

Best regards,

Matthew