
Acceptance Tests vs. TDD

Steve Branam, July 17, 2021


The Question

Our software book club at work is reading Michael Feathers' Working Effectively with Legacy Code. This is an outstanding book that's worth re-reading every few years.

This week, we went over Chapter 8, "How Do I Add a Feature?", which covers Test-Driven Development (TDD). One of the discussion items that came up was writing tests in the Jira ticket for a story (Jira is a tool for managing stories, the planning unit for software development in Scrum, one of the Agile management practices).

The idea is that as part of writing the ticket, you write the tests that you'll use to determine its completion. Since this is "writing tests first", is that a form of TDD?


In the process of answering that question, I'll give a summary of TDD.


References

I'm referencing the following resources in this post, using the listed nicknames. For the books, the titles are affiliate links to them on Amazon; the year is the copyright year printed in the book (which in some cases is the year after the listed publication date). The last entry is a live online course, with a link to the course page.

  • Extreme Programming Explained: Embrace Change, Kent Beck (XPExplained, 2000)
  • Test Driven Development: By Example, Kent Beck (TDDByExample, 2003)
  • Working Effectively with Legacy Code, Michael C. Feathers (LegacyCode, 2005)
  • Test Driven Development for Embedded C, James W. Grenning (TDDEmbeddedC, 2011)
  • Modern C++ Programming with Test-Driven Development: Code Better, Sleep Better, Jeff Langr (TDDC++, 2013)
  • TDD, Where Did It All Go Wrong, Ian Cooper (TDDWrong, 2017)
  • Does TDD Really Lead to Good Design?, Sandro Mancuso (TDDGoodDesign, 2018)
  • A Philosophy of Software Design, John Ousterhout (PhilDesign, 2018)
  • Clean Agile: Back to Basics, Robert C. Martin (CleanAgile, 2020)
  • Test-Driven Development For C or C++ (Remotely delivered via Web-Meeting), James W. Grenning (TDDCourse, live course)

I have reviews of several of these.

If you're serious about TDD, I highly recommend that yes, you read, watch, and do all of these. See my concluding section "Why Bother?" below if you're wondering.

Remember that TDD is a tool. Like any other tool, it can be misused, abused, and confused. Using it poorly will diminish its value. Take the time to learn to use it well so you can maximize its benefit.


My Answer

My response to the question above is that tests written before the code as part of the ticket are acceptance tests. These are different from TDD, which produces unit tests.

While TDD results in having and running unit tests, it isn't strictly a test technique. It's a design and development technique, driven by tests. That serendipitously provides a double benefit: design and tests.

Yet a third benefit is providing executable documentation of how to use the code, automatically updated when the tests are updated. Want to know how to use the code? Go read the tests.

This is where things can get confusing, because the terminology isn't always well-defined, and has been used differently in some cases.

The confusion produces these questions:

  1. Aren't the acceptance tests the same as unit tests, because they are testing a single story, which is just a small unit from a larger system?
  2. Who should write these tests?
  3. How detailed should the test descriptions be?
  4. If we have acceptance tests, do we need TDD? If we have TDD, do we need acceptance tests?

One of the areas of confusion is what constitutes a unit. In classical testing lexicon, a unit is a module or class. That's an implementation unit. In TDD lexicon, the unit of isolation is a test. A unit test is simply a test that runs in isolation. It is a standalone unit. It tests a behavior, regardless of what implementation units are bound up in that behavior. You achieve isolation by decoupling things and faking out or mocking parts to break dependencies.

This can be a mind-bending subtlety. It took me a while to understand, and missing it often results in poorly-done TDD, creating high-maintenance, brittle test suites that break whenever the implementation changes. That in turn may lead people to abandon TDD. Unfortunately, that's the way the terminology has played out, but it's better to think of TDD as producing developer tests.
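
As a concrete illustration, here's a minimal sketch of a behavior-level test, assuming GoogleTest as the framework and a made-up EventQueue class (neither comes from the books referenced here). Each test names a behavior, builds its own fixture, touches only the public interface, and can run in isolation and in any order.

    #include <gtest/gtest.h>
    #include <cstddef>
    #include <cstdint>
    #include <deque>

    // Toy production class; stands in for whatever modules implement the behavior.
    class EventQueue {
    public:
        explicit EventQueue(std::size_t capacity) : capacity_(capacity) {}
        void push(std::uint32_t event) {
            if (events_.size() == capacity_) events_.pop_front();  // drop the oldest
            events_.push_back(event);
        }
        std::size_t size() const { return events_.size(); }
        std::uint32_t oldest() const { return events_.front(); }
    private:
        std::size_t capacity_;
        std::deque<std::uint32_t> events_;
    };

    // The test is the unit: it names a behavior, not a member function.
    TEST(EventQueue, DropsOldestEventWhenFull) {
        EventQueue q(2);
        q.push(1);
        q.push(2);
        q.push(3);                      // overflow: event 1 should be discarded
        EXPECT_EQ(2u, q.size());
        EXPECT_EQ(2u, q.oldest());
    }

    TEST(EventQueue, StartsEmpty) {
        EventQueue q(2);
        EXPECT_EQ(0u, q.size());
    }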

Let's look at the differences:

  • Acceptance tests are written before coding starts; TDD unit tests are written in real time, along with the code.
  • Acceptance tests define all the requirements, the goal the code is to meet, all at once; TDD unit tests are built up incrementally as the code progresses.
  • Acceptance tests are written by the customer (or someone acting as the customer's representative) who writes the story; TDD unit tests are written by the developer who implements the story.
  • Acceptance tests are written initially in natural language (i.e. English); TDD unit tests are written in a coding language.
  • Acceptance tests are automated by a test developer; TDD unit tests are automated by the developer who implements the code.
  • Acceptance tests exercise all the behaviors in the story as a set; TDD unit tests exercise behaviors in isolation.
  • Acceptance tests are automated before (ideally), during, or after coding; TDD unit tests are automated as part of coding.
  • Acceptance tests may take longer to run (measured in minutes); TDD unit tests run fast (measured in seconds).
  • Acceptance tests are run whenever someone wants to see whether the goal has been met and what's still missing from the code; TDD unit tests are run repeatedly, many times, as the code is built up, to detect immediate breakage.

Acceptance tests are written and automated separately from the code, by separate people. TDD tests are written and automated along with the code, by the person writing the code.

An important point about test speed is that developers won't mind running fast tests repeatedly as part of the typical edit-build-run cycle. They'll be less willing to run longer-duration, slower tests, especially if they require extensive setup. "Dude, I'm crankin' here, I'm in the zone, don't slow me down!"

For ease-of-use and to ensure repeated use, TDD test suites need to be fast. They need to be RFF: Really freakin' fast. That puts them right there in the zone with the developers.

What are the similarities between acceptance tests and TDD unit tests?

  • Both test behavior, not implementation. That means both are implementation-agnostic. That is, they test via the interface to the code. They don't test via the internals of the code. They exercise those internals through the interface, but they aren't written according to the specific internals.
  • Both verify that the code meets the requirements.
  • Both are automated, written in coding language, so that they can be run repeatedly, easily, at any time, by any tester or developer, and by the CI/CD system.
  • Once initial development is done, both become automated regression tests, to detect when previously-working code fails unexpectedly as a result of changes. Surprise!

That first point is a cardinal rule. Because of the confusion with "unit test" as meaning testing of implementation units, this rule is easy to overlook. It's very tempting to test internal details. Instead, just use the interface to determine what to test.

The interface is the public contract that the code is making, which tends to be much more stable than the implementation. If it's possible to do through the interface, it's fair game for tests. But the tests shouldn't look under the hood to check on what's happening. That makes them brittle; change the implementation, and the tests may fail, even though the implementation may be correct.
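
Here's a small hedged sketch of that rule in action, again assuming GoogleTest and a hypothetical MovingAverage class. The test pins down the public contract; the brittle alternative, shown only as a comment, would reach into the internals and break on any change of storage strategy.

    #include <gtest/gtest.h>
    #include <cstddef>
    #include <vector>

    class MovingAverage {
    public:
        explicit MovingAverage(std::size_t window) : window_(window) {}
        void add(double sample) {
            samples_.push_back(sample);
            if (samples_.size() > window_) samples_.erase(samples_.begin());
        }
        double value() const {
            double sum = 0.0;
            for (double s : samples_) sum += s;
            return samples_.empty() ? 0.0 : sum / samples_.size();
        }
    private:
        std::size_t window_;
        std::vector<double> samples_;   // internal detail; could become a ring buffer
    };

    // Behavior test: it survives any rewrite of the internals above.
    TEST(MovingAverage, AveragesOnlyTheMostRecentWindowOfSamples) {
        MovingAverage avg(2);
        avg.add(10.0);
        avg.add(20.0);
        avg.add(40.0);                  // 10.0 falls out of the window
        EXPECT_DOUBLE_EQ(30.0, avg.value());
    }

    // A brittle test would instead assert that samples_ is a std::vector holding
    // {20.0, 40.0}. Switch the storage to a fixed-size ring buffer and that test
    // fails, even though value() is still correct.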

The last point is also very important. Code changes over time. How do you know you haven't broken something? That's the whole point of Feathers' LegacyCode.

But as you can see, other than who writes the tests and when they are written, they both end up being something pretty similar.

So are they both needed? Both are useful, because they come from different perspectives.

The difference in isolation offers a clue. Each of the TDD tests exercises a behavior in isolation, as a standalone unit, with no regard for ordering. Acceptance tests exercise a set of behaviors as a whole, as a coherent set, run in a particular order that demonstrates meeting the business requirements.

The main value that acceptance tests offer is that they test completeness from the business perspective. The TDD tests verify that each of the implemented behaviors works properly. But they can't verify behaviors that haven't been implemented.

The acceptance tests verify that all the behaviors required have been implemented and work together in such a way as to meet the business goal for the story.

In some respects, it is possible for the TDD tests to also serve as the acceptance tests. But that risks incompleteness, and tests designed to run in isolation may no longer run properly when the behaviors are combined.
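
For contrast, here's a hedged sketch of what an automated acceptance test tends to look like: it walks one story scenario from start to finish, in order, against the assembled feature. The DataLogger class and the story wording are hypothetical, and GoogleTest is assumed only as a convenient runner; in practice an acceptance test might use a higher-level tool.

    #include <gtest/gtest.h>
    #include <string>
    #include <vector>

    // Stand-in for the assembled feature under acceptance test.
    class DataLogger {
    public:
        void start() { running_ = true; }
        void record(const std::string& entry) { if (running_) log_.push_back(entry); }
        void stop() { running_ = false; }
        std::vector<std::string> exportLog() const { return log_; }
    private:
        bool running_ = false;
        std::vector<std::string> log_;
    };

    // Story: "As a technician I can start a logging session, capture readings,
    // stop the session, and export what was captured."
    TEST(DataLoggingStory, TechnicianCanCaptureAndExportASession) {
        DataLogger logger;

        logger.record("ignored");          // nothing recorded before start
        logger.start();
        logger.record("23.5C");
        logger.record("24.1C");
        logger.stop();
        logger.record("ignored too");      // nothing recorded after stop

        std::vector<std::string> expected{"23.5C", "24.1C"};
        EXPECT_EQ(expected, logger.exportLog());
    }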


Concerns

Too Much Time

What about the concern that this is too time-consuming? Remember that the goal is to avoid making a mess of things, which would require a lot of time and money to clean up if it affected customers.

Meanwhile, TDD avoids a lot of time spent debugging before releasing (which is just dealing with messes before they affect customers).

So this is an investment of time to mitigate the significant downside risk of endless schedule hits from troubleshooting cycles. Wouldn't you like the code to just work without all that hassle?

Too Much Code

Another concern is that you may end up producing as much or more test code than production code. So, what's the problem? Does the production code work? Does it meet its requirements?

Can the testing get out of control and be too much? I'm sure it can be if poorly done. But with good developers who are driven to produce good product, that's unlikely to happen.

These are results-oriented people, not given to tolerating frivolous waste. If they produced that much test code in order to properly test the production code, they probably needed it.

But doesn't more code mean more bugs? Can't the tests have bugs? Yes, there is the potential for that, but you prevent it by applying the TDD cycle: you show a test failing, and then passing because of the changes you made. You have to see evidence of it actually working.

Just as you isolate and remove bugs in the production code in real time as you write it, you isolate and remove bugs in the tests in real time as you write them. It may be a little messy at points as you're sorting out the specifics of a test and the specifics of the relevant production code, but you home in on correct operation of both.

Brittle Tests

Brittle tests are tests that break when the production code implementation is changed, even though the implementation is correct. This is a very real concern. Even though the behavior hasn't changed, the tests fail.

This is an indication that the tests are testing internal implementation details, not behavior. That means they have violated the cardinal rule I mentioned above.

That means that when you refactor the implementation for some reason, you are forced to refactor the tests as well to match. Then the tests start to turn into a high-maintenance nightmare. Eventually, you're liable to abandon them as a result.

That happened to me on my first use of TDD. The tests were a huge help in developing the code initially, but were far too tied in to the internals. I ended up abandoning that test suite.

Remove all such tests from the test suite to remove the brittleness.

It can be useful to temporarily add some tests that delve into internals until you get things worked out, but then remove them.


Tying This Back To Sources

Kent Beck

I was first introduced to TDD in 2007 with Beck's TDDByExample, where he teaches the process by leading you through a series of examples. I read the book and applied the method to the C++ code I was developing. I was familiar with acceptance tests and unit tests in various forms, but this was a whole new approach.

In Chapter 1, "Multi-Currency Money," he says:

"What behavior will we need to produce the revised report? Put another way, what set of tests, when passed, will demonstrate the presence of code we are confident will compute the report correctly?"
"When we write a test, we imagine the perfect interface for our operation. We are telling ourselves a story about how the operation will look from the outside. Our story won't always come true, but it's better to start from the best-possible application program interface (API) and work backward than to make things complicated, ugly, and "realistic" from the get-go."

He lists the TDD cycle:

  1. Add a little test.
  2. Run all tests and fail.
  3. Make a little change.
  4. Run the tests and succeed.
  5. Refactor to remove duplicates.
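
To make the rhythm concrete, here is a minimal sketch of one pass through that cycle, assuming GoogleTest and a made-up toFahrenheit() helper; it is not an example from the book.

    #include <gtest/gtest.h>

    double toFahrenheit(double celsius);   // forward declaration so the file compiles top to bottom

    // Step 1: add a little test. Step 2: run all tests and fail - at this point
    // the failure is a missing implementation.
    TEST(TemperatureConversion, ConvertsFreezingPoint) {
        EXPECT_DOUBLE_EQ(32.0, toFahrenheit(0.0));
    }

    // Step 3: make a little change - the smallest code that could possibly pass.
    double toFahrenheit(double celsius) {
        return celsius * 9.0 / 5.0 + 32.0;
    }

    // Step 4: run the tests and see them pass.
    // Step 5: refactor away any duplication, then loop back to step 1 with the
    // next behavior, e.g. ConvertsBoilingPoint.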

In Chapter 26, "Equality for All, Redux," he foreshadows Feathers' LegacyCode:

"You will often be implementing TDD in code that doesn't have adequate tests (at least for the next decade or so). When you don't have enough tests, your are bound to come across refactorings that aren't supported by tests. You could make a refactoring mistake and the tests would all still run. What do you do?"
"Write the tests you wish you had. If you don't, you will eventually break something while refactoring."

In Chapter 25, "Test-Driven Development Patterns", under the heading "Isolated Test," he says:

"But the main lesson I took was that tests should be able to ignore one another completely."
"One convenient implication of isolated tests is that the tests are order independent. If I want to grab a subset of tests and run them, then I can do so without worrying that a test will break now because a prerequisite test is gone."
"Isolating tests encourages you to compose solutions out of many highly cohesive, loosely coupled objects. I always heard this was a good idea, and I was happy when I achieved it, but I never knew exactly how to achieve high cohesion and loose coupling regularly until I start writing isolated tests."

In Chapter 27, "Testing Patterns", under the heading "Mock Object," he says:

"How do you test an object that relies on an expensive or complicated resource? Create a fake version of the resource that answers constants."
"Mock Objects encourage you down the path of carefully considering the visibility of every object, reducing the coupling in your designs. They add a risk to the project - what if the Mock Object doesn't behave like the real object? You can reduce this strategy by having a set of tests for the Mock Object that can also be applied to the real object when it becomes available."

I did make some mistakes. I failed to grasp the subtlety of testing just the interface, not the implementation, as he implied in Chapter 1; I needed to be clubbed over the head with that guidance explicitly before I fully appreciated it. As I mentioned above, because I tested the implementation of things, not just the interfaces, I created brittle tests. That was the biggest negative lesson.

We can rewind further, to his XPExplained. In Chapter 18, "Testing Strategy," he lays the foundation for the form of acceptance and unit tests I'm using here.

Under the heading "Who Writes Tests?" he says they come from two sources:

  • Programmers
  • Customers

"The programmers write tests method-by-method." These are the unit tests (i.e. the developer tests). He describes writing them first, before writing the code, although he doesn't describe the TDD cycle in this book.

  • If the interface for a method is at all unclear, you write a test before you write the method.
  • If the interface is clear, but you imagine that the implementation will be the least bit complicated, you write a test before you write the method.
  • If you think of an unusual circumstance in which the code should work as written, you write a test to communicate the circumstance.
  • If you find a problem later, you write a test that isolates the problem.
  • If you are about to refactor some code, and you aren't sure how it's supposed to behave, and there isn't already a test for the aspect of the behavior in question, you write a test first.

"The customers write tests story-by-story." These are the acceptance tests. The customers rely on testers to automate the acceptance tests.

"The question they need to ask themselves is, "What would have to be checked before I would be confident that this story was done?" Each scenario they come up with turns into a test, in this case a functional test."
"Customers typically can't write functional tests by themselves. They need the help of someone who can first translate their test data into tests, and over time can create tools that let the customers write, run, and maintain their own tests. That's why an XP team of any size carries at least one dedicated tester. The tester's job is to translate the sometimes vague testing ideas of the customer into real, automatic, isolated tests. The tester also uses the customer-inspired tests as the starting point for variations that are likely to break the software."

Robert Martin

Fast-forward to last year, where Martin's CleanAgile reiterates the model (given that he feels the Agile community has strayed from it). In Chapter 3, "Business Practices," under the heading "Acceptance Tests," he says "Requirements should be specified by the business" (his emphasis). The requirements take the form of acceptance tests.

He addresses some of the circular logic confusion this causes. There's an inherent paradox: the business should write the tests, but since they need to be executable, they need to be written by programmers, who won't write from the business' point of view... 

Then he lays out the practice that resolves the paradox:

"The business writes formal tests describing the behavior of each user story, and developers automate those tests."
"The developers integrate those tests into the continuous build. Those tests become the Definition of Done for the stories in the iteration. A story is not specified until its acceptance test is written (Martin means the story has not reached the fully-specified state until the test is written). A story is not complete until its acceptance test passes."

Under the heading "Developers Are the Testers," he says:

"It is the programmers' job to run the tests. It is the programmers' job to make sure that their codes passes all the tests. So of course, the programmers must run those tests. Running those tests is the only way for programmers to determine whether their stories are done."
"Indeed, the programmers will automate that process by setting up a Continuous Build server. This server will simply run all the tests in the system, including all unit tests and all acceptance tests, every time any programmer checks in a module."

Then in Chapter 5, "Technical Practices," under the heading "Test-Driven Development," he describes how TDD is programming's equivalent of the accountant's discipline of double-entry bookkeeping:

"Every required behavior is entered twice: once as a test, and then again as production code that makes the test pass. The two entries are complementary, just as assets are complementary to liabilities and equities. When executed together, the two entries produce a zero result: Zero tests failed."
"Programmers who learn TDD are taught to enter every behavior one at a time - once as a failing test, and then again as production code that passes the test. This allows them to catch errors quickly. They are taught to avoid writing a lot of production code and then adding a batch of tests, since errors would then be hard to find."

He lists three simple rules for TDD:

  • Do not write any production code until you have first written a test that fails due to the lack of that code.
  • Do not write more of a test than is sufficient to fail - and failing to compile counts as a failure.
  • Do not write more production code than is sufficient to pass the currently failing test.

Under the heading "Refactoring", he talks about "improving the structure of the code without altering the behavior, as defined by the tests." Here he lists the Red/Green/Refactor cycle (red referring to the test environment displaying a red FAILED indication, green referring to a green PASSED indication):

  1. First, we create a test that fails (the red step).
  2. Then we make the test pass (the green step).
  3. Then we clean up the code (the refactor step).
  4. Return to step 1.

Something I found critical, he goes on to say:

"The idea here is that writing code that works and writing code that is clean are two separate dimensions of programming. Attempting to control both dimensions at the same time is difficult at best, so we separate the two dimensions into two different activities."
"To say this differently, it is hard enough to get code working, let alone getting the code to be clean. So we first focus on getting the code working by whatever messy means occur to our meager minds. Then, once working, with tests passing, we clean up the mess we made."
"This makes it clear that refactoring is a continuous process, and not one that is performed on a scheduled basis. We don't make a huge mess for days and days, and then try to clean it up. Rather, we make a very small mess, over a period of a minute or two, and then we clean up that small mess."

I can't stress enough how important those last three paragraphs are. They free you to experiment and explore, to try and fail, until you get something you're happy with. Be as hacky or as elegant as you want while you think it through, work it out, and deal with the "oh yeah, that" moments; then make it clean.
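
Here's a small hedged sketch of that separation, with hypothetical names and GoogleTest assumed: the test is the fixed point, the first working implementation is kept as a comment, and the refactored version replaces it while the bar stays green.

    #include <gtest/gtest.h>
    #include <algorithm>
    #include <vector>

    // Green step: the first thing that worked (kept as a comment for contrast).
    // int median3(int a, int b, int c) {
    //     if ((a >= b && a <= c) || (a <= b && a >= c)) return a;
    //     if ((b >= a && b <= c) || (b <= a && b >= c)) return b;
    //     return c;
    // }

    // Refactor step: same behavior, cleaner expression - done only while the
    // bar is green, in a mess no bigger than a minute or two of work.
    int median3(int a, int b, int c) {
        std::vector<int> v{a, b, c};
        std::sort(v.begin(), v.end());
        return v[1];
    }

    // The test never changes across the refactor; it's the safety net.
    TEST(Median3, PicksTheMiddleValueRegardlessOfOrder) {
        EXPECT_EQ(2, median3(1, 2, 3));
        EXPECT_EQ(2, median3(3, 2, 1));
        EXPECT_EQ(2, median3(2, 3, 1));
    }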

Michael Feathers

In Feathers' LegacyCode Chapter 8, under the heading "Test-Driven Development (TDD)," he lists his version of the TDD algorithm:

  1. Write a failing test case.
  2. Get it to compile.
  3. Make it pass.
  4. Remove duplication.
  5. Repeat.

His "remove duplication" step is essentially Martin's refactor step. For legacy code (in his definition, "legacy code is simply code without tests."), he adds a pre-step 0 and modifies step 3:

0. Get the class you want to change under test.

3. Make it pass. (Try not to change existing code as you do this.)

Once you've accomplished that, then you can change existing code, because then you have a test acting as a safety net to catch you if you break anything while changing the code.
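
As a hedged illustration (not an example from the book), here is one way "get it under test" can look in C-flavored embedded code: the hypothetical readBatteryMillivolts() called the ADC driver directly, so routing that call through a function pointer creates a seam the test can repoint, without changing the existing conversion logic. GoogleTest is assumed.

    #include <gtest/gtest.h>
    #include <cstdint>

    // Placeholder for the real driver call that exists elsewhere in production.
    static std::uint16_t realAdcRead() { return 0; }

    // The seam: a test can repoint this; production code leaves it alone.
    std::uint16_t (*adcRead)() = realAdcRead;

    std::uint32_t readBatteryMillivolts() {
        // Existing legacy logic, unchanged: 12-bit ADC, 3300 mV reference.
        return static_cast<std::uint32_t>(adcRead()) * 3300u / 4095u;
    }

    TEST(ReadBatteryMillivolts, ConvertsAFullScaleReadingToTheReferenceVoltage) {
        adcRead = [] { return static_cast<std::uint16_t>(4095); };
        EXPECT_EQ(3300u, readBatteryMillivolts());
    }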

James Grenning

My favorite book on TDD is Grenning's TDDEmbeddedC. It elaborates in full detail, step-by-step, how to perform TDD, covering a variety of situations, strategies, and tactics.

That includes the important concept of test doubles, the things that fake out parts of the system to break dependencies. These also provide observability into the system, to observe the effects of the tests and the behaviors they are exercising.
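
Here's a minimal sketch of that idea, with hypothetical names (FlashDevice, FakeFlash, BootCounter) and GoogleTest assumed: the fake breaks the dependency on real flash hardware and doubles as the observation point the test uses to verify what was written.

    #include <gtest/gtest.h>
    #include <cstdint>
    #include <map>

    class FlashDevice {                         // the dependency to break
    public:
        virtual ~FlashDevice() = default;
        virtual void write(std::uint32_t address, std::uint8_t value) = 0;
    };

    class FakeFlash : public FlashDevice {      // test double with observability
    public:
        void write(std::uint32_t address, std::uint8_t value) override {
            written[address] = value;           // recorded for the test to inspect
        }
        std::map<std::uint32_t, std::uint8_t> written;
    };

    // Code under test: persists a boot counter at a fixed address.
    class BootCounter {
    public:
        explicit BootCounter(FlashDevice& flash) : flash_(flash) {}
        void recordBoot(std::uint8_t count) { flash_.write(0x1000, count); }
    private:
        FlashDevice& flash_;
    };

    TEST(BootCounter, PersistsTheCountToItsReservedAddress) {
        FakeFlash flash;
        BootCounter counter(flash);
        counter.recordBoot(7);
        EXPECT_EQ(7, flash.written.at(0x1000));
    }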

So much the better that it focuses on embedded systems, where people may think they can only test on their final target platform (but the book isn't just for embedded systems).

In Chapter 1, "Test-Driven Development," section 1.2, "What Is Test-Driven Development," he draws an important distinction about TDD:

"Test-Driven Development is not a testing technique, although you do write a lot of valuable automated tests. It is is a way to solve programming problems. It helps software developers make good design decisions. Tests provide a clear warning when the solution takes a wrong path or breaks some forgotten constraint. Tests capture the production code's desired behavior."

In section 1.3, "Physics of TDD," he says that the typical Debug Later Programming (DLP) style results in lots of waste, due to the late feedback. In contrast, TDD provides fast feedback. That prevents bugs from escaping past the developer who is working on the code at the moment. That significantly cuts or even eliminates debug time.

That's one of the big returns on investment from TDD. Add that to the three benefits I noted above.

This is a hands-on practical process book. See my review above for a taste of what it offers.

One thing I discovered about TDD is that it's very easy to fool yourself into thinking you're doing it properly. It's very easy to lull yourself into short-cutting the cycle, especially when you're first getting started on a test. "Of course it's going to fail, the code isn't even written yet; why not just write it? This is just stupid!"

The real power of the method comes not in the initial steps, but once you get it rolling. So you need to be rigorous and disciplined about following the cycle.

But there are certainly some judgement calls to be made. They can be vexing until you get some experience with it. There's definitely a risk of forming poor habits.

How to avoid that? Via Grenning's live interactive TDDCourse. This is an opportunity to learn and practice TDD in real-time under a watchful eye. He really holds your feet to the fire doing the TDD cycle.

That's an enlightening experience. I had recently read his book before I was aware of the course, and thought I had it down. It was enormously valuable to have real-time guidance. It was effectively a form of pair programming, except that he was pairing with each of the people in the class at the same time.

Just like his book, the course is hands-on practical process. As with the book, see my review of it above.

Jeff Langr

Langr's book TDDC++ is an excellent follow-on to TDDEmbeddedC. It reiterates a lot of the ground covered in the other books, but with another perspective and emphasis.

In Chapter 3, "Test-Driven Development Foundations," section 3.5, "Getting Green on Red", he talks about the risk of premature passes of tests that should have failed:

  • Running the wrong tests.
  • Testing the wrong code.
  • Unfortunate test specification (i.e. you coded the test wrong).
  • Invalid assumptions about the system.
  • Suboptimal test order.
  • Linked production code.
  • Overcoding.
  • Testing for confidence.

Then in section 3.6, "Mind-Sets for Successful Adoption of TDD," under the heading "Test Behavior, Not Methods," he says:

"A common mistake for TDD newbies is to focus on testing member functions...Instead, focus on behaviors or cases that describe behaviors."

Under the heading "Sticking to the Cycle," he says:

"Not following red-green-refactor will cost you. See Section 3.5, Getting Green on Red, on page 60 for many reason why it's important to first observe a red bar. Obviously, not getting a green when you want one means you're not adding code that works. More importantly, not taking advantage of the refactoring step of the cycle means that your design will degrade. Without following a disciplined approach to TDD, you will slow down."

That last sentence is very important. As I said above, you need to be disciplined in doing this. Don't take shortcuts.

In Chapter 10, "Additional TDD Concepts and Discussions," section 10.3, "Unit Tests, Integration Tests, and Acceptance Tests," he distinguishes between the test types, as well as who writes them.

He says:

"For the purposes of this book, unit means a small piece of isolated logic that affects some systematic behavior. The word isolated in the definition suggests you can execute the logic independently. This requires decoupling the logic from dependencies on things such as service calls, APIs, databases, and the file system...the important aspect of unit tests for purposes of doing TDD is that they're darn fast."
"By definition, unit tests are inadequate. Since they verify small isolated pieces of code, they can't demonstrate the correctness of an end-to-end deployed-and-configured solution. In addition to unit tests, your system requires tests that will provide high confidence that you are shipping a high-quality product. Depending on the shop, these tests include what might be called system tests, customer tests, acceptance tests, load tests, performance tests, usability tests, functional tests, and scalability tests, to name a few (some of these are more or less the same thing). All of these tests verify against an integrated software product and are thus integration tests."
"Per the Agile community, customer tests are any tests defined to demonstrate that the software meets business needs. In an Agile process, these tests are defined before development in order to provide a specification of sorts to the development team - a close analog to the TDD process. Agile proponents will often refer to customer tests defined up front as acceptance tests (ATs). If the development team builds software that gets all the tests to pass, the customer agrees to accept the delivery of the software.

Ian Cooper

Someone on LinkedIn referred me to Cooper's video presentation TDDWrong, where he examines how people started using TDD poorly, and the negative reactions that produced.

He then covers how to do it properly, by reiterating and reinforcing that tests should be written for behaviors and interfaces, not implementations. It was only after watching this and digging back through my books that I fully appreciated that concept.

He says:

"Avoid testing implementation details, test behaviors."
"Test the public API"
"Do not write test for implementation details - these change!"
"Write tests only against the stable contract of the API"
"Only writing tests to cover the implementation details when you need to better understand the refactoring of the simple implementation we start with."
"Delete these tests!"

Regarding the unit of isolation, the unit test:

"For Kent Beck it is a test that 'runs in isolation' from other tests."
"Nothing more, nothing less."
"The test is isolated, not the system under test!"
"It is NOT to be confused with the classical unit test definition of targeting a module."

The video is a must for anyone who wants to practice TDD and avoid the pitfalls. He's actually presented this material several times, and people have uploaded screenshots and versions of his slides, for example at Tjen's blog.

Sandro Mancuso

Mancuso is the author of the excellent "Clean Coding" series book The Software Craftsman: Professionalism, Pragmatism, Pride. In his video presentation TDDGoodDesign, he directly addresses the question of design and TDD.

He differentiates between two TDD styles:

  • Classicist style, Kent Beck's original style (aka "Chicago school").
  • Outside-in style (aka "London school").

The classicist style starts from nothing, with no initial idea of what the design looks like. That results in "emergent design," where the design emerges from the TDD cycle, particularly in the refactor phase. Just make it work, then clean it up; that's where the design emerges.

The outside-in style starts with an initial idea of the design. So it is an up-front design, but rather than a "Big Design Up Front" (BDUF) style of a complete grand design, it is just a starting point, going into the TDD cycle with intent. A number of design decisions have been made up front before TDD, what he calls "just-in-time design".

Both result in good design. They each have their place, applicable in different situations. For those parts where you don't know going in what you'll need down in the guts, use classicist style. For those parts where you do have some idea what you'll need, use outside-in style.


A Reality Check

While TDD has caught on reasonably well, even if in confused form, acceptance tests as defined here haven't really. Most stories have only the most perfunctory description of what the goal is, even if they have an explicit Definition of Done (DoD) listed.

Most of the time developers are working with pretty vague definitions and a lot of implied requirements, with all kinds of opportunities to get things wrong. "Hope and pray" is not an effective means for communicating requirements.

That's a shame, because good acceptance tests really could be useful. This needs to be a whole area of emphasis in Agile training and coaching.

Yeah, it's hard and it's time consuming to write out requirements in the form of tests and then actually translate them to executable, automated tests. But I believe it would be worthwhile to do so. An enormous amount of time is wasted by people spinning their wheels because these things aren't well-specified. All in the interest of moving fast, of course.

Even more aspirational is Beck's statement that customers could eventually maintain their own acceptance tests. That would be nice, but I envision it happening in only the rarest, most exceptional cases, and only where they've even bothered to write out the acceptance tests in the first place.


An Alternate Perspective

In Ousterhout's PhilDesign, Chapter 19, "Software Trends," section 19.4, "Test-driven development," he says:

"Although I am a strong advocate of unit testing, I am not a fan of test-driven development. The problem with test-driven development is that it focuses attention on getting specific features working, rather than finding the best design. This is tactical programming pure and simple, with all of its disadvantages. Test-driven development is too incremental: at any point in time, it's tempting to just hack in the next feature to make the next test pass. There's no obvious time to do design so it's easy to end up with a mess."

While I'm wildly in favor of everything else in the book, such as his discussion of comments, I disagree with his assessment of TDD.

He defines tactical programming as short-sighted focus on getting something working, as opposed to strategic programming, which is an investment mindset that focuses on producing "a great design, which also happens to work."

TDD might provide the opportunity for purely tactical programming, and it might be a stop along the way, but I think it's pretty clear from the other authors that it should not be the end point.

I assert that the goal of TDD is also good design, and in fact all the references above discuss design as part of TDD. As Grenning says, a good design is a testable design.

And as Mancuso points out, TDD is a design methodology in multiple ways. What he calls the "classicist" style comes closest to risking tactical programming, but the refactor step cleans that up. Then the "outside-in" style is a direct strategic approach.

I further assert that TDD is a force multiplier for your skills. Want to be a 10X developer? TDD will give you at least a 2X bump. Maybe 4X. Maybe more, because of the design benefits and the avoidance of time-consuming DLP work. Really.

The challenge is not to give in to tactical programming and just leave the code that way (remember that refactor step?). In my experience, TDD actually leads to good design if you keep that strategic mindset foremost.

You're expecting the codebase to survive for a long time, across multiple revisions, feature changes, product releases, and developers, so you need to be investing in the future. That's one reason for doing TDD in the first place.

It's about getting to V12.0 years down the line, not just V1.0 this year. Investment includes testability, maintainability, and an overall project-wide, system-wide design view and abstractions, not tunnel vision on single items.

There are multiple interacting goals in developing software. I believe strategic programming via TDD to achieve a good design is possible as long as you're aware of those interactions and the risks they pose.


Why Bother?

"Isn't this a lot of work? All this testing crap doesn't move us forward. We have product we need to ship!"

Yes, you do. And you'd probably like it to work.

You'd probably like to not have customers badmouthing your company or suing it. You'd probably like to not have to face a Congressional committee explaining what went wrong. You'd probably like to not have to explain to people why your product ruined their lives. You'd probably like to not have to explain to grieving family members why your product killed their loved ones.

We see examples of companies in these positions all the time in the news. You don't want to join them.

That's why you test. It's part of the due diligence you do to make sure you're creating good products. It's part of responsible, rigorous engineering discipline. It's not the whole answer, but it's part of the answer.

Another person on LinkedIn also told me about the phrase "Tester c'est douter," meaning "To test is to doubt." It's used to justify not writing tests: "If you write tests, you doubt about your talent, you are better than that, no?"

No. That's a terrible attitude. It assumes we're perfect. Any endeavor that relies on perpetual human perfection is doomed to failure.

We make mistakes. We miscommunicate and misinterpret. We forget. We get confused. We focus on the wrong things. We don't think things through. We don't deal with all possibilities. We're human.

I take an opposite philosophy, aware of our human shortcomings: untested code is by definition broken code. You can't trust it until you've proven that it works with actual evidence of working.

That's what I like about acceptance testing and TDD. They prove to me that something is working with evidence, so I have actual confirmation, not just blind hope or foolhardy unfounded confidence. And if it's not working, they direct me right to the failure point and aid me in fixing it. That gives me real, well-founded confidence.

That's why you bother.


