
Six Software Design Tools

Steve Branam, November 5, 2021

Introduction

Here are six tools to help you with software design. The first two are very simple, almost deceptively trivial, while the last four are more involved. They apply universally, to all types of software, all types of systems, and all languages. This is part of good engineering discipline.

At face value, this is just a bunch of acronyms, alphabet soup. How can that help you? What's really important is the concepts behind those acronyms, the ways of thinking and working. The acronyms are just mnemonics, memory aids to help you keep the concepts in mind.

Design covers a lot of ground. It's all about abstraction, combining the details of something into higher level concepts, so you can reason about those concepts rather than all the details. It's about seeing the forest, not just the trees.

Some people seem to have a natural talent for it. Some seem to struggle with it. Having a set of tools doesn't automatically make you better at it, but they help to structure your thoughts and improve your discipline.

Design includes both large and small considerations. That's both the line-by-line details of your code, as well as the large overall structure of modules and objects and layers and services and systems. You have to think at multiple levels.

Design is important for the long term. Most useful products live a long life. Many people will touch the code over that time. Code that is well-designed is more maintainable, and a pleasure to work on. That means less chance of introducing bugs that can ruin a good product.

Design is also important for the short term. That means a smoother path to first product release. It also means a better product that will work better for customers and users. Or, as with most embedded systems, will become invisible to them, because it just works and does what it's supposed to do reliably.

The short term benefits start the product revenue stream flowing, and the long term benefits keep it flowing, across many years and many releases.

An important thing to remember is that tools are just tools. They can be used well, and they can be used poorly. They can be used properly, and they can be used improperly. Not every tool applies in every situation. Pick the appropriate tools for the appropriate situations. A hammer is a great tool, but it's a terrible way to drive screws.

The way to apply the tools is to think. Think things through. Think about alternatives. For each tool, ask yourself the question, "Could this tool be applied here?"

Have you ever wondered what kinds of things to consider in code reviews? These tools are good things to include in your code review checklist.

In this post I'll keep it short, just listing them and briefly discussing them. None of them are new or my own inventions, so you can find plenty of information elsewhere. My goal here is to direct you to them. Go learn about them, and then use them.

There are also other tools; this is not an exhaustive list, just these six. But they're a good start that will put you in the right frame of mind and carry you a long way. In particular, design patterns are a whole additional set of useful tools.


The Tools

  • DAMP: Descriptive And Meaningful Phrases
  • DRY: Don't Repeat Yourself
  • MCC: McCabe Cyclomatic Complexity
  • SOLID:
    • SRP: Single Responsibility Principle
    • OCP: Open Closed Principle
    • LSP: Liskov Substitution Principle
    • ISP: Interface Segregation Principle
    • DIP: Dependency Inversion Principle
  • API: Application Programming Interface
  • TDD: Test-Driven Development

DAMP

Names matter. They capture concepts in a single symbol. That symbol becomes part of the language and vocabulary of the topic.

DAMP means that you create names that are descriptive and meaningful phrases. All kinds of things in software have names, from the highest architectural level down to the smallest code items.

If it's any kind of software unit, at any scale, give it a DAMP name.

This abstracts things to meaningful concepts. It also communicates a lot of information, so that the code truly is more self-documenting.
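
For example (the names here are invented for illustration), compare a tersely named function with a DAMP version:

```cpp
// Cryptic names force the reader to reverse-engineer the intent:
int chk(int t, int h);

// DAMP names let the code describe itself (these names are invented
// for the example):
bool isEnvironmentWithinOperatingLimits(int temperatureCelsius,
                                        int humidityPercent);
```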

DRY

Repetitive code is just an accident waiting to happen, where it gets changed one way in one place, but not in another. Or things that looked like duplication but actually weren't get changed along with the other occurrences.

DRY means Don't Repeat Yourself. It means ruthlessly hunting down repetition and replacing it with a single thing, that you can then give a DAMP name. Then it becomes a concept of its own, that can be used where needed.

Then you just have one place to make a change, and all uses of it get to share the benefit.

There's even a concept called WET code: Write Everything Twice. The first time you write the code, it's not duplicated anywhere, it just sits where you put it, in the middle of other code.

The second time, as you're about to write the same code again, you realize it's duplication. This is the point where you can pull it out of the original place, write it a second time as a separate, named abstraction, and then refer to it from both places.

This actually frees you from the need to try to anticipate what might end up being duplicated. So you can write the code as needed, then when you recognize the need to duplicate, DRY it at that point.
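
Here's a minimal sketch of DRYing a repeated expression, assuming a hypothetical 12-bit ADC with a 3300 mV reference:

```cpp
#include <cstdint>

// Before: the same scaling expression repeated wherever a raw ADC
// reading is used (hypothetical 12-bit ADC, 3300 mV reference):
//   uint32_t millivolts = raw * 3300u / 4095u;

// After: the duplication is pulled out into one DAMP-named function.
constexpr uint32_t kAdcReferenceMillivolts = 3300u;
constexpr uint32_t kAdcFullScaleCount = 4095u;

uint32_t adcCountsToMillivolts(uint32_t rawCounts)
{
    return rawCounts * kAdcReferenceMillivolts / kAdcFullScaleCount;
}
```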

One concern about being too dogmatic with DRY is that you try to force common abstractions on things that only look like they're the same, but are in fact different concepts. One example is different constants that happen to have the same numerical value.

To maintain a good balance, always ask yourself if you are creating useful abstractions, or if you are simply hammering two things that look similar into a misshapen abstraction. Ultimately, the best value comes out of good abstractions. DRY can help you recognize them. It can also help you recognize when they aren't.

MCC

Code that's overly complex is another accident waiting to happen. You know that intuitively as you scroll up and down the pages of a long function, or sideways in a function with deep nesting running off the side of the page. These are too long and too wide, what I think of as 2-dimensional coding.

The McCabe Cyclomatic Complexity metric quantifies that. It's an objective measure of the number of paths in a function, based on the code structure. That tells you whether a function should be split up into multiple callable parts, each an abstraction of part of the overall processing. That's what I think of as 3-dimensional coding: reducing the length and width, and adding depth.

Aim for an MCC value in the range of 10-20. The lower the better, and there are some specific exceptional situations where larger values are acceptable. There are automated tools that calculate the value for source code. High values are a sign you need to break something up.
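
Here's a sketch of that kind of breakup (the message-processing functions and Message type are invented for the example):

```cpp
struct Message { /* fields elided for the sketch */ };

// Each extracted function carries its own small path count.
static bool processHeader(const Message&)  { return true; /* details elided */ }
static bool processPayload(const Message&) { return true; /* details elided */ }
static bool processTrailer(const Message&) { return true; /* details elided */ }

// Three decision points here, so this function's MCC is 4, instead
// of one long function carrying every branch at once.
bool processMessage(const Message& msg)
{
    if (!processHeader(msg))  return false;
    if (!processPayload(msg)) return false;
    if (!processTrailer(msg)) return false;
    return true;
}
```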

A further refinement of this is MC/DC: Modified condition/decision coverage. This examines complex decision expressions to elaborate all the individual combinations of conditions in them. Each one is a variation on the code path.
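
For example, consider this compound decision (the function and its conditions are invented). A minimal MC/DC test set shows each condition independently flipping the outcome:

```cpp
// Three conditions in one decision expression.
bool shouldEngageBrake(bool pedalPressed, bool speedAboveThreshold,
                       bool overrideActive)
{
    return pedalPressed && (speedAboveThreshold || overrideActive);
}

// A minimal MC/DC-style test set for the expression above
// (values invented for illustration):
//   pedal=1, speed=1, override=0 -> true   (baseline)
//   pedal=0, speed=1, override=0 -> false  (pedal flips the result)
//   pedal=1, speed=0, override=0 -> false  (speed flips the result)
//   pedal=1, speed=0, override=1 -> true   (override flips the result)
```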

SOLID

The SOLID principles are a set of 5 design principles gathered by Robert C. Martin from other practitioners. They tend to be associated with object-oriented languages, but are not strictly limited to them. Remember, this is about a way of thinking, not about using a specific language.

SRP

The Single-Responsibility Principle says that every software entity (class, function, module, etc.) should have only one responsibility. This is cohesion, which should be high.
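
A minimal sketch, with invented class names: a single class that both sampled a sensor and formatted a report would have two reasons to change, so split it in two, each with one responsibility:

```cpp
#include <string>

// Responsibility 1: getting the data.
class TemperatureSensor {
public:
    double readCelsius() const { return 21.5; /* hardware read elided */ }
};

// Responsibility 2: presenting the data.
class TemperatureReportFormatter {
public:
    std::string format(double celsius) const {
        return "temperature_c=" + std::to_string(celsius);
    }
};
```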

OCP

The Open-Closed Principle says that every software entity should be open for extension, but closed for modification.
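
A sketch of what that looks like (names invented): new behavior is added by writing a new derived class, not by editing the code that already works:

```cpp
#include <memory>
#include <vector>

class Filter {
public:
    virtual ~Filter() = default;
    virtual double apply(double sample) const = 0;
};

class LowPassFilter : public Filter {
public:
    double apply(double sample) const override { return sample * 0.5; }
};

// Adding a MedianFilter later requires no change to this function:
// it's closed for modification, while Filter is open for extension.
double runPipeline(const std::vector<std::unique_ptr<Filter>>& filters,
                   double sample)
{
    for (const auto& f : filters) sample = f->apply(sample);
    return sample;
}
```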

LSP

The Liskov Substitution Principle says that you should be able to substitute an object of any subtype (derived class) for an object of its supertype (base class). This is polymorphism.
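
A minimal sketch with invented names: any Logger subtype can stand in wherever a Logger is expected, without the caller knowing or caring which one it got:

```cpp
#include <cstdio>

class Logger {
public:
    virtual ~Logger() = default;
    virtual void log(const char* message) = 0;
};

class ConsoleLogger : public Logger {
public:
    void log(const char* message) override { std::printf("%s\n", message); }
};

// Works identically with ConsoleLogger, a FlashLogger, a NullLogger...
void reportStartup(Logger& logger) { logger.log("system up"); }
```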

ISP

The Interface Segregation Principle says that interfaces for different clients should be segregated, so that no client should depend on a wider interface that includes things it doesn't need. This reduces coupling, which should be low.
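
A sketch with invented names: rather than one wide device interface, each client depends only on the narrow interface it actually uses:

```cpp
// Two segregated interfaces instead of one wide Device interface.
class Readable {
public:
    virtual ~Readable() = default;
    virtual int read() = 0;
};

class Configurable {
public:
    virtual ~Configurable() = default;
    virtual void setSampleRateHz(int hz) = 0;
};

// A data consumer needs only Readable; it is not coupled to
// configuration methods it never calls.
int consumeOneSample(Readable& source) { return source.read(); }
```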

DIP

The Dependency Inversion Principle says that you should depend on abstract interfaces, not concrete implementations.
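
A minimal sketch (names invented): the high-level policy depends on an abstract Storage interface, and the concrete flash or file implementation is injected:

```cpp
class Storage {
public:
    virtual ~Storage() = default;
    virtual bool write(const void* data, unsigned length) = 0;
};

class EventRecorder {
public:
    explicit EventRecorder(Storage& storage) : storage_(storage) {}
    bool record(const char* event, unsigned length) {
        return storage_.write(event, length);  // no concrete dependency
    }
private:
    Storage& storage_;  // depends on the abstraction, not a concrete class
};
```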

API

An Application Programming Interface abstracts the services that a component provides to its clients. From the perspective of the component, those clients are the specific "applications" that are making use of the API, even if they are actually "system" software.

The perspective I'm taking here is that an "application" is any user of the services provided by the API. Conversely, the "system" is the set of API's and services that the application is built on. This is a more abstract view that differs from the distinctions of application and system as user programs and operating system.

In a layered architecture, each lower layer provides an API for layers above to use. From a given layer's perspective, any of those upper layers is an application, applying its services for a specific purpose. From a given layer's perspective, those lower layers form the underlying system it's built upon.

The API concept provides two important things. First, it bundles up all the internal details of a component into a smaller, well-defined set of services. Applications only have to think in terms of those services, ignoring all the lower-level details, no matter how complex they might be.

Second, it provides a modular separation point. A given component can be swapped out with a different one that provides the same API, and the applications don't need to change. The new component provides the same abstract services, but may carry them out in a much different way.

This is known as separation of concerns. Rather than thinking of a system as one giant blob of code, it is separated into different responsibilities, as dictated by the SRP. As the design evolves, components may be further broken up, and responsibilities and services may shift around. This allows functionality to be abstracted into more modular things that are easier to reason about.
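
As a sketch of such a separation point, here's a hypothetical timer API header (everything about it is invented for the example); a hardware-timer implementation and a test double can both provide these same services:

```cpp
// timer_api.h -- the only thing clients see; internals can change
// or be swapped without touching client code.
#pragma once
#include <cstdint>

namespace timer_api {
    using TimerId = uint32_t;
    TimerId start(uint32_t timeoutMs);   // begin a one-shot timer
    bool    hasExpired(TimerId id);      // poll for expiry
    void    cancel(TimerId id);          // stop it early
}
```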

There are also specialized types of APIs:

  • HAL: Hardware Abstraction Layer, a specialized API for interacting with hardware.
  • OSAL: Operating System (OS) Abstraction Layer, a specialized API for interacting with an OS.
  • KAL: Kernel Abstraction Layer, a specialized API for interacting with an OS kernel.

A HAL allows hardware elements to be swapped out with different ones. By providing a version of the HAL for the new hardware that implements the services, the applications don't need to change.
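
Here's a minimal GPIO HAL sketch (the names are invented): application code calls these functions, and each board supplies its own implementation behind them:

```cpp
// hal_gpio.h -- the application sees pins and states, not registers.
#pragma once
#include <cstdint>

namespace hal {
    enum class PinState : uint8_t { Low, High };
    void     gpioInit(uint32_t pin);
    void     gpioWrite(uint32_t pin, PinState state);
    PinState gpioRead(uint32_t pin);
}

// Porting to a new MCU means writing a new hal_gpio.cpp against the
// vendor's registers; the application above this layer is untouched.
```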

An OSAL allows operating systems to be swapped out with different ones. By providing a version of the OSAL for the new OS that provides the services, the applications don't need to change.

A KAL is a further refinement of the OSAL concept, for applications that interact directly with an OS kernel (more common in embedded systems than in other types). It allows an OS kernel to be swapped out with a different one. By providing a version of the KAL for the new OS that implements the services, the applications don't need to change.

In embedded systems, an application that runs on a particular board, running a particular OS, using HAL, OSAL, and API layers, needs an implementation of those services specific to that board. A BSP (Board Support Package) provides those hardware- and OS-specific components.

In all cases, for all types of APIs and specialized abstraction layers, it's up to the designers to come up with the services and abstractions. Thinking in these terms forces you to look for abstractions. That helps to break away from the monolithic software mindset that results in designs that are hard to work with.

TDD

Test-Driven Development is a way of developing code, driven by tests.

TDD is sometimes thought of as a unit test methodology, but it's far more than that. The idea is that in real-time, as you are writing code, you are testing it to see that it does what you expect.

The terminology is unfortunate, a variation on the term unit test that creates confusion with classical test-after development methodologies carried out by separate SQA staff. TDD tests are better thought of as developer tests, since they are written and executed by developers, in real-time as they are developing the production code.

TDD drives design decisions about how to make the code testable. This results in a better design, that has been proven to work by the tests, at the small scale.

TDD concepts can then be scaled up as components are integrated together. Again, this drives design decisions. That extends TDD to the larger scale.
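
Here's a minimal sketch of the flavor of it, using plain asserts rather than a real test framework (the checksum function and values are invented for the example): the test is written first, fails, and the production code is written to make it pass:

```cpp
#include <cassert>
#include <cstdint>

// Production code under test: written to satisfy the tests below.
uint8_t xorChecksum(const uint8_t* data, unsigned length)
{
    uint8_t sum = 0;
    for (unsigned i = 0; i < length; ++i) sum ^= data[i];
    return sum;
}

int main()
{
    // Developer tests, written before the implementation existed.
    const uint8_t frame[] = { 0x01, 0x02, 0x04 };
    assert(xorChecksum(frame, 3) == 0x07);  // 0x01 ^ 0x02 ^ 0x04
    assert(xorChecksum(frame, 0) == 0x00);  // empty input
    return 0;
}
```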

See my post Acceptance Tests vs. TDD, which has much more information and study resources.



Comment by occamman, November 6, 2021:

It's great to have all of this together - it's sort of a checklist that should be referred to often during a project. Thanks for this.
