Modern C++ in Embedded Development: (Don't Fear) The ++

While C is still the language of choice for embedded development, accounting for up to 65% of embedded projects, the adoption of C++ has grown steadily. With an estimated usage of 20–30% in the embedded field, C++ offers classes, improved type safety, and quality-of-life features such as range-based for loops, the auto keyword, and lambdas.

Despite the features that C++ offers, many companies and individual embedded developers still stick with the old ways of C programming, and the explanation is simple. A vast amount of embedded development work involves safety-critical applications: automotive, medical, household appliances controlling heating elements, and many more. In the eyes of many embedded developers, the C++ Standard Library is notorious for dynamic memory allocation, and the primary restriction for safety-critical applications and resource-constrained systems is: NO dynamic memory allocation.

Besides dynamic memory allocation, there are other concerns about using C++ in embedded projects that need to be addressed.


Another common argument against C++ is that it adds mechanisms that bloat the code, which matters particularly on memory-limited systems where every kilobyte counts. This is true for the exception mechanism and for RTTI (Run-Time Type Information), where the compiler adds extra information to the binary to make these mechanisms work.

Historically speaking, C++ started as "C with classes". The first C++ compiler, Cfront, converted C++ to C, but that was a long time ago. Over time, C and C++ evolved separately and are now defined by separate language standards. This has led to cases where code that is valid in both C and C++ produces different results depending on which compiler is used. These instances are rare and quite specific, and they are well documented.

C++ and dynamic memory allocation

Textbook examples of C++ code that use std::string and std::ostream demonstrate why most embedded developers do not even consider C++ for their work. Both std::string and std::ostream (std::cout) use dynamic memory allocation under the hood. What happens when ostream's internal buffer is full? It likely allocates more memory to handle this case. When you concatenate strings, std::string dynamically allocates memory to accommodate the resulting string. You can read more about strings in C++ in Niall Cooling's article.

Niall discusses the memory implications of std::string and also explains std::string_view and std::pmr (C++17). A polymorphic memory resource (PMR) lets you specify a chunk of memory, for example on the stack, for a string to use. If a request exceeds the buffer size, the program will terminate.

The C++ Standard Library offers a set of container class templates and algorithms that operate on those containers. Most of these containers use dynamic memory allocation. However, there is an exception: std::array, a container that encapsulates fixed-size arrays. It provides several advantages over C-style arrays:

  • Bounds checking: the .at() member function performs bounds checking, which prevents bad indexing, an error that happens even to experienced embedded developers. 
  • Size information: std::array knows its own size, i.e., it carries its size information with it. You can use .size() member function to get the number of elements in the array. 
  • Compatibility with Standard library algorithms such as std::find, std::find_if, std::find_if_not, std::copy, std::copy_if, and others. 

#define N 20

int buffer[N] = {0};

for (int i = 0; i < N; i++) {
    printf("%d ", buffer[i]);
}

The above snippet of C code can be translated into the following C++ code: 

std::array<int, 20> buffer = {0};

for (auto& element : buffer) {
    printf("%d ", element);
}

The above code example uses a range-based for loop, which makes the code easier to read. Overall, std::array provides a more modern, safer, and more expressive alternative to C-style arrays while maintaining their performance and memory characteristics.

Some parts of the Standard Library, such as std::function, may use dynamic memory allocation in specific scenarios, and some algorithms, such as std::stable_sort, may allocate memory. This makes them risky to use in embedded applications, and it makes any part of the Standard Library difficult to use if we are not sure whether it allocates. Luckily, we can delete the global new and delete operators, which will break the linking process if a part of the Standard Library calls them.

void *operator new(std::size_t) = delete;
void *operator new(std::size_t, const std::nothrow_t &) noexcept = delete;
void *operator new[](std::size_t) = delete;
void *operator new[](std::size_t, const std::nothrow_t &) noexcept = delete;

void operator delete(void *) noexcept = delete;
void operator delete(void *, const std::nothrow_t &) noexcept = delete;
void operator delete[](void *) noexcept = delete;
void operator delete[](void *, const std::nothrow_t &) noexcept = delete;

On some platforms, the new and delete operators are implemented as simple wrappers around malloc and free, which allows us to use the --wrap option supported by some linkers (GNU ld). Passing --wrap=malloc redirects all calls to malloc to __wrap_malloc, which we can define ourselves; the original implementation remains reachable as __real_malloc. If we don't define __wrap_malloc and something calls malloc, linking will fail.

Zero-overhead principle

The zero-overhead principle is a guiding principle for the design of C++. It states that: What you don’t use, you don’t pay for (in time or space) and further: What you do use, you couldn’t hand code any better.

In other words, no feature should be added to C++ which would make any existing code (not using the new feature) larger or slower, nor should any feature be added for which the compiler would generate code that is not as good as a programmer would create without using the feature.

The only two features in the language that do not follow the zero-overhead principle are runtime type identification (RTTI) and exceptions. That’s why most compilers include a switch to turn them off.

C xor C++ Programming

As C++ was inspired by C, it supports much of C’s syntax and semantics. However, it’s not an exact superset of C, which means that not all valid C code is valid C++ code, and sometimes code that can be compiled by both C and C++ compilers will give you different results. These differences are rare and quite specific, but they are important to be aware of, especially for those transitioning from C to C++.

The instances where the same source code produces different results when compiled as C or as C++ are summed up in the document "C xor C++ Programming" by Aaron Ballman, a member of both the WG14 (C) and WG21 (C++) standardization groups.


The concerns of embedded developers working on safety-critical applications are justified, but when properly addressed, C++ adds a lot of value to an embedded codebase.

C++ offers better type safety than C. It has quality of life features such as auto, range-based for loops, structured bindings, std::array, lambdas, and constexpr. I made a general overview of useful C++ features for embedded development in my first blog in the series Modern C++ in embedded development.

The constexpr specifier is probably one of the best C++ features. It makes compile-time evaluation of a function possible, which means the compiler does the work for you, reducing both code size and execution time. Constexpr functions also prohibit undefined behavior during constant evaluation, which makes your code safer. Example uses include generating signals or lookup tables at compile time. You can read more about it here.

Using strong types and user-defined literals makes the code even more type safe, which prevents common firmware bugs. I wrote about it here with real-life examples from a BLE stack. And finally, C++ plays really well with C, which makes integration with C libraries, vendor-provided SDKs, and communication stacks effortless.

Comment by nemik, June 28, 2023

Thank you for this great article, Amar. I'm very glad to see more embedded engineers turn to C++ and seeing the benefits it has over plain C. Simple things like hash maps are production-ready, tested parts of the libraries and they're otherwise cumbersome to implement in C.

C++ also has a useful "-fno-exceptions" flag you can use during compilation to turn off exceptions entirely which provides better memory and timing guarantees we need in embedded code running on MCUs.

I've recently been working with the CSA Matter protocol https://github.com/project-chip/connectedhomeip/tree/master and it's a C++ project that targets hardware from small MCUs up to desktop systems. It's a very good example of well-organized C++ designed to run on embedded systems. I learned the "-fno-exceptions" and other C++ optimizations for embedded from that codebase.

Comment by mahmutbeg, June 29, 2023

Thanks! Yep, I briefly mentioned that exceptions can be disabled without getting into specifics on how to do it with different compilers. 

I didn't have a chance to work on Matter-based products yet, but I'm looking forward to seeing the codebase, thanks for the reminder! 
