## Where is C++'s place in the world of embedded systems?

Started 6 years ago · 18 replies · latest reply 6 years ago · 660 views

I've worked on a number of embedded systems, some of which have had only C compilers, whereas others have had both C and C++ compilers. It's helped me understand the costs and benefits of several aspects of C++ features in the systems I worked on.

My question to everyone is:

Where does C++ fit into your projects?

Specifically,

• Does your microcontroller have a C++ compiler? If so, do you use it? If not, would you?
• Which of the following aspects of C++ would you use or not use:

• "Classic" OOP (classes/methods/inheritance/encapsulation)
• Dynamic memory allocation with "new" and "delete"
• References (the & in function argument types)
• OOP with runtime polymorphism (virtual functions)
• Runtime type information (RTTI)
• Exceptions (try-catch-throw)
• Templates
• The Standard Template Library
• Boost
• "Modern C++" features (C++11 and later: lambdas, move semantics, type inference, etc.)

#Cpp #FAQ

[ - ]

My one microcontroller?  What alien concept is that?  I'm a consultant -- I work on what my customer wants.

Snarky evasions aside, when I can I use ARM Cortex parts, and I often use C++ to program them.  When I don't it's because I'm writing an itty-bitty application for an itty-bitty memory space -- in that case, C++ has few advantages over C, and the disadvantages outweigh them.

The features I use, or not:

• Classic OOP -- yes
• Dynamic memory allocation -- yes and no.  I will, in general, use "new" as a "sorta semi-static" way of making objects that persist forever, in setup code that runs before the "real action" starts.  I use "delete" (and the implied specter of heap fragmentation) only in code that will be used by service or factory personnel, in an environment where such use will be followed by a power cycle.
• I use references quite a bit -- my current practice is to use a reference to a const object if I'm just "reading", and a pointer if I'm "writing", this as a clue to any follow-on programmers that an address is really being taken.
• Virtual functions -- yes.  "Oh my god! Thunking will eat up all your clock cycles" -- um, no.  At least not in my experience.  Virtual functions, used judiciously, make for cleaner code.
• Operator overloading -- not much, unless I'm making an arithmetic class like quaternions or matrices.
• RTTI -- no.  So far I've avoided it like the plague, due to concerns about resource usage and unexpected delays.
• Exceptions -- same as RTTI.
• Templates -- YES!   Used judiciously ("judicious" should be the watchword when doing embedded C++ work) they can be of great help in writing efficient, reusable code.
• STL -- If I get abducted by ISIS to be a slave-programmer, and I want to thoroughly sabotage their code without it being apparent until after I've been beheaded for mouthing off, I will use the STL as much as possible (this doesn't apply if the "embedded" app resides on a PC and gets recycled often).
• Boost -- see STL
• Modern C++ -- see "Ned Ludd": https://en.wikipedia.org/wiki/Ned_Ludd
[ - ]
In my experience, C++ makes an excellent programming language for real-time embedded (RTE) systems. However, as with all C-like languages, for safety-critical applications it is highly advisable to use a safer subset of the language. In the case of C++, a very reasonable and well thought-out subset is defined by MISRA-C++:2008.

One aspect of OOP that I see misunderstood in the embedded community is encapsulation. I constantly run into embedded developers who falsely believe that as long as they use class methods to access objects, the access is somehow "safe", even if the objects are shared among RTOS tasks or between main() and ISRs.

This is a false sense of security. The class-level encapsulation in C++ does NOT protect the internal data of an object at all. The class operations always run in the caller's thread, so when an object is accessed concurrently from different threads (or ISRs), the internal data is subject to the same race conditions as global data not encapsulated at all.

To achieve encapsulation for concurrency, one needs to either use mutual exclusion (e.g., mutexes inside class methods) or, better yet, the Active Object design pattern (see the article "Prefer Using Active Objects Instead of Naked Threads" by Herb Sutter, online at http://www.drdobbs.com/parallel/prefer-using-activ... )
[ - ]
thanks for the reply, you seem to have more or less the same opinion I do.
[ - ]

Hello everybody,

just my tiny contribution to the discussion, on a side topic actually.  Concerning the use of some areas of C++ (most notably exception handling and the standard C++ library), quite often the real constraints do not come from the language itself, or from the programmer's will and wishes, but from the toolchain.

When the application is multithreaded and is supported by an underlying real-time operating system (which happens more often than not, at least in my experience), the thread-safety of the standard C++ library depends on the thread model for which the C++ compiler has been built.  This affects some non-trivial aspects of exception handling, too, for reasons you can easily imagine.

To state it plainly, with the same words used in the official libstdc++ documentation, “the library strives to be thread-safe when all of the following conditions are met: […] The compiler in use reports a thread model other than ’single’” ...

... whereas, what I see is that toolchains used for embedded software development are often configured with exactly that 'single' thread model, and programmers then have to resort to all sorts of tricks to work around the issue.

Of course... this comment refers to GCC-based toolchains :)

[ - ]

The difference between C/C++ and just about every other language out there (with the exception of assembly, which is device specific) is that they are COMPILED languages.

Pretty much all the others are interpreted. In other words the code you write in, say, python, does not actually execute on the processor. It gets digested and interpreted by an interpreter which runs on the processor.

Until recently in the embedded world, code size and execution speed were of absolute importance. That's why some vendors' compilers are free until you want the speed and size optimisations switched on; then you need to pay the big \$.

For those reasons only assembly and C were the real choices, even C++ didn't get much traction in the embedded world. You just don't need the overheads of a code interpreter running on the hardware.

However, recently there has been a plethora of devices such as the Raspberry Pi, Edison, Galileo, Arduino etc. that have arrived on the "embedded" scene. They are a totally different kettle of fish. They have more RAM and faster processors than traditional embedded microprocessors or microcontrollers. They can actually run these interpreted languages at an acceptable speed. However, delve a bit deeper under the hood and you'll find that the operating systems on these are all written in C, C++ or even assembly.

Only compiled languages can run on the silicon, that is, languages which have been compiled and linked right down to machine code. So even your interpreted languages such as Python, Java and Perl are running by means of an interpreter which was most likely written in C.

There is no way to deprecate C/C++ without coming up with a new language which gets compiled and not interpreted.

Side Note: There is another compiled language, recently delivered to the world by Apple, called Swift, and it is open source now, and someday it might find a bit of traction. It has the benefit of allowing you to use script-like playgrounds to get things done quickly, but at the end of the day you're not gonna ship swift source code, you'll ship the compiled application.

[ - ]

You are leaving out quite a few languages, some of which are still in use -- COBOL is, amazingly, still perking along in quite a few financial applications (to the point where a youngster willing to learn it can step into a retiring baby-boomer's shoes, today); FORTRAN is still in use for some scientific applications, etc.

Specific to embedded, Ada is a viable and popular language.  I don't use it, because I do mostly commercial applications, but high-reliability folks doing programming for the military still swear by it.

(I don't know if C# is compiled or not -- if it is, add it to the list.)

[ - ]

Yes, I left out COBOL, FORTRAN, etc. as they don't have much headway in embedded.

C# is similar to Java in that it does get compiled -- but not to code that can run directly on a processor. It gets compiled to byte-code which runs in a virtual processor running on a real processor... Seems like a waste of resources to me.

[ - ]

Those are good points (and I would also point to other compiled languages like Rust and D), but I was really more curious about your opinion of the specific features of C++.

[ - ]

@jms_nh - Yes oops! My enthusiasm boiled over and I poured out stuff that was a bit off-topic!

As for my views on the specifics of C++, I think I'd sum it up like this:

It's horses for courses. Meaning that if a project really lends itself to features like sub-classing, OOP and dynamic memory allocation/release then I'll use C++, otherwise I'd use C.

C would be my starting point in embedded engineering because the rest comes with overheads. I wouldn't even use the classic malloc/free in C unless it was strictly necessary, as it is a wasteful way of allocating RAM.

[ - ]

I mostly use the PSoC4 & PSoC5 micros- no C++ for them. Even when the customer insists on a particular micro, most of the projects are just too small to justify the overhead of an RTOS or C++.

[ - ]

Could you explain what you mean by "overhead" of C++? (my experience is that many features of C++ that I want to use don't have any overhead)

[ - ]

Jason

You are right- I am probably talking through my hat. The overhead refers to the RTOS, but more importantly the overhead of learning how to use the tools.

I have worked with Java (not on an embedded system) and found it incredibly verbose. The overhead may not have translated into executable code, but coding it took a lot more effort.

[ - ]

Your point about Java is interesting. Java is definitely one of the most verbose languages to accomplish certain tasks -- from the programmer's view, anything useful has to be defined inside a class (well, until recently with Java 8 lambdas) and this adds extra code. There's a flip side, which is that the "redundancy" of such a verbose language, and the ease of parsing it (compared to C), make it very easy to automate or assist the process of coding. Did you use an IDE? Most of the Java IDEs (Eclipse, NetBeans, IntelliJ) have autocompletion and refactoring tools that are really easy to use and I really miss that with C.

With respect to C++ -- I'm still curious which features you would find useful/scary/expensive/easy. Just for two data points from my side -- I'll agree with Tim Wescott's point that exceptions appear to be expensive from a runtime resource standpoint (even though they're also very useful). Templates on the other hand are very inexpensive from a runtime standpoint. I've worked with TI's C28xx compiler and have looked at the assembly code emitted under optimization -- it does wonders. But templates are also really hard to learn how to use properly.

[ - ]

Jason

> Did you use an IDE? Most of the Java IDEs (Eclipse, NetBeans, IntelliJ) have autocompletion and refactoring tools that are really easy to use and I really miss that with C.

It was an Android development, so yes there was an IDE with some of the features that you mention and the inevitable Eclipse (shudder).

> With respect to C++ -- I'm still curious which features you would find useful/scary/expensive/easy.

I have never looked at C++ closely to answer this. I know it is OOP with all the good things like inheritance, polymorphism etc. I have never come close to believing I need any of those. Most of my projects are really small in scope (and size). And you can tell from my screen name, I have been around for a while and am now resistant to change just to keep up with the latest trends. In two years there will be something else...

[ - ]

By overhead I mean things like the extra ram required to track the things that are allocated using 'new' (or even "malloc" in c).

[ - ]

Here's a wee example (Using Xcode on Mac OS X 10.1.14):

The following C code generates an executable which is 18,464 bytes on disk.

```c
#include <stdio.h>

int main(int argc, const char * argv[])
{
    // insert code here...
    printf("Hello, World!\n");
    return 0;
}
```

The following C++ code generates an executable which is 27,040 bytes on disk.

```cpp
#include <iostream>

int main(int argc, const char * argv[])
{
    // insert code here...
    std::cout << "Hello, World!\n";
    return 0;
}
```

[ - ]

OK, you bring up another topic which is executable size. On embedded systems, that may have very little relation to how much program memory is required, especially if you have debug symbols enabled -- for example, a dsPIC motor control project I work on has an .elf file which is 1.4MB on disk, but the total program memory usage is only 27KB.

Another issue is the C/C++ runtime startup code. My guess is that the default runtime startup code in C++ probably is a little bit larger than the default runtime startup code in C. I say "default" because there are usually ways that you can customize the startup code, especially on an embedded system, and if there are C/C++ features you don't use (e.g. dynamic memory allocation) then they can usually be disabled somehow.

Just curious -- you might try compiling your C program with the C++ compiler; that should tell you whether the executable size increase comes from using C++ rather than C at all, or from using std::cout instead of printf.

[ - ]