
Compilers for Embedded Systems

Started by stephaneb • 6 years ago • 7 replies • latest reply 6 years ago • 7900 views

A few weeks ago, I asked for new #FAQ topic suggestions.  @Keeds suggested a discussion on compilers, and his entry received many thumbs-ups, so let's do it!

In @Keeds' words:

  • Open Source (GCC and the like) vs. Paid (IAR or similar)
  • What are some pros/cons of using one compiler over another?
  • Does anybody have a favorite compiler that they try to integrate into everything?

Thanks for sharing your insights on Compilers for Embedded Systems with the Embedded Systems community.

Reply by jorick • March 23, 2018

A few years back we transitioned from 8051 assembler to 32-bit ARM processors.  Since I had previous experience with IAR, I suggested that my company choose them, which we did.  Later, we were running out of flash space (252 KB used out of 256 KB, with more features still to be added), so my boss had me compare various compilers to see if any would give us a smaller code footprint.  I downloaded both open source and paid demos from various providers and ran each compiler on some of our modules to see how they fared.

A partial list of the compilers I tested (I don't remember them all):

  • GCC
  • ARM
  • Keil
  • IAR

Every one I tried except IAR generated larger code, even at the highest optimization level, to the point where the project would have exceeded the flash space by at least 10 KB.  We stayed with IAR despite the fact that their toolchain is one of the most expensive out there.  So, looking at code size alone, IAR was the best.

And the project that was running out of space?  Every time Marketing came to us with a new feature they wanted to add, I would tell them, "We can put it in, but what do you want us to remove?"  They stopped coming to us with features soon after that.
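
If anyone wants to squeeze a GCC-class build before paying IAR prices, the usual first lever is letting the linker discard unreferenced code and data.  A minimal sketch using standard GCC flags (the arm-none-eabi-gcc invocation is only an example; adapt it to your target):

    /* size_demo.c: a sketch of letting the linker drop dead code.
     *
     * Build (standard GCC options, arm-none-eabi-gcc shown as an example):
     *   arm-none-eabi-gcc -Os -ffunction-sections -fdata-sections \
     *       -Wl,--gc-sections size_demo.c -o size_demo.elf
     *
     *   -Os                 optimize for size
     *   -ffunction-sections give every function its own section
     *   -fdata-sections     same for data objects
     *   -Wl,--gc-sections   tell the linker to drop unreferenced sections
     */
    #include <stdint.h>

    /* Never referenced: with the flags above the linker removes it,
     * so it costs no flash at all. */
    uint32_t unused_helper(uint32_t x) { return x * 17u; }

    uint32_t checksum(const uint8_t *p, uint32_t n)
    {
        uint32_t sum = 0;
        while (n--)
            sum += *p++;
        return sum;
    }

    int main(void)
    {
        static const uint8_t data[4] = { 1, 2, 3, 4 };
        return (int)checksum(data, 4);
    }

Comparing the output of arm-none-eabi-size with and without those flags shows the difference directly.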

Reply by Laszlo • March 23, 2018

Hi,

I would recommend that you try the LLVM compiler (with its Clang C/C++ front end), a modern open source alternative to GCC. Intel and Apple, among many others, use it. Since it is open source and permissively licensed, you can even build your own compiler for a custom CPU core and keep it proprietary.

For code size optimization, it also supports link-time optimization (LTO).
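
If you want to see what LTO buys you, here is a minimal two-file sketch (the -flto flag is standard in both Clang and GCC):

    /* helper.c */
    int scale(int x) { return x * 3; }

    /* main.c
     *
     * Build with link-time optimization in one step:
     *   clang -Os -flto helper.c main.c -o app
     *
     * With -flto the optimizer sees across translation units, so scale()
     * can be inlined into main() and its standalone copy discarded.
     * Without -flto, the call and the separate function body both remain.
     */
    int scale(int x);   /* would normally live in a header */

    int main(void)
    {
        return scale(14);
    }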

All the best,

Laszlo

Reply by Solderdot • March 23, 2018

So what features are there to distinguish compilers?

You can check optimization capabilities: speed or space. But is that really required? Many microcontrollers nowadays come with megabytes of memory and clock frequencies in the hundreds of megahertz; they normally provide sufficient memory and computing power even for unoptimized code.

Further, if you really need to save memory, check what that actually means. You can write your source code so that the footprint is minimal, but that normally results in code which is hard to read, so maintenance costs go up. You might instead consider adding more memory to your system.
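
As a contrived sketch of that trade-off (not from any real project): packing flags into one byte saves RAM but makes every access harder to read:

    /* Readable version: one byte per flag, self-documenting. */
    struct device_state_plain {
        unsigned char powered;
        unsigned char connected;
        unsigned char error;
        unsigned char busy;
    };                                  /* typically 4 bytes */

    /* Packed version: all four flags in a single byte.  Smaller,
     * but now every read and write needs masking and shifting. */
    #define FLAG_POWERED   (1u << 0)
    #define FLAG_CONNECTED (1u << 1)
    #define FLAG_ERROR     (1u << 2)
    #define FLAG_BUSY      (1u << 3)

    struct device_state_packed {
        unsigned char flags;            /* 1 byte */
    };

    /* s.busy = 1;  becomes:  s.flags |= FLAG_BUSY; */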

The language is not really something to vary. You should go for C/C++ and not deviate from that, in case you need to involve other developers later on; in the embedded world, C/C++ skills are the easiest to find. In any case, nowadays it may be hard to find a compiler for a different language for your microcontroller (or DSP).

In my opinion it is of utmost importance that the compiler can be integrated seamlessly into a development environment. You want to be able to write, compile, test, and debug your code in a fast and efficient way. This environment may be an IDE, but it may also be a custom-engineered solution.

For smaller projects an IDE is surely a good thing. When it comes to huge projects involving hundreds of thousands of lines and 2000+ developers, such an IDE might not scale well enough. Indeed, working in such an environment myself, my experience is that the development environment has a far higher impact on productivity than the compiler.

And when it comes to debugging... sometimes you need to go down to assembly level. If the compiler and debugger integrate poorly (i.e. the compiler generates code in vanilla flavor while the debugger prefers chocolate flavor), debugging becomes really annoying. I faced such a scenario a few years ago in an IA-based project where Intel compilers were used in conjunction with GDB.

So focusing just on the compiler is seldom a good idea. You need to look at the whole toolchain, and of course figure out what you really need. The best-in-class optimizing compiler won't do you any good if you have poor debug support, and if you run the code on a high-end microcontroller you may not even notice its benefits.

Reply by mr_bandit • March 23, 2018

My default is gcc, because it is available for just about every MPU out there. I have yet to see a bug, although at high optimization levels it will remove code that you would expect to be left in. That is due more to the way the standard is written than to a bug, i.e. it's a "feature". Various folks have written about it, Jack Ganssle included, if my poor memory is correct.
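
A classic instance of that "feature" (a sketch; the exact behaviour depends on compiler version and flags): signed overflow is undefined behaviour in C, so at -O2 gcc may assume it cannot happen and silently delete a guard that relies on it:

    #include <limits.h>

    /* Intended as an overflow guard, but signed overflow is undefined
     * behaviour, so the optimizer may assume x + 1 never wraps and fold
     * this test to "always false", removing the branch entirely. */
    int will_overflow(int x)
    {
        return x + 1 < x;
    }

    /* The well-defined way to write the same check: */
    int will_overflow_safe(int x)
    {
        return x == INT_MAX;
    }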

I can tell you one to avoid: the CCS C compiler for the PIC24. I lost 60 (documented) hours of my life to that POS. Basically, they took their C compiler for the PIC16 and scaled it up. The PIC16 version wasn't bad; I found one minor bug. But the PIC16 has a weird architecture, whereas the PIC24 is much more conventional. They made the cardinal sin of scaling up a compiler built around a weird architecture onto a conventional one && they failed.

You also need to consider the chip family. That is where gcc really shines: it can easily support different architectures because it is relatively easy (so I am told) to create a code-generator back end. Every MPU I have ever wanted to use already had a gcc compiler.

The other thing about gcc is that there are options for *everything embedded*. It gives you *complete* control over memory mapping (what lives where). Just type "gcc --help".
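
For example, placing a single object takes one standard gcc attribute plus a matching entry in the linker script (the .dma_buffers section name and the SRAM2 region here are made up for illustration):

    #include <stdint.h>

    /* Standard gcc section attribute: put this buffer in a named
     * section; the linker script decides which memory region that
     * section actually lands in. */
    __attribute__((section(".dma_buffers")))
    uint8_t rx_buffer[512];

    /* Matching (hypothetical) linker-script fragment:
     *
     *   .dma_buffers (NOLOAD) :
     *   {
     *       *(.dma_buffers)
     *   } > SRAM2
     */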

Also, if memory serves, IAR uses (or used to use) gcc.

The real difference you allude to is the difference between an IDE and the old-school edit, compile, load, run cycle. Personally, I am not a fan of IDEs. I hate the editors, and you cannot automate the build (very critical in a production environment). And I use a serial port && printf() to debug most things.
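
Getting printf() onto a serial port usually takes a single retarget hook. With newlib (the libc that typically ships with arm-none-eabi-gcc), stdout is routed through _write(), so a sketch looks like this (uart_putc() stands in for whatever your UART driver provides):

    /* Retargeting printf() to a UART under newlib: the library sends
     * all stdout/stderr output through _write(), so supplying this one
     * function is usually enough. */
    extern void uart_putc(char c);      /* hypothetical UART driver call */

    int _write(int fd, char *buf, int len)
    {
        (void)fd;                       /* same sink for stdout and stderr */
        for (int i = 0; i < len; i++) {
            if (buf[i] == '\n')
                uart_putc('\r');        /* most terminals want CR before LF */
            uart_putc(buf[i]);
        }
        return len;
    }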

In certain ways, the choice of MPU is much more critical than the compiler. I helped design a system with 5 ARMs, two of which ran at around 5% utilization. The key was that we used the IAR tools for all of the ARMs; that way, we had the *same* dev environment for all of the chips. This was a systems-level decision.

Quote:

  • Does anybody have a favorite compiler that they try to integrate into everything?

One does not normally try to integrate a compiler into "everything". One uses a compiler as a step in turning source code into MPU object code && linking it into an executable. I gently suggest you think carefully about what you really meant to say.

Reply by mjbcswitzerland • March 23, 2018

I have developed the uTasker project for a number of years and test with the following compilers to ensure that it can be built with whichever one the user prefers:
www.utasker.com/kinetis/compilers.html

Initially I developed its TCP/IP stack with GCC and assembler-level debugging, in an IDE I developed myself, to get an idea of what was involved (although I also simulated most operation before doing HW testing and debugging). There is an overview at http://www.utasker.com/history.html

I wrote an assembler/disassembler and tried to write a C compiler. The assembler/disassembler was pretty easy; writing a compiler was not, and I stopped because it was clear that it would take years to get anything close to GCC's state of development, and many more to approach something like IAR's optimisation (probably many, many lifetimes...). In the meantime I have also developed a multi-tasking Basic interpreter/web-based debugger and other control logic programs, so I do have a certain amount of experience, although I have to admit that these types of development have been the most complicated/difficult ones of my 30-year (essentially) embedded programming career.

Nowadays, apart from some optimisation advantages and the simpler/harder debugging of optimised code, the mainstream compilers all do a good job, without huge differences (< 5% or so) in performance. Sometimes there is a bug (I had one with GCC quite recently, had loads in Freescale's CodeWarrior versions for Coldfires when it was still based on Metrowerks, and some with IAR too), but they are rare now.

At the end of the day (unless you are on the edge of memory limits or need very critical timing optimisation) they all do the job, and the compilation step of a development is not that critical.
However, how the IDE works with the compiler (assuming one needs to debug), and how easily/reliably it can be set up for the processor and used to debug the code on the target with whatever debugger is involved, is where the difference is found.

Although I find IAR generally slow, and it crashes too often, it does give a virtually bulletproof debug experience. Its generated code tends to be better than GCC's (and much easier to debug at the highest optimisation). I also use various Eclipse-based tools, which tend to use GCC/GDB, and they do work. They are, however, still clumsy, and neither their reliability nor their debugger support compares to IAR's.

Very often I use GCC from a makefile, since it is flexible and the generated code is fine.

However, Eclipse-based GDB debugging gives me the creeps, so I still prefer to debug in IAR, even if just at assembler level, since it often proves more efficient (and less nerve-wracking than waiting to see whether the GDB server will hang this time around).

Therefore the compiler is needed, but which one is used shouldn't be critical. The overall package (especially the debugger) is what makes the real difference in development efficiency.
Professional programmers should use professional tools (nothing wrong with the GCC compiler here), but it is the IDE/environment that makes the bigger difference. IAR and co. are indeed expensive (overpriced, in my opinion), but they can give professionals an advantage. If professional embedded programmers can't afford the best tools, they should ask themselves why not. Maybe it's because they haven't invested in their profession enough to make their work efficient and profitable enough to afford them?


Reply by jorick • March 23, 2018

Your comment about IAR being generally slow and crashing often really hits home.  We're currently frozen at EWARM (Embedded Workbench for ARM) version 7.80, because 8.xx is so slow that Windows often pops up a box saying the application failed to respond.  And that's if we can keep it running at all, since it's constantly crashing, not just in one particular place but all the time.  I have yet to complete a single build with it.

Version 7.80 isn't the fastest IDE on the block but it gets the job done.  And it's stable (I haven't seen a single crash).  So until IAR gets its act together, 7.80 is where we'll stay.

Reply by Bob11 • March 23, 2018

In the olden days (back when I was a young'un) there were far more paid compilers than free ones, for a couple of reasons: the chips were less complicated, and the languages were also less complicated and more numerous. A small company (and most were) could easily produce and sell a compiler and associated toolchain with a few man-years of effort and make good money at it, while most of the free tools were little more than hobby projects and produced spotty code. It isn't that hard to write a C compiler, and C had many more competitors then: Pascal, Forth, etc.

In the years since, the 'hobby tools' have become quite good for the microcontroller architectures that remain, and many of the 'simple' languages other than C have fallen by the wayside. SDCC, for example, is a quite capable C compiler for the 8051. Another change is that many microcontroller vendors now offer their own C/C++ compilers for their architectures based on the gcc toolchain, so the quality and factory support of the open source toolchains is better than it once was. Many of the paid vendors that remain now market themselves more on their IDE, debug chain, and 24/7 support, and less on just supplying a compiler. Indeed, a few of these vendors now use a modified gcc as their base compiler.

The change that has really pushed gcc to the forefront is the increasing complexity of the entire chain. Many of the 'micros' are now 32-bit parts with megabytes of memory, expected to communicate on the 'IoT'. The C++ standard is a complicated beast, far beyond the capability of most small companies to implement on their own; add in the libstdc++ library, the STL, the network stack, etc., and it becomes cost-prohibitive given the size of the market.

I am quite comfortable with command-line tools, and as most of my projects are now written in C/C++, developed on Linux, and run on Linux a gcc cross-compiler is a natural fit for me. OTOH, if I was running a project with developers who preferred an IDE and I needed someone to yell at when the compiler broke I would certainly look at paid solutions. No doubt there may be a few adventurous souls out there planning to use an open source Erlang compiler for their next embedded design, but for most of us it's gcc C/C++ vs paid C/C++, and that's like building your next desktop from parts bought at Newegg vs. walking in to the Apple store and walking out with an iMac. The latter is easier but you get what they give you; the former takes more work but you get to do it your way. Nothing wrong with either solution provided it works for you and your project.