EmbeddedRelated.com
Forums

Makefile or not?

Started by pozz December 3, 2018
What do you really use for embedded projects? Do you use a "standard"
makefile, or do you rely on IDE functionality?

Nowadays every MCU manufacturer provides an IDE, mostly for free, usually
based on Eclipse (Atmel Studio and Microchip are probably the most
important exceptions).
Anyway, most of them use arm gcc as the compiler.

I usually try to compile the same project for the embedded target and 
the development machine, so I can speed up development and debugging. I 
usually use the native IDE from the manufacturer of the target and 
Code::Blocks (with mingw) for compilation on the development machine.
So I have two IDEs for a single project.

I'm thinking of finally moving to Makefiles, however I don't know if it
is a good and modern choice. Do you use better alternatives?

My major reason to move from IDE compilation to Makefile is testing. I
would like to start adding unit tests to my project. I understand a good
solution is to link all the object files of the production code into a
static library. In this way it is very simple to replace production
code with testing (mocking) code, simply by listing the test object
files before the static library of production code during linking.

I think these types of things can be managed with a Makefile instead of
IDE compilation.

What do you think?
On 03/12/18 09:18, pozz wrote:
> What do you really use for embedded projects? Do you use a "standard"
> makefile, or do you rely on IDE functionality?
> [...]
> I'm thinking of finally moving to Makefiles, however I don't know if
> it is a good and modern choice. Do you use better alternatives?
I sometimes use the IDE project management to start with, or on very small projects. But for anything serious, I always use makefiles. I see it as important to separate the production build process from the development - I need to know that I can always pull up the source code for a project, do a "build", and get a bit-perfect binary image that is exactly the same as last time. This must work on different machines, preferably different OS's, and it must work over time. (My record is rebuilding a project that was a touch over 20 years old, and getting the same binary.)

This means that the makefile specifies exactly which build toolchain (compiler, linker, library, etc.) is used - and that does not change during a project's lifetime without very good reason.

The IDE and debugger, however, may change - there I will often use newer versions with more features than the original version. And sometimes I might use a lighter editor for a small change, rather than the full IDE. So the IDE version and the build tools version are independent.

With well-designed makefiles, you can have different targets for different purposes: "make bin" for making the embedded binary, "make pc" for making the PC version, "make tests" for running the test code on the PC, and so on.
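Pinning the exact toolchain in the makefile, as described above, might look
something like this (a sketch only; the install path and version are
illustrative, not a recommendation):

```make
# Pin the build to one specific toolchain release; the path points at an
# archived, unpacked gnu-arm-embedded release (path is illustrative).
TOOLCHAIN := /opt/gcc-arm-none-eabi-7-2017-q4/bin

CC      := $(TOOLCHAIN)/arm-none-eabi-gcc
LD      := $(TOOLCHAIN)/arm-none-eabi-gcc
OBJCOPY := $(TOOLCHAIN)/arm-none-eabi-objcopy
```

Because the paths are absolute and versioned, a rebuild years later uses the
same binaries, regardless of what is installed system-wide.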
> My major reason to move from IDE compilation to Makefile is testing. I
> would like to start adding unit tests to my project. I understand a
> good solution is to link all the object files of the production code
> into a static library. [...]
I would not bother with that. I would have the different variations of the build handled in different build tree directories.
> I think these types of things can be managed with a Makefile instead
> of IDE compilation.
>
> What do you think?
It can /all/ be managed from make.

Also, a well-composed makefile is more efficient than an IDE project manager, IME. When you use Eclipse to do a build, it goes through each file to calculate the dependencies - so that you re-compile all the files that might be affected by the last changes, but no more than that. But it does this dependency calculation anew each time. With make, you can arrange to generate dependency files using gcc, and these dependency files get updated only when needed. This can save significant time in a build when you have a lot of files.
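The gcc-generated dependency files mentioned above are commonly produced
with the -MMD/-MP flags. A minimal sketch (file names are illustrative):

```make
# Sketch: let gcc emit a .d dependency file as a side effect of each
# compile, then pull those files back into make on the next run.
SRCS := main.c drivers/uart.c
OBJS := $(SRCS:.c=.o)
DEPS := $(OBJS:.o=.d)

CFLAGS += -MMD -MP      # -MMD writes foo.d; -MP adds phony header targets

firmware.elf: $(OBJS)
	$(CC) $(LDFLAGS) -o $@ $^

%.o: %.c
	$(CC) $(CFLAGS) -c -o $@ $<

# Missing .d files (first build) are silently ignored.
-include $(DEPS)
```

The -MP flag adds an empty rule for each header, so deleting a header does
not break the build with a "no rule to make target" error.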
On 03/12/2018 11:06, David Brown wrote:
> On 03/12/18 09:18, pozz wrote:
>> [...]
>
> I sometimes use the IDE project management to start with, or on very
> small projects. But for anything serious, I always use makefiles.
> [...]
> This means that the makefile specifies exactly which build toolchain
> (compiler, linker, library, etc.) is used - and that does not change
> during a project's lifetime without very good reason.
> [...]
> With well-designed makefiles, you can have different targets for
> different purposes: "make bin" for making the embedded binary, "make
> pc" for making the PC version, "make tests" for running the test code
> on the PC, and so on.
Fortunately, modern IDEs separate the toolchain well from the IDE itself. Most manufacturers let us install the toolchain as a separate setup. I remember some years ago the scenario was different and the compiler was "included" in the IDE installation.

However, the problem here isn't the compiler (toolchain), which nowadays is usually arm-gcc. The big issue is with the libraries and includes that the manufacturer gives you to save some time in writing peripheral drivers. I have to install the full IDE and copy the interesting headers and libraries into my own folders.

Another small issue is the linker script file, which works like a charm in the IDE when you start a new project from the wizard. At least for me, it's very difficult to write a linker script from scratch. You need a deeper understanding of the C libraries (newlib, redlib, ...) to write a correct linker script. My solution is to start with the IDE wizard and copy the generated linker script into my make-based project.
>> My major reason to move from IDE compilation to Makefile is testing.
>> [...]
>
> I would not bother with that. I would have the different variations of
> the build handled in different build tree directories.
Could you explain?
>> I think these types of things can be managed with a Makefile instead
>> of IDE compilation.
>
> [...] With make, you can arrange to generate dependency files using
> gcc, and these dependency files get updated only when needed. This can
> save significant time in a build when you have a lot of files.
Yes, that's certainly true!
On 03/12/18 12:13, pozz wrote:
> On 03/12/2018 11:06, David Brown wrote:
>> [...]
>
> Fortunately, modern IDEs separate the toolchain well from the IDE
> itself. Most manufacturers let us install the toolchain as a separate
> setup. I remember some years ago the scenario was different and the
> compiler was "included" in the IDE installation.
You can do that to some extent, yes - you can choose which toolchain to use. But your build process is still tied to the IDE - your choice of directories, compiler flags, and so on is all handled by the IDE. So you still need the IDE to control the build, and different versions of the IDE, or different IDEs, do not necessarily handle everything in the same way.
> However, the problem here isn't the compiler (toolchain), which
> nowadays is usually arm-gcc. The big issue is with the libraries and
> includes that the manufacturer gives you to save some time in writing
> peripheral drivers. I have to install the full IDE and copy the
> interesting headers and libraries into my own folders.
That's fine. Copy the headers, libraries, SDK files, whatever, into your project folder. Then push everything to your version control system. Make the source code independent of the SDK, the IDE, and other files - you have your toolchain (and you archive the zip/tarball of the gnu-arm-embedded release) and your project folder, and that is all you need for the build.
> Another small issue is the linker script file, which works like a
> charm in the IDE when you start a new project from the wizard. At
> least for me, it's very difficult to write a linker script from
> scratch. You need a deeper understanding of the C libraries (newlib,
> redlib, ...) to write a correct linker script. My solution is to start
> with the IDE wizard and copy the generated linker script into my
> make-based project.
Again, that's fine. IDEs and their wizards are great for getting started. They are just not great for long-term stability of the tools.
>>> My major reason to move from IDE compilation to Makefile is testing.
>>> [...]
>>
>> I would not bother with that. I would have the different variations
>> of the build handled in different build tree directories.
>
> Could you explain?
You have a tree something like this:

Source tree:

    project / src / main
                    drivers

Build trees:

    project / build / target
                      debug
                      pctest

Each build tree might have subtrees:

    project / build / target / obj  / main
                                      drivers
    project / build / target / deps / main
                                      drivers
    project / build / target / lst  / main
                                      drivers

And so on. Your build trees are independent. So there is no mix of object files built in the "target" directory for your final target board, the "debug" directory for the version with debugging code enabled, the "pctest" directory for the code running on the PC, or whatever other builds you have for your project.
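One way to drive separate build trees like these from a single makefile is
to parameterise the object directory on a build-flavour variable. A sketch
(variable and flag names are illustrative, not David's actual setup):

```make
# Sketch: "make BUILD=target", "make BUILD=debug", "make BUILD=pctest"
# each compile into their own independent object tree.
BUILD  ?= target
OBJDIR := build/$(BUILD)/obj

SRCS := $(wildcard src/*.c src/drivers/*.c)
OBJS := $(patsubst src/%.c,$(OBJDIR)/%.o,$(SRCS))

# Per-flavour flags (illustrative).
CFLAGS_target := -Os
CFLAGS_debug  := -Og -g -DDEBUG
CFLAGS_pctest := -O0 -g -DUNIT_TEST

$(OBJDIR)/%.o: src/%.c
	@mkdir -p $(dir $@)
	$(CC) $(CFLAGS_$(BUILD)) -c -o $@ $<
```

Because each flavour writes under its own build/$(BUILD) directory, switching
between builds never mixes or clobbers object files.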
>>> I think these types of things can be managed with a Makefile instead
>>> of IDE compilation.
>>
>> [...] With make, you can arrange to generate dependency files using
>> gcc, and these dependency files get updated only when needed.
>
> Yes, that's certainly true!
Of course, if build times are important, you drop Windows and use Linux, and get a two- to four-fold increase in build speed on similar hardware. And then you discover ccache on Linux and get another leap in speed.
On 2018-12-03, pozz <pozzugno@gmail.com> wrote:

> What do you really use for embedded projects? Do you use a "standard"
> makefile, or do you rely on IDE functionality?
Gnu makefiles.
> Nowadays every MCU manufacturer provides an IDE, mostly for free,
> usually based on Eclipse (Atmel Studio and Microchip are probably the
> most important exceptions).
And they're almost all timewasting piles of...
> Anyway most of them use arm gcc as the compiler.
If you're going to use an IDE, it seems like you should pick one and stick with it so that you get _good_ at it. I use Emacs, makefiles, and meld.
> I usually try to compile the same project for the embedded target and
> the development machine, so I can speed up development and debugging.
> [...] So I have two IDEs for a single project.
How awful.
> I'm thinking of finally moving to Makefiles, however I don't know if
> it is a good and modern choice. Do you use better alternatives?
> My major reason to move from IDE compilation to Makefile is testing.
> [...]
> I think these types of things can be managed with a Makefile instead
> of IDE compilation.
>
> What do you think?
I've tried IDEs. I've worked with others who use IDEs and watched them work, and compared it to how I work. It looks to me like IDEs are a tremendous waste of time.

--
Grant Edwards
On 2018-12-03, David Brown <david.brown@hesbynett.no> wrote:

> I sometimes use the IDE project management to start with, or on very
> small projects. But for anything serious, I always use makefiles. I
> see it as important to separate the production build process from the
> development - I need to know that I can always pull up the source code
> for a project, do a "build", and get a bit-perfect binary image that
> is exactly the same as last time.
It is impossible to overemphasize how important that is. Somebody should be able to check out the source tree and a few tools and then type a single command to build production firmware. And you need to be able to _automate_ that process. If building depends on an IDE, then there's always an intermediate step where a person has to sit in front of a PC for a week tweaking project settings to get the damn thing to build on _this_ computer rather than on _that_ computer.
> This must work on different machines,
And in my experience, IDEs do not. The people I know who use Eclipse with some custom set of plugins spend days and days when they need to build on computer B instead of computer A. I just scp "build.sh" to the new machine and run it. It contains a handful of Subversion checkout commands and a "make". And I can do it remotely. From my phone if needed.
> preferably different OS's, and it must work over time.
Yes! Simply upgrading the OS often seems to render an IDE incapable of building a project: another week of engineering time goes down the drain tweaking the "project settings" to get things "just right".

--
Grant Edwards
Grant Edwards <invalid@invalid.invalid> wrote:
> It is impossible to overemphasize how important that is. Somebody
> should be able to check out the source tree and a few tools and then
> type a single command to build production firmware. And you need to be
> able to _automate_ that process.
One approach is to put the tools into a VM or a container (eg Docker), so that when you want to build, you pull the container and get an identical build environment to the last time anyone built it. Also, your continuous integration system can then run builds and tests in the same environment as you're developing on.

Unfortunately, vendors have a habit of shipping IDEs for Windows only, which makes this harder. It's not so much of a problem for the actual compiler - especially if that's GCC under the hood - as for ancillary tools (eg configuration tools for peripherals, flash image builders, etc), which are sometimes not designed to be scripted. (AutoIt is my worst enemy here, but it has been the only way to get the job done in some cases.)

Decoupling your build from the vagaries of the IDE, even if you can trust that you'll always build on a fixed platform, is still a good thing - many IDEs still don't play nicely with version control, for example.

Theo
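A containerised build environment along the lines described above might be
sketched like this (base image, package selection, and image name are all
illustrative assumptions, not a specific recommendation):

```dockerfile
# Sketch of a pinned build container. Build once with
#   docker build -t fw-build .
# then build the firmware reproducibly with
#   docker run --rm -v "$PWD":/work fw-build
FROM debian:stretch-slim

# Install only the build tools; pinning the base image tag (and, ideally,
# exact package versions) keeps the environment stable over time.
RUN apt-get update && \
    apt-get install -y --no-install-recommends make gcc-arm-none-eabi && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /work
CMD ["make", "bin"]
```

Checking the Dockerfile into version control alongside the source means the
build environment is versioned together with the code it builds.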
On 12/3/18 3:18 AM, pozz wrote:
> What do you really use for embedded projects? Do you use a "standard"
> makefile, or do you rely on IDE functionality?
> [...]
> My major reason to move from IDE compilation to Makefile is testing. I
> would like to start adding unit tests to my project.
> [...]
> What do you think?
We use cmake for that--it allows unit testing on a PC, as you say, and also automates the process of finding libraries, e.g. for emulating peripherals.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510
http://electrooptical.net
http://hobbs-eo.com
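A CMake setup for host-side unit testing along these lines might look
something like this (a sketch only; project, target, and file names are
hypothetical, not Phil's actual configuration):

```cmake
# Sketch: build the production code as a library, link host-side unit
# tests against it, and register them with CTest.
cmake_minimum_required(VERSION 3.10)
project(firmware C)

# Production sources compiled once into a static library.
add_library(prod STATIC src/uart.c src/timer.c)

enable_testing()

# Test executable: mock sources are compiled directly into the test
# binary, so their symbols take precedence over the library members.
add_executable(unit_tests tests/test_main.c tests/mock_uart.c)
target_link_libraries(unit_tests PRIVATE prod)
add_test(NAME unit COMMAND unit_tests)
```

With this layout, `cmake .. && make && ctest` on the development machine
builds and runs the tests, while the embedded build can reuse the same
source lists with a cross-compilation toolchain file.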
On Monday, December 3, 2018 at 10:49:36 AM UTC-5, Theo Markettos wrote:
> One approach is to put the tools into a VM or a container (eg Docker),
> so that when you want to build, you pull the container and get an
> identical build environment to the last time anyone built it. Also,
> your continuous integration system can then run builds and tests in
> the same environment as you're developing on.
Second that! We do development in, and deliver, VMs to customers now, so they are CERTAIN to receive exactly the 'used for production build' versions of every tool, library, and driver required for the JTAG gizmo, referenced component, etc, etc, etc. Especially important when some tools won't work under the latest version of Winbloze! Saves enormous headaches sometime down the road when an update must be made...

Hope that helps,
Best Regards, Dave
Grant Edwards <invalid@invalid.invalid> writes:
> I use Emacs, makefiles, and meld.
+1 on those. My memory isn't good enough any more to remember all the byzantine steps through an IDE to re-complete all the tasks my projects require. Especially since each MCU seems to have a *different* IDE with *different* procedures to forget... And that's assuming they run on Linux in the first place ;-)
