Reply by StateMachineCOM December 21, 2021
Thanks a lot for the suggestions. I'll study them carefully.

I'm already using a homegrown Makefile "template", such as this one:

https://github.com/QuantumLeaps/qpc/blob/master/examples/arm-cm/blinky_ek-tm4c123gxl/qk/gnu/Makefile

The Makefile supports multiple build configurations (Debug, Release, and "Spy" with software tracing), generation of dependencies, etc. It is pretty straightforward, with all source files, directories, and libraries configurable. The Makefile uses VPATH to simplify the search for the sources. This really simplifies things, but it requires unique file names for the sources.
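In outline, the scheme boils down to something like this (a heavily condensed, hypothetical sketch - the real Makefile linked above does considerably more; Q_SPY is the software-tracing switch used by QP):

VPATH := src src/drivers            # directories searched for sources
CONF  ?= dbg                        # build configuration: dbg | rel | spy

ifeq ($(CONF),rel)
  CFLAGS := -O2 -DNDEBUG
else ifeq ($(CONF),spy)
  CFLAGS := -g -O0 -DQ_SPY
else
  CFLAGS := -g -O0
endif

SRCS := blinky.c bsp.c              # bare names only - VPATH finds them
OBJS := $(SRCS:%.c=$(CONF)/%.o)

$(CONF)/blinky.elf: $(OBJS)
	$(CC) $(LDFLAGS) -o $@ $^

$(CONF)/%.o: %.c | $(CONF)
	$(CC) $(CFLAGS) -MMD -c -o $@ $<

$(CONF):
	mkdir -p $@

-include $(OBJS:.o=.d)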

I'm not sure if this Makefile looks "professional" enough to experts. Any constructive critique and suggestions for improvement will be welcome.

Miro
Reply by StateMachineCOM December 20, 2021
Would anyone point me to a good Makefile template for building a simple embedded project with GNU-ARM?
Reply by chris December 17, 2021
On 12/02/21 11:46, pozz wrote:
> When I download C source code (for example for Linux), most of the time
> I need to use make (or autoconf).
>
> In the embedded world (not embedded Linux), we use MCUs produced by a
> silicon vendor that gives you at least a ready-to-use IDE (Eclipse-based
> or Visual Studio-based or proprietary). Recently it gives you a full set
> of libraries, middleware, and tools to create a complex project from
> scratch in a couple of minutes that is compatible and buildable with its
> IDE.
>
> Ok, it's a good thing to start with minimal effort and make some tests
> on an EVB and new chips. However, I'm wondering if good-quality
> commercial/industrial-grade software is maintained under the IDE of the
> silicon vendor, or whether it is maintained with a Makefile (or similar).
>
> I'm asking this because I just started to add some unit tests (to run
> on the host machine) to one of my projects that is built under the IDE.
> Without a Makefile it is very difficult to add a series of tests: do I
> create a different IDE project for each module test?
>
> Moreover, the build process of a project maintained under an IDE is
> manual (click on a button). Most of the time there isn't the possibility
> to build from a command line, and when it is possible, it isn't the
> "normal" way.
>
> Many times in the past I tried to write a Makefile for my projects, but
> sincerely, for me the make tool is very cryptic (tabs instead of
> spaces?). Dependencies are a mess.
>
> Do you use an IDE or a Makefile? Is there a recent and much better
> alternative to make (such as cmake or SCons)?
I have a standard Makefile template that gets edited for each new project or part thereof. IDE systems may have their attractions, but I usually don't like their editors, nor the plethora of config files. The more plain vanilla the better here, hence makefiles as the least-hassle and most productive route. You need full visibility from top to bottom, and some IDEs can be pretty opaque. Older versions of NetBeans looked interesting, though...

Chris
Reply by chris December 17, 2021
On 12/06/21 13:52, Grant Edwards wrote:
> On 2021-12-04, George Neuner <gneuner2@comcast.net> wrote:
>> On Fri, 3 Dec 2021 21:28:54 -0000 (UTC), Grant Edwards
>> <invalid@invalid.invalid> wrote:
>>
>>> On 2021-12-03, Theo <theom+news@chiark.greenend.org.uk> wrote:
>>>
>>>> [*] Powershell and WSL have been trying to improve this. But I've not
>>>> seen any build flows that make much use of them, beyond simply taking
>>>> Linux flows and running them in WSL.
>>>
>>> I always had good luck using Cygwin and gnu "make" on Windows to run
>>> various Win32 .exe command line compilers (e.g. IAR). I (thankfully)
>>> haven't needed to do that for several years now...
>>
>> The problem with Cygwin is it doesn't play well with native Windows
>> GCC (MingW et al).
>
> It's always worked fine for me.
>
>> Cygwin compilers produce executables that depend on the /enormous/
>> Cygwin library.
>
> I wasn't talking about using Cygwin compilers. I was talking about
> using Cygwin to do cross-compilation using compilers like IAR.
>
>> You can statically link the library or ship the DLL (or an installer
>> that downloads it) with your program, but by doing so your program
>> falls under the GPL - the terms of which are not acceptable to some
>> developers.
>>
>> And the Cygwin environment is ... less than stable. Any update to
>> Windows can break it.
>
> That's definitely true. :/
I used Cygwin for years just to have access to the unix utils and X so I could run my favourite editor, nedit. I never ran compilers through it, but it was a hassle-free experience once set up. That was the 32-bit version, sadly no longer available...

Chris
Reply by David Brown December 12, 2021
On 11/12/2021 18:53, Hans-Bernhard Bröker wrote:
> On 10.12.2021 at 19:44, David Brown wrote:
>
>> The big advantage of having object directories that copy source
>> directories is that it all works even if you have more than one file
>> with the same name.
>
> Setting aside the issue of whether the build can actually handle that
> ("module names" in the code tend to only be based on the basename of the
> source, not its full path, so they would clash anyway), that should
> remain an exceptional mishap. I don't subscribe to the idea of making
> my everyday life harder to account for (usually) avoidable exceptions
> like that.
Nor do I. But as I said, and as others know, supporting object files in a tree is not difficult in a makefile, and it is common practice for many build systems. I can't think of any that /don't/ support it (not that I claim to have used a sizeable proportion of build systems). If it is easy to avoid a particular class of problem, and have a nice, neat structure, then what's the problem with having object files in a tree?

After all, the basic principle of an automatically maintained makefile (or other build system) is:

1. Find all the source files - src/x/y/z.c - in whatever source paths you have specified.

2. Determine all the object files you need by swapping ".c" for ".o", and changing the "src" directory for the "build" directory, giving you a list build/x/y/z.o.

3. Figure out a set of dependency rules for these, either using something like "gcc -M...", or the lazy method of making all object files depend on all headers, or something in between.

4. Make your binary file depend on all the build/x/y/z.o files.

As I see it, it is simpler, clearer and more natural that the object files (and dependencies, list files, etc.) follow the structure of the source files. I'd have to go out of my way to make a riskier system that put all the object files in one place.
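In GNU Make, those four steps can be sketched in a dozen lines (a minimal sketch, assuming sources under src/ and objects mirrored under build/; the file and target names are made up for illustration, and recipe lines must start with a TAB):

SRCS := $(shell find src -name '*.c')              # step 1: find sources
OBJS := $(patsubst src/%.c,build/%.o,$(SRCS))      # step 2: map to objects

build/%.o: src/%.c                                 # step 3: compile + deps
	@mkdir -p $(dir $@)
	$(CC) $(CFLAGS) -MMD -MP -c -o $@ $<

program.elf: $(OBJS)                               # step 4: link
	$(CC) $(LDFLAGS) -o $@ $^

-include $(OBJS:.o=.d)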
Reply by Stefan Reuther December 12, 2021
On 11.12.2021 at 18:47, Hans-Bernhard Bröker wrote:
> On 11.12.2021 at 10:01, Stefan Reuther wrote:
>> On 10.12.2021 at 18:35, Hans-Bernhard Bröker wrote:
>>> But let's face it: we very rarely even look at object files, much less
>>> work on them in any meaningful fashion. They just have to be
>>> somewhere, but it's no particular burden at all if they're all in a
>>> single folder, per primary build target.
>>
>> But sometimes, we do look at them. Especially in an embedded context.
>
> In my experience, looking at individual object files does not occur in
> embedded context any more often than in others.
Nor in my experience, but that is because I look at individual object files even for desktop/server applications. I don't expect that to be the rule :)
>> Or to answer the question "how much code size do I pay for using this
>> C++ feature?". "Did the compiler correctly inline this function I
>> expected it to inline?".
>
> Both of those are way easier to check in the debugger or in the mapfile,
> than by inspecting individual object files.
For me, 'objdump -dr blah.o | less' or 'nm blah.o | awk ...' is the easiest way to answer such questions. The output of 'objdump | less' is much easier to handle than gdb's 'disas'. And how do you even get function sizes with a debugger?
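For instance, to get function sizes (using a hypothetical object file name; --print-size and --size-sort are standard GNU binutils options):

nm --print-size --size-sort --radix=d editor.o    # size in the 2nd column
objdump -dr editor.o | less                       # disassembly + relocations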
>> And if the linker gives me a "duplicate definition" error, I prefer
>> that it is located in 'editor.o', not
>> '3d3901cdeade62df1565f9616e607f89.o'.
>
> Both are equally useless. You want to know which source file they're in,
> not which object files.
I want to know which translation unit they are in. It doesn't help to know that the duplicate definition comes from 'keys.inc', which is supposed to be included exactly once. I want to know which two translation units included it, and for that it helps to have the name of the translation unit - the initial *.c/*.cpp file - encoded in the object file name.
> Do you actually use a tool that obfuscates the .o file names like that?
Encoding the command line that generates a file into the file name (as a cryptographic hash) is a super-easy way to implement rebuild-on-rule-change. I use that for a number of temporary files. I do not use it for actual object files, for the reasons given, but it would technically make sense.
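A possible sketch of the idea in GNU Make (illustrative only, assuming a Unix-like shell with md5sum; the directory layout is made up): hash the compile command into a stamp file name, and make objects depend on the stamp, so a changed command line forces a rebuild.

CMDHASH := $(shell printf '%s' '$(CC) $(CFLAGS)' | md5sum | cut -c1-32)
STAMP   := build/cmd-$(CMDHASH).stamp

build/%.o: src/%.c $(STAMP)
	@mkdir -p $(dir $@)
	$(CC) $(CFLAGS) -c -o $@ $<

$(STAMP):
	@mkdir -p build
	@rm -f build/cmd-*.stamp    # drop stamps from old command lines
	@touch $@

If the flags change, the hash changes, the new stamp does not exist yet, and every object is rebuilt.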
> I don't think you've actually mentioned a single one, so far. None of
> the things you mentioned had anything to do with _where_ the object
> files are.
There are no hard technical reasons. It's all about usability, and that's about the things you actually do. If you've got a GUI that takes you to the assembler code of a function with a right-click in the editor, you don't need 'objdump'. I don't have such a GUI, and don't want one most of the time.

Stefan
Reply by Hans-Bernhard Bröker December 11, 2021
On 10.12.2021 at 19:44, David Brown wrote:

> The big advantage of having object directories that copy source
> directories is that it all works even if you have more than one file
> with the same name.
Setting aside the issue of whether the build can actually handle that ("module names" in the code tend to only be based on the basename of the source, not its full path, so they would clash anyway), that should remain an exceptional mishap. I don't subscribe to the idea of making my everyday life harder to account for (usually) avoidable exceptions like that.
Reply by Hans-Bernhard Bröker December 11, 2021
On 11.12.2021 at 10:01, Stefan Reuther wrote:
> On 10.12.2021 at 18:35, Hans-Bernhard Bröker wrote:
>> But let's face it: we very rarely even look at object files, much less
>> work on them in any meaningful fashion. They just have to be somewhere,
>> but it's no particular burden at all if they're all in a single folder,
>> per primary build target.
>
> But sometimes, we do look at them. Especially in an embedded context.
In my experience, looking at individual object files does not occur in embedded context any more often than in others.
> One example could be things like stack consumption analysis.
That one's actually easier if you have the object files all in a single folder, as the tool will have to look at all of them anyway, so it helps if you can just pass it objdir/*.o.

> Or to answer the question "how much code size do I pay for using this
> C++ feature?". "Did the compiler correctly inline this function I
> expected it to inline?".
Both of those are way easier to check in the debugger or in the mapfile, than by inspecting individual object files.
> And if the linker gives me a "duplicate definition" error, I prefer that
> it is located in 'editor.o', not '3d3901cdeade62df1565f9616e607f89.o'.
Both are equally useless. You want to know which source file they're in, not which object files. Do you actually use a tool that obfuscates the .o file names like that?
> But otherwise, once you got infrastructure to place object files in SOME
> subdirectory in your build system, mirroring the source structure is
> easy and gives a usability win.
I don't think you've actually mentioned a single one, so far. None of the things you mentioned had anything to do with _where_ the object files are.
Reply by Grant Edwards December 11, 2021
On 2021-12-11, David Brown <david.brown@hesbynett.no> wrote:
> On 11/12/2021 16:18, pozz wrote:
>
>> Ok, it's not too hard (nothing is hard when you know how to do it), but
>> it's not that simple either.
>
> Of course.
>
> And once you've got a makefile you like for one project, you copy it for
> the next. I don't think I have started writing a new makefile in 25
> years!
Too true. You don't write a Makefile from scratch any more than you sit down with some carbon, water, nitrogen, phosphorus and whatnot and make an apple tree. You look around and find a nice existing one that's closest to what you want, copy it, and start tweaking.

--
Grant
Reply by David Brown December 11, 2021
On 11/12/2021 16:18, pozz wrote:

> Ok, it's not too hard (nothing is hard when you know how to do it), but
> it's not that simple either.
Of course.

And once you've got a makefile you like for one project, you copy it for the next. I don't think I have started writing a new makefile in 25 years!