
Makefile or IDE?

Started by pozz December 2, 2021
On 04.12.2021 16:23, pozz wrote:
> On 04/12/2021 10:31, Stefan Reuther wrote:
>> I'm not sure what you need order-only dependencies for. For a project
>> like this, with GNU make I'd most likely just do something like
>>
>>      OBJ = file1.o module1/file2.o module2/file3.o
>>      main: $(OBJ)
>>              $(CC) -o $@ $(OBJ)
>>      $(OBJ): %.o: $(SRCDIR)/%.c
>>              mkdir $(dir $@)
>>              $(CC) $(CFLAGS) -c $< -o $@
>
> This is suboptimal. Every time one object file is created (because it is
> not present or because prerequisites aren't satisfied), the mkdir command
> is executed, even if $(dir $@) already exists.
(did I really forget the '-p'?)

The idea was that creating a directory and checking for its existence
both require a path lookup, which is the expensive operation here.

When generating the Makefile with a script, it's easy to sneak a 100%
matching directory-creation dependency into any rule that needs it:

  foo/bar.o: bar.c foo/.mark
          ...
  foo/.mark:
          mkdir foo
>>> Dependencies must be created as a side effect of compilation with
>>> esoteric -M options for gcc.
>>
>> It's not too bad with sufficiently current versions.
>>
>>      CFLAGS += -MMD -MP
>>      -include $(OBJ:.o=.d)
>
> Are you sure you don't need -MT too, to specify exactly the target rule?
Documentation says you are right, but '-MMD -MP' works fine for me so far...


Stefan
On 05/12/2021 11:02, Stefan Reuther wrote:
> On 04.12.2021 22:17, David Brown wrote:
>> On 04/12/2021 20:53, George Neuner wrote:
>>> On Fri, 3 Dec 2021 21:28:54 -0000 (UTC), Grant Edwards
>>>> I always had good luck using Cygwin and gnu "make" on Windows to run
>>>> various Win32 .exe command line compilers (e.g. IAR). I (thankfully)
>>>> haven't needed to do that for several years now...
>>>
>>> The problem with Cygwin is it doesn't play well with native Windows
>>> GCC (MingW et al).
> [...]
>> I concur with that. Cygwin made sense long ago, but for the past couple
>> of decades the mingw-based alternatives have been more appropriate for
>> most uses of *nix stuff on Windows. In particular, Cygwin is a thick
>> compatibility layer that has its own filesystem, process management, and
>> other features to fill in the gaps where Windows doesn't fulfil the
>> POSIX standards (or does so in a way that plays badly with the rest of
>> Windows).
>
> The problem is that both projects, Cygwin and MinGW/MSYS, provide much
> more than just a compiler, and in an incompatible way, which is probably
> incompatible with what your toolchain does, and incompatible with what
> Visual Studio does.
Neither Cygwin nor msys is a compiler or a toolchain. Nor is MSVS, for
that matter. That would only be a problem if you misunderstood what they
are.
> > "-Ic:\test" specifies one path name for Windows, but probably two for a > toolchain with Unix heritage, where ":" is the separator, not a drive > letter. Cygwin wants "-I/cygdrive/c" instead, (some versions of) MinGW > want "-I/c". That, on the other hand, might be an option "-I" followed > by an option "/c" for a toolchain with Windows heritage. >
Drive letters on Windows have always been a PITA. Usually, IME, it is not
a big issue for compilation - most of your include directories will be on
the same drive you are working in (with "system" includes already handled
by the compiler configuration). Use a makefile, make the base part a
variable, then at most you only have to change one part. It's a good idea
anyway to have things like base paths to includes as a variable in the
makefile.

One of the differences between msys and cygwin is the way they handle
paths - msys has a method that is simpler and closer to Windows, while
cygwin is a bit more "alien" but supports more POSIX features (like
links). In practice, with programs compiled for the "mingw" targets you
can usually use Windows names and paths without further ado.

On my Windows systems, I put msys2's "/usr/bin" directory on my normal
PATH, and from the standard Windows command prompt I happily use make,
grep, less, cp, ssh, and other *nix tools without problems or special
consideration.
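As a rough illustration of the "base path in a variable" idea (the
directory names here are invented), only one line needs to change if the
tree moves to another drive or directory:

    # Invented paths - point BASE at wherever the project actually lives.
    BASE    ?= c:/work/project
    INCDIRS := $(BASE)/include $(BASE)/drivers/include
    CFLAGS  += $(addprefix -I,$(INCDIRS))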
> The problem domain is complex, therefore solutions need to be complex.
>
> That aside, I found staying within one universe ("use all from Cygwin",
> "use all from MinGW") to work pretty well; when having to call into
> another universe (e.g. native Win32), be careful to not use, for
> example, any absolute paths.
>
>
> Stefan
>
On 05/12/2021 11:11, Stefan Reuther wrote:
> On 04.12.2021 16:23, pozz wrote:
>> On 04/12/2021 10:31, Stefan Reuther wrote:
>>> I'm not sure what you need order-only dependencies for. For a project
>>> like this, with GNU make I'd most likely just do something like
>>>
>>>      OBJ = file1.o module1/file2.o module2/file3.o
>>>      main: $(OBJ)
>>>              $(CC) -o $@ $(OBJ)
>>>      $(OBJ): %.o: $(SRCDIR)/%.c
>>>              mkdir $(dir $@)
>>>              $(CC) $(CFLAGS) -c $< -o $@
>>
>> This is suboptimal. Every time one object file is created (because it is
>> not present or because prerequisites aren't satisfied), the mkdir command
>> is executed, even if $(dir $@) already exists.
>
> (did I really forget the '-p'?)
>
> The idea was that creating a directory and checking for its existence
> both require a path lookup, which is the expensive operation here.
>
Checking for the path is not expensive - it is already necessary to have
the path details read from the filesystem (and therefore cached, even on
Windows) because you want to put a file in it. So it is free. "mkdir -p"
is also very cheap - it only needs to do something if the path does not
exist.

(Of course, on Windows starting any process takes time and resources an
order of magnitude or more greater than on *nix.)

It is always nice to avoid unnecessary effort, as even small
inefficiencies add up if there are enough of them. But there's no need to
worry unduly about the small things.
> When generating the Makefile with a script, it's easy to sneak a 100%
> matching directory creation dependency into any rule that needs it
>
> foo/bar.o: bar.c foo/.mark
>         ...
> foo/.mark:
>         mkdir foo
>
And it's easy to forget the "touch foo/.mark" command to make that work! But that is completely unnecessary - make is perfectly capable of working with a directory as a dependency and target (especially as an order-only dependency).
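Roughly like this, as a sketch (directory and file names are illustrative,
and the recipe lines need a real tab):

    # 'build' sits after the '|', so only its existence matters - its
    # timestamp never forces the objects to be rebuilt.
    build/%.o: %.c | build
            $(CC) $(CFLAGS) -c $< -o $@

    build:
            mkdir -p build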
>>>> Dependencies must be created as a side effect of compilation with
>>>> esoteric -M options for gcc.
>>>
>>> It's not too bad with sufficiently current versions.
>>>
>>>      CFLAGS += -MMD -MP
>>>      -include $(OBJ:.o=.d)
>>
>> Are you sure you don't need -MT too, to specify exactly the target rule?
>
> Documentation says you are right, but '-MMD -MP' works fine for me so far...
>
>
> Stefan
>
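For what it's worth, the whole pattern ends up looking roughly like this
as a sketch (file names are illustrative, and the recipe lines need a real
tab):

    OBJ  := main.o module1/file2.o
    DEPS := $(OBJ:.o=.d)

    # -MMD: write a .d fragment next to each object (user headers only)
    # -MP : add a phony target per header, so a deleted header doesn't
    #       break the build
    CFLAGS += -MMD -MP

    main: $(OBJ)
            $(CC) -o $@ $(OBJ)

    %.o: %.c
            $(CC) $(CFLAGS) -c $< -o $@

    # The .d files don't exist on the first build; '-include' ignores that.
    -include $(DEPS)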
On 2021-12-04, David Brown <david.brown@hesbynett.no> wrote:
>
>> I succeeded to use CMake for cross-compiling on PC Linux for Raspi OS
>> Linux, but the compiler identification for a raw metal target was not
>> happy when the trial compilation could not link a run file using the
>> Linux run file creation model.
>
> That is kind of what I thought. CMake sounds like a good solution if
> you want to make a program that compiles on Linux with native gcc, and
> also with MSVC on Windows, and perhaps a few other native build
> combinations. But it is not really suited for microcontroller builds as
> far as I can see. (Again, I haven't tried it much, and don't want to do
> it injustice by being too categorical.)
I use CMake for cross-compilation for microcontroller stuff. I don't use
it for my own code, but there are a few 3rd-party libraries that use it,
and I don't have any problems configuring it to use a cross compiler.

--
Grant
On 2021-12-04, George Neuner <gneuner2@comcast.net> wrote:
> On Fri, 3 Dec 2021 21:28:54 -0000 (UTC), Grant Edwards
> <invalid@invalid.invalid> wrote:
>
>> On 2021-12-03, Theo <theom+news@chiark.greenend.org.uk> wrote:
>>
>>> [*] Powershell and WSL have been trying to improve this. But I've not
>>> seen any build flows that make much use of them, beyond simply taking
>>> Linux flows and running them in WSL.
>>
>> I always had good luck using Cygwin and gnu "make" on Windows to run
>> various Win32 .exe command line compilers (e.g. IAR). I (thankfully)
>> haven't needed to do that for several years now...
>
> The problem with Cygwin is it doesn't play well with native Windows
> GCC (MingW et al).
It's always worked fine for me.
> Cygwin compilers produce executables that depend on the /enormous/
> Cygwin library.
I wasn't talking about using Cygwin compilers. I was talking about using Cygwin to do cross-compilation using compilers like IAR.
> You can statically link the library or ship the DLL (or an installer
> that downloads it) with your program, but by doing so your program
> falls under the GPL - the terms of which are not acceptable to some
> developers.
>
> And the Cygwin environment is ... less than stable. Any update to
> Windows can break it.
That's definitely true. :/
On 06/12/2021 14:51, Grant Edwards wrote:
> On 2021-12-04, David Brown <david.brown@hesbynett.no> wrote:
>>
>>> I succeeded to use CMake for cross-compiling on PC Linux for Raspi OS
>>> Linux, but the compiler identification for a raw metal target was not
>>> happy when the trial compilation could not link a run file using the
>>> Linux run file creation model.
>>
>> That is kind of what I thought. CMake sounds like a good solution if
>> you want to make a program that compiles on Linux with native gcc, and
>> also with MSVC on Windows, and perhaps a few other native build
>> combinations. But it is not really suited for microcontroller builds as
>> far as I can see. (Again, I haven't tried it much, and don't want to do
>> it injustice by being too categorical.)
>
> I use CMake for cross-compilation for microcontroller stuff. I don't
> use it for my own code, but there are a few 3rd-party libraries that
> use it, and I don't have any problems configuring it to use a cross
> compiler.
>
OK. As I said, I haven't looked in detail or tried much. Maybe I will, one day when I have time.
On 2021-12-06, David Brown <david.brown@hesbynett.no> wrote:
> On 06/12/2021 14:51, Grant Edwards wrote:
>> I use CMake for cross-compilation for microcontroller stuff. I don't
>> use it for my own code, but there are a few 3rd-party libraries that
>> use it, and I don't have any problems configuring it to use a cross
>> compiler.
>
> OK. As I said, I haven't looked in detail or tried much. Maybe I will,
> one day when I have time.
I see no reason at all to use it for embedded code unless you want to use
a large 3rd-party library that already uses it, and you want to use that
library's existing cmake build process. For smaller libraries, it's
probably easier to write a makefile from scratch. IMO, configuring stuff
that uses cmake seems very obtuse and fragile - but that's probably
because I don't use it much.

--
Grant
On 12/2/2021 6:46 AM, pozz wrote:
> When I download C source code (for example for Linux), most of the time
> I need to use make (or autoconf).
>
> In the embedded world (no embedded Linux), we use MCUs produced by a
> silicon vendor that gives you at least a ready-to-use IDE (Eclipse based
> or Visual Studio based or proprietary). Recently it gives you a full set
> of libraries, middleware, and tools to create a complex project from
> scratch in a couple of minutes that is compatible and buildable with its
> IDE.
>
> Ok, it's a good thing to start with minimal effort and make some tests
> on EVBs and new chips. However I'm wondering whether good quality
> commercial/industrial grade software is maintained under the IDE of the
> silicon vendor or is maintained with a Makefile (or similar).
>
> I'm asking this because I just started to add some unit tests (to run
> on the host machine) to one of my projects that is built under the IDE.
> Without a Makefile it is very difficult to add a series of tests: do I
> create a different IDE project for each module test?
>
> Moreover, the build process of a project maintained under an IDE is
> manual (click on a button). Most of the time there isn't the possibility
> to build from the command line, and when it is possible, it isn't the
> "normal" way.
>
> Many times in the past I tried to write a Makefile for my projects, but
> honestly for me the make tool is very cryptic (tabs instead of spaces?).
> Dependencies are a mess.
>
> Do you use an IDE or a Makefile? Is there a recent and much better
> alternative to make (such as CMake or SCons)?
For my most recent projects, I'm using Eclipse and letting it generate the
make files. I find it necessary to manually clean up the XML controlling
Eclipse to ensure that there are no hard-coded paths and everything uses
sane path variables (after starting with a vendor-tool-generated project).

I have multiple projects in the workspace:
1) a target project using the GCC ARM cross-compiler (debug and release
   targets), and
2) one or more MinGW GCC projects for host builds of debug+test software.

It's not optimal but it does work without too much grief. It does a poor
job for understanding/maintaining those places where different compiler
options are needed (fortunately not many).

I read *the* book on CMake and my head hurts, plus the book is revised
every three weeks as CMake adds or fixes numerous 'special' things.
Haven't actually used it yet but might try (with Zephyr).

My older big projects are primarily make (with dozens of targets including
intermediate preprocess stuff), plus a separate Visual Studio build for a
simulator, an Eclipse build (auto-generated make) for one of the embedded
components, and a Linux Eclipse build (auto-generated make) for Linux
versions of utilities. All this is painful to maintain and keep
synchronized. Sure would like to see a better way to handle all the
different targets and platforms (which CMake should help with, but I'm
really not sure how to wrangle the thing).

Interesting discussion!
On 02/12/2021 12:46, pozz wrote:
[...]
> Do you use an IDE or a Makefile? Is there a recent and much better
> alternative to make (such as CMake or SCons)?
I reply to this post with a few questions about make.

For embedded projects, we use at least one cross-compiler. I usually use
two compilers: a cross-compiler for the embedded target and a native
compiler for creating a "simulator" for the host or for running some
tests on the host.

I'm thinking of using a variable to choose between the targets:

  make TARGET=embedded|host

While the native compiler is usually already on the PATH (even if I
prefer to avoid that), the cross-compiler is usually not. How do you
solve this?

I'm thinking of using env variables again and setting them in a batch
script path.bat that is machine dependent (so it shouldn't be tracked by
git):

  SET GNU_ARM=c:\nxp\MCUXpressoIDE_11.2.1_4149...
  SET MINGW_PATH=c:\mingw64

In the makefile:

  ifeq ($(TARGET),embedded)
    CC := "$(GNU_ARM)/bin/arm-none-eabi-gcc"
    CPP...
  else ifeq ($(TARGET),host)
    CC := $(MINGW_PATH)/bin/gcc
    CPP...
  endif

In this way, I launch path.bat only one time on my Windows development
machine and run make TARGET=... during development.

Another issue is with the internal commands of cmd.exe. GNU make for
Windows, ARM gcc, mingw and so on are able to handle paths with Unix-like
slashes, but Windows internal commands such as mkdir and del do not. I
think it's much better to use Unix-like commands (mkdir, rm) that can be
installed with coreutils[1] for Windows. So in path.bat I add the
coreutils folder to PATH:

  SET COREUTILS_PATH=C:\TOOLS\COREUTILS

and in the Makefile:

  MKDIR := $(COREUTILS_PATH)/bin/mkdir
  RM := $(COREUTILS_PATH)/bin/rm

Do you use better solutions?

[1] http://gnuwin32.sourceforge.net/packages/coreutils.htm
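Putting it together, I imagine something roughly like this (only a
sketch; GNU_ARM and MINGW_PATH come from the environment via path.bat,
and the names are illustrative):

    # Sketch: select the toolchain from TARGET, defaulting to 'embedded'.
    TARGET ?= embedded

    ifeq ($(TARGET),embedded)
      TOOLPREFIX := "$(GNU_ARM)"/bin/arm-none-eabi-
    else ifeq ($(TARGET),host)
      TOOLPREFIX := $(MINGW_PATH)/bin/
    else
      $(error Unknown TARGET '$(TARGET)': use 'embedded' or 'host')
    endif

    CC  := $(TOOLPREFIX)gcc
    CXX := $(TOOLPREFIX)g++

    # Keep per-target objects apart so switching TARGET doesn't mix them.
    BUILDDIR := build/$(TARGET)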
On 04/12/2021 17:41, David Brown wrote:
[...]
>> This is suboptimal. Every time one object file is created (because it is
>> not present or because prerequisites aren't satisfied), the mkdir command
>> is executed, even if $(dir $@) already exists.
>
> Use existence-only dependencies:
>
> target/%.o : %.c | target
>         $(CC) $(CFLAGS) -c $< -o $@
>
> target :
>         mkdir -p target
Do you replicate the source tree in the target directory for the build?
I'd prefer to have the same tree in the source and build dirs:

  src/
    file1.c
    mod1/
      file1.c
    mod2/
      file1.c
  build/
    file1.o
    mod1/
      file1.o
    mod2/
      file1.o

With your rules above, I don't think this can be done. target is only the
main build directory, but I need to create the subdirectories too.

I understood that for this I need to use $(@D) in the prerequisites, and
this can be done only with secondary expansion:

  .SECONDEXPANSION:
  target/%.o : %.c target/%.d | $$(@D)
          $(CC) $(CFLAGS) -c $< -o $@

  $(BUILD_DIRS):
          $(MKDIR) -p $@
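In full, I think the pattern would look roughly like this (a sketch;
SRCDIR, BUILDDIR and the source list are only examples, and the recipe
lines need a real tab):

    SRCDIR   := src
    BUILDDIR := build
    SRCS     := file1.c mod1/file1.c mod2/file1.c
    OBJS     := $(addprefix $(BUILDDIR)/,$(SRCS:.c=.o))
    OBJDIRS  := $(sort $(patsubst %/,%,$(dir $(OBJS))))

    .SECONDEXPANSION:

    # $$(@D) becomes the directory of each object (e.g. build/mod1) at
    # second expansion; as an order-only prerequisite its timestamp is
    # ignored, so it only has to exist.
    $(BUILDDIR)/%.o: $(SRCDIR)/%.c | $$(@D)
            $(CC) $(CFLAGS) -c $< -o $@

    $(OBJDIRS):
            $(MKDIR) -p $@

    main: $(OBJS)
            $(CC) -o $@ $(OBJS)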
