Submakes run the same rule multiple times - makefile

I have some software built using parallel multi-level makefiles, and I see that when my main Makefile runs two separate targets from a sub-makefile that share the same dependency, this dependency is built twice simultaneously and an error results.
Consider the following main Makefile in the project root folder:
TARGETS = t1 t2 t3 t4 t5 t6 t7 t8
.PHONY: all $(TARGETS) clean
all: $(TARGETS)
$(TARGETS):
	@echo Making $@
	@sleep 1
	$(MAKE) -C folder s$@
clean:
	@echo Making $@
	$(MAKE) -C folder clean
and the sub-makefile folder/Makefile:
SUBTARGETS = st1 st2 st3 st4 st5 st6 st7 st8
$(SUBTARGETS): dep
	@echo Making $@
	@sleep 1
	@touch $@
dep:
	@echo Making $@
	@sleep 1
	@echo bla >> dep
clean:
	rm -f $(SUBTARGETS)
	rm -f dep
	rm -f dep2dump
Then running make -j8 in the root folder will run targets t1...t8 in parallel, which will then run subtargets st1...st8, which all depend on dependency dep. From the shell output and the contents of the dep file (8 lines) it is obvious that the dep rule is run 8 times, as if the 8 invocations of folder/Makefile are completely independent.
I thought submakes coordinated when running in parallel and that they would avoid running the same target twice, but it seems this is not the case.
Can anyone suggest a correct way to solve such a case?
If eventually this is an unavoidable weakness of make, what alternative build tools should I look into?
Thanks
EDIT: The answers by MadScientist and Renaud Pacalet are useful but don't exactly solve my problem because they both require that the author of the top-level makefile has knowledge about the internals of the sub-makefile. I have not explained this requirement explicitly in my original post though.
So to give more details, the use case I am trying to solve is that where the source code in path folder/ is a separate project, e.g. a collection of utilities st1...st8 where all (or some) of them have a dependency on library dep, internal to the utilities project in folder. Then I want to be able to use this sub-project (as seamlessly as possible) in various master projects, each of them using just a (possibly different) subset of the utilities st1...st8. Additionally, the master project may contain many targets t1...t8, each depending on a different subset of st1...st8, as shown in my example above. Targets t1...t8 need to be able to run separately, building only the required dependencies from the subproject (so make t1 only builds st1, etc.); thus having to build all of st1...st8 for each one of t1...t8 is not desired. On the other hand they also need to be able to run in parallel, e.g. by running make all.
Ideally I would not want the author of each master makefile to have to know about the internals of the sub-project, nor have to include in the sub-makefile all the possible combinations of st1...st8 so that each master project can call just ONE of these to avoid the parallel build issue.
So far I have in mind, but have not tested, the following imperfect solutions:
As Renaud suggested, use something like flock to at least ensure that the multiple runs of dep (by separate sub-make instances) won't happen simultaneously (see the sketch after this list). Cons: it requires an extra tool (flock or similar) to be installed, and dep still runs multiple times, so extra work is needed to avoid doing the actual compilation over and over again; otherwise just eat the performance cost.
Include the sub-makefile in the master makefile so that everything runs in one make instance. This requires making the sub-makefile able to work regardless of the path of the master makefile that includes it. No big issue. Cons: merging / including two makefiles from different authors can open a can of worms, e.g. variables with the same name, etc.
Modify the sub-makefile as described in (2), and in the main project create another makefile, e.g. utils.make, that contains a rule for the needed targets of the sub-makefile and includes the sub-makefile. So utils.make will be (assuming this master project only needs st1, st5 and st7):
utils: st1 st5 st7
include folder/Makefile
Then the master makefile will have a utils-ext rule as a dependency of each of t1...t8 that will be:
utils-ext:
	$(MAKE) -f utils.make utils
to build all the utils needed. This keeps the two main makefiles separate but has all utils / subtargets built when building any single one of t1...t8, which is suboptimal.
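For option (1), a minimal sketch of what the flock-based serialization could look like in the sub-makefile (assuming the flock utility from util-linux is installed; the lock file name dep.lock is an arbitrary choice, not part of the original project):
dep:
	flock dep.lock -c 'test -f dep || { echo Making dep; sleep 1; echo bla >> dep; }'
The lock makes the concurrent sub-make instances take turns, and the test -f guard keeps the later instances from redoing the work; the recipe itself still runs once per sub-make, so the performance cost mentioned above is only avoided, not eliminated.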

You could try to move the dep dependency to your top Makefile:
.PHONY: all $(TARGETS) clean dep
all: $(TARGETS)
$(TARGETS): dep
	@echo Making $@
	@sleep 1
	$(MAKE) -C folder s$@
dep:
	$(MAKE) -C folder dep

The only decent solution to your problem is to have ONE instance of make build all the sub-directory targets you want. Having the parent make invoke multiple sub-makes in parallel in the same directory, unless every invocation uses a completely disjoint set of targets, is a guaranteed fail situation. So if you have multiple things you want to do in the submake you should collect them all in one invocation of the sub-make and let the sub-make's parallelism handle it for you.
You could do something like this:
TARGETS = t1 t2 t3 t4 t5 t6 t7 t8
.PHONY: all $(TARGETS) clean
all: $(TARGETS)
$(TARGETS): .submake ;
.submake:
	$(MAKE) -C folder $(addprefix s,$(MAKECMDGOALS))
Then in the sub-make add this so that when invoked with no arguments it builds everything:
all: $(SUBTARGETS)
Here, if you run make then the sub-make is invoked with no arguments and builds all the things in parallel. If you invoke make t1 t2 then the submake is invoked with the arguments st1 st2.
Alternatively, you can re-architect your makefiles so that you don't use recursive make at all, and one instance of make knows all the different rules and dependency relationships.
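A minimal sketch of that non-recursive layout for this example (the file name folder/rules.mk and the path-prefixed target names are assumptions, not part of the original project):
# top-level Makefile
TARGETS = t1 t2 t3 t4 t5 t6 t7 t8
.PHONY: all $(TARGETS)
all: $(TARGETS)
include folder/rules.mk
t1: folder/st1
t2: folder/st2
# ... one line per tN, listing the sub-targets it needs

# folder/rules.mk -- paths are relative to the top-level directory
folder/st1 folder/st2 folder/st3: folder/dep
	touch $@
# ... and so on for the remaining sub-targets
folder/dep:
	echo bla >> $@
Because a single make instance now sees the whole dependency graph, folder/dep is built exactly once even under -j8.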

Related

Re-evaluating GNU make makefile variable

I have inherited a large branched project that requires a volatile set of .a archives $(LIB_FILES) to be included in the link target, located in some directories $(LIB_DIRS). I can write an expression like this:
LIBDEP = $(foreach ldir, $(LIB_DIRS), \
$(filter $(addprefix %/, $(LIB_FILES)), $(wildcard $(ldir)/* )))
The problem is that they might not exist at the moment of make's invocation and would be built by invoking $(MAKE) inside another target's rule, which is a prerequisite to the link step.
The problem is that the actual list of files that should be created varies based on external factors determined at their build steps, so I can't hard-code it properly without turning the makefile into a spaghetti mess, and said variable is not re-evaluated at the moment of the link command's invocation.
I suspect the $(eval) function can be used somehow, but the manual is not very forthcoming, and I have not found examples of its use in this way.
Toolchain: GCC and binutils, make 3.81
Another solution is to create an explicit dependency of your makefile on the output of the step which currently creates the variable $(LIB_FILES). This is what the manual deals with in the chapter "How makefiles are remade", and it relies on the technique make is best at, namely deriving dependencies from the existence and timestamps of files (instead of variables). The following hopefully depicts your situation, with the process of deducing a new set of libraries simulated by the two variables $(LIBS_THIS_TIME) and $(LIB_CONFIG_SET).
LIBS_THIS_TIME = foo.a:baz.a:bar.a
LIB_CONFIG_SET = $(subst :,_,$(LIBS_THIS_TIME))
include libdeps.d
linkstep:
	@echo I am linking $^ now
	touch $@
libdeps.d: $(LIB_CONFIG_SET)
	-rm libdeps.d
	$(foreach lib,$(subst :, ,$(LIBS_THIS_TIME)),echo linkstep: $(lib) >> libdeps.d;)
$(LIB_CONFIG_SET):
	touch $@
If make finds that libdeps.d is not up to date with your current library configuration, it is remade before make executes any other rule, although it is not the first target in the makefile. This way, if your build process creates a new or different set of libraries, libdeps.d is remade first, and only then does make carry on with the other targets in your top makefile, now with the correct dependency information.
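For illustration, with the value of $(LIBS_THIS_TIME) above, the generated libdeps.d would contain one prerequisite line per library:
linkstep: foo.a
linkstep: baz.a
linkstep: bar.a
so the next time the include directive pulls it in, linkstep depends on exactly the current set of libraries.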
It sometimes happens that you need to invoke make several times in succession. One possibility to do this is to use conditionals:
ifeq ($(STEP),)
all:
	<do-first-step>
	$(MAKE) STEP=2 $@
else ifeq ($(STEP),2)
all:
	<do-second-step>
	$(MAKE) STEP=3 $@
else ifeq ($(STEP),3)
all:
	<do-third-step>
endif
In each step you can generate new files and have them existing for the next step.

How to force a certain groups of targets to be always run sequentially?

Is there a way to ask gmake to never run two targets from a set in parallel?
I don't want to use .NOTPARALLEL, because it forces the whole Makefile to be run sequentially, not just the required part.
I could also add dependencies so that one depends on another, but then (apart from being ugly) I'd need to build all of them in order to build the last one, which isn't necessary.
The reason why I need this is that (only a) part of my Makefile invokes ghc --make, which takes care of its dependencies itself. And it's not possible to run it in parallel on two different targets, because if the two targets share some dependency, they can overwrite each other's .o files. (But ghc is fine with being called sequentially.)
Update: To give a specific example. Let's say I need to compile two programs in my Makefile:
prog1 depends on prog1.hs and mylib.hs;
prog2 depends on prog2.hs and mylib.hs.
Now if I invoke ghc --make prog1.hs, it checks its dependencies, compiles both prog1.hs and mylib.hs into their respective object and interface files, and links prog1. The same happens when I call ghc --make prog2.hs. So if the two commands run in parallel, one will overwrite the mylib.o of the other, causing it to fail badly.
However, prog1 must not depend on prog2, nor vice versa, because they should be compilable separately. (In reality they are very large, with a lot of modules, and requiring them all to be compiled slows development considerably.)
Hmmm, could do with a bit more information, so this is just a stab in the dark.
Make doesn't really support this, but you can sequential-ise two targets in a couple of ways. First off, a real use for recursive make:
targ1: ; recipe1...
targ2: ; recipe2...
both-targets:
	${MAKE} targ1
	${MAKE} targ2
So here you can just make -j both-targets and all is fine. Fragile though, because make -j targ1 targ2 still runs in parallel. You can use dependencies instead:
targ1: ; recipe1...
targ2: | targ1 ; recipe2...
Now make -j targ1 targ2 does what you want. Disadvantage? make targ2 will always try to build targ1 first (sequentially). This may (or may not) be a show-stopper for you.
EDIT
Another unsatisfactory strategy is to explicitly look at $MAKECMDGOALS, which lists the targets you specified on the command-line. Still a fragile solution as it is broken when someone uses dependencies inside the Makefile to get things built (a not unreasonable action).
Let's say your makefile contains two independent targets targ1 and targ2. Basically they remain independent until someone specifies on the command-line that they must both be built. In this particular case you break this independence. Consider this snippet:
$(and $(filter targ1,${MAKECMDGOALS}),$(filter targ2,${MAKECMDGOALS}),$(eval targ1: | targ2))
Urk! What's going on here?
Make evaluates the $(and)
It first has to expand $(filter targ1,${MAKECMDGOALS})
Iff targ1 was specified, it goes on to expand $(filter targ2,${MAKECMDGOALS})
Iff targ2 was also specified, it goes on to expand the $(eval), forcing the serialization of targ1 and targ2.
Note that the $(eval) expands to nothing (all its work was done as a side-effect), so that the original $(and) always expands to nothing at all, causing no syntax error.
Ugh!
[Now that I've typed that out, the considerably simpler prog2: | $(filter prog1,${MAKECMDGOALS})
occurs to me. Oh well.]
YMMV and all that.
I'm not familiar with ghc, but the correct solution would be to get the two runs of ghc to use different build folders, then they can happily run in parallel.
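A minimal sketch of that idea, assuming GHC's -outputdir option (which redirects the generated .o and .hi files) and one build directory per program:
prog1: prog1.hs mylib.hs
	ghc --make prog1.hs -outputdir build/prog1 -o prog1
prog2: prog2.hs mylib.hs
	ghc --make prog2.hs -outputdir build/prog2 -o prog2
With separate output directories the two ghc runs no longer share mylib.o, so make -j prog1 prog2 is safe; the trade-off is that mylib.hs gets compiled twice.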
Since I got stuck at the same problem, here is another pointer in the direction that make does not provide the functionality you describe:
From the GNU Make Manual:
It is important to be careful when using parallel execution (the -j switch; see Parallel Execution) and archives. If multiple ar commands run at the same time on the same archive file, they will not know about each other and can corrupt the file.
Possibly a future version of make will provide a mechanism to circumvent this problem by serializing all recipes that operate on the same archive file. But for the time being, you must either write your makefiles to avoid this problem in some other way, or not use -j.
What you are attempting, and what I was attempting (using make to insert data into an SQLite3 database), suffer from the exact same problem.
I needed to separate the compilation from the other steps (cleaning, building dirs and linking), as I wanted to run only the compilation in parallel, using multiple cores via the -j flag.
I managed to solve this with different makefiles including and calling each other. Only the "compile" makefile runs in parallel with all the cores; the rest of the process is sequential.
I divided my makefile into 3 separate scripts:
settings.mk: contains all the variables and flag definitions
makefile: has all the targets except the compilation one (it has the .NOTPARALLEL directive). It calls compile.mk with the -j flag
compile.mk: contains only the compile operation (without .NOTPARALLEL)
In settings.mk I have:
CC = g++
DB = gdb
RM = rm
MD = mkdir
CP = cp
MAKE = mingw32-make
BUILD = Debug
DEBUG = true
[... all other variables and flags needed, directories etc ...]
In the makefile I have the link and compilation targets like these:
include .makefiles/settings.mk
[... OTHER TARGETS (clean, directories etc)]
compilation:
	@echo Compilation
	@$(MAKE) -f .makefiles/compile.mk --silent -j 8 -Oline
#Link
$(TARGET): compilation
	@echo -e Linking $(TARGET)
	@$(CC) $(LNKFLAGS) -o $(TARGETDIR)/$(TARGET) $(OBJECTS) $(LIBDIRS) $(LIB)
#Non-File Targets
.PHONY: all prebuild release rebuild clean resources directories run debug
.NOTPARALLEL: all
# include dependency files (*.d) if available
-include $(DEPENDS)
And this is my compile.mk:
include .makefiles/settings.mk
#Default
all: $(OBJECTS)
#Compile
$(BUILDDIR)/%.$(OBJEXT): $(SRCDIR)/%.$(SRCEXT)
	@echo -e Compiling: $<
	@$(MD) -p $(dir $@)
	@$(CC) $(COMFLAGS) $(INCDIRS) -c $< -o $@
#Non-File Targets
.PHONY: all
# include dependency files (*.d) if available
-include $(DEPENDS)
So far, it's working.
Note that I'm calling compile.mk with the -j flag AND -Oline, so that parallel processing doesn't mess up the output.
Any syntax coloring can be set in the main makefile script, since the -O flag invalidates the color escape codes.
I hope it can help.
I had a similar problem so ended up solving it on the command line, like so:
make target1; make target2
to force it to do the targets sequentially.
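A small variant, if the second build should be skipped when the first one fails:
make target1 && make target2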

How to handle different targets for same directories in parallel making

The question is about parallel making w/ GNU makefile.
Given a folder structure as below, the goal is to deliver a makefile that supports make release/debug/clean in parallel.
project folder structure:
foo
+-foo1
+-foo2
+-foo3
The makefile may be something like:
SUBDIR = foo1 foo2 foo3
.PHONY: $(SUBDIR) release debug clean
release: $(SUBDIR)
$(SUBDIR):
	$(MAKE) -C $@ release
debug: $(SUBDIR)
# below is incorrect: $(SUBDIR) is overridden.
$(SUBDIR):
	$(MAKE) -C $@ debug
..
The subdirectory list is set as phony targets for parallel making, but this loses the information of the original target (release, debug, clean, etc.).
One method is to suffix the directory names and recover them in the commands, but that is weird. Another method might be to use variables, but I'm not sure how to work it out.
The question is:
How do I write the rules for the directories so that they support parallel making with different targets (release/debug/clean)?
Any hints are greatly appreciated.
Setting variables on the command line certainly works. You can also use MAKECMDGOALS (see the GNU make manual):
$(SUBDIR):
	$(MAKE) -C $@ $(MAKECMDGOALS)
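Putting it together, a sketch of the whole top-level makefile using that rule (same names as in the question; note that if make is run with no goal, $(MAKECMDGOALS) is empty and each sub-make simply builds its own default goal):
SUBDIR = foo1 foo2 foo3
.PHONY: $(SUBDIR) release debug clean
release debug clean: $(SUBDIR)
$(SUBDIR):
	$(MAKE) -C $@ $(MAKECMDGOALS)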

Using make to build several binaries

I want to create a Makefile (in a parent dir) to call several other Makefiles (in sub dirs) such that I can build several binaries (one per project sub dir) by invoking just the one parent Makefile.
My research has been hampered by finding loads of stuff on recursive Makefiles, but I think that is where you are trying to build several directories' Makefiles into a single binary?
Maybe what I want to do is better handled by a shell script, perhaps invoking make in each subdirectory in turn, but I thought a Makefile might be a more elegant solution?
any pointers gratefully received
PS using linux and the GNU tool chain
The for loop solution given in the first answer above actually shouldn't be used, as-is. In that method, if one of your sub-makes fails the build will not fail (as it should) but continue on with the other directories. Not only that, but the final result of the build will be whatever the exit code of the last subdirectory make was, so if that succeeded the build succeeds even if some other subdirectory failed. Not good!!
You could fix it by doing something like this:
all:
	@for dir in $(SUBDIRS); \
	do \
	  $(MAKE) -C $${dir} $@ || exit $$?; \
	done
However now you have the opposite problem: if you run "make -k" (continue even if there are errors) then this won't be obeyed in this situation. It'll still exit on failure.
An additional issue with both of the above methods is that they serialize the building of all subdirectories, so if you enable parallel builds (with make's -j option) that will only happen within a single subdirectory, instead of across all subdirectories.
Eregrith and sinsedrix have solutions that are closer to what you want, although FYI you should never, ever use "make" when you are invoking a recursive make invocation. As in johfel's example you should ALWAYS use $(MAKE).
Something like this is what you want:
SUBDIRS = subdir1 subdir2 subdir3 ...
all: $(addprefix all.,$(SUBDIRS))
all.%:
	@ $(MAKE) -C '$*' '$(basename $@)'
.PHONY: $(addprefix all.,$(SUBDIRS))
And of course you can add more stanzas like this for other targets such as "install" or whatever. There are even more fancy ways to handle building subdirectories with any generic target, but this requires a bit more detail.
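For instance, an install stanza following the same pattern might look like this (a sketch; it assumes each subdirectory makefile provides an install target):
install: $(addprefix install.,$(SUBDIRS))
install.%:
	@ $(MAKE) -C '$*' '$(basename $@)'
.PHONY: install $(addprefix install.,$(SUBDIRS))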
If you want to support parallel builds you may need to declare dependencies at this level to avoid parallel builds of directories which depend on each other. For example in the above if you cannot build subdir3 until after both subdir1 and subdir2 are finished (but it's OK for subdir1 and subdir2 to build in parallel) then you can add something like this to your makefile:
all.subdir3 : all.subdir1 all.subdir2
You can call targets in subdirectory makefiles via
all:
	$(MAKE) -C subdirectory1 $@
	$(MAKE) -C subdirectory2 $@
...
or better
SUBDIRS=subd1 subd2 subd3
all:
	@for dir in $(SUBDIRS); \
	do \
	  $(MAKE) -C $${dir} $@; \
	done
You should indeed use CMake to generate the Makefile automatically from a given CMakeLists.txt configuration file.
Here's a random link to get you started. Here you can find a simple sample project, including multiple subdirectories, executables, and a shared library.
Each makefile can have several targets; this is still true with recursive makefiles. Usually it's written:
all: target1 target2 target3
target1 :
	make -C subdir
Then make all

Preventing two make instances from executing the same target (parallel make)

I am trying to make parallel make work on our build server. I am running into a very frequent problem where two instances of make, making two different targets, say A and B, nearly simultaneously try to make a target which is required by both, say C.
As both instances try to make C at the same time, making C fails for one of them, since making C requires some files to be moved around, and one of the instances ends up moving or deleting a file already created by the other.
Is there a common construct that I can use to prevent re-entry into a makefile if the target is already being made?
Update:
OK, let me put it this way:
My application requires A.lo and B.lo to be present. These A.lo and B.lo are libraries which also link against C.lo.
So the rules look like
app.d : A.lo B.lo (other lo s)
(do linking)
In some other directory say A (which will house A.lo) :
A.lo : C.lo (other .o s and .lo s)
(do linking)
In some other directory say B (which will house B.lo) :
B.lo : C.lo (other .o s and .lo s)
(do linking)
So in effect while making app.d make forks off two parallel makes for targets A.lo and B.lo.
Upon entering directories A and B, make forks off another two threads for target C.lo independently, and at times both of these are linking C.lo at the same time, which causes one of them to fail with weird errors like "file not recognized" (since the file may be being written to by the other linker instance).
How should I go about solving this? It is not possible to create A.lo and B.lo without C.lo linked against them.
This may sound a little, well, obvious, but the simplest solution is just to arrange for C to be built explicitly before either A or B. That way when the recursive makes for A and B run, C will already be up-to-date, and neither will bother trying to build it.
If your toplevel makefile looks like this:
all: buildA buildB
buildA:
	$(MAKE) -C A
buildB:
	$(MAKE) -C B
You can extend it like this:
all: buildA buildB
buildA: buildC
	$(MAKE) -C A
buildB: buildC
	$(MAKE) -C B
buildC:
	$(MAKE) -C C
You didn't give a lot of detail about your specific build, so probably this won't just drop-in and work, but hopefully you get the idea.
I solve this problem using the "mkdir" technique:
SHELL = /bin/bash
all: targetC targetD
targetC: targetA
	............
targetD: targetB
	............
targetA targetB: source
	-mkdir source.lock && ( $(command) < source ; rm -r source.lock )
	$(SHELL) -c "while [ -d source.lock ] ; do sleep 0.1 ; done"
I would be happy to see a more elegant solution, though.
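One possible alternative, if the flock utility from util-linux is available, is to let it serialize the recipe instead of the mkdir/poll loop (a sketch only; unlike the mkdir trick it runs the command once per target rather than skipping the second run, it just never runs two at once):
targetA targetB: source
	flock source.lock -c '$(command) < source'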
