I would like to create multiple directories, each containing some files, using one makefile.
I have a directory structure like this:
conf_a/conf.json
conf_b/conf.json
main.py
Makefile
requirements.txt
I would like to type make conf_a and end up with a tree like this:
build/conf_a/conf.json
build/conf_a/main.py
build/conf_a/requirements.txt
conf_a/conf.json
conf_b/conf.json
main.py
Makefile
requirements.txt
Or type make conf_b and end up with a tree like this:
build/conf_b/conf.json
build/conf_b/main.py
build/conf_b/requirements.txt
conf_a/conf.json
conf_b/conf.json
main.py
Makefile
requirements.txt
So I've made a Makefile like this:
# Disable built-in rules and variables
MAKEFLAGS += --no-builtin-rules
MAKEFLAGS += --no-builtin-variables
.ONESHELL:
.SHELLFLAGS := -ec
.SILENT:
BUILD_DIR := $(CURDIR)/build
CONF_FILE := conf.json
FILES_TO_COPY := requirements.txt main.py
FUNCTION_DIRS := $(shell ls */$(CONF_FILE) | xargs -n 1 -I {} dirname {})
HIDDEN_FUNCTION_DIRS := $(shell ls .*/$(CONF_FILE) 2> /dev/null | xargs -n 1 -I {} dirname {})
clean:
rm -rf $(BUILD_DIR)
all: clean $(FUNCTION_DIRS) deploy
$(FUNCTION_DIRS) $(HIDDEN_FUNCTION_DIRS):
tmp=$@
FUNCTION_DIR=$${tmp%/}
export FUNCTION=$${FUNCTION_DIR#.}
mkdir -p $(BUILD_DIR)/$$FUNCTION
cp -f $(FILES_TO_COPY) $$FUNCTION_DIR/$(CONF_FILE) $(BUILD_DIR)/$$FUNCTION/
test:
for FUNCTION in $(shell ls $(BUILD_DIR))
do
echo "Testing $$FUNCTION"
done
deploy:
for FUNCTION in $(shell ls $(BUILD_DIR))
do
echo "Deploying $$FUNCTION"
done
Well, it works...
So if I want to test a conf I do: make conf_a test.
If I want to deploy: make conf_b deploy.
It works quite well, but the test and deploy targets are sequential (because of the for loop) when they could run in parallel.
My problem is that I have too many configuration directories, and because deployment is slow, running in parallel would be a lot better.
But I do not know how to structure a Makefile that way.
Any ideas?
Truth be told, the deploy target deploys a GCP Cloud Function, and test just runs the function locally.
Generally, the easiest way to structure a makefile to facilitate parallel operations is to define separate targets that can be processed in parallel. Then you can use make's -j option to request that it take care of the parallelization across (up to) a specific maximum number of parallel tasks.
For example:
deploy: deploy_a deploy_b
deploy_a: conf_a
echo deploying conf_a
deploy_b: conf_b
echo deploying conf_b
Then you can make -j2 deploy and (probably) the deploy_a and deploy_b rules will be processed in parallel. But note that that might not help much. Even though you have separate processes for the two deployments, if you're deploying both to the same local disk then they won't truly be able to write to the disk at the same time, even to different files. As a result, you'll probably not see a significantly better time to completion, and it might even be worse.
Note, too, that the above example eschews dynamically determining the available component directories. Such dynamism is atypical of makefiles, and IMO it rarely provides a net benefit. Nevertheless, GNU make (on which specific implementation you are already relying) does offer mechanisms by which you could generate the needed per-directory deployment rules dynamically.
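As a sketch of what such dynamic generation might look like for the directory layout in the first question (the deploy-% naming and the echo placeholder are assumptions, not code from the question's makefile):

```make
# Discover every directory containing a conf.json (conf_a, conf_b, ...).
CONFS := $(patsubst %/,%,$(dir $(wildcard */conf.json)))

# Generate one phony deploy-<conf> target per discovered directory.
.PHONY: deploy $(addprefix deploy-,$(CONFS))
deploy: $(addprefix deploy-,$(CONFS))

# Static pattern rule so the phony targets still get a recipe; $* is the
# configuration name. The echo stands in for the real deploy command.
$(addprefix deploy-,$(CONFS)): deploy-%:
	echo "Deploying $*"
```

With this shape, make -j4 deploy lets make run up to four deployments concurrently, one process per configuration directory.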
Related
I have a number of makefiles that build and run tests. I would like to create a script that makes each one and notes whether the tests passed or failed. Though I can determine test status within each make file, I am having trouble finding a way to communicate that status to the caller of the make command.
My first thought is to somehow affect the return value of the make command, though this does not seem possible. Can I do this? Is there some other form of communication I can use to express the test status to the bash script that will be calling make? Perhaps by using environment variables?
Thanks
Edit: It seems that I cannot set the return code for make, so for the time being I will have to make the tests, run them in the calling script instead of the makefile, note the results, and then manually run a make clean. I appreciate everyone's assistance.
Make will only return one of the following according to the source
#define MAKE_SUCCESS 0
#define MAKE_TROUBLE 1
#define MAKE_FAILURE 2
MAKE_SUCCESS and MAKE_FAILURE should be self-explanatory; MAKE_TROUBLE is only returned when running make with the -q option.
That's pretty much all you get from make, there doesn't seem to be any way to set the return code.
The default behavior of make is to return failure and abandon any remaining targets if something failed.
for directory in */; do
if ( cd "$directory" && make ); then
echo "$0: Make in $directory succeeded" >&2
else
echo "$0: Make in $directory failed" >&2
fi
done
Simply ensure each test leaves its result in a file unique to that test. Least friction will be to create test.pass if the test passes, and otherwise create test.fail. At the end of the test run, gather up all the files and generate a report.
This scheme has two advantages that I can see:
You can run the tests in parallel (you do use the -j n flag, don't you? (hint: it's the whole point of make))
You can use the result files to record whether the test needs to be re-run (standard culling of work (hint: this is nearly the whole point of make))
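The marker-file idiom itself is plain shell; a minimal sketch with two invented stand-in tests (true for a pass, false for a failure) shows how the counts come out:

```shell
#!/bin/sh
# Emulate the .pass/.fail marker scheme with two stand-in "tests".
run_test() {
    name=$1; shift
    rm -f "$name.pass" "$name.fail"
    touch "$name.pass"                    # assume success...
    "$@" || mv "$name.pass" "$name.fail"  # ...demote to .fail if the test fails
}

run_test test-1 true     # a passing test
run_test test-2 false    # a failing test

# Gather up the marker files and report, like the makefile's all target.
echo $(ls *.pass 2>/dev/null | wc -l) passes
echo $(ls *.fail 2>/dev/null | wc -l) failures
```

In a clean directory this prints 1 passes and 1 failures; note that the failing test still leaves the overall script successful, which is exactly what the makefile recipe below relies on.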
Assuming the tests are called test-blah where blah is any string, and that you have a list of tests in ${tests} (after all, you have just built them, so it's not an unreasonable assumption).
A sketch:
fail = ${@:%.pass=%.fail}
test-passes := $(addsuffix .pass,${tests})
${test-passes}: test-%.pass: test-%
rm -f ${fail}
touch $@
$< || mv $@ ${fail}
.PHONY: all
all: ${test-passes}
all:
# Count the .pass files, and the .fail files
echo '$(words $(wildcard *.pass)) passes'
echo '$(words $(wildcard *.fail)) failures'
In more detail:
test-passes := $(addsuffix .pass,${tests})
If ${tests} contains test-1 test-2 (say), then ${test-passes} will be test-1.pass test-2.pass
${test-passes}: test-%.pass: test-%
You've just gotta love static pattern rules.
This says that the file test-1.pass depends on the file test-1. Similarly for test-2.pass.
If test-1.pass does not exist, or is older than the executable test-1, then make will run the recipe.
rm -f ${fail}
${fail} expands to the target with pass replaced by fail, or test-1.fail in this case. The -f ensures the rm returns no error in the case that the file does not exist.
touch $@ — create the .pass file
$< || mv $@ ${fail}
Here we run the executable
If it returns success, our work is finished
If it fails, the output file is deleted, and test-1.fail is put in its place
Either way, make sees no error
.PHONY: all — The all target is symbolic and is not a file
all: ${test-passes}
Before we run the recipe for all, we build and run all the tests
echo '$(words $(wildcard *.pass)) passes'
Before passing the text to the shell, make expands $(wildcard) into a list of pass files, and then counts the files with $(words). The shell gets the command echo 4 passes (say)
You run this with
$ make -j9 all
Make will keep 9 jobs running at once — lovely if you have 8 CPUs.
I know this has been asked before, but none of the solutions I've found work for me because they're anti-DRY.
I have a number of targets that depend on things that can't readily be timestamped -- such as files copied from another system. What I'd like to be able to do is list dependencies in a variable, like nobuild=this,that, and have those targets be assumed to be up-to-date. Since I have a lot of these, I don't want to ifdef around each one; what would be pseudocodibly preferable would be something like
ignorable-target: dependencies
$(call ifnobuild,$@)
.. rest of normal build steps ..
where the ifnobuild macro expanded to some sort of exit-from-this-recipe-with-success gmake instruction if ignorable-target was mentioned in the nobuild variable.
I also don't want to get into multi-line continued shell commands in order to defer the conditional to the recipe itself; I want to be able to tell make "Assume these targets are up-to-date and don't try to build them," so I can test other aspects with the local copies already obtained from the problematic recipes.
There isn't any sort of exit-recipe-with-success mechanism in gmake, is there?
[Edited to hopefully make the situation more clear.]
Here's an example. Targets remote1 and remote2 each involve using ssh to do something time-consuming on a remote system, and then copying the results locally. Target local1 is built locally, and isn't a time sink. target-under-work depends on all three of the above.
local1: local1.c Makefile
remote1: local1
scp local1 remote-host:/tmp/
ssh remote-host /tmp/local1 some-args # takes a long time
scp remote-host:/tmp/local1.out remote1
remote2: local1
scp local1 other-host:/tmp/
ssh other-host /tmp/local1 other-args # takes a long time
scp other-host:/tmp/local1.out remote2
target-under-work: local1 remote1 remote2
do-something-with remote1,remote2
Now, when I just run make target-under-work, it's going to run the recipes for remote1 and remote2. However, the local copies of those files are 'good enough' for my testing, so I don't want them run every time. Once things go into production, they will be run every time, but while I'm developing target-under-work, I just want to use the copies already built, and I can rebuild them daily (or whatever) for the necessary testing granularity.
The above is over-simplified; there are multiple steps and targets that depend on remote1 and/or remote2. I see how I can get the effect I want by making them order-only prerequisites -- but that would mean changing the dependency list of every target that has them as prerequisites, rather than making a single change to remote1 and remote2 so I can use some variable from the command line to tell their recipes 'pretend this has been built, don't actually build it if there's already a copy.'
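(For reference, a sketch of the order-only variant being discussed, using the targets from the example above; everything after the | must exist, but being newer never triggers a rebuild of the target:)

```make
# Variant 1: consumers list remote1/remote2 as order-only, so a newer
# remote file no longer forces target-under-work to rebuild:
target-under-work: local1 | remote1 remote2
	do-something-with remote1,remote2

# Variant 2: the remote target lists local1 as order-only, so rebuilding
# local1 no longer triggers the slow ssh/scp recipe once remote1 exists:
remote1: | local1
	scp local1 remote-host:/tmp/
	ssh remote-host /tmp/local1 some-args
	scp remote-host:/tmp/local1.out remote1
```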
I hope this makes my question more clear.
No, this early exit make feature does not exist.
Note that your problem is probably under-specified because you don't explain what behaviour you want when a slow target does not exist yet.
Let's assume that the slow targets listed in nobuild shall be rebuilt if and only if they don't exist. Instead of using make functions to early exit their recipe you could use make functions to "hide" their list of prerequisites. This way, if they already exist, they will not be rebuilt, even if they are outdated. The only subtlety is that you will need the second expansion to use the $# automatic variable in the lists of prerequisites. In the following example slow (your remoteX) depends on fast1 (your local1). fast2 (your target-under-work) depends on fast1 and slow:
host> cat Makefile
# Expands as empty string if $(1) exists and
# listed in $(nobuild). Else expands as $(2).
# $(1): target
# $(2): prerequisites
define HIDE_IF_NOBUILD
$(if $(wildcard $(1)),$(if $(filter $(1),$(nobuild)),,$(2)),$(2))
endef
nobuild :=
fast1:
@echo 'build $@'
@touch $@
fast2: fast1 slow
@echo 'build $@'
@touch $@
.SECONDEXPANSION:
slow: $$(call HIDE_IF_NOBUILD,$$@,fast1)
@echo 'build $@'
@touch $@
# Case 1: slow target not listed in nobuild and not existing
host> rm -f slow; touch fast1; make fast2
build slow
build fast2
# Case 2: slow target not listed in nobuild and existing and outdated
host> touch slow; sleep 2; touch fast1; make fast2
build slow
build fast2
# Case 3: slow target listed in nobuild and not existing
host> rm -f slow; touch fast1; make nobuild="slow" fast2
build slow
build fast2
# Case 4: slow target listed in nobuild and existing and outdated
host> touch slow; sleep 2; touch fast1; make nobuild="slow" fast2
build fast2
Our project uses Makefiles with the following type of rule for each multi-directory sub-make:
DIRS = lib audio conf parser control
all: $(DIRS)
@for DIR in $(DIRS); \
do \
( cd $$DIR; $(MAKE) $(MFLAGS) all; ) \
done
If any file fails to compile in one of the leaf makes, the build stops in that directory - but the rest of the make continues. How do I set up these Makefiles so the first error at any level will stop the entire make?
Thanks
From the for loops section of the bash manual:
The return status is the exit status of the last command that
executes.
So, you do not need to capture return statuses. You need your recipe to fail if any sub-make fails:
DIRS = lib audio conf parser control
all: $(DIRS)
@for DIR in $(DIRS); do \
$(MAKE) -C $$DIR $(MFLAGS) all || exit 1; \
done
But it would be much better to have individual recipes per directory, instead of a single for loop:
DIRS = lib audio conf parser control
all: $(DIRS)
.PHONY: all $(DIRS)
$(DIRS):
$(MAKE) -C $@ $(MFLAGS) all
This way, if a sub-make fails, it is the complete rule's recipe that fails and make stops. Note the .PHONY special target, in this case it is needed because you want to run the recipe, even if the directory already exists.
There is another advantage with this structure: if you run make in parallel mode (make -j N) it will launch several sub-makes simultaneously instead of just one with the for loop. And each sub-make, in turn, will launch several recipes in parallel, up to N jobs. On a multi-processor or multi-core architecture the speed-up factor can be significant.
But this advantage can become a drawback if your project is not parallel safe, that is, if the order of processing of your directories matters and is not properly defined in the makefiles. If you are in this situation you can add a:
.NOTPARALLEL:
special target at the beginning of your main makefile to tell make not to run anything in parallel. But it would be better to explicitly define the inter-directory dependencies. And if you do not know how to do this, please ask another question.
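For instance, assuming (hypothetically) that lib must be finished before audio and parser, and parser before control, the ordering can be declared on the phony directory targets themselves, so it survives -j:

```make
DIRS = lib audio conf parser control

.PHONY: all $(DIRS)
all: $(DIRS)

$(DIRS):
	$(MAKE) -C $@ $(MFLAGS) all

# Hypothetical inter-directory ordering; adjust to the real dependencies.
audio parser: lib
control: parser
```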
I found the answer in this question's answers: I have to rewrite the loop to capture the return status of each sub-make.
I am using GNU make, where I have a top level makefile, which invokes
another makefile, for different types of builds, like:
LIST_OF_TYPES := 32 64 ...
tgt-name: deps
$(foreach i,$(LIST_OF_TYPES), \
$(MAKE) -f $(MY_MAKEFILE) ARCH=$i mylib;)
when running with a higher j factor like -j100, one of the builds fails, but
the return value is still 0, so I cannot tell whether the build really worked!
Is there anything wrong with the way I'm using the foreach construct?
Or is it just the higher -j combined with foreach that is causing problems?
The top-level makefile you have listed is safe under -j (though badly sub-optimal). After the expansion of the $(foreach...), make effectively sees:
tgt-name: deps
$(MAKE) -f $(MY_MAKEFILE) ARCH=32 mylib; $(MAKE) -f $(MY_MAKEFILE) ARCH=64 mylib; ...
When one of these sub-makes fails (due to mis-handling of -j), the failure is not reported to the top level make. You need to use something like:
tgt-name: deps
$(MAKE) -f $(MY_MAKEFILE) ARCH=32 mylib && $(MAKE) -f $(MY_MAKEFILE) ARCH=64 mylib && ... && :
The && tells the shell to run each following command only if the previous one succeeded, so the chain stops and reports failure as soon as one sub-make fails. (The : at the end is the shell builtin that does nothing but exit successfully; it simplifies writing the $(foreach ...), which can then blindly append && after every sub-make.)
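The shell behaviour being relied on here can be checked directly (a tiny stand-alone illustration, not part of the makefile):

```shell
#!/bin/sh
# '&&' stops the chain at the first failure and propagates that failure;
# the trailing ':' supplies a final, always-successful command.
true && true && : && echo "chain ok"          # prints: chain ok
if false && echo "never printed" && :; then
    echo "unexpected"
else
    echo "chain failed as expected"           # prints: chain failed as expected
fi
```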
EDIT:
The proper way to this of course is to use make dependencies, not serial processing in the shell. You want make to see something like:
tgt-name: # default target
.PHONY: target-32
target-32: deps
$(MAKE) -f ${MY_MAKEFILE} arch=32 mylib
.PHONY: target-64
target-64: deps
$(MAKE) -f ${MY_MAKEFILE} arch=64 mylib
# etc. etc.
tgt-name: target-32 target-64
@echo $@ Success
This is -j safe. Under -j make will make all of the target-% at the same time. Nice. (Though in this case it seems that your $MY_MAKEFILE is not -j safe (naughty!).) A few macros to replace the boiler plate:
LIST_OF_TYPES := 32 64 ...
LIST_OF_TARGETS := $(addprefix target-,${LIST_OF_TYPES})
tgt-name: # default target
.PHONY: ${LIST_OF_TARGETS}
${LIST_OF_TARGETS}: target-%: deps # Static Pattern Rule will (conveniently) set $*
$(MAKE) -f ${MY_MAKEFILE} arch=$* mylib
tgt-name: ${LIST_OF_TARGETS}
@echo $@ Success
P.S. I suspect that you should be marking tgt-name as .PHONY
I've never seen this kind of use of foreach, but it seems to work for you. Usually I use a bash for loop
tgt-name: deps
for i in $(LIST_OF_TYPES); do $(MAKE) -f $(MY_MAKEFILE) ARCH=$$i mylib; done
But this is not the problem, since in either case the makes are run sequentially, AFAICS.
Since you build a library, there's the possible clash of two objects being inserted into an archive simultaneously. When this happens the archive might become corrupted.
As with every parallel execution, be it make jobs or threads, you must protect the shared resources (the archive in your case). You must add the objects at the end of the library build or protect the insertion with some lock (e.g. man lockfile or similar).
There might be other problems, of course. Look out for the simultaneous access to shared resources (object files, archives, ...) or incomplete defined dependencies.
Update:
foreach seems not to be a problem. Set LIST_OF_TYPES to a single type (e.g. 32 only) and then do a make -j100 mylib. If the problem is with the building of a single archive, it will fail with only one type as well.
You can also test with make ARCH=32 -j100 mylib. This should show the problem too.
I want to create a Makefile (in a parent dir) to call several other Makefiles (in sub dirs) such that I can build several binaries (one per project sub dir) by invoking just the one parent Makefile.
My research has been hampered by finding loads of stuff on recursive Makefiles, but I think that is for when you are trying to build several directories' Makefiles into a single binary?
Maybe what I want to do is better handled by a shell script perhaps invoking make in each sub directory in turn, but I thought a Makefile might be a more elegant solution?
any pointers gratefully received
PS using linux and the GNU tool chain
The for loop solution given in the first answer above actually shouldn't be used, as-is. In that method, if one of your sub-makes fails the build will not fail (as it should) but continue on with the other directories. Not only that, but the final result of the build will be whatever the exit code of the last subdirectory make was, so if that succeeded the build succeeds even if some other subdirectory failed. Not good!!
You could fix it by doing something like this:
all:
@for dir in $(SUBDIRS); \
do \
$(MAKE) -C $${dir} $@ || exit $$?; \
done
However now you have the opposite problem: if you run "make -k" (continue even if there are errors) then this won't be obeyed in this situation. It'll still exit on failure.
An additional issue with both of the above methods is that they serialize the building of all subdirectories, so if you enable parallel builds (with make's -j option) that will only happen within a single subdirectory, instead of across all subdirectories.
Eregrith and sinsedrix have solutions that are closer to what you want, although FYI you should never, ever use "make" when you are invoking a recursive make invocation. As in johfel's example you should ALWAYS use $(MAKE).
Something like this is what you want:
SUBDIRS = subdir1 subdir2 subdir3 ...
all: $(addprefix all.,$(SUBDIRS))
all.%:
@$(MAKE) -C '$*' '$(basename $@)'
.PHONY: $(addprefix all.,$(SUBDIRS))
And of course you can add more stanzas like this for other targets such as "install" or whatever. There are even more fancy ways to handle building subdirectories with any generic target, but this requires a bit more detail.
If you want to support parallel builds you may need to declare dependencies at this level to avoid parallel builds of directories which depend on each other. For example in the above if you cannot build subdir3 until after both subdir1 and subdir2 are finished (but it's OK for subdir1 and subdir2 to build in parallel) then you can add something like this to your makefile:
all.subdir3 : all.subdir1 all.subdir2
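One such fancier form, sketched here with invented names, uses $(eval) and $(call) to stamp out the all./clean./install. pattern for every top-level target at once:

```make
SUBDIRS    := subdir1 subdir2 subdir3
TOPTARGETS := all clean install

# $(call subdir-rules,<target>) generates:
#   <target>: <target>.subdir1 <target>.subdir2 ...
# plus a static pattern rule giving each <target>.<dir> its recipe.
define subdir-rules
.PHONY: $(1) $(addprefix $(1).,$(SUBDIRS))
$(1): $(addprefix $(1).,$(SUBDIRS))
$(addprefix $(1).,$(SUBDIRS)): $(1).%:
	@$$(MAKE) -C $$* $(1)
endef

$(foreach t,$(TOPTARGETS),$(eval $(call subdir-rules,$(t))))
```

Per-directory dependencies such as all.subdir3: all.subdir1 all.subdir2 can then be added alongside, exactly as above.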
You can call targets in subdirectory makefiles via
all:
$(MAKE) -C subdirectory1 $@
$(MAKE) -C subdirectory2 $@
...
or better
SUBDIRS=subd1 subd2 subd3
all:
@for dir in $(SUBDIRS); \
do \
$(MAKE) -C $${dir} $@; \
done
You could indeed use CMake to generate the Makefiles automatically from a CMakeLists.txt configuration file. A simple sample project with multiple subdirectories, executables, and a shared library is a good way to get started.
Each makefile can have several targets; that's still true with recursive makefiles. Usually it's written:
all: target1 target2 target3
target1:
$(MAKE) -C subdir
Then run make all.