I have a Makefile with a target whose jobs run in parallel via the -j option.
all: header
mkdir -p $(STAGEDIR)
@echo STAGEDIR = $(STAGEDIR)
[ -z "$(dir_1_y)" ] || $(MAKE) -j$(HOST_NCPU) $(sort $(dir_1_y)) || exit $$?
[ -z "$(dir_1_y)" ] || $(SET_STAGEDIR)
[ -z "$(dir_2_y)" ] || $(MAKE) -j$(HOST_NCPU) $(sort $(dir_2_y)) || exit $$?
[ -z "$(dir_2_y)" ] || $(SET_STAGEDIR)
The jobs may take different amounts of time to complete. Is there any way I can ensure that all of them are done before proceeding to the next stage in the build process?
With your current recipe, all the jobs are done before the next line starts. Whether or not -j is used is completely irrelevant.
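A minimal sketch you can run to convince yourself (the targets and sleep times are hypothetical):

.PHONY: demo part1 part2
part1: ; @sleep 2; echo part1 done
part2: ; @sleep 1; echo part2 done

demo:
	$(MAKE) -j2 part1 part2
	@echo "this line starts only after both parallel jobs have finished"

Running "make demo" shows that the second recipe line never starts until the inner parallel make has completed.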
Also, I don't know if you are aware, but what you have is not a makefile. It is really a shell script disguised as a makefile, with none of the advantages of using Make. There is no reason to keep it that way.
Just forget about the "makefile" and use the shell script:
mkdir -p "$STAGEDIR"
echo "STAGEDIR = $STAGEDIR"
[ -z "$dir_1_y" ] || make -j"$HOST_NCPU" $dir_1_y || exit $?
[ -z "$dir_1_y" ] || $SET_STAGEDIR
[ -z "$dir_2_y" ] || make -j"$HOST_NCPU" $dir_2_y || exit $?
[ -z "$dir_2_y" ] || $SET_STAGEDIR
When you do this, everything will be easier to manage, including matters like the one in your original question. Make gives you specific advantages at the cost of increased difficulty compared to a shell script; if you don't use the advantages at all, there is no point in coping with the difficulties.
You may be thinking at too high a level. Make prefers you just to give it true file-based dependencies, and it will accurately do what is asked of it. So for example you may think of your build like this:
Build libs
Compile app objects
Create executable by linking app objects and libs
Simples.
Thing is, the barriers here are artificial. There is no reason why you can't be compiling one of the lib objects while at the same time compiling one of the app objects, or compiling some app objects while the libraries are being created. Fine-grained dependencies are your friend.
${objects}: %.o: %.c ; Recipe for creating $@ from $<
lib1.a: l1.o l2.o ; Recipe for creating lib out of objects
lib2.a: l3.o l4.o ; Recipe for creating lib out of objects
executable: lib1.a lib2.a o5.o ; Recipe for linking
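Spelled out as a complete, runnable makefile (the cc and ar commands are placeholders for whatever your toolchain actually uses):

objects := l1.o l2.o l3.o l4.o o5.o

$(objects): %.o: %.c
	cc -c -o $@ $<

lib1.a: l1.o l2.o
	ar rcs $@ $^

lib2.a: l3.o l4.o
	ar rcs $@ $^

# Objects go before libraries on the link line.
executable: o5.o lib1.a lib2.a
	cc -o $@ $^

Under make -j, all five compiles run concurrently, each archive is built as soon as its own objects exist, and the link waits only for its three prerequisites.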
Here we have actual files as the dependencies and targets: maximum parallelism, error checking, and culled work.
This is a noddy example, but consider a larger system where the build includes running tests. When you make a change you only want to re-run a small subset of tests, and you also want to run some tests while other code is compiling (see the sketch below). Yes, you do have to get your dependencies right, but that is the whole point of make. If you can't do that, then yes, use a batch file and build from scratch every time.
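As a hedged sketch of that idea (the test names and the .passed stamp convention are hypothetical):

tests := test_foo test_bar

check: $(tests:%=%.passed)

# A stamp file records a passing run; a test whose inputs are unchanged
# is never re-run, and independent tests run in parallel with other work.
%.passed: % executable
	./$< && touch $@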
I've tweaked Make "a bit" so I can use it as a kind of CLI for some tasks.
MAKEFLAGS += --no-builtin-rules
MAKEFLAGS += --no-builtin-variables
MAKEFLAGS += --no-print-directory
SHELL := /bin/bash
.ONESHELL:
.PHONY: project_list
project_list: all_projects_info.json
echo "Filtering project list with:" >&2
echo " PROJECT_FILTER: $(PROJECT_FILTER)" >&2
jq -r -S '.[] | select(
(.projectId | test("$(PROJECT_FILTER)"))
) | .projectId' $^ > $@
.PHONY: get_storage_info
get_storage_info: project_list
PROJECT_LIST=$$(cat $<)
$(MAKE) -f $(MKFILE) -j storage_info.json PROJECT_LIST="$$PROJECT_LIST"
all_projects_info.json:
curl -X GET https://toto/get_all_my_projects_info >$@
# here it's PHONY because we want to always rebuild it
.PHONY: storage_info.json
storage_info.json: $(STORAGE_INFO_JSON_FILES)
jq -s -S '[.[]?.items?[]?]' $(STORAGE_INFO_JSON_FILES) > $@
storage_info/:
mkdir -p $@
STORAGE_INFO_JSON_FILES=$(foreach project_name,$(PROJECT_LIST),storage_info/$(project_name).json)
$(STORAGE_INFO_JSON_FILES): storage_info/%.json: | storage_info/
curl \
-X GET \
"https://storage_api/list_s3?project=$*" \
2> /dev/null > $@
As you can see, I've got two "commands":
project_list, which lists all the projects I have access to,
get_storage_info, which lists all the buckets in those projects.
The trick is that, because I've got a lot of projects and buckets, I may want to filter like this:
make get_storage_info PROJECT_FILTER="foo"
And it will print ONLY buckets in projects with foo in their name.
It's quite handy and fast (only the first run may be slow, while it fetches all the information).
What bothers me is that I've not found a better way than calling a sub-make command (with the exact list of projects to take into account).
Is it possible to express dynamic dependencies of a target, where the dependency list results from another target?
Thanks.
I don't see anything wrong with invoking a submake. That's IMO the best way to do it, especially if you want to add -j to it.
It's not really possible to get rid of this easily. It's not that you want to express a dynamic dependency: that can be done. The problem is that you want the list of dependencies to be extracted from the results of running another rule. But that's not how make works: make always starts with the final target and works its way backwards. So, by the time you get around to building the prerequisite file, the target that depended on it has already been processed (not its recipe, of course, but all of its prerequisites).
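A minimal sketch of the timing problem (file names hypothetical): the prerequisite list is expanded when make reads the makefile, before any recipe has run, so a list generated by another rule is invisible on that first pass:

list.txt:
	echo a b c > $@

# $(shell cat ...) runs at read time; on a fresh tree list.txt does not
# exist yet, so top sees an empty extra prerequisite list even though
# list.txt itself gets built first.
top: list.txt $(shell cat list.txt 2>/dev/null)
	@echo prerequisites: $^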
Thanks for all your time and responses.
Currently we are using a nested build: multiple Makefiles, with each subdirectory having its own Makefile, all connected to a top-level Makefile. We run
make xxxxx_yyyy_defconfig
make
This builds and creates an output file, xxxxx.elf. Up to here everything works fine.
Now we have multiple defconfigs (around 50), and I want to build all configurations using one "make all" command. Is that possible?
This is not a simple case where we can write "all: prog01 prog02 prog03", as every program needs a different configuration. Configuration is achieved by running "make xxxxx_yyyy_defconfig"; the output of "make config" is the .config file, which is used during the "make" command.
Based on the .config file, many variables are exported which are used at the subdirectory level.
So How can I build multiple configurations using a single "make all" command?
Environment: Ubuntu, cross-compiling for ARM, output file xxxx.elf.
Using a script together with the Makefile I was able to solve it, but I still have to solve it using only the Makefile.
In the Makefile, add one .PHONY target:
.PHONY: all
all:
	./build_all.sh    # call the shell script
I created a shell script like this:
#!/bin/bash
echo "Make All"
for entry in $(ls conf); do
	make "$entry"
	make
	if [ $? -eq 0 ]; then
		for xxxfile in xxx*_*; do
			xxxdir=$(echo "$xxxfile" | cut -b yy-zz)
			mkdir -p "$xxxdir"
			mv "$xxxfile" "$xxxdir"/
		done
	else
		break
	fi
done
If you want to build several configurations you must do this out of tree, in separate build directories (make O=/tmp/builds/foo foo_defconfig; make -C /tmp/builds/foo), to avoid conflicts. A shell script could do this as well as a Makefile, but if you insist on using a Makefile you could try the following. It assumes your source tree is in /src/kernel and that you want to build configuration foo in /tmp/builds/foo; adapt to your needs:
$ pwd
/tmp/builds
$ cat Makefile
CONFIGS := uuuu_vvvv xxxx_yyyy ...
BUILD := /tmp/builds
KERNEL := /src/kernel

.PHONY: $(CONFIGS) all

all: $(CONFIGS)

$(CONFIGS):
	rm -rf $(BUILD)/$@
	mkdir -p $(BUILD)/$@
	$(MAKE) -C $(KERNEL) O=$(BUILD)/$@ $@_defconfig
	$(MAKE) -C $(BUILD)/$@
$ make
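Since every configuration gets its own build directory, the phony targets are independent of one another and the whole set parallelizes cleanly:

$ make -j4    # builds up to four configurations at once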
I'd like to use make to process a large number of inputs to outputs using a script (python, say.) The problem is that the script takes an incredibly short amount of time to run per input, but the initialization takes a while (python engine + library initialization.) So, a naive makefile that just has an input->output rule ends up being dominated by this initialization time. Parallelism doesn't help with that.
The python script can accept multiple inputs and outputs, as so:
python my_process -i in1 -o out1 -i in2 -o out2 ...
and this is the recommended way to use the script.
How can I make a Makefile rule that best uses my_process, by sending in out of date input-output pairs in batches? Something like parallel but aware of which outputs are out of date.
I would prefer to avoid recursive make, if at all possible.
I don't completely grasp your problem: do you really want make to operate in batches, or do you want a kind of perpetual make process that checks the file system on the fly and feeds the Python process whatever it finds necessary? If the latter, that is quite the opposite of a batch mode, and rather a pipeline.
For the batch mode there is a workaround which needs a dummy file recording the last running time. In this case we are abusing make, because this part of the makefile is a one-trick pony, which is unintuitive and against the good rules:
SOURCES := $(wildcard in*)
lastrun: $(SOURCES)
	python my_process $(foreach src,$?,-i $(src) -o $(patsubst in%,out%,$(src)))
	touch lastrun
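For illustration, assuming inputs in1 and in2 already exist:

$ make            # first run: $? is "in1 in2", so both pairs are passed
$ touch in2
$ make            # now $? is only "in2", so just that pair is reprocessed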
PS: please note that this solution has a substantial flaw: it doesn't detect updates of in-files that happen during the run of the makefile. All in all it is more advisable to simply collect the filenames of the in-files that were updated by the update process itself, and avoid make altogether.
This is what I ended up going with: a makefile with one layer of recursion.
I tried using $? both with grouped and ungrouped targets, but couldn't get the exact behavior needed. If one of the output targets was deleted, the rule would be re-run, but $? would contain some of the input files and not necessarily the correct corresponding input file. Very strange.
Makefile:
all:
INDIR=in
OUTDIR=out
INFILES=$(wildcard in/*)
OUTFILES=$(patsubst in/%, out/%, $(INFILES))
ifdef FIRST_PASS
#Discover which input-output pairs are out of date
$(shell mkdir -p $(OUTDIR); echo -n > $(OUTDIR)/.needs_rebuild)
$(OUTFILES): out/%: in/%
	@echo $@ $^ >> $(OUTDIR)/.needs_rebuild
all: $(OUTFILES)
	@echo -n
else
#Recurse to run FIRST_PASS, builds .needs_rebuild:
$(shell $(MAKE) -f $(CURDIR)/$(firstword $(MAKEFILE_LIST)) FIRST_PASS=1)
#Convert .needs_rebuild into batches, creates all_batches phony target for convenience
$(shell cat $(OUTDIR)/.needs_rebuild | ./make_batches.sh 32 > $(OUTDIR)/.batches)
-include $(OUTDIR)/.batches
batch%:
# In this rule, $^ is all inputs needing rebuild.
# The corresponding outputs can be computed using a patsubst:
	targets="$(patsubst in/%, out/%, $^)"; touch $$targets
clean:
rm -rf $(OUTDIR)
all: all_batches
endif
make_batches.sh:
#!/bin/bash
set -beEu -o pipefail
batch_size=$1
function _make_batches {
batch_num=$1
shift 1
#echo ".PHONY: batch$batch_num"
echo "all_batches: batch$batch_num"
while (( $# >= 1 )); do
read out in <<< $1
shift 1
echo "batch$batch_num: $in"
echo "$out: batch$batch_num"
done
}
export -f _make_batches
echo ".PHONY: all_batches"
parallel -N$batch_size -- _make_batches {#} {} \;
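To make the generated fragment concrete: with a batch size of 2 and two stale pairs recorded in .needs_rebuild (say "out/a in/a" and "out/b in/b", hypothetical names), .batches comes out roughly as:

.PHONY: all_batches
all_batches: batch1
batch1: in/a
out/a: batch1
batch1: in/b
out/b: batch1

so building all_batches pulls in each batchN rule, whose $^ is exactly the stale inputs of that batch.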
Unfortunately, the makefile is a one-trick pony, and there's quite a bit of boilerplate to pull this recipe off.
I have a rule in my makefile:
$(OW_GROUP_ONE_C): $(OW_GROUP_ONE_PNG)
for file in $^; \
do \
	grit $$file -ftc -fh\! -fa -gt -gz\! -gB4 -m\! -p -pzl -pu16 -o $@; \
done
It builds a single C file out of different images, which are iterated over in a for loop (they are; I checked using an echo).
The rule which depends on that is
$(OW_GROUP_ONE_O): $(OW_GROUP_ONE_C)
$(CC) $(CFLAGS) -c -o $@ $<
which is executed via
$(SPRITES_BINARY): $(NORMAL_PAL_OBJ) $(SHINY_PAL_OBJ) $(SPRITE_FRONT_OBJ) $(SPRITE_BACK_OBJ) $(NORMAL_CASTFORM_PAL_OBJ) $(SHINY_CASTFORM_PAL_OBJ) $(CASTFORM_FRONT_OBJ) $(CASTFORM_BACK_OBJ) $(OW_GROUP_ONE_O)
If I execute the rule by calling "make $(OW_GROUP_ONE_C)", everything works fine, but as soon as the rule is executed via a dependency from another rule, the loop seems to read just the first file. I again used echo to check, and the loop does accumulate all the files in the list. I don't know what the deal is; the tool (GRIT, the GBA raster image transmogrifier) should be able to handle it, but there must be some difference from calling the rule explicitly, since it works that way...
Thanks in advance for any hints!
I'm trying to create a Makefile that will download and process a file to generate targets. This is a simplified version:
default: all
.PHONY: all clean filelist.d
clean:
	@rm -fv *.date *.d
#The actual list comes from a FTP file, but let's simplify things a bit
filelist.d:
#echo "Getting updated filelist..."
#echo "LIST=$(shell date +\%M)1.date $(shell date +\%M)2.date" > $#
#echo 'all: $$(LIST)' >> $#
%.date:
	touch $@
-include filelist.d
Unfortunately the target all doesn't get updated properly on the first run, it needs to be run again to get the files. This is the output I get from it:
$ make
Getting updated filelist...
make: Nothing to be done for `default'.
$ make
Getting updated filelist...
touch 141.date
touch 142.date
touch 143.date
I'm using GNU Make 3.81 whose documentation states that it reloads the whole thing if the included files get changed. What is going wrong?
You have specified filelist.d as a .PHONY target, so make assumes that making that target doesn't actually update the file of that name. However, it does, and the new contents are used on the next run. On the first run the missing file isn't an error, because the include is prefixed with a dash.
Remove filelist.d from .PHONY. However, remember that it won't be regenerated again until you delete it (as it doesn't depend on anything).
By the same token, you should include "default" in .PHONY.
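Putting both fixes together, the corrected makefile looks like this:

default: all
.PHONY: default all clean

clean:
	@rm -fv *.date *.d

# Not phony any more: the file's own timestamp now drives regeneration.
filelist.d:
	@echo "Getting updated filelist..."
	@echo "LIST=$(shell date +\%M)1.date $(shell date +\%M)2.date" > $@
	@echo 'all: $$(LIST)' >> $@

%.date:
	touch $@

-include filelist.d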
I wrote a shell script rather than lump all this in the makefile:
#!/bin/bash
# Check whether file $1 is less than $2 days old.
[ $# -eq 2 ] || {
echo "Usage: $0 FILE DAYS" >&2
exit 2
}
FILE="$1"
DAYS="$2"
[ -f "$FILE" ] || exit 1 # doesn't exist or not a file
TODAY=$(date +%s)
TARGET=$(($TODAY - ($DAYS * 24 * 60 * 60)))
MODIFIED=$(date -r "$FILE" +%s)
(($TARGET < $MODIFIED))
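For example, to test it by hand (exit status 0 means the file exists and is recent enough):

$ ./less-than-days filelist.d 7 && echo fresh || echo missing-or-stale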
Replace X with the max number of days that can pass before filelist.d is downloaded again:
filelist.d: force-make
	./less-than-days $@ X || command-to-update
.PHONY: force-make
force-make:
Now filelist.d depends on a .PHONY target, without being a phony itself. This means filelist.d is always out of date (phony targets are always "new"), but its recipe only updates the file periodically.
Unfortunately, this requires you to write the update command as a single command, which can get unwieldy if it is long. In that case I would put it in a separate script as well.