I tried looking for answers to this question, so I apologize in advance if this is a duplicate of a question I didn't find. Also sorry that I cannot directly provide the code that I am working with (it would require a lot of environmental dependencies, anyway).
I have a sequence of actions, which all depend on the success of the previous actions, and also don't need to be repeated unless they are out of date. A make solution seemed like the proper one. I've come up with a solution that does almost all of it. Here is the sequence of steps I am trying to replicate, with the output of each step listed below its input:
ZIP file
extract to package/
package/directory/*.comp
execute uncomp.py to create a .uncomp file from a .comp file
Everything works fine up to this point
package/directory/*.uncomp
For *.uncomp files, execute script1 to produce a .html file
For *_ext.uncomp files, execute script2 to produce numbered *_ext.##.png file(s)
Multiple numbered files (_ext.0.png, _ext.1.png, _ext.2.png) are possible, and may not be present at the time make is run. However, make should know that they are the output of the previous step, and only run this recipe if these files (a) don't exist or (b) any are older than the *_ext.uncomp file.
I have put together a Makefile which does almost what I'm looking for, except that it delegates all of the last portion (numbered files) to a shell script which I could program to look at file times, but that defeats the purpose of using make in the first place, in my opinion.
Environment
Debian 8.8 (x86)
GNU Make 4.0
Built for x86_64-pc-linux-gnu
My Question
What rules and recipes can I use to tell GNU make about the relationship between the *_ext.uncomp files and the _ext.##.png files, so that the recipe only runs when necessary (and reports 'Target is up-to-date' when every .png file is at least as new as the *_ext.uncomp file), doesn't also apply to plain *.uncomp files, and still works if there are no .png files in the output yet?
I also need to express the relationship between the non-_ext files and their corresponding HTML files, so that script1 only runs when the HTML file is out of date or doesn't exist. This rule should ignore _ext.uncomp files.
Any other advice on my Makefile would also be appreciated, because I am not overly familiar with it.
Generalized contents of my current Makefile
.PHONY : all
all : package package/directory/*.uncomp
	./process $^

%.comp.uncomp : %.comp package
	python uncomp.py $<

package : *.zip
	rm -rf package/
	unzip *.zip -d package/
Contents of the process script
This script should no longer exist if all the goals of the question are met (make will handle everything). It works great, but it always processes .uncomp files no matter what, even if the output from them already exists and is newer than the source.
#!/bin/bash
if [ $# -lt 2 ]; then
    echo "$0 expects at least 2 arguments"
    exit 1
fi

# Discard the first argument, it's always 'package'
shift

# Iterate over each of the remaining arguments
while [ $# -gt 0 ]; do
    if [[ $1 == *_ext.uncomp ]] ; then
        python script2 $1
    elif [[ $1 == *.uncomp ]] ; then
        python script1 $1
    else
        echo "Warning: Unknown file type: $1"
    fi
    shift
done
I learned a lot about GNU make trying to get this to work. I discovered that the solution to my problem was in not overthinking it.
The most important realization was that I didn't need make to track all of the numbered output files, only the first one: if the first one is out of date or missing, they all will be, and the script re-extracts all of them, so a 1:1 relationship was all I needed to express.
I also found out that GNU make 3.82 and later matches pattern rules in "shortest stem first" order instead of definition order. To keep my Makefile compatible with both behaviours, I made sure to define the rules with the most specific patterns (shortest stems) first.
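As a generic illustration of that ordering point (the file names here are hypothetical, not from my Makefile): if both of the following patterns could build foo_ext.txt, make 3.82+ picks the first one because its stem (foo) is shorter, while older versions pick whichever is defined first, so listing the more specific rule first keeps both behaviours in agreement.

%_ext.txt : %_ext.uncomp
	specific-recipe $<

%.txt : %.uncomp
	generic-recipe $<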
After that it was a matter of setting up some implicit rules and telling make what files to expect to find. The concept is a little backwards to my way of thinking, which is why I had some trouble at first: look for this file that doesn't exist yet; now, here's a way to make it from a file that does exist. The end result, fully functional:
PACKAGE := package
COMP := .comp
UNCOMP := .comp.uncomp
PNG0 := .comp.0.png
TXT := .comp.txt
SUFFIX := _ext
COMPFILES = $(wildcard $(PACKAGE)/subdir/*$(COMP))
UNCOMPFILES = $(COMPFILES:$(COMP)=$(UNCOMP))
SUFFIXFILES = $(filter %$(SUFFIX)$(UNCOMP),$(UNCOMPFILES))
PNGFILES = $(SUFFIXFILES:$(UNCOMP)=$(PNG0))
NOSUFFIXFILES = $(filter-out %$(SUFFIX)$(UNCOMP),$(UNCOMPFILES))
TXTFILES = $(NOSUFFIXFILES:$(UNCOMP)=$(TXT))
.PHONY : all
all : pngs txts htaccess

.PHONY : txts
txts : $(TXTFILES)

.PHONY : pngs
pngs : $(PNGFILES)

.PHONY : uncomp
uncomp : $(UNCOMPFILES)
	make pngs
	make txts

.PHONY : htaccess
htaccess : $(PACKAGE)/.htaccess

%$(SUFFIX)$(PNG0) : %$(SUFFIX)$(UNCOMP)
## Ignore failures when extracting PNG files
	-python script1.py $<

%$(TXT) : %$(UNCOMP)
## Ignore failures when dumping TXT files
	-python script2.py $< > $@

%$(UNCOMP) : %$(COMP)
## Ignore decompression failure
	-python uncomp.py $<

$(PACKAGE)/.htaccess : .htaccess | $(PACKAGE)
	cp .htaccess $(PACKAGE)/

$(PACKAGE) : *.zip
	rm -rf $(PACKAGE)/
	unzip *.zip -d $(PACKAGE)/
	make uncomp

.PHONY : clean
clean :
	rm -rf $(PACKAGE)/
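A further refinement would be to use $(MAKE) for the recursive calls (the make pngs, make txts and make uncomp lines above) so that command-line options such as -j and -n propagate to the sub-make:

uncomp : $(UNCOMPFILES)
	$(MAKE) pngs
	$(MAKE) txts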
Related
I have a number of makefiles that build and run tests. I would like to create a script that makes each one and notes whether the tests passed or failed. Though I can determine test status within each make file, I am having trouble finding a way to communicate that status to the caller of the make command.
My first thought is to somehow affect the return value of the make command, though this does not seem possible. Can I do this? Is there some other form of communication I can use to express the test status to the bash script that will be calling make? Perhaps by using environment variables?
Thanks
Edit: It seems that I cannot set the return code for make, so for the time being I will have to make the tests, run them in the calling script instead of the makefile, note the results, and then manually run a make clean. I appreciate everyone's assistance.
Make will only return one of the following, according to the source:
#define MAKE_SUCCESS 0
#define MAKE_TROUBLE 1
#define MAKE_FAILURE 2
MAKE_SUCCESS and MAKE_FAILURE should be self-explanatory; MAKE_TROUBLE is only returned when running make with the -q option.
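For example, the -q ("question") mode builds nothing and reports purely through the exit code (the target name below is just a placeholder):

make -q some-target; echo $?
# prints 0 if everything is up to date, 1 (MAKE_TROUBLE) if something
# would need rebuilding, 2 on an error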
That's pretty much all you get from make, there doesn't seem to be any way to set the return code.
The default behavior of make is to return failure and abandon any remaining targets if something failed.
for directory in */; do
    if ( cd "$directory" && make ); then
        echo "$0: Make in $directory succeeded" >&2
    else
        echo "$0: Make in $directory failed" >&2
    fi
done
Simply ensure each test leaves its result in a file unique to that test. Least friction will be to create test.pass if the test passes, otherwise create test.fail. At the end of the test run, gather up all the files and generate a report.
This scheme has two advantages that I can see:
You can run the tests in parallel (You do use the -jn flag, don't you? (hint: it's the whole point of make))
You can use the result files to record whether the test needs to be re-run (standard culling of work (hint: this is nearly the whole point of make))
Assuming the tests are called test-blah where blah is any string, and that you have a list of tests in ${tests} (after all, you have just built them, so it's not an unreasonable assumption).
A sketch:
fail = ${@:%.pass=%.fail}

test-passes := $(addsuffix .pass,${tests})

${test-passes}: test-%.pass: test-%
	rm -f ${fail}
	touch $@
	$< || mv $@ ${fail}

.PHONY: all
all: ${test-passes}
all:
	# Count the .pass files, and the .fail files
	echo '$(words $(wildcard *.pass)) passes'
	echo '$(words $(wildcard *.fail)) failures'
In more detail:
test-passes := $(addsuffix .pass,${tests})
If ${tests} contains test-1 test-2 (say), then ${test-passes} will be test-1.pass test-2.pass
${test-passes}: test-%.pass: test-%
You've just gotta love static pattern rules.
This says that the file test-1.pass depends on the file test-1. Similarly for test-2.pass.
If test-1.pass does not exist, or is older than the executable test-1, then make will run the recipe.
rm -f ${fail}
${fail} expands to the target with pass replaced by fail, or test-1.fail in this case. The -f ensures the rm returns no error in the case that the file does not exist.
touch $@ — create the .pass file
$< || mv $@ ${fail}
Here we run the executable
If it returns success, our work is finished
If it fails, the output file is deleted, and test-1.fail is put in its place
Either way, make sees no error
.PHONY: all — The all target is symbolic and is not a file
all: ${test-passes}
Before we run the recipe for all, we build and run all the tests
echo '$(words $(wildcard *.pass)) passes'
Before passing the text to the shell, make expands $(wildcard) into a list of pass files, and then counts the files with $(words). The shell gets the command echo 4 passes (say)
You run this with
$ make -j9 all
Make will keep 9 jobs running at once — lovely if you have 8 CPUs.
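If the calling script also needs a non-zero exit status when any test failed (which is what the original question was after), one way, sketched on top of the rules above, is a small report target that fails when any .fail file is left behind:

.PHONY: report
report: ${test-passes}
	@echo '$(words $(wildcard *.pass)) passes, $(words $(wildcard *.fail)) failures'
	@! ls *.fail >/dev/null 2>&1

Running make -j9 report then exits non-zero if any test produced a .fail file, which the wrapper script can check directly.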
I have a few txt files in a directory. I want to run a shell script only on the files which have been modified. How can I achieve this through a Makefile?
I have written the following, but it builds all the txt files in the directory. It would be great to get some pointers on this.
FILENAME := $(wildcard dir/txts/*/*.txt)

.PHONY: build-txt
build-txt: $(FILENAME)
	sh build-txts.sh $^
I'm guessing you want something like this:
files := $(wildcard dir/txts/*/*.txt)
dummies := $(addprefix .mod_,$(files))

all: $(dummies)

$(dummies): .mod_% : %
	sh build-txts.sh $^
	touch $@
For any new text file, it will run the script and create a .mod counterpart. For any existing text file, it checks whether the timestamp is newer than the .mod file's timestamp; if it is, it runs the script and then touches the .mod (making the .mod newer than the text). For any text file that has not been modified since the last make, the .mod file will be newer and the script will not run. Notice that the .mod files are NOT PHONY targets; they are dummy files that exist solely to mark when the text file was last processed. You can also stick them in a dummy directory for easy cleaning (a sketch of that variation follows).
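A minimal sketch of that dummy-directory variation (the .mods/ directory name is just an assumption):

files   := $(wildcard dir/txts/*/*.txt)
dummies := $(addprefix .mods/,$(files))

all: $(dummies)

$(dummies): .mods/% : %
	mkdir -p $(dir $@)
	sh build-txts.sh $<
	touch $@

.PHONY: clean
clean:
	rm -rf .mods

rm -rf .mods then forces every text file to be reprocessed on the next run.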
If you need something where you don't want to rebuild the text files by default on a fresh checkout, or your script's criteria aren't based on timestamps, you need something a bit more tricky:
files := $(wildcard dir/txts/*/*.txt)
md5s := $(addprefix .md5_,$(files))

all: $(md5s)

.PHONY: $(md5s)
$(md5s):
	( [ -e $@ ] && md5sum -c $@ ) || \
	( sh build-txts.sh $(@:.md5_=) && md5sum $(@:.md5_=) > $@ )
Here, you run the rule for every text file unconditionally, and you use the shell to determine whether the file is out of date. If the .md5 file does not exist, or the stored md5 sum no longer matches, it runs the script and then regenerates the .md5 file. Because the rules are phony, they always run for all the .md5 files regardless of whether they already exist.
Using this method, you could submit the .md5 files to your repository, and it would only run the script on those files whose md5 sum changed after checkout.
Is there a way to let make determine the number of files to be recompiled before actually compiling? The problem is this: Consider having a quite big project with hundreds of source files. It would very convenient to have a rough idea of how long compilation will take, but to know that, one needs to know the number of files to be compiled.
The general answer is no, because your build could generate files which themselves are inputs to other rules which generate more files. And so on. However if a rough answer is good enough you can try the --dry-run flag. From the GNU make documentation...
“No-op”. Causes make to print the recipes that are needed to make the targets up to date, but not actually execute them. Note that some recipes are still executed, even with this flag (see How the MAKE Variable Works). Also any recipes needed to update included makefiles are still executed (see How Makefiles Are Remade).
As you can see, despite its name even the --dry-run flag will change the state of your build.
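Still, if a rough number is all you need, you can count compile commands in the dry-run output. The grep pattern below is only a guess; it has to match whatever your compile recipes actually print:

make -n | grep -c ' -c '

Each matching line is one compiler invocation a real run would perform, which gives a ballpark figure for how long compilation will take.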
"make -n" will do the dry run. But you can't get the list of files to be rebuilt. May be you can write shell script to store the last modified time of files and get the list of files.
I think I found a decent solution for Unix. Here SRC is your list of source files, HDR your headers, and DEP the dependency files (something like DEP := $(OBJ:.o=.d)).
isInDepFile += $(shell grep -q $(modifiedFile) $(depFile) 1>&2 2> /dev/null && echo $(depFile))
filesToCompile =
checkDepFiles = $(foreach depFile,$(DEP),$(eval filesToCompile += $(isInDepFile))) $(thinOutDepFiles)
thinOutDepFiles = $(foreach fileToCompile,$(filesToCompile),$(eval DEP = $(filter-out $(fileToCompile),$(DEP))))

countFilesToCompile: $(SRC) $(HDR)
	$(eval modifiedFiles = $?)
	$(foreach modifiedFile,$(modifiedFiles),$(call checkDepFiles))
	$(eval numOfFilesToCompile = $(words $(filesToCompile)))
	$(eval numDepFiles = $(words $(DEP)))
	$(eval NumSRCFiles = $(words $(SRC)))
	@echo $(NumSRCFiles) sources
	@echo $(numDepFiles) files to leave
	@echo $(numOfFilesToCompile) files to compile
	@touch $@
This first generates a list of modified files within your source and header files lists. Then for each modified file it checks all dependency files for its filename. If a dependency file contains the current file we are looking at, it is added to the list of filesToCompile. It is also removed from the list of dependency files to avoid duplication.
This can be invoked in the main building rule of your project. The advantage of that over the dry run is that it gives you a simple number to work with.
I'm trying to write a Makefile that automatically calls BibTeX on files that match a specific wildcard but don't exist when I first run Make. Specifically, I have the following:
.FORCE:

all: pdf

foo=something

lat: *.tex
	pdflatex $(foo).tex

pdf: lat
	open $(foo).pdf &

%.aux: .FORCE
	bibtex $@

bib: lat $(foo)?.aux
	pdflatex $(foo).tex
	pdflatex $(foo).tex
	open $(foo).pdf &
What I want to happen is that the following will occur when I run make bib:
pdflatex will be called on $(foo).tex, generating files $(foo)1.aux, $(foo)2.aux, etc.
bibtex will be called on $(foo)1.aux, then $(foo)2.aux, etc.
pdflatex will be called twice on $(foo).tex
open will be called on $(foo).pdf
However, this doesn't happen: in particular, it looks as if Make evaluates the prerequisites $(foo)?.aux up-front, at a point where the files $(foo)1.aux, $(foo)2.aux, etc. don't exist. As a result, BibTeX is never called on them. If I rerun make bib, however, things work, because the files now exist after being created on the previous run.
Question: Is forcing Make to re-evaluate prerequisites for a target the right way to fix this? If so, how can I get it to re-evaluate the prerequisites for bib after running pdflatex as part of lat? If not, how can I achieve what I want please?
What I do in my Makefile for LaTeX files is rename the targets.
That way, you can have different target names, depending on which phase has been used to create them. This is according to the spirit of make's pattern rules, which assume that files with different contents also have different extensions. So I have rules like this:
%.aux1 : %.tex
	rm -f $*.aux
	pdflatex -draftmode $*
	mv -f $*.aux $@

%.bbl : %.aux1
	cp -pf $< $*.aux
	bibtex $* || : > $@

%.aux2 : %.bbl
	cp -pf $*.aux1 $*.aux
	pdflatex -draftmode $*
	mv -f $*.aux $@

%-tex.pdf: %.aux2
	cp -pf $< $*.aux
	pdflatex -jobname $*-tex $*
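With these rules, asking for the final PDF (document name mydoc assumed) drives the whole chain, and only the stages whose inputs have changed are re-run:

make mydoc-tex.pdf
# mydoc.tex -> mydoc.aux1 -> mydoc.bbl -> mydoc.aux2 -> mydoc-tex.pdf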
You can't do this in a completely straightforward way, since make fundamentally assumes that one run through a target's commands will update the target. That is, there's no way in principle that you can tell make that ‘you need to run these commands twice’.
You can try to get round this with (admirably clever) tricks such as @reinerpost suggests, but a problem with that general approach is that sometimes/often a single run of BibTeX (or makeindex, or whatever) is sufficient.
After having tried various types of tricks in the past, what I generally do here is to make a command list which explicitly includes two BibTeX calls where necessary:
%.bbl: %.aux
	bibtex $(@:.bbl=)
	if grep -q Rerun $(@:.bbl=.log) >/dev/null; then \
		bibtex $(@:.bbl=); \
	fi
That command list re-runs BibTeX if the log file includes the ‘Label(s) may have changed. Rerun to get cross-references right’ message.
To be honest, what I actually do is just the single line bibtex $(@:.bbl=). When I'm writing a document, I inevitably re-run make so many times that the list of references comes out correct very quickly. This means that this target doesn't work for the ‘recreate the final version from a clean directory’ case, but that's sufficiently rare that I tend not to obsess about it.
Whenever I catch myself re-solving this problem, I now recognise that I'm trying to push water up-hill because I'm bored writing this document, so I go and do something else.
I just wanted to share an alternative solution: Using submake processes:
If so, how can I get it to re-evaluate the prerequisites for bib after running pdflatex as part of lat?
You can somewhat achieve that by adding make lat to the recipe for bib. This will start a new make process for that target; the sub-make doesn't know anything about its parent's targets/prerequisites. (Such a concept is usually used when a huge project is built from different smaller projects, each of which has its own makefile.)
This can be done in multiple layers (although it will be confusing):
bib: $(foo)?.aux lat check_for_bib

check_for_bib:
	if grep -q Rerun $(foo).log >/dev/null; then make bib; fi

pdf: lat check_for_bib
	open $(foo).pdf &
Note that I had to change the order of some prerequisites. The pseudo-code would be something like:
latex compilation
while log suggests update:
    update aux
    latex compilation
Each iteration of the while loop will take place in a separate make process.
I have some XML source files which need to be processed by a Ruby script to create generated C# files before my main target can be built. The start-up cost of the script is much greater than the time to process each file, so it's quite inefficient to process them one by one as is usually done in makefiles. What I want to do is collect them all together and pass them as a list to the script, which executes just before the main target is updated.
What I have now is something like:
_generated_/%.xml.cs : %.cs
	#execute ruby script to generate .cs file

out.exe : a.cs b.cs _generated_/e.xml.cs ....
	#compile .cs files
I came across the idea of using eval for this so if the files which are processed have a suffix of .s and yield a file with a suffix of .t when processed by the script my idea was to do this:
%.xml : _generated_/%.xml.cs
	$(eval SOURCE_FILES += $<)
However, this rule won't trigger unless there is a shell command after the eval (echo will do); I guess that's because make knows that simply calling a function can't possibly produce a file. Another idea I had was to collect the list of files into a temporary file instead:
.INTERMEDIATE: source_list.txt

%.xml : _generated_/%.xml.cs
	echo $< >> source_list.txt
While these will probably both work, I am wondering if there is a better way to do this.
Update:
What I ended up doing was something like the following; the @ prefix on the eval line fools make into believing that a shell command is being executed.
_generated_/%.xml.cs : %.cs
	@ $(eval DIRTY_XML += $(<))

out.exe : a.cs b.cs _generated_/e.xml.cs ....
	# Create generated cs files
	# by running ruby script with DIRTY_XML as input
	# Compile all .cs files
Use an empty file called, say, ruby-marker, to indicate that all of the xml files have been processed. Its modification time can be compared to those of the "x.s" files. Then use $? to select only the prerequisite "x.s" files that have changed since the last run of the ruby script.
main-target: ruby-marker
	whatever...

ruby-marker: foo.s bar.s baz.s
	ruby-script $?
	@touch $@
You could use a $(filter) on $?; $? is the list of prerequisites that are newer than the target.
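A sketch of that, building on the marker rule above; the script is added as a prerequisite here purely for illustration:

ruby-marker: ruby-script foo.s bar.s baz.s
	ruby-script $(filter %.s,$?)
	@touch $@

$(filter %.s,$?) keeps only the changed .s files, so a prerequisite that isn't an input file (here the script itself) never gets passed to ruby-script as if it were one.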