After years of not using make, I find myself needing it again, the GNU version now. I'm pretty sure I should be able to do what I want, but I haven't figured out how, or found an answer with Google, etc.
I'm trying to create a test target which will execute my program a number of times, saving the results in a log file. Some tests should cause my program to abort. Unfortunately, my makefile aborts on the first test which leads to an error. I have something like:
# Makefile
#
test:
	myProg -h > test.log # Display help
	myProg good_input >> test.log # should run fine
	myProg bad_input1 >> test.log # Error 1
	myProg bad_input2 >> test.log # Error 2
With the above, make quits after the bad_input1 run, never getting to the bad_input2 run.
Put a - before the command, e.g.:
-myProg bad_input >> test.log
GNU make will then ignore the process exit code.
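Applied to the test target above, a sketch might look like this (only the commands that are expected to fail get the leading dash):
test:
	myProg -h > test.log # Display help
	myProg good_input >> test.log # should run fine
	-myProg bad_input1 >> test.log # Error 1, exit code ignored
	-myProg bad_input2 >> test.log # Error 2, exit code ignored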
Try running it as
make -i
or
make --ignore-errors
which ignores all errors in all rules.
I'd also suggest running it as
make -i 2>&1 | tee results
so that you get all the errors and output in one place and can see what happened.
Just blindly continuing on after an error is probably not what you're really wanting to do. The make utility, by its very nature, is usually relying on successful completion of previous commands so that it can use the artefacts of those commands as pre-requisites for commands to be executed later on.
BTW I'd highly recommend getting a copy of the O'Reilly book on make. The first edition has an excellent overview of the basic nature of make, specifically its backward-chaining behaviour. Later editions are still good, but the first ed. still has the clearest explanation of what's actually happening. In fact, my own copy is the first thing I pass to people who come to me with "WTF?" questions about make! (-:
The proper solution, if you want to require those commands to fail, is to negate their exit codes with !.
# Makefile
#
test:
	myProg -h > test.log # Display help
	myProg good_input >> test.log # should run fine
	! myProg bad_input1 >> test.log # Error 1
	! myProg bad_input2 >> test.log # Error 2
Now, it is an error to succeed in those two cases.
Related
This question asked how to conceal a segmentation fault in a bash script, and @yellowantphil provided a solution: pipe the output anywhere.
Now I am looking through plenty of repositories handed in by my students. I need to check whether the source code in each repository compiles, and if so, whether the executable works properly.
And I've observed that some of their executables fail with a 'segmentation fault' message. Since I want to hide most details in my script, I'd prefer not to show any of this annoying output (which is how I found the question mentioned above). However, I still need to know when that happens (to skip a loop iteration). What should I do now?
A minimum reproduction of this problem:
Create any executable that causes 'segmentation fault'
Place it in a Bash script:
#!/bin/bash
./segfaultgen >/dev/null 2>&1 | :
echo $?
With that | : (mentioned in @yellowantphil's answer), the echo prints 0, which does not tell the truth. However, the error message appears if | : is commented out. I've also tried appending || echo 1 before the | :; that doesn't work either :(
By default a pipeline's exit status is that of its last command, so it only fails if the right-hand side fails. Enable pipefail so the pipeline fails if any command in it fails.
(It's a good option in general. I enable it by default in all of my scripts.)
#!/bin/bash
set -o pipefail
./segfaultgen &>/dev/null | :
echo $?
Also, since you're using bash, &>/dev/null is shorter.
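With pipefail set, $? reflects the crash (typically 139, i.e. 128+SIGSEGV), so the checking loop can branch on it. A minimal sketch, where the repository names and executable name are just placeholders:
#!/bin/bash
set -o pipefail
for repo in repo1 repo2; do          # placeholder repository names
    "./$repo/a.out" &>/dev/null | :  # hide the crash message, as above
    if [ $? -ne 0 ]; then
        continue                     # crashed (or otherwise failed): skip this repository
    fi
    # ... further checks on $repo ...
done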
I have two calls that produce very different output:
Call one:
dmake -m _makefile_.m 1>> _results.out 2>> _results.out
Call two:
dmake -m _makefile_.m >2&1 >_results.out
dmake does a compile of sorts and the first call correctly inlines compile errors whereas the second one puts all the compile errors at the top. I was always of the opinion that both of these were equivalent. What exactly are the differences between these two calls? Is this because of buffering?
>2&1 is not the right syntax; it will redirect the output of the dmake command to a file called 2 (running it in background), then attempt to run a command called 1 with its output redirected to _results.out.
You want:
dmake -m _makefile_.m >_results.out 2>&1
Change > to >> if you want to append to the file.
I'm not 100% sure whether this will intersperse stdout and stderr the way you want.
I ran into this useful tip that if you're working on files a lot and you want them to build automatically you run:
watch make
And it re-runs make every couple seconds and things get built.
However ... it seems to swallow all the output all the time. I think it could be smarter - perhaps show a stream of output but suppress the "Nothing to be done for 'all'" message, so that if nothing is built the output doesn't scroll.
A few shell script approaches come to mind using a loop and grep ... but perhaps something more elegant is out there? Has anyone seen something?
Using classic GNU make and inotifywait, without interval-based polling:
watch:
	while true; do \
		$(MAKE) $(WATCHMAKE); \
		inotifywait -qre close_write .; \
	done
This way make is triggered on every file write in the current directory tree. You can specify the target by running
make watch WATCHMAKE=foo
This one-liner should do it:
while true; do make --silent; sleep 1; done
It'll run make once every second, and it will only print output when it actually does something.
Here is a one-liner:
while true; do make -q || make; sleep 0.5; done
Using make -q || make instead of just make will only run the build if there is something to be done and will not output any messages otherwise.
You can add this as a rule to your project's Makefile:
watch:
	while true; do $(MAKE) -q || $(MAKE); sleep 0.5; done
And then use make watch to invoke it.
This technique will prevent Make from filling a terminal with "make: Nothing to be done for TARGET" messages.
It also does not retain a bunch of open file descriptors like some file-watcher solutions, which can lead to ulimit errors.
How about
# In the makefile:
.PHONY: continuously
continuously:
	while true; do make 1>/dev/null; sleep 3; done
?
This way you can run
make continuously
and only get output if something is wrong.
Twitter Bootstrap uses the watchr ruby gem for this.
https://github.com/twbs/bootstrap/blob/v2.3.2/Makefile
https://github.com/mynyml/watchr
Edit:
After two years the watchr project seems not to be maintained anymore. Please look for another solution among the answers. Personally, if the goal is only to have better output, I would recommend the answer from wch here.
I do it this way in my Makefile:
watch:
	(while true; do make build.log; sleep 1; done) | grep -v 'make\[1\]'

build.log: ./src/*
	thecompiler | tee build.log
So, it will only build when my source code is newer than my build.log, and the "grep -v" stuff removes some unnecessary make output.
This shell script uses make itself to detect changes with the -q flag, and then does a full rebuild if and only if there are changes.
#!/bin/sh
while true;
do
    if ! make -q "$@";
    then
        echo "#-> Starting build: `date`"
        make "$@";
        echo "#-> Build complete."
    fi
    sleep 0.5;
done
It does not have any dependencies apart from make.
You can pass normal make arguments (such as -C mydir) to it as they are passed on to the make command.
As requested in the question it is silent if there is nothing to build but does not swallow output when there is.
You can keep this script handy as e.g. ~/bin/watch-make to use across multiple projects.
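For example, assuming the script has been saved there and made executable:
chmod +x ~/bin/watch-make
watch-make -C mydir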
There are several automatic build systems that do this and more - basically when you check a change into version control they will make/build - look for Continuous Integration
Simple ones are TeamCity and Hudson
@Dobes Vandermeer -- I have a script named "mkall" that runs make in every subdirectory. I could assign that script as a cron job to run every five minutes, or one minute, or thirty seconds. Then, to see the output, I'd redirect gcc results (in each individual makefile) to a log in each subdirectory.
Could something like that work for you?
It could be made fairly elaborate so as to avoid makes that do nothing. For example, the script could save the modification time of each source file and only run make when one of them changes.
You could try using something like inotify-tools. It will let you watch a directory and run a command when a file is changed or saved or any of the other events that inotify can watch for. A simple script that does a watch for save and kicks off a make when a file is saved would probably be useful.
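A minimal sketch along those lines, using inotifywait from inotify-tools (it watches the current directory tree and rebuilds on every write):
#!/bin/sh
# Rebuild whenever a file under the current directory is written.
while inotifywait -qre close_write .; do
    make
done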
You could change your make file to output a growl (OS X) or notify-send (Linux) notification. For me in Ubuntu, that would show a notification bubble in the upper-right corner of my screen.
Then you'd only notice the build when it fails.
You'd probably want to set watch to only cycle as fast as those notifications can display (so they don't pile up).
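On Linux, a minimal sketch of the idea could be a wrapper target like this (the target name notify-build and the sub-target all are just examples):
notify-build:
	$(MAKE) all || notify-send "Build failed"
Then point watch at that target: watch make notify-build.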
Bit of archaeology, but I still find this question useful. Here is a modified version of @otto's answer, using fswatch (for the Mac):

TARGET ?= foo
all:
	@fswatch -1 . | read i && make $(TARGET)
	@make -ski TARGET=$(TARGET)

%: %.go
	@go build $<
	@./$@
Is there a way to log the commands make invokes to compile a program? I know of the parameters -n and -p, but they either just print the if-conditions without resolving them, or they don't work when the Makefile recursively calls make itself.
This
make SHELL="sh -x -e"
will cause the shell (which make invokes to evaluate shell constructs) to print information about what it's doing, letting you see how any conditionals in shell commands are being evaluated.
The -e is necessary to ensure that errors in a Makefile target will be properly detected and a non-zero process exit code will be returned.
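To keep a record of the trace, this combines naturally with the tee approach mentioned below:
make SHELL="sh -x -e" 2>&1 | tee build.log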
You could try to log execve calls with strace
strace -f -e execve make ...
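If you'd rather keep the trace in a file than have it mixed into the build output, strace's -o option does that (the file name here is arbitrary):
strace -f -e execve -o make-exec.log make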
Make writes each command it executes to the console, so
make 2>&1 | tee build.log
will create a log file named build.log as a side effect which contains the same stuff written to the screen. (man tee for more details.)
2>&1 combines standard output and errors into one stream. If you didn't include that, regular output would go into the log file but errors would only go to the console. (make only writes to stderr when a command returns an error code.)
If you want to suppress output entirely in favor of logging to a file, it's even simpler:
make > build.log 2>&1
Because these just capture console output they work just fine with recursive make.
You might find what you're looking for in the annotated build logs produced by SparkBuild. That includes the commands of every rule executed in the build, whether or not "@" was used to prevent make from printing the command line.
Your comment about if-conditions is a bit confusing though: are you talking about shell constructs, or make constructs? If you mean shell constructs, I don't think there's any way for you to get exactly what you're after except by using strace as others described. If you mean make constructs, then the output you see is the result of the resolved conditional expression.
Have you tried the -d (debug) parameter?
Note that you can control the amount of information with --debug instead. For instance, --debug=a (same as -d), or --debug=b to show only basic information...
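Since the debug output can get very verbose, it may be worth capturing it to a file, e.g.:
make --debug=b 2>&1 | tee make-debug.log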
I have a Makefile that starts by running a tool before applying the build rules (which this tool writes for me). If this tool, which is a Python script, exits with a non-zero status code, I want GNU Make to stop right there and not go on with building the program.
Currently, I do something like this (top level, i.e. column 1):
$(info Generating build rules...)
$(shell python collect_sources.py)
include BuildRules.mk
But this does not stop make if collect_sources.py exits with a status code of 1. This also captures the standard output of collect_sources.py but does not print it out, so I have the feeling I'm looking in the wrong direction.
If at all possible, the solution should even work when a simple MS-DOS shell is the standard system shell.
Any suggestion?
There might be a better way, but I tried the following and it works:
$(if $(shell if your_command; then echo ok; fi), , $(error your_command failed))
Here I did assume that your_command does not give any output, but it shouldn't be hard to work around such a situation.
Edit: To make it work with the default Windows shell (and probably any decent shell) you could write your_command && echo ok instead of the if within the shell function. I do not think this is possible for (older) DOS shells. For these you probably want to adapt your_command or write a wrapper script to print something on error (or success).
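Applied to the Makefile from the question, a sketch might look like this (it assumes, as above, that collect_sources.py itself prints nothing to stdout and reports errors on stderr):
$(info Generating build rules...)
$(if $(shell python collect_sources.py && echo ok),,$(error collect_sources.py failed))
include BuildRules.mk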
Ok, here's my own solution, which is unfortunately not based on the status code of the collect_sources.py script, but which Works For Me (TM) and lets me see any output that the script produces:
SHELL_OUTPUT := $(shell python collect_sources.py 2>&1)
ifeq ($(filter error: [Errno %],$(SHELL_OUTPUT)),)
$(info $(SHELL_OUTPUT))
else
$(error $(SHELL_OUTPUT))
endif
The script is written so that any error produces an output beginning with "collect_sources: error:". Additionally, if python cannot find or execute the given script, it outputs an error message containing the message "[Errno 2]" or similar. So this little piece of code just captures the output (redirecting stderr to stdout) and searches for error messages. If none is found, it simply uses $(info) to print the output, otherwise it uses $(error), which effectively makes Make stop.
Note that the indentation in the ifeq ... endif is done with spaces. If tabs are used, Make thinks you're trying to invoke a command and complains about it.
You should use a regular target to create BuildRules.mk:
BuildRules.mk: collect_sources.py
	python $< >$@
include BuildRules.mk
This is the standard trick to use when automatically generating dependencies.
Fixing https://stackoverflow.com/a/226974/192373
.PHONY: BuildRules.mk
BuildRules.mk: collect_sources.py
	echo Generating build rules...
	python $< >$@
	$(MAKE) -f BuildRules.mk
Make sure you're not invoking make/gmake with the -k option.