Is there a smarter alternative to "watch make"?

I ran into a useful tip: if you're editing files a lot and want them built automatically, you run:
watch make
It re-runs make every couple of seconds and things get built.
However, it seems to swallow all the output, all the time. It could be smarter: show a stream of output, but suppress the "Nothing to be done for 'all'" message so that the output doesn't scroll when nothing is built.
A few shell-script approaches using a loop and grep come to mind, but perhaps something more elegant is out there. Has anyone seen something?

Using classic GNU make and inotifywait, without interval-based polling (note that the recipe lines must be indented with a tab):
watch:
	while true; do \
		$(MAKE) $(WATCHMAKE); \
		inotifywait -qre close_write .; \
	done
This way make is triggered on every file write in the current directory tree. You can specify the target by running
make watch WATCHMAKE=foo

This one-liner should do it:
while true; do make --silent; sleep 1; done
It'll run make once every second, and it will only print output when it actually does something.

Here is a one-liner:
while true; do make -q || make; sleep 0.5; done
Using make -q || make instead of just make will only run the build if there is something to be done and will not output any messages otherwise.
You can add this as a rule to your project's Makefile:
watch:
	while true; do $(MAKE) -q || $(MAKE); sleep 0.5; done
And then use make watch to invoke it.
This technique will prevent Make from filling a terminal with "make: Nothing to be done for TARGET" messages.
It also does not retain a bunch of open file descriptors like some file-watcher solutions, which can lead to ulimit errors.
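If you want the loop to treat a broken build differently from an out-of-date target, note that GNU make's -q exits with 0 when everything is up to date, 1 when something needs remaking, and 2 on error. A minimal sketch building on that (the sleep durations are arbitrary):
while true; do
    make -q
    case $? in
        1) make ;;              # out of date: rebuild
        2) make; sleep 5 ;;     # make itself failed: show the error, back off
    esac
    sleep 0.5
done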

How about
# In the makefile:
.PHONY: continuously
continuously:
	while true; do make 1>/dev/null; sleep 3; done
?
This way you can run
make continuously
and only get output if something is wrong (only stdout is discarded, so error messages on stderr still come through).

Twitter Bootstrap uses the watchr ruby gem for this.
https://github.com/twbs/bootstrap/blob/v2.3.2/Makefile
https://github.com/mynyml/watchr
Edit:
After two years, the watchr project no longer seems to be maintained. Please look for another solution among the answers. Personally, if the goal is only to get better output, I would recommend wch's answer here.

I do it this way in my Makefile:
watch:
	(while true; do make build.log; sleep 1; done) | grep -v 'make\[1\]'
build.log: ./src/*
	thecompiler | tee build.log
So, it will only build when my source code is newer than my build.log, and the "grep -v" stuff removes some unnecessary make output.

This shell script uses make itself to detect changes with the -q flag, and then does a full rebuild if and only if there are changes.
#!/bin/sh
while true; do
    if ! make -q "$@"; then
        echo "#-> Starting build: `date`"
        make "$@"
        echo "#-> Build complete."
    fi
    sleep 0.5
done
It does not have any dependencies apart from make.
You can pass it normal make arguments (such as -C mydir); they are forwarded to the make command.
As requested in the question it is silent if there is nothing to build but does not swallow output when there is.
You can keep this script handy as e.g. ~/bin/watch-make to use across multiple projects.

There are several automated build systems that do this and more: when you check a change into version control, they will build it for you. Look for Continuous Integration.
Simple ones are TeamCity and Hudson (now Jenkins).

@Dobes Vandermeer: I have a script named "mkall" that runs make in every subdirectory. I could assign that script as a cron job to run every five minutes, or every minute, or every thirty seconds. Then, to see the output, I'd redirect the gcc results (in each individual makefile) to a log in each subdirectory.
Could something like that work for you?
It could be made fairly elaborate so as to avoid runs of make that do nothing. For example, the script could save the modify time of each source file and run make only when one of them changes; a sketch follows.
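A minimal sketch of that modify-time idea, assuming GNU find and a C source tree (the file patterns and the five-second interval are placeholders):
#!/bin/sh
# fingerprint the source files' mtimes; rebuild only when the fingerprint changes
prev=""
while true; do
    cur=$(find . \( -name '*.c' -o -name '*.h' \) -printf '%T@ %p\n' | md5sum)
    if [ "$cur" != "$prev" ]; then
        make
        prev="$cur"
    fi
    sleep 5
done
The first iteration always runs make once, since the saved fingerprint starts out empty.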

You could try using something like inotify-tools. It lets you watch a directory and run a command when a file is changed or saved, or on any of the other events inotify can watch for. A simple script that waits for a save and then kicks off a make would probably be useful; see the sketch below.
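For example, a minimal loop built on inotifywait from inotify-tools; it blocks until something under the current directory is written, then rebuilds:
#!/bin/sh
# block until any file under . is written and closed, then run make; repeat
while inotifywait -qr -e close_write .; do
    make
done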

You could change your makefile to emit a growl (OS X) or notify-send (Linux) notification. For me on Ubuntu, that shows a notification bubble in the upper-right corner of my screen.
Then you'd only notice the build when it fails.
You'd probably want to set watch to only cycle as fast as those notifications can display (so they don't pile up).
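A rough sketch of that on Linux (mytarget is a placeholder, and the two-second sleep is arbitrary):
#!/bin/sh
# rebuild when out of date; pop a desktop notification only on failure
while true; do
    if ! make -q mytarget; then
        make mytarget || notify-send -u critical "make: build FAILED"
    fi
    sleep 2
done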

Bit of archaeology, but I still find this question useful. Here is a modified version of @otto's answer, using fswatch (for the Mac):
TARGET ?= foo
all:
	@fswatch -1 . | read i && make $(TARGET)
	@make -ski TARGET=$(TARGET)
%: %.go
	@go build $<
	@./$@

Related

On Linux, does the remove command "rm" run in the background?

I am trying to run my test cases (nearly 40k of them) with the script below.
Here is just part of the script:
#!/bin/bash
# RUN script
echo "please run with: nice nohup ./run_script"
# working directory where script is stored
WORKING_DIR=$(pwd)
# temp directory to build and run the cmake ctest
BUILD_DIR=${BUILD_DIR:-/localtemp/build}
# clean and make build directory
rm -rf $BUILD_DIR
mkdir -p $BUILD_DIR
mkdir -p $BUILD_DIR/../result
cmake -G Ninja
ninja test
Note: I am running the test cases in parallel on 6 cores.
On the first attempt all of them pass, which is expected since I fixed all the bugs in my test cases.
But sometimes, when I re-run the same script from scratch, some of the 40k test cases fail. If I then run the failing test cases separately (one by one), they pass perfectly.
So I assumed that rm -rf takes some time to delete the old binaries and symbols of all the cases (about 40 GB of files), and that I need to wait for the deletion to complete before running my script again. Should I add some delay after the rm -rf command in my script?
I read somewhere that rm -rf returns its status only once it finishes the work, and only then does the next command execute. But my scenario makes it look as if rm -rf is running in the background.
That would mean I should not start a new run immediately after stopping the earlier one; I'd need to allow some time for rm -rf to delete the old output from the earlier run (by introducing a delay) and run the ninja command after that. Is that true?
Edit: It appears that, while the general question is easy to answer (i.e. rm does not run in the background), the concrete problem you are having is more complex and requires some potentially OS-level debugging. Nobody on StackOverflow will be able to do this for you :D
For your particular problem, I'd suggest a different strategy than deleting the files, to avoid this hassle entirely: to make sure your build directory is pristine, use a new temporary directory whenever you want a clean build. A sketch follows.
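A minimal sketch of that, assuming cmake 3.13+ (for the -S/-B options) and the paths from the script above:
#!/bin/bash
# build in a brand-new directory every run instead of rm -rf'ing the old one
BUILD_DIR=$(mktemp -d /localtemp/build.XXXXXX)
cmake -G Ninja -S "$WORKING_DIR" -B "$BUILD_DIR"
ninja -C "$BUILD_DIR" test
# stale trees can be removed later (e.g. from cron) without blocking a test run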
Initial answer:
The rm command will not run "in the background" or in any way concurrently with other commands in your script unless you explicitly tell it to (e.g. by putting & at the end of the command). Once it has finished (i.e. once the next command in your bash script executes), the files have been deleted. Thus your problem probably lies somewhere else.
That being said, there may be differences in behaviour if, for example, you are using network shares, FUSE filesystems or any other mechanism that alters how or when your filesystem reacts to deletion request by the operating system.
It's a little bit more complicated than that. rm will not run in the background. What is sure is that the deletion is done at least logically (thus, from the user's viewpoint, things are deleted); this means that on some OSes the low-level deletion may still be happening in the kernel even after the command has terminated (the kernel then really cleans up some internal structures behind your back). But that should not normally interfere with your script. Your problem is surely elsewhere.
It could also be that some processes are holding onto files even though they seem deleted... in that case the disk space is not freed as you expected. You would then have to wait for those processes to finish so that the kernel really cleans up the files...

Force run a recipe (--assume-old=target)

I want to force a recipe for "output.file", even though it is up-to-date.
I have already tried make --assume-old=output.file output.file, but it does not run the recipe again.
In case you are curious: use case:
I want to use this together with --dry-run to find out the commands that produce a target.
I ended up hiding the file in order to run make --dry-run output.file, but I was hoping for something more elegant, and FMI: for future Makefile debugging.
I think you're misunderstanding what that option does: it does exactly the opposite of what you hoped; from the man page:
-o file, --old-file=file, --assume-old=file
     Do not remake the file file even if it is older than its dependen-
     cies, and do not remake anything on account of changes in file.
     Essentially the file is treated as very old and its rules are
     ignored.
You want output.file to be remade, so using -o is clearly not what you want.
There is no option in GNU make to say "always rebuild this target". What you can do is tell make to pretend that some prerequisite of the target you want to be rebuilt has been updated. See this option:
-W file, --what-if=file, --new-file=file, --assume-new=file
     Pretend that the target file has just been modified. When used
     with the -n flag, this shows you what would happen if you were to
     modify that file. Without -n, it is almost the same as running a
     touch command on the given file before running make, except that
     the modification time is changed only in the imagination of make.
Say for example your output.file had a prerequisite input.file. Then if you run:
make -n -W input.file
it will show you what rules it would run, which would include rebuilding output.file.
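To make this concrete, here is a hypothetical Makefile fragment (generate is a placeholder command):
output.file: input.file
	generate < input.file > output.file
Even when output.file is up to date, make -n -W input.file prints the generate command without executing it, because make pretends input.file has just changed.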

Can progress be fetched from "configure" and autotools-generated "Makefile"?

I'm building tons of projects at once, and while they're being built, I'd like to do other stuff on the same machine, while being able to monitor the progress.
Is there a way to fetch current progress from configure script as generated by autoconf, and from Makefile generated by autotools?
The short answer is probably "no". But it depends on what sort of monitoring you want. If you just want to be alerted when each step is finished, you can simply run:
$ configure && alert-me && make && alert-me
where alert-me is some script that sends you the alert. As a concrete example, if you are using gnu-screen, you could set up monitoring for a window, then run
$ configure > config.output && echo done
When configure is done, the echo will trigger an alert on all the other windows.
If you will be doing this multiple times with each package, you could record the number of lines of output from one run of configure, and get a percentage progress report for the current run by comparing line counts. (That seems like a hassle, but a rough sketch follows.)
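A rough sketch of that line-counting idea, assuming config.output.prev holds the complete output of a previous run of the same configure script:
#!/bin/sh
# estimate configure progress by comparing against a previous run's line count
total=$(wc -l < config.output.prev)
./configure 2>&1 | awk -v t="$total" \
    '{ printf "\rconfigure: ~%d%%", NR * 100 / t } END { print "" }'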

Is there any way for "make" to echo commands

Is there a way to have make echo commands that are manually suppressed with @ in the makefile? I can't find this in the help or the man page; they only document --quiet, which does the opposite.
The most obvious idea is to change the shell that runs the commands, e.g. modify your makefile and add SHELL = sh -xv at the top.
Another solution is to change how you call make: make SHELL='sh -xv'.
Lastly, if your Makefile is generated by cmake, call make with make VERBOSE=1.
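A toy example of the SHELL override (the hello target is hypothetical):
# Makefile
hello:
	@echo building...
Running make hello prints only "building...", while make hello SHELL='sh -x' also traces the suppressed echo command itself.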
I run into this question from time to time when using cmake, because it hides the commands. You can use make VERBOSE=true to get them printed.

How to have GNU make explicitly test for failure?

After years of not using make, I find myself needing it again, the GNU version now. I'm pretty sure I should be able to do what I want, but I haven't figured out how, or found an answer with Google, etc.
I'm trying to create a test target which will execute my program a number of times, saving the results in a log file. Some tests should cause my program to abort. Unfortunately, my makefile aborts on the first test that returns an error. I have something like:
# Makefile
#
test:
	myProg -h > test.log          # Display help
	myProg good_input >> test.log # should run fine
	myProg bad_input1 >> test.log # Error 1
	myProg bad_input2 >> test.log # Error 2
With the above, make quits after the bad_input1 run, never getting to the bad_input2 run.
Put a - before the command, e.g.:
-myProg bad_input >> test.log
GNU make will then ignore the process exit code.
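Applied to the makefile from the question, that looks like:
test:
	myProg -h > test.log           # Display help
	myProg good_input >> test.log  # should run fine
	-myProg bad_input1 >> test.log # Error 1, exit status ignored
	-myProg bad_input2 >> test.log # Error 2, exit status ignored
make will still report each ignored failure with an "Error N (ignored)" message, but it keeps going.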
Try running it as
make -i
or
make --ignore-errors
which ignores all errors in all rules.
I'd also suggest running it as
make -i 2>&1 | tee results
so that you get all the errors and output, to see what happened.
Just blindly continuing after an error is probably not what you really want to do. The make utility, by its very nature, usually relies on the successful completion of previous commands so that it can use the artefacts of those commands as prerequisites for commands to be executed later on.
BTW, I'd highly recommend getting a copy of the O'Reilly book on make. The first edition has an excellent overview of the basic nature of make, specifically its backward-chaining behaviour. Later editions are still good, but the first ed. still has the clearest explanation of what's actually happening. In fact, my own copy is the first thing I pass to people who come to me with "WTF?" questions about make! (-:
The proper solution, if you want to require the target to fail, is to negate its exit code.
# Makefile
#
test:
	myProg -h > test.log            # Display help
	myProg good_input >> test.log   # should run fine
	! myProg bad_input1 >> test.log # Error 1
	! myProg bad_input2 >> test.log # Error 2
Now, it is an error to succeed in those two cases.
