I'm building tons of projects at once, and while they're being built, I'd like to do other stuff on the same machine, while being able to monitor the progress.
Is there a way to fetch current progress from configure script as generated by autoconf, and from Makefile generated by autotools?
The short answer is probably "no". But, it depends on what sort of monitoring you want. If you just want to be alerted when each step is finished, you can easily just run:
$ configure && alert-me && make && alert-me
where alert-me is some script that sends you the alert. As a concrete example, if you are using GNU Screen, you could set up monitoring on a window, then run
$ configure > config.output && echo done
When configure is done, the echo will trigger an alert on all the other windows.
If you will be doing this multiple times with each package, you could record the number of lines of output from a run of configure, and get a percentage progress report of the current run by comparing the lines of output. (That seems like a hassle.)
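For example, a rough sketch of that idea (the file names config.baseline and config.output are made up, and it assumes you captured a previous, complete run as config.baseline):

# Hypothetical progress monitor: compare the current run's output line count
# against a saved baseline from a previous, complete run.
TOTAL=$(wc -l < config.baseline)
./configure > config.output &
CONF_PID=$!
while kill -0 "$CONF_PID" 2>/dev/null; do
    DONE=$(wc -l < config.output)
    echo "configure: roughly $(( 100 * DONE / TOTAL ))% done"
    sleep 5
done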
Related
Is there a way to tell make to show me a list of inputs to a target and which ones are triggering a rebuild because they have been modified?
Yes, you can use the -d option to make to print detailed debugging information about how it processes a target's dependencies. For example, if you run make -d, make will show which prerequisites it considers, which of them are newer than the target, and why each target is (or is not) being remade.
Additionally, you can use the -n option to show what make would do without actually executing any commands. This is useful to see which targets would be rebuilt because their prerequisites have been modified. For example, if you run make -n, make will print the commands it would execute, without running them.
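For example (the target name all is just a placeholder):

$ make -n all               # dry run: print the commands without executing them
$ make -d all 2>&1 | less   # debug trace: shows which prerequisites are considered and why targets are remade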
I'm working on a cluster and using custom toolkits (more specifically, the SRA Toolkit). In order to use it, I first had to download it (and unpack it) into a specific folder in my directory.
Then I had to modify .bashrc to include the following segment:
# User specific aliases and functions
export PATH="$PATH:/home/MYNAME/APPS/SRATOOLS/bin"
Now I can use the SRA Toolkit commands on the bash command line, e.g.
prefetch SR111111
My question is: can I use those tools without modifying my .bashrc?
The reason I want to do that is that I wrote a .sh script that takes a long time to run, and my cluster has the Sun Grid Engine job management system. I submitted my script to it, only to see the job fail, because an SRA Toolkit command I used was unrecognized.
EDIT (1):
I moved my prefetch command to a different location, and its path now looks like:
/MYNAME/APPS/SRA_TOOLS/bin
which is different from how it is in .bashrc:
export PATH="$PATH:/home/MYNAME/APPS/SRATOOLS/bin"
I then ran what @Darkman suggested (put IF/THEN/ELSE/FI and under ELSE put the export). The outcome is that it didn't find the SRA tools on the existing PATH (because the path in .bashrc is different), but it found them under ELSE, and the script is running normally. Weird. It works on my job management system.
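For reference, a minimal sketch of that guard, placed at the top of the submitted .sh script (the path is a placeholder; adjust it to wherever the toolkit actually lives):

# Use prefetch if it is already on PATH, otherwise add the toolkit's bin
# directory explicitly so batch jobs work without sourcing .bashrc.
if command -v prefetch >/dev/null 2>&1; then
    echo "prefetch already on PATH"
else
    export PATH="$PATH:/home/MYNAME/APPS/SRA_TOOLS/bin"
fi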
Thanks everybody.
I am trying to run my test cases, nearly 40k of them, with the script below.
Just showing part of the script:
#!/bin/bash
# RUN script
echo "please run with: nice nohup ./run_script"
# working directory where script is stored
WORKING_DIR=$(pwd)
# temp directory to build and run the cmake ctest
BUILD_DIR=${BUILD_DIR:-/localtemp/build}
# clean and recreate the build and result directories
rm -rf "$BUILD_DIR"
mkdir -p "$BUILD_DIR"
mkdir -p "$BUILD_DIR/../result"
cmake -G Ninja # (remaining cmake arguments omitted from this excerpt)
ninja test
Note: I am using parallel threading with 6 cores to run my test cases.
On the first attempt all of them pass, which is expected, as I fixed all the bugs in my test cases.
But sometimes, when I re-run the same script from scratch, some of the 40k test cases fail. If I run those failing test cases separately (one by one), they pass perfectly.
So I assumed that rm -rf takes some time to delete the old binaries and symbols of all the cases (about 40 GB of files), and that I need to wait for the deletion to complete before running my script again. Should I add some delay after the rm -rf command in my script?
I read somewhere that rm -rf returns only once it has finished its work, and only then does the next command execute. But my scenario makes it look as though rm -rf is running in the background.
Meaning: I should not start a new run immediately after stopping the earlier one; I need to give the rm -rf command in the script some time to delete the old output from the earlier run (introduce a delay) and only run the ninja command after that. Is that true?
Edit: It appears that, while the general question is easy to answer (i.e. rm does not run in the background), the concrete problem you are having is more complex and requires some potentially OS-level debugging. Nobody on StackOverflow will be able to do this for you :D
For your particular problem, I'd suggest using a different strategy than deleting the files, to potentially avoid this hassle: to make sure that your build directory is pristine, you could use a new temporary directory whenever you want to do a clean build.
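A minimal sketch of that idea, assuming the sources live in $WORKING_DIR, CMake 3.13+ (for -S/-B), and a test target generated by CTest; the mktemp template is just an example:

# Build in a fresh, uniquely named directory instead of deleting and reusing
# the old one; stale trees can be cleaned up later, out of the critical path.
BUILD_DIR=$(mktemp -d /localtemp/build.XXXXXX)
cmake -G Ninja -S "$WORKING_DIR" -B "$BUILD_DIR"
ninja -C "$BUILD_DIR" test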
Initial answer:
The rm command will not run "in the background" or in any way concurrently with other commands in your script unless you explicitly tell it to (e.g. by putting & at the end of the command). Once it has finished (i.e. by the time the next command in your bash script executes), the files have been deleted. Thus your problem probably lies somewhere else.
That being said, there may be differences in behaviour if, for example, you are using network shares, FUSE filesystems or any other mechanism that alters how or when your filesystem reacts to deletion request by the operating system.
It's a little bit more complicated than that. rm will not run in the background. What is sure is that the deletion is done at least logically (so from the user's viewpoint things are deleted); on some OSes the low-level deletion may still be finishing inside the kernel even after the command has terminated (the kernel then really cleans up some internal structures behind your back). But that should not normally interfere with your script. Your problem is surely elsewhere.
It could also be that some processes still hold the files open even though they seem deleted, so the disk space is not freed as you expected. You would then have to wait for those processes to finish so that the kernel can really clean up the files.
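On Linux you can check for that with lsof, for example:

# List open files whose on-disk link count is 0, i.e. deleted but still held
# open by some process; their space is only freed when those processes exit.
lsof +L1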
I ran into this useful tip: if you're working on files a lot and you want them to build automatically, you run:
watch make
And it re-runs make every couple of seconds, and things get built.
However... it seems to swallow all the output all the time. I think it could be smarter: perhaps show a stream of output but suppress "Nothing to be done for 'all'", so that if nothing is built the output doesn't scroll.
A few shell script approaches come to mind using a loop and grep ... but perhaps something more elegant is out there? Has anyone seen something?
Using classic GNU make and inotifywait, without interval-based polling:
watch:
	while true; do \
		$(MAKE) $(WATCHMAKE); \
		inotifywait -qre close_write .; \
	done
This way make is triggered on every file write in the current directory tree. You can specify the target by running
make watch WATCHMAKE=foo
This one-liner should do it:
while true; do make --silent; sleep 1; done
It'll run make once every second, and it will only print output when it actually does something.
Here is a one-liner:
while true; do make -q || make; sleep 0.5; done
Using make -q || make instead of just make will only run the build if there is something to be done and will not output any messages otherwise.
You can add this as a rule to your project's Makefile:
watch:
	while true; do $(MAKE) -q || $(MAKE); sleep 0.5; done
And then use make watch to invoke it.
This technique will prevent Make from filling a terminal with "make: Nothing to be done for TARGET" messages.
It also does not retain a bunch of open file descriptors like some file-watcher solutions, which can lead to ulimit errors.
How about
# In the makefile:
.PHONY: continuously
continuously:
	while true; do make 1>/dev/null; sleep 3; done
?
This way you can run
make continuously
and only get output if something is wrong.
Twitter Bootstrap uses the watchr ruby gem for this.
https://github.com/twbs/bootstrap/blob/v2.3.2/Makefile
https://github.com/mynyml/watchr
Edit:
After two years the watchr project seems not to be maintained anymore. Please look for another solution among the answers. Personally, if the goal is only to have better output, I would recommend the answer from wch here.
I do it this way in my Makefile:
watch:
	(while true; do make build.log; sleep 1; done) | grep -v 'make\[1\]'
build.log: ./src/*
	thecompiler | tee build.log
So, it will only build when my source code is newer than my build.log, and the "grep -v" stuff removes some unnecessary make output.
This shell script uses make itself to detect changes with the -q flag, and then does a full rebuild if and only if there are changes.
#!/bin/sh
while true; do
    if ! make -q "$@"; then
        echo "#-> Starting build: `date`"
        make "$@"
        echo "#-> Build complete."
    fi
    sleep 0.5
done
It does not have any dependencies apart from make.
You can pass normal make arguments (such as -C mydir) to it as they are passed on to the make command.
As requested in the question it is silent if there is nothing to build but does not swallow output when there is.
You can keep this script handy as e.g. ~/bin/watch-make to use across multiple projects.
There are several automatic build systems that do this and more: basically, when you check a change into version control they will make/build. Look for Continuous Integration.
Simple ones are TeamCity and Hudson.
@Dobes Vandermeer: I have a script named "mkall" that runs make in every subdirectory. I could assign that script as a cron job to run every five minutes, or one minute, or thirty seconds. Then, to see the output, I'd redirect the gcc results (in each individual makefile) to a log in each subdirectory.
Could something like that work for you?
It could be made fairly elaborate so as to avoid makes that do nothing. For example, the script could record the modification time of each source file and run make only when one of them changes.
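A rough sketch of that idea, assuming GNU find and a src directory of .c files (both are placeholders):

#!/bin/sh
# Rebuild only when some source file is newer than the stamp file.
[ -e .last-build ] || touch .last-build
while true; do
    if [ -n "$(find src -name '*.c' -newer .last-build -print -quit)" ]; then
        make && touch .last-build
    fi
    sleep 30
done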
You could try using something like inotify-tools. It will let you watch a directory and run a command when a file is changed or saved, or on any of the other events that inotify can watch for. A simple script that watches for saves and kicks off a make when a file is saved would probably be useful.
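For instance, something along these lines (assuming inotify-tools is installed):

#!/bin/sh
# Block until any file under the current tree is written, then rebuild.
while inotifywait -qre close_write .; do
    make
done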
You could change your makefile to output a growl (OS X) or notify-send (Linux) notification. For me on Ubuntu, that would show a notification bubble in the upper-right corner of my screen.
Then you'd only notice the build when it fails.
You'd probably want to set watch to only cycle as fast as those notifications can display (so they don't pile up).
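For the Linux case that could be as simple as the following (the message text is arbitrary):

# Pop a desktop notification only when the build fails.
make || notify-send "Build failed" "make exited with an error"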
A bit of archaeology, but I still find this question useful. Here is a modified version of @otto's answer, using fswatch (for the Mac):
TARGET ?= foo
all:
	@fswatch -1 . | read i && make $(TARGET)
	@make -ski TARGET=$(TARGET)
%: %.go
	@go build $<
	@./$@
I am looking for a tool that will help me to compile a history of certain code metrics for a given project.
The project is stored inside a mercurial repository and has about a hundred revisions. I am looking for something that:
checks out each revision
computes the metrics and stores them somewhere with an identifier of the revision
does the same with the next revisions
For a start, counting SLOC would be sufficient, but it would also be nice to analyze the number of tests, test coverage, etc.
I know such things are usually handled by a CI server; however, I am solo on this project and thus haven't bothered to set one up (I'd like to use TeamCity, but I really didn't see the benefit of doing so in the beginning). If I set up my CI server now, could it handle that?
Following jitter's suggestion, I have written a small bash script (run inside Cygwin) that uses sloccount for counting the source lines. The output is simply dumped to a text file:
#!/bin/bash
COUNT=0 #startrev
STOPATREV=98 # stop revision
until [ "$COUNT" -gt "$STOPATREV" ]; do
    hg update -C -r "$COUNT" >> sloc.log # update and log
    echo "" >> sloc.log # echo a newline
    rm -r lib # don't count the lib folder
    sloccount /thisIsTheSourcePath | print_sum
    let COUNT=COUNT+1
done
You could write e.g. a shell script which:
checks out the first version
runs sloccount on it (saving the output)
checks out the next version
repeats steps 2 and 3 for every remaining revision
Or look into Ohloh, which seems to have Mercurial support by now.
Otherwise I don't know of any SCM statistics tool that supports Mercurial. As Mercurial is relatively young (around since 2005), it might take some time until such "secondary use cases" are supported. (HINT: maybe provide an hgstat library yourself, as there are for svn and CVS.)
If it were me writing software to do that kind of thing, I think I'd dump metrics results for the project into a single file, and revision control that. Then the "historical analysis" tool would have to pull out old versions of just that one file, rather than having to pull out every old copy of the entire repository and rerun all the tests every time.
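A rough sketch of that approach with Mercurial and sloccount (metrics.txt and src are placeholder names, and metrics.txt would need to be hg add-ed once up front):

#!/bin/bash
# Recompute the metrics into a single tracked file and commit only that file;
# the file's own revision history then becomes the metrics history.
sloccount src > metrics.txt
hg commit -m "update code metrics" metrics.txt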