Makefile with a remote URL dependency

I know a network-dependent Makefile is poor form, so please don't lecture me.
I have a Makefile where I want to grab the latest copy of some tweets; doing this as network-efficiently as possible is a plus.
webconverger.txt:
	wget http://greptweet.com/u/webconverger/webconverger.txt -O webconverger.txt
However, make obviously thinks the file is up to date after the first run. Is there a hack to put in the dependency section so that a wget -q -N checks whether webconverger.txt really is up to date?

refresh:
	wget -q -N http://greptweet.com/u/webconverger/webconverger.txt -O webconverger.txt

all: refresh webconverger.txt
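(A sketch of one common workaround, not from the original question: give the file a phony-style prerequisite so its recipe runs on every invocation, and let wget -N decide whether a download is actually needed. Note that -N and -O conflict in recent wget versions, so this sketch relies on the remote filename matching the target.)

```make
# FORCE is an empty target with no recipe, so it is always considered
# out of date; webconverger.txt's recipe therefore runs on every make,
# but wget -N only re-downloads when the server copy is newer.
webconverger.txt: FORCE
	wget -q -N http://greptweet.com/u/webconverger/webconverger.txt

FORCE:
```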

Related

Force run a recipe (--assume-old=target)

I want to force a recipe for "output.file", even though it is up-to-date.
I have already tried make --assume-old=output.file output.file, but it does not run the recipe again.
In case you are curious about the use case:
I want to use this together with --dry-run to find out which commands produce a target.
I ended up hiding the file to run make --dry-run output.file, but I was hoping for something more elegant, and for future reference when debugging Makefiles.
I think you're misunderstanding what that option does: it does exactly the opposite of what you hoped; from the man page:
-o file, --old-file=file, --assume-old=file
Do not remake the file file even if it is older than its dependencies,
and do not remake anything on account of changes in file.
Essentially the file is treated as very old and its rules are
ignored.
You want output.file to be remade, so using -o is clearly not what you want.
There is no option in GNU make to say "always rebuild this target". What you can do is tell make to pretend that some prerequisite of the target you want to be rebuilt has been updated. See this option:
-W file, --what-if=file, --new-file=file, --assume-new=file
Pretend that the target file has just been modified. When used
with the -n flag, this shows you what would happen if you were to
modify that file. Without -n, it is almost the same as running a
touch command on the given file before running make, except that
the modification time is changed only in the imagination of make.
Say, for example, your output.file had a prerequisite input.file. Then if you run:
make -n -W input.file
it will show you what rules it would run, which would include rebuilding output.file.
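(For concreteness, a minimal sketch with hypothetical file names:)

```make
# A two-file example: output.file is built from input.file.
output.file: input.file
	cp input.file output.file
```

With this Makefile, running "make -n -W input.file" prints the cp command without executing it, even when output.file is already up to date.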

Alias target names in Makefiles or alternative approach

I have a directory into which I download some data, on which I then want a script to perform some operations. When the directory does not yet exist, I want make to download the files automatically, and I would also like to be able to initiate the download by running make fetch. However, I don't want the download process to start every time I run make for the other targets.
My attempt so far:
.PHONY: fetch
fetch: data/www.example.org

foo.dat: | data/www.example.org
	somescript > foo.dat

data/www.example.org:
	wget --mirror --directory-prefix=data http://www.example.org
This has almost the desired effect, except that when I run make fetch it gives me:
$ make fetch
make: Nothing to be done for `fetch'.
I could, of course, simply copy the recipe of data/www.example.org to fetch; however, as the recipe may get more complex in the future, I would like to avoid that kind of solution.
EDIT:
It seems kind of obvious in hindsight, but for some reason I didn't think of using variables. I think my mind kept searching for some kind of "neater" way. But that does solve it for me, so thanks to l0b0 for pointing it out to me:
FETCH = wget --mirror --directory-prefix=data http://www.example.org

.PHONY: fetch
fetch:
	$(FETCH)

foo.dat: | data/www.example.org
	somescript > foo.dat

data/www.example.org:
	$(FETCH)

Download a file with curl, keep the original filename, and add a timestamp or similar

I started playing around with curl a few days ago. For some reason I couldn't figure out how to achieve the following.
I would like to keep the original filename with the output options
-O -J
AND append some kind of variable, like a timestamp or the source path. This would avoid the file-overwriting issue and also make further work with the file easier.
Here are a few specs about my setup
Win7 x64
curl 7.37.0
Admin user
just the command line, no PHP or other scripting
no scripting solutions please, I need this command in a single line for Selenium automation
C:\> curl --retry 1 --cert c:\certificate.cer --url https://blabla.com/pdf-file --user username:password --cookie-jar cookie.txt -v -O -J
I've played around with various things i found online like
-o %(file %H:%s)
-O -J/%date%
-o $(%H) bla#1.pdf
but it always just prints out the file named like "%(file.pdf" or some other mangled name. I guess this points to escaping and quoting issues, but I can't find it right now.
Prefered output
originalfilename_date_time_source.pdf
Let me know if you get a solution for this.

Is there a smarter alternative to "watch make"?

I ran into this useful tip: if you're working on files a lot and you want them to build automatically, you run:
watch make
And it re-runs make every couple seconds and things get built.
However ... it seems to swallow all the output all the time. I think it could be smarter: perhaps show a stream of output but suppress "Nothing to be done for 'all'", so that the output doesn't scroll when nothing is built.
A few shell script approaches come to mind using a loop and grep ... but perhaps something more elegant is out there? Has anyone seen something?
Using classic gnu make and inotifywait, without interval-based polling:
watch:
	while true; do \
		$(MAKE) $(WATCHMAKE); \
		inotifywait -qre close_write .; \
	done
This way make is triggered on every file write in the current directory tree. You can specify the target by running
make watch WATCHMAKE=foo
This one-liner should do it:
while true; do make --silent; sleep 1; done
It'll run make once every second, and it will only print output when it actually does something.
Here is a one-liner:
while true; do make -q || make; sleep 0.5; done
Using make -q || make instead of just make will only run the build if there is something to be done and will not output any messages otherwise.
You can add this as a rule to your project's Makefile:
watch:
	while true; do $(MAKE) -q || $(MAKE); sleep 0.5; done
And then use make watch to invoke it.
This technique will prevent Make from filling a terminal with "make: Nothing to be done for TARGET" messages.
It also does not retain a bunch of open file descriptors like some file-watcher solutions, which can lead to ulimit errors.
How about
# In the makefile:
.PHONY: continuously
continuously:
	while true; do make 1>/dev/null; sleep 3; done
?
This way you can run
make continuously
and only get output if something is wrong.
Twitter Bootstrap uses the watchr ruby gem for this.
https://github.com/twbs/bootstrap/blob/v2.3.2/Makefile
https://github.com/mynyml/watchr
Edit:
After two years the watchr project seems to be unmaintained. Please look for another solution among the answers. Personally, if the goal is only to have better output, I would recommend wch's answer here.
I do it this way in my Makefile:
watch:
	(while true; do make build.log; sleep 1; done) | grep -v 'make\[1\]'

build.log: ./src/*
	thecompiler | tee build.log
So it will only build when my source code is newer than my build.log, and the grep -v removes some unnecessary make output.
This shell script uses make itself to detect changes with the -q flag, and then does a full rebuild if and only if there are changes.
#!/bin/sh
while true; do
	if ! make -q "$@"; then
		echo "#-> Starting build: `date`"
		make "$@"
		echo "#-> Build complete."
	fi
	sleep 0.5
done
It does not have any dependencies apart from make.
You can pass normal make arguments (such as -C mydir) to it as they are passed on to the make command.
As requested in the question it is silent if there is nothing to build but does not swallow output when there is.
You can keep this script handy as e.g. ~/bin/watch-make to use across multiple projects.
There are several automatic build systems that do this and more: basically, when you check a change into version control they will make/build. Look for Continuous Integration.
Simple ones are TeamCity and Hudson.
@Dobes Vandermeer -- I have a script named "mkall" that runs make in every subdirectory. I could run that script as a cron job every five minutes, every minute, or every thirty seconds. Then, to see the output, I'd redirect the gcc results (in each individual makefile) to a log in each subdirectory.
Could something like that work for you?
It could be made fairly elaborate so as to avoid builds that do nothing. For example, the script could save the modification time of each source file and run make only when one of them changes.
You could try using something like inotify-tools. It lets you watch a directory and run a command when a file is changed or saved, or on any of the other events inotify can watch for. A simple script that watches for saves and kicks off a make when a file is saved would probably be useful.
You could change your make file to output a growl (OS X) or notify-send (Linux) notification. For me in Ubuntu, that would show a notification bubble in the upper-right corner of my screen.
Then you'd only notice the build when it fails.
You'd probably want to set watch to only cycle as fast as those notifications can display (so they don't pile up).
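(A hedged sketch of this notify-on-failure loop; notify-send is assumed to be available on Linux, and the $NOTIFY variable is my own addition so another notifier can be swapped in:)

```shell
#!/bin/sh
# Notifier command is configurable; defaults to notify-send (Linux).
NOTIFY="${NOTIFY:-notify-send}"

# Run one build; pop a desktop notification only if it fails.
build_and_notify() {
    if make --silent "$@"; then
        return 0
    fi
    "$NOTIFY" "make failed"
    return 1
}

# The watch loop itself would be:
# while true; do build_and_notify; sleep 3; done
```

The sleep interval also serves the point above: keep it at least as long as a notification takes to display, so failures don't pile up.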
Bit of archaeology, but I still find this question useful. Here is a modified version of @otto's answer, using fswatch (for the Mac):

TARGET ?= foo
all:
	@fswatch -1 . | read i && make $(TARGET)
	@make -ski TARGET=$(TARGET)

%: %.go
	@go build $<
	@./$@

Is there an equivalent function in cURL for wget -N?

I was wondering if cURL lets you do the same thing as wget -N, which only downloads/overwrites a file if the existing file on the client side is older than the one on the server.
I realise this question is old now, but just in case someone else is looking for the answer: it seems that cURL can indeed achieve something similar to wget -N.
I was looking for an answer to this question today and found elsewhere that cURL has a time-condition option; if Google brings you here first, as it did me, I hope this answer saves you some searching. According to curl --help, there is a --time-cond flag:
-z, --time-cond <time> Transfer based on a time condition
The other part I needed, in order to make it like wget -N, is to make it try and preserve the timestamp. This is with the -R option.
-R, --remote-time Set the remote file's time on the local output
We can use these to download "$file" only when the current local "$file" timestamp is older than the server's file timestamp, in this form:
curl -R -o "$file" -z "$file" "$serverurl"
So, for example, I use it to check if there is a newer cygwin installer like this;
curl -R -o "C:\cygwin64\setup-x86_64.exe" -z "C:\cygwin64\setup-x86_64.exe" "https://www.cygwin.com/setup-x86_64.exe"
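(Putting the two flags together: a hedged sketch of a small wrapper that behaves like "wget -N url"; the function name and the guard for a missing local file are my own additions, not from the original answer:)

```shell
#!/bin/sh
# fetch_if_newer URL FILE
# Downloads FILE from URL only when the server copy is newer than the
# local one (-z), and stamps the local file with the remote mtime (-R).
fetch_if_newer() {
    url=$1
    file=$2
    if [ -f "$file" ]; then
        curl -fsS -R -o "$file" -z "$file" "$url"
    else
        # No local copy yet, so there is no time condition to apply.
        curl -fsS -R -o "$file" "$url"
    fi
}

# Example:
# fetch_if_newer "https://www.cygwin.com/setup-x86_64.exe" setup-x86_64.exe
```

The -f flag is an extra safety net so a server error doesn't overwrite the local file with an HTML error page.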
cURL doesn't have the same type of mirroring support that wget has built in. There is one option in cURL, though, that should make it pretty easy to implement this yourself with a little bit of wrapping logic: the --remote-time option:
-R/--remote-time
When used, this will make libcurl attempt to figure out the
timestamp of the remote file, and if that is available make the
local file get that same timestamp.
