I am writing a script that I intend to be downloadable and runnable from anywhere, like:
bash <(curl -s https://raw.githubusercontent.com/path/to/script.sh)
The command above allows me to download the script, run interactive commands (e.g. read), and, for the most part, it Just Works. I have run into an issue during the cleanup portion of my script, however, and haven't been able to discern a fix.
During cleanup I need to remove several .bkp files created by the script's execution. To do so I run rm -f **/*.bkp inside the script. When a local copy of the script is run, this works great! When run via bash/curl, however, it removes nothing. I believe this has something to do with a failure to expand the glob as a result of the way I've connected the I/O of bash and curl, but I have been unable to find a way to get everything to play nice.
How can I meet all of the following requirements?
Download and run a script from a remote resource
Ensure that the user's keyboard input is connected for use in e.g. read calls within the script
Correctly expand the glob passed to rm
Bonus points: colorize output with e.g. echo -e "\x1b[31mSome error text here\x1b[0m" (also not working, suspected to be related to the same bash/curl I/O issues)
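Worth noting for the glob: `**` only recurses into subdirectories when bash's globstar option is enabled, and it is off by default (and absent from bash 3.2), so a script that relies on it should enable it itself rather than inherit it from the invoking shell. A minimal sketch of the cleanup step, with a demo/ tree standing in for the real files:

```shell
#!/usr/bin/env bash
# Demo of the cleanup step. '**' only recurses when globstar is on,
# so turn it on inside the script instead of relying on the caller's shell.
mkdir -p demo/a/b
touch demo/a/b/old.bkp demo/one.bkp   # stand-ins for the script's .bkp files
shopt -s globstar nullglob            # nullglob: the pattern vanishes if nothing matches
rm -f -- demo/**/*.bkp
```

Without globstar, `**/*.bkp` behaves like `*/*.bkp` and silently matches nothing at deeper levels (and rm -f suppresses the resulting error).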
I'm working on a small JS project and trying to get a script to run which compiles some source files that are written in our own "language x".
To run the compiler normally you would use the command ./a.out < source.x, and it would print out success or compilation errors, etc.
In this case, I'm trying to work between two directories, using this command:
sudo ~/Documents/server/xCompiler/./a.out < ~/Documents/server/xPrograms/source.x
But this produces no output in the terminal at all and doesn't affect the output files. Is there something I'm doing wrong with my use of <? I'm planning to use it in child_process.exec within a node server later.
Any help would be appreciated, I'm a bit stumped.
Thanks.
Redirection operators (<, >, and others like them) describe operations to be performed by the shell before your command is run at all. Because these operations are performed by the shell itself, it's extremely unlikely that they would be broken in a way specific to an individual command: When they're performed, the command hasn't started yet.
There are, however, some more pertinent ways your first and second commands differ:
The second (non-working) one uses a fully-qualified path to the compiler itself. That means that the directory the compiler is found in and the current working directory where the compiler is running can differ. If the compiler looks for files in, or in locations relative to, its current working directory, this can cause a failure.
The second uses sudo to escalate privileges to run the compiler. This means you're running as a different user, with most environment variables cleared or modified (unless explicitly whitelisted in /etc/sudoers) during the switch -- and has widespread potential to break things depending on details of your compiler's expectations about its runtime environment beyond what we can reasonably be expected to diagnose here.
That first one, at least, is amenable to a solution. In shell:
xCompile() {
(cd ~/Documents/server/xCompiler && exec ./a.out "$@")
}
xCompile < ~/Documents/server/xPrograms/source.x
Using exec is a performance optimization: it offsets the cost of creating a new subshell (with the parentheses) by consuming that subshell to launch the compiler, rather than launching it as an additional child process.
When calling node's child_process.exec(), you can simply pass the desired working directory in the cwd option, so no shell function is necessary.
At the beginning of the file I do
cp ~/.bundle/config ~/.bundle/config_save
At the end of the file I restore it with
cp ~/.bundle/config_save ~/.bundle/config
and within the file I am issuing lots of different rspec spec/dir/file.rb commands.
How can I make it so that if the script is interrupted by the user (Ctrl-C), it does cleanup and restores config_save back to config?
I would like the processes to run in the foreground if possible so that I can see the actual failures themselves. Failing that, another option might be to tail logs/test.log in each repository.
Maybe I misunderstand your question, but can't you just "concatenate" the commands using &&:
cp ~/.bundle/config ~/.bundle/config_save
rspec spec/dir/file1.rb &&
rspec spec/dir/file2.rb &&
rspec spec/dir/file3.rb
cp ~/.bundle/config_save ~/.bundle/config
If one of the rspec commands fails, the remaining commands are skipped and the next (i.e. last) line is executed.
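The restore-on-interrupt part of the question can be handled with a trap: bash runs an EXIT trap both on normal completion and when Ctrl-C (SIGINT) terminates the script. A minimal sketch, with short demo paths standing in for ~/.bundle/config:

```shell
#!/usr/bin/env bash
# Demo: an EXIT trap in a child shell restores the config whether the run
# finishes normally or is killed by Ctrl-C (SIGINT exits, which fires EXIT).
echo original > config            # stand-in for ~/.bundle/config

bash <<'EOF'
cp config config_save
trap 'cp config_save config; rm -f config_save' EXIT
echo clobbered > config           # stand-in for the rspec runs mutating it
EOF
```

After the child shell exits (for any reason), config is back to its original contents and config_save is gone.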
I'd like to simplify the workflow so that rather than issuing these commands
$ make program_unittest
... output of $MAKE ...
$ ./program_unittest args
I could have my program automatically attempt to compile itself (if the source has been updated) when it is run, so that I do not have to go back and run make myself.
Here's what I'm thinking: My unit test build should first check if there is a makefile in the directory it's in, and if so, fork and exec make with the target corresponding to itself. If make determines "nothing to be done", it will continue on its way (running the unit-tests). However, if make actually performs a compilation, one of two things may happen. gcc (invoked by make) might be able to overwrite the build (an older version of which is already running) during compilation, in which case I can then perhaps exec it. If my system does not permit gcc to overwrite the program which is in use, then I have to quit the program before running make.
So this has become quite involved already. Are there perhaps more elegant solutions? Maybe I could use a bash script? How do I ascertain if make issued compilation commands or not?
Why not have make run the unit tests?
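That suggestion could look like the following hypothetical target, which rebuilds the binary only when its sources changed and then runs it (program_unittest is from the question; check and args are placeholders):

```make
# Hypothetical target: 'make check' rebuilds program_unittest if needed,
# then runs it, replacing the manual make-then-run workflow.
.PHONY: check
check: program_unittest
	./program_unittest args
```

Then `make check` is the only command to issue; make itself decides whether compilation is necessary.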
I ran into this useful tip that if you're working on files a lot and you want them to build automatically you run:
watch make
And it re-runs make every couple seconds and things get built.
However ... it seems to swallow all the output all the time. I think it could be smarter - perhaps show a stream of output but suppress "Nothing to be done for 'all'" so that if nothing is built the output doesn't scroll.
A few shell script approaches come to mind using a loop and grep ... but perhaps something more elegant is out there? Has anyone seen something?
Using classic GNU make and inotifywait, without interval-based polling:
watch:
	while true; do \
		$(MAKE) $(WATCHMAKE); \
		inotifywait -qre close_write .; \
	done
This way make is triggered on every file write in the current directory tree. You can specify the target by running
make watch WATCHMAKE=foo
This one-liner should do it:
while true; do make --silent; sleep 1; done
It'll run make once every second, and it will only print output when it actually does something.
Here is a one-liner:
while true; do make -q || make; sleep 0.5; done
Using make -q || make instead of just make will only run the build if there is something to be done and will not output any messages otherwise.
You can add this as a rule to your project's Makefile:
watch:
	while true; do $(MAKE) -q || $(MAKE); sleep 0.5; done
And then use make watch to invoke it.
This technique will prevent Make from filling a terminal with "make: Nothing to be done for TARGET" messages.
It also does not retain a bunch of open file descriptors like some file-watcher solutions, which can lead to ulimit errors.
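The -q behaviour is easy to verify: it runs no recipes and exits non-zero when some target is out of date, zero when everything is up to date. A throwaway sketch (the demo-mq directory and its Makefile are invented for the demo):

```shell
#!/bin/sh
# Demo of make -q: the exit status says whether a build is needed,
# and no recipe is actually run by -q itself.
mkdir -p demo-mq
printf 'out: in\n\tcp in out\n' > demo-mq/Makefile
echo hello > demo-mq/in
make -C demo-mq -q || echo "build needed"   # out does not exist yet
make -C demo-mq --silent                    # actually build it
make -C demo-mq -q && echo "up to date"     # nothing left to do
```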
How about
# In the makefile:
.PHONY: continuously
continuously:
	while true; do make 1>/dev/null; sleep 3; done
?
This way you can run
make continuously
and only get output if something is wrong.
Twitter Bootstrap uses the watchr Ruby gem for this.
https://github.com/twbs/bootstrap/blob/v2.3.2/Makefile
https://github.com/mynyml/watchr
Edit:
After two years the watchr project seems to be unmaintained. Please look for another solution among the answers. Personally, if the goal is only to have better output, I would recommend the answer from wch here.
I do it this way in my Makefile:
watch:
	(while true; do make build.log; sleep 1; done) | grep -v 'make\[1\]'

build.log: ./src/*
	thecompiler | tee build.log
So, it will only build when my source code is newer than my build.log, and the "grep -v" stuff removes some unnecessary make output.
This shell script uses make itself to detect changes with the -q flag, and then does a full rebuild if and only if there are changes.
#!/bin/sh
while true; do
    if ! make -q "$@"; then
        echo "#-> Starting build: `date`"
        make "$@"
        echo "#-> Build complete."
    fi
    sleep 0.5
done
It does not have any dependencies apart from make.
You can pass normal make arguments (such as -C mydir) to it as they are passed on to the make command.
As requested in the question it is silent if there is nothing to build but does not swallow output when there is.
You can keep this script handy as e.g. ~/bin/watch-make to use across multiple projects.
There are several automatic build systems that do this and more - basically when you check a change into version control they will make/build - look for Continuous Integration
Simple ones are TeamCity and Hudson
@Dobes Vandermeer -- I have a script named "mkall" that runs make in every subdirectory. I could assign that script as a cron job to run every five minutes, or one minute, or thirty seconds. Then, to see the output, I'd redirect gcc results (in each individual makefile) to a log in each subdirectory.
Could something like that work for you?
It could be made fairly elaborate so as to avoid makes that do nothing. For example, the script could save the modification time of each source file and run make only when one of them changes.
You could try using something like inotify-tools. It will let you watch a directory and run a command when a file is changed or saved or any of the other events that inotify can watch for. A simple script that does a watch for save and kicks off a make when a file is saved would probably be useful.
You could change your make file to output a growl (OS X) or notify-send (Linux) notification. For me in Ubuntu, that would show a notification bubble in the upper-right corner of my screen.
Then you'd only notice the build when it fails.
You'd probably want to set watch to only cycle as fast as those notifications can display (so they don't pile up).
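A sketch of that idea as a hypothetical Makefile target (notify-send is the Linux side; growlnotify would be the OS X analogue):

```make
# Hypothetical wrapper target: pop a desktop notification only on failure
.PHONY: build-notify
build-notify:
	$(MAKE) || notify-send "Build failed"
```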
Bit of archaeology, but I still find this question useful. Here is a modified version of @otto's answer, using fswatch (for the Mac):
TARGET ?= foo
all:
	@fswatch -1 . | read i && make $(TARGET)
	@make -ski TARGET=$(TARGET)

%: %.go
	@go build $<
	@./$@