I'm trying to get a bash script to run exclusively -- if another instance of the script is already running, wait for it to finish before starting the new one. I found some references to flock, which sounds like it should do what I need, but it doesn't seem to be working the way I expect. I have the following script:
#!/bin/bash
inst=$1
lock=/nobackup/julvr/locks/_tst.lk
exec 200>$lock
flock -x -w30 200 || { echo "$inst: failed flock" && exit 1; }
echo "$inst:got lock"
for i in {1..2}; do
    echo "$inst: $i"
    sleep 1
done
echo "$inst:done script";
And then I run
> flocktest.sh test1 & flocktest.sh test2
[1] 25213
test1:got lock
test1: 1
test2:got lock
test2: 1
test1: 2
test2: 2
test1:done script
test2:done script
[1]+ Done flocktest.sh test1
It seems both instances of flocktest are running in parallel... When does flock release its lock, and how do I make it keep the lock until the script completes?
(As an aside, if I do flock -x -w 20 200, it complains flock: 20: fcntl: Bad file descriptor..., which seems odd, as the man page seems to imply I can add a -w timeout parameter before the lockfile...)
flock seems very complicated to me. Perhaps you can try it this way:
cat script_unique.sh
#!/bin/bash
# Spin until no other run has claimed the flag. Note that run_sh is an
# environment variable, so this only guards against re-entry within the
# same shell environment, not against independent invocations.
while test -n "$run_sh"
do
    sleep 2
done
export run_sh="run_sh"
sleep 2
echo "$run_sh"
sleep 4
echo "$0 $1"
run_sh=""
Ok, I found my bug -- I was using a really old version of flock which had a different interface than the one described in the man page. I updated to a new version of flock, and it worked.
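For anyone else who hits this: with a current flock, the pattern from my script works as intended. A minimal sketch (the lock path is arbitrary; the lock is held until the script exits and file descriptor 200 is closed):
#!/bin/bash
lock=/tmp/_tst.lk   # any writable path works
exec 200>"$lock"    # keep fd 200 open for the life of the script
# Block for up to 30 seconds waiting for an exclusive lock.
flock -x -w 30 200 || { echo "failed to acquire lock" >&2; exit 1; }
echo "got lock; the critical section runs here"
# The lock is released automatically when the script exits and fd 200 closes.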
I have two bash scripts:
a.sh:
echo "running"
doit=true
if [ $doit = true ]; then
    ./b.sh &
fi
some-long-operation-binary
echo "done"
b.sh:
for i in {0..50}; do
    echo "counting"
    sleep 1
done
I get this output:
> ./a.sh
running
counting
Why do I only see the first "counting" from b.sh and then nothing more? (For this example, some-long-operation-binary is just sleep 5.) I first thought that because b.sh runs in the background its STDOUT is lost, but then why do I see the first output? More importantly: is b.sh still running and doing its thing (its iteration)?
For context:
b.sh is going to poll a service provided by some-long-operation-binary, which only becomes available some time after the latter has started; when the service is ready, b.sh will write its content to a file.
Apologies if this is just rubbish, it's a bit late...
You should add #!/bin/bash or the like to b.sh, since it uses a Bash-specific expansion, to make sure Bash is actually running the script. Otherwise there may indeed be only one loop iteration: under a plain sh, {0..50} is not brace-expanded and the loop runs once with the literal string.
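For example, b.sh would become:
#!/bin/bash
# {0..50} is a Bash brace expansion; without the shebang a plain sh
# would pass the literal string through and loop only once.
for i in {0..50}; do
    echo "counting"
    sleep 1
done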
When you start a background process, it is usually good practice to kill it and wait for it, no matter which way the script exits:
#!/bin/bash
set -e -o pipefail

declare -i show_counter=1

counter() {
    local -i i
    for ((i = 0;; ++i)); do
        echo "counting $((i))"
        sleep 1
    done
}

echo starting
if ((show_counter)); then
    counter &
    declare -i counter_pid="${!}"
    # On any exit, kill the counter, reap it, and report.
    trap 'kill "${counter_pid}"
          wait -n "${counter_pid}" || :
          echo terminating' EXIT
fi
sleep 10 # long-running process
I have a bash script (this_script.sh) that invokes multiple instances of another TCL script.
set -m
for vars in $( cat vars.txt ); do
    exec tclsh8.5 the_script.tcl "$vars" &
done
while [ 1 ]; do fg 2> /dev/null; [ $? == 1 ] && break; done
The multi-threading portion was taken from Aleksandr's answer on: Forking / Multi-Threaded Processes | Bash.
The script works perfectly (I'm still trying to figure out the last line). However, this line is always displayed: exec tclsh8.5 the_script.tcl "$vars"
How do I hide that line? I tried running the script as :
bash this_script.sh > /dev/null
But this hides the output of the invoked tcl scripts too (I need the output of the TCL scripts).
I tried adding the /dev/null to the end of the statement within the for loop, but that did not work either. Basically, I am trying to hide the command but not the output.
You should use $! to get the PID of the background process just started, accumulate those in a variable, and then wait for each of those in turn in a second for loop.
set -m
pids=""
for vars in $( cat vars.txt ); do
    tclsh8.5 the_script.tcl "$vars" &
    pids="$pids $!"
done
for pid in $pids; do
    wait $pid
    # Ought to look at $? for failures, but there's no point in not reaping them all
done
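If you do want to act on failures, here is a sketch of the same reaping loop with a status check (still waiting on every PID):
for pid in $pids; do
    wait "$pid"
    status=$?
    if [ $status -ne 0 ]; then
        echo "process $pid exited with status $status" >&2
    fi
done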
I'm not very good at bash. I've been modifying some code to create a lock file so a cron job doesn't execute a second time if the first process hasn't finished.
LOCK_FILE=./$(hostname)-lock
(set -C; : > $LOCK_FILE) 2> /dev/null
if [ $? != "0" ]; then
echo "already running (lock file exists); exiting..."
exit 1
fi
trap 'rm $LOCK_FILE' INT TERM EXIT
When I run it for the first time, I get the "already running" message as if the file already existed. Perhaps I'm missing something.
#!/bin/sh
(
    # Wait for lock on /tmp/lock (fd 200) for up to 10 seconds
    flock -x -w 10 200 || exit 127   # you can use or not use -w
    # your stuff here
) 200> /tmp/lock
Check the flock man page; this is the tool for you, and it even comes with an example. :)
Is there an alternative to the timeout command on Mac OS X? The basic requirement is that I can run a command for a specified amount of time.
e.g:
timeout 10 ping google.com
This program runs ping for 10s on Linux.
You can use
brew install coreutils
And then whenever you need timeout, use
gtimeout
instead. To explain why, here's a snippet from the Homebrew Caveats section:
Caveats
All commands have been installed with the prefix 'g'.
If you really need to use these commands with their normal names, you
can add a "gnubin" directory to your PATH from your bashrc like:
PATH="/usr/local/opt/coreutils/libexec/gnubin:$PATH"
Additionally, you can access their man pages with normal names if you add
the "gnuman" directory to your MANPATH from your bashrc as well:
MANPATH="/usr/local/opt/coreutils/libexec/gnuman:$MANPATH"
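For example, mirroring the Linux invocation from the question:
gtimeout 10 ping google.com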
Another simple approach that works pretty much cross-platform (because it uses Perl, which is nearly everywhere) is this:
function timeout() { perl -e 'alarm shift; exec @ARGV' "$@"; }
Snagged from here:
https://gist.github.com/jaytaylor/6527607
Instead of putting it in a function, you can just put the following line in a script, and it'll work too:
timeout.sh
perl -e 'alarm shift; exec @ARGV' "$@";
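Assuming you save that line as timeout.sh and make it executable, usage looks like:
chmod +x timeout.sh
./timeout.sh 10 ping google.com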
Or use a version that has built-in help and examples:
timeout.sh
#!/usr/bin/env bash
function show_help()
{
IT=$(cat <<EOF
Runs a command, and times out if it doesn't complete in time
Example usage:
# Will fail after 1 second, and shows non zero exit code result
$ timeout 1 "sleep 2" 2> /dev/null ; echo \$?
142
# Will succeed, and return exit code of 0.
$ timeout 1 sleep 0.5; echo \$?
0
$ timeout 1 bash -c 'echo "hi" && sleep 2 && echo "bye"' 2> /dev/null; echo \$?
hi
142
$ timeout 3 bash -c 'echo "hi" && sleep 2 && echo "bye"' 2> /dev/null; echo \$?
hi
bye
0
EOF
)
echo "$IT"
exit
}
if [ "$1" == "help" ]
then
show_help
fi
if [ -z "$1" ]
then
show_help
fi
#
# Mac OS-X does not come with the delightfully useful `timeout` program. Thankfully a rough BASH equivalent can be achieved with only 2 perl statements.
#
# Originally found on SO: http://stackoverflow.com/questions/601543/command-line-command-to-auto-kill-a-command-after-a-certain-amount-of-time
#
perl -e 'alarm shift; exec @ARGV' "$@";
As kvz stated, simply use Homebrew:
brew install coreutils
Now the timeout command is ready to use; no aliases are required (and no gtimeout needed, although it is also available).
You can limit the execution time of any program using this command:
ping -t 10 google.com & sleep 5; kill $!
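Generalized, the pattern looks like this (a sketch; some_command and the 5-second budget are placeholders):
some_command &            # start the command in the background
pid=$!                    # remember its PID
sleep 5                   # give it up to 5 seconds
kill "$pid" 2> /dev/null  # then kill it if it is still running
Note that if the command finishes early, the script still sleeps for the full 5 seconds before continuing.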
The Timeout Package from Ubuntu / Debian can be made to compile on Mac and it works.
The package is available at http://packages.ubuntu.com/lucid/timeout
You can do ping -t 10 google.com > /dev/null
The > /dev/null discards the output, so instead of showing the 64 bytes from 123.45.67.8 ... lines it will print nothing until it times out. The -t flag can be changed to any number of seconds.
Greetings all. I'm setting up a cron job to execute a bash script, and I'm worried that the next one may start before the previous one ends. A little googling reveals that a popular way to address this is the flock command, used in the following manner:
flock -n lockfile myscript.sh
if [ $? -eq 1 ]; then
    echo "Previous script is still running! Can't execute!"
fi
This works great. However, what do I do if I want to check the exit code of myscript.sh? Whatever exit code it returns will be overwritten by flock's, so I have no way of knowing if it executed successfully or not.
It looks like you can use the alternate form of flock, flock <fd>, where <fd> is a file descriptor. If you put this into a subshell and redirect that file descriptor to your lock file, then flock will wait until it can lock that file (or error out immediately if you've passed -n and the lock is unavailable). You can then do everything in your subshell, including testing the return value of scripts you run:
(
    if flock -n 200; then
        myscript.sh
        echo $?
    fi
) 200>lockfile
According to the flock man page, flock has a -E or --conflict-exit-code flag you can use to set what flock's exit code should be when a conflict occurs:
-E, --conflict-exit-code number
The exit status used when the -n option is in use, and the conflicting lock exists, or the -w option is in use, and the timeout is reached. The default value is 1. The number has to be in the range of 0 to 255.
The man page also states:
EXIT STATUS
The command uses sysexits.h exit status values for everything, except when using either of the options -n or -w, which report a failure to acquire the lock with an exit status given by the -E option, or 1 by default. The exit status given by -E has to be in the range of 0 to 255.
When using the command variant, and executing the child worked, then the exit status is that of the child command.
So, in the case of the -n or -w flags while using the "command" variant, you can see both exit statuses.
Example:
$ flock --exclusive /tmp/flock.lock bash -c 'exit 42'; echo $?
42
$ flock --exclusive /tmp/flock.lock flock --exclusive --nonblock --conflict-exit-code 100 /tmp/flock.lock bash -c 'exit 42'; echo $?
100
In the first example, we see that we get back the exit status of the process we're running with flock. In the second example, we are creating contention for the lock. In that case, flock itself returns the status code we tell it (100). If you do not specify a value with the --conflict-exit-code flag, it will return 1 instead. However, I prefer setting less common values to prevent confusion with other processes/scripts which also might return a value of 1.
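Applied to the cron scenario from the question, a sketch (200 is an arbitrary, hopefully-unused conflict code; this only distinguishes the cases if myscript.sh never exits with 200 itself):
flock -n --conflict-exit-code 200 lockfile myscript.sh
rc=$?
if [ "$rc" -eq 200 ]; then
    echo "Previous script is still running! Can't execute!"
else
    echo "myscript.sh exited with status $rc"
fi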
#!/bin/bash
# pgrep -f matches against the full command line; without -f it only
# matches the process name (which would be bash, not myscript.sh)
if ! pgrep -f myscript.sh; then
    flock -n lockfile myscript.sh
fi
If I understand you right, you want to make sure myscript.sh is not running before cron attempts to run your command again. Assuming that's right, we check whether pgrep failed to find myscript.sh in the process list, and if so we run the flock command again.
Perhaps something like this would work for you.
#!/bin/bash
RETVAL=0

lockfailed()
{
    echo "cannot flock"
    exit 1
}

(
    flock -w 2 42 || lockfailed
    false                 # stand-in for the real work
    RETVAL=$?
    echo "original retval $RETVAL"
    exit $RETVAL          # the subshell exits with the work's status
) 42>|/tmp/flocker
RETVAL=$?                 # ...which we pick up out here, past flock
echo "returned $RETVAL"
exit $RETVAL