Execute a shell function with timeout - bash

Why would this work
timeout 10s echo "foo bar" # foo bar
but this wouldn't
function echoFooBar {
echo "foo bar"
}
echoFooBar # foo bar
timeout 10s echoFooBar # timeout: failed to run command `echoFooBar': No such file or directory
and how can I make it work?

As Douglas Leeder said, you need a separate process for timeout to signal. The workaround is to export the function to subshells and run the subshell manually.
export -f echoFooBar
timeout 10s bash -c echoFooBar
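If the function takes arguments, they can be forwarded into the inner shell the same way; a minimal sketch (the literal _ fills $0 of the inner bash, and the argument value is illustrative):
export -f echoFooBar
timeout 10s bash -c 'echoFooBar "$@"' _ "some-arg"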

timeout is a command - so it is executing in a subprocess of your bash shell. Therefore it has no access to the functions defined in your current shell.
The command given to timeout is executed as a subprocess of timeout - a grand-child process of your shell.
You might be confused because echo is both a shell built-in and a separate command.
What you can do is put your function in its own script file, chmod it to be executable, then execute it with timeout.
Alternatively, fork, executing your function in a sub-shell - and in the original process, monitor the progress, killing the subprocess if it takes too long, as in the sketch below.
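A minimal sketch of that fork-and-monitor approach (the function name slow_task is illustrative):
slow_task() { sleep 30; echo "done"; }    # stand-in for your real function
( slow_task ) &                           # run the function in a forked subshell
child=$!
( sleep 10; kill "$child" 2>/dev/null ) & # watchdog: kill it after 10 seconds
watchdog=$!
wait "$child"                             # child's status; 143 if it was killed
kill "$watchdog" 2>/dev/null              # clean up the watchdog if the task won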

There's an inline alternative that also launches a subprocess of the bash shell:
timeout 10s bash <<EOT
function echoFooBar {
echo foo
}
echoFooBar
sleep 20
EOT
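Note that when the limit is hit, GNU timeout exits with status 124, so you can distinguish a timeout from the command's own failure:
timeout 10s bash <<EOT
sleep 20
EOT
echo $? # 124, because the script was killed by timeout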

You can create a function that allows you to do the same as timeout, but for other functions too:
function run_cmd {
cmd="$1"; timeout="$2";
grep -qP '^\d+$' <<< $timeout || timeout=10
(
eval "$cmd" &
child=$!
trap -- "" SIGTERM
(
sleep $timeout
kill $child 2> /dev/null
) &
wait $child
)
}
And it can be run as below:
run_cmd "echoFooBar" 10
Note: The solution came from one of my questions:
Elegant solution to implement timeout for bash commands and functions

If you just want to add timeout as an additional option for the entire existing script, you can make it test for the timeout option, and then make it call itself recursively without that option.
example.sh:
#!/bin/bash
if [ "$1" == "-t" ]; then
timeout 1m $0 $2
else
#the original script
echo $1
sleep 2m
echo YAWN...
fi
running this script without timeout:
$./example.sh -other_option # -other_option
# YAWN...
running it with a one minute timeout:
$./example.sh -t -other_option # -other_option

function foo(){
for i in {1..100};
do
echo $i;
sleep 1;
done;
}
cat <( foo ) # Will work
timeout 3 cat <( foo ) # Will work
timeout 3 cat <( foo ) | sort # Won't work, as sort will fail
cat <( timeout 3 cat <( foo ) ) | sort -r # Will work

This function uses only builtins.
Maybe consider evaling "$*" instead of running "$@" directly, depending on your needs.
It starts a job with the command string specified after the first arg, which is the timeout value, and monitors the job pid.
It checks every second; bash's read -t supports timeouts down to 0.01, so that can be tweaked.
Also, if your script needs stdin, read should rely on a dedicated fd (exec {tofd}<> <(:)).
Also you might want to tweak the kill signal (the one inside the loop), which defaults to -15 (SIGTERM); you might want -9 (SIGKILL).
## forking is evil
timeout() {
to=$1; shift
"$@" & local wp=$! start=0
while kill -0 $wp 2>/dev/null; do
read -t 1
start=$((start+1))
if [ $start -ge $to ]; then
kill $wp && break
fi
done
}
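For example, once defined, the function shadows the external timeout command in the current shell:
timeout 3 sleep 10 # sleep is killed after roughly 3 seconds
Note that the function's return status is that of its monitoring loop, not of the command it ran.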

Putting my comment to Tiago Lopo's answer into more readable form:
I think it's more readable to impose a timeout on the most recent subshell. This way we don't need to eval a string, and the whole script can be highlighted as shell by your favourite editor. I simply put the commands that come after the eval'd subshell has been spawned into a shell function (tested with zsh, but it should work with bash):
timeout_child () {
trap -- "" SIGTERM
child=$!
timeout=$1
(
sleep $timeout
kill $child
) &
wait $child
}
Example usage:
( while true; do echo -n .; sleep 0.1; done) & timeout_child 2
And this way it also works with a shell function (if it runs in the background):
print_dots () {
while true
do
sleep 0.1
echo -n .
done
}
> print_dots & timeout_child 2
[1] 21725
[3] 21727
...................[1] 21725 terminated print_dots
[3] + 21727 done ( sleep $timeout; kill $child; )

I have a slight modification of @Tiago Lopo's answer that can handle commands with multiple arguments. I've also tested TauPan's solution, but it does not work if you use it multiple times in a script, while Tiago's does.
function timeout_cmd {
local arr
local cmd
local timeout
arr=( "$#" )
# timeout: first arg
# cmd: the other args
timeout="${arr[0]}"
cmd=( "${arr[#]:1}" )
(
eval "${cmd[#]}" &
child=$!
echo "child: $child"
trap -- "" SIGTERM
(
sleep "$timeout"
kill "$child" 2> /dev/null
) &
wait "$child"
)
}
Here's a fully functional script that you can use to test the function above:
$ ./test_timeout.sh -h
Usage:
test_timeout.sh [-n] [-r REPEAT] [-s SLEEP_TIME] [-t TIMEOUT]
test_timeout.sh -h
Test timeout_cmd function.
Options:
-n Dry run, do not actually sleep.
-r REPEAT Repeat everything multiple times [default: 1].
-s SLEEP_TIME Sleep for SLEEP_TIME seconds [default: 5].
-t TIMEOUT Timeout after TIMEOUT seconds [default: no timeout].
For example, you can launch it like this:
$ ./test_timeout.sh -r 2 -s 5 -t 3
Try no: 1
- Set timeout to: 3
child: 2540
-> retval: 143
-> The command timed out
Try no: 2
- Set timeout to: 3
child: 2593
-> retval: 143
-> The command timed out
Done!
#!/usr/bin/env bash
#shellcheck disable=SC2128
SOURCED=false && [ "$0" = "$BASH_SOURCE" ] || SOURCED=true
if ! $SOURCED; then
set -euo pipefail
IFS=$'\n\t'
fi
#################### helpers
function check_posint() {
local re='^[0-9]+$'
local mynum="$1"
local option="$2"
if ! [[ "$mynum" =~ $re ]] ; then
(echo -n "Error in option '$option': " >&2)
(echo "must be a positive integer, got $mynum." >&2)
exit 1
fi
if ! [ "$mynum" -gt 0 ] ; then
(echo "Error in option '$option': must be positive, got $mynum." >&2)
exit 1
fi
}
#################### end: helpers
#################### usage
function short_usage() {
(>&2 echo \
"Usage:
test_timeout.sh [-n] [-r REPEAT] [-s SLEEP_TIME] [-t TIMEOUT]
test_timeout.sh -h"
)
}
function usage() {
(>&2 short_usage )
(>&2 echo \
"
Test timeout_cmd function.
Options:
-n Dry run, do not actually sleep.
-r REPEAT Repeat everything multiple times [default: 1].
-s SLEEP_TIME Sleep for SLEEP_TIME seconds [default: 5].
-t TIMEOUT Timeout after TIMEOUT seconds [default: no timeout].
")
}
#################### end: usage
help_flag=false
dryrun_flag=false
SLEEP_TIME=5
TIMEOUT=-1
REPEAT=1
while getopts ":hnr:s:t:" opt; do
case $opt in
h)
help_flag=true
;;
n)
dryrun_flag=true
;;
r)
check_posint "$OPTARG" '-r'
REPEAT="$OPTARG"
;;
s)
check_posint "$OPTARG" '-s'
SLEEP_TIME="$OPTARG"
;;
t)
check_posint "$OPTARG" '-t'
TIMEOUT="$OPTARG"
;;
\?)
(>&2 echo "Error. Invalid option: -$OPTARG.")
(>&2 echo "Try -h to get help")
short_usage
exit 1
;;
:)
(>&2 echo "Error.Option -$OPTARG requires an argument.")
(>&2 echo "Try -h to get help")
short_usage
exit 1
;;
esac
done
if $help_flag; then
usage
exit 0
fi
#################### utils
if $dryrun_flag; then
function wrap_run() {
( echo -en "[dry run]\\t" )
( echo "$#" )
}
else
function wrap_run() { "$#"; }
fi
# Execute a shell function with timeout
# https://stackoverflow.com/a/24416732/2377454
function timeout_cmd {
local arr
local cmd
local timeout
arr=( "$#" )
# timeout: first arg
# cmd: the other args
timeout="${arr[0]}"
cmd=( "${arr[#]:1}" )
(
eval "${cmd[#]}" &
child=$!
echo "child: $child"
trap -- "" SIGTERM
(
sleep "$timeout"
kill "$child" 2> /dev/null
) &
wait "$child"
)
}
####################
function sleep_func() {
local secs
local waitsec
waitsec=1
secs=$(($1))
while [ "$secs" -gt 0 ]; do
echo -ne "$secs\033[0K\r"
sleep "$waitsec"
secs=$((secs-waitsec))
done
}
command=("wrap_run" \
"sleep_func" "${SLEEP_TIME}"
)
for i in $(seq 1 "$REPEAT"); do
echo "Try no: $i"
if [ "$TIMEOUT" -gt 0 ]; then
echo " - Set timeout to: $TIMEOUT"
set +e
timeout_cmd "$TIMEOUT" "${command[#]}"
retval="$?"
set -e
echo " -> retval: $retval"
# check if (retval % 128) == SIGTERM (== 15)
if [[ "$((retval % 128))" -eq 15 ]]; then
echo " -> The command timed out"
fi
else
echo " - No timeout"
"${command[#]}"
retval="$?"
fi
done
echo "Done!"
exit 0

This small modification to TauPan's answer adds some useful protection: if the child process being waited for has already exited before sleep $timeout completes, the kill command attempts to kill a process that no longer exists. This is probably harmless, but there is no absolute guarantee that the same PID has not been re-assigned. To obviate this, a quick check is done to test that the child PID exists and that its parent is the shell it was forked from. Also, trying to kill a non-existent process generates errors which, if not suppressed, can easily fill up logs.
I also used a more aggressive kill -9. This is the only way to kill a process that is blocked not on a shell command but on the file system, e.g. read < named_pipe.
A consequence of this is that the kill -9 $child command sends its kill signal asynchronously to the process and hence generates a message in the calling shell. This can be suppressed by redirecting the wait: wait $child > /dev/null 2>&1, with obvious consequences for debugging.
#!/bin/bash
function child_timeout () {
child=$!
timeout=$1
(
#trap -- "" SIGINT
sleep $timeout
if [ -n "$(ps -o pid= -o comm= --ppid $$ | grep -o $child)" ]; then
kill -9 $child
fi
) &
wait $child > /dev/null 2>&1
}
( tail -f /dev/null ) & child_timeout 10

This one-liner sets TMOUT, which makes an interactive Bash session exit after 10s of inactivity at the prompt:
$ TMOUT=10 && echo "foo bar"

Related

How to report back the EXIT code of a multi program script? [duplicate]

How to wait in a bash script for several subprocesses spawned from that script to finish, and then return exit code !=0 when any of the subprocesses ends with code !=0?
Simple script:
#!/bin/bash
for i in `seq 0 9`; do
doCalculations $i &
done
wait
The above script will wait for all 10 spawned subprocesses, but it will always give exit status 0 (see help wait). How can I modify this script so it will discover exit statuses of spawned subprocesses and return exit code 1 when any of subprocesses ends with code !=0?
Is there any better solution for that than collecting PIDs of the subprocesses, wait for them in order and sum exit statuses?
wait also (optionally) takes the PID of the process to wait for, and with $! you get the PID of the last command launched in the background.
Modify the loop to store the PID of each spawned sub-process into an array, and then loop again waiting on each PID.
# run processes and store pids in array
for i in $n_procs; do
./procs[${i}] &
pids[${i}]=$!
done
# wait for all pids
for pid in ${pids[*]}; do
wait $pid
done
http://jeremy.zawodny.com/blog/archives/010717.html :
#!/bin/bash
FAIL=0
echo "starting"
./sleeper 2 0 &
./sleeper 2 1 &
./sleeper 3 0 &
./sleeper 2 0 &
for job in `jobs -p`
do
echo $job
wait $job || let "FAIL+=1"
done
echo $FAIL
if [ "$FAIL" == "0" ];
then
echo "YAY!"
else
echo "FAIL! ($FAIL)"
fi
Here is a simple example using wait.
Run some processes:
$ sleep 10 &
$ sleep 10 &
$ sleep 20 &
$ sleep 20 &
Then wait for them with wait command:
$ wait < <(jobs -p)
Or just wait (without arguments) for all.
This will wait until all background jobs are completed.
If the -n option is supplied, waits for the next job to terminate and returns its exit status.
See: help wait and help jobs for syntax.
However, the downside is that this will only return the status of the last ID, so you need to check the status of each subprocess and store it in a variable.
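With bash 4.3 or newer, wait -n also gives a compact way to fail fast on the first non-zero status without tracking PIDs at all; a minimal sketch:
for i in `seq 0 9`; do doCalculations $i & done
for j in `seq 0 9`; do
wait -n || { echo "a calculation failed" >&2; exit 1; }
done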
Or make your calculation function create some file on failure (empty or with a fail log), then check whether that file exists, e.g.
$ sleep 20 && true || tee fail &
$ sleep 20 && false || tee fail &
$ wait < <(jobs -p)
$ test -f fail && echo Calculation failed.
How about simply:
#!/bin/bash
pids=""
for i in `seq 0 9`; do
doCalculations $i &
pids="$pids $!"
done
wait $pids
...code continued here ...
Update:
As pointed out by multiple commenters, the above waits for all processes to be completed before continuing, but does not exit and fail if one of them fails. It can be made to do so with the following modification suggested by @Bryan, @SamBrightman, and others:
#!/bin/bash
pids=""
RESULT=0
for i in `seq 0 9`; do
doCalculations $i &
pids="$pids $!"
done
for pid in $pids; do
wait $pid || let "RESULT=1"
done
if [ "$RESULT" == "1" ];
then
exit 1
fi
...code continued here ...
If you have GNU Parallel installed you can do:
# If doCalculations is a function
export -f doCalculations
seq 0 9 | parallel doCalculations {}
GNU Parallel will give you exit code:
0 - All jobs ran without error.
1-253 - Some of the jobs failed. The exit status gives the number of failed jobs.
254 - More than 253 jobs failed.
255 - Other error.
Watch the intro videos to learn more: http://pi.dk/1
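If you also want to abort the remaining jobs as soon as one fails, newer versions of GNU Parallel support a halt policy (check your version's man page; this flag is my addition, not part of the original answer):
seq 0 9 | parallel --halt now,fail=1 doCalculations {}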
Here's what I've come up with so far. I would like to see how to interrupt the sleep command if a child terminates, so that one would not have to tune WAITALL_DELAY to one's usage.
waitall() { # PID...
## Wait for children to exit and indicate whether all exited with 0 status.
local errors=0
while :; do
debug "Processes remaining: $*"
for pid in "$#"; do
shift
if kill -0 "$pid" 2>/dev/null; then
debug "$pid is still alive."
set -- "$#" "$pid"
elif wait "$pid"; then
debug "$pid exited with zero exit status."
else
debug "$pid exited with non-zero exit status."
((++errors))
fi
done
(("$#" > 0)) || break
# TODO: how to interrupt this sleep when a child terminates?
sleep ${WAITALL_DELAY:-1}
done
((errors == 0))
}
debug() { echo "DEBUG: $*" >&2; }
pids=""
for t in 3 5 4; do
sleep "$t" &
pids="$pids $!"
done
waitall $pids
To parallelize this...
for i in $(whatever_list) ; do
do_something $i
done
Translate it to this...
for i in $(whatever_list) ; do echo $i ; done | ## execute in parallel...
(
export -f do_something ## export functions (if needed)
export PATH ## export any variables that are required
xargs -I{} --max-procs 0 bash -c ' ## process in batches...
{
echo "processing {}" ## optional
do_something {}
}'
)
If an error occurs in one process, it won't interrupt the other processes, but it will result in a non-zero exit code from the sequence as a whole.
Exporting functions and variables may or may not be necessary, in any particular case.
You can set --max-procs based on how much parallelism you want (0 means "all at once").
GNU Parallel offers some additional features when used in place of xargs -- but it isn't always installed by default.
The for loop isn't strictly necessary in this example since echo $i is basically just regenerating the output of $(whatever_list). I just think the use of the for keyword makes it a little easier to see what is going on.
Bash string handling can be confusing -- I have found that using single quotes works best for wrapping non-trivial scripts.
You can easily interrupt the entire operation (using ^C or similar), unlike the more direct approach to Bash parallelism.
Here's a simplified working example...
for i in {0..5} ; do echo $i ; done |xargs -I{} --max-procs 2 bash -c '
{
echo sleep {}
sleep 2s
}'
This is something that I use:
#wait for jobs
for job in `jobs -p`; do wait ${job}; done
This is an expansion on the most-upvoted answer, by @Luca Tettamanti, to make a fully-runnable example.
That answer left me wondering:
What type of variable is n_procs, and what does it contain? What type of variable is procs, and what does it contain? Can someone please update this answer to make it runnable by adding definitions for those variables? I don't understand how.
...and also:
How do you get the return code from the subprocess when it has completed (which is the whole crux of this question)?
Anyway, I figured it out, so here is a fully-runnable example.
Notes:
$! is how to obtain the PID (Process ID) of the last-executed sub-process.
Running any command with & after it, like cmd &, for example, causes it to run in the background as a parallel subprocess of the main process.
myarray=() is how to create an array in bash.
To learn a tiny bit more about the wait built-in command, see help wait. See also, and especially, the official Bash user manual on Job Control built-ins, such as wait and jobs, here: https://www.gnu.org/software/bash/manual/html_node/Job-Control-Builtins.html#index-wait.
Full, runnable program: wait for all processes to end
multi_process_program.sh (from my eRCaGuy_hello_world repo):
#!/usr/bin/env bash
# This is a special sleep function which returns the number of seconds slept as
# the "error code" or return code" so that we can easily see that we are in
# fact actually obtaining the return code of each process as it finishes.
my_sleep() {
seconds_to_sleep="$1"
sleep "$seconds_to_sleep"
return "$seconds_to_sleep"
}
# Create an array of whatever commands you want to run as subprocesses
procs=() # bash array
procs+=("my_sleep 5")
procs+=("my_sleep 2")
procs+=("my_sleep 3")
procs+=("my_sleep 4")
num_procs=${#procs[@]} # number of processes
echo "num_procs = $num_procs"
# run commands as subprocesses and store pids in an array
pids=() # bash array
for (( i=0; i<"$num_procs"; i++ )); do
echo "cmd = ${procs[$i]}"
${procs[$i]} & # run the cmd as a subprocess
# store pid of last subprocess started; see:
# https://unix.stackexchange.com/a/30371/114401
pids+=("$!")
echo " pid = ${pids[$i]}"
done
# OPTION 1 (comment this option out if using Option 2 below): wait for all pids
for pid in "${pids[#]}"; do
wait "$pid"
return_code="$?"
echo "PID = $pid; return_code = $return_code"
done
echo "All $num_procs processes have ended."
Change the file above to be executable by running chmod +x multi_process_program.sh, then run it like this:
time ./multi_process_program.sh
Sample output. See how the output of the time command in the call shows it took 5.084sec to run. We were also able to successfully retrieve the return code from each subprocess.
eRCaGuy_hello_world/bash$ time ./multi_process_program.sh
num_procs = 4
cmd = my_sleep 5
pid = 21694
cmd = my_sleep 2
pid = 21695
cmd = my_sleep 3
pid = 21697
cmd = my_sleep 4
pid = 21699
PID = 21694; return_code = 5
PID = 21695; return_code = 2
PID = 21697; return_code = 3
PID = 21699; return_code = 4
All 4 processes have ended.
PID 21694 is done; return_code = 5; 3 PIDs remaining.
PID 21695 is done; return_code = 2; 2 PIDs remaining.
PID 21697 is done; return_code = 3; 1 PIDs remaining.
PID 21699 is done; return_code = 4; 0 PIDs remaining.
real 0m5.084s
user 0m0.025s
sys 0m0.061s
Going further: determine live when each individual process ends
If you'd like to do some action as each process finishes, and you don't know when they will finish, you can poll in an infinite while loop to see when each process terminates, then do whatever action you want.
Simply comment out the "OPTION 1" block of code above, and replace it with this "OPTION 2" block instead:
# OR OPTION 2 (comment out Option 1 above if using Option 2): poll to detect
# when each process terminates, and print out when each process finishes!
while true; do
for i in "${!pids[#]}"; do
pid="${pids[$i]}"
# echo "pid = $pid" # debugging
# See if PID is still running; see my answer here:
# https://stackoverflow.com/a/71134379/4561887
ps --pid "$pid" > /dev/null
if [ "$?" -ne 0 ]; then
# PID doesn't exist anymore, meaning it terminated
# 1st, read its return code
wait "$pid"
return_code="$?"
# 2nd, remove this PID from the `pids` array by `unset`ting the
# element at this index; NB: due to how bash arrays work, this does
# NOT actually remove this element from the array. Rather, it
# removes its index from the `"${!pids[@]}"` list of indices,
# adjusts the array count (`"${#pids[@]}"`) accordingly, and it sets
# the value at this index to either a null value of some sort, or
# an empty string (I'm not exactly sure).
unset "pids[$i]"
num_pids="${#pids[#]}"
echo "PID $pid is done; return_code = $return_code;" \
"$num_pids PIDs remaining."
fi
done
# exit the while loop if the `pids` array is empty
if [ "${#pids[#]}" -eq 0 ]; then
break
fi
# Do some small sleep here to keep your polling loop from sucking up
# 100% of one of your CPUs unnecessarily. Sleeping allows other processes
# to run during this time.
sleep 0.1
done
Sample run and output of the full program with Option 1 commented out and Option 2 in-use:
eRCaGuy_hello_world/bash$ ./multi_process_program.sh
num_procs = 4
cmd = my_sleep 5
pid = 22275
cmd = my_sleep 2
pid = 22276
cmd = my_sleep 3
pid = 22277
cmd = my_sleep 4
pid = 22280
PID 22276 is done; return_code = 2; 3 PIDs remaining.
PID 22277 is done; return_code = 3; 2 PIDs remaining.
PID 22280 is done; return_code = 4; 1 PIDs remaining.
PID 22275 is done; return_code = 5; 0 PIDs remaining.
Each of those PID XXXXX is done lines prints out live right after that process has terminated! Notice that even though the process for sleep 5 (PID 22275 in this case) was run first, it finished last, and we successfully detected each process right after it terminated. We also successfully detected each return code, just like in Option 1.
Other References:
*****+ [VERY HELPFUL] Get exit code of a background process - this answer taught me the key principle that (emphasis added):
wait <n> waits until the process with PID <n> is complete (it will block until the process completes, so you might not want to call this until you are sure the process is done), and then returns the exit code of the completed process.
In other words, it helped me know that even after the process is complete, you can still call wait on it to get its return code!
How to check if a process id (PID) exists
my answer
Remove an element from a Bash array - note that elements in a bash array aren't actually deleted, they are just "unset". See my comments in the code above for what that means.
How to use the command-line executable true to make an infinite while loop in bash: https://www.cyberciti.biz/faq/bash-infinite-loop/
I see lots of good examples listed on here, wanted to throw mine in as well.
#! /bin/bash
items="1 2 3 4 5 6"
pids=""
for item in $items; do
sleep $item &
pids+="$! "
done
for pid in $pids; do
wait $pid
if [ $? -eq 0 ]; then
echo "SUCCESS - Job $pid exited with a status of $?"
else
echo "FAILED - Job $pid exited with a status of $?"
fi
done
I use something very similar to start/stop servers/services in parallel and check each exit status. Works great for me. Hope this helps someone out!
I don't believe it's possible with Bash's builtin functionality.
You can get notification when a child exits:
#!/bin/sh
set -o monitor # enable script job control
trap 'echo "child died"' CHLD
However there's no apparent way to get the child's exit status in the signal handler.
Getting that child status is usually the job of the wait family of functions in the lower level POSIX APIs. Unfortunately Bash's support for that is limited - you can wait for one specific child process (and get its exit status) or you can wait for all of them, and always get a 0 result.
What it appears impossible to do is the equivalent of waitpid(-1), which blocks until any child process returns.
The following code will wait for completion of all calculations and return exit status 1 if any of doCalculations fails.
#!/bin/bash
for i in $(seq 0 9); do
(doCalculations $i >&2 & wait %1; echo $?) &
done | grep -qv 0 && exit 1
Here's my version that works for multiple pids, logs warnings if execution takes too long, and stops the subprocesses if execution takes longer than a given value.
[EDIT] I have uploaded my newer implementation of WaitForTaskCompletion, called ExecTasks at https://github.com/deajan/ofunctions.
There's also a compat layer for WaitForTaskCompletion
[/EDIT]
function WaitForTaskCompletion {
local pids="${1}" # pids to wait for, separated by semi-colon
local soft_max_time="${2}" # If execution takes longer than $soft_max_time seconds, will log a warning, unless $soft_max_time equals 0.
local hard_max_time="${3}" # If execution takes longer than $hard_max_time seconds, will stop execution, unless $hard_max_time equals 0.
local caller_name="${4}" # Who called this function
local exit_on_error="${5:-false}" # Should the function exit program on subprocess errors
Logger "${FUNCNAME[0]} called by [$caller_name]."
local soft_alert=0 # Does a soft alert need to be triggered, if yes, send an alert once
local log_ttime=0 # local time instance for comparison
local seconds_begin=$SECONDS # Seconds since the beginning of the script
local exec_time=0 # Seconds since the beginning of this function
local retval=0 # return value of monitored pid process
local errorcount=0 # Number of pids that finished with errors
local pidCount # number of given pids
IFS=';' read -a pidsArray <<< "$pids"
pidCount=${#pidsArray[@]}
while [ ${#pidsArray[@]} -gt 0 ]; do
newPidsArray=()
for pid in "${pidsArray[@]}"; do
if kill -0 $pid > /dev/null 2>&1; then
newPidsArray+=($pid)
else
wait $pid
result=$?
if [ $result -ne 0 ]; then
errorcount=$((errorcount+1))
Logger "${FUNCNAME[0]} called by [$caller_name] finished monitoring [$pid] with exitcode [$result]."
fi
fi
done
## Log a standby message every hour
exec_time=$(($SECONDS - $seconds_begin))
if [ $((($exec_time + 1) % 3600)) -eq 0 ]; then
if [ $log_ttime -ne $exec_time ]; then
log_ttime=$exec_time
Logger "Current tasks still running with pids [${pidsArray[#]}]."
fi
fi
if [ $exec_time -gt $soft_max_time ]; then
if [ $soft_alert -eq 0 ] && [ $soft_max_time -ne 0 ]; then
Logger "Max soft execution time exceeded for task [$caller_name] with pids [${pidsArray[#]}]."
soft_alert=1
SendAlert
fi
if [ $exec_time -gt $hard_max_time ] && [ $hard_max_time -ne 0 ]; then
Logger "Max hard execution time exceeded for task [$caller_name] with pids [${pidsArray[#]}]. Stopping task execution."
kill -SIGTERM $pid
if [ $? == 0 ]; then
Logger "Task stopped successfully"
else
errorcount=$((errorcount+1))
fi
fi
fi
pidsArray=("${newPidsArray[#]}")
sleep 1
done
Logger "${FUNCNAME[0]} ended for [$caller_name] using [$pidCount] subprocesses with [$errorcount] errors."
if [ $exit_on_error == true ] && [ $errorcount -gt 0 ]; then
Logger "Stopping execution."
exit 1337
else
return $errorcount
fi
}
# Just a plain stupid logging function to be replaced by yours
function Logger {
local value="${1}"
echo $value
}
Example: wait for all three processes to finish, log a warning if execution takes longer than 5 seconds, stop all processes if execution takes longer than 120 seconds. Don't exit the program on failures.
function something {
sleep 10 &
pids="$!"
sleep 12 &
pids="$pids;$!"
sleep 9 &
pids="$pids;$!"
WaitForTaskCompletion $pids 5 120 ${FUNCNAME[0]} false
}
# Launch the function
something
If you have bash 4.2 or later available the following might be useful to you. It uses associative arrays to store task names and their "code" as well as task names and their pids. I have also built in a simple rate-limiting method which might come handy if your tasks consume a lot of CPU or I/O time and you want to limit the number of concurrent tasks.
The script launches all tasks in the first loop and consumes the results in the second one.
This is a bit overkill for simple cases but it allows for pretty neat stuff. For example one can store error messages for each task in another associative array and print them after everything has settled down.
#! /bin/bash
main () {
local -A pids=()
local -A tasks=([task1]="echo 1"
[task2]="echo 2"
[task3]="echo 3"
[task4]="false"
[task5]="echo 5"
[task6]="false")
local max_concurrent_tasks=2
for key in "${!tasks[#]}"; do
while [ $(jobs 2>&1 | grep -c Running) -ge "$max_concurrent_tasks" ]; do
sleep 1 # gnu sleep allows floating point here...
done
${tasks[$key]} &
pids+=(["$key"]="$!")
done
errors=0
for key in "${!tasks[#]}"; do
pid=${pids[$key]}
local cur_ret=0
if [ -z "$pid" ]; then
echo "No Job ID known for the $key process" # should never happen
cur_ret=1
else
wait $pid
cur_ret=$?
fi
if [ "$cur_ret" -ne 0 ]; then
errors=$(($errors + 1))
echo "$key (${tasks[$key]}) failed."
fi
done
return $errors
}
main
I've had a go at this and combined all the best parts from the other examples here. This script will execute the checkpids function when any background process exits, and output the exit status without resorting to polling.
#!/bin/bash
set -o monitor
sleep 2 &
sleep 4 && exit 1 &
sleep 6 &
pids=`jobs -p`
checkpids() {
for pid in $pids; do
if kill -0 $pid 2>/dev/null; then
echo $pid is still alive.
elif wait $pid; then
echo $pid exited with zero exit status.
else
echo $pid exited with non-zero exit status.
fi
done
echo
}
trap checkpids CHLD
wait
#!/bin/bash
set -m
for i in `seq 0 9`; do
doCalculations $i &
done
while fg; do true; done
set -m allows you to use fg & bg in a script
fg, in addition to putting the last process in the foreground, has the same exit status as the process it foregrounds
while fg will stop looping when any fg exits with a non-zero exit status
unfortunately this won't handle the case when a process in the background exits with a non-zero exit status. (the loop won't terminate immediately. it will wait for the previous processes to complete.)
Wait for all jobs and return the exit code of the last failing job. Unlike solutions above, this does not require pid saving, or modifying inner loops of scripts. Just bg away, and wait.
function wait_ex {
# this waits for all jobs and returns the exit code of the last failing job
ecode=0
while true; do
[ -z "$(jobs)" ] && break
wait -n
err="$?"
[ "$err" != "0" ] && ecode="$err"
done
return $ecode
}
EDIT: Fixed the bug where this could be fooled by a script that ran a command that didn't exist.
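Typical usage is to background the jobs and then call wait_ex where you would have called a bare wait; a minimal sketch:
sleep 1 &
( sleep 2; exit 3 ) &
sleep 1 &
wait_ex; echo "last failing exit code: $?" # prints 3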
Just store the results out of the shell, e.g. in a file.
#!/bin/bash
tmp=/tmp/results
: > $tmp #clean the file
for i in `seq 0 9`; do
(doCalculations $i; echo $i:$?>>$tmp)&
done #iterate
wait #wait until all ready
sort $tmp | grep -v ':0' #... handle as required
I've just been modifying a script to background and parallelise a process.
I did some experimenting (on Solaris with both bash and ksh) and discovered that 'wait' outputs the exit status if it's not zero, or a list of jobs that returned non-zero exit status when no PID argument is provided. E.g.
Bash:
$ sleep 20 && exit 2 &
$ sleep 10 && exit 1 &
$ wait
[1]- Exit 2 sleep 20 && exit 2
[2]+ Exit 1 sleep 10 && exit 1
Ksh:
$ sleep 20 && exit 2 &
$ sleep 10 && exit 1 &
$ wait
[1]+ Done(2) sleep 20 && exit 2
[2]+ Done(1) sleep 10 && exit 1
This output is written to stderr, so a simple solution to the OP's example could be:
#!/bin/bash
trap "rm -f /tmp/x.$$" EXIT
for i in `seq 0 9`; do
doCalculations $i &
done
wait 2> /tmp/x.$$
if [ `wc -l < /tmp/x.$$` -gt 0 ] ; then
exit 1
fi
While this:
wait 2> >(wc -l)
will also return a count but without the tmp file. This might also be used this way, for example:
wait 2> >(if [ `wc -l` -gt 0 ] ; then echo "ERROR"; fi)
But this isn't very much more useful than the tmp file IMO. I couldn't find a useful way to avoid the tmp file whilst also avoiding running the "wait" in a subshell, which won't work at all.
I needed this, but the target process wasn't a child of the current shell, in which case wait $PID doesn't work. I found the following alternative instead:
while [ -e /proc/$PID ]; do sleep 0.1 ; done
That relies on the presence of procfs, which may not be available (Mac doesn't provide it for example). So for portability, you could use this instead:
while ps -p $PID >/dev/null ; do sleep 0.1 ; done
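If you also need a time limit on that polling loop, you can layer a deadline on top; a minimal sketch (SECONDS is bash's built-in seconds counter):
deadline=$((SECONDS + 30)) # give up after 30 seconds
while ps -p $PID >/dev/null ; do
if (( SECONDS >= deadline )); then echo "still running after 30s, giving up" >&2; break; fi
sleep 0.1
done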
There are already a lot of answers here, but I am surprised no one seems to have suggested using arrays... So here's what I did - this might be useful to some in the future.
n=10 # run 10 jobs
c=0
PIDS=()
while true
my_function_or_command &
PID=$!
echo "Launched job as PID=$PID"
PIDS+=($PID)
(( c+=1 ))
# required to prevent any exit due to error
# caused by additional commands run which you
# may add when modifying this example
true
do
if (( c < n ))
then
continue
else
break
fi
done
# collect launched jobs
for pid in "${PIDS[#]}"
do
wait $pid || echo "failed job PID=$pid"
done
This works, and should be just as good, if not better, than @HoverHell's answer!
#!/usr/bin/env bash
set -m # allow for job control
EXIT_CODE=0; # exit code of overall script
function foo() {
echo "CHLD exit code is $1"
echo "CHLD pid is $2"
echo $(jobs -l)
for job in `jobs -p`; do
echo "PID => ${job}"
wait ${job} || { echo "At least one test failed with exit code => $?"; EXIT_CODE=1; }
done
}
trap 'foo $? $$' CHLD
DIRN=$(dirname "$0");
commands=(
"{ echo "foo" && exit 4; }"
"{ echo "bar" && exit 3; }"
"{ echo "baz" && exit 5; }"
)
clen=`expr "${#commands[@]}" - 1` # get length of commands - 1
for i in `seq 0 "$clen"`; do
(echo "${commands[$i]}" | bash) & # run the command via bash in subshell
echo "$i ith command has been issued as a background job"
done
# wait for all to finish
wait;
echo "EXIT_CODE => $EXIT_CODE"
exit "$EXIT_CODE"
# end
And of course, I have immortalized this script in an NPM project which allows you to run bash commands in parallel, useful for testing:
https://github.com/ORESoftware/generic-subshell
Exactly for this purpose I wrote a bash function called :for.
Note: :for not only preserves and returns the exit code of the failing function, but also terminates all parallel running instances. This might not be needed in this case.
#!/usr/bin/env bash
# Wait for pids to terminate. If one pid exits with
# a non zero exit code, send the TERM signal to all
# processes and retain that exit code
#
# usage:
# :wait 123 32
function :wait(){
local pids=("$#")
[ ${#pids} -eq 0 ] && return $?
trap 'kill -INT "${pids[@]}" &>/dev/null || true; trap - INT' INT
trap 'kill -TERM "${pids[@]}" &>/dev/null || true; trap - RETURN TERM' RETURN TERM
for pid in "${pids[@]}"; do
wait "${pid}" || return $?
done
trap - INT RETURN TERM
}
# Run a function in parallel for each argument.
# Stop all instances if one exits with a non zero
# exit code
#
# usage:
# :for func 1 2 3
#
# env:
# FOR_PARALLEL: Max functions running in parallel
function :for(){
local f="${1}" && shift
local i=0
local pids=()
for arg in "$#"; do
( ${f} "${arg}" ) &
pids+=("$!")
if [ ! -z ${FOR_PARALLEL+x} ]; then
(( i=(i+1)%${FOR_PARALLEL} ))
if (( i==0 )) ;then
:wait "${pids[#]}" || return $?
pids=()
fi
fi
done && [ ${#pids} -eq 0 ] || :wait "${pids[@]}" || return $?
}
usage
for.sh:
#!/usr/bin/env bash
set -e
# import :for from gist: https://gist.github.com/Enteee/c8c11d46a95568be4d331ba58a702b62#file-for
# if you don't like curl imports, source the actual file here.
source <(curl -Ls https://gist.githubusercontent.com/Enteee/c8c11d46a95568be4d331ba58a702b62/raw/)
msg="You should see this three times"
:(){
i="${1}" && shift
echo "${msg}"
sleep 1
if [ "$i" == "1" ]; then sleep 1
elif [ "$i" == "2" ]; then false
elif [ "$i" == "3" ]; then
sleep 3
echo "You should never see this"
fi
} && :for : 1 2 3 || exit $?
echo "You should never see this"
$ ./for.sh; echo $?
You should see this three times
You should see this three times
You should see this three times
1
set -e
fail () {
touch .failure
}
expect () {
wait
if [ -f .failure ]; then
rm -f .failure
exit 1
fi
}
sleep 2 || fail &
sleep 2 && false || fail &
sleep 2 || fail
expect
The set -e at top makes your script stop on failure.
expect will return 1 if any subjob failed.
There can be a case where the process completes before we wait for it. If we trigger wait for a process that has already finished, it will trigger an error like: pid is not a child of this shell. To avoid such cases, the following function can be used to find out whether the process is complete or not:
isProcessComplete(){
PID=$1
while [ -e /proc/$PID ]
do
echo "Process: $PID is still running"
sleep 5
done
echo "Process $PID has finished"
}
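Usage is then simply (the command name is illustrative; the function relies on /proc, so it is Linux-only):
some_long_command &
isProcessComplete $!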
Starting with Bash 5.1, there is a nice new way of waiting for and handling the results of multiple background jobs thanks to the introduction of wait -p:
#!/usr/bin/env bash
# Spawn background jobs
for ((i=0; i < 10; i++)); do
secs=$((RANDOM % 10)); code=$((RANDOM % 256))
(sleep ${secs}; exit ${code}) &
echo "Started background job (pid: $!, sleep: ${secs}, code: ${code})"
done
# Wait for background jobs, print individual results, determine overall result
result=0
while true; do
wait -n -p pid; code=$?
[[ -z "${pid}" ]] && break
echo "Background job ${pid} finished with code ${code}"
(( ${code} != 0 )) && result=1
done
# Return overall result
exit ${result}
I used this recently (thanks to Alnitak):
#!/bin/bash
# activate child monitoring
set -o monitor
# locking subprocess
(while true; do sleep 0.001; done) &
pid=$!
# count, and kill when all done
c=0
function kill_on_count() {
# you could kill on whatever criterion you wish for
# I just counted to simulate bash's wait with no args
[ $c -eq 9 ] && kill $pid
c=$((c+1))
echo -n '.' # async feedback (but you don't know which one)
}
trap "kill_on_count" CHLD
function save_status() {
local i=$1;
local rc=$2;
# do whatever, and here you know which one stopped
# but remember, you're called from a subshell
# so vars have their values at fork time
}
# care must be taken not to spawn more than one child per loop
# e.g don't use `seq 0 9` here!
for i in {0..9}; do
(doCalculations $i; save_status $i $?) &
done
# wait for locking subprocess to be killed
wait $pid
echo
From there one can easily extrapolate, and have a trigger (touch a file, send a signal) and change the counting criteria (count files touched, or whatever) to respond to that trigger. Or if you just want 'any' non zero rc, just kill the lock from save_status.
Trapping the CHLD signal may not work because you can lose some signals if they arrive simultaneously.
#!/bin/bash
trap 'rm -f $tmpfile' EXIT
tmpfile=$(mktemp)
doCalculations() {
echo start job $i...
sleep $((RANDOM % 5))
echo ...end job $i
exit $((RANDOM % 10))
}
number_of_jobs=10
for i in $( seq 1 $number_of_jobs )
do
( trap "echo job$i : exit value : \$? >> $tmpfile" EXIT; doCalculations ) &
done
wait
i=0
while read res; do
echo "$res"
let i++
done < "$tmpfile"
echo $i jobs done !!!
A solution to wait for several subprocesses and exit when any one of them exits with a non-zero status code is to use wait -n:
#!/bin/bash
wait_for_pids()
{
for (( i = 1; i <= $#; i++ )) do
wait -n "$@"
status=$?
echo "received status: "$status
if [ $status -ne 0 ] && [ $status -ne 127 ]; then
exit 1
fi
done
}
sleep_for_10()
{
sleep 10
exit 10
}
sleep_for_20()
{
sleep 20
}
sleep_for_10 &
pid1=$!
sleep_for_20 &
pid2=$!
wait_for_pids $pid2 $pid1
Status code 127 is for a non-existing process, which means the child might have already exited.
I almost fell into the trap of using jobs -p to collect PIDs, which does not work if the child has already exited, as shown in the script below. The solution I picked was simply calling wait -n N times, where N is the number of children I have, which I happen to know deterministically.
#!/usr/bin/env bash
sleeper() {
echo "Sleeper $1"
sleep $2
echo "Exiting $1"
return $3
}
start_sleepers() {
sleeper 1 1 0 &
sleeper 2 2 $1 &
sleeper 3 5 0 &
sleeper 4 6 0 &
sleep 4
}
echo "Using jobs"
start_sleepers 1
pids=( $(jobs -p) )
echo "PIDS: ${pids[*]}"
for pid in "${pids[#]}"; do
wait "$pid"
echo "Exit code $?"
done
echo "Clearing other children"
wait -n; echo "Exit code $?"
wait -n; echo "Exit code $?"
echo "Waiting for N processes"
start_sleepers 2
for ignored in $(seq 1 4); do
wait -n
echo "Exit code $?"
done
Output:
Using jobs
Sleeper 1
Sleeper 2
Sleeper 3
Sleeper 4
Exiting 1
Exiting 2
PIDS: 56496 56497
Exiting 3
Exit code 0
Exiting 4
Exit code 0
Clearing other children
Exit code 0
Exit code 1
Waiting for N processes
Sleeper 1
Sleeper 2
Sleeper 3
Sleeper 4
Exiting 1
Exiting 2
Exit code 0
Exit code 2
Exiting 3
Exit code 0
Exiting 4
Exit code 0

Solved, To wait the end of several processes launched from a loop before to increment variables is not working in Bash [duplicate]

How to wait in a bash script for several subprocesses spawned from that script to finish, and then return exit code !=0 when any of the subprocesses ends with code !=0?
Simple script:
#!/bin/bash
for i in `seq 0 9`; do
doCalculations $i &
done
wait
The above script will wait for all 10 spawned subprocesses, but it will always give exit status 0 (see help wait). How can I modify this script so it will discover exit statuses of spawned subprocesses and return exit code 1 when any of subprocesses ends with code !=0?
Is there any better solution for that than collecting PIDs of the subprocesses, wait for them in order and sum exit statuses?
wait also (optionally) takes the PID of the process to wait for, and with $! you get the PID of the last command launched in the background.
Modify the loop to store the PID of each spawned sub-process into an array, and then loop again waiting on each PID.
# run processes and store pids in array
for i in $n_procs; do
./procs[${i}] &
pids[${i}]=$!
done
# wait for all pids
for pid in ${pids[*]}; do
wait $pid
done
http://jeremy.zawodny.com/blog/archives/010717.html :
#!/bin/bash
FAIL=0
echo "starting"
./sleeper 2 0 &
./sleeper 2 1 &
./sleeper 3 0 &
./sleeper 2 0 &
for job in `jobs -p`
do
echo $job
wait $job || let "FAIL+=1"
done
echo $FAIL
if [ "$FAIL" == "0" ];
then
echo "YAY!"
else
echo "FAIL! ($FAIL)"
fi
Here is simple example using wait.
Run some processes:
$ sleep 10 &
$ sleep 10 &
$ sleep 20 &
$ sleep 20 &
Then wait for them with wait command:
$ wait < <(jobs -p)
Or just wait (without arguments) for all.
This will wait for all jobs in the background are completed.
If the -n option is supplied, waits for the next job to terminate and returns its exit status.
See: help wait and help jobs for syntax.
However the downside is that this will return on only the status of the last ID, so you need to check the status for each subprocess and store it in the variable.
Or make your calculation function to create some file on failure (empty or with fail log), then check of that file if exists, e.g.
$ sleep 20 && true || tee fail &
$ sleep 20 && false || tee fail &
$ wait < <(jobs -p)
$ test -f fail && echo Calculation failed.
How about simply:
#!/bin/bash
pids=""
for i in `seq 0 9`; do
doCalculations $i &
pids="$pids $!"
done
wait $pids
...code continued here ...
Update:
As pointed by multiple commenters, the above waits for all processes to be completed before continuing, but does not exit and fail if one of them fails, it can be made to do with the following modification suggested by #Bryan, #SamBrightman, and others:
#!/bin/bash
pids=""
RESULT=0
for i in `seq 0 9`; do
doCalculations $i &
pids="$pids $!"
done
for pid in $pids; do
wait $pid || let "RESULT=1"
done
if [ "$RESULT" == "1" ];
then
exit 1
fi
...code continued here ...
If you have GNU Parallel installed you can do:
# If doCalculations is a function
export -f doCalculations
seq 0 9 | parallel doCalculations {}
GNU Parallel will give you exit code:
0 - All jobs ran without error.
1-253 - Some of the jobs failed. The exit status gives the number of failed jobs
254 - More than 253 jobs failed.
255 - Other error.
Watch the intro videos to learn more: http://pi.dk/1
Here's what I've come up with so far. I would like to see how to interrupt the sleep command if a child terminates, so that one would not have to tune WAITALL_DELAY to one's usage.
waitall() { # PID...
## Wait for children to exit and indicate whether all exited with 0 status.
local errors=0
while :; do
debug "Processes remaining: $*"
for pid in "$#"; do
shift
if kill -0 "$pid" 2>/dev/null; then
debug "$pid is still alive."
set -- "$#" "$pid"
elif wait "$pid"; then
debug "$pid exited with zero exit status."
else
debug "$pid exited with non-zero exit status."
((++errors))
fi
done
(("$#" > 0)) || break
# TODO: how to interrupt this sleep when a child terminates?
sleep ${WAITALL_DELAY:-1}
done
((errors == 0))
}
debug() { echo "DEBUG: $*" >&2; }
pids=""
for t in 3 5 4; do
sleep "$t" &
pids="$pids $!"
done
waitall $pids
To parallelize this...
for i in $(whatever_list) ; do
do_something $i
done
Translate it to this...
for i in $(whatever_list) ; do echo $i ; done | ## execute in parallel...
(
export -f do_something ## export functions (if needed)
export PATH ## export any variables that are required
xargs -I{} --max-procs 0 bash -c ' ## process in batches...
{
echo "processing {}" ## optional
do_something {}
}'
)
If an error occurs in one process, it won't interrupt the other processes, but it will result in a non-zero exit code from the sequence as a whole.
Exporting functions and variables may or may not be necessary, in any particular case.
You can set --max-procs based on how much parallelism you want (0 means "all at once").
GNU Parallel offers some additional features when used in place of xargs -- but it isn't always installed by default.
The for loop isn't strictly necessary in this example since echo $i is basically just regenerating the output of $(whatever_list). I just think the use of the for keyword makes it a little easier to see what is going on.
Bash string handling can be confusing -- I have found that using single quotes works best for wrapping non-trivial scripts.
You can easily interrupt the entire operation (using ^C or similar), unlike the the more direct approach to Bash parallelism.
Here's a simplified working example...
for i in {0..5} ; do echo $i ; done |xargs -I{} --max-procs 2 bash -c '
{
echo sleep {}
sleep 2s
}'
This is something that I use:
#wait for jobs
for job in `jobs -p`; do wait ${job}; done
This is an expansion on the most-upvoted answer, by #Luca Tettamanti, to make a fully-runnable example.
That answer left me wondering:
What type of variable is n_procs, and what does it contain? What type of variable is procs, and what does it contain? Can someone please update this answer to make it runnable by adding definitions for those variables? I don't understand how.
...and also:
How do you get the return code from the subprocess when it has completed (which is the whole crux of this question)?
Anyway, I figured it out, so here is a fully-runnable example.
Notes:
$! is how to obtain the PID (Process ID) of the last-executed sub-process.
Running any command with the & after it, like cmd &, for example, causes it to run in the background as a parallel suprocess with the main process.
myarray=() is how to create an array in bash.
To learn a tiny bit more about the wait built-in command, see help wait. See also, and especially, the official Bash user manual on Job Control built-ins, such as wait and jobs, here: https://www.gnu.org/software/bash/manual/html_node/Job-Control-Builtins.html#index-wait.
Full, runnable program: wait for all processes to end
multi_process_program.sh (from my eRCaGuy_hello_world repo):
#!/usr/bin/env bash
# This is a special sleep function which returns the number of seconds slept as
# the "error code" or return code" so that we can easily see that we are in
# fact actually obtaining the return code of each process as it finishes.
my_sleep() {
seconds_to_sleep="$1"
sleep "$seconds_to_sleep"
return "$seconds_to_sleep"
}
# Create an array of whatever commands you want to run as subprocesses
procs=() # bash array
procs+=("my_sleep 5")
procs+=("my_sleep 2")
procs+=("my_sleep 3")
procs+=("my_sleep 4")
num_procs=${#procs[#]} # number of processes
echo "num_procs = $num_procs"
# run commands as subprocesses and store pids in an array
pids=() # bash array
for (( i=0; i<"$num_procs"; i++ )); do
echo "cmd = ${procs[$i]}"
${procs[$i]} & # run the cmd as a subprocess
# store pid of last subprocess started; see:
# https://unix.stackexchange.com/a/30371/114401
pids+=("$!")
echo " pid = ${pids[$i]}"
done
# OPTION 1 (comment this option out if using Option 2 below): wait for all pids
for pid in "${pids[#]}"; do
wait "$pid"
return_code="$?"
echo "PID = $pid; return_code = $return_code"
done
echo "All $num_procs processes have ended."
Change the file above to be executable by running chmod +x multi_process_program.sh, then run it like this:
time ./multi_process_program.sh
Sample output. See how the output of the time command in the call shows it took 5.084sec to run. We were also able to successfully retrieve the return code from each subprocess.
eRCaGuy_hello_world/bash$ time ./multi_process_program.sh
num_procs = 4
cmd = my_sleep 5
pid = 21694
cmd = my_sleep 2
pid = 21695
cmd = my_sleep 3
pid = 21697
cmd = my_sleep 4
pid = 21699
PID = 21694; return_code = 5
PID = 21695; return_code = 2
PID = 21697; return_code = 3
PID = 21699; return_code = 4
All 4 processes have ended.
PID 21694 is done; return_code = 5; 3 PIDs remaining.
PID 21695 is done; return_code = 2; 2 PIDs remaining.
PID 21697 is done; return_code = 3; 1 PIDs remaining.
PID 21699 is done; return_code = 4; 0 PIDs remaining.
real 0m5.084s
user 0m0.025s
sys 0m0.061s
Going further: determine live when each individual process ends
If you'd like to do some action as each process finishes, and you don't know when they will finish, you can poll in an infinite while loop to see when each process terminates, then do whatever action you want.
Simply comment out the "OPTION 1" block of code above, and replace it with this "OPTION 2" block instead:
# OR OPTION 2 (comment out Option 1 above if using Option 2): poll to detect
# when each process terminates, and print out when each process finishes!
while true; do
for i in "${!pids[#]}"; do
pid="${pids[$i]}"
# echo "pid = $pid" # debugging
# See if PID is still running; see my answer here:
# https://stackoverflow.com/a/71134379/4561887
ps --pid "$pid" > /dev/null
if [ "$?" -ne 0 ]; then
# PID doesn't exist anymore, meaning it terminated
# 1st, read its return code
wait "$pid"
return_code="$?"
# 2nd, remove this PID from the `pids` array by `unset`ting the
# element at this index; NB: due to how bash arrays work, this does
# NOT actually remove this element from the array. Rather, it
# removes its index from the `"${!pids[#]}"` list of indices,
# adjusts the array count(`"${#pids[#]}"`) accordingly, and it sets
# the value at this index to either a null value of some sort, or
# an empty string (I'm not exactly sure).
unset "pids[$i]"
num_pids="${#pids[#]}"
echo "PID $pid is done; return_code = $return_code;" \
"$num_pids PIDs remaining."
fi
done
# exit the while loop if the `pids` array is empty
if [ "${#pids[#]}" -eq 0 ]; then
break
fi
# Do some small sleep here to keep your polling loop from sucking up
# 100% of one of your CPUs unnecessarily. Sleeping allows other processes
# to run during this time.
sleep 0.1
done
Sample run and output of the full program with Option 1 commented out and Option 2 in-use:
eRCaGuy_hello_world/bash$ ./multi_process_program.sh
num_procs = 4
cmd = my_sleep 5
pid = 22275
cmd = my_sleep 2
pid = 22276
cmd = my_sleep 3
pid = 22277
cmd = my_sleep 4
pid = 22280
PID 22276 is done; return_code = 2; 3 PIDs remaining.
PID 22277 is done; return_code = 3; 2 PIDs remaining.
PID 22280 is done; return_code = 4; 1 PIDs remaining.
PID 22275 is done; return_code = 5; 0 PIDs remaining.
Each of those PID XXXXX is done lines prints out live right after that process has terminated! Notice that even though the process for sleep 5 (PID 22275 in this case) was run first, it finished last, and we successfully detected each process right after it terminated. We also successfully detected each return code, just like in Option 1.
Other References:
*****+ [VERY HELPFUL] Get exit code of a background process - this answer taught me the key principle that (emphasis added):
wait <n> waits until the process with PID is complete (it will block until the process completes, so you might not want to call this until you are sure the process is done), and then returns the exit code of the completed process.
In other words, it helped me know that even after the process is complete, you can still call wait on it to get its return code!
How to check if a process id (PID) exists
my answer
Remove an element from a Bash array - note that elements in a bash array aren't actually deleted, they are just "unset". See my comments in the code above for what that means.
How to use the command-line executable true to make an infinite while loop in bash: https://www.cyberciti.biz/faq/bash-infinite-loop/
I see lots of good examples listed on here, wanted to throw mine in as well.
#! /bin/bash
items="1 2 3 4 5 6"
pids=""
for item in $items; do
sleep $item &
pids+="$! "
done
for pid in $pids; do
wait $pid
if [ $? -eq 0 ]; then
echo "SUCCESS - Job $pid exited with a status of $?"
else
echo "FAILED - Job $pid exited with a status of $?"
fi
done
I use something very similar to start/stop servers/services in parallel and check each exit status. Works great for me. Hope this helps someone out!
I don't believe it's possible with Bash's builtin functionality.
You can get notification when a child exits:
#!/bin/sh
set -o monitor # enable script job control
trap 'echo "child died"' CHLD
However there's no apparent way to get the child's exit status in the signal handler.
Getting that child status is usually the job of the wait family of functions in the lower level POSIX APIs. Unfortunately Bash's support for that is limited - you can wait for one specific child process (and get its exit status) or you can wait for all of them, and always get a 0 result.
What it appears impossible to do is the equivalent of waitpid(-1), which blocks until any child process returns.
The following code will wait for completion of all calculations and return exit status 1 if any of doCalculations fails.
#!/bin/bash
for i in $(seq 0 9); do
(doCalculations $i >&2 & wait %1; echo $?) &
done | grep -qv 0 && exit 1
Here's my version that works for multiple pids, logs warnings if execution takes too long, and stops the subprocesses if execution takes longer than a given value.
[EDIT] I have uploaded my newer implementation of WaitForTaskCompletion, called ExecTasks at https://github.com/deajan/ofunctions.
There's also a compat layer for WaitForTaskCompletion
[/EDIT]
function WaitForTaskCompletion {
local pids="${1}" # pids to wait for, separated by semi-colon
local soft_max_time="${2}" # If execution takes longer than $soft_max_time seconds, will log a warning, unless $soft_max_time equals 0.
local hard_max_time="${3}" # If execution takes longer than $hard_max_time seconds, will stop execution, unless $hard_max_time equals 0.
local caller_name="${4}" # Who called this function
local exit_on_error="${5:-false}" # Should the function exit program on subprocess errors
Logger "${FUNCNAME[0]} called by [$caller_name]."
local soft_alert=0 # Does a soft alert need to be triggered, if yes, send an alert once
local log_ttime=0 # local time instance for comparaison
local seconds_begin=$SECONDS # Seconds since the beginning of the script
local exec_time=0 # Seconds since the beginning of this function
local retval=0 # return value of monitored pid process
local errorcount=0 # Number of pids that finished with errors
local pidCount # number of given pids
IFS=';' read -a pidsArray <<< "$pids"
pidCount=${#pidsArray[#]}
while [ ${#pidsArray[#]} -gt 0 ]; do
newPidsArray=()
for pid in "${pidsArray[#]}"; do
if kill -0 $pid > /dev/null 2>&1; then
newPidsArray+=($pid)
else
wait $pid
result=$?
if [ $result -ne 0 ]; then
errorcount=$((errorcount+1))
Logger "${FUNCNAME[0]} called by [$caller_name] finished monitoring [$pid] with exitcode [$result]."
fi
fi
done
## Log a standby message every hour
exec_time=$(($SECONDS - $seconds_begin))
if [ $((($exec_time + 1) % 3600)) -eq 0 ]; then
if [ $log_ttime -ne $exec_time ]; then
log_ttime=$exec_time
Logger "Current tasks still running with pids [${pidsArray[#]}]."
fi
fi
if [ $exec_time -gt $soft_max_time ]; then
if [ $soft_alert -eq 0 ] && [ $soft_max_time -ne 0 ]; then
Logger "Max soft execution time exceeded for task [$caller_name] with pids [${pidsArray[#]}]."
soft_alert=1
SendAlert
fi
if [ $exec_time -gt $hard_max_time ] && [ $hard_max_time -ne 0 ]; then
Logger "Max hard execution time exceeded for task [$caller_name] with pids [${pidsArray[#]}]. Stopping task execution."
kill -SIGTERM $pid
if [ $? == 0 ]; then
Logger "Task stopped successfully"
else
errrorcount=$((errorcount+1))
fi
fi
fi
pidsArray=("${newPidsArray[#]}")
sleep 1
done
Logger "${FUNCNAME[0]} ended for [$caller_name] using [$pidCount] subprocesses with [$errorcount] errors."
if [ $exit_on_error == true ] && [ $errorcount -gt 0 ]; then
Logger "Stopping execution."
exit 1337
else
return $errorcount
fi
}
# Just a plain stupid logging function to be replaced by yours
function Logger {
local value="${1}"
echo $value
}
Example, wait for all three processes to finish, log a warning if execution takes loger than 5 seconds, stop all processes if execution takes longer than 120 seconds. Don't exit program on failures.
function something {
sleep 10 &
pids="$!"
sleep 12 &
pids="$pids;$!"
sleep 9 &
pids="$pids;$!"
WaitForTaskCompletion $pids 5 120 ${FUNCNAME[0]} false
}
# Launch the function
someting
If you have bash 4.2 or later available, the following might be useful to you. It uses associative arrays to store task names and their "code", as well as task names and their pids. I have also built in a simple rate-limiting method, which might come in handy if your tasks consume a lot of CPU or I/O time and you want to limit the number of concurrent tasks.
The script launches all tasks in the first loop and consumes the results in the second one.
This is a bit overkill for simple cases, but it allows for pretty neat stuff. For example, one can store error messages for each task in another associative array and print them after everything has settled down.
#! /bin/bash
main () {
    local -A pids=()
    local -A tasks=([task1]="echo 1"
                    [task2]="echo 2"
                    [task3]="echo 3"
                    [task4]="false"
                    [task5]="echo 5"
                    [task6]="false")
    local max_concurrent_tasks=2

    for key in "${!tasks[@]}"; do
        while [ $(jobs 2>&1 | grep -c Running) -ge "$max_concurrent_tasks" ]; do
            sleep 1 # gnu sleep allows floating point here...
        done
        ${tasks[$key]} &
        pids+=(["$key"]="$!")
    done

    errors=0
    for key in "${!tasks[@]}"; do
        pid=${pids[$key]}
        local cur_ret=0
        if [ -z "$pid" ]; then
            echo "No Job ID known for the $key process" # should never happen
            cur_ret=1
        else
            wait $pid
            cur_ret=$?
        fi
        if [ "$cur_ret" -ne 0 ]; then
            errors=$(($errors + 1))
            echo "$key (${tasks[$key]}) failed."
        fi
    done

    return $errors
}
main
I've had a go at this and combined all the best parts from the other examples here. This script will execute the checkpids function when any background process exits, and output the exit status without resorting to polling.
#!/bin/bash
set -o monitor
sleep 2 &
sleep 4 && exit 1 &
sleep 6 &
pids=`jobs -p`
checkpids() {
for pid in $pids; do
if kill -0 $pid 2>/dev/null; then
echo $pid is still alive.
elif wait $pid; then
echo $pid exited with zero exit status.
else
echo $pid exited with non-zero exit status.
fi
done
echo
}
trap checkpids CHLD
wait
#!/bin/bash
set -m
for i in `seq 0 9`; do
doCalculations $i &
done
while fg; do true; done
set -m allows you to use fg & bg in a script
fg, in addition to putting the last process in the foreground, has the same exit status as the process it foregrounds
while fg will stop looping when any fg exits with a non-zero exit status
Unfortunately this won't handle the case when a process in the background exits with a non-zero exit status. (The loop won't terminate immediately; it will wait for the previous processes to complete.)
Wait for all jobs and return the exit code of the last failing job. Unlike solutions above, this does not require pid saving, or modifying inner loops of scripts. Just bg away, and wait.
function wait_ex {
# this waits for all jobs and returns the exit code of the last failing job
ecode=0
while true; do
[ -z "$(jobs)" ] && break
wait -n
err="$?"
[ "$err" != "0" ] && ecode="$err"
done
return $ecode
}
EDIT: Fixed the bug where this could be fooled by a script that ran a command that didn't exist.
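For example, a minimal usage sketch of wait_ex (wait -n needs bash 4.3 or newer):
sleep 1 &
(sleep 2; exit 3) &
sleep 1 &
wait_ex
echo "last failing exit code: $?" # prints 3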
Just store the results out of the shell, e.g. in a file.
#!/bin/bash
tmp=/tmp/results
: > $tmp #clean the file
for i in `seq 0 9`; do
(doCalculations $i; echo "$i:$?" >> $tmp) &
done #iterate
wait #wait until all ready
sort $tmp | grep -v ':0' #... handle as required
I've just been modifying a script to background and parallelise a process.
I did some experimenting (on Solaris, with both bash and ksh) and discovered that 'wait' outputs the exit status if it's not zero, or a list of jobs that returned non-zero exit status when no PID argument is provided. E.g.
Bash:
$ sleep 20 && exit 1 &
$ sleep 10 && exit 2 &
$ wait
[1]- Exit 1 sleep 20 && exit 1
[2]+ Exit 2 sleep 10 && exit 2
Ksh:
$ sleep 20 && exit 1 &
$ sleep 10 && exit 2 &
$ wait
[1]+ Done(1) sleep 20 && exit 1
[2]+ Done(2) sleep 10 && exit 2
This output is written to stderr, so a simple solution to the OP's example could be:
#!/bin/bash
trap "rm -f /tmp/x.$$" EXIT
for i in `seq 0 9`; do
doCalculations $i &
done
wait 2> /tmp/x.$$
if [ `wc -l < /tmp/x.$$` -gt 0 ] ; then # read from stdin so wc prints only the count, not "count filename"
exit 1
fi
While this:
wait 2> >(wc -l)
will also return a count but without the tmp file. This might also be used this way, for example:
wait 2> >(if [ `wc -l` -gt 0 ] ; then echo "ERROR"; fi)
But this isn't very much more useful than the tmp file IMO. I couldn't find a useful way to avoid the tmp file whilst also avoiding running the "wait" in a subshell, which won't work at all.
I needed this, but the target process wasn't a child of the current shell, in which case wait $PID doesn't work. I found the following alternative instead:
while [ -e /proc/$PID ]; do sleep 0.1 ; done
That relies on the presence of procfs, which may not be available (Mac doesn't provide it for example). So for portability, you could use this instead:
while ps -p $PID >/dev/null ; do sleep 0.1 ; done
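A small wrapper makes the portable variant reusable; just a sketch (the function name is my own, and the fractional sleep needs GNU sleep or similar):
# Block until an arbitrary process (not necessarily a child) exits.
wait_for_external() {
    local pid=$1
    while ps -p "$pid" > /dev/null; do
        sleep 0.1
    done
}
wait_for_external 12345 # hypothetical PID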
There are already a lot of answers here, but I am surprised no one seems to have suggested using arrays... So here's what I did - this might be useful to some in the future.
n=10 # run 10 jobs
c=0
PIDS=()
while (( c < n )); do
    my_function_or_command &
    PID=$!
    echo "Launched job as PID=$PID"
    PIDS+=($PID)
    (( c+=1 ))
done
# collect launched jobs
for pid in "${PIDS[#]}"
do
wait $pid || echo "failed job PID=$pid"
done
This works, and should be just as good if not better than @HoverHell's answer!
#!/usr/bin/env bash
set -m # allow for job control
EXIT_CODE=0 # exit code of overall script

function foo() {
    echo "CHLD exit code is $1"
    echo "CHLD pid is $2"
    echo $(jobs -l)
    for job in `jobs -p`; do
        echo "PID => ${job}"
        wait ${job} || { echo "At least one test failed with exit code => $?"; EXIT_CODE=1; } # group the commands so EXIT_CODE is only set on failure
    done
}

trap 'foo $? $$' CHLD

DIRN=$(dirname "$0")

commands=(
    '{ echo "foo" && exit 4; }'
    '{ echo "bar" && exit 3; }'
    '{ echo "baz" && exit 5; }'
)

clen=`expr "${#commands[@]}" - 1` # get length of commands - 1

for i in `seq 0 "$clen"`; do
    (echo "${commands[$i]}" | bash) & # run the command via bash in a subshell
    echo "command $i has been issued as a background job"
done

# wait for all to finish
wait

echo "EXIT_CODE => $EXIT_CODE"
exit "$EXIT_CODE"
# end
And of course, I have immortalized this script in an NPM project which allows you to run bash commands in parallel, useful for testing:
https://github.com/ORESoftware/generic-subshell
Exactly for this purpose I wrote a bash function called :for.
Note: :for not only preserves and returns the exit code of the failing function, but also terminates all parallel running instances, which might not be needed in this case.
#!/usr/bin/env bash
# Wait for pids to terminate. If one pid exits with
# a non zero exit code, send the TERM signal to all
# processes and retain that exit code
#
# usage:
# :wait 123 32
function :wait(){
    local pids=("$@")
    [ ${#pids[@]} -eq 0 ] && return $?
    trap 'kill -INT "${pids[@]}" &>/dev/null || true; trap - INT' INT
    trap 'kill -TERM "${pids[@]}" &>/dev/null || true; trap - RETURN TERM' RETURN TERM
    for pid in "${pids[@]}"; do
        wait "${pid}" || return $?
    done
    trap - INT RETURN TERM
}
# Run a function in parallel for each argument.
# Stop all instances if one exits with a non zero
# exit code
#
# usage:
# :for func 1 2 3
#
# env:
# FOR_PARALLEL: Max functions running in parallel
function :for(){
    local f="${1}" && shift
    local i=0
    local pids=()
    for arg in "$@"; do
        ( ${f} "${arg}" ) &
        pids+=("$!")
        if [ ! -z ${FOR_PARALLEL+x} ]; then
            (( i=(i+1)%${FOR_PARALLEL} ))
            if (( i==0 )); then
                :wait "${pids[@]}" || return $?
                pids=()
            fi
        fi
    done && [ ${#pids[@]} -eq 0 ] || :wait "${pids[@]}" || return $?
}
usage
for.sh:
#!/usr/bin/env bash
set -e
# import :for from gist: https://gist.github.com/Enteee/c8c11d46a95568be4d331ba58a702b62#file-for
# if you don't like curl imports, source the actual file here.
source <(curl -Ls https://gist.githubusercontent.com/Enteee/c8c11d46a95568be4d331ba58a702b62/raw/)
msg="You should see this three times"
:(){
i="${1}" && shift
echo "${msg}"
sleep 1
if [ "$i" == "1" ]; then sleep 1
elif [ "$i" == "2" ]; then false
elif [ "$i" == "3" ]; then
sleep 3
echo "You should never see this"
fi
} && :for : 1 2 3 || exit $?
echo "You should never see this"
$ ./for.sh; echo $?
You should see this three times
You should see this three times
You should see this three times
1
set -e
fail () {
touch .failure
}
expect () {
wait
if [ -f .failure ]; then
rm -f .failure
exit 1
fi
}
sleep 2 || fail &
sleep 2 && false || fail &
sleep 2 || fail
expect
The set -e at top makes your script stop on failure.
expect will return 1 if any subjob failed.
There can be a case where the process completes before we wait for it. If we trigger wait for a process that has already finished, it can trigger an error like "pid is not a child of this shell". To avoid such cases, the following function can be used to find whether the process is complete or not:
isProcessComplete(){
PID=$1
while [ -e /proc/$PID ]
do
echo "Process: $PID is still running"
sleep 5
done
echo "Process $PID has finished"
}
Starting with Bash 5.1, there is a nice new way of waiting for and handling the results of multiple background jobs thanks to the introduction of wait -p:
#!/usr/bin/env bash
# Spawn background jobs
for ((i=0; i < 10; i++)); do
secs=$((RANDOM % 10)); code=$((RANDOM % 256))
(sleep ${secs}; exit ${code}) &
echo "Started background job (pid: $!, sleep: ${secs}, code: ${code})"
done
# Wait for background jobs, print individual results, determine overall result
result=0
while true; do
wait -n -p pid; code=$?
[[ -z "${pid}" ]] && break
echo "Background job ${pid} finished with code ${code}"
(( ${code} != 0 )) && result=1
done
# Return overall result
exit ${result}
I used this recently (thanks to Alnitak):
#!/bin/bash
# activate child monitoring
set -o monitor
# locking subprocess
(while true; do sleep 0.001; done) &
pid=$!
# count, and kill when all done
c=0
function kill_on_count() {
# you could kill on whatever criterion you wish for
# I just counted to simulate bash's wait with no args
[ $c -eq 9 ] && kill $pid
c=$((c+1))
echo -n '.' # async feedback (but you don't know which one)
}
trap "kill_on_count" CHLD
function save_status() {
local i=$1;
local rc=$2;
# do whatever, and here you know which one stopped
# but remember, you're called from a subshell
# so vars have their values at fork time
}
# care must be taken not to spawn more than one child per loop
# e.g. don't use `seq 0 9` here!
for i in {0..9}; do
(doCalculations $i; save_status $i $?) &
done
# wait for locking subprocess to be killed
wait $pid
echo
From there one can easily extrapolate, and have a trigger (touch a file, send a signal) and change the counting criteria (count files touched, or whatever) to respond to that trigger. Or if you just want 'any' non zero rc, just kill the lock from save_status.
Trapping the CHLD signal may not work because you can lose some signals if they arrive simultaneously.
#!/bin/bash
trap 'rm -f $tmpfile' EXIT
tmpfile=$(mktemp)
doCalculations() {
echo start job $i...
sleep $((RANDOM % 5))
echo ...end job $i
exit $((RANDOM % 10))
}
number_of_jobs=10
for i in $( seq 1 $number_of_jobs )
do
( trap "echo job$i : exit value : \$? >> $tmpfile" EXIT; doCalculations ) &
done
wait
i=0
while read res; do
echo "$res"
let i++
done < "$tmpfile"
echo $i jobs done !!!
A solution to wait for several subprocesses and to exit when any one of them exits with a non-zero status code is to use 'wait -n':
#!/bin/bash
wait_for_pids()
{
for (( i = 1; i <= $#; i++ )) do
wait -n "$@"
status=$?
echo "received status: "$status
if [ $status -ne 0 ] && [ $status -ne 127 ]; then
exit 1
fi
done
}
sleep_for_10()
{
sleep 10
exit 10
}
sleep_for_20()
{
sleep 20
}
sleep_for_10 &
pid1=$!
sleep_for_20 &
pid2=$!
wait_for_pids $pid2 $pid1
Status code '127' is for a non-existing process, which means the child might have already exited.
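You can see the 127 case directly by waiting on a PID that is not a child of the shell (999999 here is just a made-up PID):
wait 999999 2>/dev/null # not a child of this shell
echo $? # 127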
I almost fell into the trap of using jobs -p to collect PIDs, which does not work if the child has already exited, as shown in the script below. The solution I picked was simply calling wait -n N times, where N is the number of children I have, which I happen to know deterministically.
#!/usr/bin/env bash
sleeper() {
echo "Sleeper $1"
sleep $2
echo "Exiting $1"
return $3
}
start_sleepers() {
sleeper 1 1 0 &
sleeper 2 2 $1 &
sleeper 3 5 0 &
sleeper 4 6 0 &
sleep 4
}
echo "Using jobs"
start_sleepers 1
pids=( $(jobs -p) )
echo "PIDS: ${pids[*]}"
for pid in "${pids[#]}"; do
wait "$pid"
echo "Exit code $?"
done
echo "Clearing other children"
wait -n; echo "Exit code $?"
wait -n; echo "Exit code $?"
echo "Waiting for N processes"
start_sleepers 2
for ignored in $(seq 1 4); do
wait -n
echo "Exit code $?"
done
Output:
Using jobs
Sleeper 1
Sleeper 2
Sleeper 3
Sleeper 4
Exiting 1
Exiting 2
PIDS: 56496 56497
Exiting 3
Exit code 0
Exiting 4
Exit code 0
Clearing other children
Exit code 0
Exit code 1
Waiting for N processes
Sleeper 1
Sleeper 2
Sleeper 3
Sleeper 4
Exiting 1
Exiting 2
Exit code 0
Exit code 2
Exiting 3
Exit code 0
Exiting 4
Exit code 0

bash shutdown hook; or, kill all background processes when main process is killed

I have a bash script that runs endless commands as background processes:
#!/bin/bash
function xyz() {
# some awk command
}
endlesscommand "param 1" | xyz & # async
pids=$!
endlesscommand "param 2" | xyz & # async
pids="$pids "$!
endlesscommand "param 3" | xyz # sync so the script doesn't leave
The only way to stop this script is (must be) Ctrl-C or kill and when that happens, I need to kill all the background processes listed in the $pids variable.
How do I do that?
If it was possible to catch the kill signal on the main process and execute a function when that happens (shutdown hook), I would do something like:
for pid in $pids; do kill $pid; done
But I can't find how to do this...
Here's a trap that doesn't need you to track pids:
trap 'jobs -p | xargs kill' EXIT
EDIT: @Barmar asked if this works within non-sourced scripts, where job control isn't usually available. It does. Consider this script:
$ cat no-job-control
#! /bin/bash
set -e -o pipefail
# Prove job control is off
if suspend
then
echo suspended
else
echo suspension failed, job control must be off
fi
echo
# Set up the trap
trap 'jobs -p | xargs kill' EXIT
# Make some work
(echo '=> Starting 0'; sleep 5; echo '=> Finishing 0') &
(echo '=> Starting 1'; sleep 5; echo '=> Finishing 1') &
(echo '=> Starting 2'; sleep 5; echo '=> Finishing 2') &
echo "What's in jobs -p?"
echo
jobs -p
echo
echo "Ok, exiting now"
echo
When run we see the pids of the three group leaders, and then see them killed:
$ ./no-job-control
./no-job-control: line 6: suspend: cannot suspend: no job control
suspension failed, job control must be off
=> Starting 0
What's in jobs -p?
=> Starting 1
54098
54099
54100
Ok, exiting now
=> Starting 2
./no-job-control: line 31: 54098 Terminated: 15 ( echo '=> Starting 0'; sleep 5; echo '=> Finishing 0' )
./no-job-control: line 31: 54099 Terminated: 15 ( echo '=> Starting 1'; sleep 5; echo '=> Finishing 1' )
./no-job-control: line 31: 54100 Terminated: 15 ( echo '=> Starting 2'; sleep 5; echo '=> Finishing 2' )
If we instead comment out the trap line and re-run, the three jobs do not die and in fact print out their final messages a few seconds later. Notice the returned prompt interleaved with the final outputs.
$ ./no-job-control
./no-job-control: line 6: suspend: cannot suspend: no job control
suspension failed, job control must be off
=> Starting 0
What's in jobs -p?
54110
54111
54112
=> Starting 1
Ok, exiting now
=> Starting 2
$ => Finishing 0
=> Finishing 2
=> Finishing 1
You can make use of pgrep and a function to kill all processes created under the main process, like this. This kills not only the direct child processes but also those created under them.
#!/bin/bash
function killchildren {
    local LIST=() IFS=$'\n'
    read -a LIST -d '' < <(exec pgrep -P "$1")
    local A SIGNAL="${2:-SIGTERM}"
    for A in "${LIST[@]}"; do
        killchildren_ "$A" "$SIGNAL"
    done
}
function killchildren_ {
    local LIST=()
    read -a LIST -d '' < <(exec pgrep -P "$1")
    kill -s "$2" "$1"
    if [[ ${#LIST[@]} -gt 0 ]]; then
        local A
        for A in "${LIST[@]}"; do
            killchildren_ "$A" "$2"
        done
    fi
}
trap 'killchildren "$BASHPID"' EXIT
endlesscommand "param 1" &
endlesscommand "param 2" &
endlesscommand "param 3" &
while pgrep -P "$BASHPID" >/dev/null; do
wait
done
As for your original code, it would be better to just use arrays, and you also don't need to use a for loop:
#!/bin/bash
trap 'kill "${pids[@]}"' EXIT
pids=()
endlesscommand "param 1" & # async
pids+=("$!")
endlesscommand "param 2" & # async
pids+=("$!")
endlesscommand "param 3" & # syncing this is not a good idea since if the main process would end along with it if it ends earlier.
pids+=("$!")
while pgrep -P "$BASHPID" >/dev/null; do
wait
done
Original function reference: http://www.linuxquestions.org/questions/blog/konsolebox-210384/bash-functions-to-list-and-kill-or-send-signals-to-process-trees-34624/
kill `ps axl | grep "endlesscommand" | awk '{printf $4" "}'`
This looks up the parent process IDs of everything matching "endlesscommand" (column 4 of ps axl is the PPID) and kills them.
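Since the question says multiple instances may be running, a narrower variant (assuming pkill is available) signals only the children of the current script rather than every matching process on the machine:
pkill -TERM -P $$ # TERM every direct child of this shell
# (direct children only; this does not descend into grandchildren)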

How to terminate script's process tree in Cygwin bash from bash script

I have a Cygwin bash script that I need to watch and terminate under certain conditions - specifically, after a certain file has been created. I'm having difficulty figuring out how exactly to terminate the script with the same level of completeness that Ctrl+C does, however.
Here's a simple script (called test1) that does little more than wait around to be terminated.
#!/bin/bash
test -f kill_me && rm kill_me
touch kill_me
tail -f kill_me
If this script is run in the foreground, Ctrl+C will terminate both the tail and the script itself. If the script is run in the background, a kill %1 (assuming it is job 1) will also terminate both tail and the script.
However, when I try to do the same thing from a script, I'm finding that only the bash process running the script is terminated, while tail hangs around disconnected from its parent. Here's one way I tried (test2):
#!/bin/bash
test -f kill_me && rm kill_me
(
touch kill_me
tail -f kill_me
) &
while true; do
sleep 1
test -f kill_me && {
kill %1
exit
}
done
If this is run, the bash subshell running in the background is terminated OK, but tail still hangs around.
If I use an explicitly separate script, like this, it still doesn't work (test3):
#!/bin/bash
test -f kill_me && rm kill_me
# assuming test1 above is included in the same directory
./test1 &
while true; do
sleep 1
test -f kill_me && {
kill %1
exit
}
done
tail is still hanging around after this script is run.
In my actual case, the process creating files is not particularly instrumentable, so I can't get it to terminate of its own accord; by finding out when it has created a particular file, however, I can at that point know that it's OK to terminate it. Unfortunately, I can't use a simple killall or equivalent, as there may be multiple instances running, and I only want to kill the specific instance.
/bin/kill (the program, not the bash builtin) interprets a negative PID as “kill the process group” which will get all the children too.
Changing
kill %1
to
/bin/kill -- -$$
works for me.
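Applied to the test2 script above, the result looks something like this (just a sketch; it assumes the script is its own process-group leader, which is the case when it's launched as a job from an interactive shell):
#!/bin/bash
test -f kill_me && rm kill_me
(
    touch kill_me
    tail -f kill_me
) &
while true; do
    sleep 1
    test -f kill_me && {
        /bin/kill -- -$$ # signal the whole process group, tail included
        exit # likely never reached; the TERM above also ends this script
    }
done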
Adam's link put me in a direction that will solve the problem, albeit not without some minor caveats.
The script doesn't work unmodified under Cygwin, so I rewrote it, and with a couple more options. Here's my version:
#!/bin/bash
function usage
{
echo "usage: $(basename $0) [-c] [-<sigspec>] <pid>..."
echo "Recursively kill the process tree(s) rooted by <pid>."
echo "Options:"
echo " -c Only kill children; don't kill root"
echo " <sigspec> Arbitrary argument to pass to kill, expected to be signal specification"
exit 1
}
kill_parent=1
sig_spec=-9
function do_kill # <pid>...
{
kill "$sig_spec" "$#"
}
function kill_children # pid
{
local target=$1
local pid=
local ppid=
local i
# Returns alternating ids: first is pid, second is parent
for i in $(ps -f | tail +2 | cut -b 10-24); do
if [ ! -n "$pid" ]; then
# first in pair
pid=$i
else
# second in pair
ppid=$i
(( ppid == target && pid != $$ )) && {
kill_children $pid
do_kill $pid
}
# reset pid for next pair
pid=
fi
done
}
test -n "$1" || usage
while [ -n "$1" ]; do
case "$1" in
-c)
kill_parent=0
;;
-*)
sig_spec="$1"
;;
*)
kill_children $1
(( kill_parent )) && do_kill $1
;;
esac
shift
done
The only real downside is the somewhat ugly message that bash prints out when it receives a fatal signal, namely "Terminated", "Killed" or "Interrupted" (depending on what you send). However, I can live with that in batch scripts.
This script looks like it'll do the job:
#!/bin/bash
# Author: Sunil Alankar
##
# recursive kill. kills the process tree down from the specified pid
#
# foreach child of pid, recursive call dokill
dokill() {
local pid=$1
local itsparent=""
local aprocess=""
local x=""
# next line is a single line
for x in `/bin/ps -f | sed -e '/UID/d;s/[a-zA-Z0-9_-]\{1,\}
\{1,\}\([0-9]\{1,\}\) \{1,\}\([0-9]\{1,\}\) .*/\1 \2/g'`
do
if [ "$aprocess" = "" ]; then
aprocess=$x
itsparent=""
continue
else
itsparent=$x
if [ "$itsparent" = "$pid" ]; then
dokill $aprocess
fi
aprocess=""
fi
done
echo "killing $1"
kill -9 $1 > /dev/null 2>&1
}
case $# in
1) PID=$1
;;
*) echo "usage: rekill <top pid to kill>";
exit 1;
;;
esac
dokill $PID

How to wait in bash for several subprocesses to finish, and return exit code !=0 when any subprocess ends with code !=0?

How to wait in a bash script for several subprocesses spawned from that script to finish, and then return exit code !=0 when any of the subprocesses ends with code !=0?
Simple script:
#!/bin/bash
for i in `seq 0 9`; do
doCalculations $i &
done
wait
The above script will wait for all 10 spawned subprocesses, but it will always give exit status 0 (see help wait). How can I modify this script so it will discover exit statuses of spawned subprocesses and return exit code 1 when any of subprocesses ends with code !=0?
Is there any better solution for that than collecting PIDs of the subprocesses, wait for them in order and sum exit statuses?
wait also (optionally) takes the PID of the process to wait for, and with $! you get the PID of the last command launched in the background.
Modify the loop to store the PID of each spawned sub-process into an array, and then loop again waiting on each PID.
# run processes and store pids in array
for i in $n_procs; do
./procs[${i}] &
pids[${i}]=$!
done
# wait for all pids
for pid in ${pids[*]}; do
wait $pid
done
http://jeremy.zawodny.com/blog/archives/010717.html :
#!/bin/bash
FAIL=0
echo "starting"
./sleeper 2 0 &
./sleeper 2 1 &
./sleeper 3 0 &
./sleeper 2 0 &
for job in `jobs -p`
do
echo $job
wait $job || let "FAIL+=1"
done
echo $FAIL
if [ "$FAIL" == "0" ];
then
echo "YAY!"
else
echo "FAIL! ($FAIL)"
fi
Here is a simple example using wait.
Run some processes:
$ sleep 10 &
$ sleep 10 &
$ sleep 20 &
$ sleep 20 &
Then wait for them with wait command:
$ wait < <(jobs -p)
Or just wait (without arguments) for all.
This will wait until all jobs in the background are completed.
If the -n option is supplied, waits for the next job to terminate and returns its exit status.
See: help wait and help jobs for syntax.
However, the downside is that this returns only the status of the last job, so you need to check the status for each subprocess and store it in a variable (see the sketch below).
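A minimal sketch of that per-job loop (wait -n needs bash 4.3 or newer; doCalculations stands in for your command):
fail=0
for i in `seq 0 9`; do doCalculations $i & done
for i in `seq 0 9`; do
    wait -n || fail=1 # reap whichever job finishes next
done
exit $fail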
Or make your calculation function create some file on failure (empty, or with a failure log), then check whether that file exists, e.g.
$ sleep 20 && true || tee fail &
$ sleep 20 && false || tee fail &
$ wait < <(jobs -p)
$ test -f fail && echo Calculation failed.
How about simply:
#!/bin/bash
pids=""
for i in `seq 0 9`; do
doCalculations $i &
pids="$pids $!"
done
wait $pids
...code continued here ...
Update:
As pointed out by multiple commenters, the above waits for all processes to be completed before continuing, but does not exit and fail if one of them fails. It can be made to do so with the following modification suggested by @Bryan, @SamBrightman, and others:
#!/bin/bash
pids=""
RESULT=0
for i in `seq 0 9`; do
doCalculations $i &
pids="$pids $!"
done
for pid in $pids; do
wait $pid || let "RESULT=1"
done
if [ "$RESULT" == "1" ];
then
exit 1
fi
...code continued here ...
If you have GNU Parallel installed you can do:
# If doCalculations is a function
export -f doCalculations
seq 0 9 | parallel doCalculations {}
GNU Parallel will give you exit code:
0 - All jobs ran without error.
1-253 - Some of the jobs failed. The exit status gives the number of failed jobs
254 - More than 253 jobs failed.
255 - Other error.
Watch the intro videos to learn more: http://pi.dk/1
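If, rather than counting failures, you want to stop everything as soon as any job fails, GNU Parallel can also abort the whole run (a sketch, assuming a version new enough to support --halt):
export -f doCalculations
seq 0 9 | parallel --halt now,fail=1 doCalculations {} # kill remaining jobs on first failure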
Here's what I've come up with so far. I would like to see how to interrupt the sleep command if a child terminates, so that one would not have to tune WAITALL_DELAY to one's usage.
waitall() { # PID...
## Wait for children to exit and indicate whether all exited with 0 status.
local errors=0
while :; do
debug "Processes remaining: $*"
for pid in "$#"; do
shift
if kill -0 "$pid" 2>/dev/null; then
debug "$pid is still alive."
set -- "$#" "$pid"
elif wait "$pid"; then
debug "$pid exited with zero exit status."
else
debug "$pid exited with non-zero exit status."
((++errors))
fi
done
(("$#" > 0)) || break
# TODO: how to interrupt this sleep when a child terminates?
sleep ${WAITALL_DELAY:-1}
done
((errors == 0))
}
debug() { echo "DEBUG: $*" >&2; }
pids=""
for t in 3 5 4; do
sleep "$t" &
pids="$pids $!"
done
waitall $pids
To parallelize this...
for i in $(whatever_list) ; do
do_something $i
done
Translate it to this...
for i in $(whatever_list) ; do echo $i ; done | ## execute in parallel...
(
export -f do_something ## export functions (if needed)
export PATH ## export any variables that are required
xargs -I{} --max-procs 0 bash -c ' ## process in batches...
{
echo "processing {}" ## optional
do_something {}
}'
)
If an error occurs in one process, it won't interrupt the other processes, but it will result in a non-zero exit code from the sequence as a whole.
Exporting functions and variables may or may not be necessary, in any particular case.
You can set --max-procs based on how much parallelism you want (0 means "all at once").
GNU Parallel offers some additional features when used in place of xargs -- but it isn't always installed by default.
The for loop isn't strictly necessary in this example since echo $i is basically just regenerating the output of $(whatever_list). I just think the use of the for keyword makes it a little easier to see what is going on.
Bash string handling can be confusing -- I have found that using single quotes works best for wrapping non-trivial scripts.
You can easily interrupt the entire operation (using ^C or similar), unlike the more direct approach to Bash parallelism.
Here's a simplified working example...
for i in {0..5} ; do echo $i ; done |xargs -I{} --max-procs 2 bash -c '
{
echo sleep {}
sleep 2s
}'
This is something that I use:
#wait for jobs
for job in `jobs -p`; do wait ${job}; done
This is an expansion on the most-upvoted answer, by @Luca Tettamanti, to make a fully-runnable example.
That answer left me wondering:
What type of variable is n_procs, and what does it contain? What type of variable is procs, and what does it contain? Can someone please update this answer to make it runnable by adding definitions for those variables? I don't understand how.
...and also:
How do you get the return code from the subprocess when it has completed (which is the whole crux of this question)?
Anyway, I figured it out, so here is a fully-runnable example.
Notes:
$! is how to obtain the PID (Process ID) of the last-executed sub-process.
Running any command with the & after it, like cmd &, for example, causes it to run in the background as a parallel subprocess alongside the main process.
myarray=() is how to create an array in bash.
To learn a tiny bit more about the wait built-in command, see help wait. See also, and especially, the official Bash user manual on Job Control built-ins, such as wait and jobs, here: https://www.gnu.org/software/bash/manual/html_node/Job-Control-Builtins.html#index-wait.
Full, runnable program: wait for all processes to end
multi_process_program.sh (from my eRCaGuy_hello_world repo):
#!/usr/bin/env bash
# This is a special sleep function which returns the number of seconds slept as
# the "error code" or return code" so that we can easily see that we are in
# fact actually obtaining the return code of each process as it finishes.
my_sleep() {
seconds_to_sleep="$1"
sleep "$seconds_to_sleep"
return "$seconds_to_sleep"
}
# Create an array of whatever commands you want to run as subprocesses
procs=() # bash array
procs+=("my_sleep 5")
procs+=("my_sleep 2")
procs+=("my_sleep 3")
procs+=("my_sleep 4")
num_procs=${#procs[@]} # number of processes
echo "num_procs = $num_procs"
# run commands as subprocesses and store pids in an array
pids=() # bash array
for (( i=0; i<"$num_procs"; i++ )); do
echo "cmd = ${procs[$i]}"
${procs[$i]} & # run the cmd as a subprocess
# store pid of last subprocess started; see:
# https://unix.stackexchange.com/a/30371/114401
pids+=("$!")
echo " pid = ${pids[$i]}"
done
# OPTION 1 (comment this option out if using Option 2 below): wait for all pids
for pid in "${pids[#]}"; do
wait "$pid"
return_code="$?"
echo "PID = $pid; return_code = $return_code"
done
echo "All $num_procs processes have ended."
Change the file above to be executable by running chmod +x multi_process_program.sh, then run it like this:
time ./multi_process_program.sh
Sample output. See how the output of the time command in the call shows it took 5.084sec to run. We were also able to successfully retrieve the return code from each subprocess.
eRCaGuy_hello_world/bash$ time ./multi_process_program.sh
num_procs = 4
cmd = my_sleep 5
pid = 21694
cmd = my_sleep 2
pid = 21695
cmd = my_sleep 3
pid = 21697
cmd = my_sleep 4
pid = 21699
PID = 21694; return_code = 5
PID = 21695; return_code = 2
PID = 21697; return_code = 3
PID = 21699; return_code = 4
All 4 processes have ended.
PID 21694 is done; return_code = 5; 3 PIDs remaining.
PID 21695 is done; return_code = 2; 2 PIDs remaining.
PID 21697 is done; return_code = 3; 1 PIDs remaining.
PID 21699 is done; return_code = 4; 0 PIDs remaining.
real 0m5.084s
user 0m0.025s
sys 0m0.061s
Going further: determine live when each individual process ends
If you'd like to do some action as each process finishes, and you don't know when they will finish, you can poll in an infinite while loop to see when each process terminates, then do whatever action you want.
Simply comment out the "OPTION 1" block of code above, and replace it with this "OPTION 2" block instead:
# OR OPTION 2 (comment out Option 1 above if using Option 2): poll to detect
# when each process terminates, and print out when each process finishes!
while true; do
for i in "${!pids[#]}"; do
pid="${pids[$i]}"
# echo "pid = $pid" # debugging
# See if PID is still running; see my answer here:
# https://stackoverflow.com/a/71134379/4561887
ps --pid "$pid" > /dev/null
if [ "$?" -ne 0 ]; then
# PID doesn't exist anymore, meaning it terminated
# 1st, read its return code
wait "$pid"
return_code="$?"
# 2nd, remove this PID from the `pids` array by `unset`ting the
# element at this index; NB: due to how bash arrays work, this does
# NOT actually remove this element from the array. Rather, it
# removes its index from the `"${!pids[@]}"` list of indices,
# adjusts the array count (`"${#pids[@]}"`) accordingly, and it sets
# the value at this index to either a null value of some sort, or
# an empty string (I'm not exactly sure).
unset "pids[$i]"
num_pids="${#pids[@]}"
echo "PID $pid is done; return_code = $return_code;" \
"$num_pids PIDs remaining."
fi
done
# exit the while loop if the `pids` array is empty
if [ "${#pids[#]}" -eq 0 ]; then
break
fi
# Do some small sleep here to keep your polling loop from sucking up
# 100% of one of your CPUs unnecessarily. Sleeping allows other processes
# to run during this time.
sleep 0.1
done
Sample run and output of the full program with Option 1 commented out and Option 2 in-use:
eRCaGuy_hello_world/bash$ ./multi_process_program.sh
num_procs = 4
cmd = my_sleep 5
pid = 22275
cmd = my_sleep 2
pid = 22276
cmd = my_sleep 3
pid = 22277
cmd = my_sleep 4
pid = 22280
PID 22276 is done; return_code = 2; 3 PIDs remaining.
PID 22277 is done; return_code = 3; 2 PIDs remaining.
PID 22280 is done; return_code = 4; 1 PIDs remaining.
PID 22275 is done; return_code = 5; 0 PIDs remaining.
Each of those PID XXXXX is done lines prints out live right after that process has terminated! Notice that even though the process for sleep 5 (PID 22275 in this case) was run first, it finished last, and we successfully detected each process right after it terminated. We also successfully detected each return code, just like in Option 1.
Other References:
*****+ [VERY HELPFUL] Get exit code of a background process - this answer taught me the key principle that (emphasis added):
wait <n> waits until the process with PID <n> is complete (it will block until the process completes, so you might not want to call this until you are sure the process is done), and then returns the exit code of the completed process.
In other words, it helped me know that even after the process is complete, you can still call wait on it to get its return code! (A quick sketch of this follows the list below.)
How to check if a process id (PID) exists
my answer
Remove an element from a Bash array - note that elements in a bash array aren't actually deleted, they are just "unset". See my comments in the code above for what that means.
How to use the command-line executable true to make an infinite while loop in bash: https://www.cyberciti.biz/faq/bash-infinite-loop/
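A quick sketch of that key principle, that wait still returns the status even after the child has already died (run it as a script, not interactively, since interactive shells report and clear finished jobs at each prompt):
sleep 1 &
pid=$!
sleep 3 # the child is long gone by now
wait "$pid" # bash remembered its exit status
echo "exit code: $?" # prints 0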
I see lots of good examples listed on here, wanted to throw mine in as well.
#! /bin/bash
items="1 2 3 4 5 6"
pids=""
for item in $items; do
sleep $item &
pids+="$! "
done
for pid in $pids; do
wait $pid
status=$? # capture before the test below overwrites $?
if [ $status -eq 0 ]; then
echo "SUCCESS - Job $pid exited with a status of $status"
else
echo "FAILED - Job $pid exited with a status of $status"
fi
done
I use something very similar to start/stop servers/services in parallel and check each exit status. Works great for me. Hope this helps someone out!
I don't believe it's possible with Bash's builtin functionality.
You can get notification when a child exits:
#!/bin/sh
set -o monitor # enable script job control
trap 'echo "child died"' CHLD
However there's no apparent way to get the child's exit status in the signal handler.
Getting that child status is usually the job of the wait family of functions in the lower level POSIX APIs. Unfortunately Bash's support for that is limited - you can wait for one specific child process (and get its exit status) or you can wait for all of them, and always get a 0 result.
What appears to be impossible is the equivalent of waitpid(-1), which blocks until any one child process returns.
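(Note: newer bash versions do provide this. wait -n, available since bash 4.3, blocks until any one child exits and returns its status; several answers here rely on it. A minimal sketch:)
(sleep 1; exit 7) &
sleep 5 &
wait -n # returns as soon as the first child exits
echo $? # 7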
The following code will wait for completion of all calculations and return exit status 1 if any of doCalculations fails.
#!/bin/bash
for i in $(seq 0 9); do
(doCalculations $i >&2 & wait %1; echo $?) &
done | grep -qv 0 && exit 1