Kill bash script with wait command

A bash script, demo.sh:
#!/bin/bash
./prog1 &
./prog2 &
wait
Use timeout -s 9 5m demo.sh to run the script.
The script demo.sh used to run the programs without & and wait. I want to know whether timeout will kill prog1 and prog2 when the timeout expires. How can I make sure that all subprocesses are killed?

The forked jobs will be killed when the shell process running demo.sh is killed (unless you do something like disown $PID).
You can verify this with kill -0. First record the PIDs:
./prog1 &
PID1=$!
./prog2 &
PID2=$!
After the timeout fires you can run kill -0 $PID1 and kill -0 $PID2 and check that both
commands return with exit status 1, which means "no such process"
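If you want the script itself to guarantee the cleanup, here is a minimal sketch (assuming timeout sends a catchable signal such as the default TERM; -s 9 sends SIGKILL, which no trap can catch) that forwards the signal to all children:
#!/bin/bash
# Sketch: kill all background children when the script is signalled or exits.
# Note this cannot help against SIGKILL (kill -9), which is uncatchable.
cleanup() {
    kill $(jobs -p) 2>/dev/null
}
trap cleanup TERM INT EXIT
./prog1 &
./prog2 &
wait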

Related

Kill not killing process if exiting properly

I have a simple bash script which I have written to simplify some work I am doing. All it needs to do is start one process, process_1, as a background process then start another, process_2. Once process_2 is finished I then need to terminate process_1.
process_1 starts a program which does not actually stop unless it receives the kill signal, or CTRL+C when I run it myself. Its output is redirected into a file via {program} {args} > output_file
process_2 can take an arbitrary amount of time depending on the arguments it is given.
Code:
#!/bin/bash
#Call this on exit to kill all background processes
function killJobs () {
    # Check the process is still running before killing it
    if kill -0 "$PID"; then
        kill "$PID"
    fi
}
...Check given arguments are valid...
#Start process_1
eval "./process_1 ${Arg1} ${Arg2} ${Arg3}" &
PID=$!
#Lay a trap to catch any exits from script
trap killJobs TERM INT
#Start process_2 - sleep for 5 seconds before and after
#Need space between process_1 and process_2 starting and stopping
sleep 5
eval "./process_2 ${Arg1} ${Arg2} ${Arg3} ${Arg4} 2> ${output_file}"
sleep 5
#Make sure background job is killed on exit
killJobs
I check that process_1 has been terminated by checking whether its output file is still being updated after my script has ended.
If I run the script and then press CTRL+C, the script is terminated and process_1 is also killed; the output file is no longer updated.
If I let the script run to completion without intervention, process_2 and the script both terminate, but the output from process_1 is still being updated.
To check this I put an echo statement just after process_1 is started and another within the if statement of killJobs, so it would only be echoed if kill $PID is called.
Doing this I can see that on both exit paths process_1 is started and the if statement to kill it is entered. Yet kill does not actually kill the process in the case of a normal exit. No error messages are produced either.
You're backgrounding the eval instead of process_1, which sets $! to the PID of the subshell running the eval, not to process_1 itself. Change to:
#!/bin/bash
#Call this on exit to kill all background processes
function killJobs () {
    # Check the process is still running before killing it
    if kill -0 "$PID"; then
        kill "$PID"
    fi
}
...Check given arguments are valid...
#Start process_1
./process_1 ${Arg1} ${Arg2} ${Arg3} &
PID=$!
#Lay a trap to catch any exits from script
trap killJobs TERM INT
#Start process_2 - sleep for 5 seconds before and after
#Need space between process_1 and process_2 starting and stopping
sleep 5
./process_2 ${Arg1} ${Arg2} ${Arg3} ${Arg4} 2> ${output_file}
sleep 5
#Make sure background job is killed on exit
killJobs
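As a further hardening step (a sketch, not from the original answer): trapping EXIT as well runs the cleanup on every exit path, including normal completion, making the explicit killJobs call at the end redundant:
#Lay a trap to catch any exit from the script, normal or not
trap killJobs TERM INT EXIT
The kill -0 guard inside killJobs keeps this safe even when the background job has already finished.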

How can I silence the "Terminated" message when my command is killed by timeout?

By referencing "bash: silently kill background function process" and "Timeout a command in bash without unnecessary delay", I wrote my own script to set a timeout for a command, as well as silencing the kill message.
But I still am getting a "Terminated" message when my process gets killed. What's wrong with my code?
#!/bin/bash
silent_kill() {
    kill $1 2>/dev/null
    wait $1 2>/dev/null
}
timeout() {
    limit=$1   # timeout limit
    shift
    command=$* # command to run
    interval=1 # default interval between checks if the process is still alive
    delay=1    # default delay between SIGTERM and SIGKILL
    (
        ((t = limit))
        while ((t > 0)); do
            sleep $interval
            #kill -0 $$ || exit 0
            ((t -= interval))
        done
        silent_kill $$
        #kill -s SIGTERM $$ && kill -0 $$ || exit 0
        sleep $delay
        #kill -s SIGKILL $$
    ) &> /dev/null &
    exec $*
}
timeout 1 sleep 10
There's nothing wrong with your code; that "Terminated" message doesn't come from your script but from the invoking shell (the one you launch your script from).
You can deactivate it by disabling job control:
$ set +m
$ bash <your timeout script>
Perhaps bash has moved on in 4 years. I do know you can avoid getting "Terminated" by disowning a child process. You can no longer job-control it, though. E.g.:
$ sleep 100 &
[1] 15436
$ disown -r
$ kill -9 15436
help disown:
disown [-h] [-ar] [jobspec ...]
Remove jobs from current shell.
Removes each JOBSPEC argument from the table of active jobs. Without
any JOBSPECs, the shell uses its notion of the current job.
-a remove all jobs if JOBSPEC is not supplied
-h mark each JOBSPEC so that SIGHUP is not sent to the job if the shell receives a SIGHUP
-r remove only running jobs
Internally the shell maintains a list of children it forked and wait()s for any of them to exit or be killed. When a child's exit status has been collected, the shell prints a message. This is called monitoring in shell parlance.
It seems you want to turn off monitoring. Monitoring is managed with the m option; to turn it on, use set -m (the default in interactive shells). To turn it off, use set +m.
Note that turning monitoring off also disables messages for asynchronous jobs, e.g. no more messages like
$ sleep 5 &
[1] 59468
$
[1] + done sleep 5
$
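For instance (a hypothetical interactive transcript, assuming a shell with job control), with monitoring off neither the job-start nor the completion message appears:
$ set +m
$ sleep 2 &
$ # a few seconds later: no "[1] + done sleep 2" line is printed
$ set -m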

Kill running background jobs inside a shell script

I have created a shell script that runs multiple processes in the background and at the end waits for the user's keyboard input; when Enter is pressed, it kills the previously created processes.
Something like :
#!/bin/sh
process_1 &
process_2 &
process_3 &
read -p "PRESS [ENTER] TO TERMINATE PROCESSES." PRESSKEY
kill -2 `jobs -p`
Notice that I run the processes in the background (the trailing &). I thought that when I do something like:
kill -2 `jobs -p`
All the jobs running in the background would be killed, but it actually tells me that my command is invalid, so I assume that jobs -p doesn't return anything.
Any idea on how to kill process_1, process_2 and process_3? Thanks in advance.
You can store the PIDs in a space-separated list and kill that:
process_1 & pids="${pids-} $!"
process_2 & pids="${pids-} $!"
process_3 & pids="${pids-} $!"
read -p "PRESS [ENTER] TO TERMINATE PROCESSES." PRESSKEY
kill -2 $pids # Without quotes to make the PIDs separate arguments
(The ${pids-} syntax is to avoid errors when using set -o nounset.)
Try this:
kill $(ps | grep 'process_1' | awk '{print $1}')
Explanation:
ps returns a list of running processes. We grep for the one we want to kill and then use awk to extract its PID. Be aware that the grep process itself can appear in the ps output and match, so this approach is somewhat fragile.
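A more robust alternative, assuming the procps pkill utility is available (not part of the original answer):
pkill -2 process_1   # send SIGINT (signal 2) to processes matching the name process_1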

Letting other users stop/restart simple bash daemons – use signals or what?

I have a web server where I run some slow-starting programs as daemons. These sometimes need quick restarting (or stopping) when I recompile them or switch to another installation of them.
Inspired by http://mywiki.wooledge.org/ProcessManagement, I'm writing a script
called daemonise.sh that looks like
#!/bin/sh
while :; do
    ./myprogram lotsadata.xml
    echo "Restarting server..." 1>&2
done
to keep a "daemon" running. Since I sometimes need to stop it, or just
restart it, I run that script in a screen session, like:
$ ./daemonise.sh & DPID=$!
$ screen -d
Then perhaps I recompile myprogram, install it to a new path, start
the new one up and want to kill the old one:
$ screen -r
$ kill $DPID
$ screen -d
This works fine when I'm the only maintainer, but now I want to let
someone else stop/restart the program, no matter who started it. And
to make things more complicated, the daemonise.sh script in fact
starts about 16 programs, making it a hassle to kill every single one
if you don't know their PIDs.
What would be the "best practices" way of letting another user
stop/restart the daemons?
I thought about shared screen sessions, but that just sounds hacky and
insecure. The best solution I've come up with for now is to wrap
starting and killing in a script that catches certain signals:
#!/bin/bash
DPID=
trap './daemonise.sh & DPID=$!' USR1
trap 'kill $DPID' USR2 EXIT
# Ensure trapper wrapper doesn't exit:
while :; do
    sleep 10000 & wait $!
done
Now, should another user need to stop the daemons and I can't do it,
she just has to know the pid of the wrapper, and e.g. sudo kill -s
USR2 $wrapperpid. (Also, this makes it possible to run the daemons
on reboots, and still kill them cleanly.)
Is there a better solution? Are there obvious problems with this
solution that I'm not seeing?
(After reading Greg's Bash Wiki, I'd like to avoid any solution involving pgrep or PID-files …)
I recommend a PID-file based init script. Anyone with sudo privileges for the script will be able to start and stop the server processes.
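A minimal sketch of such an init-style script (the file paths and the choice of daemonise.sh as the thing being managed are assumptions, not from the original answer):
#!/bin/sh
# Hypothetical PID-file init script wrapping daemonise.sh.
PIDFILE=/var/run/daemonise.pid

case "$1" in
    start)
        ./daemonise.sh &
        echo $! > "$PIDFILE"
        ;;
    stop)
        # Kill the recorded wrapper PID, then clean up the PID file.
        [ -f "$PIDFILE" ] && kill "$(cat "$PIDFILE")"
        rm -f "$PIDFILE"
        ;;
    restart)
        "$0" stop
        sleep 1
        "$0" start
        ;;
    *)
        echo "Usage: $0 {start|stop|restart}" 1>&2
        exit 1
        ;;
esac
Users who may run only this script via sudo can then stop and restart the daemons without knowing any PIDs.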
On improving your approach: wouldn't it be advisable to make sure that your sleep command in sleep 10000 & wait $! gets properly terminated if your pidwrapper script exits somehow?
Otherwise there would remain a dangling sleep process in the process table for quite some time.
Similarly, wouldn't it be cleaner to terminate myprogram in daemonise.sh properly on restart (i. e. if daemonise.sh receives a TERM signal)?
In addition, it is possible to suppress job notification messages and to test for PID existence before killing.
#!/bin/sh
# cat daemonise.sh
# cf. "How to suppress Terminated message after killing in bash?",
# http://stackoverflow.com/q/81520
trap '
    echo "server shut down..." 1>&2
    kill $spid1 $spid2 $spid3 &&
    wait $spid1 $spid2 $spid3 2>/dev/null
    exit
' TERM

while :; do
    echo "Starting server..." 1>&2
    #./myprogram lotsadata.xml
    sleep 100 &
    spid1=${!}
    sleep 100 &
    spid2=${!}
    sleep 100 &
    spid3=${!}
    wait
    echo "Restarting server..." 1>&2
done
#------------------------------------------------------------
#!/bin/bash
# cat pidwrapper
DPID=
trap '
    kill -0 ${!} 2>/dev/null && kill ${!} && wait ${!} 2>/dev/null
    ./daemonise.sh & DPID=${!}
' USR1

trap '
    kill -0 ${!} 2>/dev/null && kill ${!} && wait ${!} 2>/dev/null
    kill -0 $DPID 2>/dev/null && kill $DPID && wait ${DPID} 2>/dev/null
' USR2

trap '
    trap - EXIT
    kill -0 $DPID 2>/dev/null && kill $DPID && wait ${DPID} 2>/dev/null
    kill -0 ${!} 2>/dev/null && kill ${!} && wait ${!} 2>/dev/null
    exit 0
' EXIT

# Ensure trapper wrapper does not exit:
while :; do
    sleep 10000 & wait $!
done
#------------------------------------------------------------
# test
{
    wrapperpid="`exec sh -c './pidwrapper & echo ${!}' | head -1`"
    echo "wrapperpid: $wrapperpid"
    for n in 1 2 3 4 5; do
        sleep 2
        # start daemonise.sh
        kill -s USR1 $wrapperpid
        sleep 2
        # kill daemonise.sh
        kill -s USR2 $wrapperpid
    done
    sleep 2
    echo kill $wrapperpid
    kill $wrapperpid
}

Set trap in bash for different process with PID known

I need to set a trap for a bash process I'm starting in the background. The background process may run very long and has its PID saved in a specific file.
Now I need to set a trap for that process, so if it terminates, the PID file will be deleted.
Is there a way I can do that?
EDIT #1
It looks like I was not precise enough with my description of the problem. I have full control over all the code, but the long running background process I have is this:
cat /dev/random >> myfile&
When I now add the trap at the beginning of the script this statement is in, $$ will be the PID of that bigger script, not of the small background process I am starting here.
So how can I set traps for that background process specifically?
(./jobsworthy & echo $! > "$pidfile"; wait; rm -f "$pidfile") &
disown
Here a watcher subshell starts jobsworthy, records its PID in the PID file, waits for it to finish, and then removes the PID file; disown detaches the watcher from the parent shell.
Add this to the beginning of your Bash script.
#!/bin/bash
# Note: SIGSTOP (like SIGKILL) cannot be trapped, so it is omitted here.
trap 'rm "$pidfile"; exit' EXIT SIGQUIT SIGINT SIGTERM ERR
pidfile=$(tempfile -p foo -s $$)
echo $$ > "$pidfile"
# from here, do your long running process
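Note that tempfile is Debian-specific; a rough portable sketch of the same idea uses mktemp instead:
pidfile=$(mktemp /tmp/foo.XXXXXX)   # create a unique temporary file to hold the PID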
You can run your long running background process in an explicit subshell, as already shown by Petesh's answer, and set a trap inside this specific subshell to handle the exiting of your long running background process. The parent shell remains unaffected by this subshell trap.
(
    trap '
        trap - EXIT ERR
        kill -0 ${!} 1>/dev/null 2>&1 && kill ${!}
        rm -f pidfile.pid
        exit
    ' EXIT QUIT INT TERM ERR   # STOP cannot be trapped, so it is not listed
    # simulate background process
    sleep 15 &
    echo ${!} > pidfile.pid
    wait
) &
disown
# remove background process by hand
# kill -TERM ${!}
You do not need trap just to run some command after a background process terminates. Instead, you can put the background process and the follow-up command together on one shell command line, separated by a semicolon, and run that whole shell in the background instead of the bare background process.
If you would still like some notification in your shell script, send and trap SIGUSR2, for instance:
#!/bin/sh
BACKGROUND_PROCESS=xterm # for my testing, replace with what you have
sh -c "$BACKGROUND_PROCESS; rm -f the_pid_file; kill -USR2 $$" &
trap "echo $BACKGROUND_PROCESS ended" USR2
while sleep 1
do
    echo -n .
done
