Signalling a bash script running in the background in an infinite loop - bash

I have a bash script (sleeping in an infinite loop) running as a background process. I want to periodically signal this process to do some processing without killing the script; i.e., on receiving the signal, the script should run a function and then go back to sleep. How do I signal the background process without killing it?
Here's what the code looks like for the script test.sh:
#!/bin/bash
MY_PID=$$
echo $MY_PID > test.pid

run()
{
    # data processing
}

trap 'run' SIGUSR1

while true
do
    sleep 5
done
This is how I am running it and triggering it:
./test.sh &
kill -SIGUSR1 `cat test.pid`

This works for me.
EDIT 2014-01-20: This version avoids waking up every few seconds just to loop again. Also see this question:
Bash: a sleep in a while loop gets its own pid
#!/bin/bash
MY_PID=$$
echo $MY_PID > test.pid

run()
{
    echo "signal!"
}

trap 'kill ${!}; run' SIGUSR1

while true
do
    sleep 1000 & wait ${!}
done
Running:
> ./test.sh &
[1] 14094
> kill -SIGUSR1 `cat test.pid`
> signal!
>
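For contrast, here is a minimal sketch (hypothetical file name trap-fg.sh, not part of the answer above) of the behaviour the edit avoids: bash does not run a trap while a foreground command is still executing, so with a plain sleep in the loop the handler is delayed until that sleep returns.
#!/bin/bash
# trap-fg.sh -- a foreground sleep delays trap delivery (contrast with test.sh above)
run()
{
    echo "signal handled at $(date +%T)"
}
trap 'run' SIGUSR1
while true
do
    # a SIGUSR1 sent now is noted by bash, but run only executes once this
    # sleep finishes, i.e. up to 60 seconds later
    sleep 60
done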

Related

TERM trap doesn't reliably kill backgrounded sleep process

I wrote a bash script that shows a spinner and noticed that sometimes, after the background process had finished and I had killed the spinner, the sleep process was still running and delaying further processing. I reduced the code to the version below:
#!/bin/bash
trap 'rm deleteme.fifo' EXIT
mkfifo deleteme.fifo
echo we are $$

bar() {
    trap 'echo killing $sleep_pid from $$; kill $sleep_pid; wait $sleep_pid; echo exit subshell; exit' TERM
    sleep 20 &
    sleep_pid=$!
    echo sleep is $sleep_pid
    echo >deleteme.fifo
    echo before wait
    wait $sleep_pid
    echo after wait
}

foo() {
    bar &
    bar_pid=$!
    read <deleteme.fifo
    kill $bar_pid && echo waiting for subshell to terminate && wait $bar_pid; echo returning
}

while true; do
    foo &
    wait $!
done
In bar I set up the trap and start a 20-second sleep, then notify the parent process through the named pipe that we are ready. Then I kill the child from foo. This usually works, but sometimes the sleep survives and keeps ticking down while the rest of the script waits for it. That's why I added the loop, to trigger the behavior eventually. Does anyone have an explanation for this?
bash 4.4.12(1), Linux 4.14.83, coreutils 8.29

Kill not killing process if exiting properly

I have a simple bash script which I have written to simplify some work I am doing. All it needs to do is start one process, process_1, as a background process, then start another, process_2. Once process_2 is finished, I need to terminate process_1.
process_1 starts a program which does not actually stop unless it receives the kill signal, or CTRL+C when I run it myself. Its output goes to a file via {program} {args} > output_file.
process_2 can take an arbitrary amount of time depending on the arguments it is given.
Code:
#!/bin/bash

#Call this on exit to kill all background processes
function killJobs () {
    #Check process is still running before killing
    if kill -0 "$PID"; then
        kill $PID
    fi
}

...Check given arguments are valid...

#Start process_1
eval "./process_1 ${Arg1} ${Arg2} ${Arg3}" &
PID=$!

#Lay a trap to catch any exits from script
trap killJobs TERM INT

#Start process_2 - sleep for 5 seconds before and after
#Need space between process_1 and process_2 starting and stopping
sleep 5
eval "./process_2 ${Arg1} ${Arg2} ${Arg3} ${Arg4} 2> ${output_file}"
sleep 5

#Make sure background job is killed on exit
killJobs
I check whether process_1 has been terminated by checking whether its output file is still being updated after my script has ended.
If I run the script and then press CTRL+C, the script is terminated and process_1 is also killed; the output file is no longer updated.
If I let the script run to completion without intervention, process_2 and the script both terminate, but when I check the output from process_1, it is still being updated.
To check this I put an echo statement just after process_1 is started and another within the if statement of killJobs, so it would only be echoed if kill $PID is called.
With this I can see that in both cases process_1 is started and the if statement that kills it is entered. Yet in the case of a normal exit, kill does not actually kill the process. No error messages are produced either.
You're backgrounding the eval instead of process_1 itself, so $! is the PID of the subshell running the eval, not the PID of process_1. Change it to:
#!/bin/bash

#Call this on exit to kill all background processes
function killJobs () {
    #Check process is still running before killing
    if kill -0 "$PID"; then
        kill $PID
    fi
}

...Check given arguments are valid...

#Start process_1
./process_1 ${Arg1} ${Arg2} ${Arg3} &
PID=$!

#Lay a trap to catch any exits from script
trap killJobs TERM INT

#Start process_2 - sleep for 5 seconds before and after
#Need space between process_1 and process_2 starting and stopping
sleep 5
./process_2 ${Arg1} ${Arg2} ${Arg3} ${Arg4} 2> ${output_file}
sleep 5

#Make sure background job is killed on exit
killJobs
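A small variation, not in the original answer: listing EXIT in the trap as well makes killJobs run on any exit path, even one you did not anticipate, and the kill -0 check keeps the extra invocation harmless.
#Optional: also clean up on any normal or unexpected exit
trap killJobs TERM INT EXIT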

Inconsistent signal behavior? Only works for the first signal?

I'm trying to have a script restart itself with exec (so it can pick up any "upgrade") when it receives a specific signal (I tried SIGHUP and SIGUSR1).
This works the first time, but not the second, even though the trap is registered again in the exec'ed instance (which still has the same PID).
#!/usr/bin/env bash
set -x
readonly PROGNAME="${0}"

function run_prog()
{
    echo hi
    sleep 2
    echo ho
    sleep 1000 &
    wait $!
}

restart()
{
    sleep 5
    exec "${PROGNAME}"
}

trap restart USR1

echo -e "TRAPS:"
trap
echo

run_prog
This is how I run it:
./tst.sh & TSTPID=$! # Starts ok, see both "hi" & "ho" messages
sleep 10
kill -USR1 ${TSTPID} # Restarts ok, see both "hi" & "ho" messages
sleep 10
kill -USR1 ${TSTPID} # NOTHING HAPPENS
sleep 5
kill ${TSTPID}
Any idea why the second signal is ignored? (Some of the code, like de-registering the trap during cleanup, may just be paranoia.)
Maybe it's because you're exec'ing from within a signal handler: the handler never really finishes, and the exec may prevent other cleanup code or daisy-chained handlers from running. One plausible mechanism is that the kernel's signal mask survives an exec, so if bash still has SIGUSR1 blocked while its trap runs, the new image starts with SIGUSR1 blocked and never sees the second signal. Either way, exec'ing out of a trap circumvents whatever bash's own signal layering would normally do on return from the handler; exec is a very drastic measure.
Your solution here is the right approach: the trap only records that a restart was requested, and the exec happens in the main flow after wait returns, outside any signal handler:
#!/usr/bin/env bash
set -x
readonly PROGNAME="${0}"
DO_RESTART=

function run_prog()
{
    echo hi
    sleep 2
    echo ho
    sleep 1000 &
    SLEEPPID=$!
    #builtin
    wait ${SLEEPPID}
}

trap 'DO_RESTART=1' SIGUSR1

echo -e "TRAPS:"
trap -p
echo

run_prog

if [ -n "${DO_RESTART}" ]; then
    sleep 5
    kill ${SLEEPPID}
    exec "${PROGNAME}"
fi
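The same record-then-act pattern works for any signal-driven reload. Here is a minimal sketch (the reload_config function and the 60-second interval are made up for illustration): the trap only sets a flag, and the main loop acts on it once wait returns.
#!/usr/bin/env bash
NEED_RELOAD=

reload_config()
{
    echo "reloading configuration..."   # placeholder for the real work
}

# the handler only records the request; nothing heavy runs inside it
trap 'NEED_RELOAD=1' SIGUSR1

while true
do
    sleep 60 &
    SLEEP_PID=$!
    wait ${SLEEP_PID}
    if [ -n "${NEED_RELOAD}" ]; then
        NEED_RELOAD=
        kill ${SLEEP_PID} 2>/dev/null   # the interrupted sleep is still running
        reload_config
    fi
done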

BASH: Pause and resume a child script

I want to control a child script somehow. I am making a master script which spawns many child scripts, and I need to PAUSE and RESUME them on demand.
Child:
    Do stuff
    PAUSE
    Cleanup

Parent:
    sleep 10
    RESUME child

Is this possible?
AS PER SUGGESTIONS
Trying to do it with signals while the child runs in the background doesn't seem to work.
script1:
#!/bin/bash
"./script2" &
sleep 1
kill -2 "$!"
sleep 1
script2:
#!/bin/bash
echo "~~ENTRY"
trap 'echo you hit ctrl-c, waking up...' SIGINT
trap 'echo you hit ctrl-\, stopping...; exit' SIGQUIT
while [ 1 ]
do
    echo "Waiting for signal.."
    sleep 60000
    echo "~~EXIT1"
done
echo "~~EXIT2"
Running:
> ./script1
One way to control individual child scripts is with signals. If you combine SIGINT (ctrl-c) to resume with SIGQUIT (ctrl-\) to kill, then the child process looks like this:
#!/bin/sh
trap 'echo you hit ctrl-c, waking up...' SIGINT
trap 'echo you hit ctrl-\, stopping...; exit' SIGQUIT
while true
do
    echo "do the work..."
    # pause for a very long time...
    sleep 600000
done
If you run this script, and hit ctrl-c, the work continues. If you hit ctrl-\, the script stops.
You would want to run this in the background then send kill -2 $pid to resume and kill -3 $pid to stop (or kill -9 would work) where $pid is the child process's process id.
Here is a good bash signals reference: http://www.ibm.com/developerworks/aix/library/au-usingtraps/
Here is the parent script:
#!/bin/sh
./child.sh &
pid=$!
echo "child running at $pid"
sleep 2
echo "interrupt the child at $pid"
kill -INT $pid # you could also use SIGCONT
sleep 2
echo "kill the child at $pid"
kill -QUIT $pid
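A different technique, not used in the answer above: if the child does not need to choose where it pauses, the kernel's job-control signals freeze and thaw it without any traps in the child.
#!/bin/sh
./child.sh &
pid=$!
sleep 2
kill -STOP $pid    # suspend the child wherever it happens to be (SIGSTOP cannot be trapped)
sleep 10
kill -CONT $pid    # resume it exactly where it left off
wait $pid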
One way is to create a named pipe per child:
mkfifo pipe0
Then redirect the child's stdin to read from the pipe:
child < pipe0
To pause itself, the child reads a line from its stdin:
read _
(The odd _ is just there to give read a place to store the empty line it reads.)
To resume the child, the parent writes a line into the pipe:
echo > pipe0
A simpler approach would be to hand the child a dedicated file descriptor instead of redirecting its stdin, but I don't remember the exact syntax and can't find a good example at the moment.
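Putting the pieces together, a minimal sketch; the names child.sh and pipe0 come from this answer, and the pause point is an assumption. Unlike the recipe above, the child here opens the pipe itself at the point where it wants to pause, which keeps the example self-contained (the open blocks until the parent writes).
child.sh:
#!/bin/bash
echo "child: doing work"
echo "child: pausing"
read _ < pipe0      # blocks until the parent writes a line into the pipe
echo "child: resumed, cleaning up"
Parent:
mkfifo pipe0
./child.sh &
sleep 10
echo > pipe0        # resume the child
wait
rm pipe0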

Set trap in bash for different process with PID known

I need to set a trap for a bash process I'm starting in the background. The background process may run very long and has its PID saved in a specific file.
Now I need to set a trap for that process, so if it terminates, the PID file will be deleted.
Is there a way I can do that?
EDIT #1
It looks like I was not precise enough with my description of the problem. I have full control over all the code, but the long running background process I have is this:
cat /dev/random >> myfile&
When I add the trap at the beginning of the script this statement is in, $$ will be the PID of that bigger script, not of the small background process I am starting here.
So how can I set traps for that background process specifically?
(./jobsworthy& echo $! > $pidfile; wait; rm -f $pidfile)&
disown
Add this to the beginning of your Bash script.
#!/bin/bash
pidfile=$(tempfile -p foo -s $$)
echo $$ > "$pidfile"
# set the trap once the pidfile exists; SIGSTOP cannot be trapped, so it is left out
trap 'rm "$pidfile"; exit' EXIT SIGQUIT SIGINT SIGTERM ERR
# from here, do your long running process
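If tempfile is not available (it is a Debian-specific helper and has been deprecated), mktemp is the usual substitute; the /tmp/foo template below is only an assumption, mirroring the original -p foo prefix.
pidfile=$(mktemp /tmp/foo.$$.XXXXXX)
echo $$ > "$pidfile"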
You can run your long running background process in an explicit subshell, as already shown by Petesh's answer, and set a trap inside this specific subshell to handle the exiting of your long running background process. The parent shell remains unaffected by this subshell trap.
(
    # STOP cannot be trapped, so it is left out of the signal list
    trap '
        trap - EXIT ERR
        kill -0 ${!} 1>/dev/null 2>&1 && kill ${!}
        rm -f pidfile.pid
        exit
    ' EXIT QUIT INT TERM ERR
    # simulate background process
    sleep 15 &
    echo ${!} > pidfile.pid
    wait
) &
disown

# remove background process by hand
# kill -TERM ${!}
You do not need a trap just to run some command after a background process terminates. Instead, put the follow-up command on the same command line, after the background process and separated by a semicolon, and run that whole shell in the background instead of the background process alone.
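In its simplest form, without the notification shown next, that is just one backgrounded wrapper shell, reusing the cat /dev/random command from the question and the the_pid_file name from the snippet below:
sh -c 'cat /dev/random >> myfile; rm -f the_pid_file' &
echo $! > the_pid_file    # note: this records the wrapper shell's PID, not cat's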
If you would still like some notification in your shell script, send and trap SIGUSR2, for instance:
#!/bin/sh
BACKGROUND_PROCESS=xterm # for my testing, replace with what you have
sh -c "$BACKGROUND_PROCESS; rm -f the_pid_file; kill -USR2 $$" &
trap "echo $BACKGROUND_PROCESS ended" USR2
while sleep 1
do
    echo -n .
done
