How to pipe background processes in a shell script - shell

I have a shell script that starts a few background processes (using &), which are automatically killed when the user presses Ctrl+C (using trap). This works well:
#!/bin/sh
trap "exit" INT TERM ERR
trap "kill 0" EXIT
command1 &
command2 &
command3 &
wait
Now I would like to filter the output of command3 with grep -v "127.0.0.1" to exclude all lines containing 127.0.0.1, like this:
#!/bin/sh
trap "exit" INT TERM ERR
trap "kill 0" EXIT
command1 &
command2 &
command3 | grep -v "127.0.0.1" &
wait
The problem is that Ctrl+C no longer kills command3.
Is there a way to capture the command3 | grep pipeline so that it can be killed when the script exits?
Thanks

I will answer my own question. The problem was that the trap was too limited; I changed it to kill all jobs properly.
#!/bin/sh
killjobs() {
    for job in $(jobs -p); do
        kill -s SIGTERM $job > /dev/null 2>&1 || (sleep 10 && kill -9 $job > /dev/null 2>&1 &)
    done
}
trap killjobs EXIT
command1 &
command2 &
command3 | grep -v "127.0.0.1" &
wait
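A minimal, runnable sketch of why the killjobs loop reaches the piped command: jobs -p prints one PID per job, including the pipeline job, so killing that PID tears the pipeline down. Here sleep 8 stands in for command1 and command3 (an assumption for the sake of a self-contained example); the loop itself mirrors the answer.

```shell
#!/bin/sh
# sleep 8 stands in for the real commands; grep plays the filter stage.
start=$(date +%s)
sleep 8 &
sleep 8 | grep -v "127.0.0.1" &
gpid=$!                            # PID of the grep stage of the pipeline
njobs=0
for job in $(jobs -p); do          # one PID per job, pipeline included
    kill -s TERM "$job" > /dev/null 2>&1
    njobs=$((njobs + 1))
done
wait "$gpid" 2>/dev/null           # returns promptly: grep is killed directly
                                   # or exits on EOF once its writer dies
elapsed=$(( $(date +%s) - start ))
echo "jobs seen: $njobs (done in ${elapsed}s)"
```

Running this should show two jobs and finish in about a second rather than the full eight, confirming the pipeline job was reached.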

Related

read command doesn't work in background

Currently, I'm writing a Bash program that runs a program with different parameters based on a condition which changes over time, and exits when a key is pressed by the user. It is run with sudo, if that matters. However, read does not seem to be receiving any characters, and the script continues to run. Why does this occur, and how can I fix it? An example of what I'm trying:
(
    read -sN1
    exit
) &
while true; do
    command param1 &
    pid=$!
    while condition; do true; done
    kill $pid
    command param2 &
    pid=$!
    until condition; do true; done
    kill $pid
done
One way is to put the outer while loop in the background and kill it after the user presses a key:
while true; do
    command param1 &
    pid1=$!
    echo $pid1 > /tmp/pid1
    while condition; do sleep 1; done
    kill $pid1
    wait $pid1 2>/dev/null
    command param2 &
    pid2=$!
    echo $pid2 > /tmp/pid2
    until condition; do sleep 1; done
    kill $pid2
    wait $pid2 2>/dev/null
    sleep 1
done &
pid3=$!
read -sN1
killlist="$pid3 `pgrep -F /tmp/pid1` `pgrep -F /tmp/pid2`"
kill $killlist
wait $pid3 2>/dev/null
rm -f /tmp/pid1 /tmp/pid2
I added some sleep commands so that the potentially infinite while loops don't thrash the CPU.
I added killing of the first and second process after the keypress. $pid1 and $pid2 won't be set in the script's shell, only in the outer loop's subshell, so their values need to be passed via a temp file.
I added wait commands to suppress the output of the kills (https://stackoverflow.com/a/5722874/1563960).
I used pgrep -F to only kill pids from the tmp files if they are still running.
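The PID hand-off described above can be seen in a minimal sketch: a variable set inside a backgrounded subshell is invisible to the parent, so the subshell writes the PID to a file instead. The sleep command and mktemp temp file are stand-ins, not part of the original script.

```shell
#!/bin/sh
pidfile=$(mktemp)            # stands in for /tmp/pid1
(
    sleep 8 &                # stands in for: command param1 &
    echo $! > "$pidfile"     # a plain pid1=$! would be lost with the subshell
    wait
) &
outer=$!
sleep 1                      # give the subshell time to write the file
childpid=$(cat "$pidfile")
if kill "$childpid" 2>/dev/null; then killed=yes; else killed=no; fi
wait "$outer" 2>/dev/null
rm -f "$pidfile"
echo "inner process killed: $killed"
```

The parent never saw the subshell's variables, yet it could still kill the inner process through the file.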

Bash: Kill Process after timeout or after an event

I have a script which runs a long running process.
This process is currently stopped after a timeout.
#!/bin/bash
timeout 3600 ./longrunningprocess
My problem now is that this process does not return before the timeout is reached, and sometimes I need to stop it earlier.
What do I need?
I want to start some other script in parallel which checks periodically if my longrunningprocess should stop. When this bash script returns, the timeout command should be killed.
Any idea how I could achieve that?
Is there anything like the timeout command, except that instead of a timespan it takes a script that acts as the event trigger?
E.g.
#!/bin/bash
fancyCommandKillsSecondCommandIfFirstCommandReturns "./myPeriodicScript.sh" "timeout 3600 ./longrunningprocess"
Thanks!
Edit: Something like "Start 2 Processes in parallel and kill both if one returns" would also work...
Edit2: The answers gave me some ideas for a script:
#!/bin/bash
FirstProcess="${1}"
SecondProcess="${2}"
exec $FirstProcess &
PID1=$!
exec $SecondProcess &
PID2=$!
function killall {
    if ps -p $PID1 > /dev/null; then
        kill -9 $PID1
    fi
    if ps -p $PID2 > /dev/null; then
        kill -9 $PID2
    fi
}
trap killall EXIT
while true; do
    if ! ps -p $PID1 > /dev/null; then
        exit
    fi
    if ! ps -p $PID2 > /dev/null; then
        exit
    fi
    sleep 5
done
This kind of does what I want. Is there any native functionality or a better way to do this?
Start the longrunningprocess in the background and remember the pid.
#!/bin/bash
timeout 3600 ./longrunningprocess &
long_pid=$!
./myPeriodicScript.sh
kill -9 ${long_pid}
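The answer above can be tried anywhere with stand-ins: here sleep 8 plays "timeout 3600 ./longrunningprocess" and sleep 1 plays ./myPeriodicScript.sh (both assumptions for a self-contained example), and a plain kill is used instead of the answer's kill -9.

```shell
#!/bin/sh
sleep 8 &                    # stands in for: timeout 3600 ./longrunningprocess &
long_pid=$!
sleep 1                      # stands in for: ./myPeriodicScript.sh
# the checker has returned, so stop the long-running process early
if kill "$long_pid" 2>/dev/null; then stopped=yes; else stopped=no; fi
echo "stopped early: $stopped"
```

The whole run takes about a second instead of eight, since the checker's return ends the long process.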
If you parse the output of the longrunningprocess to determine if the process needs to be killed, then you could do something like this:
#!/bin/bash
FIFO="tmpfifo"
TIMEOUT=10
mkfifo $FIFO
timeout 100 ./longrun &> $FIFO &
PID=$!
while read line; do
    echo "Parsing $line to see if $PID needs to be killed"
    if [ "$line" = "5" ]; then
        kill $PID
    fi
done < $FIFO
exit
This pipes all output into a FIFO and starts reading from it. In addition, it keeps the PID of the timeout process, so that process can be killed.
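Here is a runnable sketch of the same FIFO pattern, where a fake ./longrun (an sh loop printing numbers, an assumption for the example) is killed as soon as the reader sees "5":

```shell
#!/bin/sh
FIFO=$(mktemp -u)            # path only; mkfifo creates the actual pipe
mkfifo "$FIFO"
# fake long-running process: print 1..100, one line every 0.2s
sh -c 'i=0; while [ $i -lt 100 ]; do i=$((i+1)); echo $i; sleep 0.2; done' > "$FIFO" &
PID=$!
last=""
while read -r line; do
    last=$line
    if [ "$line" = "5" ]; then
        kill "$PID"          # event seen: stop the writer
        break
    fi
done < "$FIFO"
rm -f "$FIFO"
echo "stopped after line: $last"
```

The reader never sees "6": the writer is killed (and would die of SIGPIPE anyway once the reader closes the FIFO).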

How can I make an external program interruptible in this trap-captured bash script?

I am writing a script which will run an external program (arecord) and do some cleanup if it's interrupted by either a POSIX signal or input on a named pipe. Here's the draft in full
#!/bin/bash
X=`date '+%Y-%m-%d_%H.%M.%S'`
F=/tmp/$X.wav
P=/tmp/$X.$$.fifo
mkfifo $P
trap "echo interrupted && (rm $P || echo 'couldnt delete $P') && echo 'removed fifo' && exit" INT
# this forked process will wait for input on the fifo
(echo 'waiting for fifo' && cat $P >/dev/null && echo 'fifo hit' && kill -s SIGINT $$)&
while true
do
    echo waiting...
    sleep 1
done
#arecord $F
This works perfectly as it is: the script ends when a signal arrives and a signal is generated if the fifo is written-to.
But instead of the while true loop I want the now-commented-out arecord command. However, if I run that program instead of the loop, the SIGINT doesn't get caught by the trap and arecord keeps running.
What should I do?
It sounds like you really need this to work more like an init script. So, start arecord in the background and put the pid in a file. Then use the trap to kill the arecord process based on the pidfile.
#!/bin/bash
PIDFILE=/var/run/arecord-runner.pid #Just somewhere to store the pid
LOGFILE=/var/log/arecord-runner.log
#Just one option for how to format your trap call
#Note that this does not use &&, so one failed function will not
# prevent other items in the trap from running
trapFunc() {
    echo interrupted
    (rm $P || echo 'couldnt delete $P')
    echo 'removed fifo'
    kill $(cat $PIDFILE)
    exit 0
}
X=`date '+%Y-%m-%d_%H.%M.%S'`
F=/tmp/$X.wav
P=/tmp/$X.$$.fifo
mkfifo $P
trap "trapFunc" INT
# this forked process will wait for input on the fifo
(echo 'waiting for fifo' && cat $P >/dev/null && echo 'fifo hit' && kill -s SIGINT $$)&
arecord $F 1>$LOGFILE 2>&1 & #Run in the background, sending logs to file
echo $! > $PIDFILE #Save pid of the last background process to file
while true
do
    echo waiting...
    sleep 1
done
Also... you may have your trap written with '&&' clauses for a reason, but as an alternative, you can give a function name as I did above, or a sort of anonymous function like this:
trap "{ command1; command2 args; command3; exit 0; }" INT
Just make sure that each command is followed by a semicolon and there are spaces between the braces and the commands. The risk of using && in the trap is that your script will continue to run past the interrupt if one of the commands before the exit fails to execute (but maybe you want that?).
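The named-function pattern can be exercised end to end without a sound card. In this sketch, sleep stands in for arecord, the PID file comes from mktemp instead of /var/run (so no root is needed), and Ctrl+C is simulated with a self-delivered SIGINT; all of these are assumptions for the example.

```shell
#!/bin/bash
PIDFILE=$(mktemp)            # stands in for /var/run/arecord-runner.pid
cleaned=no
trapFunc() {
    kill "$(cat "$PIDFILE")" 2>/dev/null   # stop the background recorder
    rm -f "$PIDFILE"
    cleaned=yes
}
trap trapFunc INT
sleep 8 &                    # stands in for: arecord $F 1>$LOGFILE 2>&1 &
echo $! > "$PIDFILE"         # save the background PID to the file
kill -INT $$                 # simulate the user pressing Ctrl+C
sleep 1                      # give the trap a chance to run
echo "cleanup ran: $cleaned"
```

Because the recorder runs in the background, the shell is free to receive the signal and run the trap immediately, which was the original problem with a foreground arecord.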

Letting other users stop/restart simple bash daemons – use signals or what?

I have a web server where I run some slow-starting programs as daemons. These sometimes need quick restarting (or stopping) when I recompile them or switch to another installation of them.
Inspired by http://mywiki.wooledge.org/ProcessManagement, I'm writing a script
called daemonise.sh that looks like
#!/bin/sh
while :; do
    ./myprogram lotsadata.xml
    echo "Restarting server..." 1>&2
done
to keep a "daemon" running. Since I sometimes need to stop it, or just
restart it, I run that script in a screen session, like:
$ ./daemonise.sh & DPID=$!
$ screen -d
Then perhaps I recompile myprogram, install it to a new path, start
the new one up and want to kill the old one:
$ screen -r
$ kill $DPID
$ screen -d
This works fine when I'm the only maintainer, but now I want to let
someone else stop/restart the program, no matter who started it. And
to make things more complicated, the daemonise.sh script in fact
starts about 16 programs, making it a hassle to kill every single one
if you don't know their PIDs.
What would be the "best practices" way of letting another user
stop/restart the daemons?
I thought about shared screen sessions, but that just sounds hacky and
insecure. The best solution I've come up with for now is to wrap
starting and killing in a script that catches certain signals:
#!/bin/bash
DPID=
trap './daemonise.sh & DPID=$!' USR1
trap 'kill $DPID' USR2 EXIT
# Ensure trapper wrapper doesn't exit:
while :; do
    sleep 10000 & wait $!
done
Now, should another user need to stop the daemons and I can't do it, she just has to know the PID of the wrapper and run e.g. sudo kill -s USR2 $wrapperpid. (Also, this makes it possible to run the daemons on reboots, and still kill them cleanly.)
Is there a better solution? Are there obvious problems with this
solution that I'm not seeing?
(After reading Greg's Bash Wiki, I'd like to avoid any solution involving pgrep or PID-files …)
I recommend a PID-based init script. Anyone with sudo privileges for the script will be able to start and stop the server processes.
On improving your approach: wouldn't it be advisable to make sure that your sleep command in sleep 10000 & wait $! gets properly terminated if your pidwrapper script exits somehow?
Otherwise there would remain a dangling sleep process in the process table for quite some time.
Similarly, wouldn't it be cleaner to terminate myprogram in daemonise.sh properly on restart (i.e. if daemonise.sh receives a TERM signal)?
In addition, it is possible to suppress job notification messages and test for pid existence before killing.
#!/bin/sh
# cat daemonise.sh
# cf. "How to suppress Terminated message after killing in bash?",
# http://stackoverflow.com/q/81520
trap '
    echo "server shut down..." 1>&2
    kill $spid1 $spid2 $spid3 &&
    wait $spid1 $spid2 $spid3 2>/dev/null
    exit
' TERM
while :; do
    echo "Starting server..." 1>&2
    #./myprogram lotsadata.xml
    sleep 100 &
    spid1=${!}
    sleep 100 &
    spid2=${!}
    sleep 100 &
    spid3=${!}
    wait
    echo "Restarting server..." 1>&2
done
#------------------------------------------------------------
#!/bin/bash
# cat pidwrapper
DPID=
trap '
    kill -0 ${!} 2>/dev/null && kill ${!} && wait ${!} 2>/dev/null
    ./daemonise.sh & DPID=${!}
' USR1
trap '
    kill -0 ${!} 2>/dev/null && kill ${!} && wait ${!} 2>/dev/null
    kill -0 $DPID 2>/dev/null && kill $DPID && wait ${DPID} 2>/dev/null
' USR2
trap '
    trap - EXIT
    kill -0 $DPID 2>/dev/null && kill $DPID && wait ${DPID} 2>/dev/null
    kill -0 ${!} 2>/dev/null && kill ${!} && wait ${!} 2>/dev/null
    exit 0
' EXIT
# Ensure trapper wrapper does not exit:
while :; do
    sleep 10000 & wait $!
done
#------------------------------------------------------------
# test
{
    wrapperpid="`exec sh -c './pidwrapper & echo ${!}' | head -1`"
    echo "wrapperpid: $wrapperpid"
    for n in 1 2 3 4 5; do
        sleep 2
        # start daemonise.sh
        kill -s USR1 $wrapperpid
        sleep 2
        # kill daemonise.sh
        kill -s USR2 $wrapperpid
    done
    sleep 2
    echo kill $wrapperpid
    kill $wrapperpid
}
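The reason the wrapper idles with "sleep 10000 & wait $!" rather than a plain foreground sleep is that a trapped signal interrupts the wait builtin immediately, whereas a foreground sleep would delay the trap until it finishes. A small sketch (the 1-second self-signal is an assumption for the example):

```shell
#!/bin/bash
start=$(date +%s)
trapped=no
trap 'kill "$spid" 2>/dev/null; trapped=yes' USR1
sleep 10000 & spid=$!             # the wrapper's interruptible idle
( sleep 1; kill -USR1 $$ ) &      # deliver USR1 to ourselves after 1s
wait "$spid"                      # interrupted at once when USR1 arrives
elapsed=$(( $(date +%s) - start ))
echo "trap ran: $trapped after ${elapsed}s"
```

With a plain "sleep 10000" in the foreground, the same script would sit for almost three hours before the trap got a chance to run.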

Set trap in bash for different process with PID known

I need to set a trap for a bash process I'm starting in the background. The background process may run very long and has its PID saved in a specific file.
Now I need to set a trap for that process, so if it terminates, the PID file will be deleted.
Is there a way I can do that?
EDIT #1
It looks like I was not precise enough with my description of the problem. I have full control over all the code, but the long running background process I have is this:
cat /dev/random >> myfile&
When I now add the trap at the beginning of the script this statement is in, $$ will be the PID of that bigger script not of this small background process I am starting here.
So how can I set traps for that background process specifically?
(./jobsworthy& echo $! > $pidfile; wait; rm -f $pidfile)&
disown
Add this to the beginning of your Bash script.
#!/bin/bash
trap 'rm "$pidfile"; exit' EXIT SIGQUIT SIGINT SIGTERM ERR
pidfile=$(tempfile -p foo -s $$)
echo $$ > "$pidfile"
# from here, do your long running process
You can run your long running background process in an explicit subshell, as already shown by Petesh's answer, and set a trap inside this specific subshell to handle the exiting of your long running background process. The parent shell remains unaffected by this subshell trap.
(
    trap '
        trap - EXIT ERR
        kill -0 ${!} 1>/dev/null 2>&1 && kill ${!}
        rm -f pidfile.pid
        exit
    ' EXIT QUIT INT TERM ERR
    # simulate background process
    sleep 15 &
    echo ${!} > pidfile.pid
    wait
) &
disown
# remove background process by hand
# kill -TERM ${!}
You do not need trap just to run some command after a background process terminates. Instead, run the background process through a shell command line with the follow-up command after it, separated by a semicolon, and put that shell in the background instead of the original process.
If you would still like some notification in your shell script, send and trap SIGUSR2, for instance:
#!/bin/sh
BACKGROUND_PROCESS=xterm # for my testing, replace with what you have
sh -c "$BACKGROUND_PROCESS; rm -f the_pid_file; kill -USR2 $$" &
trap "echo $BACKGROUND_PROCESS ended" USR2
while sleep 1
do
    echo -n .
done
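A condensed, runnable version of that last pattern, with sleep 1 standing in for xterm and mktemp providing the PID file (both assumptions for the example); the endless dot loop is replaced by a bounded wait so the sketch terminates:

```shell
#!/bin/sh
pidfile=$(mktemp)            # stands in for the_pid_file
flag=no
trap 'flag=yes' USR2         # notification from the background shell
# cleanup runs right after the background process, on the same command line
sh -c "sleep 1; rm -f '$pidfile'; kill -USR2 $$" &
n=0
while [ "$flag" = no ] && [ $n -lt 8 ]; do
    sleep 1
    n=$((n+1))
done
if [ -f "$pidfile" ]; then gone=no; else gone=yes; fi
echo "notified: $flag, pid file removed: $gone"
```

No trap on the background process itself was needed: the rm simply runs after it in the same sub-shell, and USR2 is only there so the parent hears about it.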
