Killall doesn't work if I call it from a bash script

I'm starting a tcpdump inside a script and I also kill it from the same script, currently using the killall command for this. The script gets executed from a udev rule. This is the section that should terminate the tcpdump. In addition I also use -s SIGKILL, because I've read that this could help.
Why doesn't killall terminate the tcpdump? When I start the script manually, everything works properly.
if [[ "$pid1" != "" ]];then
sudo killall -s SIGKILL tcpdump
sh /tmp/scripts/autoumount.sh &
sudo kill -9 $$
echo "autodump stopped"

Since you're starting tcpdump from the same script, there's no need for killall.
If you're running multiple background processes, use an array, like so:
pids=( ) # initialize empty array
tcpdump & pids+=( "$!" ) # extend said array
...later on, you can kill those PIDs:
kill "${pids[#]}"


ash: you need to specify whom to kill

I'm trying to give an MQTT call the output of cat as its value; however, it is important that the cat call gets killed immediately after putting something into the variable, since the targeted file has to empty itself and only does so when not being read. But currently the call doesn't get killed. When I kill it manually by killing its PID, the rest of the code runs smoothly but prints out "ash: you need to specify whom to kill".
Here is the code:
#!/bin/ash
while true; do
    rs=$( cat /dev/rs232 & )
    pid=$!
    kill ${pid}
    mosquitto_pub -h IP -m "$rs" -t rs_test/message
done
Using kill $( pgrep cat ) instead of pid=$! and kill ${pid} doesn't seem to work either.
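The likely reason $! is empty here is that cat is backgrounded inside the command substitution's subshell, not in the main shell, so the main shell's $! never points at it (hence the "you need to specify whom to kill" complaint). One possible workaround (a sketch, assuming a short read window is acceptable and /tmp is writable) is to background cat in the current shell and capture its output in a temporary file:
#!/bin/ash
while true; do
    cat /dev/rs232 > /tmp/rs232.out &   # backgrounded in the current shell, so $! really is cat's PID
    pid=$!
    sleep 1                             # give the device time to deliver data
    kill "${pid}" 2>/dev/null
    rs=$(cat /tmp/rs232.out)
    mosquitto_pub -h IP -m "$rs" -t rs_test/message
done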

How to make Bash background command exit when parent exits?

I would like the python3 command to exit when the parent exits (after 10 seconds).
#! /bin/bash
sudo bash -c "python3 screen-buttons.py &"
sleep 10
printf "%s\n" "Done"
Well, you have to know the pid you want to kill, one way or the other.
#!/bin/bash
# assuming screen-buttons.py does not output anything to stdout
pid=$(sudo bash -c 'python3 screen-buttons.py & echo $!')
sleep 10
kill "$pid"
If screen-buttons.py outputs something to stdout, save the pid to a file and read it from parent.
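For example (a sketch; the pid-file path is just an illustration):
sudo bash -c 'python3 screen-buttons.py & echo $! > /tmp/screen-buttons.pid'
pid=$(cat /tmp/screen-buttons.pid)
sleep 10
sudo kill "$pid"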
But it looks like you want to implement a timeout. If so, see the timeout utility.
To kill all descendant processes, see https://unix.stackexchange.com/questions/124127/kill-all-descendant-processes
And anyway, why call bash and background it in a child process? Just call what you want to call.
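One common approach from that thread, sketched here under the assumption that job control is enabled with set -m (so each background job gets its own process group), is to signal the whole group with a negative PID:
#!/bin/bash
set -m                            # enable job control: each background job gets its own process group
sudo python3 screen-buttons.py &  # the job's process-group ID equals its PID ($!)
pid=$!
sleep 10
sudo kill -- -"$pid"              # negative PID: signal the whole process group, i.e. all descendants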
#!/bin/bash
sudo python3 screen-buttons.py &
pid=$!
sleep 10
sudo kill "$pid"
Or, even simpler:
sudo timeout 10 python3 screen-buttons.py

Letting other users stop/restart simple bash daemons – use signals or what?

I have a web server where I run some slow-starting programs as daemons. These sometimes need quick restarting (or stopping) when I recompile them or switch to another installation of them.
Inspired by http://mywiki.wooledge.org/ProcessManagement, I'm writing a script
called daemonise.sh that looks like
#!/bin/sh
while :; do
    ./myprogram lotsadata.xml
    echo "Restarting server..." 1>&2
done
to keep a "daemon" running. Since I sometimes need to stop it, or just
restart it, I run that script in a screen session, like:
$ ./daemonise.sh & DPID=$!
$ screen -d
Then perhaps I recompile myprogram, install it to a new path, start
the new one up and want to kill the old one:
$ screen -r
$ kill $DPID
$ screen -d
This works fine when I'm the only maintainer, but now I want to let
someone else stop/restart the program, no matter who started it. And
to make things more complicated, the daemonise.sh script in fact
starts about 16 programs, making it a hassle to kill every single one
if you don't know their PIDs.
What would be the "best practices" way of letting another user
stop/restart the daemons?
I thought about shared screen sessions, but that just sounds hacky and
insecure. The best solution I've come up with for now is to wrap
starting and killing in a script that catches certain signals:
#!/bin/bash
DPID=
trap './daemonise.sh & DPID=$!' USR1
trap 'kill $DPID' USR2 EXIT
# Ensure trapper wrapper doesn't exit:
while :; do
    sleep 10000 & wait $!
done
Now, should another user need to stop the daemons and I can't do it,
she just has to know the pid of the wrapper, and e.g. sudo kill -s
USR2 $wrapperpid. (Also, this makes it possible to run the daemons
on reboots, and still kill them cleanly.)
Is there a better solution? Are there obvious problems with this
solution that I'm not seeing?
(After reading Greg's Bash Wiki, I'd like to avoid any solution involving pgrep or PID-files …)
I recommend a PID-file-based init script. Anyone with sudo privileges for the script will be able to start and stop the server processes.
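A minimal sketch of what such a script could look like (the paths and names are just examples):
#!/bin/sh
# cat /usr/local/bin/myprogram-ctl   (example path)
PIDFILE=/var/run/myprogram.pid
case "$1" in
start)
    ./daemonise.sh &
    echo $! > "$PIDFILE"
    ;;
stop)
    kill "$(cat "$PIDFILE")" && rm -f "$PIDFILE"
    ;;
restart)
    "$0" stop
    "$0" start
    ;;
*)
    echo "usage: $0 {start|stop|restart}" 1>&2
    exit 1
    ;;
esac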
On improving your approach: wouldn't it be advisable to make sure that your sleep command in sleep 10000 & wait $! gets properly terminated if your pidwrapper script exits somehow?
Otherwise there would remain a dangling sleep process in the process table for quite some time.
Similarly, wouldn't it be cleaner to terminate myprogram in daemonise.sh properly on restart (i. e. if daemonise.sh receives a TERM signal)?
In addition, it is possible to suppress job notification messages and test for pid existence before killing.
#!/bin/sh
# cat daemonise.sh
# cf. "How to suppress Terminated message after killing in bash?",
# http://stackoverflow.com/q/81520
trap '
    echo "server shut down..." 1>&2
    kill $spid1 $spid2 $spid3 &&
    wait $spid1 $spid2 $spid3 2>/dev/null
    exit
' TERM
while :; do
    echo "Starting server..." 1>&2
    #./myprogram lotsadata.xml
    sleep 100 &
    spid1=${!}
    sleep 100 &
    spid2=${!}
    sleep 100 &
    spid3=${!}
    wait
    echo "Restarting server..." 1>&2
done
#------------------------------------------------------------
#!/bin/bash
# cat pidwrapper
DPID=
trap '
    kill -0 ${!} 2>/dev/null && kill ${!} && wait ${!} 2>/dev/null
    ./daemonise.sh & DPID=${!}
' USR1
trap '
    kill -0 ${!} 2>/dev/null && kill ${!} && wait ${!} 2>/dev/null
    kill -0 $DPID 2>/dev/null && kill $DPID && wait ${DPID} 2>/dev/null
' USR2
trap '
    trap - EXIT
    kill -0 $DPID 2>/dev/null && kill $DPID && wait ${DPID} 2>/dev/null
    kill -0 ${!} 2>/dev/null && kill ${!} && wait ${!} 2>/dev/null
    exit 0
' EXIT
# Ensure trapper wrapper does not exit:
while :; do
    sleep 10000 & wait $!
done
#------------------------------------------------------------
# test
{
    wrapperpid="`exec sh -c './pidwrapper & echo ${!}' | head -1`"
    echo "wrapperpid: $wrapperpid"
    for n in 1 2 3 4 5; do
        sleep 2
        # start daemonise.sh
        kill -s USR1 $wrapperpid
        sleep 2
        # kill daemonise.sh
        kill -s USR2 $wrapperpid
    done
    sleep 2
    echo kill $wrapperpid
    kill $wrapperpid
}

Starting a process over ssh using bash and then killing it on sigint

I want to start a couple of jobs on different machines using ssh. If the user then interrupts the main script I want to shut down all the jobs gracefully.
Here is a short example of what I'm trying to do:
#!/bin/bash
trap "aborted" SIGINT SIGTERM
aborted() {
    kill -SIGTERM $bash2_pid
    exit
}
ssh -t remote_machine /foo/bar.sh &
bash2_pid=$!
wait
However, the bar.sh process is still running on the remote machine. If I run the same commands in a terminal window, it shuts down the process on the remote host.
Is there an easy way to make this happen when I run the bash script? Or do I need to make it log on to the remote machine, find the right process and kill it that way?
edit:
Seems like I have to go with option B, killing the remote script through another ssh connection.
So now I want to know: how do I get the remote PID?
I've tried something along the lines of:
remote_pid=$(ssh remote_machine '{ /foo/bar.sh & } ; echo $!')
This doesn't work since it blocks.
How do I wait for a variable to print and then "release" a subprocess?
It would definitely be preferable to keep your cleanup managed by the ssh that starts the process rather than moving in for the kill with a second ssh session later on.
When ssh is attached to your terminal, it behaves quite well. However, detach it from your terminal and it becomes (as you've noticed) a pain to signal or manage remote processes. You can shut down the link, but not the remote processes.
That leaves you with one option: Use the link as a way for the remote process to get notified that it needs to shut down. The cleanest way to do this is by using blocking I/O. Make the remote read input from ssh and when you want the process to shut down; send it some data so that the remote's reading operation unblocks and it can proceed with the cleanup:
command & read; kill $!
This is what we would want to run on the remote. We invoke our command that we want to run remotely; we read a line of text (blocks until we receive one) and when we're done, signal the command to terminate.
To send the signal from our local script to the remote, all we need to do now is send it a line of text. Unfortunately, Bash does not give you a lot of good options here. At least not if you want to be compatible with bash < 4.0.
With bash 4 we can use co-processes:
coproc ssh user@host 'command & read; kill $!'
trap 'echo >&"${COPROC[1]}"' EXIT
...
Now, when the local script exits (don't trap on INT, TERM, etc. Just EXIT) it sends a new line to the file in the second element of the COPROC array. That file is a pipe which is connected to ssh's stdin, effectively routing our line to ssh. The remote command reads the line, ends the read and kills the command.
Before bash 4 things get a bit harder since we don't have co-processes. In that case, we need to do the piping ourselves:
mkfifo /tmp/mysshcommand
ssh user@host 'command & read; kill $!' < /tmp/mysshcommand &
trap 'echo > /tmp/mysshcommand; rm /tmp/mysshcommand' EXIT
This should work in pretty much any bash version.
Try this:
ssh -tt host command </dev/null &
When you kill the local ssh process, the remote pty will close and SIGHUP will be sent to the remote process.
Referencing the answer by lhunath and https://unix.stackexchange.com/questions/71205/background-process-pipe-input I came up with this script
run.sh:
#!/bin/bash
log="log"
eval "$@" \&
PID=$!
echo "running" "$@" "in PID $PID" > $log
{ (cat <&3 3<&- >/dev/null; kill $PID; echo "killed" >> $log) & } 3<&0
trap "echo EXIT >> $log" EXIT
wait $PID
The difference being that this version kills the process when the connection is closed, but also returns the exit code of the command when it runs to completion.
$ ssh localhost ./run.sh true; echo $?; cat log
0
running true in PID 19247
EXIT
$ ssh localhost ./run.sh false; echo $?; cat log
1
running false in PID 19298
EXIT
$ ssh localhost ./run.sh sleep 99; echo $?; cat log
^C130
running sleep 99 in PID 20499
killed
EXIT
$ ssh localhost ./run.sh sleep 2; echo $?; cat log
0
running sleep 2 in PID 20556
EXIT
For a one-liner:
ssh localhost "sleep 99 & PID=\$!; { (cat <&3 3<&- >/dev/null; kill \$PID) & } 3<&0; wait \$PID"
For convenience:
HUP_KILL="& PID=\$!; { (cat <&3 3<&- >/dev/null; kill \$PID) & } 3<&0; wait \$PID"
ssh localhost "sleep 99 $HUP_KILL"
Note: kill 0 may be preferred to kill $PID depending on the behavior needed with regard to spawned child processes. You can also kill -HUP or kill -INT if you desire.
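For example, the one-liner above with kill 0 substituted (a sketch; kill 0 signals the entire remote process group, so any children spawned by the command are terminated as well):
ssh localhost "sleep 99 & PID=\$!; { (cat <&3 3<&- >/dev/null; kill 0) & } 3<&0; wait \$PID"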
Update:
A secondary job control channel is better than reading from stdin.
ssh -n -R9002:localhost:8001 -L8001:localhost:9001 localhost ./test.sh sleep 2
Set job control mode and monitor the job control channel:
set -m
trap "kill %1 %2 %3" EXIT
(sleep infinity | netcat -l 127.0.0.1 9001) &
(netcat -d 127.0.0.1 9002; kill -INT $$) &
"$#" &
wait %3
Finally, here's another approach and a reference to a bug filed on openssh:
https://bugzilla.mindrot.org/show_bug.cgi?id=396#c14
This is the best way I have found to do this. You want something on the server side that attempts to read stdin and then kills the process group when that fails, but you also want a stdin on the client side that blocks until the server side process is done and will not leave lingering processes like <(sleep infinity) might.
ssh localhost "sleep 99 < <(cat; kill -INT 0)" <&1
It doesn't actually seem to redirect stdout anywhere but it does function as a blocking input and avoids capturing keystrokes.
The solution for bash 3.2:
mkfifo /tmp/mysshcommand
ssh user@host 'command & read; kill $!' < /tmp/mysshcommand &
trap 'echo > /tmp/mysshcommand; rm /tmp/mysshcommand' EXIT
doesn't work. The ssh command is not on the ps list on the "client" machine. Only after I echo something into the pipe will it appear in the process list of the client machine. The process that appears on the "server" machine would just be the command itself, not the read/kill part.
Writing again into the pipe does not terminate the process.
So summarizing, I need to write into the pipe for the command to start up, and if I write again, it does not kill the remote command, as expected.
You may want to consider mounting the remote file system and running the script from the master box. For instance, if your kernel is compiled with FUSE (you can check with the following):
/sbin/lsmod | grep -i fuse
You can then mount the remote file system with the following command:
sshfs user@remote_system: mount_point
Now just run your script on the file located in mount_point.

How to suppress Terminated message after killing in bash?

How can you suppress the Terminated message that comes up after you kill a
process in a bash script?
I tried set +bm, but that doesn't work.
I know another solution involves calling exec 2> /dev/null, but is that
reliable? How do I reset it back so that I can continue to see stderr?
In order to silence the message, you must be redirecting stderr at the time the message is generated. Because the kill command sends a signal and doesn't wait for the target process to respond, redirecting stderr of the kill command does you no good. The bash builtin wait was made specifically for this purpose.
Here is a very simple example that kills the most recent background command. (Learn more about $! here.)
kill $!
wait $! 2>/dev/null
Because both kill and wait accept multiple pids, you can also do batch kills. Here is an example that kills all background processes (of the current process/script of course).
kill $(jobs -rp)
wait $(jobs -rp) 2>/dev/null
I was led here from bash: silently kill background function process.
The short answer is that you can't. Bash always prints the status of foreground jobs. The monitoring flag only applies for background jobs, and only for interactive shells, not scripts.
see notify_of_job_status() in jobs.c.
As you say, you can redirect standard error to /dev/null, but then you miss any other error messages. You can make it temporary by doing the redirection in a subshell which runs the script. This leaves the original environment alone.
(script 2> /dev/null)
which will lose all error messages, but just from that script, not from anything else run in that shell.
You can save and restore standard error, by redirecting a new filedescriptor to point there:
exec 3>&2 # 3 is now a copy of 2
exec 2> /dev/null # 2 now points to /dev/null
script # run script with redirected stderr
exec 2>&3 # restore stderr to saved
exec 3>&- # close saved version
But I wouldn't recommend this -- the only upside over the first one is that it saves a sub-shell invocation, while being more complicated and possibly even altering the behavior of the script, if the script alters file descriptors.
EDIT:
For a more appropriate answer, check the answer given by Mark Edgar.
Solution: use SIGINT (works only in non-interactive shells)
Demo:
cat > silent.sh <<"EOF"
sleep 100 &
kill -INT $!
sleep 1
EOF
sh silent.sh
http://thread.gmane.org/gmane.comp.shells.bash.bugs/15798
Maybe detach the process from the current shell process by calling disown?
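For example (a small sketch): once a process is disowned, the shell no longer tracks it as a job, so killing it produces no status message:
sleep 30 &
pid=$!
disown "$pid"   # remove the job from the shell's job table
kill "$pid"     # no "Terminated" message: the shell no longer reports on this process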
The Terminated message is logged by the default signal handling of bash 3.x and 4.x. Just trap the TERM signal at the very start of the child process:
#!/bin/sh
## assume script name is test.sh
foo() {
    trap 'exit 0' TERM ## here is the key
    while true; do sleep 1; done
}
echo before child
ps aux | grep 'test\.s[h]\|slee[p]'
foo &
pid=$!
sleep 1 # wait trap is done
echo before kill
ps aux | grep 'test\.s[h]\|slee[p]'
kill $pid ## no need to redirect stdin/stderr
sleep 1 # wait kill is done
echo after kill
ps aux | grep 'test\.s[h]\|slee[p]'
Is this what we are all looking for?
Not wanted:
$ sleep 3 &
[1] 234
<pressing enter a few times....>
$
$
[1]+ Done sleep 3
$
Wanted:
$ (set +m; sleep 3 &)
<again, pressing enter several times....>
$
$
$
$
$
As you can see, no job end message. Works for me in bash scripts as well, also for killed background processes.
'set +m' disables job control (see 'help set') for the current shell. So if you enter your command in a subshell (as done here in brackets) you will not influence the job control settings of the current shell. Only disadvantage is that you need to get the pid of your background process back to the current shell if you want to check whether it has terminated, or evaluate the return code.
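If you do need the PID back in the parent shell, one option (a sketch) is to echo it out of the subshell; note that the background command's output has to be redirected so the command substitution can return immediately:
pid=$( set +m; sleep 30 > /dev/null 2>&1 & echo $! )
# ... later ...
kill "$pid"                        # no job message: the process was never a job of this shell
kill -0 "$pid" 2>/dev/null || echo "background process is gone"   # existence check only; the exit code is not available here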
This also works for killall (for those who prefer it):
killall -s SIGINT (yourprogram)
suppresses the message... I was running mpg123 in background mode.
It could only silently be killed by sending a ctrl-c (SIGINT) instead of a SIGTERM (default).
disown did exactly the right thing for me -- the exec 3>&2 is risky for a lot of reasons -- set +bm didn't seem to work inside a script, only at the command prompt
I had success with adding 'jobs 2>&1 >/dev/null' to the script. I'm not certain it will help anyone else's script, but here is a sample.
while true; do echo $RANDOM; done | while read line
do
    echo Random is $line the last jobid is $(jobs -lp)
    jobs 2>&1 >/dev/null
    sleep 3
done
Another way to disable job notifications is to place your command to be backgrounded in a sh -c 'cmd &' construct.
#!/bin/bash
# ...
pid="`sh -c 'sleep 30 & echo ${!}' | head -1`"
kill "$pid"
# ...
# or put several cmds in sh -c '...' construct
sh -c '
    sleep 30 &
    pid="${!}"
    sleep 5
    kill "${pid}"
'
I found that putting the kill command in a function and then backgrounding the function suppresses the termination output
function killCmd() {
    kill $1
}
killCmd $somePID &
Simple:
{ kill $!; } 2>/dev/null
Advantage? You can use any signal.
ex:
{ kill -9 $PID; } 2>/dev/null
