Killing Subshell with SIGTERM - shell

I'm sure this is really simple, but it's biting me in the face anyway, and I'm a little frustrated and stumped.
So, I have a script which I've managed to boil down to:
#!/bin/sh
sleep 50 | echo
If I run that at the command line, and hit Ctrl-C it stops, like I would expect.
If I send it sigint, using kill, it does nothing.
I thought that was strange, since I thought those should have been the same.
Then, if I send it sigterm, it dies, but if I look in ps, the sleep is still running.
What am I missing, here?
This is obviously not the real script, which runs python, and it's more of a problem when it keeps running after start-stop-daemon tries to kill the daemon.
Help me people. I'm dumb.

The reason this happens is that Ctrl-C delivers SIGINT to the entire foreground process group, including the sleep process, whereas the sigint you are sending with kill is delivered only to the shell running the script. See Child process receives parent's SIGINT for details on this.
You can verify this yourself by using strace -p when hitting ctrl-c or sending sigint; strace will tell you what signals are delivered.
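To mimic what Ctrl-C does using kill, signal the process group rather than the single PID. A minimal sketch, with an illustrative PID (a negative PID targets the whole group):
./test.sh &           # suppose the script's PID (and process group ID) is 1234
kill -s INT 1234      # SIGINT reaches only the shell running the script
kill -s INT -- -1234  # SIGINT reaches every process in the group, sleep included,
                      # which is effectively what Ctrl-C does to the foreground group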
EDIT: I don't think you are dumb. Processes and how they work are seemingly simple, but the details are often complicated, and even experts get confused by this sort of thing.

I did the same thing: I wrote a script named test.sh with the content below.
#!/bin/sh
sleep 50 | echo
After executing it, I hit Ctrl-C -> it works fine, i.e. it closes.
I executed it again, and in another terminal I checked the PID with ps -ef | grep test.sh. After finding the PID, I did kill <pid> and it killed the process. To verify, I executed ps -ef | grep test.sh again and didn't get any PID.

Related

Trying to close all child processes when I interrupt my bash script

I have written a bash script to carry out some tests on my system. The tests run in the background and in parallel. The tests can take a long time and sometimes I may wish to abort the tests part way through.
If I Control+C, it aborts the parent script but leaves the various children running. I wish to make it so that I can hit Control+C (or otherwise quit) and then kill all child processes running in the background. I have a bit of code that does the job if I'm running the background jobs directly from the terminal, but it doesn't work in my script.
I have a minimal working example.
I have tried using trap in combination with pgrep -P $$.
#!/bin/bash
trap 'kill -n 2 $(pgrep -P $$)' 2
sleep 10 &
wait
I was hoping that hitting control+c (SIGINT) would kill everything that the script started, but it actually says:
./breakTest.sh: line 1: kill: (3220) - No such process
This number changes, but doesn't seem to apply to any running processes, so I don't know where it is coming from.
I guess that if the contents of the trap command get evaluated where the trap command occurs, that might explain the outcome. The 3220 PID might be for pgrep itself.
I'd appreciate some insight here
Thanks
I have found a solution using pkill. This example also deals with many child processes.
#!/bin/bash
trap 'pkill -P $$' SIGINT SIGTERM   # signal every direct child of this script
for i in {1..10}; do
    sleep 10 &
done
wait
This appears to kill all the child processes elegantly, though I don't properly understand what the issue was with my original code, apart from pkill sending the right signal. (A likely explanation: background children of a non-interactive shell start with SIGINT ignored, so sending SIGINT to them has no effect, while pkill's default SIGTERM does.)
In bash, whenever you use & after a command, that command is placed in the background as a job (these background jobs are called job specs), numbered incrementally until you exit that terminal session. You can use the jobs command to get the list of running background jobs. To work with these jobs you use % with the job ID. The jobs command also accepts other options, such as jobs -p to see the process IDs of all jobs, and jobs -p %JOB_SPEC to see the process ID of that particular job.
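For example, a quick interactive session might look like this (the PID is illustrative):
$ sleep 100 &
[1] 4821
$ jobs
[1]+  Running                 sleep 100 &
$ jobs -p
4821
$ kill %1   # signals job 1 (SIGTERM by default)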
#!/usr/bin/env bash
trap 'kill -9 %1' 2
sleep 10 &
wait
or
#!/usr/bin/env bash
trap 'kill -9 $(jobs -p %1)' 2
sleep 10 &
wait
I implemented something like this a few years back; you can take a look at it: async bash.
You can try something like the following:
pkill -TERM -P <your_parent_id_here>

How do I handle stopping my service?

I've turned a program I wrote into a service, and I have a bash script that runs to start up the actual program, since there are things that have to be started in a particular order. The contents of the startup script (called with start-stop-daemon from the init.d script) look like:
./rfid_reader &
sleep 2
java ReaderClass &
This works fine, but how do I go about killing them when it comes time to stop the process?
I've seen pidfiles used; do I just get the PIDs of the two programs, write them to a file, and then kill them when it comes time to shut them down, or do I have to ps -ef | grep program to get their PIDs?
I don't think killall is a good idea for this. You'd better record the PID of the program started in the background in some file (e.g. PID_FILE) and then kill $(<$PID_FILE) to stop it.
Please refer to this thread for how to get the PID the previous started background program.
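A minimal sketch of that idea, using $! to capture each background PID (the pidfile paths are assumptions, not part of the original init script):
#!/bin/sh
./rfid_reader &
echo $! > /var/run/rfid_reader.pid    # $! is the PID of the last background job
sleep 2
java ReaderClass &
echo $! > /var/run/reader_class.pid
Then, when it comes time to stop:
kill "$(cat /var/run/rfid_reader.pid)" "$(cat /var/run/reader_class.pid)"
rm -f /var/run/rfid_reader.pid /var/run/reader_class.pid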
Assuming you know the name of your program, you can kill them as below:
killall -KILL program_name

killing a background process with shell in Ubuntu

I tried killing a process from a shell script in Ubuntu; the process is created like this:
#!/bin/bash
# There should be code here which can kill my app (running in the background)
echo "app will be run."
java -jar path/to/my/jar/file.jar /arguman/of/myApp.txt &
disown
I know how to kill an app manually, like:
ps -ax -u | grep appName
and then, after finding the process ID:
kill [processId]
Is it possible to do this from the script? If yes, how?
Thank you.
You can actually find examples of how to do this right here, on this site, by doing a simple search. (Or Google it: "bash shell wait.") When you execute any background process, you can get the PID ("process id") of the new child. You can wait on the child to finish. You can also kill it.
Shell commands that show you executing jobs also provide their PID.
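A minimal sketch of that pattern, reusing the jar invocation from the question (the variable name is just illustrative):
#!/bin/bash
java -jar path/to/my/jar/file.jar /arguman/of/myApp.txt &
app_pid=$!                     # PID of the background child
# ... later, when it is time to stop it:
kill "$app_pid"
wait "$app_pid" 2>/dev/null    # reap the child and collect its exit status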
However, bear in mind that "killing a child" ... while it won't land you in prison in this case ;-) ... "is generally not a good thing to do." You have no idea what the child was doing, what it had or had not finished doing, what data might now be in an inconsistent state, when you put that bullet through its brain. It is impossible to reliably debug any process that relies on murdering its children.
You can "send a different signal," such as SIGHUP or SIGUSR1, to a process, using the same kill command, and design the child process to be listening for that signal as an indication that it must "shut itself down, quickly." Always give a process a signal to "put its own affairs in order, and then to leave 'this mortal coil' ..."

bash restart sub-process using trap SIGCHLD?

I've seen monitoring programs, either scripts that check process status using ps or service status (on Linux) periodically, or C/C++ programs that fork and wait on the process...
I wonder if it is possible to use bash with trap and restart the sub-process when SIGCHLD is received?
I have tested a basic script on RedHat Linux with the following idea (and it certainly didn't work...):
#!/bin/bash
set -o monitor # can someone explain this? discussion on the Internet says this is needed
trap startProcess SIGCHLD
startProcess() {
    /path/to/another/bash/script.sh & # the one to restart
    while [ 1 ]
    do
        sleep 60
    done
}
startProcess
The bash script being started just sleeps for a few seconds and exits, for now.
Several issues observed:
When the shell starts in the foreground, SIGCHLD is handled only once. Does trap reset signal handling like signal()?
The script and its child seem to be immune to SIGINT, which means they cannot be stopped by ^C.
Since it cannot be stopped, I closed the terminal. The script seems to have been HUPed, and many zombie children were left.
When run in the background, the script caused the terminal to die.
... anyway, this does not work at all. I have to say I know too little about this topic.
Can someone suggest or give some working examples?
Are there scripts for such use?
How about using wait in bash, then?
Thanks
I can try to answer some of your questions, but not all, based on what I know.
The line set -o monitor (or equivalently, set -m) turns on job control, which is only on by default for interactive shells. This seems to be required for SIGCHLD to be sent. However, job control is more of an interactive feature and not really meant to be used in shell scripts (see also this question).
Also keep in mind this is probably not what you intended to do, because once you enable job control, SIGCHLD will be sent for every external command that exits (e.g. every time you run ls or grep or anything, a SIGCHLD will fire when that command completes and your trap will run).
I suspect the reason the SIGCHLD trap only appears to run once is that your trap handler contains a foreground infinite loop, so your script gets stuck in the trap handler. There doesn't seem to be a point to that loop anyway, so you could simply remove it.
The script's "immunity" to SIGINT seems to be an effect of enabling job control (the monitor part). My hunch is that with job control turned on, the sub-instance of bash that runs your script no longer terminates itself in response to a SIGINT, but instead passes the SIGINT through to its foreground child process. In your script, the ^C (i.e. SIGINT) then simply acts like a continue statement would in other programming languages, since SIGINT just kills the currently running sleep 60, whereupon the while loop immediately runs a new sleep 60.
When I tried running your script and then killing it (from another terminal), all I ended up with were two stray sleep processes.
Backgrounding that script also kills my shell for me, although the behavior is not terribly consistent (sometimes it happens immediately, other times not at all). It seems typing any keys other than enter causes an EOF to get sent somehow. Even after the terminal exits, the script continues to run in the background. I have no idea what is going on here.
Being more specific about what you want to accomplish would help. If you just want a command to run continuously for the lifetime of your script, you could run an infinite loop in the background, like
while true; do
    some-command
    echo some-command finished
    echo restarting some-command ...
done &
Note the & after the done.
For other tasks, wait is probably a better idea than using job control in a shell script. Again, it would depend on what exactly you are trying to do.
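For instance, a restart-on-exit supervisor can be built with wait alone, with no job control or SIGCHLD trap. A minimal sketch (the child script path is taken from the question):
#!/bin/bash
while true; do
    /path/to/another/bash/script.sh &   # the process to keep alive
    child=$!
    wait "$child"                       # blocks until the child exits
    echo "child $child exited with status $?, restarting..."
done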

Terminate running commands when shell script is killed [duplicate]

This question already has answers here:
What's the best way to send a signal to all members of a process group?
For testing purposes I have this shell script
#!/bin/bash
echo $$
find / >/dev/null 2>&1
Running this from an interactive terminal, ctrl+c will terminate bash, and the find command.
$ ./test-k.sh
13227
<Ctrl+C>
$ ps -ef |grep find
$
Running it in the background and killing just the shell will orphan the commands running in the script.
$ ./test-k.sh &
[1] 13231
13231
$ kill 13231
$ ps -ef |grep find
nos 13232 1 3 17:09 pts/5 00:00:00 find /
$
I want this shell script to terminate all its child processes when it exits, regardless of how it's called. It'll eventually be started from Python and Java applications, and some form of cleanup is needed when the script exits. Are there any options I should look into, or any way to rewrite the script to clean itself up on exit?
I would do something like this:
#!/bin/bash
trap : SIGTERM SIGINT
echo $$
find / >/dev/null 2>&1 &
FIND_PID=$!
wait $FIND_PID
if [[ $? -gt 128 ]]
then
    kill $FIND_PID
fi
Some explanation is in order, I guess. Right out of the gate, we need to change some of the default signal handling. : is a no-op command; we use it rather than an empty string, since passing an empty string causes the shell to ignore the signal entirely instead of doing something about it (the opposite of what we want to do).
Then, the find command is run in the background (from the script's perspective) and we call the wait builtin for it to finish. Since we gave a real command to trap above, when a signal is handled, wait will exit with a status greater than 128. If the process waited for completes, wait will return the exit status of that process.
Last, if the wait returns that error status, we want to kill the child process. Luckily we saved its PID. The advantage of this approach is that you can log some error message or otherwise identify that a signal caused the script to exit.
As others have mentioned, putting kill -- -$$ as your argument to trap is another option if you don't care about leaving any information around post-exit.
For trap to work the way you want, you do need to pair it up with wait - the bash man page says "If bash is waiting for a command to complete and receives a signal for which a trap has been set, the trap will not be executed until the command completes." wait is the way around this hiccup.
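A tiny sketch of the behavior that man page excerpt describes (illustrative only; imagine SIGINT being sent to the script's PID from another terminal):
#!/bin/bash
trap 'echo got INT' INT
# Foreground: the trap is deferred until find completes on its own.
find / >/dev/null 2>&1
# Backgrounded plus wait: wait is interruptible, so the trap runs
# immediately and wait returns with a status greater than 128.
find / >/dev/null 2>&1 &
wait $!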
You can extend it to more child processes if you want, as well. I didn't really exhaustively test this one out, but it seems to work here.
$ ./test-k.sh &
[1] 12810
12810
$ kill 12810
$ ps -ef | grep find
$
I was looking for an elegant solution to this issue and found the following solution elsewhere.
trap 'kill -HUP 0' EXIT
My own man pages say nothing about what 0 means, but from digging around, it seems to mean the current process group. Since the script gets its own process group, this ends up sending SIGHUP to all the script's children, foreground and background.
Send a signal to the group.
So instead of kill 13231 do:
kill -- -13231
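If you're not sure the script is its own process group leader, you can look up its process group ID first. A hypothetical session, reusing the PID from the question:
$ ps -o pgid= -p 13231   # print the process group ID of PID 13231
13231
$ kill -- -13231         # negative PID: signal the entire group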
If you're starting from Python then have a look at http://www.pixelbeat.org/libs/subProcess.py which shows how to mimic the shell in starting and killing a group.
@Patrick's answer almost did the trick, but it doesn't work if the parent process of your current shell is in the same group (it kills the parent too).
I found this to be better:
trap 'pkill -P $$' EXIT
See here for more info.
Just add a line like this to your script:
trap "kill $$" SIGINT
You might need to change 'SIGINT' to 'INT' on your setup, but this will basically kill your process and all child processes when you hit Ctrl-C.
The thing you would need to do is trap the kill signal, kill the find command and exit.
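A minimal sketch of that idea (reusing the FIND_PID variable from the earlier answer):
#!/bin/bash
find / >/dev/null 2>&1 &
FIND_PID=$!
trap 'kill "$FIND_PID"; exit' TERM INT
wait "$FIND_PID"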
