How do I kill background processes / jobs started by a bash script after it finishes executing? - bash

So I want to start a Docker image, then a Django back-end and finally an Angular front-end, let them run as long as I need to test/develop, and then kill them when I'm done. To do this I first tried starting them all in one script and running them in the background, then having a second script do kill %n for both processes. This doesn't work because the background jobs belong to another shell context, so the second script cannot reference them.
Then I tried this:
#!/bin/bash
# Exit Angular, Django and kill docker_img
function clean_up()
{
    echo "Exiting..."
    kill %2
    kill %1
    docker stop docker_img
    reset
    exit
}
# Trigger cleanup on CTRL + C
trap clean_up SIGINT
# Start docker database
docker start docker_img
# Start django backend
cd ~/Projects/DjangoBackend
source venv/bin/activate
python src/manage.py runserver &
sleep 3
echo 'Done starting django, starting angular'
sleep 1
# Start angular front end
cd ~/Projects/AngularFront
npm start &
However, after npm start & runs, the trap stops working, so it effectively becomes useless. I'm guessing this is because once my script finishes running the trap is no longer active, but I don't know how to fix this. What can I do?

If you are looking to kill a process on Unix/Linux, one way of doing it is to record its PID in a file using the ps -ef command, and then use kill -9 to kill the process.
Example:
$ ps -ef | grep <process_name> | awk '{print $2}' > pid.txt
$ kill -9 `cat pid.txt`
The ps -ef command lists all running processes; grep with the process name narrows that list down to the particular process.
awk is used to extract only the PID (the second column) from that output.
kill -9 will forcefully kill the process.
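Note that grep in this pipeline will usually match its own command line as well, since it also contains the process name. A common trick is to bracket the first character of the name so the grep process no longer matches its own pattern (a sketch, with myserver as a hypothetical process name):
$ ps -ef | grep '[m]yserver' | awk '{print $2}' > pid.txt
$ kill -9 `cat pid.txt`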

The answer seems to have been pretty easy: all I had to do was add wait to the end of the script, which makes the script wait until the processes are done executing. Since two of the processes are servers, they don't stop unless prompted, so the script just waits until SIGINT is received, at which point it runs the clean_up function and exits gracefully.
Additionally, one could use the same trap but with the EXIT trigger instead of SIGINT, to clean up when the script exits on its own because the processes closed.
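Putting it together, here is a sketch of the corrected script from the question with wait added at the end (same paths and names as above):
#!/bin/bash
# Exit Angular, Django and kill docker_img
function clean_up()
{
    echo "Exiting..."
    kill %2
    kill %1
    docker stop docker_img
    reset
    exit
}
# Run clean_up on CTRL + C; use EXIT here instead to also clean up
# when the script ends on its own
trap clean_up SIGINT
# Start docker database
docker start docker_img
# Start django backend
cd ~/Projects/DjangoBackend
source venv/bin/activate
python src/manage.py runserver &
# Start angular front end
cd ~/Projects/AngularFront
npm start &
# Keep the script (and its trap) alive until the servers exit or SIGINT arrives
wait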

Related

Trying to close all child processes when I interrupt my bash script

I have written a bash script to carry out some tests on my system. The tests run in the background and in parallel. The tests can take a long time and sometimes I may wish to abort them part way through.
If I press Ctrl+C it aborts the parent script but leaves the various children running. I want to be able to hit Ctrl+C (or otherwise quit) and have all the child processes running in the background killed too. I have a bit of code that does the job if I'm running the background jobs directly from the terminal, but it doesn't work in my script.
I have a minimal working example.
I have tried using trap in combination with pgrep -P $$.
#!/bin/bash
trap 'kill -n 2 $(pgrep -P $$)' 2
sleep 10 &
wait
I was hoping that hitting Ctrl+C (SIGINT) would kill everything the script started, but it actually says:
./breakTest.sh: line 1: kill: (3220) - No such process
This number changes between runs but doesn't seem to correspond to any running process, so I don't know where it is coming from.
I guess if the contents of the trap command get evaluated where the trap command occurs, then it might explain the outcome: the 3220 PID might be for pgrep itself.
I'd appreciate some insight here
Thanks
I have found a solution using pkill. This example also deals with many child processes.
#!/bin/bash
trap 'pkill -P $$' SIGINT SIGTERM
for i in {1..10}; do
    sleep 10 &
done
wait
This appears to kill all the child processes elegantly, though I don't properly understand what the issue was with my original code, apart from it now sending the correct signal.
In bash, whenever you use & after a command, it places that command in the background as a job (these background jobs are called job specs), numbered incrementally until you exit that terminal session. You can use the jobs command to get the list of background jobs running. To work with these jobs you use % followed by the job number. The jobs command also accepts other options, such as jobs -p to see the process IDs of all jobs, and jobs -p %JOB_SPEC to see the process ID of one particular job.
#!/usr/bin/env bash
trap 'kill -9 %1' 2
sleep 10 &
wait
or
#!/usr/bin/env bash
trap 'kill -9 $(jobs -p %1)' 2
sleep 10 &
wait
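To see the job_spec machinery described above in action, here is a small illustrative sketch (the two sleeps stand in for real background commands):
#!/usr/bin/env bash
sleep 100 &          # becomes job %1
sleep 200 &          # becomes job %2
jobs                 # list both background jobs with their job numbers
jobs -p              # list only the PIDs of all background jobs
jobs -p %1           # print the PID of job %1 alone
kill %2              # signal a job by its job_spec
kill "$(jobs -p %1)" # or by the PID that jobs -p reports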
I implemented something like this a few years back; you can take a look at it: async bash
You can try something like the following:
pkill -TERM -P <your_parent_id_here>

Bash: Start and kill child process

I have a program I want to start. Let's say this program runs a while(true) loop (so it does not terminate). I want to write a bash script which:
Starts the program (./endlessloop &)
Waits 1 second (sleep 1)
Kills the program --> How?
I cannot use $! to get the PID of the child because the server is running a lot of instances concurrently.
Store the PID:
./endlessloop & endlessloop_pid=$!
sleep 1
kill "$endlessloop_pid"
You can also check whether the process is still running with kill -0:
if kill -0 "$endlessloop_pid"; then
echo "Endlessloop is still running"
fi
...and storing the content in a variable means it scales to multiple processes:
endlessloop_pids=( ) # initialize an empty array to store PIDs
./endlessloop & endlessloop_pids+=( "$!" ) # start one in background and store its PID
./endlessloop & endlessloop_pids+=( "$!" ) # start another and store its PID also
kill "${endlessloop_pids[#]}" # kill both endlessloop instances started above
See also BashFAQ #68, "How do I run a command, and have it abort (timeout) after N seconds?"
The ProcessManagement page on the Wooledge wiki also discusses relevant best practices.
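For reference, on systems with GNU coreutils, the timeout utility implements that abort-after-N-seconds pattern directly; a sketch:
$ timeout 1 ./endlessloop
This runs endlessloop and sends it SIGTERM after one second.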
You can use the pgrep command for the same purpose (note that pgrep matches by name, so this signals every running endlessloop instance):
kill $(pgrep endlessloop)

BASH script suspend/continue a process within script

In my bash script I am writing, I am trying to start a process (sleep) in the background and then suspend it. Finally, the process will be finished. For some reason though, when I send the kill command with the STOP signal, it just keeps running as if it received nothing. I can do this from the command line, but the bash script is not working as intended.
sleep 15 &
pid=$!
kill -s STOP $pid
jobs
kill -s CONT $pid
You can make it work by enabling 'monitor mode' in your script: set -m
Please see why-cant-i-use-job-control-in-a-bash-script for further information
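For illustration, a sketch of the question's script with monitor mode enabled as the answer suggests (otherwise unchanged):
#!/bin/bash
set -m                # enable job control, which is off by default in scripts
sleep 15 &
pid=$!
kill -s STOP $pid     # suspend the background sleep
jobs                  # should now report the job as Stopped
kill -s CONT $pid     # resume it
wait $pid             # wait for it to finish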

Getting pid of a background gnome-terminal process

I can easily start a background process, find its pid and search it in the list of running processes.
$ gedit &
$ PID=$!
$ ps -e | grep $PID
This works for me. But if I start gnome-terminal as the background process
$ gnome-terminal &
$ PID=$!
$ ps -e | grep $PID
Then, it is not found in the list of all running processes.
Am I missing something here?
If you use the "--disable-factory" option to gnome-terminal, it's possible to use gnome-terminal in the way you desire. By default it attempts to reuse an already active terminal process, so this option lets you grab the PID of the one you launch. The following script opens a window for 5 seconds, then kills it:
#!/bin/bash
echo "opening a new terminal"
gnome-terminal --disable-factory &
pid=$!
echo "sleeping"
sleep 5;
echo "closing gnome-terminal"
kill -SIGHUP $pid
This appears to be because the gnome-terminal process you start spawns a process itself and then exits. So the PID you capture is the PID of the "stub" process which starts up and then forks the real terminal. It does this so it can be completely detached from the calling terminal.
Unfortunately I do not know of any way of capturing the PID of the "grandchild" gnome-terminal process, which is the one left running. If you do a ps you will see that the gnome-terminal "grandchild" process runs with a parent PID of 1.
(This is just a footnote.) As @Sodved said, gnome-terminal starts a process itself and then exits, so there is no way to get the grandchild PID. (See also APUE Chapter 7 for why a child process won't re-attach to the grandparent process when its parent process is terminated.)
I found that gnome-terminal instantiates only once, so here is just a short script for your specific task:
GNOME_TERMINAL_PID=`pidof gnome-terminal`
If you don't have pidof, you can scan the per-process status files under /proc instead:
GNOME_TERMINAL_PID=`cd /proc && grep Name: */status | grep gnome-terminal | cut -d/ -f1`

Terminate running commands when shell script is killed [duplicate]

This question already has answers here:
What's the best way to send a signal to all members of a process group?
For testing purposes I have this shell script
#!/bin/bash
echo $$
find / >/dev/null 2>&1
Running this from an interactive terminal, Ctrl+C will terminate bash and the find command.
$ ./test-k.sh
13227
<Ctrl+C>
$ ps -ef |grep find
$
Running it in the background, and killing the shell only will orphan the commands running in the script.
$ ./test-k.sh &
[1] 13231
13231
$ kill 13231
$ ps -ef |grep find
nos 13232 1 3 17:09 pts/5 00:00:00 find /
$
I want this shell script to terminate all its child processes when it exits, regardless of how it's called. It'll eventually be started from Python and Java applications, and some form of cleanup is needed when the script exits. Any options I should look into, or any way to rewrite the script so that it cleans up after itself on exit?
I would do something like this:
#!/bin/bash
trap : SIGTERM SIGINT
echo $$
find / >/dev/null 2>&1 &
FIND_PID=$!
wait $FIND_PID
if [[ $? -gt 128 ]]
then
    kill $FIND_PID
fi
Some explanation is in order, I guess. Out of the gate, we need to change some of the default signal handling. : is a no-op command; we use it rather than an empty string because passing an empty string to trap causes the shell to ignore the signal instead of doing something about it (the opposite of what we want to do).
Then, the find command is run in the background (from the script's perspective) and we call the wait builtin to wait for it to finish. Since we gave a real command to trap above, when a signal is handled, wait will exit with a status greater than 128. If the process being waited for completes, wait returns that process's exit status instead.
Last, if wait returns that error status, we want to kill the child process. Luckily we saved its PID. The advantage of this approach is that you can log an error message or otherwise identify that a signal caused the script to exit.
As others have mentioned, putting kill -- -$$ as your argument to trap is another option if you don't care about leaving any information around post-exit.
For trap to work the way you want, you do need to pair it up with wait - the bash man page says "If bash is waiting for a command to complete and receives a signal for which a trap has been set, the trap will not be executed until the command completes." wait is the way around this hiccup.
You can extend it to more child processes if you want, as well. I didn't really exhaustively test this one out, but it seems to work here.
$ ./test-k.sh &
[1] 12810
12810
$ kill 12810
$ ps -ef | grep find
$
I was looking for an elegant solution to this issue and found the following solution elsewhere.
trap 'kill -HUP 0' EXIT
My own man pages say nothing about what 0 means, but from digging around, it seems to mean the current process group. Since the script gets its own process group, this ends up sending SIGHUP to all of the script's children, foreground and background.
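As a self-contained sketch of that trap (the sleeps stand in for real background work):
#!/bin/bash
# On any exit, send SIGHUP to every process in the script's process group
trap 'kill -HUP 0' EXIT
sleep 100 &
sleep 200 &
# both sleeps receive SIGHUP when the script exits, however it exits
wait
As a later answer notes, this only behaves well when the script really does have its own process group.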
Send a signal to the group.
So instead of kill 13231 do:
kill -- -13231
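If you don't already know the process-group ID, you can look it up from the PID first; a sketch using the example PID from above:
$ kill -- -"$(ps -o pgid= -p 13231 | tr -d ' ')"
ps -o pgid= prints the process-group ID (padded with spaces, hence the tr), and the leading - tells kill to signal the whole group.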
If you're starting from Python then have a look at http://www.pixelbeat.org/libs/subProcess.py which shows how to mimic the shell in starting and killing a group.
@Patrick's answer almost did the trick, but it doesn't work if the parent process of your current shell is in the same group (it kills the parent too).
I found this to be better:
trap 'pkill -P $$' EXIT
See here for more info.
Just add a line like this to your script:
trap "kill $$" SIGINT
You might need to change 'SIGINT' to 'INT' on your setup, but this will basically kill your process and all child processes when you hit Ctrl-C.
The thing you would need to do is trap the kill signal, kill the find command and exit.
