Bash: spawn child processes that quit when parent script quits

I'd like to spawn several child processes in Bash, but I'd like the parent script to remain running, such that signals sent to the parent script also affect the spawned child processes.
This doesn't do that:
parent.bash:
#!/usr/bin/bash
spawnedChildProcess1 &
spawnedChildProcess2 &
spawnedChildProcess3 &
parent.bash ends immediately, and the spawned processes continue running independently of it.

If you don't want your parent to exit immediately after spawning its children then, as Barmar told you, use wait.
Now, if you want your child processes to die when the parent exits, send them a SIGTERM (or any other fatal signal) just before exiting:
kill 0
(0 is a special PID that means "every process in the parent's process group")
If the parent may exit unexpectedly (e.g. upon receiving a signal, or because of a set -u or set -e, etc.), then you can use trap to send the TERM signal to the children just before exiting:
trap 'kill 0' EXIT
[edit] In conclusion, this is how you should write your parent process:
#!/usr/bin/bash
trap 'kill 0' EXIT
...
spawnedChildProcess1 &
spawnedChildProcess2 &
spawnedChildProcess3 &
...
wait
This way there is no need for anyone to signal a negative process ID (the whole group) from outside, which would not cover all the ways your parent process can die anyway.

Use wait to have the parent process wait for all the children to exit.
#!/usr/bin/bash
spawnedChildProcess1 &
spawnedChildProcess2 &
spawnedChildProcess3 &
wait
Keyboard signals are sent to the entire foreground process group, so typing Ctrl-C will kill both the children and the parent.

Related

Why doesn't a bash script wait for its child processes to finish before exiting on receiving SIGTERM?

exit_gracefully() {
    echo "start.sh got SIGTERM"
    echo "Sending TERM to child_process_1_pid: ${child_process_1_pid}"
    echo "Sending TERM to child_process_2_pid: ${child_process_2_pid}"
    echo "Sending TERM to child_process_3_pid: ${child_process_3_pid}"
    kill -TERM ${child_process_1_pid} ${child_process_2_pid} ${child_process_3_pid}
}
trap exit_gracefully TERM
consul watch -http-addr=${hostIP}:8500 -type=key -key=${consul_kv_key} /child_process_1.sh 2>&1 &
child_process_1_pid=$!
/child_process_2.sh &
child_process_2_pid=$!
/child_process_3.sh &
child_process_3_pid=$!
/healthcheck.sh &
/configure.sh
# sleep 36500d &
# wait $!
wait ${child_process_1_pid} ${child_process_2_pid} ${child_process_3_pid}
echo 'start.sh exiting'
start.sh is the parent script. When SIGTERM is trapped, it is forwarded to three of its child processes. If `sleep 36500d &` and `wait $!` are left commented out, start.sh does not wait for child_process_1.sh, child_process_2.sh and child_process_3.sh to receive SIGTERM, handle it and exit; instead, start.sh exits immediately on receiving SIGTERM, before the children can handle it. But if I keep `sleep 36500d &` and `wait $!` uncommented, the parent waits for children 1, 2 and 3 to receive and handle SIGTERM and exit before exiting itself.
Why does this difference exist even though I wait on the three child PIDs in either case? Why should I need sleep when I am already waiting on three PIDs?
Receiving a signal will cause any wait command in progress to return.
This is because the purpose of a signal is to interrupt a process in whatever it's currently doing.
All the effects you see are simply the result of the current wait returning, the handler running, and the script continuing from where the wait exited.
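One way to act on this (my sketch, not part of the answer above): after the handler runs and the interrupted `wait` returns, simply `wait` again until every child is gone. Here `sleep` commands stand in for real children, and a helper subshell plays the operator who sends SIGTERM:

```shell
#!/usr/bin/env bash
forward_term() {
    echo 'got SIGTERM, forwarding to children'
    kill -TERM "$pid1" "$pid2" 2>/dev/null
}
trap forward_term TERM

sleep 30 & pid1=$!
sleep 30 & pid2=$!

# Demo scaffolding: send ourselves SIGTERM two seconds in, as if an
# operator had run `kill <parent pid>` from another terminal.
( sleep 2; kill -TERM $$ ) &

# The first wait is interrupted when the trap fires; keep waiting until
# both children have really exited, then fall through.
while kill -0 "$pid1" 2>/dev/null || kill -0 "$pid2" 2>/dev/null; do
    wait "$pid1" "$pid2" 2>/dev/null || true
done
echo 'children finished; parent exiting'
```

The `|| true` keeps the loop going even though the interrupted `wait` reports a status greater than 128.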

How do child processes get terminated when the parent is killed using SIGINT?

#!/usr/bin/env bash
for i in $(seq 1 $1);
do
./extended&
done
wait
This is my bash script; it executes the extended binary as many times as specified by the command-line argument. When I kill the bash script with SIGINT, the child processes are also killed. Since I've only called wait in the script, I couldn't figure out how the child processes get killed. I know that wait makes the parent wait until the children terminate.
bash sends a SIGHUP (hang-up signal) to all of its jobs when an interactive login shell exits (and any bash does so when the huponexit option is set). If you don't want this behaviour for a job, use disown -h. (Note also that a Ctrl-C at the terminal delivers SIGINT to the whole foreground process group, so the children receive it directly, wait or no wait.)
From man bash:
To prevent the shell from sending the signal to a particular job, it should be removed from the jobs table with the disown builtin or marked to not receive SIGHUP
using disown -h.
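A sketch of the two variants the quoted passage describes, with `sleep` stand-in jobs (the names `a` and `b` are demo variables):

```shell
#!/usr/bin/env bash
sleep 5 & a=$!
sleep 5 & b=$!

disown "$a"       # remove the job from the job table entirely
disown -h "$b"    # keep it in the table, but mark it NOT to receive SIGHUP

remaining=$(jobs -p)     # only $b is still listed
echo "still in job table: $remaining"

kill "$a" "$b"    # demo cleanup; both are still our children
```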

Running Child Process In Sequential Statement Before Exiting Parent?

I'm trying to write a Bash script that, when it receives a SIGINT signal, creates a copy of itself before exiting. So, when a user tries to kill this script with SIGINT, a copy of the process reappears.
trap "echo Exiting...?; ./ghoul.sh; exit 1" SIGINT
while :
do
echo Process Number $$, with PPID $PPID!
sleep 1
done
However, whenever I suspend the process and check ps -f, there are multiple processes of the script (children and children of children). The exit command never seems to run since it's waiting for the children to exit. I want to find a way to run the script in the trap statement and exit afterward while maintaining the resulting child process. Is there any way to do this besides creating the child as a background process?
I find it much simpler to put the exit code into a function. Also, your unquoted echo contains a bare ?, which is a glob (filename expansion) character. To prevent the parent from killing the child you can use disown, and yes, you need to run it in the background.
Try this:
f_exit() {
echo 'Exiting...?'
./ghoul.sh &
disown -h %1
exit 1
}
trap "f_exit" SIGINT
while :
do
echo "Process Number $$, with PPID $PPID!"
sleep 1
done

Background process getting killed when its parent is terminated?

I have code that looks something like this
function doTheThing {
# a potentially infinite while loop...
}
# other stuff...
doTheThing &
trap "kill $!" SIGINT SIGTERM
Strangely, when I ctrl-C out of the parent process before the loop is done, I get a message that the process doesn't exist. Furthermore, if I get rid of the trap, I can't find the process with a ps -aF. It looks like the background process is getting killed when its parent is terminated, but my understanding was that wasn't supposed to happen. I just want to make sure that I can safely leave out the trap and not leave zombie processes everywhere.
The POSIX specification says that when you type the interrupt character (normally Control-C) the SIGINT is sent to the foreground process group. So as long as the background process is running in the same process group as the script that invoked it, it will receive the signal at the same time as the script process.
Shells generally use process groups to implement job control, and by default this is only enabled in interactive shells, not shells running scripts. There's no standard way to run a function in its own process group, but you could use setsid to run it in a new session, which is an even higher level of grouping than process groups. Then it wouldn't receive the interrupt.
You might still want to write a trap command that kills the function on EXIT, though.
doTheThing &
trap "kill $!" EXIT
since exiting the script doesn't automatically kill the rest of the process group.
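The grouping can be made visible with `ps`. A sketch, assuming the util-linux `setsid` utility, with `sleep` standing in for `doTheThing`:

```shell
#!/usr/bin/env bash
sleep 3 &                # plain background child: same group as the script
plain=$!
setsid sleep 3 &         # child in its own session (and process group)
detached=$!
sleep 1                  # give setsid a moment to set up the new session

my_pgid=$(ps -o pgid= -p $$ | tr -d ' ')
plain_pgid=$(ps -o pgid= -p "$plain" | tr -d ' ')
detached_pgid=$(ps -o pgid= -p "$detached" | tr -d ' ')

echo "script pgid:  $my_pgid"
echo "plain child:  $plain_pgid"       # same group: a terminal Ctrl-C hits it too
echo "setsid child: $detached_pgid"    # a different group (setsid may even
                                       # have forked, leaving this empty)
```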

How do you stop two concurrent processes?

In my web development workflow, I have two processes:
watching my folder for changes
previewing my site in the browser
I want to be able to run them and then later stop them both at the same time. I've seen everyone suggesting using the ampersand operator:
process_1 & process_2
But pressing Ctrl + C only stops the second one. I have to kill the first one manually. What am I missing in this approach?
You can have the foreground script explicitly kill the subprocesses in response to SIGINT:
#!/bin/sh
trap 'kill $pid1 $pid2' INT
cmd1 &
pid1=$!
cmd2 &
pid2=$!
wait
There is a race condition in this example: if you send SIGINT to the parent before pid1 is assigned, kill will emit a warning message and neither child will be terminated. If you send SIGINT before pid2 is assigned, only the process running cmd1 will be sent the signal. In either case the parent keeps running, and a second SIGINT can be sent. Some versions of kill let you avoid this race by signalling the whole process group with kill -- -$$ (a negative argument names a process group), but not all versions of kill support that usage. (Note that if either child process does not terminate in response to the signal, the parent will not exit but continue waiting.)
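A sketch of that group-signal variant (note the `--` before the negative PID). The `setsid -w` wrapper (util-linux, assumed available) is demo scaffolding that makes the script the leader of its own process group, so `-$$` names exactly its group; `sleep` stands in for cmd1 and cmd2:

```shell
demo=$(mktemp -d)
cat > "$demo/group.sh" <<'EOF'
#!/usr/bin/env bash
# Reset the trap first, then signal the whole group (children AND this
# shell); no child PIDs need to be captured, so there is no race.
trap 'trap - TERM && kill -- -$$' INT TERM
sleep 30 & echo "$!" >> "$1"     # PIDs recorded only so the demo can check them
sleep 30 & echo "$!" >> "$1"
echo "$$" > "$2"
wait
EOF
setsid -w bash "$demo/group.sh" "$demo/children" "$demo/parent.pid" &
sleep 1                                  # let it start and record PIDs
kill -TERM "$(cat "$demo/parent.pid")"   # one signal to the parent...
wait $! 2>/dev/null || true              # ...takes the whole group down
sleep 1
```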
How about writing two scripts, one containing
./process_1 &
./process_2 &
and a second containing
killall process_1
killall process_2
Start both processes by running the first script, and end them by running the second script.