Call to ls within trapped SIGUSR1 signal results in undesired shell logout - bash

I am attempting to troubleshoot an example regarding bash script signal traps given during a lecture. I copy-pasted the code into a script, but my execution of it results in an undesired and unexpected logout at the end. The instructor makes no mention that this might happen, and I am under the impression that it shouldn't. However, after failing to make contact with my TAs and fellow students, I turn to you for help!
The full slide I am reviewing can be found on page 15 here: http://web.engr.oregonstate.edu/~brewsteb/CS344Slides/3.3%20Signals.pdf
Expected Output:
Triggering a child process termination with a silent ls
SIGCHLD Received! Exiting!
[1]+ Done sigchldtest
Actual Output:
Triggering a child process termination with a silent ls
os1 ~/344_operating_systems/examples 1062$ SIGCHLD Received! Exiting!
logout
---------------
The script, sigchldtest:
#!/bin/bash
set -m
trap "echo 'Triggering a child process termination with a silent ls'; ls > /dev/null" USR1
trap "echo 'SIGCHLD Received! Exiting!'; exit 0" CHLD
while [ 1 -eq 1 ]
do
echo "nothing" > /dev/null
done
Console input:
sigchldtest &
kill -SIGUSR1 [sigchldtest pID]
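As an aside, a sketch that avoids looking up the PID by hand, using bash's $! (the PID of the most recently backgrounded job):
sigchldtest &
kill -SIGUSR1 $!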
Steps Taken
Removed "ls >/dev/null" from line 3. Code executes but does not trigger the CHLD signal trap, as expected.
Removed "exit 0" from line 4. Shell does not log out upon completion, but user must press "enter" to return to the command prompt and the script continues running in the background (not desired). Additionally, the path name should not be printed before "SIGCHLD Received!". It appears the shell prompt is being activated during the middle of my script?
Triggering a child process termination with a silent ls
os1 ~/344_operating_systems/examples 1010$ SIGCHLD Received! Exiting!
{USER MUST PRESS ENTER HERE}
$
Attempted the process in both PuTTY and the Windows 10 console; same result.
Attempted to kill the process from another terminal, per Ferrybig's suggestion. Killing the process in the second terminal logged me out of the first one, lol!
Thank you for your time!

Related

Makefile: how to run bash script and ignore its exit status?

Minimized test case for the problem:
I have following Makefile:
test:
	bash test.sh || true
	echo OK
and the test.sh contains
#!/bin/bash
while read -p "Enter some text or press Ctrl+C to exit > " input
do
echo "Your input was: $input"
done
When I run make test and press Ctrl+C to exit the bash read the make will emit
Makefile:2: recipe for target 'test' failed
make: *** [test] Interrupt
How can I tell make to ignore the exit status of the script? I already have || true after the script, which usually is enough to get make to keep going, but for some reason the SIGINT interrupting the read causes make to behave differently in this case.
I'm looking for a generic answer that works for processes other than while read loop in bash, too.
This has nothing to do with the exit status of the script. When you press ^C you're sending an interrupt signal to the make program, not just to your script. That causes the make program to stop, just like ^C always does.
There's no way to have make ignore ^C operations; whenever you press ^C at the terminal, make will stop.
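One way to see why (a sketch, not from the answer itself; the pgid-demo target name is made up): add a recipe that prints the processes on the current terminal along with their process group IDs (the recipe line must be TAB-indented):
pgid-demo:
	ps -o pid,pgid,comm
Running make pgid-demo shows make and ps reporting the same PGID; that is the terminal's foreground process group, and Ctrl+C delivers SIGINT to every member of it, make included.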
Ctrl+C sends a signal to the program to tell it to stop. What you want is Ctrl+D, which sends the EOT (end of transmission) character, so read sees the end of its input. You will need to press Ctrl+D twice unless you are at the beginning of a line.
some text<c-d><c-d>
or
some text<return>
<c-d>
I found a way to make this work. It's a bit tricky, so I'll explain the solution first. The important thing to understand is that Ctrl+C is handled by your terminal and not by the currently running process in the terminal, as I previously thought. When the terminal catches your Ctrl+C, it checks the foreground process group and immediately sends SIGINT to all processes in that group. When you run something via a Makefile and press Ctrl+C, the SIGINT is immediately sent to make and to all the processes it started, because they all belong to the foreground process group. GNU Make handles SIGINT by waiting for the currently executing child process to stop and then exiting with the message
Makefile:<line number>: recipe for target '<target>' failed
make: *** [<target>] Interrupt
if the child exited with a non-zero exit status. If the child handled the SIGINT by itself and exited with status 0, GNU Make exits silently. Many programs exit with status code 130 on SIGINT, but this is not required. In addition, the kernel and the wait() C API can differentiate between a plain exit status of 130 and a child that was actually killed by SIGINT, so if make wanted to treat these cases differently it could do so regardless of the exit code. bash, however, doesn't support testing whether a child was killed by SIGINT; it only exposes exit status codes.
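The 128 + signal-number convention mentioned above is easy to see from a shell prompt (a minimal sketch; SIGINT is signal 2, so the reported status is 130):
bash -c 'kill -INT $$'
echo $?    # prints 130 (128 + 2), bash's encoding of a child killed by SIGINT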
The solution is to set up the processes so that your foreground process group does not include GNU Make while you want to handle Ctrl+C specially. However, as POSIX doesn't define a tool for creating new process groups, we have to use a bash-specific trick: use bash job control to make bash create a new process group. Be warned that this causes some side effects (e.g. stdin and stdout behave slightly differently), but at least for my case it was good enough.
Here's how to do it:
I have following Makefile (as usual, nested lines must have TAB instead of spaces):
test:
	bash -c 'set -m; bash ./test.sh'
	echo OK
and the test.sh contains
#!/bin/bash
int_handler()
{
    printf "\nReceived SIGINT, quitting...\n" 1>&2
    exit 0
}
trap int_handler INT
while read -p "Enter some text or press Ctrl+C to exit > " input
do
    echo "Your input was: $input"
done
The set -m triggers creating a new foreground process group and the int_handler takes care of returning a successful exit code on exit. Of course, if you want some exit code other than zero on Ctrl+C, feel free to use any suitable value. If you want something shorter, the child script only needs trap 'exit 0' INT instead of the separate function and its setup.
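For completeness, that shorter variant of test.sh would read:
#!/bin/bash
trap 'exit 0' INT
while read -p "Enter some text or press Ctrl+C to exit > " input
do
    echo "Your input was: $input"
done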
For additional information:
https://unix.stackexchange.com/a/99134/20336
https://unix.stackexchange.com/a/386856/20336
https://stackoverflow.com/a/18479195/334451
https://www.cons.org/cracauer/sigint.html

stop currently running bash script lazily/gracefully

Say I have a bash script like this:
#!/bin/bash
exec-program zero
exec-program one
The script issues a run command to exec-program with the arg "zero". Say, for instance, that first exec-program line is currently running. I know that Ctrl-C will halt the process and stop executing the remainder of the script.
Instead, is there a keypress that will let the current line finish executing and then stop the script (i.e. not execute "exec-program one"), without modifying the script directly? In this example it would let "exec-program zero" run to completion and then return to the shell, rather than halting "exec-program zero" immediately.
TL;DR: something like Ctrl-C at runtime, but lazier/more graceful?
In the man page, under SIGNALS section it reads:
If bash is waiting for a command to complete and receives a signal for which a trap has been set, the trap will not be executed until the command completes.
This is exactly what you're asking for. You need to set a trap on SIGINT that exits, then run exec-program in a subshell where SIGINT is ignored, so that it inherits the SIG_IGN handler and Ctrl+C won't kill it. Below is an implementation of this concept.
#!/bin/bash -
trap exit INT
foo() (
    trap '' INT
    exec "$@"
)
foo sleep 5
echo alive
If you hit Ctrl+C while sleep 5 is running, bash will wait for it to complete and then exit; you will not see alive on the terminal.
exec is for avoiding another fork() btw.
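Applied to the question's script (exec-program is the asker's placeholder command), the same pattern would look something like this sketch:
#!/bin/bash -
trap exit INT

# run the command in a subshell that ignores SIGINT, so Ctrl+C can't kill it;
# bash defers its own INT trap until the command completes, then exits
run() (
    trap '' INT
    exec "$@"
)

run exec-program zero
run exec-program one
After Ctrl+C, "exec-program zero" finishes, the exit trap fires, and "exec-program one" never runs.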

shell script process termination issue

/bin/sh -version
GNU sh, version 1.14.7(1)
exitfn () {
    # Restore signal handling for SIGINT
    echo "exiting with trap" >> /tmp/logfile
    rm -f /var/run/lockfile.pid  # Growl at user,
    exit                         # then exit script.
}
trap 'exitfn; exit' SIGINT SIGQUIT SIGTERM SIGKILL SIGHUP
The above is my function in shell script.
I want to call it in some special conditions...like
when:
"kill -9" fires on pid of this script
"ctrl + z" press while it is running on -x mode
server reboots while script is executing ..
In short, with any kind of interrupt in script, should do some action
eg. rm -f /var/run/lockfile.pid
but my above function is not working properly; it works only for terminal close or "ctrl + c"
Kindly don't suggest to upgrade "bash / sh" version.
SIGKILL cannot be trapped by the trap command, or by any process. It is a guaranteed kill signal that by its definition cannot be trapped. Thus upgrading your sh/bash will not help anyway.
You can't trap kill -9 that's the whole point of it, to destroy processes violently that don't respond to other signals (there's a workaround for this, see below).
The server reboot should first deliver a signal to your script which should be caught with what you have.
As to CTRL-Z, that also gives you a signal (SIGTSTP), so you may want to add that. Though that wouldn't normally be a reason to shut down your process, since it may then be put into the background and restarted (with bg).
As to what to do for those situations where your process dies without a catchable signal (like the -9 case), the program should check for that on startup.
By that, I mean lockfile.pid should store the actual PID of the process that created it (by using echo $$ >/var/run/myprog_lockfile.pid for example) and, when your program starts, it should check for the existence of that process.
If the process doesn't exist, or it exists but isn't the right one (based on name usually), your new process should delete the pidfile and carry on as if it was never there. If the old process both exists and is the right one, your new process should log a message and exit.
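A minimal sketch of that startup check (the lock path and the process-name test are illustrative, and a shell as old as 1.14.7 may prefer backticks over $(...)):
#!/bin/sh
LOCKFILE=/var/run/lockfile.pid

if [ -f "$LOCKFILE" ]; then
    oldpid=`cat "$LOCKFILE"`
    # still alive and actually our script? if not, the pidfile is stale (e.g. after kill -9)
    if kill -0 "$oldpid" 2>/dev/null && ps -p "$oldpid" | grep -q myscript; then
        echo "already running as PID $oldpid" >&2
        exit 1
    fi
    rm -f "$LOCKFILE"
fi

echo $$ > "$LOCKFILE"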

How does trap / kill work in bash on Linux?

My sample file
traptest.sh:
#!/bin/bash
trap 'echo trapped' TERM
while :
do
sleep 1000
done
$ traptest.sh &
[1] 4280
$ kill %1 <-- kill by job number works
Terminated
trapped
$ traptest.sh &
[1] 4280
$ kill 4280 <-- kill by process id doesn't work?
(sound of crickets, process isn't killed)
If I remove the trap statement completely, kill process-id works again?
Running some RHEL 2.6.18-194.11.4.el5 at work. I am really confused by this behaviour, is it right?
kill [pid]
sends the TERM signal exclusively to the specified PID.
kill %1
sends the TERM signal to job #1's entire process group, in this case to the script's PID plus its children (sleep).
I've verified that with strace on the sleep process and on the script process.
Anyway, someone got a similar problem here (but with SIGINT instead of SIGTERM): http://www.vidarholen.net/contents/blog/?p=34.
Quoting the most important sentence:
kill -INT %1 sends the signal to the job’s process group, not the backgrounded pid!
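You can confirm the group membership yourself (a sketch; the bracket trick keeps grep from matching its own command line):
./traptest.sh &
ps -o pid,pgid,args | grep '[t]raptest'     # the script: its PID equals its PGID
ps -o pid,pgid,args | grep '[s]leep 1000'   # the sleep child: same PGID
kill %1 signals that shared PGID as a group; kill with the bare PID signals only the script.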
This is expected behavior. The default signal sent by kill is SIGTERM, which you are catching with your trap. Consider this:
#!/bin/bash
# traptest.sh
trap "echo Booh!" SIGINT SIGTERM
echo "pid is $$"
while : # This is the same as "while true".
do
a=1
done
(sleep really creates a new process and the behavior is clearer with my example I guess).
So if you run traptest.sh in one terminal and kill TRAPTEST_PROCESS_ID from another terminal, output in the terminal running traptest will be Booh! as expected (and the process will NOT be killed). If you try sending kill -s HUP TRAPTEST_PROCESS_ID, it will kill the traptest process.
This should clear up the %1 confusion.
Note: the code example is taken from tldp
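As a sketch of that two-terminal experiment (the PID is illustrative):
$ ./traptest.sh
pid is 12345
Booh! <-- appears when the other terminal sends SIGTERM
and in the second terminal:
$ kill 12345 <-- SIGTERM is trapped; Booh! prints, the loop keeps running
$ kill -s HUP 12345 <-- SIGHUP is not trapped; the script exits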
Davide Berra explained the difference between kill %<jobspec> and kill <PID>, but not how that difference results in what you observed. After all, Unix signal handlers should be called pretty much instantaneously, so why does sending a SIGTERM to the script alone not trigger its trap handler?
The bash man page explains why, in the last paragraph of the SIGNALS section:
If bash is waiting for a command to complete and receives a signal for
which a trap has been set, the trap will not be executed until the
command completes.
So, the signal was delivered immediately, but the handler execution was deferred until sleep exited.
Hence, with kill %<jobspec>:
Both the script and sleep received SIGTERM
bash registered the signal, noticed that a trap was set for it, and queued the handler for future execution
sleep exited immediately
bash noted sleep's exit, and ran the trap handler
whereas with kill <script_PID>:
Only the script received SIGTERM
bash registered the signal, noticed that a trap was set for it, and queued the handler for future execution
sleep exited after 1000 seconds
bash noted sleep's exit, and ran the trap handler
Obviously, you didn't wait long enough to see that last bit. :)
If you're interested in the gory details, download the bash source code and look in trap.c, specifically the trap_handler() and run_pending_traps() functions.
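If you'd rather not wait 1000 seconds, a sketch with a shorter sleep makes the deferral visible:
#!/bin/bash
trap 'echo trapped' TERM
sleep 10
echo "sleep finished"
Start it in the background, kill it by PID right away, and "trapped" only appears after the full ten seconds, followed by "sleep finished".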

Terminate running commands when shell script is killed [duplicate]

This question already has answers here:
What's the best way to send a signal to all members of a process group?
For testing purposes I have this shell script
#!/bin/bash
echo $$
find / >/dev/null 2>&1
Running this from an interactive terminal, ctrl+c will terminate bash, and the find command.
$ ./test-k.sh
13227
<Ctrl+C>
$ ps -ef |grep find
$
Running it in the background, and killing the shell only will orphan the commands running in the script.
$ ./test-k.sh &
[1] 13231
13231
$ kill 13231
$ ps -ef |grep find
nos 13232 1 3 17:09 pts/5 00:00:00 find /
$
I want this shell script to terminate all its child processes when it exits, regardless of how it's called. It'll eventually be started from Python and Java applications, and some form of cleanup is needed when the script exits. Are there any options I should look into, or any way to rewrite the script so it cleans itself up on exit?
I would do something like this:
#!/bin/bash
trap : SIGTERM SIGINT
echo $$
find / >/dev/null 2>&1 &
FIND_PID=$!
wait $FIND_PID
if [[ $? -gt 128 ]]
then
kill $FIND_PID
fi
Some explanation is in order, I guess. Out of the gate, we need to change some of the default signal handling. : is a no-op command; we use it (rather than an empty string) because passing an empty string causes the shell to ignore the signal instead of doing something about it, which is the opposite of what we want.
Then, the find command is run in the background (from the script's perspective) and we call the wait builtin to wait for it to finish. Since we gave a real command to trap above, when a signal is handled, wait will exit with a status greater than 128. If the process being waited for completes, wait returns that process's exit status.
Last, if the wait returns that error status, we want to kill the child process. Luckily we saved its PID. The advantage of this approach is that you can log some error message or otherwise identify that a signal caused the script to exit.
As others have mentioned, putting kill -- -$$ as your argument to trap is another option if you don't care about leaving any information around post-exit.
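That alternative would look something like the following sketch; the handler clears its own traps first so that the group-wide signal doesn't re-enter it:
#!/bin/bash
# on INT/TERM: drop our traps, then signal the script's entire process group
trap 'trap - SIGINT SIGTERM && kill -- -$$' SIGINT SIGTERM
echo $$
find / >/dev/null 2>&1 &
wait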
For trap to work the way you want, you do need to pair it up with wait - the bash man page says "If bash is waiting for a command to complete and receives a signal for which a trap has been set, the trap will not be executed until the command completes." wait is the way around this hiccup.
You can extend it to more child processes if you want, as well. I didn't really exhaustively test this one out, but it seems to work here.
$ ./test-k.sh &
[1] 12810
12810
$ kill 12810
$ ps -ef | grep find
$
Was looking for an elegant solution to this issue and found the following solution elsewhere.
trap 'kill -HUP 0' EXIT
My own man pages say nothing about what 0 means, but from digging around, it seems to mean the current process group. Since the script gets its own process group, this ends up sending SIGHUP to all the script's children, foreground and background.
Send a signal to the group.
So instead of kill 13231 do:
kill -- -13231
If you're starting from Python then have a look at http://www.pixelbeat.org/libs/subProcess.py, which shows how to mimic the shell in starting and killing a process group.
Patrick's answer almost did the trick, but it doesn't work if the parent process of your current shell is in the same group (it kills the parent too).
I found this to be better:
trap 'pkill -P $$' EXIT
See here for more info.
Just add a line like this to your script:
trap "kill $$" SIGINT
You might need to change 'SIGINT' to 'INT' on your setup, but this will basically kill your process and all child processes when you hit Ctrl-C.
The thing you would need to do is trap the kill signal, kill the find command and exit.
