Cleaning up after a ruby script -- trapping signals

My ruby script creates a tempfile and spawns a potentially long-running external process. Neither may continue to exist after the script ends, no matter how the script terminates.
I thought the following lines would take care of things:
stderr = File.open(Tempfile.new(__FILE__),'w')
trap("EXIT") { FileUtils.rm_f stderr.path }
pid = spawn("dd", *ARGV, STDERR => stderr )
trap("EXIT") { FileUtils.rm_f stderr.path; Process.kill pid }
they're supposed to be a rewrite of the following bash code, which seems to work fine,
dd_output=`mktemp`
trap "rm -f $dd_output" EXIT
dd "$#" 2>| $dd_output & pid=$!
trap "rm -f $dd_output; kill $pid" EXIT
but they don't.
If an exception is raised later on, the spawned process doesn't die, otherwise it does.
Could anyone tell me what I'm doing wrong?
Edit:
Traps do work.
The above code has multiple blemishes:
Tempfile takes care of itself -- it is likely to already have been deleted by the time the trap handler runs, which may cause FileUtils.rm_f to raise another error, preventing the rest of the handler from running.
Process.kill needs a signal -- Process.kill "TERM", pid (or "KILL"). The error raised by FileUtils.rm_f shadowed the error from my faulty invocation of Process.kill.
Fixed code:
require 'tempfile'

stderr = Tempfile.new(__FILE__)              # Tempfile removes itself on exit
pid = spawn("dd", *ARGV, STDERR => stderr)
trap("EXIT") { Process.kill "TERM", pid }    # kill the child however the script ends
Ensure works too.

I think ensure might be able to help you here; it will always execute the code inside. It is similar to Java's finally.
require 'tempfile'
require 'fileutils'

stderr = Tempfile.new(__FILE__)
begin
  pid = spawn("dd", *ARGV, STDERR => stderr)
ensure
  FileUtils.rm_f stderr.path
  Process.kill "TERM", pid   # Process.kill needs a signal name
end
If that doesn't do the trick you could try adding an at_exit handler.

Related

stop currently running bash script lazily/gracefully

Say I have a bash script like this:
#!/bin/bash
exec-program zero
exec-program one
the script runs exec-program with the arg "zero", right? Say, for instance, that the first line is currently running. I know that Ctrl-C will halt the process and discontinue executing the remainder of the script.
Instead, is there a keypress that will allow the current line to finish executing and then discontinue the script execution (i.e., not execute "exec-program one"), without modifying the script directly? In this example the script would keep running "exec-program zero", but afterwards would return to the shell rather than immediately halting "exec-program zero".
TL;DR: something at runtime similar to Ctrl-C, but more lazy/graceful?
In the man page, under the SIGNALS section, it reads:
If bash is waiting for a command to complete and receives a signal for which a trap has been set, the trap will not be executed until the command completes.
This is exactly what you're asking for. You need to set an exit trap for SIGINT, then run exec-program in a subshell where SIGINT is ignored; so that it'll inherit the SIG_IGN handler and Ctrl+C won't kill it. Below is an implementation of this concept.
#!/bin/bash -
trap exit INT

foo() (
    trap '' INT
    exec "$@"
)

foo sleep 5
echo alive
If you hit Ctrl+C while sleep 5 is running, bash will wait for it to complete and then exit; you will not see alive on the terminal.
exec is for avoiding another fork() btw.
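Applied to the script from the question, a sketch might look like this (exec-program stands in for the real command):
#!/bin/bash -
trap exit INT

foo() (
    trap '' INT
    exec "$@"
)

foo exec-program zero   # Ctrl+C during this line lets it finish; then the INT trap exits
foo exec-program one    # never reached if Ctrl+C arrived earlier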

Basic signal communication

I have a bash script, its contents are:
function foo {
    echo "Foo!"
}
function clean {
    echo "exiting"
}
trap clean EXIT
trap foo SIGTERM
echo "Starting process with PID: $$"
while :
do
    sleep 60
done
I execute this on a terminal with:
./my_script
And then do this on another terminal
kill -SIGTERM my_script_pid # obviously the PID is the one echoed from my_script
I would expect to see the message "Foo!" after signalling from the other terminal, but it's not working. SIGKILL works, and the code in the EXIT trap also runs.
Using Ctrl-C on the terminal my_script is running on triggers foo normally, but somehow I can't send the signal SIGTERM from another terminal to this one.
Replacing SIGTERM with any other signal doesn't change a thing (besides Ctrl-C no longer triggering anything; the trap was actually mapped to SIGUSR1 in the beginning).
It may be worth mentioning that it is only the trapped signal that fails to work; any other signal keeps its default behaviour.
So, what am I missing? Any clues?
EDIT: I also just checked it wasn't a privilege issue (that would be weird as I'm able to send SIGKILL anyway), but it doesn't seem to be that.
Bash runs the trap only after sleep returns.
To understand why, think in C / Unix internals: while the signal is dispatched instantly to bash, the corresponding signal handler that bash has set up only does something like received_sigterm = true.
Only when sleep returns, and with it the wait system call that bash issued after starting the sleep process, does bash resume its normal work and execute your trap (after noticing received_sigterm).
This is done this way for good reasons: Doing I/O (or generally calling into the kernel) directly from a signal handler generally results in undefined behaviour as far as I know - although I can't tell more about that.
Apart from this technical reason, there is another reason why bash doesn't run the trap instantly: This would actually undermine the fundamental semantics of the shell. Jobs (this includes pipelines) are executed strictly in a sequential manner unless you explicitly mess with background jobs.
The PID that you originally print is for the bash instance that executes your script, not for the sleep process that it is waiting on. While sleep runs, the trap is deferred until it exits.
If you want to see the effect that you are looking for promptly, replace sleep with a shorter-lived process like ps.
function foo {
    echo "Foo!"
}
function clean {
    echo "exiting"
}
trap clean EXIT
trap foo SIGTERM
echo "Starting process with PID: $$"
while :
do
    ps > /dev/null
done
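If you need the trap to fire promptly without busy-looping, a common workaround (a sketch, not part of the answers above) is to run sleep in the background and use the wait builtin, which bash interrupts as soon as a trapped signal arrives:
trap foo SIGTERM
while :
do
    sleep 60 &   # run sleep asynchronously
    wait $!      # wait returns immediately on a trapped signal, and the trap runs
done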

How do I stop a signal from killing my Bash script?

I want an infinite loop to keep on running, and only temporarily be interrupted by a kill signal. I've tried SIGINT, SIGUSR1, SIGUSR2. All of them seem to halt the loop. I even tried SIGINFO, but that wasn't supported by Linux.
#!/bin/bash
echo $$ > /tmp/pid # Save the pid
function do_something {
    echo "I am doing stuff" # let's do this now, and go back to doing the thing that is to be done over and over again.
    #exit
}
while :
do
    echo "This should be done over and over again, but always wait for something else to be done in between"
    trap do_something SIGINT
    while `true`
    do
        sleep 1 # so we're waiting for that other thing.
    done
done
My code runs the function once, after getting an INT signal from another script, but then never again. It halts.
EDIT: Although I accidentally put an exit at the end of the function here on Stack Overflow, I didn't in the actual code I used. Either way, it made no difference. The solution is SIGTERM, as described by Tiago.
I believe you're looking for SIGTERM:
Example:
#! /bin/bash
trap -- '' SIGINT SIGTERM
while true; do
    date +%F_%T
    sleep 1
done
Running this example, Ctrl+C won't kill it, nor will kill <pid>; you can however kill it with kill -9 <pid>.
If you don't want Ctrl+Z to suspend it, use: trap -- '' SIGINT SIGTERM SIGTSTP
Trap the signal, then either react to it appropriately in the function associated with the trap, or ignore it by, for example, associating : as the command to execute when the signal occurs.
To trap signals, bash has the trap command.
Reset a trap to its former action by executing trap with the signal name only.
Therefore you want to (I think that's what you mean by "only temporarily be interrupted by a kill signal"):
trap the signal at the beginning of your script: trap custom_action signal
just before you want the signal to be able to interrupt your script, execute: trap signal
at the end of that phase, trap again: trap custom_action signal (see the sketch after this list)
To specify signals, you can also use their respective signal numbers. A list of signal names is printed with the command:
trap -l
The default signal sent by kill is SIGTERM (15), unless you specify a different signal after the kill command.
Don't exit in your do_something function. Simply let the function return to the place in your code where it was interrupted when the signal occurred.
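A minimal sketch of that trap / reset / re-trap sequence (the signal choice and the handler name are placeholders, not from the answer above):
#!/bin/bash
custom_action() { echo "handling SIGUSR1"; }

trap custom_action SIGUSR1   # 1. install the custom action
# ... phase where SIGUSR1 runs custom_action and the script carries on ...
trap SIGUSR1                 # 2. reset SIGUSR1 to its former (default) action
# ... phase where SIGUSR1 terminates the script ...
trap custom_action SIGUSR1   # 3. re-install the custom action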
The mentioned : command has another potential use in your script, if you feel thusly inclined:
while :
do
    sleep 1
done
can be an alternative to "while true" -- no backticks needed for that, btw.
You may just want to ignore the exit status, if all you want is for your script to keep running and not exit, without worrying about handling traps:
(my_command) || true
The parentheses execute that command in a subshell. The true is for compatibility with set -e, if you use it. It simply overrides the status to always report a success.
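For example, under set -e a failing or interrupted command would normally abort the whole script, but not when wrapped this way (my_command is a placeholder):
#!/bin/bash
set -e                 # exit on any command failure...
(my_command) || true   # ...except this one: its status is forced to success
echo "still running"   # reached even if my_command failed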
I found this question to be helpful:
How to run a command before a Bash script exits?

bash shell: Can a control-c cause shell to write an empty file?

I have a bash shell script. It writes out to a text file. Most of the time it works fine if I stop the script with a control-c at the command level. Sometimes, though, a file that's been written to, such as by
echo "hello world" >myfile.txt
will end up being empty. So is it possible that when I hit control-c to stop the shell script, it is caught at the instant where the script has opened the file for writing but hasn't yet put anything in it, so the file is left empty?
If that's the case, what can I do in the bash shell script so that it will exit gracefully after it has written to the file and before it gets a chance to write to the file again? It's doing this in a while loop. Thanks!
Yes, it's possible that you end up with an empty file.
A solution would be to trap the signal that's caused by ^C (SIGINT), and set a flag which you can check in your loop:
triggered=0
trap "triggered=1" SIGINT

while true
do
    if [ "$triggered" = 1 ]
    then
        echo "quitting"
        exit
    fi
    ...do stuff...
done
EDIT: I didn't realize that even though the shell's own SIGINT handling will get trapped, Ctrl-C still delivers SIGINT to the shell's subprocesses, and they'll get killed if they don't handle SIGINT themselves.
Since echo is a shell builtin, it might survive the killing, but I'm not entirely sure. A quick test seems to work okay (the file is always written, whereas without trapping SIGINT, I occasionally end up with an empty file as well).
As #spbnick suggests in the comments, on Linux you can use the setsid command to create a new process group for any subprocesses you start, which will prevent them from being killed by a SIGINT sent to the shell.
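A minimal example of that approach (long_running_command is a placeholder):
# setsid puts the child in a new session and process group, so the SIGINT
# that Ctrl-C sends to the terminal's foreground process group never reaches it.
setsid long_running_command &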

kill all subprocesses of a daemon

I am writing an /etc/init.d/mydaemon:
# ...
source functions # LSB compliant
EXEC=/usr/local/bin/mydaemon
PROG=mydaemon
function start() {
    daemon --pidfile=/var/run/mydaemon.pid ${EXEC}
}
function stop() {
    killproc ${PROG}
}
# ...
my /usr/local/bin/mydaemon:
#!/bin/bash
trap "trap TERM ; kill 0" TERM
binary with some args
AFAIK, this should work because:
daemon records mydaemon's PID in /var/run/mydaemon.pid
killproc reads that PID and sends SIGTERM to it
mydaemon traps this signal, disables the trap, and sends SIGTERM to the entire process group, including the process of binary with some args
However, this doesn't work. After stopping the service, mydaemon terminates, but binary is still running.
What am I missing, and what is the best practice for stopping the daemon and all its children?
BTW:
When my /usr/local/bin/mydaemon is:
#!/bin/bash
binary with some args &
echo $! $$ > /var/run/mydaemon.pid
wait
It works properly, but this seems less robust to me, and there are times when this is not appropriate (when the binary invocation is less straightforward, or it has its own children, etc.).
If you give the parent process' id to pkill, it'll kill all the children:
pkill -TERM -P parentID
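In the init script above, stop() could then look something like this sketch (reusing the pidfile written by daemon):
function stop() {
    pid=$(cat /var/run/mydaemon.pid)
    pkill -TERM -P "$pid"   # TERM all children of mydaemon...
    kill -TERM "$pid"       # ...then mydaemon itself
}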
You can set up a trap, which takes care of the cleanup process when SIGINT or SIGTERM is received. For example:
function cleanup { kill $CHILDPID; exit 0; }
trap cleanup SIGINT SIGTERM
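Note that CHILDPID must be captured when the child is started. A fuller sketch of /usr/local/bin/mydaemon along these lines (still using binary with some args from the question):
#!/bin/bash
binary with some args &    # start the child in the background
CHILDPID=$!                # remember its PID

function cleanup { kill $CHILDPID; exit 0; }
trap cleanup SIGINT SIGTERM

wait $CHILDPID             # the wait builtin returns early when a trapped signal arrives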
For the specific scenario presented in the question, it is also worth considering the following option for /usr/local/bin/mydaemon:
#!/bin/bash
exec binary with some args
Rather than being run in a subprocess with a new PID, binary will instead take over the shell process PID, and hence receive the signals from the init script directly.
