I am trying to send a signal from one terminal, A, to another terminal, B. Both run an interactive shell.
In terminal B, I trap the signal SIGUSR1 like so:
$ trap 'source ~/mycommand' SIGUSR1
Now, in terminal A, I send the signal like so:
$ kill -SIGUSR1 pidOfB
Unfortunately, nothing happens in B. If I want my command to be executed, I have to switch to B and either enter a new command or press Enter.
How can I avoid this drawback and have my command executed immediately?
EDIT:
It's important to note that I want to interact directly with the interactive shell in terminal B from terminal A.
For this reason, any solution where the trap command is executed in a subshell will not work for me...
Also, terminal B must stay interactive.
The shell may simply be stuck in a blocking read, waiting for command-line input. Hitting Enter causes the handler to execute before the entered command. Running a command such as wait, which a trapped signal can interrupt:
$ sleep 60 & wait
then sending the signal causes wait to return immediately, followed by the output of the handler.
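As a concrete illustration of the difference (the PID 12345 and the echo handler are purely illustrative), in terminal B:
$ trap 'echo "got SIGUSR1"' SIGUSR1
$ echo $$
12345
$ sleep 60 & wait
and in terminal A:
$ kill -SIGUSR1 12345
Terminal B then prints got SIGUSR1 immediately, whereas at an idle prompt the same kill appears to do nothing until Enter is pressed.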
Based on the answers and my numerous attempts to solve this, I don't think it is possible for an interactive bash shell to catch a trapped signal immediately.
For the trap to trigger, there must be some interaction from the user.
This is because readline blocks until a newline is entered, and there is no way to interrupt that read.
My solution is to use dtach, a small program that emulates the detach feature of screen.
This program can run a fully interactive shell and, in its latest version, provides a way to communicate with that shell (or whatever program you launch) through a custom socket.
To start a new dtach session running an interactive bash, in terminal B:
$ dtach -A /tmp/MySocket bash -i
Now, from terminal A, we can send a message to the bash session in terminal B like so:
$ echo 'echo hello' | dtach -p /tmp/MySocket
In terminal B, we now see:
$ echo hello
hello
To expand on that, if I now run this in terminal A:
$ trap 'echo "cd $(pwd)" | dtach -p /tmp/MySocket' DEBUG
I'll have the working directories of the two terminals synced (the DEBUG trap fires before each command run in A and pushes a matching cd to B).
PS: I'd still like to know if there is a way to do this in pure bash.
I use a similar trap so that periodically I can (from a separate cron job) force all idle bash processes to do a 'history -a'. I found that if I trap SIGALRM instead of SIGUSR1, then the bash blocking read seems not to be a problem: the trap runs now, rather than next time one hits return. I tried SIGINT, but that caused an annoying "^C", followed by a new prompt line, to be displayed. I haven't yet found any drawbacks of using SIGALRM, but perhaps they will arise.
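For reference, that setup might look roughly like the following sketch (the pkill invocation is my own assumption, and note that a bash process without the trap would be killed by SIGALRM's default action, so scope the match to your own interactive shells). In each interactive shell's ~/.bashrc:
trap 'history -a' SIGALRM
and from the cron job (youruser is a placeholder):
pkill -ALRM -u youruser -x bash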
It may be buffering.
As a test, try installing a loop trigger. In window A:
{ trap 'ls' USR1; while sleep 1; do echo > /dev/null; done; } &
[1] 7316
in window B:
kill -usr1 7316
Back in window A, the ls fires each time the loop runs its echo.
Don't know if that will help, but it's something.
I know that there is a file called .bash_profile that executes code (a bash script) when you open a terminal.
And there is another file, called .bash_logout, that executes code when you exit the terminal.
How would I execute some script when the terminal is killed?
(.bash_logout does not cover the case where the terminal is killed.)
How would I execute some script when the terminal is killed?
I interpret this as "execute a script when the terminal window is closed". To do so, add the following inside your .bashrc or .bash_profile:
trap '[ -t 0 ] || command to execute' EXIT
Of course you can replace command to execute with source ~/.bash_exit and put all the commands inside the file .bash_exit in your home directory.
The special EXIT trap is executed whenever the shell exits (e.g. by closing the terminal, but also by pressing Ctrl+D at the prompt, or executing exit, or ...).
[ -t 0 ] checks whether stdin is connected to a terminal. Because of the ||, the next command is executed only if that test fails, which it does when closing the terminal, but does not for other common ways of exiting bash (e.g. pressing Ctrl+D at the prompt or executing exit).
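Putting the pieces together, a minimal sketch (the ~/.bash_exit name is just the convention suggested above, and its contents are purely illustrative). In ~/.bashrc or ~/.bash_profile:
trap '[ -t 0 ] || source ~/.bash_exit' EXIT
and ~/.bash_exit could then contain, for example:
history -a
echo "$(date): terminal closed" >> ~/.terminal_close.log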
Failed attempts (read only if you are trying to find an alternative)
In the terminals I have heard of, bash always receives a SIGHUP signal when the window is closed. Sometimes there are even two SIGHUPs; one from the terminal, and one from the kernel when the pty (pseudoterminal) is closed. However, sometimes both SIGHUPs are lost in interactive sessions, because bash's readline temporarily uses its own traps. Strangely enough, the SIGHUPs always seem to get caught when there is an EXIT trap; even if that EXIT trap does nothing.
However, I strongly advise against setting any trap on SIGHUP. Bash processes non-EXIT traps only after the current command finished. If you ran sh -c 'while true; do true; done' and closed the terminal, bash would continue to run in the background as if you had used disown or nohup.
I want to write a bash script that will continue to run if the user is disconnected, but can be aborted if the user presses Ctrl+C.
I can solve the first part of it like this:
#!/bin/bash
cmd='
#commands here, avoiding single quotes...
'
nohup bash -c "$cmd" &
tail -f nohup.out
But pressing Ctrl+C obviously just kills the tail process, not the main body. Can I have both? Maybe using Screen?
I want to write a bash script that will continue to run if the user is disconnected, but can be aborted if the user presses Ctrl+C.
I think this is exactly the answer to the question you formulated; this version works without screen:
#!/bin/bash
cmd=`cat <<EOF
# commands here
EOF
`
nohup bash -c "$cmd" &
# store the process id of the nohup process in a variable
CHPID=$!
# whenever ctrl-c is pressed, kill the nohup process before exiting
trap "kill -9 $CHPID" INT
tail -f nohup.out
Note however that nohup is not reliable. When the invoking user logs out, chances are that nohup also quits immediately. In that case disown works better.
bash -c "$cmd" &
CHPID=$!
disown
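Putting those pieces together, here is a sketch of the full wrapper using disown instead of nohup (the log file name cmd.out is arbitrary, and this is my own assembly of the above, not an exhaustively tested solution):
#!/bin/bash
cmd='
#commands here, avoiding single quotes...
'
# run the command in the background with its output captured in a file
bash -c "$cmd" > cmd.out 2>&1 &
CHPID=$!
# remove it from the shell's job table so no SIGHUP reaches it
disown
# whenever Ctrl+C is pressed, kill the background process before exiting
trap "kill $CHPID" INT
# follow the output; Ctrl+C interrupts tail and fires the trap above
tail -f cmd.out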
This is probably the simplest form using screen:
screen -S SOMENAME script.sh
Then, if you get disconnected, on reconnection simply run:
screen -r SOMENAME
Ctrl+C should continue to work as expected
Fact 1: When a terminal (xterm for example) gets closed, the shell is supposed to send a SIGHUP ("hangup") to any processes running in it. This harkens back to the days of analog modems, when a program needed to clean up after itself if mom happened to pick up the phone while you were online. The signal could be trapped, so that a special function could do the cleanup (close files, remove temporary junk, etc). The concept of "losing your connection" still exists even though we use sockets and SSH tunnels instead of analog modems. (Concepts don't change; all that changes is the technology we use to implement them.)
Fact 2: The effect of Ctrl-C depends on your terminal settings. Normally, it will send a SIGINT, but you can check by running stty -a in your shell and looking for "intr".
You can use these facts to your advantage, using bash's trap command. For example try running this in a window, then press Ctrl-C and check the contents of /tmp/trapped. Then run it again, close the window, and again check the contents of /tmp/trapped:
#!/bin/bash
trap "echo 'one' > /tmp/trapped" 1
trap "echo 'two' > /tmp/trapped" 2
echo "Waiting..."
sleep 300000
For information on signals, you should be able to man signal (FreeBSD or OSX) or man 7 signal (Linux).
(For bonus points: See how I numbered my facts? Do you understand why?)
So ... to your question. To "survive" disconnection, you want to specify behaviour that will be run when your script traps SIGHUP.
(Bonus question #2: Now do you understand where nohup gets its name?)
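One way to act on Fact 1 is to have the script ignore SIGHUP: commands started after the trap inherit the ignored disposition, so the work keeps running when the terminal goes away, while Ctrl-C is left at its default and still aborts everything. A minimal sketch (the command and log file are placeholders):
#!/bin/bash
# ignore the hangup signal; commands started after this inherit the setting
trap '' HUP
# SIGINT (Ctrl-C) remains at its default, so it still aborts the script
echo "Working..."
long_running_command > work.log 2>&1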
I'm trying to use a shell script to start a command. I don't care if/when/how/why it finishes. I want the process to start and run, but I want to be able to get back to my shell immediately...
You can just run the script in the background:
$ myscript &
Note that this is different from putting the & inside your script, which probably won't do what you want.
Everyone just forgot disown. So here is a summary:
& puts the job in the background: that is, it makes it block on attempting to read input, and makes the shell not wait for its completion.
disown removes the process from the shell's job control, but it still leaves it connected to the terminal.
One of the results is that the shell won't send it a SIGHUP (if the shell receives a SIGHUP, it also sends a SIGHUP to the process, which normally causes the process to terminate).
And obviously, it can only be applied to background jobs (because you cannot enter it when a foreground job is running).
nohup disconnects the process from the terminal, redirects its output to nohup.out and shields it from SIGHUP.
The process won't receive any sent SIGHUP.
It's completely independent of job control and could in principle also be used for foreground jobs (although that's not very useful).
It is usually used with & (as a background job).
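In practice the two are often combined like this sketch (the command and log file names are placeholders); the output is redirected explicitly because, as noted above, disown does not detach the process from the terminal:
# start in the background with output going somewhere safe, then drop the
# job from the shell's table so no SIGHUP is forwarded to it
some_long_command > some_long_command.log 2>&1 &
disown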
nohup cmd
doesn't hang up when you close the terminal. Output goes to nohup.out by default.
You can combine this with backgrounding,
nohup cmd &
and get rid of the output,
nohup cmd > /dev/null 2>&1 &
You can also disown a command: type cmd, press Ctrl-Z, then run bg and disown.
Alternatively, after you got the program running, you can hit Ctrl-Z which stops your program and then type
bg
which puts your last stopped program in the background. (Useful if you started something without '&' and still want it in the background without restarting it.)
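In transcript form (the job number and command are illustrative):
$ some_long_command
^Z
[1]+  Stopped                 some_long_command
$ bg %1
$ disown %1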
screen -m -d <command> starts the command in a detached session. You can use screen -r to attach to the started session. It is a wonderful tool, extremely useful also for remote sessions. Read more at man screen.
I can write shell scripts that trap SIGINT just fine, but I can't seem to trap SIGQUIT.
#!/bin/bash
function die {
echo "Dying on signal $1"
exit 0
}
trap 'die "SIGINT"' SIGINT
trap 'die "SIGQUIT"' SIGQUIT
while true; do
echo "sleeping..."
sleep 5
done
Executing this script and pressing CTRL-C has the desired effect, but pressing CTRL-\ (which, as I understand it, should trigger SIGQUIT) does nothing except print ^\ in the terminal. Why?
I have two running theories. The first is that the semantics of SIGINT and SIGQUIT are different such that SIGQUIT only gets sent to the child process sleep, while SIGINT gets sent to both the child process and the parent bash process. If this is the case, where is it documented?
My second theory is that bash not only ignores (i.e., has a no-op handler for) SIGQUIT by default (as the man page suggests), but does not allow it to be trapped at all. This theory overlaps with the first theory, since it could be the case that SIGQUIT is going to both parent and child, but the parent (bash) just can't trap it. If this is the case, is there any way to trap SIGQUIT in a bash script?... perhaps some shopt I can set?
Edit: this is on Ubuntu 10.10 in gnome-terminal 2.32.0 running bash 4.1.5, and yes ^\ is configured to issue SIGQUIT (as reported by stty -a and confirmed by issuing ^\ SIGQUITs to other programs like ping).
UPDATE:
I just discovered that the problem must be due somehow to gnome-terminal. If I run this script from a virtual console (i.e., ctrl-alt-f1 to get out of X), it traps SIGQUIT perfectly fine when I press ^\. Same bash and everything, so the only difference must be the terminal emulator. So now my question becomes: how can I configure gnome-terminal to behave like the virtual console in this respect? I diff'd the outputs of stty -a in the virtual console and in gnome-terminal, and while there are differences, nothing seems immediately relevant (e.g., they both have quit = ^\;).
UPDATE 2:
Another experiment. Simply execute $ sleep 60 in gnome-terminal; press ^\ and the signal goes uncaught. Now execute $ sleep 60 in the virtual console; press ^\ and the signal is caught -- the process prints Quit and exits. But now run $ ping google.com in gnome-terminal and press ^\ -- the signal is caught and handled as expected. So there is something weird about gnome-terminal such that SIGQUIT can be caught by some programs, but not by others, even if those other programs do catch it when invoked from the virtual console. Perhaps I should just upgrade my gnome-terminal.
I can only assume that this was some sort of bug in gnome-terminal 2.32.0; I have since upgraded to Ubuntu 11.04, with gnome-terminal 2.32.1 (and bash 4.2.8) and SIGQUIT is now trapped as expected.
I observe the same behavior on Fedora 19 with XFCE: according to ps s, bash in xfce4-terminal has SIGQUIT ignored; running yes >/dev/null & and then ps s shows that even a subprocess has SIGQUIT ignored. When I (from the same terminal) run ssh localhost, the shell in the ssh session also has SIGQUIT ignored, but yes >/dev/null & does not.
Given the above comment which mentions gnome-terminal, I would guess the bug is in the common part of the two terminals: the vte library. Mine is vte-0.28.2-9.fc19.x86_64.
For testing purposes, I have this shell script:
#!/bin/bash
echo $$
find / >/dev/null 2>&1
Running this from an interactive terminal, Ctrl+C will terminate both the script's bash and the find command.
$ ./test-k.sh
13227
<Ctrl+C>
$ ps -ef |grep find
$
Running it in the background and killing only the shell will orphan the commands running in the script.
$ ./test-k.sh &
[1] 13231
13231
$ kill 13231
$ ps -ef |grep find
nos 13232 1 3 17:09 pts/5 00:00:00 find /
$
I want this shell script to terminate all of its child processes when it exits, regardless of how it is called. It will eventually be started from Python and Java applications, and some form of cleanup is needed when the script exits. Are there any options I should look into, or any way to rewrite the script so it cleans up after itself on exit?
I would do something like this:
#!/bin/bash
trap : SIGTERM SIGINT
echo $$
find / >/dev/null 2>&1 &
FIND_PID=$!
wait $FIND_PID
if [[ $? -gt 128 ]]
then
    kill $FIND_PID
fi
Some explanation is in order, I guess. Right out of the gate, we need to change some of the default signal handling. We use : (a no-op command) as the trap action rather than an empty string, because passing an empty string causes the shell to ignore the signal instead of doing something about it (the opposite of what we want).
Then, the find command is run in the background (from the script's perspective) and we call the wait builtin for it to finish. Since we gave a real command to trap above, when a signal is handled, wait will exit with a status greater than 128. If the process waited for completes, wait will return the exit status of that process.
Last, if the wait returns that error status, we want to kill the child process. Luckily we saved its PID. The advantage of this approach is that you can log some error message or otherwise identify that a signal caused the script to exit.
As others have mentioned, putting kill -- -$$ as your argument to trap is another option if you don't care about leaving any information around post-exit.
For trap to work the way you want, you do need to pair it up with wait - the bash man page says "If bash is waiting for a command to complete and receives a signal for which a trap has been set, the trap will not be executed until the command completes." wait is the way around this hiccup.
You can extend it to more child processes if you want, as well. I didn't really exhaustively test this one out, but it seems to work here.
$ ./test-k.sh &
[1] 12810
12810
$ kill 12810
$ ps -ef | grep find
$
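As mentioned, the same pattern extends to more than one child process: start each in the background, remember the PIDs, wait on all of them, and kill whatever is still running if wait was interrupted by a trapped signal. A rough, untested sketch (the second find is just a stand-in for other work):
#!/bin/bash
trap : SIGTERM SIGINT
find / >/dev/null 2>&1 &
PID1=$!
find /usr >/dev/null 2>&1 &
PID2=$!
wait $PID1 $PID2
if [[ $? -gt 128 ]]
then
    # a trapped signal interrupted wait: stop any children still running
    kill $PID1 $PID2 2>/dev/null
fi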
Was looking for an elegant solution to this issue and found the following solution elsewhere.
trap 'kill -HUP 0' EXIT
My own man pages say nothing about what 0 means, but from digging around, it seems to mean the current process group. Since the script gets its own process group, this ends up sending SIGHUP to all of the script's children, foreground and background.
Send a signal to the group.
So instead of kill 13231 do:
kill -- -13231
If you're starting from Python, then have a look at http://www.pixelbeat.org/libs/subProcess.py, which shows how to mimic the shell in starting and killing a group.
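For reference, the same group-kill can also be done from the shell; if you are not sure the script's PID is also its process group ID, look the group up first (13231 is the example PID from above):
# look up the process group of the running script and signal the whole group
PGID=$(ps -o pgid= -p 13231 | tr -d ' ')
kill -- -"$PGID"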
Patrick's answer almost did the trick, but it doesn't work if the parent process of your current shell is in the same group (it kills the parent too).
I found this to be better:
trap 'pkill -P $$' EXIT
See here for more info.
Just add a line like this to your script:
trap "kill $$" SIGINT
You might need to change 'SIGINT' to 'INT' on your setup, but this will basically kill your process and all child processes when you hit Ctrl-C.
The thing you would need to do is trap the kill signal, kill the find command and exit.