SSH doesn't exit from command line - bash

I ssh to another server and run a shell script like this: nohup ./script.sh 1>/dev/null 2>&1 &
Then I type exit to exit from the server. However, it just hangs. The server is Solaris.
How can I exit properly without it hanging?
Thanks.

I assume that this script is a long-running one. In that case you need to detach the process from the terminal that you wish to close when you terminate your ssh session.
Actually, you have already done most of the work by redirecting both stdout and stderr to /dev/null; however, you didn't do that for stdin.
I used the test case of:
ssh localhost
nohup sleep 10m &> /dev/null &
^D
# hangs
While
ssh localhost
nohup sleep 10m &> /dev/null < /dev/null &
^D
# exits
I second the recommendation to use the excellent GNU screen, which will do this service for you, among others.
Oh, and have you considered running the script directly rather than within a shell? I.e.:
ssh user@host script.sh
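Putting the pieces together for your case, a minimal sketch that detaches all three standard streams (sticking to portable redirections rather than bash's &> shortcut, since the login shell on Solaris may not be bash):
nohup ./script.sh 1>/dev/null 2>&1 </dev/null &
exit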

If you're trying to leave a command running remotely after you close your SSH link, I strongly recommend you use screen and learn to detach the screen. That's much better than leaving background processes around; it also lets you reconnect and see what the process is up to.
Since you haven't provided us with script.sh, I don't think we can know for sure why the command is hanging.

You can use the escape sequence:
~.
Typed at the start of a line (i.e. right after pressing Enter), this closes the ssh session even when it is hanging. (Typing ~? at the same position lists all of ssh's escape sequences.)

sh -c ./script.sh &

Related

Run multiple commands simultaneously in bash in one line

I am looking for an alternative to something like ssh user@node1 uptime && ssh user@node2 uptime, where both of the SSH commands are run simultaneously. As they both block until the command returns, && and ; between them don't work.
My goal is to run infinite while loops on both nodes via SSH. So the first one would never return, and the second one would never be run. I would then like to save the output to a log file after terminating the loops with Ctrl+C, and read it via Python.
Is there an easy solution to this?
Thanks in advance!
Capturing SSH output
On the one hand, you need to capture the ssh output/error and store it in a file so that you can process it afterwards with Python. To this purpose you can:
1- Store output and error directly in a file (note the order: redirect stdout to the file first, then duplicate stderr onto it)
ssh user@node cmd > session.log 2>&1
2- Show output/error in the console while storing them in a file (I would recommend this one)
ssh user@node cmd 2>&1 | tee session.log
See the tee man page for further information about the tee command.
Running commands in parallel
On the other hand, you want to run both commands in parallel and block the current bash process. You can achieve this by:
1- Blocking the current bash process until its children are done (a trailing & already terminates a command, so no ; is needed between them):
cmd1 & cmd2 & wait
See help wait for further information about the wait builtin.
2- Spawning the child processes and freeing the current bash process. Note that the processes will be kept alive even though the main process ends.
nohup cmd1 & nohup cmd2 &
The whole thing
I would recommend combining both approaches, using tee (so you can still see the ssh output on your terminal) and blocking the current process until everything is done (so that when you kill the main process all the child processes are killed too).
ssh user@node1 uptime 2>&1 | tee session1.log & ssh user@node2 uptime 2>&1 | tee session2.log & wait
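Applied to your stated goal (an infinite loop on each node, stopped with Ctrl+C, logs read later from Python), a sketch could look like this; the loop body is an assumption:
#!/bin/bash
# run this as a script so that Ctrl+C reaches both ssh processes
ssh user@node1 'while true; do uptime; sleep 1; done' 2>&1 | tee session1.log &
ssh user@node2 'while true; do uptime; sleep 1; done' 2>&1 | tee session2.log &
wait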

Shell Script Start a Server and return

I have a shell script that starts a server. I ssh into my server and run the shell script. As soon as it starts, it logs everything to the console and the console does not return. The problem starts when I close my machine: the ssh connection is disconnected, and the server that I started is shut down. I guess I need to start the server and return from the shell. Here is what I have so far:
#!/bin/bash
java -Xmx1G -Dhttp.port=8080 -Dconfig.file=MyProject/conf/application.conf -cp ".:MyProject/lib/*" play.core.server.NettyServer .
exit 0
Any suggestions on how to return after calling this shell script?
Just backgrounding your script after you ssh to the server (./myscript &) will not daemonize it. You must disconnect stdin, stdout, and stderr, and make it ignore the hangup signal (SIGHUP).
nohup ./myscript 0<&- &>/dev/null &
will do the job. Or, to capture all output:
nohup ./myscript 0<&- &> my.admin.log.file &
To avoid the script being terminated when the ssh session closes, use nohup ("no hangup") with output redirected to a log file:
nohup bash /path/to/startScript.sh > script.log 2>&1 &
You can redirect stdout and stderr to files, background and disown the process (or nohup it) and then exit the script.
However, the correct way to do this is to use some kind of process manager daemon like upstart.
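A sketch of the redirect/background/disown approach from the previous paragraph, reusing the java command from the question (the log file name is an assumption):
#!/bin/bash
# redirect all output, detach from the terminal, and return immediately
nohup java -Xmx1G -Dhttp.port=8080 -Dconfig.file=MyProject/conf/application.conf -cp ".:MyProject/lib/*" play.core.server.NettyServer . > server.log 2>&1 < /dev/null &
disown
exit 0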

Bash script that will survive disconnection, but not user break

I want to write a bash script that will continue to run if the user is disconnected, but can be aborted if the user presses Ctrl+C.
I can solve the first part of it like this:
#!/bin/bash
cmd='
#commands here, avoiding single quotes...
'
nohup bash -c "$cmd" &
tail -f nohup.out
But pressing Ctrl+C obviously just kills the tail process, not the main body. Can I have both? Maybe using Screen?
I want to write a bash script that will continue to run if the user is disconnected, but can be aborted if the user presses Ctrl+C.
I think this is exactly the answer to the question you formulated; here is one without screen:
#!/bin/bash
cmd=`cat <<EOF
# commands here
EOF
`
nohup bash -c "$cmd" &
# store the process id of the nohup process in a variable
CHPID=$!
# whenever ctrl-c is pressed, kill the nohup process before exiting
trap "kill -9 $CHPID" INT
tail -f nohup.out
Note however that nohup is not reliable. When the invoking user logs out, chances are that nohup also quits immediately. In that case disown works better.
bash -c "$cmd" &
CHPID=$!
disown
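A fuller sketch of the disown variant, keeping the trap and tail from above; note that without nohup no nohup.out is created, so the output redirection is explicit and the log file name is an assumption:
bash -c "$cmd" > out.log 2>&1 &
CHPID=$!
disown
trap "kill -9 $CHPID" INT
tail -f out.log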
This is probably the simplest form using screen:
screen -S SOMENAME script.sh
Then, if you get disconnected, on reconnection simply run:
screen -r SOMENAME
Ctrl+C should continue to work as expected. (While attached, you can also detach manually with Ctrl+A followed by d.)
Fact 1: When a terminal (xterm for example) gets closed, the shell is supposed to send a SIGHUP ("hangup") to any processes running in it. This harkens back to the days of analog modems, when a program needed to clean up after itself if mom happened to pick up the phone while you were online. The signal could be trapped, so that a special function could do the cleanup (close files, remove temporary junk, etc). The concept of "losing your connection" still exists even though we use sockets and SSH tunnels instead of analog modems. (Concepts don't change; all that changes is the technology we use to implement them.)
Fact 2: The effect of Ctrl-C depends on your terminal settings. Normally, it will send a SIGINT, but you can check by running stty -a in your shell and looking for "intr".
You can use these facts to your advantage, using bash's trap command. For example try running this in a window, then press Ctrl-C and check the contents of /tmp/trapped. Then run it again, close the window, and again check the contents of /tmp/trapped:
#!/bin/bash
trap "echo 'one' > /tmp/trapped" 1
trap "echo 'two' > /tmp/trapped" 2
echo "Waiting..."
sleep 300000
For information on signals, you should be able to man signal (FreeBSD or OSX) or man 7 signal (Linux).
(For bonus points: See how I numbered my facts? Do you understand why?)
So ... to your question. To "survive" disconnection, you want to specify behaviour that will be run when your script traps SIGHUP.
(Bonus question #2: Now do you understand where nohup gets its name?)
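A minimal sketch of that idea, assuming the long-running work is a single command:
#!/bin/bash
trap '' HUP            # ignore hangups: the script survives disconnection
trap 'exit 130' INT    # but Ctrl+C still aborts it
echo "Working..."
sleep 300000           # stand-in for the real long-running work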

Script: SSH command execute and leave shell open, pipe output to file

I would like to execute an ssh command and pipe the output to a file.
In general I would do:
ssh user@ip "command" >> /myfile
The problem is that ssh closes the connection once the command is executed; however, my command sends its output to the ssh channel via another program in the background, so I am not receiving that output.
How can I get ssh to leave my shell open?
cheers
sven
My understanding is that command starts some background process that will perhaps write some output to the terminal later. If command terminates before that happens, the ssh session is terminated and there is no terminal left for the background program to write to.
One simple and naive solution is to just sleep long enough
ssh user@ip "command; sleep 30m" >> /myfile
A better solution than sleep would be to wait for the background process(es) to finish in some more intelligent way, but that is impossible to say without further details.
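One such possibility, if the background program's name is known (background_prog here is a hypothetical name), is to poll for it instead of sleeping a fixed time:
ssh user@ip 'command; while pgrep -u "$USER" background_prog > /dev/null; do sleep 5; done' >> /myfile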
Something more powerful than bash would be Python with Paramiko and PyExpect.

How to make ssh to kill remote process when I interrupt ssh itself?

In a bash script I execute a command on a remote machine through ssh. If the user breaks the script by pressing Ctrl+C, it only stops the script, not the ssh client. Moreover, even if I kill the ssh client, the remote command keeps running...
How can I make bash kill the local ssh client and the remote command invocation on Ctrl+C?
A simple script:
#!/bin/bash
ssh -n -x root@db-host 'mysqldump db' -r file.sql
Eventually I found a solution like this:
#!/bin/bash
ssh -t -x root@db-host 'mysqldump db' -r file.sql
So I use '-t' instead of '-n'.
Removing '-n', or using a different user than root, does not help.
When your ssh session ends, your shell will get a SIGHUP. (hang-up signal). You need to make sure it sends that on to all processes started from it. For bash, try shopt -s huponexit; your_command. That may not work, because the man page says huponexit only works for interactive shells.
I remember running into this with users running jobs on my cluster, and whether they had to use nohup or not (to get the opposite behaviour of what you want) but I can't find anything in the bash man page about whether child processes ignore SIGHUP by default. Hopefully huponexit will do the trick. (You could put that shopt in your .bashrc, instead of on the command line, I think.)
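A quick interactive check of that behaviour might look like this sketch:
ssh user@host
shopt -s huponexit
sleep 10m &
exit   # with huponexit set, the backgrounded sleep now receives SIGHUP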
Your ssh -t should work, though, since when the connection closes, reads from the terminal will get EOF or an error, and that makes most programs exit.
Do you know what the options you're passing to ssh do? I'm guessing not. The -n option redirects input from /dev/null, so the process you're running on the remote host probably isn't seeing SIGINT from Ctrl-C.
Now, let's talk about how bad an idea it is to allow remote root logins:
It's a really, really bad idea. Have a look at HOWTO: set up ssh keys for some suggestions how to securely manage remote process execution over ssh. If you need to run something with privileges remotely you'll probably want a solution that involves a ssh public key with embedded command and a script that runs as root courtesy of sudo.
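For reference, an embedded-command key entry of the kind described is a single line in the remote ~/.ssh/authorized_keys; the script path and key material below are placeholders:
command="/usr/local/bin/dump-db.sh",no-pty,no-port-forwarding ssh-rsa AAAA... backup-key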
trap "some_command" SIGINT
will execute some_command locally when you press Ctrl+C . help trap will tell you about its other options.
Regarding the ssh issue, I don't know much about ssh. Maybe you can make it call ssh -n -x root@db-host 'killall mysqldump' instead of some_command to kill the remote command?
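Combining that suggestion with the original script might look like this sketch (pkill in place of killall is just a choice; user and host come from the question):
#!/bin/bash
# on Ctrl+C, kill the remote dump before the script exits
trap 'ssh -n -x root@db-host "pkill mysqldump"' INT
ssh -n -x root@db-host 'mysqldump db' -r file.sql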
What if you don't want to require using "ssh -t" (for those as forgetful as I am)?
I stumbled upon the idea of checking the parent PID, because Ctrl+C in the initiating session results in the ssh-launched process on the remote host exiting, although its child processes continue. By way of example, here's my script that lives on the remote server.
#!/bin/bash
Answer=(Alive Dead)
Index=0
while [ ${Index} -eq 0 ]; do
    if ! kill -0 ${PPID} 2> /dev/null; then Index=1; fi
    echo "Parent PID ${PPID} is ${Answer[$Index]} at $(date +%Y%m%d%H%M%S%Z)" > ~/NowTime.txt
    sleep 1
done
I then invoke it with "ssh remote_server ./test_script.sh"
"watch cat ~/NowTime.txt" on the remote server shows the timestamp in the file increasing and declaring that the parent process is alive; once I hit CTRL/C in the launching process, the script on the remote server notes that its parent process has died, and the script exits.
