tmux session closes when ssh terminal closes - bash

What I am trying to do:
I am trying to execute a couple of bash commands on a remote machine over ssh, and I want those commands to finish executing even if the ssh session is closed in the middle of execution.
What I have done so far:
I am using PuTTY to connect over ssh (one reason for using PuTTY is that the local machine is Windows and the remote machine is macOS, and I need some way to initiate ssh through a Python command). I am passing a command.txt file that contains all the commands I want to execute.
putty -ssh start-ts@ip.000.001.101 -m command.txt (not a real IP)
command.txt looks like this:
export PATH=$PATH:/usr/local/bin
echo $PATH; sleep 1
tmux -c 'queue.sh'
sleep 100
After connecting over ssh, I am using tmux to make sure that my script/commands continue running on the remote machine even after the ssh session is closed from the other machine.
But the problem is:
Even after using tmux, the processes started by queue.sh terminate as soon as I close the ssh session.
I have also tried
export PATH=$PATH:/usr/local/bin
echo $PATH; sleep 1
tmux
queue.sh
sleep 100
does the same thing.
What I also tried:
If I just pass the following commands over ssh (in command.txt)
export PATH=$PATH:/usr/local/bin
echo $PATH; sleep 1
tmux
and then manually type queue.sh in the tmux terminal, then in that case I can close the ssh terminal and the remote machine continues executing the processes.
Any suggestions?
I want to be able to pass everything through script files while keeping the processes running on the remote machine (macOS) even after closing the ssh session on the other machine.
Thanks

The -c option doesn't actually start a new session; it's for compatibility with other shells if you use tmux as a login shell. In order to run queue.sh in a tmux session, try starting tmux with
tmux new-session queue.sh
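If the commands are pushed non-interactively (as with the -m command.txt approach here), tmux may refuse to create an attached session for lack of a terminal; starting the session detached is one way around that. A minimal sketch of command.txt under that assumption (the session name "queue" is only illustrative):
export PATH=$PATH:/usr/local/bin
# start queue.sh inside a detached tmux session so it keeps running
# after the ssh session that launched it goes away
tmux new-session -d -s queue 'queue.sh'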

Related

Keep ssh tunnel open after running script

I have a device with intermittent connectivity that "calls home" to open a reverse tunnel to allow me to SSH into it. This works very reliably started by systemd and automatically restarted on any exit:
ssh -R 1234:localhost:22 -N tunnel-user@reliable.host
Now however I want to run a script on the reliable-host on connect. This is easy enough with a simple change to the ssh ... cli: swap -N for a script name on the remote reliable-host:
ssh -R 1234:localhost:22 tunnel-user@reliable.host ./on-connect.sh
The problem is that once the script exits, it closes the tunnels if they're not in use.
One workaround I've found is to put a long sleep at the end of my script. This however leaves sleep processes around after the connection drops since sleep doesn't respond to SIGHUP. I could put a shorter sleep in an infinite loop (I think) but that feels hacky.
~/on-connect.sh
#!/bin/bash
# Do stuff...
sleep infinity
How can I get ssh to behave like -N has been used so that it stays connected with no activity but also runs a script on initial connection? Ideally without needing to have a special sleep (or equivalent) in the remote script but, if not possible, proper cleanup on the reliable-host when the connection drops.
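One way to make the "shorter sleep in an infinite loop" idea feel less hacky is to have the loop watch the parent process and exit on its own once the connection is gone. A minimal sketch of that variant of ~/on-connect.sh, assuming the usual case where the remote command's parent is the per-connection sshd process:
#!/bin/bash
# Do stuff...
# poll the parent process; when the ssh connection drops, the parent sshd
# process goes away and the loop (and this script) ends by itself
while kill -0 "$PPID" 2>/dev/null; do
    sleep 30
done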

mosh: sources .bashrc twice with different PIDs

I use tmux on my build server. Recently I wrote a small .bashrc script that automates attaching to a tmux session if one exists. The script looks as follows:
# Automate tmux Startup
if [ -z "$TMUX" ]; then
    # we're not in a tmux session
    if [ "$(ps -o comm= -p $PPID)" == "sshd" ]; then
        # even VNC can have $SSH_TTY and $SSH_CONNECTION set, so we can't tell
        # from those alone whether we want to attach to tmux during ssh; we
        # need to check whether the parent process is sshd, see
        # http://unix.stackexchange.com/questions/145780/linux-ssh-connection-is-set-even-without-sshing-to-the-server
        # Only attach to tmux if it's me
        WHOAMI=$(whoami)
        if tmux has-session -t $WHOAMI 2>/dev/null; then
            tmux -2 attach-session -t $WHOAMI
        else
            echo "Start tmux with username as session name 'tmux new -s $WHOAMI' "
        fi
    fi # parent process check
else
    echo "Inside tmux"
fi
The problem is that whenever I ssh in using mosh, it just hangs inside the tmux window. I found that if I remove this script and then use mosh to ssh and attach to tmux manually, I don't hit this issue. The issue only happens when I place the above script in .bashrc.
My suspicion was that mosh waits indefinitely for .bashrc to complete and, in the meantime, does not pass mouse and keystroke input through to tmux. I confirmed this by killing the tmux session from another terminal: mosh recovered and tried to execute my previously buffered keystrokes.
The strange thing is how mosh managed to get past the ps -o comm= -p $PPID == "sshd" check, because under mosh the process name of the shell is bash and the parent process of the shell is mosh-server, not sshd. Further investigation revealed that mosh executes .bashrc twice: once under sshd and once under mosh-server. This is reproducible by putting ps -o comm= -p $PPID -p $$ >> moshbash in .bashrc. My tmux attach was happening under sshd and hanging mosh forever.
A simple workaround I found was to do
mosh user@server -- tmux attach -t `whoami`
I can do a similar thing with plain ssh as well, firing the command from the client side and eliminating the .bashrc script altogether, but I do not wish to spill my server-side automation over to the client side.
Actually mosh does not seem to be wrong: it sources .bashrc only once per PID. I think it is bad design to have .bashrc kick off a blocking tmux session, and since tmux also needs a terminal, we can't start it as a background process either, so & won't work.
Is there any other way around this problem? I think if we could distinguish between a mosh client setting up sshd and an ssh client setting up sshd, that information could be used.
.bashrc executes every time an interactive non-login bash instance starts, so use .bash_profile instead; it runs only once, at login over ssh. If the script (or a process it starts) spawns another bash, .bashrc would be sourced again each time.
See Bash Startup Files for more info and other startup files.
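For this particular use case, a minimal sketch of the .bash_profile route, reusing the sshd check from the script above (assumes tmux 1.8 or later, where new-session -A attaches to the named session if it exists and creates it otherwise):
# ~/.bash_profile (runs once per login shell, not for every interactive bash)
if [ -z "$TMUX" ] && [ "$(ps -o comm= -p $PPID)" = "sshd" ]; then
    tmux -2 new-session -A -s "$(whoami)"
fi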

bash: parallel command execution over ssh [closed]

My problem is as follows: I'm writing a script whose purpose is to run certain scripts on different servers in parallel. I.e., I want to log into a remote server, enter the password (this is not negotiable because boss), start the script, close the connection while the script is still running, connect to the next server and do the whole dance again. How do I do it?
After logging in you can use the screen utility to start a new terminal session and later detach from it, for example:
[user@local]$ ssh machine
[user@machine]$ screen -S my_session
# you are now switched to new terminal session named "my_session"
# now you can start long operation, that you want to keep in background
[user@machine]$ yes
# press "Ctrl-a d" to detach, you will go back to original session
[detached from 11271.my_session]
# now you can leave ssh (your session with "yes" running will be kept in background)
# to list existing screen sessions run:
[user@machine]$ screen -list
There is a screen on:
11271.my_session (Detached)
1 Socket in /var/run/screen/S-user.
# to re-attach use -r
[user@machine]$ screen -r my_session
# you will be back to session with "yes" still running (press Ctrl-C to stop it)
Once you understand how screen works, you can try scripting it; this is how you start a screen session in detached state running my_command:
$ screen -d -m my_command
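Combining the two steps, a sketch of driving the same detached-session trick over ssh in one shot (user, host and script path are placeholders):
# the local ssh command returns immediately; the script keeps running
# inside the detached screen session on the remote machine
ssh user@host "screen -d -m -S my_session /path/to/script"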
Assuming bash is the target shell on all of the remote machines, you could do for one machine:
ssh user@host /path/to/script '&>/dev/null' '</dev/null' '&' disown
For multiple machines you could then script this like:
hosts='user1@host1 user2@host2'
for host in $hosts
do
    ssh $host /path/to/script '&>/dev/null' '</dev/null' '&' disown
done
This should just have you typing in passwords.
Alternatively, if you can use the same password for multiple machines, I would recommend looking into ClusterSSH. With this you can log into multiple machines simultaneously and type commands to be sent to all of them in a single window, or individually in each xterm window. This saves you typing the same password repeatedly. Once logged in you could run the command as above (without the ssh user@host part) and exit.
Update
A couple of extra thoughts here. First, it probably isn't a great idea to discard the script output completely, since you will never know what happened if something goes wrong. You could just put it in a file to look at later by replacing '&>/dev/null' with '&>filename'. Another approach would be to have the remote machine email it to you (provided it is properly set up to do this):
host=user@host
ssh $host \(/path/to/script '2>&1' \| mail -s "$host output" me@me.com\) \
    '&>/dev/null' '</dev/null' '&' disown
Second, if the script is on the local host, you can copy it across and execute it in one command (this assumes a shell script; if not, just replace sh with the correct interpreter):
</path/to/script ssh user@host \
    cat \>script \; sh ./script '&>/dev/null' '</dev/null' '&' disown
expect can automate this pretty easily:
#!/usr/bin/expect -f
set hosts {your list of hosts goes here ...}
foreach host $hosts {
    spawn ssh -t user@$host screen ./script.sh
    expect "assword:"
    send -- "secret\r"
    sleep 1
    # Ctrl-a d detaches the screen session so it keeps running after we leave
    send -- "\001d"
    expect eof
}

Spawn subshell for SSH and continue with program flow

I'm trying to write a shell script that automates certain startup tasks based on my location (home/campusA/campusB). I go to university and take classes at two different campuses (hence campusA/campusB). My location is determined by which wireless network I'm connected to. For the purposes of this script, we can assume that I will be connected to one of these networks when the script is called, and my script knows which one I'm connected to based on a call to iwconfig.
This is what I want it to do:
cat file1 > file2   # always do this, regardless of where I am
if I'm at home:
    start tweetdeck, thunderbird, skype
else if I'm at campusA:
    activate the login script   # I need to login on a webform before I get internet access.
                                # I have written a script to automate this.
                                # Wait for this script to finish before doing anything else
    myProg2 &                   # I want myProg2 running in the background until I shut down my computer.
else if I'm at campusB:
    ssh username@domain         # this is the problematic line
    myProg2 &                   # I want myProg2 running in the background until I shut down my computer.
    start tweetdeck, thunderbird
close the terminal with the "exit" command
The problem is that campusB's wireless network is behind a firewall, which grants me internet access ONLY after I successfully ssh as username@domain. After a successful ssh, I need to keep the terminal window active in order to keep the internet access. If I close the terminal window, I lose internet access (this is bad).
When I try doing just ssh username@domain, the script stops because I don't exit the ssh command. I can't ^C out of it, which means that the rest of the script is never executed. I have the same problem if I just close the terminal window in an attempt to kill the ssh session.
Some googling brought me to subshells, which I'm either using wrong or can't use to solve my problem. So how should I go about solving this? I'd appreciate any help; I've been at this for a while now and am unable to find anything helpful. If it makes a difference, I'd rather not store my ssh password in the script.
Further, backgrounding the ssh call with an ampersand (ssh username@domain &) doesn't seem to do any good (can anyone explain why?).
Thank you in advance
EDIT
I must clarify that the ssh connection has to be active in order for me to have internet access. Thus, when I close the terminal window, I need the ssh connection to still be active.
I had a script that looped over 6 servers, calling each via ssh in the background. In one part of the script there was a misbehaving vendor application that didn't 'let go' of the connection properly (other parts of the script using ssh in the background worked fine).
I found that using ssh -t -t cured the problem. Maybe this can help you too.
(a teammate found this on the web, and we had spent so much time, I never went back to read the article that suggested this. The man page on our system gave no hint that such a thing was possible)
Hope this helps.
You may want to try to double background myProg2 to detach it from the tty:
# cf. "Wizard Boot Camp, Part Six: Daemons & Subshells",
# http://www.linux-mag.com/id/5981
(myProg2 &) &
Another option may be to use the daemon tool from the libslack package:
http://ingvar.blog.linpro.no/2009/05/18/todays-sysadmin-tip-using-libslack-daemon-to-daemonize-a-script/
Having an ssh with a pseudo tty in a background shell
In addition to @shellter's answer, I would like to make a few points more precise:
Where @shellter said:
The man page on our system gave no hint that such a thing was possible
On my system (Debian 7 GNU/Linux), if I hit:
man -Pcol\ -b ssh| grep -A3 '^ *-t '
I could read:
-t      Force pseudo-tty allocation. This can be used to execute arbitrary
        screen-based programs on a remote machine, which can be very useful,
        e.g. when implementing menu services. Multiple -t options force tty
        allocation, even if ssh has no local tty.
Yes: Multiple -t options force tty allocation, even if ssh has no local tty.
This means: if you remotely run a tool that requires access to a pseudo terminal (a pty such as /dev/pts/0), you can run it by using the -t switch.
But this works only if ssh is run from a shell console (i.e. it has its own pty). If you plan to run it from a shell session without a console, such as a background script, you can use multiple -t options to force pseudo tty allocation from ssh.
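As an illustration of that last point, a sketch of forcing a remote pty from a context that has no local tty of its own (the host name and the screen-based tool are placeholders):
# run from cron or another script with no controlling terminal: a single -t
# would be silently downgraded, two of them still allocate a remote pty
ssh -t -t somewhere './screen-based-tool' < /dev/null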
Multiple ssh shells over one ssh connection
In addition to the answers from @tommy and @geekosaur, I would like to make a few points more precise:
@tommy points to a very interesting feature of ssh. I'm not sure it has a lot to do with the answer, but when speaking about long-lived connections, this feature has to be clearly understood.
Once a connection is established, ssh can (and knows how to) use it to drive a lot of things over this one connection:
-L lets you forward local TCP connections through the remote host. (The full syntax is -L localip:localport:distip:distport, where localip can be specified so that other hosts on your local network can reach the same bind, and distip can be any host reachable from the distant network, not only localhost.) Sample: -L 192.168.1.31:8443:google.com:443 lets any host on the local network reach Google through your host: https://192.168.1.31:8443
-R does the same in the reverse direction.
-M tells ssh to open a local unix socket to which subsequent ssh sessions can bind. Simply open two terminal windows. First, in both windows, hit ssh somewhere, then hit netstat -tan | grep :22 or netstat -tan | grep 192.168.1.31:22 (assuming 192.168.1.31 is your own host's IP).
Then, for comparison, close all your ssh sessions; in the first terminal hit ssh -M somewhere and in the second simply ssh somewhere. You may see in the second terminal:
$ ssh somewhere
+ ssh somewhere
Last login: Mon Feb 3 08:58:01 2014 from elsewhere
If you now hit netstat -tan | grep 192.168.1.31:22 (in either of the two open ssh sessions) you will see that there is only one TCP connection.
This kind of feature could be used in combination with -L and maybe some sleep 86399...
To work around a TCP-killer router that closes every TCP connection inactive for more than 120 seconds, I run:
ssh -M somewhere 'while :;do uptime;sleep 60;done'
This ensures the connection stays up even if I don't hit a key for more than two minutes.
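A sketch of the combination hinted at just above: one master connection that both forwards a port and keeps itself busy, so the router never sees it as idle (the addresses are the same illustrative ones as before):
ssh -M -L 192.168.1.31:8443:google.com:443 somewhere 'while :; do uptime; sleep 60; done'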
Here are a few thoughts that might help.
Sub-shells
Sub-shells fork new processes, but they still block the calling shell until they finish. If you want a sub-shell to do the work for you in the background, you'll need to append a & to the line.
(ssh username@domain) &
But this doesn't look like a compelling reason to use a sub-shell. If you had a number of commands you wanted to execute in order relative to each other, yet in parallel with the calling shell, then maybe it would be worth it. For example...
(dothis.sh; thenthis.sh; andthislastthingtoo.sh) &
Forking
I'm not sure why & isn't working for you, but it may be worth looking into nohup as well. This makes the command "immune" to hang-up signals.
nohup ssh username@domain (try with and without the & at the end)
Passwords
Not storing passwords in the script is essential for any ssh automation. You can accomplish that using public key cryptography, which is an inherent feature of ssh. I won't go into the details here because there are a number of great resources all across the interwebs on setting this up. I strongly suggest investigating this further.
HOWTO: set up ssh keys - Paul Keck, 2001
SSH Keys - archlinux.org
SSH with authentication key instead of password - Debian Administration
Secure Shell - Wikipedia, the free encyclopedia
If you do go this route, I also suggest running ssh in "batch mode" which will disable password querying and will automatically disconnect from the server if it becomes unresponsive after 5 minutes.
ssh -o 'BatchMode=yes' username@domain
Persistence
Then if you want to persist the connection, run some silly loop in bash! :)
ssh -o 'BatchMode=yes' username@domain "while (( 1 == 1 )); do sleep 60; done"
The problem with & is that ssh loses access to its standard input (the terminal), so when it goes to read something to send to the other side it either gets an error and exits, or is killed by the system with SIGTTIN which will implicitly suspend it. The -n and -f options are used to deal with this: -n tells it not to use standard input, -f tells it to set up any necessary tunnels etc., then close the terminal stream.
So the best way to do this is probably to do
ssh -L 9999:localhost:9999 -f host & # for some random unused port
and then manually kill the ssh before logout. Alternately,
ssh -L 9999:localhost:9999 -n host 'while :; do sleep 86400; done' </dev/null &
(The redirection is to make sure the SIGTTIN doesn't happen anyway.)
While you're at it, you may want to save the process ID and shut it down from your .logout/.bash_logout:
ssh -L 9999:localhost:9999 -n host 'while :; do sleep 86400; done' < /dev/null & echo $! > ~/.ssh_pid; chmod 0600 ~/.ssh_pid
and in .bash_logout:
if test -f ~/.ssh_pid; then
    set -- $(sed -n 's/^\([0-9][0-9]*\)$/\1/p' ~/.ssh_pid)
    if [ $# = 1 ]; then
        kill $1 >/dev/null 2>&1
    fi
    rm ~/.ssh_pid
fi
The extra code there attempts to avoid someone sabotaging your ~/.ssh_pid, because I'm a professional paranoid.
(Code untested and may have typos)
It's been a while since I've used ssh, and I can't test it right now, but have you tried the -f switch?
ssh -f username@domain
The man page says it backgrounds ssh. Not sure why & wouldn't work, but I guess it's interpreting it as a command to be run on the remote machine.
Maybe screen + ssh would fit the bill as well?
Something like:
screen -d -m -S sessionName cmd
screen -d -m -S sessionName cmd &
# reconnect with
screen -r sessionName
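Putting a few of these pieces together for the campusB branch of the original script, a sketch assuming key-based authentication is already set up (see the Passwords section above):
# hold the firewall session open in the background; -N runs no remote command,
# -f backgrounds ssh after authentication so the script can continue
ssh -o BatchMode=yes -f -N username@domain
myProg2 &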

How to make ssh to kill remote process when I interrupt ssh itself?

In a bash script I execute a command on a remote machine through ssh. If the user breaks the script by pressing Ctrl+C, it stops only the script, not even the ssh client. Moreover, even if I kill the ssh client, the remote command keeps running...
How can I make bash kill the local ssh client and the remote command invocation on Ctrl+C?
A simple script:
#!/bin/bash
ssh -n -x root@db-host 'mysqldump db' -r file.sql
Eventually I found a solution like this:
#!/bin/bash
ssh -t -x root@db-host 'mysqldump db' -r file.sql
So - I use '-t' instead of '-n'.
Removing '-n', or using different user than root does not help.
When your ssh session ends, your shell will get a SIGHUP (hang-up signal). You need to make sure it sends that on to all processes started from it. For bash, try shopt -s huponexit; your_command. That may not work, because the man page says huponexit only works for interactive shells.
I remember running into this with users running jobs on my cluster, and whether they had to use nohup or not (to get the opposite behaviour of what you want), but I can't find anything in the bash man page about whether child processes ignore SIGHUP by default. Hopefully huponexit will do the trick. (You could put that shopt in your .bashrc instead of on the command line, I think.)
Your ssh -t should work, though, since when the connection closes, reads from the terminal will get EOF or an error, and that makes most programs exit.
Do you know what the options you're passing to ssh do? I'm guessing not. The -n option redirects input from /dev/null, so the process you're running on the remote host probably isn't seeing SIGINT from Ctrl-C.
Now, let's talk about how bad an idea it is to allow remote root logins:
It's a really, really bad idea. Have a look at HOWTO: set up ssh keys for some suggestions how to securely manage remote process execution over ssh. If you need to run something with privileges remotely you'll probably want a solution that involves a ssh public key with embedded command and a script that runs as root courtesy of sudo.
trap "some_command" SIGINT
will execute some_command locally when you press Ctrl+C. help trap will tell you about its other options.
Regarding the ssh issue, I don't know much about ssh. Maybe you can make it call ssh -n -x root@db-host 'killall mysqldump' instead of some_command to kill the remote command?
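A sketch of how that suggestion could fit the original script, running the cleanup command when Ctrl+C arrives (it reuses the root@db-host command from the question; 130 is the conventional exit status after SIGINT):
#!/bin/bash
cleanup() {
    # kill the remote dump, then exit
    ssh -n -x root@db-host 'killall mysqldump'
    exit 130
}
trap cleanup INT
ssh -n -x root@db-host 'mysqldump db' -r file.sql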
What if you don't want to require using "ssh -t" (for those as forgetful as I am)?
I stumbled upon looking at the parent PID, because Ctrl+C from the initiating session results in the ssh-launched process on the remote host exiting, although its child process continues. By way of example, here's my script that sits on the remote server.
#!/bin/bash
Answer=(Alive Dead)
Index=0
while [ ${Index} -eq 0 ]; do
    if ! kill -0 ${PPID} 2> /dev/null ; then Index=1; fi
    echo "Parent PID ${PPID} is ${Answer[$Index]} at $(date +%Y%m%d%H%M%S%Z)" > ~/NowTime.txt
    sleep 1
done
I then invoke it with "ssh remote_server ./test_script.sh".
"watch cat ~/NowTime.txt" on the remote server shows the timestamp in the file increasing, declaring that the parent process is alive; once I hit Ctrl+C in the launching process, the script on the remote server notes that its parent process has died, and the script exits.
