mosh: sources .bashrc twice with different PIDs - bash

I use tmux on my build server. Recently I wrote a small .bashrc snippet that automates attaching to a tmux session if one exists. The script looks as follows:
# Automate tmux startup
if [ -z "$TMUX" ]; then
    # We're not in a tmux session.
    if [ "$(ps -o comm= -p $PPID)" == "sshd" ]; then
        # Even a VNC session can have $SSH_TTY and $SSH_CONNECTION set, so we
        # can't rely on those to tell whether we are inside ssh; instead check
        # whether the parent process is sshd. See
        # http://unix.stackexchange.com/questions/145780/linux-ssh-connection-is-set-even-without-sshing-to-the-server
        # Only attach to tmux if it's me.
        WHOAMI=$(whoami)
        if tmux has-session -t "$WHOAMI" 2>/dev/null; then
            tmux -2 attach-session -t "$WHOAMI"
        else
            echo "Start tmux with username as session name: 'tmux new -s $WHOAMI'"
        fi
    fi # parent process check
else
    echo "Inside tmux"
fi
The problem is that whenever I connect using mosh, it just hangs inside the tmux window. If I remove this script and use mosh to connect and attach to tmux manually, I don't hit this issue. It only happens when the above script is in .bashrc.
My suspicion was that mosh waits indefinitely for .bashrc to complete and, in the meantime, does not pass mouse and keystroke input through to tmux. I confirmed this by killing the tmux session from another terminal: mosh recovered and tried to execute my previously buffered keystrokes.
The strange thing is how mosh managed to pass the ps -o comm= -p $PPID == "sshd" check, given that under mosh the shell's process name is bash and its parent's name is mosh-server, not sshd. Further investigation revealed that mosh sources .bashrc twice: once under sshd and once under mosh-server. This is reproducible by putting ps -o comm= -p $PPID -p $$ >> moshbash in .bashrc. My tmux attach was happening under sshd, hanging mosh forever.
A simple workaround I found was to do
mosh user@server -- tmux attach -t `whoami`
I could do a similar thing with ssh, firing the command from the client side and eliminating the .bashrc script altogether, but I do not wish to spill my server-side automation over to the client side.
Actually, mosh does not seem to be in the wrong: it sources the .bashrc file only once per PID. I think it is bad design to have .bashrc kick off a blocking tmux session; and since tmux also needs the terminal, we can't start it as a background process either, so & won't work.
Is there any other way around this problem? I think if we could distinguish between a mosh client setting up sshd and an ssh client setting up sshd, that information could be used.

.bashrc executes every time an interactive non-login bash starts, so use .bash_profile instead; it runs only once, at login over ssh. If the script, or a process it starts, spawns another bash, .bashrc runs again each time.
See Bash Startup Files for more info and other startup files.
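As a hedged sketch of that suggestion, the auto-attach logic could move into ~/.bash_profile and accept either sshd or mosh-server as the parent process name; the mosh-server pattern is an assumption based on the parent name observed in the question:
# ~/.bash_profile (sketch, untested): runs once per login shell, so mosh's
# second sourcing of .bashrc no longer triggers the attach.
if [ -z "$TMUX" ]; then
    case "$(ps -o comm= -p $PPID)" in
        sshd|mosh-server)
            # Only attach if a session named after me already exists.
            WHOAMI=$(whoami)
            if tmux has-session -t "$WHOAMI" 2>/dev/null; then
                tmux -2 attach-session -t "$WHOAMI"
            fi
            ;;
    esac
fi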

Related

Executing new tmux session in shell script with exec "$@"

I only have a shell (sh, not bash) login over an SSH connection to a remote device. The script ends with exec "$@", which, as I understand it, passes on all parameters as strings.
Now I want to execute that last command in a new tmux session, so that the program would continue in case of a connection loss.
This is for a robotics project, therefore I am assuming the connection to be partially interrupted.
How can I wrap this into a command that will run exec "$@" in tmux?
I already tried:
tmux new "exec \"$@\""
and
tmux new -s session send-keys "exec \"$@\""
as well as both variations with only "exec $@"
However, nothing seems to work in ShellCheck or in my draft .sh files. I would be very grateful for any help.
Try:
exec tmux new -- "$@"
If it doesn't work, you can try something like:
exec tmux new -- sleep 10000
And try to attach to make sure tmux is being run (for example that the script can find tmux).
sh -x may also be useful to see what is actually being run.
Also remember that if tmux is already started the environment inside may not be the same as in the script.
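If the plain invocation works, a named session may make reattaching after a connection loss easier; this is a hedged variation, with the session name robot purely illustrative:
exec tmux new-session -s robot -- "$@"
# After reconnecting to the device, resume with:
# tmux attach -t robot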

tmux session closes when ssh terminal closes

What I am trying to do:
I am trying to execute a couple of bash commands on a remote machine over ssh, and I want those commands to finish executing even if the ssh session is closed mid-execution.
What I have done so far :
I am using PuTTY to connect over ssh (one reason for using PuTTY is that one machine is Windows and the remote machine is macOS, and I need some way to initiate ssh from a Python command). I am passing a command.txt file that contains all the commands I want to execute.
putty -ssh start-ts@ip.000.001.101 -m command.txt (not the real IP)
command.txt looks like this:
export PATH=$PATH:/usr/local/bin
echo $PATH; sleep 1
tmux -c 'queue.sh'
sleep 100
After connecting over ssh, to make sure my script/commands continue running on the remote machine even after the ssh session is closed on the other machine, I am using tmux.
But the Problem is :
Even with tmux, the processes started by queue.sh terminate as soon as I close the ssh session.
I have also tried
export PATH=$PATH:/usr/local/bin
echo $PATH; sleep 1
tmux
queue.sh
sleep 100
which does the same thing.
What I also tried :
If I just pass the following commands over ssh (in command.txt)
export PATH=$PATH:/usr/local/bin
echo $PATH; sleep 1
tmux
and then manually type queue.sh in the tmux terminal, then I can close the ssh terminal and the remote machine continues executing the processes.
Any suggestions?
I want to be able to pass everything through script files, keeping the processes running on the remote machine (macOS) even after closing the ssh session on the other machine.
Thanks
The -c option doesn't actually start a new session; it's for compatibility with other shells if you use tmux as a login shell. In order to run queue.sh in a tmux session, try starting tmux with
tmux new-session queue.sh
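A hedged sketch of a revised command.txt along those lines; -d starts the session detached so it keeps running after the ssh connection closes, and the session name queue is illustrative:
export PATH=$PATH:/usr/local/bin
# Run queue.sh inside a detached tmux session that outlives the ssh login.
tmux new-session -d -s queue 'queue.sh'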

GNU screen: Launch command in session without changing window to it

We have an attended-upgrade script which launches apt-get update && apt-get upgrade simultaneously on all our administered systems. Ideally, we'd want to launch them all inside a screen session. When I do it like this:
File: upgrade.sh
for host in $ALLHOSTS
do
    some_commands_which_take_considerable_time
    screen -X screen sh -c "ssh $host \"apt-get update && apt-get upgrade\""
done
$ screen ./upgrade.sh
It works, but as new windows arrive in the session, they are automatically switched to. Instead, I'd rather have a version where the active window stays fixed unless the contained process quits or I switch manually using ^A n.
Bonus points if there is a possibility to preserve windows with exited processes, but keeping them separate from windows with active processes.
You can do this with tmux. For example:
# Start a session named "apt-get" and leave it running in the background.
tmux new-session -d -s apt-get
# Preserve windows with inactive processes.
tmux set-option -t apt-get set-remain-on-exit on
# Start a new window without switching to it. You can also do this from
# within a running tmux session, not just outside of it.
tmux new-window -d -t apt-get -n upgrade-$host \
"ssh $host 'apt-get update && apt-get upgrade'"
Note that you can have multiple windows with the same name, or modify the argument to the -n flag for unique names. The tmux application doesn't care.
As an example, you could name each window "upgrade," but that would make it difficult to identify your SSH sessions at a glance. Instead, this example appends the value of the host variable (populated by your for-loop) to each window name. This makes it easier to navigate, or to programmatically close windows you are no longer interested in. This is especially valuable when you have a lot of unclosed windows displaying terminated processes.
Overall, the syntax is a little cleaner and more intuitive than GNU screen's for this sort of task, but your mileage may vary.
W/r/t preserving windows after a subprocess exits, one possibility is to invoke the zombie command in your screen config file(s), which requires that you specify two keyboard characters that kill or resurrect the window, respectively. E.g.:
zombie KR
Then, K would kill a window whose subprocess has terminated, while R would attempt to relaunch the subprocess in the same window. Note that, in a zombie window, those keys are captured at the top level (i.e., do not precede them with your normal screen control character prefix sequence).
In order to prevent automatic switching to a newly created window, try altering your invocation of screen to something like the following:
screen -X eval 'screen sh -c "ssh $host \"apt-get update && apt-get upgrade\""' 'other'
Thanks to CodeGnome, I'll probably go with tmux as I believe there is no way to do it with screen (that's unfortunate, really!).
To give a better sketch of how it can be used:
#!/bin/sh
tmux new-session -d -s active
tmux new-session -d -s inactive
tmux set-option -t active set-remain-on-exit on
for host in $ALLHOSTS
do
    tmux new-window -d -t active: -n upgrade-$host "
        ./do_upgrade_stuff.sh $host
        tmux move-window -s active:upgrade-$host -t inactive:
    "
done

SSH doesn't exit from command line

I ssh to another server and run a shell script like this:
nohup ./script.sh 1>/dev/null 2>&1 &
Then I type exit to leave the server. However, it just hangs. The server is Solaris.
How can I exit properly without hanging?
Thanks.
I assume this script is a long-running one. In that case you need to detach the process from the terminal that you wish to close when you terminate your ssh session.
Actually, you have already done most of the work by redirecting both stdout and stderr to /dev/null; however, you didn't do the same for stdin.
I used the test case of:
ssh localhost
nohup sleep 10m &> /dev/null &
^D
# hangs
While
ssh localhost
nohup sleep 10m &> /dev/null < /dev/null &
^D
# exits
I second the recommendation to use the excellent GNU screen, which will do this for you, among other services.
Oh, and have you considered running the script directly and not within a shell? I.e.:
ssh user@host script.sh
If you're trying to leave a command running remotely after you close your SSH link, I strongly recommend you use screen and learn to detach the screen. That's much better than leaving background processes around; it also lets you reconnect and see what the process is up to.
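A minimal sketch of that approach, assuming screen is installed on the remote server (the session name myjob is illustrative):
ssh user@host
screen -dmS myjob ./script.sh   # start script.sh in a detached screen session
exit                            # logging out no longer hangs
# Later, reconnect and check on the job with:
# screen -r myjob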
Since you haven't provided us with script.sh, I don't think we can know for sure why the command is hanging.
You can use the escape sequence:
~.
typed at the start of a line; it tells the ssh client to close the session.
sh -c ./script.sh &

How to make ssh to kill remote process when I interrupt ssh itself?

In a bash script I execute a command on a remote machine through ssh. If the user breaks the script by pressing Ctrl+C, it only stops the script, not the ssh client. Moreover, even if I kill the ssh client, the remote command is still running...
How can I make bash kill the local ssh client and the remote command invocation on Ctrl+C?
A simple script:
#!/bin/bash
ssh -n -x root@db-host 'mysqldump db' -r file.sql
Eventually I found a solution like this:
#!/bin/bash
ssh -t -x root@db-host 'mysqldump db' -r file.sql
So I use '-t' instead of '-n'.
Removing '-n', or using a different user than root, does not help.
When your ssh session ends, your shell will get a SIGHUP (hang-up signal). You need to make sure it sends that on to all processes started from it. For bash, try shopt -s huponexit; your_command. That may not work, because the man page says huponexit only works for interactive shells.
I remember running into this with users running jobs on my cluster, and whether they had to use nohup or not (to get the opposite behaviour of what you want), but I can't find anything in the bash man page about whether child processes ignore SIGHUP by default. Hopefully huponexit will do the trick. (You could put that shopt in your .bashrc instead of on the command line, I think.)
Your ssh -t should work, though, since when the connection closes, reads from the terminal will get EOF or an error, and that makes most programs exit.
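A quick, hedged way to test the huponexit behaviour interactively, with sleep standing in for the real remote command:
shopt -s huponexit   # ask bash to SIGHUP its jobs when this shell exits
sleep 10m &
exit                 # with huponexit set, the background sleep should be killed too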
Do you know what the options you're passing to ssh do? I'm guessing not. The -n option redirects input from /dev/null, so the process you're running on the remote host probably isn't seeing SIGINT from Ctrl-C.
Now, let's talk about how bad an idea it is to allow remote root logins:
It's a really, really bad idea. Have a look at HOWTO: set up ssh keys for some suggestions on how to securely manage remote process execution over ssh. If you need to run something with privileges remotely, you'll probably want a solution involving an ssh public key with an embedded command and a script that runs as root courtesy of sudo.
trap "some_command" SIGINT
will execute some_command locally when you press Ctrl+C. help trap will tell you about its other options.
Regarding the ssh issue, I don't know much about ssh. Maybe you can make it call ssh -n -x root@db-host 'killall mysqldump' as some_command to kill the remote command?
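Putting the trap and the remote kill together, a hedged, untested sketch built from the commands above:
#!/bin/bash
# Kill the remote dump if this script is interrupted locally.
cleanup() {
    ssh -n -x root@db-host 'killall mysqldump'
}
trap cleanup INT
ssh -n -x root@db-host 'mysqldump db' -r file.sql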
What if you don't want to require using "ssh -t" (for those as forgetful as I am)?
I stumbled upon watching the parent PID, because Ctrl+C from the initiating session causes the ssh-launched process on the remote host to exit, although its child process continues. By way of example, here's my script that runs on the remote server.
#!/bin/bash
Answer=(Alive Dead)
Index=0
while [ ${Index} -eq 0 ]; do
    if ! kill -0 ${PPID} 2> /dev/null; then Index=1; fi
    echo "Parent PID ${PPID} is ${Answer[$Index]} at $(date +%Y%m%d%H%M%S%Z)" > ~/NowTime.txt
    sleep 1
done
I then invoke it with "ssh remote_server ./test_script.sh"
"watch cat ~/NowTime.txt" on the remote server shows the timestamp in the file increasing and declaring that the parent process is alive; once I hit CTRL/C in the launching process, the script on the remote server notes that its parent process has died, and the script exits.
