open gnome terminal tabs programmatically and execute commands in sequence - shell

When working remotely, I have a series of tabs that I open in gnome-terminal, and commands that I execute in them. I would like to automate all this setup as a single command.
If these commands could run independently and in parallel, I'd just adapt the answer to this question. In fact, I tried, using the following shell script:
gnome-terminal --working-directory="/home/superelectric" --tab -t "gate" -e 'bash -c "export BASH_POST_RC=\"ssh gate_tunnel\"; exec bash"' --tab -t "mydesktop" -e 'bash -c "export BASH_POST_RC=\"ssh tunneled_mydesktop\"; exec bash"'
Spread out over multiple lines, for readability:
gnome-terminal \
--working-directory="/home/superelectric" \
--tab \
-t "gate" \
-e \
'bash -c "export BASH_POST_RC=\"ssh gate_tunnel\"; exec bash"' \
--tab \
-t "mydesktop" \
-e \
'bash -c "export BASH_POST_RC=\"ssh tunneled_mydesktop\"; exec bash"'
The first part opens a tab, names it 'gate', and executes 'ssh gate_tunnel' within it. This is an ssh alias that opens a tunnel to 'mydesktop' at school, through the school's outward-facing server, 'gate'.
The second part opens another tab, names it 'mydesktop', and executes 'ssh tunneled_mydesktop' within it. This is another ssh alias, which connects to mydesktop through the tunnel.
~/.ssh/config:
Host gate_tunnel
LocalForward 8023 <my_desktop_at_school>:22
HostName <my_school_server>
That's the theory. In practice, the two commands execute in parallel, whereas I need to ensure that the first tab's command (open tunnel) completes before executing the second tab's command (connect through tunnel).
Is there maybe some command I can execute in the second tab, that 'waits' until the ssh tunnel is opened?

OK, I think I get it. As I mentioned in the comments, the first thing that comes to mind for reaching your school desktop from the outside is to ssh into the school gate and from there ssh into your desktop, with something like:
$ ssh -t gate.school.edu ssh desktop_name
There's only one tab then, so your problem doesn't exist.
However there's something very cool with your current setup:
From home it's almost as if you had a direct connection to your desktop machine, so you can scp into it directly and forget about gate. With the solution above that's not possible anymore because we end up with an indirect connection: If you want to scp you have to do it from gate and that sucks.
Check out this article on using ssh's ProxyCommand feature:
Transparent Multi-hop SSH
You get the best of both worlds then :)
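For reference, a minimal sketch of what such a multi-hop setup might look like in ~/.ssh/config (the host alias is made up here; the placeholders are the ones from the question, and nc must exist on the gate):
Host mydesktop_direct
HostName <my_desktop_at_school>
ProxyCommand ssh <my_school_server> nc %h %p
After that, ssh mydesktop_direct (and likewise scp) goes through the gate transparently.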

Hmm... this may not be a perfect solution. Ideally you would use something that monitors the ssh connection itself, but you can check for the ssh process with ps and wait for the tunnel command to come alive.
#!/bin/bash
COUNTER=0
while [ "$COUNTER" -lt 10 ]; do # try 10 times
    # grep -v grep keeps grep from matching its own process line
    if ps aux | grep -v grep | grep -q "<my_desktop_at_school>"; then
        # the tunnel is connected; now execute the second command
        bash -c 'export BASH_POST_RC="ssh tunneled_mydesktop"; exec bash'
        break
    fi
    sleep 10 # sleep for 10 seconds and try again
    COUNTER=$((COUNTER+1))
done
You will have to run this script in the second tab.
Hope it helps.
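A hedged alternative sketch: since the tunnel config in the question forwards local port 8023, the second tab could poll the port itself instead of the process list (this assumes a netcat with -z support, e.g. the OpenBSD nc):
# wait until something is listening on the tunnel's local port
while ! nc -z localhost 8023 2>/dev/null; do
sleep 1
done
ssh tunneled_mydesktop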

Related

Capture output of double-ssh (ssh twice) session as BASH variable

I'd like to capture the output of an ssh session. However, I first need to ssh twice (from my local computer to the remote portal to the remote server), then run a command and capture the output.
Doing this line-by-line, I would do:
ssh name@remote.portal.com
ssh remote.server.com
remote.command.sh
I have tried the following:
server=remote.server.com ##define in the script, since it varies
sshoutput=$(ssh -tt name@remote.portal.com exec "ssh -tt ${server} echo \"test\"")
echo $sshoutput
I would expect the above script to echo "test" after the final command. However, the outer ssh prompt just hangs after I enter my command and, once I Ctrl+c or fail to enter my password, the inner ssh session fails (I believe since stdout is no longer printed to screen and I no longer get my password prompt).
If I run just the inner command (i.e., without "sshoutput=$(" to save it as a variable), then it works but (obviously) does not capture output. I have also tried without the "exec".
I have also tried saving the inner ssh as a variable like
sshoutput=$(ssh -tt name@portal myvar=$(ssh -tt ${server} echo \"test\"") && echo $myvar)
but that fails because BASH tries to execute the inner ssh before sending it to the outer ssh session (I believe), and the server name is not recognized.
(I have looked at https://unix.stackexchange.com/questions/89428/ssh-twice-in-bash-alias-function but they simply say "more flags required if using interactive passwords" and do not address capturing output)
Thanks in advance for any assistance!
The best-practice approach here is to have ssh itself do the work of jumping through your bouncehost.
result=$(ssh \
-o 'ProxyCommand=ssh name@remote.portal.com nc -w 120 %h %p' \
name@remote.server.com \
"remote.command.sh")
You can automate that in your ~/.ssh/config, like so:
Host remote.server.com
ProxyCommand ssh name@remote.portal.com nc -w 120 %h %p
...after which any ssh remote.server.com command will automatically jump through remote.portal.com. (Change nc to netcat or similar, as appropriate for tools that are installed on the bouncehost).
That said, if you really want to do it yourself, you can:
printf -v inner_cmd '%q ' "remote.command.sh"
printf -v outer_cmd '%q ' ssh name@remote.server.com "$inner_cmd"
ssh name@remote.portal.com bash -s <<EOF
$outer_cmd
EOF
...the last piece of which can be run in a command substitution like so:
result=$(ssh name@remote.portal.com bash -s <<EOF
$outer_cmd
EOF
)

Go root, create tmux, send commands and then attach - all via a single SSH command in a bash script

How would you go about sending a command via SSH:
ssh user@server -t
Which will create a tmux session, send commands to it and then attach - allowing you to manually work on the interactivity presented from the commands? This has to be done while at the same time logging in as root - NOT via sudo.
ssh user#server -t "bash -c \"su - -c \"tmux -d \; apt-get update \; apt-get upgrade\" root\""
or simply without bash -c (which was an attempt at getting SSH to show the tmux window, since with su - instead of sudo it would not display the shell; this only happened when running it from a script - when run directly as a shell command there was no issue).
Another attempt have been:
ssh user#server -t "su - -c \"tmux new-session -n $session_name && tmux send-keys -t $session_name \"$command\"\" root"
This latter works relatively well - except that it attaches to the tmux session first, so the commands are only sent after manually exiting tmux.
The idea is basically to automatically create a tmux session in which some commands are run - and then attach to it. This is to be done as a part of a bash script.
Any ideas?
For those interested, the command I found to work was:
ssh -p22 user@server -t "su - root -c \"tmux -d $session_name\;\"-t $session_name -c \"apt-get update\"\" \; \"-t $session_name -c \"apt-get upgrade\"\"\""
However, it does not put you inside the tmux session - it merely sends the tmux output back to the user. A step in the right direction, but not quite there...!
Very open to more efficient answers!
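For comparison, a sketch of the usual pattern for this (create the session detached, send the keys, then attach; the session name "work" and the commands are illustrative):
ssh -t user@server "su - root -c 'tmux new-session -d -s work && tmux send-keys -t work \"apt-get update && apt-get upgrade\" C-m && tmux attach -t work'"
send-keys with C-m types the command and presses Enter inside the detached session, so it is already running by the time attach brings the session to the foreground.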

How to know ssh is disconnected and retry with a bash script

I'm using reverse ssh to connect to a remote client. An operator runs the reverse ssh once and then leaves the client system.
How can I write a bash script that reconnects to the server whenever the reverse ssh gets disconnected?
Use autossh. Autossh "automatically restart[s] SSH sessions and tunnels"
sudo apt-get install autossh
I use autossh to keep open a reverse tunnel that I depend on. It works very well, even through long periods of lost connectivity.
Here is the script I use to create the tunnel:
#!/bin/bash
AUTOSSH_GATETIME=0
export AUTOSSH_GATETIME
autossh -f -N -R 8022:localhost:22 username@host -o "ServerAliveInterval 45" -o "ServerAliveCountMax 2"
I execute this script at boot with this cronjob:
@reboot /home/scripts/./persistent-tunnel.sh
If you simply want to retry a command until it succeeds, you can use this pattern:
while ! ssh [...]
do
echo "Command failed, retrying..." >&2
done
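A variant of the same pattern with a retry cap and a growing delay (sketch; the cap of 5 and the 10-second step are arbitrary):
n=0
until ssh [...] || [ "$n" -ge 5 ]; do
n=$((n+1))
echo "Attempt $n failed, retrying in $((n * 10))s..." >&2
sleep $((n * 10))
done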
I have a slightly different method.
My method always tries to reconnect you if you have a dirty disconnection: '~.' or 'Connection closed by remote host.'
But if you disconnect with 'Ctrl+D' or with 'exit', it just disconnects and shows you some info about the connections.
#!/bin/bash
if [ -z "$1" ]
then
    echo "
Please also provide ssh connection details.
"
    exit 1
fi
retries=0
repeat=true
today=$(date)
while "$repeat"
do
    ((retries+=1)) &&
    echo "Try number $retries..." &&
    today=$(date) &&
    ssh "$@" &&
    repeat=false
    sleep 5
done
echo "
Disconnected sshx after a successful login.
Total number of tries = $retries
Connected at:
$today
"
You might want to take a look at the ssh options ServerAliveInterval, ServerAliveCountMax and TCPKeepAlive, because sometimes your line dies without this being obvious. Let me demonstrate:
#!/bin/sh
while true; do
ssh -T user@host \
-o IdentityFile=~/.ssh/tunnel \
-o UserKnownHostsFile=~/.ssh/known_hosts.tunnel \
pkill -f "^sshd:\ user\ \ \ \ $" # needs to be edited for nearly every case
sleep 2
ssh -T -N user@host \
-o IdentityFile=~/.ssh/tunnel \
-o UserKnownHostsFile=~/.ssh/known_hosts.tunnel \
-o BatchMode=yes \
-o ExitOnForwardFailure=yes \
-o ServerAliveCountMax=1 \
-o ServerAliveInterval=60 \
-o LocalForward=127.0.0.1:2501=127.0.0.1:25 \
-o RemoteForward=127.0.0.1:2501=127.0.0.1:25
sleep 60
done
You can use netstat -ntp | grep ":22" or ss -ntp | grep ":22" to see established connections to ssh port, then use grep to filter the ip address you're looking for. If you don't find a connection then reconnect the tunnel.
Use autossh if it works on your version of Linux. It did not on mine as it was an outdated Linux distribution for a custom NAS box.
The alternative is a simple bash script in crontab like this:
maintain_reverse_ssh_tunnel.sh
if ! netstat -planet | grep myserver_ip_or_name | grep ESTABLISHED > /dev/null; then
echo "REVERSE SSH DOWN - Restarting the tunnels"
ssh -fN -R 32999:localhost:22 -R 28080:localhost:80 myusername@myserver_ip_or_name
fi
Replace myusername and myserver_ip_or_name with those of your user and server.
Then add an entry to crontab by typing crontab -e and adding the following line:
1 * * * * /path_to_my_script/maintain_reverse_ssh_tunnel.sh
Make sure to have the execute permissions on the script:
chmod 755 maintain_reverse_ssh_tunnel.sh

How to create a new terminal session and execute multiple commands

I'm looking for a way to automate the start-up of my development environment. I have three virtual machines that have to be started, then have to ssh to each of them and open VPN on them.
So far I've gotten them to start and managed to ssh to them:
#!/bin/sh
virsh start virtual_1
virsh start virtual_2
virsh start virtual_3
sleep 2m
gnome-terminal --title "virtual_3: server" -x ssh root#192.168.1.132 &
gnome-terminal --title "virtual_2: 11.100" -x ssh root#192.168.11.100 &
gnome-terminal --title "virtual_1: 12.100" -x ssh root#192.168.12.100 &
How do I execute an additional command in each of the terminals which starts openvpn?
For simplicity I'm trying to echo 1 in each terminal instead of starting VPN.
I've found that multiple commands on terminal start can be run like:
gnome-terminal -x bash -c "cmd1; cmd2"
So for one terminal to keep it simple I changed:
gnome-terminal --title "virtual_3: server" -x ssh root#192.168.1.132 &
to:
gnome-terminal --title "virtual_3: server" -x bash -c "ssh root#192.168.1.132 ; echo 1" &
But 1 wasn't printed in the terminal of virtual_3.
Then I thought, maybe the command is being executed too quickly, before the terminal is ready, so I tried adding &&:
gnome-terminal --title "virtual_3: server" -x bash -c "ssh root#192.168.1.132 &&; echo 1" &
But that gave no result either.
First of all, if you run
gnome-terminal -x bash -c "cmd1; cmd2"
you get bash to execute cmd1 and then cmd2. It doesn't execute cmd1 and then hand cmd2 to whatever cmd1 started: ssh is a program running in the terminal, and your cmd2 won't be executed until ssh has finished.
So you need to run ssh and tell that to execute your command.
You can do so by:
ssh user@address "command_to_execute"
However, ssh exits after the command is finished. As you can see in "With ssh, how can you run a command on the remote machine without exiting?", you can execute ssh with the -t option so it doesn't quit:
ssh -t user@address "command_to_execute"
So your command in the end becomes:
gnome-terminal --title "virtual_3: server" -x bash -c "ssh -t root#192.168.1.132 'echo 1'"
You are right, giving -t alone is not enough (although it is necessary). -t allocates a pseudo-tty but doesn't execute bash for you. From the manual of ssh:
-t Force pseudo-tty allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services. Multiple -t options force tty allocation, even if ssh has no local tty.
If command is specified, it is executed on the remote host instead of a login shell.
So what you need is to execute bash yourself. Therefore:
ssh -t user@address "command_to_execute; bash"
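Putting it together for the question's use case, as a sketch (the openvpn invocation and its config path are placeholders):
gnome-terminal --title "virtual_3: server" -x bash -c "ssh -t root@192.168.1.132 'openvpn --config client.conf; bash'" &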

How can I execute a script from my local machine in a specific (but variable) directory on a remote host?

From a previous question, I have found that it is possible to run a local script on a remote host using:
ssh -T remotehost < localscript.sh
Now, I need to allow others to specify the directory in which the script will be run on the remote host.
I have tried commands such as
ssh -T remotehost "cd /path/to/dir" < localscript.sh
ssh -T remotehost:/path/to/dir < localscript.sh
and I have even tried adding DIR=$1; cd $DIR to the script and using
ssh -T remotehost < localscript.sh "/path/to/dir/"
alas, none of these work. How am I supposed to do this?
echo 'cd /path/to/dir' | cat - localscript.sh | ssh -T remotehost
Note that if you're doing this for anything complex, it is very, very important that you think carefully about how you will handle errors in the remote system. It is very easy to write code that works just fine as long as the stars align. What is much harder - and often very necessary - is to write code that will provide useful debugging messages if stuff breaks for any reason.
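For example, a sketch extending the line above: the same injection trick can prepend error-handling settings, provided the remote command is run under bash (pipefail is a bashism):
echo 'set -euo pipefail; cd /path/to/dir' | cat - localscript.sh | ssh -T remotehost bash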
Also you may want to look at the venerable tool http://en.wikipedia.org/wiki/Expect. It is often used for scripting things on remote machines. (And yes, error handling is a long term maintenance issue with it.)
Two more ways to change directory on the remote host (variably):
echo '#!/bin/bash
cd "$1" || exit 1
pwd -P
shift
printf "%s\n" "$@" | cat -n
exit
' > localscript.sh
ssh localhost 'bash -s "$@"' <localscript.sh '/tmp' 2 3 4 5
ssh localhost 'source /dev/stdin "$@"' <localscript.sh '/tmp' 2 3 4 5
# make sure it's the bash shell to source & execute the commands
#ssh -T localhost 'bash -c '\''source /dev/stdin "$@"'\''' _ <localscript.sh '/tmp' 2 3 4 5
