I'm using reverse SSH to connect to a remote client. The operator runs the reverse SSH once and then leaves the client system.
How can I write a bash script that retries the connection to the server whenever the reverse SSH gets disconnected?
Use autossh. Autossh "automatically restart[s] SSH sessions and tunnels"
sudo apt-get install autossh
I use autossh to keep open a reverse tunnel that I depend on. It works very well, even with long periods of lost connection.
Here is the script I use to create the tunnel:
#!/bin/bash
AUTOSSH_GATETIME=0
export AUTOSSH_GATETIME
autossh -f -N -R 8022:localhost:22 username@host -o "ServerAliveInterval 45" -o "ServerAliveCountMax 2"
I execute this script at boot with this cronjob:
@reboot /home/scripts/persistent-tunnel.sh
If you simply want to retry a command until it succeeds, you can use this pattern:
while ! ssh [...]
do
echo "Command failed, retrying..." >&2
done
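A slightly fuller sketch of the same pattern, with a pause between attempts so a dead link isn't hammered (the 5-second delay and the -R values, borrowed from the autossh example above, are arbitrary choices):
#!/bin/bash
# keep re-establishing the reverse tunnel until ssh exits cleanly
while ! ssh -N -R 8022:localhost:22 username@host
do
    echo "Connection failed or dropped, retrying in 5s..." >&2
    sleep 5
done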
I have a slightly different method.
My method always tries to reconnect you after a dirty disconnection ('~.' or 'Connection closed by remote host.'), but if you disconnect with Ctrl+D or with 'exit' it just disconnects and shows some information about the connection.
#!/bin/bash
if [ -z "$1" ]
then
    echo "Please also provide ssh connection details."
    exit 1
fi
retries=0
repeat=true
today=$(date)
while "$repeat"
do
    ((retries+=1)) &&
    echo "Try number $retries..." &&
    today=$(date) &&
    ssh "$@" &&
    repeat=false    # a clean exit (status 0) ends the loop
    sleep 5
done
echo "
Disconnected sshx after a successful login.
Total number of tries = $retries
Connected at:
$today
"
You might want to take a look at the ssh options ServerAliveInterval, ServerAliveCountMax and TCPKeepAlive, because sometimes your line dies without this becoming obvious. Let me demonstrate:
#!/bin/sh
while true; do
ssh -T user@host \
-o IdentityFile=~/.ssh/tunnel \
-o UserKnownHostsFile=~/.ssh/known_hosts.tunnel \
pkill -f "^sshd:\ user\ \ \ \ $" # needs to be edited for nearly every case
sleep 2
ssh -T -N user@host \
-o IdentityFile=~/.ssh/tunnel \
-o UserKnownHostsFile=~/.ssh/known_hosts.tunnel \
-o BatchMode=yes \
-o ExitOnForwardFailure=yes \
-o ServerAliveCountMax=1 \
-o ServerAliveInterval=60 \
-o LocalForward=127.0.0.1:2501=127.0.0.1:25 \
-o RemoteForward=127.0.0.1:2501=127.0.0.1:25
sleep 60
done
You can use netstat -ntp | grep ":22" or ss -ntp | grep ":22" to see established connections to the ssh port, then grep for the IP address you're looking for. If you don't find a connection, reconnect the tunnel.
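A minimal sketch of that check, assuming the server's address is 203.0.113.5 and the reverse tunnel from the question (both hypothetical):
#!/bin/bash
# look for an established connection to the server's ssh port; reconnect if absent
if ! ss -ntp | grep -q '203.0.113.5:22'; then
    ssh -fN -R 8022:localhost:22 username@203.0.113.5
fi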
Use autossh if it works on your version of Linux. It did not on mine, which was an outdated distribution running on a custom NAS box.
The alternative is a simple bash script in crontab like this:
maintain_reverse_ssh_tunnel.sh
#!/bin/bash
if ! netstat -planet | grep myserver_ip_or_name | grep ESTABLISHED > /dev/null; then
    echo "REVERSE SSH DOWN - Restarting the tunnels"
    ssh -fN -R 32999:localhost:22 -R 28080:localhost:80 myusername@myserver_ip_or_name
fi
Replace myusername and myserver_ip_or_name with those of your user and server.
Then add an entry to the crontab by typing crontab -e and adding the following line (this runs the check once an hour):
1 * * * * /path_to_my_script/maintain_reverse_ssh_tunnel.sh
Make sure to have the execute permissions on the script:
chmod 755 maintain_reverse_ssh_tunnel.sh
I have written this script:
#!/bin/bash
SSH_USER=${SSH_USER:=$USER}
for department in A B C E L M V
do
mkdir -p ./resources/${department}
rsync -Pruzh --copy-links \
${SSH_USER}#server:${department}/foo/files \
${SSH_USER}#server:${department}/foo/photos \
./resources/${department}/foo
rsync -Pruzh \
${SSH_USER}#server:${department}/bar/documents \
./resources/${department}/bar
done
It works perfectly, except that I have to type my password 14 times, which is not really practical.
I have heard of ssh-agent, but for some reason it does not work on my WSL.
Is there any alternative that I can use to type my password only once?
If you are using openssh, then you can set up a master connection and reuse it with something like:
DEST="${SSH_USER}#server"
TMPL=/tmp/sshctl/"%L-%r#%h:%p"
mkdir -p /tmp/sshctl
if ! ssh -nNf -o ControlMaster=yes -o ControlPath="${TMPL}" "${DEST}"; then
echo "# Failed to setup SSH ControlMaster. Aborting."
exit 1
fi
# ...
rsync -e "ssh -o 'ControlPath=${TMPL}'" ... "${DEST}":... ...
rsync -e "ssh -o 'ControlPath=${TMPL}'" ... "${DEST}":... ...
# ...
ssh -O exit -o ControlPath="${TMPL}" "${DEST}"
Be sure to secure the socket.
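One way to do that, assuming the dedicated directory from the snippet above, is to make it accessible to your user only before the master connection is created:
mkdir -p /tmp/sshctl && chmod 700 /tmp/sshctl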
Best practice would be to set up SSH key pairs for automated authentication; i.e. create an SSH key pair and copy the public key to the server where these files are located, then use the private key in the rsync command: rsync -Pruzh --copy-links -e "ssh -i /path/to/private.key" .... This is fairly simple, secure, and gets rid of the prompt.
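A sketch of that setup; the key path ~/.ssh/rsync_key is an arbitrary choice:
# generate a key pair without a passphrase, then install the public key on the server
ssh-keygen -t ed25519 -f ~/.ssh/rsync_key -N ""
ssh-copy-id -i ~/.ssh/rsync_key.pub "${SSH_USER}@server"
# from now on rsync authenticates without prompting
rsync -Pruzh --copy-links -e "ssh -i ~/.ssh/rsync_key" "${SSH_USER}@server:A/foo/files" ./resources/A/foo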
You can also use a utility like sshpass to enter the password in the prompt, but that kind of approach is less secure.
I execute my bash script PLCCheck as a process:
./PLCCheck &
PLCCheck
while read -r line
do
...
def_host=192.168.100.110
def_port=6002
HOST=${2:-$def_host}
PORT=${3:-$def_port}
echo -n "OKConnection" | netcat -u -c $HOST $PORT
done < <(netcat -u -l -p 6001)
It listens on UDP port 6001.
When I execute my second bash script SQLCheck, which should listen on UDP port 4001, as a process:
./SQLCheck &
SQLCheck
while read -r line
do
...
def_host=192.168.100.110
def_port=6002
HOST=${2:-$def_host}
PORT=${3:-$def_port}
echo -n "OPENEF1" | netcat -u -c $HOST $PORT
done < <(nc -l -p 4001)
I got this error:
Error: Couldn't setup listening socket (err=-3)
Ports 6001 and 4001 are open in iptables, and each script works fine as a single process. Why do I get this error?
I have checked the man page of nc; I think it is being used in the wrong way:
-l Used to specify that nc should listen for an incoming connection rather
than initiate a connection to a remote host. It is an error to use this
option in conjunction with the -p, -s, or -z options. Additionally,
any timeouts specified with the -w option are ignored.
...
-p source_port
Specifies the source port nc should use, subject to privilege restrictions
and availability. It is an error to use this option in conjunction with the
-l option.
According to this, one should not use the -l option together with the -p option!
Try it without -p: just nc -l 4001. Maybe that is the error.
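For illustration, the two common netcat flavors disagree on how a listening port is specified; which one you have depends on your distribution:
# OpenBSD netcat: the port is a positional argument with -l
nc -u -l 4001
# traditional/GNU netcat: -l and -p are combined
nc -u -l -p 4001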
When working remotely, I have a series of tabs that I open in gnome-terminal, and commands that I execute in them. I would like to automate all this setup as a single command.
If these commands could run independently and in parallel, I'd just adapt the answer to this question. In fact, I tried, using the following shell script:
gnome-terminal --working-directory="/home/superelectric" --tab -t "gate" -e 'bash -c "export BASH_POST_RC=\"ssh gate_tunnel\"; exec bash"' --tab -t "mydesktop" -e 'bash -c "export BASH_POST_RC=\"ssh tunneled_mydesktop\"; exec bash"'
Spread out over multiple lines, for readability:
gnome-terminal \
--working-directory="/home/superelectric" \
--tab \
-t "gate" \
-e \
'bash -c "export BASH_POST_RC=\"ssh gate_tunnel\"; exec bash"' \
--tab \
-t "mydesktop" \
-e \
'bash -c "export BASH_POST_RC=\"ssh tunneled_mydesktop\"; exec bash"'
The first part opens a tab, names it 'gate', and executes 'ssh gate_tunnel' within it. This is an ssh alias that opens a tunnel to 'mydesktop' at school, through the school's outward-facing server, 'gate'.
The second part opens another tab, names it 'mydesktop', and executes 'ssh tunneled_mydesktop' within it. This is another ssh alias, which connects to mydesktop through the tunnel.
~/.ssh/config:
Host gate_tunnel
LocalForward 8023 <my_desktop_at_school>:22
HostName <my_school_server>
That's the theory. In practice, the two commands execute in parallel, whereas I need to ensure that the first tab's command (open tunnel) completes before executing the second tab's command (connect through tunnel).
Is there maybe some command I can execute in the second tab, that 'waits' until the ssh tunnel is opened?
Ok, I think I get it. As I mentioned in the comments, the first thing that comes to mind for reaching your school desktop from the outside is to ssh into the school gate and from there ssh into your desktop, with something like:
$ ssh -t gate.school.edu ssh desktop_name
There's only one tab then, so your problem doesn't exist.
However there's something very cool with your current setup:
From home it's almost as if you had a direct connection to your desktop machine, so you can scp into it directly and forget about gate. With the solution above that's no longer possible, because we end up with an indirect connection: if you want to scp, you have to do it from gate, and that sucks.
Check out this article on using ssh's ProxyCommand feature:
Transparent Multi-hop SSH
You get the best of both worlds then :)
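As a rough sketch of what that article sets up, using the host names from this thread (ssh -W needs OpenSSH 5.4 or later), you could add to ~/.ssh/config:
Host mydesktop
    HostName <my_desktop_at_school>
    ProxyCommand ssh -W %h:%p <my_school_server>
After that, ssh mydesktop or scp mydesktop:file . works directly from home, with the hop through the gate handled transparently.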
Hmm... this may not be a perfect solution. Ideally you should use something that monitors the ssh connection itself, but you can check for the ssh process with ps and wait for the tunnel command to come alive.
#!/bin/bash
COUNTER=0
while [ $COUNTER -lt 10 ]; do    # try 10 times
    # the [s]sh pattern keeps grep from matching its own process line
    if ps aux | grep -q '[s]sh.*<my_desktop_at_school>'; then
        # the tunnel is up; now run the second command
        bash -c "export BASH_POST_RC=\"ssh tunneled_mydesktop\"; exec bash"
        break
    fi
    sleep 10                     # sleep for 10 seconds and try again
    let COUNTER=COUNTER+1
done
You will have to run this script in the second tab.
Hope it helps.
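An alternative to polling ps is to probe the tunnel's local end until it accepts connections; a sketch assuming the LocalForward 8023 from the question's ~/.ssh/config (nc -z performs a connect-only check):
#!/bin/bash
# wait for the local side of the tunnel (port 8023) to start accepting connections
until nc -z localhost 8023; do
    sleep 2
done
ssh tunneled_mydesktop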
I have been trying to automatically enter an ssh connection using a script. This previous Stack Overflow post has helped me so far. Using one connection works (the first ssh statement). However, I want to create another ssh connection once connected, which I thought could look like this:
#! /bin/bash
# My ssh script
sshpass -p "MY_PASSWORD1" ssh -o StrictHostKeyChecking=no *my_hostname_1*
sshpass -p "MY_PASSWORD2" ssh -o StrictHostKeyChecking=no *my_hostname_2*
When running the script, I only get connected to my_hostname_1, and the second ssh command is not run until I exit the first ssh connection.
I've tried using an if statement like this:
if [ "$HOSTNAME" = my_host_name_1 ]; then
sshpass -p "MY_PASSWORD2" ssh -o StrictHostKeyChecking=no *my_hostname_2*
fi
but I can't get any commands to be read until I exit the first connection.
Here is a ProxyCommand example, as suggested by @lihao:
#!/bin/bash
sshpass -p "MY_PASSWORD2" ssh -o StrictHostKeyChecking=no \
-o ProxyCommand='sshpass -p "MY_PASSWORD1" ssh my_hostname_1 netcat -w 1 %h %p' \
my_hostname_2
You are proxying through the first host to get to the second. Note that netcat has to be installed on my_hostname_1, since the ProxyCommand runs it there; if it's missing, you'll need to install it.
You can also set this up in your ~/.ssh/config file so you don't need the proxy stuff on the command line:
Host my_hostname_1
HostName my_hostname_1
Host my_hostname_2
HostName my_hostname_2
ProxyCommand ssh my_hostname_1 netcat -w 1 %h %p
However, this is a little trickier with the password handling. While you could put the sshpass here, it's not a great idea to have passwords in plain text. Using key based authentication might be better.
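If your OpenSSH is 7.3 or newer, ProxyJump does the same job without needing netcat on the intermediate host; a sketch for ~/.ssh/config:
Host my_hostname_2
    ProxyJump my_hostname_1
Or, equivalently, ssh -J my_hostname_1 my_hostname_2 on the command line. You will still be prompted for each password unless you set up keys.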
A Bash script is a sequence of commands.
echo moo
echo bar
will run echo moo and wait for it to complete, then run the next command.
You can run a remote command like this:
ssh remote echo moo
which will connect to remote, run the command, and exit. If there are additional commands in the script file after this, the shell which is executing these commands will continue with the next one, obviously on the host where you started the script.
To connect to one host from another, you could in principle do
ssh host1 ssh host2
but the ProxyCommand approach suggested by @zerodiff improves several aspects of the experience.
From a previous question, I have found that it is possible to run a local script on a remote host using:
ssh -T remotehost < localscript.sh
Now, I need to allow others to specify the directory in which the script will be run on the remote host.
I have tried commands such as
ssh -T remotehost "cd /path/to/dir" < localscript.sh
ssh -T remotehost:/path/to/dir < localscript.sh
and I have even tried adding DIR=$1; cd $DIR to the script and using
ssh -T remotehost < localscript.sh "/path/to/dir/"
alas, none of these work. How am I supposed to do this?
echo 'cd /path/to/dir' | cat - localscript.sh | ssh -T remotehost
Note that if you're doing this for anything complex, it is very, very important to think carefully about how you will handle errors on the remote system. It is easy to write code that works fine as long as the stars align; it is much harder, and often very necessary, to write code that provides useful debugging messages when something breaks.
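As a minimal illustration of that point, prefixing set -e makes the remote shell stop at the first failing command instead of blindly carrying on:
echo 'set -e; cd /path/to/dir' | cat - localscript.sh | ssh -T remotehost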
You may also want to look at the venerable tool Expect (http://en.wikipedia.org/wiki/Expect). It is often used for scripting things on remote machines. (And yes, error handling is a long-term maintenance issue with it.)
Two more ways to change directory on the remote host (with a variable directory):
echo '#!/bin/bash
cd "$1" || exit 1
pwd -P
shift
printf "%s\n" "$@" | cat -n
exit
' > localscript.sh
ssh localhost 'bash -s "$@"' <localscript.sh '/tmp' 2 3 4 5
ssh localhost 'source /dev/stdin "$@"' <localscript.sh '/tmp' 2 3 4 5
# make sure it's the bash shell that sources & executes the commands
#ssh -T localhost 'bash -c '\''source /dev/stdin "$@"'\''' _ <localscript.sh '/tmp' 2 3 4 5