How can I exit, stop, or kill autossh if the connection times out or the IP/port does not exist or respond, without using the ssh and sshd config files? - bash

I run autossh in a script for remote port forwarding, and I need to exit, kill, or stop the script if the connection times out or the IP/port does not exist or respond, without using the ssh or sshd config files. Is this possible?
I found no answer on the Stack sites or in the autossh man page.
Example 1:
myautossh script
#!/bin/bash
/usr/bin/autossh -NT -o "ExitOnForwardFailure=yes" -R 5555:localhost:443 -l user 1.1.1.1
if [ $? -eq 0 ]; then
  echo "SUCCESS" >> errorlog
else
  echo "FAIL" >> errorlog
fi
Example 2:
myautossh script
#!/bin/bash
/usr/bin/autossh -f -NT -M 0 -o "ServerAliveInterval=5" -o "ServerAliveCountMax=1" -o "ExitOnForwardFailure=yes" -R 5555:localhost:443 -l user 1.1.1.1 2>> errorlog
if [ $? -eq 0 ]; then
  echo "SUCCESS" >> errorlog
else
  echo "FAIL" >> errorlog
  kill $(ps aux | grep [m]yautossh | awk '{print $2}')
fi
The IP 1.1.1.1 does not exist in my network, so it gets a connection timeout, but the script and autossh are still running, as checked with:
ps aux | grep [m]yautossh
or
ps x | grep [a]utossh
I can only terminate the script with Ctrl+C.
I want to run autossh in a script, try to connect to a non-existing IP or port, and then terminate, exit, or kill the autossh process so my script can continue, without configuring the ssh & sshd config files, only with the options/commands of autossh and the use of -f for the background. Is this possible?

The use of timeout with --preserve-status is what you need.
timeout allows you to run a command with a time limit.
Regarding the exit status: by default, timeout returns 124 when the time limit is reached, otherwise it returns the exit status of the managed command. With --preserve-status, timeout returns the exit status of the managed command even when the time limit is reached.
This will terminate the command after 2 seconds and return the exit status of your command. If it is not equal to 0, the command did not succeed and you could not establish a successful connection.
#!/bin/bash
timeout --preserve-status 2 /usr/bin/autossh -NT -o "ExitOnForwardFailure=yes" -R 33333:localhost:443 -l user 1.1.1.1
if [ $? -eq 0 ]; then
  echo "Connection success"
else
  echo "Connection fail"
fi
https://linuxize.com/post/timeout-command-in-linux/
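Building on that, here is a rough sketch (not part of the linked answer; the host, port, and errorlog name are taken from the question's examples) of how the probe can gate the background tunnel: run a foreground probe under timeout first, and start the persistent -f tunnel only if the probe exits with 0, otherwise clean up and let the script continue.
#!/bin/bash
# Sketch: probe the connection first, then start the persistent tunnel with -f.
if timeout --preserve-status 5 /usr/bin/autossh -NT -o "ExitOnForwardFailure=yes" -R 5555:localhost:443 -l user 1.1.1.1; then
  echo "SUCCESS" >> errorlog
  /usr/bin/autossh -f -NT -M 0 -o "ServerAliveInterval=5" -o "ServerAliveCountMax=1" -o "ExitOnForwardFailure=yes" -R 5555:localhost:443 -l user 1.1.1.1
else
  echo "FAIL" >> errorlog
  # Belt and braces: kill any autossh the probe may have left behind (the pattern is an assumption).
  pkill -f '5555:localhost:443' 2>/dev/null
fi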

Related

How to check the SSH login status of routers in bash script with password prompts

I am running a task where I need to check the SSH login status of 400 remote routers. I have made scripts using expect in bash which SSH into the remote routers and run some commands on them. However, there are some routers that are not responding to SSH. I am using an if statement to skip the routers that fail on SSH. The sample code to check the status on a remote router works only if we have passwordless entry or the private key saved. Could you please help with how I can check the SSH status on the remote routers?
If I get the password prompt while doing SSH to the router, I can say that the server is able to SSH into the router. There is no need to supply a password to it.
#!/bin/bash
ssh -q -o BatchMode=yes -o ConnectTimeout=7 username@IP exit
echo $?
if [ $? -ne 0 ]
then
  # Do stuff here if example.com SSH is down
  echo "Can not connect to the device"
fi
Well,
If you are using the expect package, then there is a timeout option there as well.
Otherwise, your shell code above is the correct way of doing it, except for a few corrections:
#!/bin/bash
ssh -q -o BatchMode=yes -o ConnectTimeout=7 username@IP date
ret=$?
echo $ret
if [ $ret -ne 0 ]
then
  # Do stuff here if example.com SSH is down
  echo "Can not connect to the device"
fi
You can see we are assigning $? to a variable immediately. If you don't, then $? will contain the return value of the echo $? command, which will always be 0, giving you success for every ssh.
I also suggest running some other command with ssh rather than exit.
Hope this helps.
Edit:
Since you don't have passwordless ssh enabled, you can try to telnet to port 22. If port 22 is open, it will show "Connected"; if it is not, it won't, and you can grep for that.
Here is the modified code (provided that ssh is running on port 22; otherwise change the port in the code):
#!/bin/bash
echo "" | telnet $IP 22 | grep "Connected"
ret=$?
echo $ret
if [ $ret -ne 0 ]
then
  # Do stuff here if example.com SSH is down
  echo "Can not connect to the device"
fi
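If telnet is not installed on the machine, a similar probe can be done with nc; this is just a sketch, assuming a netcat that supports -z (port scan) and -w (timeout):
#!/bin/bash
# Sketch: check whether TCP port 22 is reachable, without telnet.
if nc -z -w 7 "$IP" 22
then
  echo "SSH port is open on $IP"
else
  # Do stuff here if SSH is down
  echo "Can not connect to the device"
fi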

script to check if I can access multiple servers

I made this script to check if I can connect to a list of many servers:
for SERVER in $(cat servers.txt); do
  ssh root@$SERVER && echo OK $SERVER || echo ERR $SERVER
done
The problem is that if it is the first time I'm connecting to the server, the server asks the classic question "The authenticity of host 'x.x.x.x' can't be established... bla bla bla". Then I have to respond yes or no, and that defeats the purpose of making it a script. Is there any way to bypass that so I can add it to the script?
Also, there are some servers for which I don't have my keys installed but they offer the option to enter a password. In that case the script will wait until I try a password before continuing, so I was wondering if there is a way to improve this script so that if the server asks for a password it sets ERR $SERVER and continues with the script?
Thank you for your help.
Do you actually need to establish an SSH connection?
Or is simply opening a socket connection to the host on the SSH port enough for you to determine that the server is online?
servers.txt
hostA.example.com
hostB.example.com
hostC.example.com
port-probe.sh
#!/bin/bash
PORT=22
TIMEOUT=3
for SERVER in $(cat servers.txt); do
  # Open a socket and send a char
  echo "-" | nc -w $TIMEOUT $SERVER $PORT &> /dev/null
  # Check exit code of NC
  if [ $? -eq 0 ]; then
    echo "$SERVER is Available."
  else
    echo "$SERVER is Unavailable."
  fi
done
You can use the -o flag to set options in SSH:
for SERVER in $(cat servers.txt); do
  ssh -o StrictHostKeyChecking=no -o BatchMode=yes root@$SERVER exit && echo OK $SERVER || echo ERR $SERVER
done
Check out the man page for ssh_config (man ssh_config) for all of the options available with the -o flag.
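For hosts that hang rather than refuse the connection, it may also help to add ConnectTimeout; a sketch of the same loop (the 5-second value is just an assumed example):
for SERVER in $(cat servers.txt); do
  ssh -o StrictHostKeyChecking=no -o BatchMode=yes -o ConnectTimeout=5 root@$SERVER exit && echo OK $SERVER || echo ERR $SERVER
done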
If you have quite a few servers, you may want to connect to all of them simultaneously. That way the script waits at most about 2 minutes (the default TCP connect timeout) even if some of the servers are unresponsive.
Connect to each server without allocating a terminal, execute an echo . command, and redirect the output into a per-server file. Issue these commands in a loop asynchronously. Then wait until all commands complete, iterate over the log files, and check which ones contain a dot. Finally, report the servers whose log files do not contain a dot.
E.g.:
#!/bin/bash
servers="$@"
for server in $servers; do
  ssh -o StrictHostKeyChecking=no -o PasswordAuthentication=no -Tn $server echo . >$server.log 2>$server.error.log &
done
wait # After 2 minutes all connection attempts time out.
for server in $servers; do
  [[ -s $server.log ]] || echo "Failed to connect to $server" >&2
done
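For example, assuming the script above is saved as check-servers.sh (the file name is only an assumption), it would be invoked with the hosts as arguments:
./check-servers.sh hostA.example.com hostB.example.com hostC.example.com
# Each reachable host leaves a "." in <host>.log; failures are reported on stderr.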

Unable to kill remote processes with ssh

I need to kill remote processes with a shell script as follows:
#!/bin/bash
ip="172.24.63.41"
user="mag"
timeout 10s ssh -q $user@$ip exit
if [ $? -eq 124 ]
then
  echo "can not connect to $ip, timed out."
else
  echo "connected, executing commands"
  scp a.txt $user@$ip://home/mag
  ssh -o ConnectTimeout=10 $user@$ip > /dev/null 2>&1 << remoteCmd
touch b.txt
jobPid=`jps -l | grep jobserver | awk '{print $1}'`
if [ ! $jobPid == "" ]; then
kill -9 $jobPid
fi
exit
remoteCmd
  echo "commands executed."
fi
After executing it I found that the scp and touch commands had been executed, but the kill command had not executed successfully and the process is still there. If I run the lines from "jobPid=..." to "fi" on the remote machine, the process can be killed. How can I fix it?
I put a script on the remote machine which can find and kill the process, then I ran a script on the local machine which executes the remote script over ssh. The scripts are as follows:
Local script:
#!/bin/bash
ip="172.24.63.41"
user="mag"
timeout 10s ssh -q $user@$ip exit
if [ $? -eq 124 ]
then
  echo "can not connect to $ip, timed out."
else
  echo "connected, executing commands"
  ssh -q $user@$ip "/home/mag/local.sh"
  echo "commands executed."
fi
Remote script:
#!/bin/bash
jobPid=`jps -l | grep jobserver | awk '{print $1}'`
if [ ! "$jobPid" == "" ]; then
  kill -9 $jobPid
fi
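For what it's worth, the likely reason the original heredoc version did not kill anything is that the backticks and $jobPid inside the unquoted here-document are expanded by the local shell before ssh even runs, so the remote side never executes jps. A sketch of an alternative that keeps everything in one script is to quote the heredoc delimiter so the expansion happens on the remote machine:
ssh -o ConnectTimeout=10 $user@$ip > /dev/null 2>&1 << 'remoteCmd'
touch b.txt
jobPid=$(jps -l | grep jobserver | awk '{print $1}')
if [ -n "$jobPid" ]; then
  kill -9 "$jobPid"
fi
remoteCmd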
Your script needs root access (which is never a good idea), or you should make sure the program you are killing runs under your web user/group.

Catch SSH exceptions in a UNIX script

I have a background process on my server which updates my ~/.ssh/authorized_keys frequently. If I ssh from my client machine at that very moment, it will fail:
$ ssh my_server date
SSH Version: OpenSSH_5.3p1
user@my_server's password:
and ssh will mark the script as failed after a number of tries.
I want to break away and add an exception-handling piece that sleeps 30 seconds whenever this ssh failure occurs.
Something like
ssh -o 'StrictHostKeyChecking no' appsrvr01.myserv.com "date" 2> /tmp/error
if [ $? -ne 0 ]
then
  echo -e "\n Please wait..\n\n"
  sleep 1s
else
  echo -e "\n The Environment is ready to use!\n\n"
  exit 0
fi
Is there any better approach, as the above snippet will still prompt for a password?
Maybe you could approach this by taking a "flock" on a lock file in the script that updates your authorized keys, and then taking the same "flock" in the shell script you run above:
In the script that updates your authorized keys:
(flock 200
  # commands here that modify your authorized_keys file
) 200>/tmp/authkey_lock
And around the script piece you have posted above:
(flock 200
  ssh -o 'StrictHostKeyChecking no' appsrvr01.myserv.com "date" 2> /tmp/error
  if [ $? -ne 0 ]
  then
    echo -e "\n Please wait..\n\n"
    sleep 1s
  else
    echo -e "\n The Environment is ready to use!\n\n"
    exit 0
  fi
) 200>/tmp/authkey_lock
Please see "man flock" for for information about flock.
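If you would rather not block indefinitely while the key file is being updated, flock can also be told to give up after a timeout; a small sketch (the 30-second wait is an assumed value):
(flock -w 30 200 || { echo "Could not get the lock"; exit 1; }
  ssh -o 'StrictHostKeyChecking no' appsrvr01.myserv.com "date" 2> /tmp/error
) 200>/tmp/authkey_lock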

Checking SSH failure in a script

Hi, what is the best way to check whether SSH fails for whatever reason?
Can I use an if statement (if it fails, then do something)?
I'm using the ssh command in a loop and passing my host names from a flat file.
So I do something like:
for i in `cat /tmp/hosts` ; do ssh $i 'hostname;sudo ethtool eth1'; done
Sometimes I get this error, or I just cannot connect:
ssh: host1 Temporary failure in name resolution
I want to skip the hosts that I cannot connect to if SSH fails. What is the best way to do this? Is there a runtime error I can trap to bypass the hosts that I cannot ssh into for whatever reason, perhaps because ssh is not allowed or I do not have the right password?
Thanking you in advance
Cheers
To check if there was a problem connecting and/or running the remote command:
if ! ssh host command
then
  echo "SSH connection or remote command failed"
fi
To check if there was a problem connecting, regardless of success of the remote command (unless it happens to return status 255, which is rare):
if ssh host command; [ $? -eq 255 ]
then
  echo "SSH connection failed"
fi
Applied to your example, this would be:
for i in `cat /tmp/hosts` ;
do
  if ! ssh $i 'hostname;sudo ethtool eth1';
  then
    echo "Connection or remote command on $i failed";
  fi
done
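If you want to skip only the hosts with connection problems but still see failures of the remote command itself, the exit-status-255 form from above can be applied to the same loop (a sketch):
for i in `cat /tmp/hosts` ;
do
  ssh $i 'hostname;sudo ethtool eth1'
  if [ $? -eq 255 ]
  then
    echo "Could not connect to $i, skipping";
  fi
done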
You can check the return value that ssh gives you as originally shown here:
How to create a bash script to check the SSH connection?
$ ssh -q user@downhost exit
$ echo $?
255
$ ssh -q user@uphost exit
$ echo $?
0
EDIT - I cheated and used nc
Something like this:
#!/bin/bash
ssh_port_is_open() { nc -z ${1:?hostname} 22 > /dev/null; }
for host in `cat /tmp/hosts` ; do
  if ssh_port_is_open $host; then
    ssh -o "BatchMode=yes" $host 'hostname; sudo ethtool eth1';
  else
    echo " $host Down"
  fi
done
