Catch SSH exceptions in a UNIX script - shell

I have a background process on my server which updates my ~/.ssh/authorized_keys frequently. If I ssh from my client machine at that very moment, it fails and falls back to asking for a password:
$ ssh my_server date
SSH Version: OpenSSH_5.3p1
user@my_server's password:
and ssh marks the script as failed after a number of tries.
I want to break out and add an exception-handling piece that sleeps for 30 seconds whenever this ssh failure occurs. Something like:
ssh -o 'StrictHostKeyChecking no' appsrvr01.myserv.com "date" 2> /tmp/error
if [ $? -ne 0 ]
then
    echo -e "\n Please wait..\n\n"
    sleep 1s
else
    echo -e "\n The Environment is ready to use!\n\n"
    exit 0
fi
Is there a better approach? The snippet above will still prompt for a password.

Maybe you could approach this by taking a "flock" on a lock file in the script that updates your keys, and then taking the same "flock" in the shell script you posted above.
In the script that updates your authorized keys:
(flock 200
# commands here that modify your authorized_keys file
) 200>/tmp/authkey_lock
And around the script piece you have posted above:
(flock 200
    ssh -o 'StrictHostKeyChecking no' appsrvr01.myserv.com "date" 2> /tmp/error
    if [ $? -ne 0 ]
    then
        echo -e "\n Please wait..\n\n"
        sleep 1s
    else
        echo -e "\n The Environment is ready to use!\n\n"
        exit 0
    fi
) 200>/tmp/authkey_lock
Please see "man flock" for more information about flock.
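As a further sketch, not taken from the answers here: if the remaining annoyance is the password prompt itself, adding BatchMode=yes makes ssh fail immediately instead of prompting, so the retry/sleep idea from the question can loop cleanly. The host name and the 30-second delay are taken from the question, and key-based login is assumed to work normally.
#!/bin/bash
# Sketch: retry until key authentication works again.
# BatchMode=yes makes ssh exit with an error instead of prompting for a password.
while true; do
    if ssh -o BatchMode=yes -o 'StrictHostKeyChecking no' appsrvr01.myserv.com "date" 2> /tmp/error; then
        echo -e "\n The Environment is ready to use!\n\n"
        exit 0
    fi
    echo -e "\n Please wait..\n\n"
    sleep 30
done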

Related

How can I exit, stop, or kill autossh if the connection times out or the IP or port does not exist or respond, without using the ssh and sshd config files?

I run autossh in a script for remote port forwarding, and I need to exit, kill, or stop the script if the connection times out or the IP or port does not exist or respond, without using the ssh or sshd config files. Is this possible?
I found no answer on the Stack sites or in the autossh man page.
Example 1:
myautossh script
#!/bin/bash
/usr/bin/autossh -NT -o "ExitOnForwardFailure=yes" -R 5555:localhost:443 -l user 1.1.1.1
if [ $? -eq 0 ]; then
    echo "SUCCESS" >> errorlog
else
    echo "FAIL" >> errorlog
fi
Example 2:
myautossh script
#!/bin/bash
/usr/bin/autossh -f -NT -M 0 -o "ServerAliveInterval=5" -o "ServerAliveCountMax=1" -o "ExitOnForwardFailure=yes" -R 5555:localhost:443 -l user 1.1.1.1 2>> errorlog
if [ $? -eq 0 ]; then
    echo "SUCCESS" >> errorlog
else
    echo "FAIL" >> errorlog
    kill $(ps aux | grep '[m]yautossh' | awk '{print $2}')
fi
The IP 1.1.1.1 does not exist on my network, so the connection times out, but the script and autossh are still running, as checked with:
ps aux | grep [m]yautossh
or
ps x | grep [a]utossh
I can only terminate the script with Ctrl+C.
I want to run autossh in a script, try to connect to a non-existent IP or port, and then terminate, exit, or kill the autossh process so my script can continue, without touching the ssh and sshd config files, using only autossh's own options (including -f for backgrounding). Is this possible?
The use of timeout with --preserve-status is what you need.
timeout allows you to run a command with a time limit.
Regarding the exit status: by default timeout returns 124 when the time limit is reached; with --preserve-status it instead returns the exit status of the managed command.
This will terminate the command after 2 seconds and return your command's exit status; if it is not 0, the command did not succeed and you could not establish a successful connection:
#!/bin/bash
timeout --preserve-status 2 /usr/bin/autossh -NT -o "ExitOnForwardFailure=yes" -R 33333:localhost:443 -l user 1.1.1.1
if [ $? -eq 0 ]; then
    echo "Connection success"
else
    echo "Connection fail"
fi
https://linuxize.com/post/timeout-command-in-linux/
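A related sketch, not part of the answer above: if you only want to probe whether the host is reachable before starting the autossh tunnel, run a short ssh command under timeout (without --preserve-status) and check for the 124 status, which means the time limit was hit. The user, IP, and 10-second limit are placeholders, and BatchMode=yes assumes key-based login.
#!/bin/bash
# Sketch: probe the connection with a short command before starting the tunnel.
# Exit status 124 means timeout killed ssh; any other non-zero status is an ssh error.
timeout 10 ssh -o BatchMode=yes -l user 1.1.1.1 true
status=$?
if [ "$status" -eq 124 ]; then
    echo "connection attempt timed out" >> errorlog
elif [ "$status" -ne 0 ]; then
    echo "ssh failed with status $status" >> errorlog
else
    echo "SUCCESS" >> errorlog
    # safe to start autossh with -f here
fi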

How to wait until ssh is available?

I'm trying to code a script which will wait for a server to be up and check if ssh is running.
#!/bin/bash
until [ $(ssh -o BatchMode=yes -o ConnectTimeout=5 root@HOST echo ok 2>&1) = "ok" ]; do
echo "Trying again..."
done
echo "SSH is running"
I get this error if the server is powered off:
test3: ligne 3 : [: Too many arguments
Trying again...
^C
If the server is running, it outputs:
ok
The trivial fix is to put double quotes around the string which might come up empty.
until [ "$(ssh ...)" = "ok" ]; do ...
The Bash-only test [[ is more tolerant, so you could use [[ ... ]] instead of [ ... ] and not have to add quotes.
... but a better solution is to look for the exit status from ssh:
until ssh ...; do ...
If you want the operation to be silent, add a redirection.
until ssh user@hostname true >/dev/null 2>&1; do ...
with whatever additional options you want, of course. You might need to add one or more ssh -t options if it complains about not being connected to a TTY, for example.
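Putting the pieces together, a minimal sketch of such a polling loop (the 5-second pause between attempts is an assumption so the server isn't hammered; BatchMode=yes keeps ssh from stopping at a password prompt):
#!/bin/bash
# Sketch: poll until ssh accepts a connection, relying on ssh's exit status.
until ssh -o BatchMode=yes -o ConnectTimeout=5 root@HOST true >/dev/null 2>&1; do
    echo "Trying again..."
    sleep 5
done
echo "SSH is running"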
Your ssh command is expanding to nothing, or to multiple words; you should quote it (and run Shellcheck on your script):
until [ "$(ssh ... )" = ok ]; do

Unable to kill remote processes with ssh

I need to kill remote processes with a shell script as follows:
#!/bin/bash
ip="172.24.63.41"
user="mag"
timeout 10s ssh -q $user@$ip exit
if [ $? -eq 124 ]
then
echo "can not connect to $ip, timeout out."
else
echo "connected, executing commands"
scp a.txt $user@$ip://home/mag
ssh -o ConnectTimeout=10 $user@$ip > /dev/null 2>&1 << remoteCmd
touch b.txt
jobPid=`jps -l | grep jobserver | awk '{print $1}'`
if [ ! $jobPid == "" ]; then
kill -9 $jobPid
fi
exit
remoteCmd
echo "commands executed."
fi
After executing it I found that the scp and touch commands had run, but the kill had not executed successfully and the process is still there. If I run the lines from "jobPid= ..." to "fi" directly on the remote machine, the process is killed. How can I fix this?
I put a script on the remote machine which finds and kills the process, then ran a script on the local machine which executes the remote script over ssh. The scripts are as follows:
Local script:
#!/bin/bash
ip="172.24.63.41"
user="mag"
timeout 10s ssh -q $user@$ip exit
if [ $? -eq 124 ]
then
    echo "can not connect to $ip, timeout out."
else
    echo "connected, executing commands"
    ssh -q $user@$ip "/home/mag/local.sh"
    echo "commands executed."
fi
remote script:
#!/bin/bash
jobPid=`jps -l | grep jobserver | awk '{print $1}'`
if [ "$jobPid" != "" ]; then
    kill -9 $jobPid
fi
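For what it's worth, the reason the original single-ssh version failed is that the here-document delimiter was unquoted, so the backticks and $jobPid were expanded by the local shell before anything reached the remote host. A minimal sketch that keeps everything in one ssh call by quoting the delimiter (user and IP as in the question):
#!/bin/bash
ip="172.24.63.41"
user="mag"
# Quoting 'remoteCmd' stops the local shell from expanding $(...) and $jobPid,
# so the PID is looked up and killed on the remote machine.
ssh -o ConnectTimeout=10 $user@$ip > /dev/null 2>&1 << 'remoteCmd'
jobPid=$(jps -l | grep jobserver | awk '{print $1}')
if [ "$jobPid" != "" ]; then
    kill -9 $jobPid
fi
remoteCmd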
Your script needs root access (which is never a good idea), or you must make sure that the program you are trying to kill is running under your own user/group.

Checking SSH failure in a script

Hi, what is the best way to check whether SSH fails, for whatever reason?
Can I use an if statement (if it fails, then do something)?
I'm using the ssh command in a loop and passing my host names from a flat file.
so I do something like:
for i in `cat /tmp/hosts` ; do ssh $i 'hostname;sudo ethtool eth1'; done
Sometimes I get this error, or I just cannot connect:
ssh: host1 Temporary failure in name resolution
I want to skip the hosts that I cannot connect to if SSH fails. What is the best way to do this? Is there a runtime error I can trap to bypass the hosts I cannot ssh into for whatever reason, perhaps because ssh is not allowed or I do not have the right password?
Thanking you in advance
Cheers
To check if there was a problem connecting and/or running the remote command:
if ! ssh host command
then
    echo "SSH connection or remote command failed"
fi
To check if there was a problem connecting, regardless of success of the remote command (unless it happens to return status 255, which is rare):
if ssh host command; [ $? -eq 255 ]
then
    echo "SSH connection failed"
fi
Applied to your example, this would be:
for i in `cat /tmp/hosts` ;
do
    if ! ssh $i 'hostname;sudo ethtool eth1';
    then
        echo "Connection or remote command on $i failed";
    fi
done
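If the goal is to skip only unreachable hosts while still noticing remote-command failures, a minimal sketch combining the loop with the 255 check (BatchMode=yes and ConnectTimeout are assumptions so a password prompt or a dead host cannot hang the loop; -n stops ssh from swallowing the rest of the host list):
#!/bin/bash
# Sketch: skip unreachable hosts (ssh exit status 255) but report command failures.
while read -r host; do
    ssh -n -o BatchMode=yes -o ConnectTimeout=5 "$host" 'hostname; sudo ethtool eth1'
    status=$?
    if [ "$status" -eq 255 ]; then
        echo "Skipping $host: SSH connection failed"
    elif [ "$status" -ne 0 ]; then
        echo "Remote command failed on $host with status $status"
    fi
done < /tmp/hosts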
You can check the return value that ssh gives you as originally shown here:
How to create a bash script to check the SSH connection?
$ ssh -q user@downhost exit
$ echo $?
255
$ ssh -q user@uphost exit
$ echo $?
0
EDIT - I cheated and used nc
Something like this:
#!/bin/bash
ssh_port_is_open() { nc -z ${1:?hostname} 22 > /dev/null; }
for host in `cat /tmp/hosts` ; do
    if ssh_port_is_open $host; then
        ssh -o "BatchMode=yes" $host 'hostname; sudo ethtool eth1';
    else
        echo " $host Down"
    fi
done

Making a bash script to check connectivity and change connections if necessary. Help me improve it?

My connection is flaky; however, I have a backup one. I made some bash scripts to check for connectivity and switch connections if the present one is dead. Please help me improve them.
The scripts almost work, except that they do not wait long enough to receive an IP (the until loop cycles to the next step too quickly). Here goes:
#!/bin/bash
# Invoke this script with paths to your connection specific scripts, for example
# ./gotnet.sh ./connection.sh ./connection2.sh
until [ -z "$1" ] # Try different connections until we are online...
do
    if eval "ping -c 1 google.com"
    then
        echo "we are online!" && break
    else
        $1 # Runs (next) connection-script.
        echo
    fi
    shift
done
echo # Extra line feed.
exit 0
And here is an example of the slave scripts:
#!/bin/bash
ifconfig wlan0 down
ifconfig wlan0 up
iwconfig wlan0 key 1234567890
iwconfig wlan0 essid example
sleep 1
dhclient -1 -nw wlan0
sleep 3
exit 0
Here's one way to do it:
#!/bin/bash
while true; do
    if ! ping -c 1 google.com > /dev/null 2>&1; then   # if ping exits nonzero...
        ./connection_script1.sh                        # run the first script
        sleep 10                                       # give it a few seconds to complete
    fi
    if ! ping -c 1 google.com > /dev/null 2>&1; then   # if ping *still* exits nonzero...
        ./connection_script2.sh                        # run the second script
        sleep 10                                       # give it a few seconds to complete
    fi
    sleep 300                                          # check again in five minutes
done
Adjust the sleep times and ping count to your preference. This script never exits, so you would most likely want to run it with the following command:
./connection_daemon.sh > /dev/null 2>&1 & disown
Have you tried omitting the -nw option from the dhclient command?
Also, remove the eval and the quotes from your if; they aren't necessary. Do it like this:
if ping -c 1 google.com > /dev/null 2>&1
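On the -nw point: dhclient without -nw waits for a lease before returning, which addresses the "not waiting long enough to receive an IP" problem directly. If you keep -nw, a hedged alternative is to poll the interface for an address after the connection script runs. A minimal sketch (wlan0 comes from the question's script; the 30-second ceiling and the "inet " match are assumptions):
#!/bin/bash
# Sketch: wait up to 30 seconds for wlan0 to obtain an IPv4 address
# before the caller goes on to test connectivity with ping.
for _ in $(seq 1 30); do
    if ifconfig wlan0 | grep -q "inet "; then
        break
    fi
    sleep 1
done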
Try using ConnectTimeout ${timeout} somewhere.
