How to get a custom return value from an SSH command? - bash

I'm trying to get feedback from SSH but I'm unsure how to go about it. My SSH command is structured as follows:
ssh exampleuser@exampledomain.com 'sudo bash -s' < examplebashscript.sh
I have RSA keys set up and the script runs exactly as expected on the remote server. The end of my bash script is as follows:
if [ $ERROR_COUNT -ne 0 ]; then
...
exit 10
else
...
exit 20
fi
However, if I echo $? I get 0 as my response. I presume this 0 is being returned because the SSH command ran successfully? How do I return a value from the SSH command? I want to return 10 if there were errors when the script ran and 20 if there were no errors.

Related

Bash script catch exit code of another bash script over ssh

I'm writing a script execute.sh that sshes to another host and executes a bash script from another file. The problem is that I don't know how to catch the exit code of the file I executed.
My execute.sh is like this:
ssh $REMOTE_USER@$REMOTE_HOST 'bash -s' < ./onvps.sh
and I want execute.sh catch exit code of onvps.sh.
Thank you all.
As the ssh client exits with the exit code of the remote command, you can check the value of $?. If it returns 0, the command executed successfully.
In your case:
ssh $REMOTE_USER@$REMOTE_HOST 'bash -s' < ./onvps.sh
return_code=$?
# then evaluate
if [ $return_code -eq 0 ]
then
echo "OK"
else
echo "ERROR"
fi
One example
$ more mytest.sh
exit 22
$ ssh myuser@myhost 'bash -s' < ./mytest.sh
$ echo $?
22
Whatever you include in the script will be executed on the remote host, so $? will give you its exit code.
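Applied back to the original question, a minimal sketch of the calling side could look like this (the remote command is copied from the question; exit code 255 is what ssh itself returns on connection or authentication failures):
ssh exampleuser@exampledomain.com 'sudo bash -s' < examplebashscript.sh
status=$?   # ssh forwards the exit code of the remote 'sudo bash -s'

case "$status" in
    10)  echo "script ran, but reported errors" ;;
    20)  echo "script ran without errors" ;;
    255) echo "ssh itself failed (connection or authentication problem)" ;;
    *)   echo "unexpected exit code: $status" ;;
esac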

how to check ssh exit command executed?

I want to check whether an exit command executed successfully in an ssh session.
My first attempt was the command exit && echo $? || echo $?. The echo $? || echo $? part should print 0 if exit succeeds.
The problem is that the echo never executes when exit succeeds, because the connection is closed and the later command is lost.
My second attempt was splitting it into two commands, like this:
$ exit
$ echo $?
echo $? should print 0 if exit executed successfully.
But another problem is that echo $? may be swallowed, because it is sent so quickly that it arrives at the remote ssh host before exit has executed.
So, how do I ensure exit has executed on the remote host before sending the next command?
UPDATE
The use case: I execute shell commands from a programming language and send them over an ssh pipe stream, so I don't know when the exit command has executed. If the exit command has not completed, my following commands will be swallowed because they are sent to the exiting host.
That is why I care about when the exit command has executed.
If your main concern is knowing that you are back on your local machine, then you could define a variable, known only on your local machine, before ssh. After exiting, test for the existence of that variable: if it exists you are back on the local machine, and if it does not, try to exit again because you are still on the remote machine.
#define this before ssh
uniqueVarName333=1
Then in your script:
# ssh stuff
exit
if [ -z "${uniqueVarName333+x}" ]; then exit; else echo "Back on local machine"; fi
Or you could just check for the success of exit multiple times, to ensure it succeeds when you send it to the remote machine.
exit || exit || exit #check it as many times as you feel to get the probability of exiting close to 0
Inspired by @9Breaker.
I solved this by sending an echo 'END-COMMAND' flag repeatedly at a short interval, such as every 15 ms.
To clarify with a shell channel example:
echo '' && echo 'BEGIN-COMMAND' && exit
echo 'END-COMMAND'
//if not response repeat send `echo 'END-COMMAND'`
echo $? && echo 'END-COMMAND'
//if not response repeat send `echo 'END-COMMAND'`
echo $? && echo 'END-COMMAND'
We can pick up the IO stream characters and parse the stream until we match BEGIN-COMMAND and END-COMMAND.
The response on success might be:
BEGIN-COMMAND
0
END-COMMAND //may need to be sent multiple times to get the response
or, on failure when the network breaks while connecting:
BEGIN-COMMAND
ssh: connect to host 192.168.1.149 port 22: Network is unreachable
255
END-COMMAND
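A rough shell-level sketch of the same marker idea, assuming a one-shot ssh call rather than a long-lived pipe (user@host and some_command are placeholders):
# wrap the remote command between markers, then read output until END-COMMAND appears
ssh user@host 'echo BEGIN-COMMAND; some_command; echo RC=$?; echo END-COMMAND' |
while IFS= read -r line; do
    case "$line" in
        RC=*)        echo "remote exit code: ${line#RC=}" ;;
        END-COMMAND) break ;;
    esac
done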

sshpass exit in automation

I have a total of 6 IP addresses, and of those only 2 are valid. I wrote a shell script that uses sshpass to test each IP.
The issue is that when the script reaches an IP that is working, it logs in to the system (a Cisco switch) and stays there, not continuing with the loop to test the remaining IPs. If I type "exit" on the system, then it continues with the loop.
After a successful login, how can the script automatically get out of the logged-in system and continue testing the remaining IPs?
/usr/bin/sshpass -p $ADMINPASS ssh -oStrictHostKeyChecking=no -oCheckHostIP=no -t $ADMINLOGIN@$IP exit
I can use the exit status to figure out which IPs worked and which didn't.
Testing first whether the IP is alive, and only then trying ssh, could help you. I don't know if you are using a loop or not, but a loop is a good choice. It should look like:
for f in ip-1 ip-2 ip-3 ip-4 ip-5 ip-6; do
    ping -c 1 -w 3 $f
    if [ $? -eq 0 ]; then
        echo OK
        ssh_pass $f your_command
    else
        echo "IP is NOK"
    fi
done
You can then also add an 'exit' command, depending on what you test: 'exit 0' after your 'ssh' command if it is OK, 'exit 1' if not.
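A hedged sketch putting the asker's sshpass invocation into such a loop (ADMINLOGIN and ADMINPASS come from the question; the IP list and the ConnectTimeout are illustrative additions):
# try each candidate IP; the trailing 'exit' closes the remote session right after login
for IP in 192.0.2.1 192.0.2.2 192.0.2.3; do
    if /usr/bin/sshpass -p "$ADMINPASS" ssh -oStrictHostKeyChecking=no -oCheckHostIP=no \
           -oConnectTimeout=5 -t "$ADMINLOGIN@$IP" exit; then
        echo "$IP: login OK"
    else
        echo "$IP: login failed"
    fi
done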

Catch SSH exceptions in a UNIX script

I have a background process on my server which updates my ~/.ssh/authorized_keys frequently. If I ssh from my client machine at that very moment, it will fail:
$ ssh my_server date
SSH Version: OpenSSH_5.3p1
user#my_server's password:
and ssh will mark the script as failed after a number of tries.
I want to break away and add an exception-handling piece to sleep 30 seconds whenever this ssh failure occurs.
Something like
ssh -o 'StrictHostKeyChecking no' appsrvr01.myserv.com "date" 2> /tmp/error
if [ $? -ne 0 ]
then
echo -e "\n Please wait..\n\n"
sleep 1s
else
echo -e "\n The Environment is ready to use!\n\n"
exit 0
fi
Is there any better approach, as the above snippet will still prompt for a password?
Maybe you could approach this by taking a "flock" on a lock file in one shell script, and then taking the same "flock" in the shell script you run above:
In the script that updates your authorized keys:
(flock 200
# commands here that modify your authorized_keys file
) 200>/tmp/authkey_lock
And around the script piece you have posted above:
(flock 200
ssh -o 'StrictHostKeyChecking no' appsrvr01.myserv.com "date" 2> /tmp/error
if [ $? -ne 0 ]
then
echo -e "\n Please wait..\n\n"
sleep 1s
else
echo -e "\n The Environment is ready to use!\n\n"
exit 0
fi
) 200>/tmp/authkey_lock
Please see "man flock" for for information about flock.

ssh script doesn't return control to the parent script

I am trying to execute a local script on the remote server, by writing it to standard input of an ssh command. The script runs fine, but then ssh doesn't exit: it just hangs, and control doesn't return to the parent script.
Parent Shell :
for HOSTNAME in ${HOSTS} ; do
ssh -t -t $HOSTNAME "bash -s" < ~backup_conf.sh
done
Called Script:
#!/bin/sh
AGENT_BASE_PATH=/home/lokesh
if [ -d "$AGENT_BASE_PATH/CI/DE_deployment/conf" ]; then
if [ -d "$AGENT_BASE_PATH/CI/temp/conf_bkup" ]; then
rm -rf $AGENT_BASE_PATH/CI/temp/conf_bkup
fi
cp -R $AGENT_BASE_PATH/CI/DE_deployment/conf $AGENT_BASE_PATH/CI/temp/conf_bkup
fi
exit
I have written 'exit', but control is not returning to the parent script.
It hangs at the remote server.... :(
The culprit is the bash -s line in your calling code, which, given the forced tty allocation (-t -t), is still waiting for its input to be ended (Ctrl-D):
Try this instead:
for HOSTNAME in ${HOSTS} ; do
ssh -t -t $HOSTNAME "bash ~backup_conf.sh"
done
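If the script has to stay on the local machine, an alternative sketch (assuming nothing in the script actually needs a tty) is to drop the forced tty allocation, so that EOF on the redirected script ends the remote bash:
for HOSTNAME in ${HOSTS} ; do
    # without -t -t, 'bash -s' exits as soon as the redirected script reaches EOF
    ssh "$HOSTNAME" "bash -s" < ~/backup_conf.sh   # script path assumed to be ~/backup_conf.sh
done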
Write your exit status into a file on the remote host and pick it up later from the remote host with ssh/scp/sftp.
Directly via ssh you will not get an exit status from the other host.
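A rough sketch of that file-based approach (file names are illustrative; note that the other answers show ssh does forward the exit status directly):
# the remote script writes its own result to a file before exiting,
# e.g. ending with: echo 0 > /tmp/backup_conf.status
ssh "$HOSTNAME" "bash -s" < ~/backup_conf.sh
scp "$HOSTNAME:/tmp/backup_conf.status" "./status.$HOSTNAME"   # pick the status up afterwards
echo "remote status: $(cat "./status.$HOSTNAME")"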
