SSH not exiting properly inside if statement in bash heredoc

So I am running this script to check whether a Java server is up on a remote host by sshing into it. If it is down, I am trying to exit the ssh session and run another script locally. However, after the exit command, it is still running on the remote host.
ssh -i ec2-user@$DNS << EOF
if ! lsof -i | grep -q java ; then
    echo "java server stopped running"
    # want to exit ssh
    exit
    # after here, when I check, it is still in ssh
    # I want to run another script locally, in the same directory as the current script
    ./other_script.sh
else
    echo "java server up"
fi
EOF

The exit is exiting the remote shell that ssh started, so execution never reaches the ./other_script.sh line in the heredoc. It is better to move that call outside the heredoc and trigger it from the exit status of the ssh/heredoc command, like so:
ssh -i ec2-user@$DNS << EOF
if ! lsof -i | grep -q java ; then
    echo "java server stopped running"
    exit 7  # set the exit status to a number that isn't standard, in case ssh fails
else
    echo "java server up"
fi
EOF
if [[ $? -eq 7 ]]; then
    ./other_script.sh
fi
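For completeness, a minimal sketch of the same pattern (the key file path in $KEY is a placeholder, since the question's -i has no key file) that also distinguishes ssh's own failure, which surfaces as exit code 255, from the deliberate exit 7:
# The quoted 'EOF' keeps everything inside the heredoc from expanding locally.
ssh -i "$KEY" ec2-user@"$DNS" << 'EOF'
if ! lsof -i | grep -q java; then
    echo "java server stopped running"
    exit 7   # non-standard code meaning the server is down
fi
echo "java server up"
EOF
status=$?

if [ "$status" -eq 7 ]; then
    ./other_script.sh                          # server down: run the local follow-up script
elif [ "$status" -eq 255 ]; then
    echo "ssh itself failed; could not reach $DNS" >&2
fi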

Related

Bash script catch exit code of another bash script over ssh

I'm writing a script, execute.sh, that connects to another host over ssh and executes a bash script from another file. The problem is I don't know how to catch the exit code of the file I executed.
My execute.sh is like this:
ssh $REMOTE_USER@$REMOTE_HOST 'bash -s' < ./onvps.sh
and I want execute.sh to catch the exit code of onvps.sh.
Thank you all.
As the ssh client exits with the exit code of the remote command, you can check the value of $?. If it returns 0, the command executed successfully.
In your case:
ssh $REMOTE_USER@$REMOTE_HOST 'bash -s' < ./onvps.sh
return_code=$?
# then evaluate
if [ $return_code -eq 0 ]
then
    echo "OK"
else
    echo "ERROR"
fi
One example:
$ more mytest.sh
exit 22
$ ssh myuser@myhost 'bash -s' < ./mytest.sh
$ echo $?
22
Whatever you include in the script is executed on the remote host, so $? will give you its exit code.
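As a usage sketch (exit code 22 is just the value from the mytest.sh example above), the return code can also drive a case statement locally:
ssh $REMOTE_USER@$REMOTE_HOST 'bash -s' < ./onvps.sh
rc=$?
case $rc in
    0)   echo "remote script succeeded" ;;
    22)  echo "remote script reported error 22" ;;
    255) echo "ssh failed to connect or the session broke" ;;   # avoid 255 in your own scripts, ssh uses it for its own errors
    *)   echo "remote script failed with code $rc" ;;
esac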

Unable to kill remote processes with ssh

I need to kill remote processes with a shell script as follows:
#!/bin/bash
ip="172.24.63.41"
user="mag"
timeout 10s ssh -q $user@$ip exit
if [ $? -eq 124 ]
then
echo "can not connect to $ip, timeout out."
else
echo "connected, executing commands"
scp a.txt $user@$ip://home/mag
ssh -o ConnectTimeout=10 $user@$ip > /dev/null 2>&1 << remoteCmd
touch b.txt
jobPid=`jps -l | grep jobserver | awk '{print $1}'`
if [ ! $jobPid == "" ]; then
kill -9 $jobPid
fi
exit
remoteCmd
echo "commands executed."
fi
After executing it I found that the scp and touch commands had run, but the kill had not succeeded and the process was still there. If I run the lines from "jobPid=..." through "fi" directly on the remote machine, the process is killed. How can I fix this?
I put a script on the remote machine that finds and kills the process, then ran a script on the local machine that executes the remote one over ssh. The scripts are as follows:
Local script:
#!/bin/bash
ip="172.24.63.41"
user="mag"
timeout 10s ssh -q $user@$ip exit
if [ $? -eq 124 ]
then
    echo "can not connect to $ip, timed out."
else
    echo "connected, executing commands"
    ssh -q $user@$ip "/home/mag/local.sh"
    echo "commands executed."
fi
Remote script:
#!/bin/bash
jobPid=$(jps -l | grep jobserver | awk '{print $1}')
if [ -n "$jobPid" ]; then
    kill -9 "$jobPid"
fi
Your script needs root access (which is never a good idea). Alternatively, make sure the program you want to kill is running under your own user/group.
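A hedged sketch of that idea, reusing the question's $user/$ip and jps lookup: check who owns the process before trying to kill it, since kill without root only works on processes you own.
ssh "$user@$ip" '
    jobPid=$(jps -l | grep jobserver | awk "{print \$1}")
    if [ -n "$jobPid" ]; then
        owner=$(ps -o user= -p "$jobPid")
        if [ "$owner" = "$(whoami)" ]; then
            kill -9 "$jobPid"          # we own the process, so no root is needed
        else
            echo "process $jobPid belongs to $owner, not $(whoami); sudo would be required" >&2
        fi
    fi
'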

Write the script to check remote host services running or not [duplicate]

Marked as a duplicate of: Pass commands as input to another command (su, ssh, sh, etc)
This is the script, but the output is wrong: even when Apache is running it reports it as stopped. I'm using Ubuntu 12.04.
ssh -qn root@ip
if ps aux | grep [h]ttpd > /dev/null
then
    echo "Apache is running"
else
    echo "Apache is not running"
fi
Try the following:
if ssh -qn root@ip pidof httpd &>/dev/null ; then
    echo "Apache is running";
    exit 0;
else
    echo "Apache is not running";
    exit 1;
fi
These exit commands also make the script return proper EXIT_SUCCESS and EXIT_FAILURE values (useful if you need to extend this script in the future).
One piece of advice, though: it is better to put the script on the remote host and run it there over ssh as a sudo-capable user.
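As a usage sketch (check_apache.sh is a hypothetical name for the script above, and the restart command is an assumption), a wrapper or cron job can then branch on those exit codes directly:
if ./check_apache.sh; then
    echo "Apache is up, nothing to do"
else
    echo "Apache appears to be down, restarting it"
    ssh -qn root@ip 'service apache2 restart'
fi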
You are not running the commands on the remote host.
Try this instead.
if ssh -qn root@ip ps aux | grep -q httpd; then
    echo "Apache is running"
else
    echo "Apache is not running"
fi
Just to be explicit: ps aux is the argument to ssh, so that is what is executed on the remote host. The grep runs as a child of the local script.
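If you would rather have the grep run on the remote host as well, quoting the whole pipeline makes it part of the argument to ssh (a variation on the answer above, not a requirement):
# The single quotes send the entire pipeline to the remote shell;
# the [h] bracket trick keeps grep from matching its own process.
if ssh -qn root@ip 'ps aux | grep -q "[h]ttpd"'; then
    echo "Apache is running"
else
    echo "Apache is not running"
fi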
First of all, httpd is not available on Ubuntu; Ubuntu ships apache2 instead. So the command ps aux | grep [h]ttpd will not work on Ubuntu.
There is no need to write a script to check the Apache status. From an Ubuntu terminal, run this command to get the status:
sudo service apache2 status
The output will be one of:
A) if Apache is running: Apache2 is running (pid 1234)
B) if Apache is not running: Apache2 is NOT running.
Since ssh returns the exit status of the remote command (check the ssh man page and search for "exit status"), it is as simple as:
ssh root@ip "/etc/init.d/apache2 status"
if [ $? -ne 0 ]; then   # "/etc/init.d/apache2 status" exits 0 when the service is running
    echo "Apache is not running"
else
    echo "Apache is running"
fi
You do not need ps or grep for this.

ssh script doesn't return control to the parent script

I am trying to execute a local script on the remote server, by writing it to standard input of an ssh command. The script runs fine, but then ssh doesn't exit: it just hangs, and control doesn't return to the parent script.
Parent Shell :
for HOSTNAME in ${HOSTS} ; do
    ssh -t -t $HOSTNAME "bash -s" < ~backup_conf.sh
done
Called Script:
#!/bin/sh
AGENT_BASE_PATH=/home/lokesh
if [ -d "$AGENT_BASE_PATH/CI/DE_deployment/conf" ]; then
    if [ -d "$AGENT_BASE_PATH/CI/temp/conf_bkup" ]; then
        rm -rf $AGENT_BASE_PATH/CI/temp/conf_bkup
    fi
    cp -R $AGENT_BASE_PATH/CI/DE_deployment/conf $AGENT_BASE_PATH/CI/temp/conf_bkup
fi
exit
I have written 'exit', but control does not return to the parent script; it hangs at the remote server.
The culprit is the bash -s line in your calling code, which is still waiting for its standard input to be closed (end-of-file, i.e. Ctrl-D).
Try this instead:
for HOSTNAME in ${HOSTS} ; do
    ssh -t -t $HOSTNAME "bash ~backup_conf.sh"
done
Write your exit status into a file on the remote host and pick it up later from the remote host with ssh/scp/sftp. Directly via ssh you will not get an exit status from the other host.
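A minimal sketch of that file-based approach, assuming the remote host has a writable /tmp and reusing the question's ~backup_conf.sh path:
# Run the local script remotely and record its exit status in a file on the remote host.
ssh "$HOSTNAME" 'bash -s; echo $? > /tmp/backup_conf.status' < ~backup_conf.sh

# Later, pick the status back up from the remote host.
status=$(ssh "$HOSTNAME" 'cat /tmp/backup_conf.status')
echo "backup_conf.sh exited with $status on $HOSTNAME"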

Checking SSH failure in a script

Hi, what is the best way to check whether SSH fails for whatever reason? Can I use an if statement (if it fails, then do something)?
I'm using the ssh command in a loop and passing my host names from a flat file, so I do something like:
for i in `cat /tmp/hosts` ; do ssh $i 'hostname; sudo ethtool eth1'; done
Sometimes I get this error, or I simply cannot connect:
ssh: host1 Temporary failure in name resolution
I want to skip the hosts that I cannot connect to if SSH fails. What is the best way to do this? Is there a runtime error I can trap to bypass the hosts I cannot ssh into for whatever reason, perhaps because ssh is not allowed or I do not have the right password?
Thanks in advance.
To check if there was a problem connecting and/or running the remote command:
if ! ssh host command
then
    echo "SSH connection or remote command failed"
fi
To check if there was a problem connecting, regardless of success of the remote command (unless it happens to return status 255, which is rare):
if ssh host command; [ $? -eq 255 ]
then
    echo "SSH connection failed"
fi
Applied to your example, this would be:
for i in `cat /tmp/hosts` ; do
    if ! ssh $i 'hostname; sudo ethtool eth1'; then
        echo "Connection or remote command on $i failed"
    fi
done
You can check the return value that ssh gives you as originally shown here:
How to create a bash script to check the SSH connection?
$ ssh -q user@downhost exit
$ echo $?
255
$ ssh -q user@uphost exit
$ echo $?
0
EDIT - I cheated and used nc
Something like this:
#!/bin/bash
ssh_port_is_open() { nc -z ${1:?hostname} 22 > /dev/null; }
for host in `cat /tmp/hosts` ; do
    if ssh_port_is_open $host; then
        ssh -o "BatchMode=yes" $host 'hostname; sudo ethtool eth1'
    else
        echo "$host is down"
    fi
done
