Unable to kill remote processes with ssh - shell

I need to kill remote processes with a shell script as follows:
#!/bin/bash
ip="172.24.63.41"
user="mag"
timeout 10s ssh -q $user@$ip exit
if [ $? -eq 124 ]
then
echo "can not connect to $ip, timeout out."
else
echo "connected, executing commands"
scp a.txt $user@$ip://home/mag
ssh -o ConnectTimeout=10 $user@$ip > /dev/null 2>&1 << remoteCmd
touch b.txt
jobPid=`jps -l | grep jobserver | awk '{print $1}'`
if [ ! $jobPid == "" ]; then
kill -9 $jobPid
fi
exit
remoteCmd
echo "commands executed."
fi
After executing it I found that the scp and touch commands had run, but the kill command had not executed successfully and the process was still there. If I run the lines from "jobPid= ..." to "fi" directly on the remote machine, the process is killed. How can I fix this?

I put a script on the remote machine that can find and kill the process, then I ran a script on the local machine that executes the remote script over ssh. The scripts are as follows:
Local script:
#!/bin/bash
ip="172.24.63.41"
user="mag"
timeout 10s ssh -q $user@$ip exit
if [ $? -eq 124 ]
then
echo "can not connect to $ip, timeout out."
else
echo "connected, executing commands"
ssh -q $user@$ip "/home/mag/local.sh"
echo "commands executed."
fi
remote script:
#!/bin/bash
jobPid=`jps -l | grep jobserver | awk '{print $1}'`
if [ ! $jobPid == "" ]; then
kill -9 $jobPid
fi
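For what it's worth, the original heredoc version fails because the delimiter is unquoted: the backticks and $jobPid are expanded by the local shell before the commands are ever sent, so jobPid is always empty on the local machine. Quoting the delimiter keeps the expansion on the remote side. A minimal sketch of that alternative fix:
#!/bin/bash
# Quoting 'remoteCmd' prevents local expansion; everything inside the
# heredoc is evaluated on the remote host instead.
ssh -o ConnectTimeout=10 "$user@$ip" > /dev/null 2>&1 << 'remoteCmd'
touch b.txt
jobPid=$(jps -l | grep jobserver | awk '{print $1}')
if [ -n "$jobPid" ]; then
    kill -9 "$jobPid"
fi
remoteCmd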

Your script needs root access (which is never a good idea), or you need to make sure the program you are trying to kill is running under your own web user/group.

Related

How can I exit, stop, or kill autossh if the connection times out, or the IP or port does not exist or respond, without using the ssh and sshd config files?

I run autossh in a script for remote port forwarding, and I need to exit, kill, or stop the script if the connection times out or the IP or port does not exist or respond, without using the ssh or sshd config files. Is this possible?
I found no answer on the Stack sites or in the autossh man page.
Example 1:
myautossh script
#!/bin/bash
/usr/bin/autossh -NT -o "ExitOnForwardFailure=yes" -R 5555:localhost:443 -l user 1.1.1.1
if [ $? -eq 0 ]; then
echo "SUCCESS" >> errorlog
else
echo "FAIL" >> errorlog
fi
Example 2:
myautossh script
#!/bin/bash
/usr/bin/autossh -f -NT -M 0 -o "ServerAliveInterval=5" -o "ServerAliveCountMax=1" -o "ExitOnForwardFailure=yes" -R 5555:localhost:443 -l user 1.1.1.1 2>> errorlog
if [ $? -eq 0 ]; then
echo "SUCCESS" >> errorlog
else
echo "FAIL" >> errorlog
kill $(ps aux | grep [m]yautossh | awk '{print $2}')
fi
IP 1.1.1.1 does not exist in my network, so it gets a connection timeout, but the script and autossh are still running, as checked with:
ps aux | grep [m]yautossh
or
ps x | grep [a]utossh
I can only terminate the script with Ctrl+C.
I want to run autossh in a script, try to connect to a non-existent IP or port, and terminate, exit, or kill the autossh process so my script can continue, without touching the ssh and sshd config files, only with autossh's own options/commands and the use of -f for backgrounding. Is this possible?
The use of timeout with --preserve-status is what you need.
timeout allows you to run a command with a time limit. By default it returns 124 when the time limit is reached; otherwise it returns the exit status of the managed command. With --preserve-status it returns the exit status of the managed command even when the limit is reached.
This will terminate the command after 2 seconds and return your command's exit status; if that status is not 0, the command did not succeed and you could not establish a connection:
#!/bin/bash
timeout --preserve-status 2 /usr/bin/autossh -NT -o "ExitOnForwardFailure=yes" -R 33333:localhost:443 -l user 1.1.1.1
if [ $? -eq 0 ]; then
echo "Connection success"
else
echo "Connection fail"
fi
https://linuxize.com/post/timeout-command-in-linux/
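A quick way to see the difference in any shell, using sleep as a stand-in for autossh:
# Default behaviour: exit code 124 signals that the time limit was hit.
timeout 2 sleep 10; echo $?                    # prints 124
# With --preserve-status: exit code is that of the killed command
# (128 + SIGTERM = 143), not 124.
timeout --preserve-status 2 sleep 10; echo $?  # prints 143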

SSH not exiting properly inside if statement in bash heredoc

So I am running this script to check if a Java server is up remotely by sshing into the remote machine. If it is down, I am trying to exit and run another script locally. However, after the exit command, it is still in the remote directory.
ssh -i ec2-user@$DNS << EOF
if ! lsof -i | grep -q java ; then
echo "java server stopped running"
# want to exit ssh
exit
# after here when i check it is still in ssh
# I want to run another script locally in the same directory as the current script
./other_script.sh
else
echo "java server up"
fi;
EOF
The exit is exiting the ssh session, so execution never reaches the other_script.sh line in the heredoc. It would be better to place that call outside the heredoc and act on the exit status of the ssh/heredoc, like so:
ssh -i ec2-user@$DNS << EOF
if ! lsof -i | grep -q java ; then
echo "java server stopped running"
exit 7 # Set the exit status to a number that isn't standard in case ssh fails
else
echo "java server up"
fi;
EOF
if [[ $? -eq 7 ]]
then
./other_script.sh
fi
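One caveat, as a small sketch: $? is overwritten by every command, so if anything runs between the heredoc and the test, capture it into a variable first.
status=$?                 # capture immediately; any later command overwrites $?
if [[ $status -eq 7 ]]
then
./other_script.sh
fi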

How to check connection to a list of servers in bash?

I'm trying to check connections for a list of servers. I want to loop through the list, check whether a connection works and, if so, do some stuff; if not, echo a problem message.
My problem is:
the script stops at the first node without echoing $?.
So, what's wrong with my for loop?
These vars are included from a config file:
$nodes is a list of server IPs like 1.1.1.1,2.2.2.2,10.10.10.10
$user is one string
for node in $(echo $nodes | sed "s/,/ /g")
do
echo "Checking Node: $node"
ssh -q -o ConnectTimeout=3 $user@$node echo ok
echo $?
if [[ $? != 0 ]]
then
echo "Problem in logging into $node"
else
# do some stuff here
fi
done
EDIT #1:
for node in $(echo $nodes | sed "s/,/ /g")
do
echo "Checking Node: $node"
ssh -q -t -o ConnectTimeout=3 $user@$node "echo ok"
retcode=$?
echo $retcode
if [[ "$retcode" -ne 0 ]]
then
echo "Problem in logging into $node"
else
echo "OK"
fi
done
This is because ssh first asks you to validate the authenticity of the host, and if you accept it, it then asks for a password. That is why your command does not return to the shell and waits for input.
If your intention is just to validate the ssh connection, you may consider using
telnet <your_host> <port> < /dev/null
But if your intent is to run some commands, you need a trust relationship between the hosts. In that case you can set one up.
Execute these commands:
ssh-keygen -t rsa
then
ssh-copy-id -i root@ip_address
Now you can connect with
ssh <user>@<host>
Further information
You can add -t to allocate a virtual terminal, and put quotes around the command:
ssh -q -t -o ConnectTimeout=3 ${user}@${node} "echo ok"
Also, use -ne instead of !=, which is for comparing strings:
if [[ "$?" -ne 0 ]]
Also, echo $? clobbers the return code: the next $? is the exit status of the echo, not of ssh. You should use something like:
ssh -q -t -o ConnectTimeout=3 ${user}@${node} "echo ok"
retcode=$?
echo $retcode
if [[ "$retcode" -ne 0 ]]
You can rewrite the ssh command like this to avoid problems with ssh host keys:
ssh -q -t -o StrictHostKeyChecking=no -o ConnectTimeout=3 ${user}@${node} "echo ok"
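If the goal is a fully non-interactive check, another option (a sketch reusing the variables from the question) is BatchMode=yes, which makes ssh fail immediately instead of hanging on a password prompt:
for node in ${nodes//,/ }; do
    echo "Checking Node: $node"
    # BatchMode=yes: never ask for a password, just fail if key auth doesn't work.
    if ssh -q -o BatchMode=yes -o ConnectTimeout=3 "$user@$node" "echo ok"; then
        echo "OK"
    else
        echo "Problem in logging into $node"
    fi
done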

Catch SSH exceptions in a UNIX script

I have a background process on my server that updates my ~/.ssh/authorized_keys frequently. If I ssh from my client machine at that very moment, it fails:
$ ssh my_server date
SSH Version: OpenSSH_5.3p1
user@my_server's password:
and ssh will mark the script as failed after a number of tries.
I want to break away and add an exception-handling piece that sleeps 30 seconds whenever this ssh failure occurs.
Something like
ssh -o 'StrictHostKeyChecking no' appsrvr01.myserv.com "date" 2> /tmp/error
if [ $? -ne 0 ]
then
echo -e "\n Please wait..\n\n"
sleep 1s
else
echo -e "\n The Environment is ready to use!\n\n"
exit 0
fi
Is there any better approach, as the above snippet will still prompt for a password?
Maybe you could approach this with "flock" on a lock file in both shell scripts: the one that updates your authorized keys and the one you run above.
In the script that updates your authorized keys:
(flock 200
# commands here that modify your authorized_keys file
) 200>/tmp/authkey_lock
And around the script piece you have posted above:
(flock 200
ssh -o 'StrictHostKeyChecking no' appsrvr01.myserv.com "date" 2> /tmp/error
if [ $? -ne 0 ]
then
echo -e "\n Please wait..\n\n"
sleep 1s
else
echo -e "\n The Environment is ready to use!\n\n"
exit 0
fi
) 200>/tmp/authkey_lock
Please see "man flock" for for information about flock.

FTP File Transfers Using Piping Safely

I have a file forwarding system where a bunch of files are downloaded to a directory, de-multiplexed and copied to individual machines.
The files are forwarded as they are received by the master server, and files normally arrive in bursts. (Auth is by ssh keys.)
This script creates the sftp session, and uses a pipe to watch the head of a fifo pipe.
HOST=$1
pipe=/tmp/pipes/${HOST%%.*}
ps aux | grep -v grep | grep sftp | grep "user@$HOST" > /dev/null
if [[ $? == 0 ]]; then
echo "FTP is Running on this Server"
exit
else
pid=`ps aux | grep -v grep | grep tail | tr -s ' ' | grep $pipe`
[[ $? == 0 ]] && kill -KILL `echo $pid | cut -f2 -d' '`
fi
if [[ ! -p $pipe ]]; then
mkfifo $pipe
fi
tail -n +1 -f $pipe | sftp -o 'ServerAliveInterval 60' user@$HOST > /dev/null &
echo cd /tmp/data >>$pipe #Sends Command to Host
echo "Started FTP to $HOST"
Update: I ended up changing the cleanup code to use "ps aux" to see if an ftp session is running, and subsequently whether the tail -f is still running, grepping by user@host and the name of the pipe respectively. This is done when the script is called, and the script is called whenever I try to upload a file.
IE:
FILENAME=`basename $1`
function transfer {
echo cd /apps/data >> $2 # For Safety
echo put $1 .$FILENAME >> $2
echo rename .$FILENAME $FILENAME >> $2
echo chmod 0666 $FILENAME >> $2
}
./ftp.sh host
[ -p $pipedir/host ] && transfer $1 $pipedir/host
Files received on the master server are caught by Incron, which writes a put command and the received file's location to the fifo pipe, to be sent by sftp (a rename is also performed).
My question is: is this safe? Could this crash on ftp errors/events? I'm not really worried about login errors.
The goal is to reduce the number of ftp logins to a single session per minute (or longer) interval, and to allow files to be forwarded as they're received, with dynamic commands.
I'd prefer to use standard Ubuntu libraries, if possible.
EDIT: After testing and working through some issues, the server simply runs with:
[[ -p $pipe ]] && echo FTP is Running on this Server
ln -s $pipe $lock &> /dev/null || (echo FTP is Running on this Server && exit)
[[ ! -p $pipe ]] && mkfifo $pipe
( tail -n +1 -F $pipe & echo $! > $pipe.pid ) | tee >( sed "/tail:/ q" >/dev/null && kill $(cat $pipe.pid) |& rm -f $pipe >/dev/null; ) | sftp -i ~/.ssh/$HOST.rsa -oServerAliveInterval=60 user@$HOST &
rm -f $lock
It's rather simple but works nicely.
You might be interested in setting up a simpler (and more robust) synchronization infrastructure:
If a given host is not connected when a file arrives, it never receives it (if I understand your code correctly).
I would do something like
rsync -a -e ssh user@host:/apps/data pathToLocalDataStore
on the client machines, either periodically or by event. rsync intelligently synchronizes the files by their timestamp and size (-a includes -t).
The event would be some process termination, like this.
The client does (configure private key usage in ~/.ssh/config for the host):
#!/bin/bash
while :;do
ssh user@host /srv/bin/sleepListener 600
rsync -a -e ssh user@host:/apps/data pathToLocalDataStore
done
On the server, /srv/bin/sleepListener is a symbolic link to /bin/sleep.
The server, after receiving a new file, runs:
killall sleepListener
Note: every 10 minutes a full check is performed, so it doesn't matter if nodes go offline/online.
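On the server side, the wake-up can be wired to whatever already detects new files; a minimal sketch using inotifywait (assumes the inotify-tools package is installed and /apps/data is the watched directory):
#!/bin/bash
# Whenever a file finishes being written into /apps/data,
# wake up any clients currently blocked in sleepListener.
while inotifywait -e close_write /apps/data > /dev/null 2>&1
do
    killall sleepListener
done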
