How to drop ssh session in shell? ssh keeps session alive even after script exits - bash

How do I drop the ssh session in an automation script written in bash?
I have a script running locally that triggers a script on a remote machine, and the script on the remote machine then starts another script running in the background...
I want to drop the session while the remote machine keeps running the background script, so I use nohup.
I have a local script, localScript, as follows:
#!/bin/bash
echo "start remote trigger script..."
./trigger
The trigger script is already in place on my remote machine, with the following lines:
#!/bin/bash
echo "start script test..."
nohup ./test > output &
echo "start test script in background, exit..."
exit
The test script is a basic sleep loop just for testing...
#!/bin/bash
c=1
while [ "$c" -le 10 ]
do
echo "sleep 10 seconds, c=$c"
sleep 10s
c=$((c+1))
if [ "$c" -eq 10 ]
then
echo "max count reach, exit"
exit
fi
done
But what I found is that ssh keeps the session alive (it sits idle for the ~100 seconds the test script runs). How can I drop the session?
The command I use is
sshpass -p XXXX ssh -o StrictHostKeyChecking=no user@IP 'bash -s' < localScript

This should work if you force pseudo-terminal allocation (-t):
sshpass -p XXXX ssh -t -o StrictHostKeyChecking=no user@IP 'bash -s' < localScript
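An alternative sketch that avoids allocating a tty: ssh also keeps the session open because the nohup'd job still holds the session's stdout/stderr, so detaching all three streams in the trigger script (one line changed, untested here) should let the session close on its own:
#!/bin/bash
echo "start script test..."
# redirect stdin/stdout/stderr so the background job no longer ties up the ssh session
nohup ./test > output 2>&1 < /dev/null &
echo "start test script in background, exit..."
exit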

Related

SSH not exiting properly inside if statement in bash heredoc

So I am running this script to check whether a Java server is up, by sshing into the remote machine. If it is down, I am trying to exit and run another script locally. However, after the exit command, it is still in the remote session.
ssh -i ec2-user@$DNS << EOF
if ! lsof -i | grep -q java ; then
    echo "java server stopped running"
    # want to exit ssh
    exit
    # after here when i check it is still in ssh
    # I want to run another script locally in the same directory as the current script
    ./other_script.sh
else
    echo "java server up"
fi;
EOF
The exit is ending the ssh session, so execution never reaches the ./other_script.sh line in the heredoc. It would be better to place that call outside the heredoc and act on the exit status of the heredoc/ssh, like so:
ssh -i ec2-user@$DNS << EOF
if ! lsof -i | grep -q java ; then
    echo "java server stopped running"
    exit 7 # Set the exit status to a number that isn't standard, in case ssh fails
else
    echo "java server up"
fi;
EOF
if [[ $? -eq 7 ]]
then
    ./other_script.sh
fi
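One caveat with the check at the end: anything that runs between the ssh command and the test will overwrite $?. A small variant of the same check (a sketch, nothing else changed) captures the status on the line right after EOF:
status=$?   # grab the ssh/heredoc exit status before anything else can overwrite it
if [[ $status -eq 7 ]]
then
    ./other_script.sh
fi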

Unable to fully close remote SSH tunnel in script's exit

I'm writing a script which double-hop SSH-forwards port 80 from our remotely deployed VMs and opens this "status page" in a local browser. To open it, the SSH tunnel must be backgrounded; however, doing so leaves a persistent orphaned tunnel behind on the SSH server I'm tunneling through (bastion). Here is the script so far:
#!/bin/sh
# SSH needs a HUP when this script exits
shopt -s huponexit
echo "SSH Forwards the VM status page for a given host..."
read -p "Host Name: " CODE
PORT=$(($RANDOM + 1024))
# "-t -t" (force tty) needed to avoid orphan tunnels on bastion after exit. (Only seems to work when not backgrounded?)
ssh -t -t -4L $PORT:localhost:$PORT user1@bastion sudo ssh -4NL $PORT:localhost:80 root@$CODE.internal-vms &
PID=$!
# Open browser to VM Status Page
sleep 1
open http://localhost:$PORT/
# Runs the SSH tunnel in the background, ensuring it gets killed on shell's exit...
bash
kill -CONT $PID
#kill -QUIT $PID
echo "Killed SSH Tunnel. Exiting..."
sleep 2
Unfortunately, because the SSH tunnel is backgrounded (the & on the ssh line), when the script is killed (via CTRL-C) the "bastion" server is left with an orphaned SSH connection that remains indefinitely.
The "-t -t" and "shopt -s huponexit" are fixes I've tried, but they don't seem to help. I've also tried various signals in the final kill. What am I doing wrong here? Thanks for the assistance!
The -f flag can be used to background the process. To end the connection, ssh -O exit user1@bastion is a better option than kill, which is rather violent.
I would do it like this. FYI, I didn't test the modified script, although I regularly use a similar, long SSH command.
#!/bin/sh
# SSH needs a HUP when this script exits
shopt -s huponexit
echo "SSH Forwards the VM status page for a given host..."
read -p "Host Name: " CODE
PORT=$(($RANDOM + 1024))
# "-t -t" (force tty) needed to avoid orphan tunnels on bastion after exit. (Only seems to work when not backgrounded?)
ssh -t -t -f -4L $PORT:localhost:$PORT user1@bastion sudo ssh -4NL $PORT:localhost:80 root@$CODE.internal-vms
#PID=$!
# Open browser to VM Status Page
sleep 1
open http://localhost:$PORT/
# Runs the SSH tunnel in the background, ensuring it gets killed on shell's exit...
#bash
#kill -CONT $PID
#kill -QUIT $PID
ssh -O exit user1@bastion
echo "Killed SSH Tunnel. Exiting..."
sleep 2
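One note on ssh -O exit: it talks to an OpenSSH control (multiplexing) socket, so the tunnel needs to be started as a multiplexing master for it to have anything to close. A sketch of just the two relevant lines, assuming an arbitrary socket path of /tmp/bastion.sock:
# start the outer hop as a multiplexing master (-M) with an explicit control socket (-S)
ssh -f -M -S /tmp/bastion.sock -4L $PORT:localhost:$PORT user1@bastion sudo ssh -4NL $PORT:localhost:80 root@$CODE.internal-vms
# later, tear the whole connection down through the same socket
ssh -S /tmp/bastion.sock -O exit user1@bastion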

Catch SSH exceptions in a UNIX script

I have a background process on my server which updates my ~/.ssh/authorized_keys frequently. If I ssh from my client machine at the very moment the file is being rewritten, it will fail:
$ ssh my_server date
SSH Version: OpenSSH_5.3p1
user#my_server's password:
and ssh will mark the script as failed after a number of tries.
I want to break out of this and add an exception-handling piece that sleeps 30 seconds whenever this ssh failure occurs.
Something like
ssh -o 'StrictHostKeyChecking no' appsrvr01.myserv.com "date" 2> /tmp/error
if [ $? -ne 0 ]
then
    echo -e "\n Please wait..\n\n"
    sleep 1s
else
    echo -e "\n The Environment is ready to use!\n\n"
    exit 0
fi
Is there a better approach, since the above snippet will still prompt for a password?
Maybe you could approach this with flock: take a lock on a lock file while modifying authorized_keys, and take the same lock around the ssh snippet you posted above:
In the script that updates your authorized keys:
(flock 200
    # commands here that modify your authorized_keys file
) 200>/tmp/authkey_lock
And around the script piece you have posted above:
(flock 200
    ssh -o 'StrictHostKeyChecking no' appsrvr01.myserv.com "date" 2> /tmp/error
    if [ $? -ne 0 ]
    then
        echo -e "\n Please wait..\n\n"
        sleep 1s
    else
        echo -e "\n The Environment is ready to use!\n\n"
        exit 0
    fi
) 200>/tmp/authkey_lock
Please see "man flock" for for information about flock.
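On the password prompt itself: the snippet above will still stop and ask for a password whenever key authentication fails. A rough sketch that combines the 30-second retry idea from the question with BatchMode=yes, which makes ssh return a non-zero status instead of prompting:
until ssh -o BatchMode=yes -o 'StrictHostKeyChecking no' appsrvr01.myserv.com "date" 2> /tmp/error
do
    # key auth failed (e.g. authorized_keys was mid-rewrite); wait and try again
    echo -e "\n Please wait..\n\n"
    sleep 30s
done
echo -e "\n The Environment is ready to use!\n\n"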

How can I trigger multiple remote shell scripts in bash to run concurrently and then wait for them to complete?

I have a bash script which runs a number of tasks on remote machines via SSH. Each script is quite long, so I'd like to run them concurrently as background tasks. I also need them all to complete before moving on. I know I can use the wait command for the latter, but when I stick & at the end to make each one a background task, it all stops working.
By "stops working" I mean that the scripts don't seem to have run, although the main script otherwise still completes.
ssh root@machine1 'bash -s' < script1 my_parameter
ssh root@machine2 'bash -s' < script2 my_parameter
ssh root@machine3 'bash -s' < script3 my_parameter
wait
some_other_task
This works fine for me with Ubuntu 11.04:
#!/bin/bash
ssh user@server1 <command.sh &
ssh user@server2 <command.sh &
ssh user@server3 <command.sh &
wait
echo end
Content of command.sh:
sleep 5
hostname
Update:
With command-line arguments it gets more complicated. You can use a here-document and set -- to set $1, $2 and $3.
#!/bin/bash
ssh -T root@server1 << EOF &
#!/bin/bash
set -- $my_parameter1 $my_parameter2 $my_parameter3
$(cat script1)
EOF
ssh -T root@server2 << EOF &
#!/bin/bash
set -- $my_parameter1 $my_parameter2 $my_parameter3
$(cat script2)
EOF
ssh -T root@server3 << EOF &
#!/bin/bash
set -- $my_parameter1 $my_parameter2 $my_parameter3
$(cat script3)
EOF
wait
echo end
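A hedged alternative, untested here: keep the bash -s < script style from the question and pass the arguments after -s (they become $1, $2, ... inside the script), backgrounding each ssh exactly as in the first example:
ssh root@machine1 'bash -s' my_parameter < script1 &
ssh root@machine2 'bash -s' my_parameter < script2 &
ssh root@machine3 'bash -s' my_parameter < script3 &
wait
some_other_task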

ssh script doesn't return control to the parent script

I am trying to execute a local script on the remote server by writing it to the standard input of an ssh command. The script runs fine, but then ssh doesn't exit: it just hangs, and control doesn't return to the parent script.
Parent Shell :
for HOSTNAME in ${HOSTS} ; do
    ssh -t -t $HOSTNAME "bash -s" < ~backup_conf.sh
done
Called Script:
#!/bin/sh
AGENT_BASE_PATH=/home/lokesh
if [ -d "$AGENT_BASE_PATH/CI/DE_deployment/conf" ]; then
if [ -d "$AGENT_BASE_PATH/CI/temp/conf_bkup" ]; then
rm -rf $AGENT_BASE_PATH/CI/temp/conf_bkup
fi
cp -R $AGENT_BASE_PATH/CI/DE_deployment/conf $AGENT_BASE_PATH/CI/temp/conf_bkup
fi
exit
I have written 'exit', but control does not return to the parent script.
It hangs at the remote server... :(
The culprit is the bash -s line in your calling code, which is still waiting for its input to be ended (otherwise you have to Ctrl-C it by hand):
Try this instead:
for HOSTNAME in ${HOSTS} ; do
    ssh -t -t $HOSTNAME "bash ~backup_conf.sh"
done
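If you would rather keep streaming the script over stdin, a sketch of the other common fix is to drop the forced tty (-t -t), so the remote bash sees end-of-input once the script has been read and exits on its own:
for HOSTNAME in ${HOSTS} ; do
    # no -t -t here: bash -s exits as soon as it reaches end-of-input
    ssh $HOSTNAME "bash -s" < ~backup_conf.sh
done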
Write your exit status into a file on the remote host and pick it up later from the remote host with ssh/scp/sftp.
You will not get an exit status from the other host directly via ssh.
