I need to ssh into memcached servers and run commands to ensure connectivity.
I am supposed to re-use the same ssh connection and keep executing commands, storing their output in a log. This is meant to be a scheduled job that runs at specific intervals.
Code 1 makes multiple ssh connections for each execution.
#!/bin/bash
USERNAME=ec2-user
HOSTS="10.243.107.xx 10.243.124.xx"
KEY=/home/xxx/xxx.pem
SCRIPT="echo stats | nc localhost 11211 | grep cmd_flush"
for HOSTNAME in ${HOSTS} ; do
    ssh -l ${USERNAME} -i ${KEY} ${HOSTNAME} "${SCRIPT}"
done
Code 2 hangs after ssh.
#!/bin/bash
USERNAME=ec2-user
KEY=/home/xxx/xxx.pem
ssh -l ${USERNAME} -i ${KEY} 10.243.xx.xx
while:
do
echo stats | nc localhost 11211 | grep cmd_flush
sleep 1
done
Is there any better way of doing this?
Since you want Code 2 to run the infinite while loop on the remote host, you can pass the whole loop to ssh as a quoted argument, after fixing the while statement (while: is a syntax error; it needs a space, as in while : or while true):
#!/bin/bash
USERNAME=ec2-user
KEY=/home/xxx/xxx.pem
ssh -l ${USERNAME} -i ${KEY} 10.243.xx.xx '
while true
do
    echo stats | nc localhost 11211 | grep cmd_flush
    sleep 1
done
'
I have to warn that the whole approach is somewhat fragile, though. Long-running ssh connections tend to die for various reasons, which don't always mean the connectivity is actually broken. Make sure the parent script that calls this notices dropped ssh connections and tries again. You can put this script in a loop, log a warning each time the connection is dropped, and log an error if the connection cannot be re-established.
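A minimal sketch of such a wrapper, assuming the ssh loop above is saved as check_remote.sh (a hypothetical name) and that logging to /var/log/memcheck.log is acceptable:
#!/bin/bash
# Outer wrapper: restart the monitoring script whenever the ssh connection drops.
while true; do
    ./check_remote.sh >> /var/log/memcheck.log 2>&1
    echo "$(date): ssh connection dropped, retrying in 5s" >> /var/log/memcheck.log
    sleep 5
done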
Related
I want to write a script to test my server: I want to be able to connect and disconnect thousands of users using nc. What I came up with is:
echo "Test" | nc localhost <port> &
NCPID=$!
sleep 1
kill -kill $NCPID
But I'd like to remove the sleep 1 and still have the netcat connection closed after "Test" is echoed. How could I do that?
Check the -w option of netcat:
-w timeout  If a connection and stdin are idle for more than timeout seconds,
            then the connection is silently closed. The -w flag has no effect
            on the -l option, i.e. nc will listen forever for a connection,
            with or without the -w flag. The default is no timeout.
echo "Test" | nc localhost <port> -w 1 &
I have a script that runs inside a docker container and performs some actions we need for internal debugging purposes:
set -eu
echo "Starting i/o test for host"
IP_HOST=$(ip a | grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b" | grep 172.17 | awk 'NR==1{print $1}')
echo "Detected IP of host is $IP_HOST"
sshpass -p tcuser ssh -o StrictHostKeyChecking=no docker@localhost -t -t
echo "Then"
the output here is exactly:
bash-5.0# sh /etc/cron.d/iotesthost.sh
Mon Mar 2 12:43:59 UTC 2020
Starting i/o test for host
Detected IP of host is 172.17.0.1
Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
( '>')
/) TC (\ Core is distributed with ABSOLUTELY NO WARRANTY.
(/-_--_-\) www.tinycorelinux.net
and as the last line is reached, the script exits bash or the crond execution, so I never get to the lines after sshpass/ssh and never reach echo "Then".
What is the reason for the script exiting, and how can I work around it while still automatically accepting host keys? (I need that because each time the docker container calls the script, it is a fresh container.)
If I omit -t -t, I get the error described in https://stackoverflow.com/a/7122115/1759063.
See https://askubuntu.com/questions/87449/how-to-disable-strict-host-key-checking-in-ssh
or
echo "StrictHostKeyChecking no" >> /etc/ssh_config
for a global solution. Please note that this disables the host key security check entirely.
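If disabling the check globally is too broad, a per-host alternative in ~/.ssh/config (or the system-wide client config) is a minimal sketch like the following, assuming OpenSSH; UserKnownHostsFile /dev/null also suppresses the "Permanently added" warnings, which fits a container that is recreated on every run:
Host localhost
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null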
I would like to know what is the difference between the below commands:
ssh vagrant@someipaddress
cd /home/vagrant/
grep -i "something" data.txt
and
ssh vagrant@someipaddress 'cd /home/vagrant; cat data.txt' | grep -i "something"
From this website, it seems you can send multiple commands to the remote server. Is the second option actually logging into the server? What is the benefit of the second approach?
Strictly speaking, from the example provided:
The first command:
logs onto the remote server,
executes a couple of commands, and
stays logged on to the server.
The second command runs half on the remote machine, logs out of the remote machine, and then pipes the output to grep on your local machine, all in one command line.
Breaking down what's happening:
ssh vagrant@someipaddress 'cd /home/vagrant; cat data.txt' | grep -i "something"
The final section, | grep -i "something", runs on your local PC, operating on the output of the ssh session.
The single quotes ' contain the entire remote command block.
The double quotes " contain individual arguments within that command block.
You may have meant to do this:
ssh vagrant@someipaddress 'cd /home/vagrant; cat data.txt' | grep -i "something"
where the grep section runs locally.
Or you may have intentionally done this:
ssh vagrant@someipaddress 'cd /home/vagrant/; grep -i "something" data.txt'
where the entire command runs on the server.
Either way, the end result is that you automatically log out of the remote machine, and the whole command sequence is executed in one hit.
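If in doubt about which side a given command runs on, a quick check is to print the hostname in both places; a minimal sketch (someipaddress is a placeholder):
ssh vagrant@someipaddress 'hostname'   # prints the remote machine's hostname
hostname                               # runs locally, after the ssh session ends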
I execute my bash script PLCCheck as a process:
./PLCCheck &
PLCCheck
while read -r line
do
    ...
    def_host=192.168.100.110
    def_port=6002
    HOST=${2:-$def_host}
    PORT=${3:-$def_port}
    echo -n "OKConnection" | netcat -u -c $HOST $PORT
done < <(netcat -u -l -p 6001)
It listens on UDP port 6001.
When I want to execute my second bash script SQLCheck as a process that listens on UDP port 4001:
./SQLCheck &
SQLCheck
while read -r line
do
    ...
    def_host=192.168.100.110
    def_port=6002
    HOST=${2:-$def_host}
    PORT=${3:-$def_port}
    echo -n "OPENEF1" | netcat -u -c $HOST $PORT
done < <(nc -l -p 4001)
I got this error:
Error: Couldn't setup listening socket (err=-3)
Ports 6001 and 4001 are open in iptables, and each script works on its own as a single process. Why do I get this error?
I have checked the man page of nc. I think it is being used in the wrong way:
-l      Used to specify that nc should listen for an incoming connection
        rather than initiate a connection to a remote host. It is an error
        to use this option in conjunction with the -p, -s, or -z options.
        Additionally, any timeouts specified with the -w option are ignored.
...
-p source_port
        Specifies the source port nc should use, subject to privilege
        restrictions and availability. It is an error to use this option in
        conjunction with the -l option.
According to this, one should not use the -l option together with the -p option!
Try it without -p: just nc -l 4001. Maybe that is the error...
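A minimal sketch of the corrected listener under that reading of the man page; note that this applies to the BSD-style nc, while Debian's traditional netcat does expect -l together with -p, so it depends on which variant is installed:
# Listen on UDP 4001 without -p (BSD-style nc).
while read -r line; do
    echo "received: $line"   # placeholder for the real processing
done < <(nc -u -l 4001)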
I have been trying to enter an ssh connection automatically using a script. This previous SOF post has helped me so far. Using one connection works (the first ssh statement). However, I want to create another ssh connection once connected, which I thought could look like this:
#! /bin/bash
# My ssh script
sshpass -p "MY_PASSWORD1" ssh -o StrictHostKeyChecking=no *my_hostname_1*
sshpass -p "MY_PASSWORD2" ssh -o StrictHostKeyChecking=no *my_hostname_2*
When running the script, I only get connected to my_hostname_1, and the second ssh command is not run until I exit the first ssh connection.
I've tried using an if statement like this:
if [ "$HOSTNAME" = my_host_name_1 ]; then
sshpass -p "MY_PASSWORD2" ssh -o StrictHostKeyChecking=no *my_hostname_2*
fi
but I can't get any commands to be read until I exit the first connection.
Here is a ProxyCommand example as suggested by @lihao:
#!/bin/bash
sshpass -p "MY_PASSWORD2" ssh -o StrictHostKeyChecking=no \
-o ProxyCommand='sshpass -p "MY_PASSWORD1" ssh my_hostname_1 netcat -w 1 %h %p' \
my_hostname_2
You are proxying through the first host to get to the second. This assumes you have netcat installed on my_hostname_1, the proxy host where the ProxyCommand's netcat actually runs. If not, you'll need to install it.
You can also set this up in your ~/.ssh/config file so you don't need the proxy stuff on the command line:
Host my_hostname_1
    HostName my_hostname_1

Host my_hostname_2
    HostName my_hostname_2
    ProxyCommand ssh my_hostname_1 netcat -w 1 %h %p
However, password handling is a little trickier this way. While you could put sshpass into the ProxyCommand here, keeping passwords in plain text is not a great idea. Using key-based authentication might be better.
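A minimal sketch of that alternative in ~/.ssh/config, assuming OpenSSH 7.3+ (for ProxyJump) and that a key pair has already been installed on both hosts, e.g. with ssh-copy-id:
Host my_hostname_2
    ProxyJump my_hostname_1
    IdentityFile ~/.ssh/id_ed25519
With this in place, a plain ssh my_hostname_2 hops through my_hostname_1 without any sshpass.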
A Bash script is a sequence of commands.
echo moo
echo bar
will run echo moo and wait for it to complete, then run the next command.
You can run a remote command like this:
ssh remote echo moo
which will connect to remote, run the command, and exit. If there are additional commands in the script file after this, the shell executing the script will continue with the next one, obviously on the host where you started the script.
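To make the sequencing concrete, a small sketch (remote is a placeholder host):
ssh remote 'echo moo'   # runs on the remote host; the script waits for it to finish
echo bar                # runs locally, only after the ssh command has returned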
To connect to one host from another, you could in principle do
ssh host1 ssh host2
but the ProxyCommand approach suggested by @zerodiff improves on several aspects of the experience.