Locking a remote host in a pool of hosts - bash

Suppose I have a pool of hosts in a file somewhere, hosts.txt:
hosta@domain
hostb@domain
hostc@domain
I would like to be able to lock a host (something like flock /tmp/lockfile) so that other programs can't use that host. I would like to accomplish this with a bash script which iterates over the list of hosts, ssh'es into that host, tries to flock /tmp/lockfile on that machine, and if successful, returns with the name of the host. If unsuccessful, maybe it could return with some error code.
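For reference, the flock -n semantics this scheme relies on can be sketched locally; the lock file here is a throwaway temp file rather than the real /tmp/lockfile, and the background subshell stands in for whatever other program holds the lock:

```shell
#!/bin/bash
# Sketch: flock -n fails immediately when another process holds the lock.
lockfile=$(mktemp)
# Hold the lock in a background subshell for ~2 seconds via fd 9.
(
    flock -n 9 || exit 1
    sleep 2
) 9>"$lockfile" &
sleep 1   # let the background job grab the lock first
# A second non-blocking attempt should now fail.
if flock -n "$lockfile" -c true; then
    state=free
else
    state=taken
fi
echo "lock is $state"
wait
rm -f "$lockfile"
```

The same two-step shape (one process holding, a second probing with -n) is what the script below tries to do over ssh.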
Currently, I have something like this:
#!/bin/bash
USER=someuser
while read host; do
    echo -n "Trying ${host}... "
    xterm -e "ssh \"${USER}@${host}\" 'flock -n /tmp/lockfile -c \"echo Locked $host; read\"'" &
    FLOCK_PID=$!
    sleep 1
    if kill -0 $FLOCK_PID 2>/dev/null; then
        echo "${host}"
        exit 0
    fi
done <hosts.txt
exit 1
This works, but there are a few deficiencies. Because of the read command, the only way I can think of not to block the shell is to start an xterm. Is there a more elegant way to accomplish what I want?
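On the blocking question: the xterm is only there to keep a process attached to the blocked read, but a plain background job works just as well, since the kill -0 liveness check doesn't need a terminal. A minimal sketch of that polling pattern, with a local sleep standing in for the ssh/flock command:

```shell
#!/bin/bash
# Sketch: poll a background job with kill -0, no terminal needed.
# In the real script the background job would be something like:
#   ssh "${USER}@${host}" 'flock -n /tmp/lockfile -c "echo locked; exec sleep 1000"' &
sleep 5 &                 # stand-in for the ssh/flock command
FLOCK_PID=$!
sleep 1                   # give it time to exit if the lock was taken
if kill -0 "$FLOCK_PID" 2>/dev/null; then
    result=alive          # lock acquired; the holder keeps running in the background
else
    result=dead           # flock -n failed and the job exited
fi
echo "$result"
kill "$FLOCK_PID" 2>/dev/null   # demo cleanup
```

Replacing the remote read with a long sleep (as in the comment) also removes the need for a tty on the remote side.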


sshpass exit in automation

I have a total of 6 IP addresses, and only 2 of them are valid. I wrote a shell script that uses sshpass to test each IP.
The issue is that when the script reaches an IP that works, it logs in to the system (a Cisco switch) and stays there, not continuing the loop to test the remaining IPs. If I type "exit" on the system, then it continues with the loop.
After a successful login, how can the script automatically log out of the system and continue testing the remaining IPs?
/usr/bin/sshpass -p $ADMINPASS ssh -oStrictHostKeyChecking=no -oCheckHostIP=no -t $ADMINLOGIN@$IP exit
I can use the exit status to figure out which IP worked and which didn't.
Testing first whether the IP is alive, and then ssh'ing into it, could help you. I don't know if you are using a loop or not, but a loop can be a good choice. It could look like:
for f in ip-1 ip-2 ip-3 ip-4 ip-5 ip-6; do ping -c 1 -w 3 $f; if [ $? -eq 0 ]; then echo OK; ssh_pass $f your_command; else echo "IP is NOK"; fi; done
You can then also add an 'exit' command, depending on what you test: 'exit 0' after your 'ssh' command if it is OK, 'exit 1' if NOK.

Checking SSH failure in a script

Hi, what is the best way to check whether SSH fails, for whatever reason?
Can I use an if statement (if it fails, then do something)?
I'm using the ssh command in a loop and passing my host names from a flat file.
So I do something like:
for i in `cat /tmp/hosts` ; do ssh $i 'hostname;sudo ethtool eth1'; done
Sometimes I get this error, or I just cannot connect:
ssh: host1 Temporary failure in name resolution
I want to skip the hosts that I cannot connect to if SSH fails. What is the best way to do this? Is there a runtime error I can trap to bypass the hosts that I cannot ssh into for whatever reason, perhaps because ssh is not allowed or I do not have the right password?
Thanking you in advance
Cheers
To check if there was a problem connecting and/or running the remote command:
if ! ssh host command
then
echo "SSH connection or remote command failed"
fi
To check if there was a problem connecting, regardless of success of the remote command (unless it happens to return status 255, which is rare):
if ssh host command; [ $? -eq 255 ]
then
echo "SSH connection failed"
fi
Applied to your example, this would be:
for i in `cat /tmp/hosts` ;
do
if ! ssh $i 'hostname;sudo ethtool eth1';
then
echo "Connection or remote command on $i failed";
fi
done
You can check the return value that ssh gives you as originally shown here:
How to create a bash script to check the SSH connection?
$ ssh -q user@downhost exit
$ echo $?
255
$ ssh -q user@uphost exit
$ echo $?
0
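The 255-versus-anything-else distinction can be factored into a small helper: ssh reserves 255 for connection failures, while any other status is the remote command's own exit code. A sketch of that classification, exercised with local stand-in commands instead of real ssh connections:

```shell
#!/bin/bash
# Sketch: classify an exit status the way ssh's should be read.
classify() {
    "$@"
    case $? in
        0)   echo up ;;           # connected and remote command succeeded
        255) echo unreachable ;;  # ssh itself failed to connect
        *)   echo remote-error ;; # connected, but the remote command failed
    esac
}
up=$(classify true)                   # stand-in for: ssh -q user@uphost exit
down=$(classify bash -c 'exit 255')   # stand-in for: ssh -q user@downhost exit
echo "$up $down"
```

With real ssh the stand-ins would simply be replaced by `ssh -q user@host exit`.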
EDIT - I cheated and used nc
Something like this:
#!/bin/bash
ssh_port_is_open() { nc -z ${1:?hostname} 22 > /dev/null; }
for host in `cat /tmp/hosts` ; do
    if ssh_port_is_open $host; then
        ssh -o "BatchMode=yes" $host 'hostname; sudo ethtool eth1';
    else
        echo " $host Down"
    fi
done

Bash: Subprocess access variables

I want to write a bash script which logs into several machines via ssh, first shows their hostname, and then executes a command (the same command on every machine). The hostname and the output of the command should be displayed together. I wanted a parallel version, so the ssh commands should run in the background and in parallel.
I constructed the bash script attached below.
The problem is: as the runonip function is executed in a subshell, it has no access to the DATA array to store the results. Is it somehow possible to give the subshell access to the array, perhaps via a "pass by reference" to the function?
Code:
#!/bin/bash
set -u
if [ $# -eq 0 ]; then
    echo "Need Arguments: Command to run"
    exit 1
fi
DATA=()
PIDS=()
#Function to run in the background for each ip
function runonip {
    ip="$1"
    no="$2"
    cmds="$3"
    DATA[$no]=$( {
        echo "Connecting to $ip"
        ssh $ip cat /etc/hostname
        ssh $ip $cmds
    } 2>&1 )
}
ips=$(get ips somewhere)
i=0
for ip in $ips; do
    #Initialize variables
    i=$(($i+1))
    DATA[$i]="n/a"
    #Fork the runonip function to the background
    runonip "$ip" "$i" "$*" &
    #Save PID for later waiting
    PIDS[$i]="$!"
done
#Wait for all subprocesses
for job in ${PIDS[@]}; do
    wait $job
done
#Everybody finished, so output the information from DATA
for x in `seq 1 $i`; do
    echo ${DATA[$x]}
done
No, it's really not. The subshell runs in an entirely separate operating system process, and the only way for two processes to share memory is for their code to set that up explicitly with system calls. Bash doesn't do that.
What you need to do is find some other way for the two processes to communicate. Temporary files named after the PIDs would be one way:
#Fork the runonip function to the background
runonip "$ip" "$i" "$*" >data-tmp &
mv data-tmp data-$!
And then cat the files:
#Everybody finished, so output the information from the temp files
for x in ${PIDS[@]}; do
    cat data-$x
    rm data-$x
done
You might be able to set up a named pipe to do interprocess communication.
Another possibility, in Bash 4, might be to use coprocesses.
Additional references:
Named Pipes
Using File Descriptors with Named Pipes
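For the coprocess route mentioned above (Bash 4+), the parent gets a file descriptor pair to the background job, so output can be read back without temp files. A minimal sketch, with an echo standing in for the ssh commands; the keep-alive read in the worker avoids the known race where Bash closes the pipe fds as soon as the coprocess exits:

```shell
#!/bin/bash
# Sketch: read a background job's output through a coprocess (Bash 4+).
coproc WORKER {
    # stand-in for: ssh $ip cat /etc/hostname; ssh $ip $cmds
    echo "result-from-worker"
    read -r _ignored            # stay alive until the parent replies
}
read -r line <&"${WORKER[0]}"   # parent reads from the coprocess's stdout
echo done >&"${WORKER[1]}"      # unblock the worker so it can exit
wait
echo "$line"
```

One coprocess at a time is the comfortable case; for many parallel jobs the temp-file approach above scales more simply.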

Making bash script to check connectivity and change connection if necessary. Help me improve it?

My connection is flaky, but I have a backup one. I made some bash scripts to check for connectivity and change the connection if the present one is dead. Please help me improve them.
The scripts almost work, except for not waiting long enough to receive an IP (the until loop cycles to the next step too quickly). Here goes:
#!/bin/bash
# Invoke this script with paths to your connection specific scripts, for example
# ./gotnet.sh ./connection.sh ./connection2.sh
until [ -z "$1" ] # Try different connections until we are online...
do
if eval "ping -c 1 google.com"
then
echo "we are online!" && break
else
$1 # Runs (next) connection-script.
echo
fi
shift
done
echo # Extra line feed.
exit 0
And here is an example of the slave scripts:
#!/bin/bash
ifconfig wlan0 down
ifconfig wlan0 up
iwconfig wlan0 key 1234567890
iwconfig wlan0 essid example
sleep 1
dhclient -1 -nw wlan0
sleep 3
exit 0
Here's one way to do it:
#!/bin/bash
while true; do
    if ! ping -c 1 google.com > /dev/null 2>&1; then #if ping exits nonzero...
        ./connection_script1.sh #run the first script
        sleep 10 #give it a few seconds to complete
    fi
    if ! ping -c 1 google.com > /dev/null 2>&1; then #if ping *still* exits nonzero...
        ./connection_script2.sh #run the second script
        sleep 10 #give it a few seconds to complete
    fi
    sleep 300 #check again in five minutes
done
Adjust the sleep times and ping count to your preference. This script never exits so you would most likely want to run it with the following command:
./connection_daemon.sh > /dev/null 2>&1 & disown
Have you tried omitting the -nw option from the dhclient command?
Also, remove the eval and the quotes from your if statement; they aren't necessary. Do it like this:
if ping -c 1 google.com > /dev/null 2>&1
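Put together, the question's until loop with the eval removed and the ping silenced looks like the sketch below. Here a counter-based stand-in replaces the real ping test and connection scripts, so the loop structure itself is runnable anywhere:

```shell
#!/bin/bash
# Sketch: the until-loop with eval removed. online() and try_connect()
# are stand-ins for the real ping test and connection scripts.
attempts=0
online() { [ "$attempts" -ge 2 ]; }           # real test: ping -c 1 google.com > /dev/null 2>&1
try_connect() { attempts=$((attempts + 1)); } # real action: run the next connection script
until online; do
    try_connect
done
echo "we are online after $attempts attempts"
```

In the real script, try_connect would also be where the shift over "$@" happens, so each failed check moves on to the next connection script.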
Try using the ssh option ConnectTimeout=${timeout} somewhere.
