SCP loop stops executing after some time - bash

So I have these two versions of the same script. Both attempt to copy my profile to all the servers in my infrastructure (about 5k). The problem I am having is that no matter which version I use, the process always gets stuck somewhere around 300 servers. It does not matter whether I run it sequentially or in parallel; both versions fail, and both at a random server. I don't get any error message (yes, I know I'm redirecting error messages to null for now); it simply stops executing after reaching a random point close to 300 servers and just lingers there doing nothing.
The best run I could get made it through about 357 servers.
There is probably some detail I'm unaware of that is causing this. Could someone advise?
Sequential
#!/bin/bash
clear
echo "$(date) - Process started"
all_count="$( cat all_servers.txt | wc -l )"
while read server
do
scp -B -o "StrictHostKeyChecking no" ./.bash_profile rouser@${server}:/home/rosuer/ && echo "$server - Done!" >> ./log.log || echo "$server - Failed!" >> ./log.log
done <<< "$( cat all_servers.txt )"
echo "$(date) - Process completed!!"
Parallel
#!/bin/bash
clear
echo "$(date) - Process started"
all_count="$( cat all_servers.txt | wc -l )"
while read server
do
scp -B -o "StrictHostKeyChecking no" ./.bash_profile rouser@${server}:/home/rosuer/ && echo "$server - Done!" >> ./log.log || echo "$server - Failed!" >> ./log.log &
done <<< "$( cat all_servers.txt )"
wait
echo "$(date) - Process completed!!"

Let's start with better input parsing. Instead of parsing a bash herestring built from a POSIX command substitution via a while read loop, I've got the while read loop reading your server list directly via redirection from the file (this assumes one server per line in that file; I can fix this if that's not the case). If the contents of all_servers.txt were too long for a command line, you'd experience an error and/or premature termination.
I've also removed extraneous ./ items and I assume that rouser's home directory on each server is in fact /home/rouser (scp defaults to the home directory if given a relative path or no path at all).
Sequential
#!/bin/bash
clear
echo "$(date) - Process started"
all_count="$( cat all_servers.txt | wc -l )"
while read server
do
scp -B -o "StrictHostKeyChecking no" .bash_profile rouser@${server}: \
&& echo "$server - Done!" >> log.log \
|| echo "$server - Failed!" >> log.log
done < all_servers.txt
echo "$(date) - Process completed!!"
Parallel
For the Parallel solution, I've enclosed your conditional in parentheses just in case the trailing & was backgrounding the wrong command.
#!/bin/bash
clear
echo "$(date) - Process started"
all_count="$( cat all_servers.txt | wc -l )"
while read server
do
(
scp -B -o "StrictHostKeyChecking no" .bash_profile rouser@${server}: \
&& echo "$server - Done!" >> log.log \
|| echo "$server - Failed!" >> log.log
) &
done < all_servers.txt
wait
echo "$(date) - Process completed!!"
SSH keys
I highly recommend learning more about SSH. The scp -B flag was unknown to me because I'm used to using SSH keys and ssh-agent, which will make such connectivity seamless (use passwordless keys if you're running this in a cron job).
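As a minimal sketch of that setup (the key type, file names, and example host below are assumptions; adapt them to your environment):

# generate a key pair once on the machine that runs the copy script
# (an empty passphrase, -N "", only makes sense if this runs unattended from cron)
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""
# load the key into an agent for the current session
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519
# install the public key on a target server so scp stops prompting for a password
ssh-copy-id rouser@server.example.com

After that, the scp calls in the loops above run without any password prompt.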

Related

Need to print current CPU usage and Memory usage in file continuously

I have prepared the script below, but it's not adding any data to the output file.
My intention is to get the current CPU usage and memory usage and print them to a log file.
What is wrong with my script? I will run it on a CentOS machine.
#!/usr/bin/bash
HOSTNAME=$(hostname)
mkdir -p /root/scripts
LOGFILE=/root/scripts/xcpuusagehistory.log
touch $LOGFILE
a=0;
b=1;
while [ "$a" -lt "$b" ]
do
CPULOAD=`top -d10 | grep "Cpu(s)"`
echo "$CPULOAD on Host $HOSTNAME" >> $LOGFILE
done
while true
do
# -b (batch) and -n1 (single iteration) let top run non-interactively
cpu_load="$(top -b -n1 -d10 | grep "Cpu(s)")"
echo "$cpu_load on Host $HOSTNAME" >> "$LOGFILE"
sleep 1
done
See top batch mode (-b) in the man page.
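The question also asked for memory usage; a hedged extension of the loop above could pull that from free (output formats vary between top/procps versions, so treat this as a sketch):

while true
do
# CPU summary line from one batch iteration of top
cpu_load="$(top -b -n1 | grep "Cpu(s)")"
# "Mem:" line from free, values in MiB
mem_usage="$(free -m | grep "Mem:")"
echo "$cpu_load | $mem_usage on Host $HOSTNAME" >> "$LOGFILE"
sleep 1
done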

The bash script only reboots the router without echoing whether it is up or down

#!/bin/bash
ip route add 10.105.8.100 via 192.168.1.100
date
cat /home/xxx/Documents/list.txt | while read output
do
ping="ping -c 3 -w 3 -q 'output'"
if $ping | grep -E "min/avg/max/mdev" > /dev/null; then
echo 'connection is ok'
else
echo "router $output is down"
then
cat /home/xxx/Documents/roots.txt | while read outputs
do
cd /home/xxx/Documents/routers
php rebootRouter.php "outputs" admin admin
done
fi
done
The other documents are:
lists.txt
10.105.8.100
roots.txt
192.168.1.100
When I run the script, the result is a reboot of the router I am trying to ping. It doesn't ping.
Is there a problem with the bash script?
If your files only contain a single line, there's no need for the while loop; just use read:
read -r router_addr < /home/xxx/Documents/list.txt
# the grep is unnecessary, the return-code of the ping will be non-zero if the host is down
if ping -c 3 -w 3 -q "$router_addr" &> /dev/null; then
echo "connection to $router_addr is ok"
else
echo "router $router_addr is down"
read -r outputs < /home/xxx/Documents/roots.txt
cd /home/xxx/Documents/routers
php rebootRouter.php "$outputs" admin admin
fi
If your files contain multiple lines, you should redirect the file from the right-side of the while-loop:
while read -r output; do
...
done < /foo/bar/baz
Also make sure your files contain a newline at the end, or use the following pattern in your while-loops:
while read -r output || [[ -n $output ]]; do
...
done < /foo/bar/baz
where || [[ -n $output ]] is true even if the file doesn't end in a newline.
Note that the way you're checking your router's status is somewhat brittle, as even a single missed ping will force a reboot (for example, the checking computer returns from a sleep state just as the script is running; the ping fails because the network is still down, but the reboot script succeeds because the network comes up at just that time).
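A hedged sketch of one way to harden that check, retrying a few times before deciding the router is really down (the retry count and delay are arbitrary):

read -r router_addr < /home/xxx/Documents/list.txt
down=1
# try up to 3 separate ping runs before declaring the router down
for attempt in 1 2 3; do
if ping -c 3 -w 3 -q "$router_addr" &> /dev/null; then
down=0
break
fi
sleep 5
done
if [ "$down" -eq 1 ]; then
echo "router $router_addr is down"
# the reboot logic from the answer above would go here
fi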

Convert bash script to Windows

I wrote the following script for Linux in order to detect drops in my network connection:
#!/bin/bash
echo "### RUNNING ###"
echo "### $(date) ###"
while true;do
now=$(date +"%T")
if [[ "$(ping -c 1 8.8.8.8 | grep '100.0% packet loss' )" != "" ]]; then
echo "!!! KO ($now)" >> "log_connectivity_$(date +"%F")"
else
echo "OK ($now)" >> "log_connectivity_$(date +"%F")"
fi
sleep 5s
done
What it does, within a loop, is ping 8.8.8.8 once; if the packet is lost it logs KO and the time, otherwise it logs OK and the time.
I would like to translate this bash script into a Windows script, but I have no idea how. I would be very grateful if you could help me with this.
Thanks in advance ;)

Commands won't write to file when script is executed by cron

I used crontab -e to schedule the execution of a shell script that makes SSH calls to a list of servers, gathers information, and prints it to a file. The output of crontab -l is:
SHELL = /bin/sh
PATH = /usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
* 1 * * 1,2,3,4,5 /bin/bash /Users/cjones/Documents/Development/Scripts/DailyStatus.sh
The script I am running does log the output of echo "Beginning remote connections..." >> $logfile to the file; however, it does not log the output of the following loop:
for servers in $(cat hostnames.txt); do
echo "Starting connection to $servers" >> $logfile
(rsync -av /Users/cjones/Documents/Development/Scripts/checkup.sh cjones@$servers:~/checkup.sh > /dev/null
echo""
ssh -t $servers "sudo ./checkup.sh") >> $logfile
echo ""
done
Pastebin of the full script: http://pastebin.com/3vD7Bba0
Additional note: this script pushes the latest version of a management script, then SSHes into the remote server to execute it and capture the output. This works 100% of the time when run manually. Any assistance would be helpful, thanks!
You need to do SHELL=/bin/sh and the same with PATH; the spaces around = are wrong.
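For example, the crontab from the question would become (same schedule, only the spaces around = removed):

SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
* 1 * * 1,2,3,4,5 /bin/bash /Users/cjones/Documents/Development/Scripts/DailyStatus.sh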
Also, use full paths when calling files in your script when you call it with crontab:
From
for servers in $(cat hostnames.txt); do
echo "Starting connection to $servers" >> $logfile
(rsync -av /Users/cjones/Documents/Development/Scripts/checkup.sh cjones@$servers:~/checkup.sh > /dev/null
echo""
ssh -t $servers "sudo ./checkup.sh") >> $logfile
echo ""
done
to
while read servers
do
echo "Starting connection to $servers" >> $logfile
(rsync -av /Users/cjones/Documents/Development/Scripts/checkup.sh cjones@$servers:~/checkup.sh > /dev/null
echo ""
ssh -t $servers "sudo ./checkup.sh") >> $logfile
echo ""
done < /path/to/hostnames.txt
^^^^^^^^^
Note the usage of while read; do ... done < file instead of the unnecessary for host in $(cat ...).

FTP File Transfers Using Piping Safely

I have a file forwarding system where a bunch of files are downloaded to a directory, de-multiplexed and copied to individual machines.
The files are forwarded as they are received by the master server, and files normally arrive in bursts (auth is by SSH keys).
This script creates the sftp session and uses a pipeline to watch the head of a FIFO.
HOST=$1
pipe=/tmp/pipes/${HOST%%.*}
ps aux | grep -v grep | grep sftp | grep "user@$HOST" > /dev/null
if [[ $? == 0 ]]; then
echo "FTP is Running on this Server"
exit
else
pid=`ps aux | grep -v grep | grep tail | tr -s ' ' | grep $pipe`
[[ $? == 0 ]] && kill -KILL `echo $pid | cut -f2 -d' '`
fi
if [[ ! -p $pipe ]]; then
mkfifo $pipe
fi
tail -n +1 -f $pipe | sftp -o 'ServerAliveInterval 60' user@$HOST > /dev/null &
echo cd /tmp/data >>$pipe #Sends Command to Host
echo "Started FTP to $HOST"
Update: I ended up changing the cleanup code to use "ps aux" to see if an ftp session is running, and subsequently whether the tail -f is still running, grepping by user@host and the name of the pipe respectively. This is done when the script is called, and the script is called whenever I try to upload a file.
IE:
FILENAME=`basename $1`
function transfer {
echo cd /apps/data >> $2 # For Safety
echo put $1 .$FILENAME >> $2
echo rename .$FILENAME $FILENAME >> $2
echo chmod 0666 $FILENAME >> $2
}
./ftp.sh host
[ -p $pipedir/host ] && transfer $1 $pipedir/host
Files received on the master server are caught by incron, which writes a put command and the received file's location to the FIFO, to be sent by sftp (a rename is also performed); a sketch of an incrontab entry for this is shown below.
My question is: is this safe? Could this crash on ftp errors/events? I'm not really worried about login errors.
The goal is to reduce the number of ftp logins to a single session per minute (or longer) interval, and to allow files to be forwarded as they're received, with dynamic commands.
I'd prefer to use standard Ubuntu packages, if possible.
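A hedged sketch of the incrontab entry described above (the watched directory and the forward.sh wrapper name are placeholders, not taken from the question; $@ and $# are incron's tokens for the watched path and the triggering file name):

# incrontab -e on the master server
/srv/incoming IN_CLOSE_WRITE /usr/local/bin/forward.sh $@/$#

Here forward.sh would wrap the ./ftp.sh host call and the transfer function shown above.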
EDIT: After testing and working through some issues, the server simply runs with:
[[ -p $pipe ]] && echo FTP is Running on this Server
ln -s $pipe $lock &> /dev/null || (echo FTP is Running on this Server && exit)
[[ ! -p $pipe ]] && mkfifo $pipe
( tail -n +1 -F $pipe & echo $! > $pipe.pid ) \
| tee >( sed "/tail:/ q" >/dev/null && kill $(cat $pipe.pid) |& rm -f $pipe >/dev/null; ) \
| sftp -i ~/.ssh/$HOST.rsa -oServerAliveInterval=60 user@$HOST &
rm -f $lock
It's rather simple but works nicely.
You might be interested in setting up a simpler (and more robust) synchronization infrastructure:
If a given host is not connected when a file arrives, it never receives it (if I understand your code correctly).
I would do something like
rsync -a -e ssh user@host:/apps/data pathToLocalDataStore
on the client machines, either periodically or driven by an event; rsync intelligently synchronizes the files by their timestamp and size (-a implies -t).
The event would be some process termination, like this:
The client does (configure private-key usage in ~/.ssh/config for the host; a sketch appears after the loop below):
#!/bin/bash
while :;do
ssh user@host /srv/bin/sleepListener 600
rsync -a -e ssh user@host:/apps/data pathToLocalDataStore
done
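A hedged sketch of the ~/.ssh/config entry mentioned above (the host alias, user, and key path are assumptions):

# lets both ssh and the rsync-over-ssh transport pick up the right key for this host
Host host
User user
IdentityFile ~/.ssh/host.rsa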
On the server, /srv/bin/sleepListener is a symbolic link to /bin/sleep.
After receiving a new file, the server runs:
killall sleepListener
Note: a full check is performed every 10 minutes, so it doesn't matter if nodes go offline or come back online.
