FTP bash script

I would like to create a script that uploads a file and retries until the upload succeeds. The script should monitor the log file: if the log shows "Not connected" to the server, I want to repeat the upload until it shows "Connected" and "File successfully transferred". Can anyone help me build the correct one, please? What should I write after if egrep "not...?
LOGFILE=/home/transfer_logs/$a.log
First=$(egrep "Connected" $LOGFILE)
Second=$(egrep "File successfully transferred" $LOGFILE)
ftp -p -v -i 192.163.3.3 < ../../example.script > ../../$LOGFILE 2>&1
if egrep "Not connected" $LOGFILE; then
    ftp -p -v -i 192.163.3.3 < ../../example.script > ../../$LOGFILE 2>&1
    until
        [[ -n "$first" ]] && [[ -n "$second" ]];
    done
fi
example.script contains:
binary
mput a.txt
quit

while :; do
    ftp ... > $LOGFILE
    grep -qF Connected $LOGFILE &&
        grep -qF "File successfully transferred" $LOGFILE && break
done
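A fuller sketch of that retry loop, with the asker's host, script name, and log path filled in (all placeholders from the question); the sleep between attempts is an addition to avoid hammering the server:

#!/bin/bash
LOGFILE=/home/transfer_logs/$a.log
while :; do
    ftp -p -v -i 192.163.3.3 < example.script > "$LOGFILE" 2>&1
    # stop once the log shows both a connection and a completed transfer
    grep -qF "Connected" "$LOGFILE" &&
        grep -qF "File successfully transferred" "$LOGFILE" && break
    sleep 5   # brief pause between retries
done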

Related

SCP loop stops executing after some time

So I have these two versions of the same script. Both attempt to copy my profile to all the servers on my infra (about 5k). The problem I am having is that no matter which version I use, the process always gets stuck somewhere around 300 servers. It does not matter whether I do it sequentially or in parallel; both versions fail, and both at a random server. I don't get any error message (yes, I know I'm redirecting error messages to null for now); it simply stops executing after reaching a random point close to 300 servers and just lingers there doing nothing.
The best run I got made it through about 357 servers.
Probably there is some detail I'm unaware of that is causing this. Could someone advise?
Sequential
#!/bin/bash
clear
echo "$(date) - Process started"
all_count="$( cat all_servers.txt | wc -l )"
while read server
do
scp -B -o "StrictHostKeyChecking no" ./.bash_profile rouser@${server}:/home/rosuer/ && echo "$server - Done!" >> ./log.log || echo "$server - Failed!" >> ./log.log
done <<< "$( cat all_servers.txt )"
echo "$(date) - Process completed!!"
Parallel
#!/bin/bash
clear
echo "$(date) - Process started"
all_count="$( cat all_servers.txt | wc -l )"
while read server
do
scp -B -o "StrictHostKeyChecking no" ./.bash_profile rouser@${server}:/home/rosuer/ && echo "$server - Done!" >> ./log.log || echo "$server - Failed!" >> ./log.log &
done <<< "$( cat all_servers.txt )"
wait
echo "$(date) - Process completed!!"
Let's start with better input parsing. Instead of parsing a bash herestring built from a POSIX command substitution in a while read loop, I've got the while read loop reading your server list directly via redirection (this assumes one server per line in that file; I can fix this if that's not the case). If the contents of all_servers.txt were too long for a command line, you'd experience an error and/or premature termination.
I've also removed extraneous ./ items, and I assume that rouser's home directory on each server is in fact /home/rouser (scp defaults to the home directory if given a relative path or no path at all).
Sequential
#!/bin/bash
clear
echo "$(date) - Process started"
all_count="$( cat all_servers.txt | wc -l )"
while read server
do
scp -B -o "StrictHostKeyChecking no" .bash_profile rouser@${server}: \
&& echo "$server - Done!" >> log.log \
|| echo "$server - Failed!" >> log.log
done < all_servers.txt
echo "$(date) - Process completed!!"
Parallel
For the Parallel solution, I've enclosed your conditional in parentheses just in case the pipeline was backgrounding the wrong process.
#!/bin/bash
clear
echo "$(date) - Process started"
all_count="$( cat all_servers.txt | wc -l )"
while read server
do
(
scp -B -o "StrictHostKeyChecking no" .bash_profile rouser@${server}: \
&& echo "$server - Done!" >> log.log \
|| echo "$server - Failed!" >> log.log
) &
done < all_servers.txt
wait
echo "$(date) - Process completed!!"
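A possible refinement beyond the original answer (a sketch, assuming GNU xargs and one server per line): cap the number of concurrent scp processes, since backgrounding ~5k jobs at once can exhaust local resources and may be related to the stall around 300 servers:

xargs -a all_servers.txt -P 10 -I{} \
    sh -c 'scp -B -o "StrictHostKeyChecking no" .bash_profile "rouser@{}:" \
        && echo "{} - Done!" >> log.log \
        || echo "{} - Failed!" >> log.log'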
SSH keys
I highly recommend learning more about SSH. The scp -B flag was unknown to me because I'm used to using SSH keys and ssh-agent, which will make such connectivity seamless (use passwordless keys if you're running this in a cron job).
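For illustration, a minimal one-time setup along those lines (host and key names are examples, not from the question):

ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ''    # passwordless key, suitable for cron
ssh-copy-id -i ~/.ssh/id_ed25519.pub rouser@server.example.com
scp .bash_profile rouser@server.example.com:        # now connects without prompting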

Actual return code for SCP

I am writing a bash script that goes through a list of filenames and attempts to copy each file using scp from two servers into a local folder. The script then compares the local files to each other. Sometimes however, the file will not exist on one server or the other or both.
At first, I was using this code:
scp $user@$host:/etc/$file ./$host/conf/ 2>/tmp/Error 1>/dev/null
error=$(</tmp/Error) # error catching
if [[ -n "$error" ]]; then echo -e "$file not found on $host"; fi
But I found that some (corporate) servers output a (legalese) message (to stderr I guess) every time a user connects via scp or ssh. So I started looking into utilizing exit codes.
I could simply use
scp $user@$host:/etc/$file ./$host/conf/ 2>/tmp/Error 1>/dev/null
if [[ $? -ne 0 ]]; then echo -e "$file not found on $host"; fi
but since the exit code for "file does not exist" is supposed to be 6, I would rather have a more precise
scp $user@$host:/etc/$file ./$host/conf/ 2>/tmp/Error 1>/dev/null
if [[ $? -eq 6 ]]; then echo -e "$file not found on $host"; fi
The problem is that I seem to be getting an exit code of 1 no matter what went wrong. This question is similar to this one, but that answer does not help me in Bash.
Another solution I am considering is
scp $user@$host:/etc/$file ./$host/conf/ 2>/tmp/Error 1>/dev/null
error=$(</tmp/Error) # error catching
if [[ ${error: -25} = "No such file or directory" ]]; then echo -e "$file not found on $host"; fi
But I am concerned that different versions of scp could have different error messages for the same error.
Is there a way to get the actual exit code of scp in a Bash script?
Per the comments (@gniourf_gniourf, @shelter, @Wintermute) I decided to simply switch tools to rsync. Thankfully the syntax doesn't need to be changed at all.
23 was the exit code I was getting when files didn't exist, so here is the code I ended up with:
rsync -q $user@$host:/etc/$file ./$host/conf/ 2>/tmp/Error 1>/dev/null
if [[ $? -eq 23 ]]; then echo -e "$file not found on $host"; continue; fi
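A slightly fuller sketch (an elaboration, not from the original answer) that keeps other failure modes visible; exit codes are as documented in rsync(1), and the continue assumes this runs inside the asker's file loop:

rsync -q "$user@$host:/etc/$file" "./$host/conf/" 2>/tmp/Error 1>/dev/null
case $? in
    0)  ;;                                            # transferred fine
    23) echo "$file not found on $host"; continue ;;  # partial transfer: source missing
    *)  echo "rsync failed for $host: $(</tmp/Error)" ;;
esac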
I'm seeing exit code 1 for "file not found"; you can test these sorts of things against localhost. If you need to differentiate errors, capture the error output instead:
if err=$(scp $host:$file . 2>&1)
then
echo "copied successfully"
else
case "$err" in
*"file not found"* )
echo "$file Not Found on $host"
;;
*"Could not resolve hostname"* )
echo "Host not found: $host"
;;
*"Permission denied"* )
echo "perm-denied! $host"
;;
* )
echo "other scp error: $err"
;;
esac
fi
This isn't going to work if you have a different locale with different messages, though.

script does not stop when arguments are passed

I have the following which works perfectly.
#!/bin/bash
killall java
#program USB
make iris install.1 mib510,/dev/ttyUSB0
#listen serial port and write to file
java net.tinyos.tools.PrintfClient -comm serial@/dev/ttyUSB1:iris > foo.txt &
sleep 2
#if "Erase done" is printed to file, stop
if tail -f foo.txt | grep -n "Erase done" -q; then echo "Write ok";fi
killall java
But when I change my script to receive arguments as below (sh test.sh USB0 USB1 foo.txt), it does not end. Although it writes the file, the process never terminates:
#!/bin/bash
killall java
#program USB
make iris install.1 mib510,/dev/tty$1
#listen serial port and write to file
java net.tinyos.tools.PrintfClient -comm serial@/dev/tty$2:iris > $3 &
sleep 2
#if "Erase done" is printed to file, stop
if tail -f $3 | grep -n "Erase done" -q; then echo "Write ok";fi
killall java
Am I doing something wrong?
It appears tail -f will not quit when grep quits (tail only dies the next time it writes into the broken pipe), so the problem is with:
if tail -f $3 | grep -n "Erase done" -q; then echo "Write ok";fi
You can replace it with the following:
tail -f $3 | while read LOGLINE
do
    # on a match, report success and kill this script's tail so the loop ends
    [[ "${LOGLINE}" == *"Erase done"* ]] && echo "Write ok" && pkill -P $$ tail
done
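An alternative sketch (an addition, assuming GNU coreutils for timeout(1)) that also bounds the wait, so the script cannot hang forever if "Erase done" never appears:

if timeout 60 grep -q "Erase done" <(tail -f "$3"); then
    echo "Write ok"
fi
pkill -P $$ tail 2>/dev/null   # reap the tail left behind by the process substitution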

BASH - Check that it is not running the same script

I have a script /root/data/myscript, and when I run /root/data/myscript I do not know how to determine whether an instance of it is already running. Does anyone know?
I tried
if [[ "$(pidof -x /root/data/myscript | wc -w)" -gt 1 ]]
then echo "This script is already running!"
fi
thank you
This should work.
if [[ "$(pgrep myscript)" ]]
then echo "This script is already running!"
fi
This could work to check whether the script is already running or not.
if [[ "$(ps -ef | grep "/root/data/myscript" | grep -v "grep")" ]] ; then
echo "This script is already running!"
fi
Try this one.
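Another common pattern, offered as a hedged sketch (flock(1) ships with util-linux; the lock path is illustrative): the lock is tied to an open file descriptor, so it is released automatically when the script exits, even on a crash.

#!/bin/bash
exec 9>/var/run/myscript.lock          # open (or create) the lock file on fd 9
if ! flock -n 9; then                  # take an exclusive lock without blocking
    echo "This script is already running!"
    exit 1
fi
# ... the rest of the script runs while holding the lock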

FTP File Transfers Using Piping Safely

I have a file forwarding system where a bunch of files are downloaded to a directory, de-multiplexed and copied to individual machines.
The files are forwarded when they are received by the master server, and files normally arrive in bursts. (Auth is by ssh keys.)
This script creates the sftp session and feeds it commands from a fifo watched with tail.
HOST=$1
pipe=/tmp/pipes/${HOST%%.*}
ps aux | grep -v grep | grep sftp | grep "user@$HOST" > /dev/null
if [[ $? == 0 ]]; then
echo "FTP is Running on this Server"
exit
else
pid=`ps aux | grep -v grep | grep tail | tr -s ' ' | grep $pipe`
[[ $? == 0 ]] && kill -KILL `echo $pid | cut -f2 -d' '`
fi
if [[ ! -p $pipe ]]; then
mkfifo $pipe
fi
tail -n +1 -f $pipe | sftp -o 'ServerAliveInterval 60' user@$HOST > /dev/null &
echo cd /tmp/data >>$pipe #Sends Command to Host
echo "Started FTP to $HOST"
Update: I ended up changing the cleanup code to use ps aux to see whether an sftp session is running, and subsequently whether the tail -f is still running, grepping for user@host and the name of the pipe respectively. This is done when the script is called, and the script is called whenever I try to upload a file.
IE:
FILENAME=`basename $1`
function transfer {
echo cd /apps/data >> $2 # For Safety
echo put $1 .$FILENAME >> $2
echo rename .$FILENAME $FILENAME >> $2
echo chmod 0666 $FILENAME >> $2
}
./ftp.sh host
[ -p $pipedir/host ] && transfer $1 $pipedir/host
Files received on the master server are caught by incron, which writes a put command and the new file's location to the fifo pipe, to be sent by sftp (the rename is also performed).
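For reference, a hypothetical incrontab entry for that trigger (the watched directory and pipe path are assumptions; in incron syntax $@ expands to the watched directory and $# to the file name, and incron runs the command directly, so the shell redirection is wrapped in sh -c):

/incoming/data IN_CLOSE_WRITE sh -c 'echo "put $@/$#" >> /tmp/pipes/host'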
My question is: is this safe? Could it crash on ftp errors/events? I'm not really worried about login errors.
The goal is to reduce the number of ftp logins to a single session per minute (or longer) interval, and to allow files to be forwarded as they're received, with dynamic commands.
I'd prefer to use standard Ubuntu libraries, if possible.
EDIT: After testing and working through some issues, the server simply runs with:
[[ -p $pipe ]] && echo FTP is Running on this Server
ln -s $pipe $lock &> /dev/null || (echo FTP is Running on this Server && exit)
[[ ! -p $pipe ]] && mkfifo $pipe
( tail -n +1 -F $pipe & echo $! > $pipe.pid ) \
    | tee >( sed "/tail:/ q" >/dev/null && kill $(cat $pipe.pid) |& rm -f $pipe >/dev/null; ) \
    | sftp -i ~/.ssh/$HOST.rsa -oServerAliveInterval=60 user@$HOST &
rm -f $lock
Its rather simple but works nicely.
You might be interested in setting up a simpler (and more robust) synchronization infrastructure:
if a given host is not connected when a file arrives, it never receives it (if I understand your code correctly).
I would do something like
rsync -a -e ssh user@host:/apps/data pathToLocalDataStore
on the client machines, either periodically or by event; rsync intelligently synchronizes the files by their timestamp and size (-a implies -t).
The event would be some process termination, like:
Client (configure private key usage in ~/.ssh/config for host):
#!/bin/bash
while :;do
ssh user@host /srv/bin/sleepListener 600
rsync -a -e ssh user@host:/apps/data pathToLocalDataStore
done
On the server, /srv/bin/sleepListener is a symbolic link to /bin/sleep.
The server, after receiving a new file, runs:
killall sleepListener
Note: every 10 minutes a full check is performed, so it doesn't matter if nodes go offline/online.
