Store rsync error in variable - bash

I've written a bash script to sync backups to local storage. The script checks whether a backup has been made on the day the script is executed, and if so it syncs.
I've done it this way so that if all the backups are accidentally (or otherwise) deleted from the original location, the synced backups on the second storage won't be deleted on the next sync.
#!/bin/bash
files_found=`ssh user@xx.xx.xx.xx "find /home/data_folder/test* -type f -mtime -1"`
rsync_to_location="/home/test_folder/";
rsync_from_location="/home/data_folder/";
if [ "$files_found" = 0 ]; then
    echo "File not found!"
    send_error=`ssh user@xx.xx.xx.xx "echo 'This is the message body' | mail -s 'This is the subject' user@localhost"`
else
    echo "Files found!"
    rsync -arzt --ignore-existing --delete --max-delete=1 -e 'ssh' user@xx.xx.xx.xx:$rsync_from_location $rsync_to_location
    if [[ $? -gt 0 ]]; then
        send_error=`ssh user@xx.xx.xx.xx "echo 'This is the message body' | mail -s 'This is the subject' earmaster@localhost"`
    fi
fi
Now my question is: if the rsync fails (hits the max-deletions limit), how can I store that error message and send it with the mail?
I've tried with
rsync_error="rsync -arzt --ignore-existing --delete --max-delete=1 -e 'ssh' user@xx.xx.xx.xx:$rsync_from_location $rsync_to_location"
and then added $rsync_error to the mail call, but it doesn't work.

The line you have put here will just store that command as a string and not run it.
rsync_error="rsync -arzt --ignore-existing --delete --max-delete=1 -e 'ssh' user#xx.xx.xx.xx:$rsync_from_location $rsync_to_location"
To capture its output you need to wrap it in $( ), like so:
rsync_error=$(rsync -arzt --ignore-existing --delete --max-delete=1 -e 'ssh' user@xx.xx.xx.xx:$rsync_from_location $rsync_to_location)
This will capture the stdout of the executed command, but I assume you want stderr. So a better way of doing this might be to redirect stderr to a file and handle the output that way.
# rsync err output file
ERR_OUTPUT="/tmp/rsync_err_$$"
# When the script exits, remove the file
trap "rm -f $ERR_OUTPUT" EXIT
# Use 2> to direct stderr to the output file
rsync -arzt --ignore-existing --delete --max-delete=1 -e 'ssh' user@xx.xx.xx.xx:$rsync_from_location $rsync_to_location 2> "$ERR_OUTPUT"
# Do whatever with your error file
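For example, to feed the captured errors into the notification mail, something like this should work (a minimal sketch reusing the addresses from your question, which are placeholders):
# Only mail if rsync actually wrote something to stderr
if [ -s "$ERR_OUTPUT" ]; then
    # ssh forwards our stdin to the remote mail command, so the file becomes the message body
    ssh user@xx.xx.xx.xx "mail -s 'rsync failed' user@localhost" < "$ERR_OUTPUT"
fi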

Related

run 2 rsync commands and print the output to a log file

I'm new to scripting and would like to understand how to print out the variables based on boolean logic.
#!/bin/bash
# set variables
WEBAPPS_YES="Successfully synced webapps folder"
WEBAPPS_NO="Could not sync webapps folder"
RSYNC_YES="Successfully synced rsync log file"
RSYNC_NO="Could not sync rsync log file"
# Command to rsync 'webapps' folder and write to a log file
rsync -azvh ~/webapps -e ssh user@something.com:/home/directories >> /path/to/rsync.log 2>&1
# Command to rsync 'rsync.log' to a log file on backup server 'Larry'
rsync -azvh --delete ~/rsync.log -e ssh user@something.com:/path/to/logs
if [ $? -eq 0 ]
then
    echo
    exit 0
else
    echo >&2
    exit 1
fi
I would like the if/then/else part to echo whether each part succeeded or not. I know I need some kind of logic statement but cannot figure it out.
You can check the result after running each rsync command and display the results afterwards. I think this would work:
# Command to rsync 'webapps' folder and write to a log file
rsync -azvh ~/webapps -e ssh user@something.com:/home/directories >> /path/to/rsync.log 2>&1
RESULT1="$?"
# Command to rsync 'rsync.log' to a log file on backup server 'Larry'
rsync -azvh --delete ~/rsync.log -e ssh user@something.com:/path/to/logs
RESULT2="$?"
if [ "$RESULT1" != "0" ]; then
    echo "$WEBAPPS_NO"
else
    echo "$WEBAPPS_YES"
fi
if [ "$RESULT2" != "0" ]; then
    echo "$RSYNC_NO"
else
    echo "$RSYNC_YES"
fi
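If you also want the script's exit status to reflect the outcome, as in your original if/else, you can combine the two results at the end (a small sketch building on the variables above):
# Exit non-zero if either transfer failed, zero only if both succeeded
if [ "$RESULT1" != "0" ] || [ "$RESULT2" != "0" ]; then
    exit 1
fi
exit 0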

Pass dynamically generated parameters to command inside script

I have a script which calls the rsync command with some dynamically generated parameters but I'm having trouble passing them correctly.
Here's an excerpt:
logfile="$logDir/$(timestamp) $name.log"
echo "something" >> "$logfile"
params="-aAXz --stats -h --delete --exclude-from $exclude --log-file=$logfile $src $dest"
if [ "$silent" = "" ]; then
params="-v $params --info=progress2"
fi
rsync $params
If the logfile is e.g. /tmp/150507 test.log, the echo "something" line does write to /tmp/150507 test.log, but rsync writes its log to /tmp/150507 (everything after the first blank is dropped).
If I explicitly quote the name of the logfile inside the params, rsync throws an error:
params="-aAXz --stats -h --delete --exclude-from $exclude --log-file=\"$logfile\" $src $dest"
The error:
rsync: failed to open log-file "/tmp/150507: No such file or directory (2)
Ignoring "log file" setting.
How can I generate the params dynamically without losing the ability to use blanks in the filenames?
More quoting needed around log file name:
declare -a params
params=(-aAXz --stats -h --delete --exclude-from "$exclude" --log-file="$logfile" "$src" "$dest")
if [[ "$silent" = "" ]]; then
params=( -v "${params[#]}" --info=progress2 )
fi
rsync "${params[#]}"
This is the kind of case where you should use Bash arrays to build a dynamic command line: each array element stays a single word when expanded with "${params[@]}", no matter what whitespace it contains.
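To see the difference, compare how an unquoted string expansion and a quoted array expansion split a value containing a blank (hypothetical filename, just for illustration):
logfile="/tmp/150507 test.log"
as_string="--log-file=$logfile"
as_array=(--log-file="$logfile")
printf '[%s]\n' $as_string        # word-splits: [--log-file=/tmp/150507] and [test.log]
printf '[%s]\n' "${as_array[@]}"  # stays intact: [--log-file=/tmp/150507 test.log]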

How to execute bash script after rsync

When I deploy to my dev server I use rsync, but after rsync I need to execute a .sh file for "after deploy" operations like clearing the cache.
Usually I do this via SSH, but if I deploy very often it's tedious to write:
ssh ...
write password
cd /var/www/myapp/web
./after_deploy.sh
Is there a way to do this more quickly? This is my rsync.sh:
#!/bin/bash
host=""
directory="/var/www/myapp/web"
password=""
usage(){
    echo "Cant do rsync";
    echo "Using:";
    echo " $0 direct";
    echo "Or:";
    echo " $0 dry";
}
echo "Host: $host";
echo "Directory: $directory"
if [ $# -eq 1 ]; then
    if [ "$1" == "dry" ]; then
        echo "DRY-RUN mode";
        rsync -CvzrltD --force --delete --exclude-from="app/config/rsync_exclude.txt" -e "sshpass -p '$password' ssh -p22" ./ $host:$directory --dry-run
    elif [ "$1" == "direct" ]; then
        echo "Normal mode";
        rsync -CvzrltD --force --delete --exclude-from="app/config/rsync_exclude.txt" -e "sshpass -p '$password' ssh -p22" ./ $host:$directory
    else
        usage;
    fi;
else
    usage;
fi
Instead of using rsync over SSH, you can run an rsync daemon on the server. This allows you to use the pre-xfer exec and post-xfer exec options in /etc/rsyncd.conf to specify a command to be run before and/or after the transfer.
For example, in /etc/rsyncd.conf:
[myTransfers]
path = /path/to/stuff
auth users = username
secrets file = /path/to/rsync.secrets
pre-xfer exec = /usr/local/bin/someScript.sh
post-xfer exec = /usr/local/bin/someOtherscript.sh
You can then do the transfer from the client machine, and the relevant scripts will be run automatically on the server, for example:
rsync -av . username@hostname::myTransfers/
This approach may also be useful because the scripts can use environment variables describing the transfer, which rsync sets on the server.
See https://linux.die.net/man/5/rsyncd.conf for more information.
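For instance, a post-xfer script can inspect the environment rsync sets for it; here is a minimal sketch of what /usr/local/bin/someOtherscript.sh might look like (the variable names below come from the rsyncd.conf man page):
#!/bin/bash
# Runs on the server after each transfer to the module
{
    echo "module: $RSYNC_MODULE_NAME"
    echo "client: $RSYNC_HOST_NAME ($RSYNC_HOST_ADDR)"
    echo "status: $RSYNC_EXIT_STATUS"
} >> /var/log/rsync-transfers.log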
You can also append a command after the rsync call in your script, so the post-deploy step runs over SSH automatically instead of you logging in and starting an interactive shell.
Add the following after the rsync command:
sshpass -p "$password" ssh $host "cd $directory && ./after_deploy.sh"

LOCAL_DIR variable prepends the script's current directory (totally not what I expect)

Consider the following simple rsync script I am trying to slap together:
#!/bin/bash
PROJECT="$1"
USER=stef
LOCAL_DIR="~/drupal-files/"
REMOTE_HOST="hostname.com"
REMOTE_PROJECTS_PATH=""
# Should not have anything to change below
PROJECT_LIST="proj1 proj2 proj3 quit"
echo "/nSelect project you wish to rsync\n\n"
select PROJECT in $PROJECT_LIST
do
    if [ "$PROJECT" = "quit" ]; then
        echo
        echo "Quitting $0"
        echo
        exit
    fi
    echo "Rsynching $PROJECT from $REMOTE_HOST into" $LOCAL_DIR$PROJECT
    rsync -avzrvP $USER@$REMOTE_HOST:/var/projects/$PROJECT/ $LOCAL_DIR$PROJECT
done
echo "Rsync complete."
exit;
The variable $LOCAL_DIR$PROJECT used in the rsync command always includes the script's path:
OUTPUT:
Rsynching casa from hostname.com into ~/drupal-files/casa
opening connection using: ssh -l stef hostname.com rsync --server --sender -vvlogDtprz e.iLsf . /var/groupe_tva/casa/
receiving incremental file list
rsync: mkdir "/home/stef/bin/~/drupal-files/proj1" failed: No such file or directory (2)
rsync error: error in file IO (code 11) at main.c(605) [Receiver=3.0.9]
The line with mkdir should not have /home/stef/bin; why is bash prepending the script's working directory to the variable?
Thanks
LOCAL_DIR="~/drupal-files/"
The string is in quotes, so no tilde expansion is performed and the variable contains the literal string ~/drupal-files/. Since no directory is literally named ~, rsync treats it as a relative path and creates it under the current working directory (here /home/stef/bin).
Remove the quotes.
$ x="~/test"; echo $x
~/test
$ x=~/test; echo $x
/home/user/test
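If you would rather keep the quotes (for example because the path might later contain spaces), $HOME is expanded inside double quotes, unlike ~, so this is an equivalent fix:
# $HOME expands even inside double quotes
LOCAL_DIR="$HOME/drupal-files/"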

Bash script for inotifywait - How to write deletes to log file, and cp close_writes?

I have this bash script:
#!/bin/bash
inotifywait -m -e close_write --exclude '\*.sw??$' . |
#adding --format %f does not work for some reason
while read dir ev file; do
cp ./"$file" zinot/"$file"
done
Now, how would I have it do the same thing but also handle deletes by writing the filenames to a log file?
Something like?
#!/bin/bash
inotifywait -m -e close_write --exclude '\*.sw??$' . |
#adding --format %f does not work for some reason
while read dir ev file; do
    # if DELETE, append $file to /inotify.log
    # else
    cp ./"$file" zinot/"$file"
done
EDIT:
By looking at the messages generated, I found that inotifywait generates CLOSE_WRITE,CLOSE whenever a file is closed. So that is what I'm now checking in my code.
I tried also checking for DELETE, but for some reason that section of the code is not working. Check it out:
#!/bin/bash
fromdir=/path/to/directory/
inotifywait -m -e close_write,delete --exclude '\*.sw??$' "$fromdir" |
while read dir ev file; do
    if [ "$ev" == 'CLOSE_WRITE,CLOSE' ]
    then
        # copy entire file to /root/zinot/ - WORKS!
        cp "$fromdir""$file" /root/zinot/"$file"
    elif [ "$ev" == 'DELETE' ]
    then
        # trying this without echo does not work, but with echo it does!
        echo "$file" >> /root/zinot.txt
    else
        # never saw this error message pop up, which makes sense.
        echo Could not perform action on "$ev"
    fi
done
In the dir, I do touch zzzhey.txt. File is copied. I do vim zzzhey.txt and file changes are copied. I do rm zzzhey.txt and the filename is added to my log file zinot.txt. Awesome!
You need to add -e delete to your monitor, otherwise DELETE events won't be passed to the loop. Then add a conditional to the loop that handles the events. Something like this should do:
#!/bin/bash
inotifywait -m -e delete -e close_write --exclude '\*.sw??$' . |
while read dir ev file; do
    if [ "$ev" = "DELETE" ]; then
        echo "$file" >> /inotify.log
    else
        cp ./"$file" zinot/"$file"
    fi
done
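If parsing inotifywait's default three-column output ever gets awkward, you can also ask it to print only the fields you care about with --format; a sketch along the same lines, assuming the same directory layout as above:
#!/bin/bash
# %e is the event name(s), %f the file name, separated by a space
inotifywait -m -e delete -e close_write --format '%e %f' --exclude '\*.sw??$' . |
while read ev file; do
    if [[ "$ev" == DELETE* ]]; then
        echo "$file" >> /inotify.log
    else
        cp ./"$file" zinot/"$file"
    fi
done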
