How to execute bash script after rsync - bash

When I deploy to my dev server I use rsync. But after rsync I need to execute a .sh file for post-deploy operations, such as clearing the cache.
Usually I do this over SSH, but if I deploy often it gets tedious to type:
ssh ...
enter the password
cd /var/www/myapp/web
./after_deploy.sh
Is there a way to do this more quickly? This is my rsync.sh:
#!/bin/bash
host=""
directory="/var/www/myapp/web"
password=""

usage(){
    echo "Can't do rsync";
    echo "Usage:";
    echo "  $0 direct";
    echo "Or:";
    echo "  $0 dry";
}

echo "Host: $host";
echo "Directory: $directory"

if [ $# -eq 1 ]; then
    if [ "$1" == "dry" ]; then
        echo "DRY-RUN mode";
        rsync -CvzrltD --force --delete --exclude-from="app/config/rsync_exclude.txt" -e "sshpass -p '$password' ssh -p22" ./ "$host:$directory" --dry-run
    elif [ "$1" == "direct" ]; then
        echo "Normal mode";
        rsync -CvzrltD --force --delete --exclude-from="app/config/rsync_exclude.txt" -e "sshpass -p '$password' ssh -p22" ./ "$host:$directory"
    else
        usage;
    fi;
else
    usage;
fi

Instead of using rsync over SSH, you can run an rsync daemon on the server. This allows you to use the pre-xfer exec and post-xfer exec options in /etc/rsyncd.conf to specify a command to be run before and/or after the transfer.
For example, in /etc/rsyncd.conf:
[myTransfers]
path = /path/to/stuff
auth users = username
secrets file = /path/to/rsync.secrets
pre-xfer exec = /usr/local/bin/someScript.sh
post-xfer exec = /usr/local/bin/someOtherscript.sh
You can then do the transfer from the client machine, and the relevant scripts will be run automatically on the server, for example:
rsync -av . username@hostname::myTransfers/
This approach may also be useful because environment variables relating to the transfer on the server can also be used by the scripts.
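For example, a post-xfer script could log the outcome and run the post-deploy steps only on success. RSYNC_MODULE_NAME and RSYNC_EXIT_STATUS are set by the rsync daemon for post-xfer exec scripts (see the man page linked below); the log file and after_deploy.sh paths here are only illustrative:
#!/bin/bash
# /usr/local/bin/someOtherscript.sh - sketch of a post-xfer hook
echo "$(date): module ${RSYNC_MODULE_NAME} finished with status ${RSYNC_EXIT_STATUS}" >> /var/log/rsync-deploy.log
# only run the post-deploy steps if the transfer succeeded
if [ "${RSYNC_EXIT_STATUS}" -eq 0 ]; then
    cd /var/www/myapp/web && ./after_deploy.sh
fi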
See https://linux.die.net/man/5/rsyncd.conf for more information.

You can give ssh a command to execute on the remote host instead of starting an interactive shell.
Add the following after the rsync command:
sshpass -p "$password" ssh "$host" "cd $directory && ./after_deploy.sh"
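Applied to the rsync.sh above, the "Normal mode" branch could then look like this (a sketch reusing the question's host, directory, and password variables, and assuming the same SSH port 22 as the rsync call):
elif [ "$1" == "direct" ]; then
    echo "Normal mode";
    rsync -CvzrltD --force --delete --exclude-from="app/config/rsync_exclude.txt" -e "sshpass -p '$password' ssh -p22" ./ "$host:$directory"
    # run the post-deploy script on the server once the sync has finished
    sshpass -p "$password" ssh -p22 "$host" "cd $directory && ./after_deploy.sh"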

Related

Copying file between servers

I'm trying to create a simple script that will copy files from server1 to server2, or from server2 to server1 (depending on which server I run the script from).
I created a script that should recognize which server I am on, take the source folder and destination folder, and execute.
For example
sh script.sh /home/test /destest
should copy files from the test folder to the destest folder on the other server,
but something is not working for me. I keep getting
scp: No such file or directory
Any ideas?
#!/bin/bash
SRC1=$1
DEST=$3
BOX=$(hostname)
if [ $BOX=server1 ]; then
    sudo scp $SRC1 server2:\ $DEST
else
    sudo scp -v $SRC1/* server1:\ $DEST
fi
Don't put a space after server1: and server2:.
You need a space around = in the if test.
You should almost always quote variables, in case the value contains whitespace, unless you actually want to split it into separate arguments.
#!/bin/bash
SRC1=$1
DEST=$3
BOX=$(hostname)
if [ "$BOX" = server1 ]; then
    sudo scp "$SRC1" "server2:$DEST"
else
    sudo scp -v "$SRC1"/* "server1:$DEST"
fi
This is my fixed script that is now working :)
#!/bin/bash
BOX=$(hostname)
if [ "$BOX" = server1 ]; then
    sudo scp "$1" user@server2:"$2"
else
    sudo scp "$1"/* user@server1:"$2"
fi

SSH into remote computer and compile/run code

I made a script (below) that goes into a remote computer and runs C code on it. This script works perfectly but asks for the password multiple times. How can I make it only ask for the password once?
#!/bin/bash
USER=myusername
COMP=remote_computer_name
OUTPUT=$1
ARGS=${@:2}
CODE_DIR="Dir_$$"
SCRIPT_NAME=$(basename $0)
LAST_CREATED_DIR=$(ls -td -- */ | head -n 1)

# Check if we are on the local computer. If so, copy the
# current directory to the remote and run this script on the remote
if [ "${HOSTNAME}" != "${COMP}" ]; then
    if [ "$#" -lt 1 ]; then
        echo "Incorrect usage."
        echo "Usage: ./${SCRIPT_NAME} <compiled_c_output_name> <arg1> <arg2> ... <argN>"
        exit
    fi
    # Check if there is no makefile in the current directory
    if [ ! -e [Mm]akefile ]; then
        echo "There is no makefile in this directory"
        exit
    fi
    echo "On local. Copying current directory to remote..."
    scp -r ./ ${USER}@${COMP}:/ilab/users/${USER}/${CODE_DIR}
    ssh ${USER}@${COMP} "bash -s" < ./${SCRIPT_NAME} ${OUTPUT} ${ARGS}
else
    echo "On remote. Compiling code..."
    cd $LAST_CREATED_DIR
    make clean
    make all
    if [ -e $OUTPUT ]; then
        echo "EXECUTING \"./${OUTPUT} ${ARGS}\" ON REMOTE ($COMP)"
        ./${OUTPUT} ${ARGS}
    fi
fi
You can use SSH key authentication for passwordless login.
Here are the steps:
Generate an RSA key pair:
ssh-keygen -t rsa
This generates two files under /home/<user>/.ssh/: id_rsa (the private key) and id_rsa.pub (the public key).
The second file is your public key. Copy its contents over to the remote computer you want to log into and append them to /home/<user>/.ssh/authorized_keys, or use the ssh-copy-id utility if available (ssh-copy-id username@remote_host).
After this, authentication is done by the public-private key pair and you will no longer be prompted for a password.
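Put together, the one-time setup might look like this (a sketch assuming the default key location and that ssh-copy-id is installed; replace username@remote_host with your own account):
ssh-keygen -t rsa                                  # generate the key pair, accepting the default path
ssh-copy-id username@remote_host                   # append the public key to the remote authorized_keys
ssh username@remote_host 'echo key login works'    # should now connect without asking for a password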
You can use sshpass. Here is an example:
sshpass -pfoobar ssh -o StrictHostKeyChecking=no user@host command_to_run
More info here:
https://askubuntu.com/questions/282319/how-to-use-sshpass
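Applied to the script in the question, that would mean prefixing the scp and ssh calls, roughly like this (a sketch; PASS is a hypothetical variable, sshpass must be installed, and keeping a plaintext password in a script is insecure):
PASS='mypassword'   # hypothetical; reading it with sshpass -f /path/to/file is safer than -p
sshpass -p "$PASS" scp -r ./ ${USER}@${COMP}:/ilab/users/${USER}/${CODE_DIR}
sshpass -p "$PASS" ssh ${USER}@${COMP} "bash -s" < ./${SCRIPT_NAME} ${OUTPUT} ${ARGS}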

run 2 rsync commands and print the output to a log file

I'm new to scripting and would like to understand how to print out the variables based on boolean logic.
#!/bin/bash
# set variables
WEBAPPS_YES="Successfully synced webapps folder"
WEBAPPS_NO="Could not sync webapps folder"
RSYNC_YES="Successfully synced rsync log file"
RSYNC_NO="Could not sync rsync log file"
# Command to rsync 'webapps' folder and write to a log file
rsync -azvh ~/webapps -e ssh user@something.com:/home/directories >> /path/to/rsync.log 2>&1
# Command to rsync 'rsync.log' to a log file on backup server 'Larry'
rsync -azvh --delete ~/rsync.log -e ssh user@something.com:/path/to/logs
if [ $? -eq 0 ]
then
    echo
    exit 0
else
    echo >&2
    exit 1
fi
I would like the if/then/else part to echo whether both parts succeeded or not. I know I need some kind of logic statements but cannot figure it out.
You can check the result after running each rsync command, and display the result afterwards. I think that would work:
# Command to rsync 'webapps' folder and write to a log file
rsync -azvh ~/webapps -e ssh user@something.com:/home/directories >> /path/to/rsync.log 2>&1
RESULT1="$?"
# Command to rsync 'rsync.log' to a log file on backup server 'Larry'
rsync -azvh --delete ~/rsync.log -e ssh user@something.com:/path/to/logs
RESULT2="$?"

if [ "$RESULT1" != "0" ]; then
    echo "$WEBAPPS_NO"
else
    echo "$WEBAPPS_YES"
fi

if [ "$RESULT2" != "0" ]; then
    echo "$RSYNC_NO"
else
    echo "$RSYNC_YES"
fi
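If you also want the script's exit status to reflect the overall outcome, as in the original if/else, you can combine the two results at the end, for example:
# exit 0 only if both rsync commands succeeded
if [ "$RESULT1" = "0" ] && [ "$RESULT2" = "0" ]; then
    echo "Both syncs succeeded"
    exit 0
else
    echo "At least one sync failed" >&2
    exit 1
fi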

How do I check to see if a file exists on a remote server using shell

I have done a lot of searching and I can't seem to find out how to do this using a shell script. Basically, I am copying files down from remote servers, and I want to do something else if a file doesn't exist. I have an array below, but I also tried referencing the host directly, and the test still returns false.
I am brand new at this, so please be kind :)
declare -a array1=('user1@user1.user.com');
for i in "${array1[@]}"
do
    if [ -f "$i:/home/user/directory/file" ];
    then
        do stuff
    else
        do other stuff
    fi
done
Try this:
ssh -q "$HOST" [[ -f /home/user/directory/file ]] && echo "File exists" || echo "File does not exist";
or like this:
if ssh $HOST stat $FILE_PATH \> /dev/null 2\>\&1
then
    echo "File exists"
else
    echo "File does not exist"
fi
Assuming you are using scp and ssh for remote connections something like this should do what you want.
declare -a array1=('user1@user1.user.com');
for i in "${array1[@]}"; do
    if ssh -q "$i" "test -f /home/user/directory/file"; then
        scp "$i:/home/user/directory/file" /local/path
    else
        echo 'Could not access remote file.'
    fi
done
Alternatively, if you don't necessarily need to care about the difference between the remote file not existing and other possible scp errors then the following would work.
declare -a array1=('user1@user1.user.com');
for i in "${array1[@]}"; do
    if ! scp "$i:/home/user/directory/file" /local/path; then
        echo 'Remote file did not exist.'
    fi
done

How to find whether a file exists in a particular dir through SSH

How can I check whether a file exists in a particular directory over SSH?
For example:
host1 has the directory /home/tree/TEST.
From Host2: ssh to host1 and find out whether the TEST file exists, using bash.
ssh will return the exit code of the command you ask it to execute:
if ssh host1 stat /home/tree/TEST \> /dev/null 2\>\&1
then
    echo File exists
else
    echo Not found
fi
You'll need to have key authentication setup of course, so you avoid the password prompt.
This is what I ended up doing after reading and trying out the stuff here:
FileExists=$(ssh host "test -e /home/tree/TEST && echo 1 || echo 0")
if [ "${FileExists}" = 0 ]; then
    # do something because the file doesn't exist
fi
More info about test: http://linux.die.net/man/1/test
An extension to Erik's accepted answer.
Here is my bash script for waiting on an external process to upload a file. This will block current script execution indefinitely until the file exists.
Requires key-based SSH access although this could be easily modified to a curl version for checks over HTTP.
This is useful for uploads via external systems that use temporary file names:
rsync
transmission (torrent)
Script below:
#!/bin/bash
set -vx
#AUTH="user@server"
AUTH="${1}"
#FILE="/tmp/test.txt"
FILE="${2}"
while (sleep 60); do
    if ssh ${AUTH} stat "${FILE}" > /dev/null 2>&1; then
        echo "File found";
        exit 0;
    fi;
done;
No need for echo. Can't get much simpler than this :)
ssh host "test -e /path/to/file"
if [ $? -eq 0 ]; then
# your file exists
fi
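The exit status can also be tested directly in the if condition, which avoids the separate $? check:
if ssh host "test -e /path/to/file"; then
    echo "File exists on the remote host"
fi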
