Rsync copies too many directories when executed via a bash script

I would like to sync a directory (with all its files and subdirectories) given as a parameter to a bash script.
I found this post: How can I recursively copy a directory into another and replace only the files that have not changed? which explains how to use rsync in a similar case.
My bash script is quite simple and listed below:
#!/bin/bash
echo -e "Type the project to be deployed: \c "
read project
echo -e "* Deploying: $project *"
echo -e "Sync: /var/repo/released/$project"
echo -e " /var/www/released/$project"
rsync -pr /var/repo/released/$1 /var/www/released/$1
As a result it copies everything within /released (there are many directories in there, let's say -projects-).
I would like to copy (sync) only the project given as the parameter.
Could you please advise how to do this?

When you call the script without an argument (which most likely is what you're doing since you interactively read the project name into the variable $project), the positional parameter $1 remains empty. Therefore the script will rsync the entire content of /var/repo/released/.
You need to replace $1 with $project in your script. Also, I'd recommend putting double quotes around the paths to avoid problems due to spaces in a directory name.
rsync -pr "/var/repo/released/$project" "/var/www/released/$project"

Related

Bash Shell Script Issues

I am new to UNIX and have a homework assignment that is giving me trouble. I am to write a script that will back up specified files from the current directory into a specified destination directory. This script is to take three arguments.
sourcePath, which is the path to the source files/files being backed up or copied.
backupPath, which is the path to the target directory where the files will be backed up.
filePrefix, which is used to identify which files to back up; specifically, only files whose names begin with the given prefix will be copied while others will be ignored. For example, if the user enters the letter "d", then all files starting with that letter are to be copied while any other file is to be ignored.
I haven't learned much about scripting/functions in bash so I've tried looking up tutorials which have been helpful but not enough. This script is something I can easily do when just typing out the commands. For instance, I would cd into the target directory that has the files, then using the cp command copy files that begin with the specific prefix to the target directory, but when making a script I am at a dead end.
I feel as though my code is monumentally incorrect and it's due to my lack of experience, but nothing online has been of any help. So far my code is
read sourcePath
read backupPath
read filePrefix
grep /export/home/public/"$sourcePath
mkdir -p $backupPath
cp /export/home/public/"$sourcePath"/$filePrefix /home/public/"$backupPath"
So an example execution of the script would be
$ ./script.sh
(sourcePath)HW4testdir (backupPath)backup (filePrefix)d
Output:
backing up: def (example file starting with d)
backing up: dog (example file starting with d)
So far when executing the code, nothing happens. Again, I'm sure most, or even all of the code is wrong and totally off base, but I never learned about scripting. If I did not have to create a script, I could easily achieve this desired outcome.
With bash, I suggest:
read -r -p "sourcePath: " sourcePath
read -r -p "backupPath: " backupPath
read -r -p "filePrefix: " filePrefix
mkdir -p /home/public/"$backupPath"
cp /export/home/public/"$sourcePath/$filePrefix"* /home/public/"$backupPath"
Make sure that the user running the script has permission to create the directory /home/public/"$backupPath".
See: help read
For a start: your assignment states that your script should accept arguments.
However, your script does not take arguments; it reads the parameters from standard input. Arguments are passed to the script on the command line, and your script would be called as
./script.sh HW4testdir backup d
Hence you can't use read to fetch them. The first argument is available under the name $1, the second argument is $2 and so on. You could write for instance
sourcePath=${1?Parameter missing}
which has the side effect of aborting the script with an error message if the caller forgets to pass the parameter.
Another point: You don't say anywhere that bash should be used to run the script. Since you want the script to be called by
./script.sh ....
and not by
bash ./script.sh ....
you must encode the information that bash should be used in the script itself. Assuming that your bash is located in /usr/bin, you would do this by making the first line of the script
#!/usr/bin/bash
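Putting both points together, a minimal sketch of the whole script using positional parameters could look like this (the /export/home/public and /home/public prefixes are taken from the question's code and are assumptions about where the files actually live):
#!/usr/bin/bash
# Abort with an error message if a parameter is missing
sourcePath=${1?sourcePath missing}
backupPath=${2?backupPath missing}
filePrefix=${3?filePrefix missing}
# Create the backup directory if it does not exist yet
mkdir -p /home/public/"$backupPath"
# Copy only the files whose names start with the given prefix
cp /export/home/public/"$sourcePath/$filePrefix"* /home/public/"$backupPath"
It would then be called as ./script.sh HW4testdir backup d.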

How do I call rename successfully from a bash script on Ubuntu?

I have a bash script #!/usr/bin/env bash that is called as part of a make process. This script creates a directory with the files pertinent to a release and then tars them up. I would like to take a copy of the directory and rename some of the files to replace the version identifier with the word "latest". This will make it simple to script the acquisition of the latest file from a web server. When I run my script, the call to rename seems to do nothing; why is that?
#!/usr/bin/env bash
DATE_NOW="$(date +'%Y%m%d')"
product_id_base="$1"
firmware_dir="${product_id_base}-full-${DATE_NOW}"
# ...rest of file omitted to protect the innocent
# It creates and fills the ${firmware_dir} with some files that end in
# -$DATE_NOW.<extension> and I would like to rename the copies of them so that they end in
# -latest.<extension>
cp -a "./${firmware_dir}" "./${product_id_base}-full-latest"
# see what there is in pwd
cd "./${product_id_base}-full-latest"
list_output=`ls`
echo $list_output
# Things go OK until this point.
replacment="'s/${DATE_NOW}/latest/'"
rename_path=$(which rename)
echo $replacment
perl $rename_path -v $replacment *
echo $cmd
pwd
$cmd
echo "'s/-${DATE_NOW}/-latest/g'" "${product_id_base}-*"
echo $a
# check what has happened
list_output=`ls`
echo $list_output
I call the above with ./rename.sh product-id and get the expected output from ls that indicates the present working directory is the one full of files that I want renamed.
$ ./rename.sh product-id
ET-PIC-v1.1.dat ET-PIC-v1.1.hex product-id-20160321.bin product-id-20160321.dat product-id-20160321.elf product-id-20160321.gz
's/20160321/latest/'
/home/thomasthorne/work/product-id/build/product-id-full-latest
's/-20160321/-latest/g' product-id-*
ET-PIC-v1.1.dat ET-PIC-v1.1.hex product-id-20160321.bin product-id-20160321.dat product-id-20160321.elf product-id-20160321.gz
What I hoped to see was some renamed files. When I directly call rename from a terminal emulator, I see the rename occur.
~/work/product-id/build/product-id-full-latest$ rename -vn 's/-20160321/-latest/g' *
product-id-20160321.bin renamed as product-id-latest.bin
product-id-20160321.dat renamed as product-id-latest.dat
product-id-20160321.elf renamed as product-id-latest.elf
...
I have tried a few variations on escaping the strings, using ` or $(), and removing all the substitutions from the command line. So far nothing has worked, so I must be missing something fundamental.
I have read that #!/usr/bin/env bash behaves much like #!/bin/bash, so I don't think that is at play. I know that Ubuntu and Debian have a different version of the rename script from some other distributions, and I am running on Ubuntu. That led me to try calling perl /usr/bin/rename ... instead of just rename, but that seems to have made no perceivable difference.
This string:
replacment="'s/${DATE_NOW}/latest/'"
will keep the single quotes as literal characters, because you put them inside the double quotes; rename then receives the quotes as part of the expression and no substitution is performed.
Have you tried with:
replacment="s/${DATE_NOW}/latest/"
This one worked on my Ubuntu, without perl:
$ ./test_script
filename_20160321 renamed as filename_latest
filename2_20160321 renamed as filename2_latest
filename3_20160321 renamed as filename3_latest
test_script content being:
#!/bin/bash
DATE_NOW="$(date +'%Y%m%d')"
replacment="s/${DATE_NOW}/latest/"
rename -v $replacment *
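Applied to the script in the question, the relevant lines would then become something like this (a sketch; it assumes the perl-based rename that Ubuntu ships is first in the PATH, so the which/perl indirection is no longer needed):
replacment="s/${DATE_NOW}/latest/"
rename -v "$replacment" *
Quoting "$replacment" is not strictly required here since the expression contains no spaces, but it keeps the call safe if the pattern ever does.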

Bash script: check if a file exists and send an information email

I wrote a simple script to check whether there are files (ending with .txt) in the directory that are older than 6 hours, and to send an email after the check.
The script isn't working as expected, so I ask whether there is a simpler and more robust way to do it. Basically it just needs to check whether a file ending with .txt exists and is older than 6 hours; if yes, an email should be sent.
This is my script
#!/bin/bash
DATE=`date +%Y.%m.%d-%H.%M`
HOSTNAME='host'
BASEDIR=`/usr/local/se/work/jobs/`
LOGFILE=`/usr/local/se/work/jobs/logs/jobs.log`
VERTEILER="anyemail"
# Functions
#
# function check if the jobs are exists
'find ${BASEDIR} -name "*.txt" -nmin +354' 2>$1 >>$LOGFILE
#function mail
cat << EOF | mailx -s "Example ${HOSTNAME} jobs `date +%Y.%m.%d-%H.%M`" -a ${LOGFILE} ${VERTEILER}
Hi,
Please check the Jobs.
Details :
`ls -ltr /usr/local/se/work/jobs/`
------------ END ----------------------------------------
.
Thank you
To redirect STDERR to STDOUT use 2>&1; you are doing it wrong with 2>$1.
Also, the correct option for find is -mmin, not -nmin as you have in your code.
Furthermore, you have syntax errors, for example here:
BASEDIR=`/usr/local/se/work/jobs/`
LOGFILE=`/usr/local/se/work/jobs/logs/jobs.log`
What you mean to type is:
BASEDIR='/usr/local/se/work/jobs/'
LOGFILE='/usr/local/se/work/jobs/logs/jobs.log'
When you use backticks, bash tries to execute the enclosed text as a command; you are using them the right way here:
DATE=`date +%Y.%m.%d-%H.%M`
You are also not closing the heredoc <<EOF; you need EOF on the last line of the script.
And drop the single quotes surrounding the find command.
Pay attention to what bash says; it should print lots of errors. Try running the script manually to see them if you are using it via cron.
Did you even try to run this script?
The grave accent (`) in BASEDIR and LOGFILE means the shell will try to evaluate them as commands, which fails. You don't generally need quotes around strings in shell scripts, although it may be considered good practice to use double quotes (").
HOSTNAME=host
BASEDIR=/usr/local/se/work/jobs
LOGFILE=${BASEDIR}/logs/jobs.log
VERTEILER=anyemail
The switch to search files by minutes is called -mmin, not -nmin -- again, that should have given you an error. And the math is wrong: if you want 6 hours, then it's 6 * 60 = 360 minutes.
find "${BASEDIR}" -name "*.txt" -mmin +360
You are redirecting stderr to the first positional parameter (2>$1). Are you expecting error output from this command? Could you explain what is going on here?
And then you append that to LOGFILE which is in a directory that may not exist. mkdir -p is a good choice for creating folders in scripts because it creates parent directories when needed and won't complain if the folder already exists. So do something along the lines of
mkdir -p /usr/local/se/work/jobs/logs
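Putting the corrections together, a sketch of what the whole script could look like (paths and the recipient are taken from the question; the if check goes slightly beyond the answers to match the stated requirement of mailing only when matching files exist, and whether your mailx accepts -a for attachments depends on the variant installed):
#!/bin/bash
DATE=$(date +%Y.%m.%d-%H.%M)
HOSTNAME='host'
BASEDIR='/usr/local/se/work/jobs'
LOGFILE='/usr/local/se/work/jobs/logs/jobs.log'
VERTEILER='anyemail'
# Make sure the log directory exists
mkdir -p /usr/local/se/work/jobs/logs
# List .txt files older than 6 hours (360 minutes); log stdout and stderr
OLD_FILES=$(find "$BASEDIR" -name "*.txt" -mmin +360 2>&1 | tee -a "$LOGFILE")
# Only send the mail if something was found
if [ -n "$OLD_FILES" ]; then
    cat << EOF | mailx -s "Example ${HOSTNAME} jobs ${DATE}" -a "$LOGFILE" "$VERTEILER"
Hi,
Please check the Jobs.
Details :
$(ls -ltr "$BASEDIR")
------------ END ----------------------------------------
Thank you
EOF
fi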

Deleting a directory contents using shell scripts

I am a newbie to shell scripting. I want to delete all the contents of a directory which is in the user's HOME directory, and also delete some files matching my conditions. After googling for some time, I created the following script.
#!/bin/bash
#!/sbin/fuser
PATH="$HOME/di"
echo "$PATH";
if [ -d $PATH ]
then
rm -r $PATH/*
fuser -kavf $PATH/.n*
rm -rf $PATH/.store
echo 'File deleted successfully :)'
fi
If I run the script, I get errors as follows:
/users/dinesh/di
dinesh: line 11: rm: command not found
dinesh: line 12: fuser: command not found
dinesh: line 13: rm: command not found
File deleted successfully :)
Can anybody help me with this?
Thanks in advance.
You are modifying the PATH variable, which the OS uses to determine where to find utilities (so that you can invoke them without having to type the full path to the binary). The system cannot find rm and fuser in the folders currently specified by PATH (since you overwrote it with the directory to be deleted), so it prints those errors.
tl;dr DO NOT use PATH as your own variable name.
PATH is a special variable that controls where the system looks for command executables (like rm, fuser, etc). When you set it to /users/dinesh/di, it then looks there for all subsequent commands, and (of course) can't find them. Solution: use a different variable name. Actually, I'd recommend using lowercase variables in shell scripts -- there are a number of uppercase reserved variable names, and if you try to use any of them you're going to have trouble. Sticking to lowercase is an easy way to avoid this.
BTW, in general it's best to enclose variables in double-quotes whenever you use them, to avoid trouble with some parsing the shell does after replacing them. For example, use [ -d "$path" ] instead of [ -d $path ]. $path/* is a bit more complicated, since the * won't work inside quotes. Solution: rm -r "$path"/*.
Random other notes: the #!/sbin/fuser line isn't doing anything. Only the first line of the script can act as a shebang. Also, don't bother putting ; at the end of lines in shell scripts.
#!/bin/bash
path="$HOME/di"
echo "$path"
if [ -d "$path" ]
then
rm -r "$path"/*
fuser -kavf "$path"/.n*
rm -rf "$path/.store"
echo 'File deleted successfully :)'
fi
This line:
PATH="$HOME/di"
removes all the standard directories from your PATH (so commands such as rm that are normally found in /bin or /usr/bin are 'missing'). You should write:
PATH="$HOME/di:$PATH"
This keeps what was already in $PATH, but puts $HOME/di ahead of that. It means that if you have a custom command in that directory, it will be invoked instead of the standard one in /usr/bin or wherever.
If your intention is to remove the directory $HOME/di, then you should not be using $PATH as your variable. You could use $path; variable names are case sensitive. Or you could use $dir or any of a myriad other names. You do need to be aware of the key environment variables and avoid clobbering or misusing them. Of the key environment variables, $PATH is one of the most important ($HOME is another; after those two, most of the rest are relatively less important). Conventionally, upper-case names are reserved for environment variables; use lower-case names for local variables in a script.

Using a filename with spaces with scp and chmod in bash

Periodically, I like to put files in the /tmp directory of my webserver to share out. What is annoying is that I must set the permissions whenever I scp the files. Following the advice from another question I've written a script which copies the file over, sets the permissions and then prints the URL:
#!/bin/bash
scp "$1" SERVER:"/var/www/tmp/$1"
ssh SERVER chmod 644 "/var/www/tmp/$1"
echo "URL is: http://SERVER/tmp/$1"
When I replace SERVER with my actual host, everything works as expected...until I execute the script with an argument including spaces. Although I suspect the solution might be to use $# I've not yet figured out how to get a spaced filename to work.
It turns out that what is needed is to escape the path which will be sent to the remote server. Bash thinks the quotes in SERVER:"/var/www/tmp/$1" are related to the $1 and removes them from the final output. If I try to run:
tmp-scp.sh Screen\ shot\ 2010-02-18\ at\ 9.38.35\ AM.png
Echoing we see it is trying to execute:
scp SERVER:/var/www/tmp/Screen shot 2010-02-18 at 9.38.35 AM.png
If instead the quotes are escaped literals then the scp command looks more like you'd expect:
scp SERVER:"/var/www/tmp/Screen shot 2010-02-18 at 9.38.35 AM.png"
With the addition of some code to truncate the path the final script becomes:
#!/bin/bash
# strip path
filename=${1##*/}
fullpath="$1"
scp "$fullpath" SERVER:\"/var/www/tmp/"$filename"\"
echo SERVER:\"/var/www/tmp/"$filename"\"
ssh SERVER chmod 644 \"/var/www/tmp/"$filename"\"
echo "URL is: http://SERVER/tmp/$filename"
The script looks right. My guess is that you need to quote the filename when you pass it into your script:
scp-chmod.sh "filename with spaces"
Or escape the spaces:
scp-chmod.sh filename\ with\ spaces
The easier way to avoid worrying about spaces in file names (besides quoting) is to rename your files to get rid of the spaces before transferring, or simply not to use spaces when you create the files in the first place. You can make this your "best practice" whenever you name files.
