I have a bash script to which I need to pass the files to be copied (*.cpp) and the directory to copy them to (cfiles/backup). The problem is that it only copies the first file instead of all the matching files in the directory.
#!/bin/bash
while getopts "ab:" input; do
case $input in
a)
#an option
;;
b)
# Get the wild card and destination passed in
# wildcard=$OPTARG
dest="${@: -1}"
#Make the directory if it doesn't exist
mkdir -p "$dest" 2>&1
find . -name "$OPTARG" -type f -exec cp {} "$dest" \; 2>&1
printf 'string = %b| destination = %b\n' "$OPTARG" "$dest"
;;
?)
echo "Error! Invalid option provided" >&2
exit 1
;;
:)
echo "Option -$OPTARG missing parameter!" >&2
;;
esac
done
The problem is it only ever copies one file. Any insight will be appreciated!
You need to add a -r at the end of your cp command. This performs a recursive file copy on the directory.
find . -name "$OPTARG" -type f -exec cp -r {} "$dest" \; 2>&1
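Separately, for find -name "$OPTARG" to see the whole pattern, the glob has to be quoted on the command line so the calling shell does not expand it first. A sketch of the assumed invocation (the script name backup.sh is only a placeholder):
# Quoting the glob keeps the calling shell from expanding it,
# so $OPTARG receives the literal pattern that find expects.
./backup.sh -b '*.cpp' cfiles/backup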
I use this bash command often
find ~ -type f -name \*.smt -exec grep something {} /dev/null \;
so I am trying to turn it into a simple bash script that I would invoke like this
findgrep ~ something --mtime -12 --name \*.smt
Thanks to this answer I managed to make it work like this:
if ! options=$(getopt -o abc: -l name:,blong,mtime: -- "$@")
then
exit 1
fi
eval "set -- $options"
while [ $# -gt 0 ]
do
case $1 in
-t|--mtime) mtime=${2} ; shift;;
-n|--name|--iname) name="$2" ; shift;;
(--) shift; break;;
(-*) echo "$0: error - unrecognized option $1" 1>&2; exit 1;;
(*) break;;
esac
shift
done
if [ $# -eq 2 ]
then
dir="$1"
str="$2"
elif [ $# -eq 1 ]
then
dir="."
str="$1"
else
echo "Need a search string"
exit
fi
echo "find $dir -type f -mtime $mtime -name $name -exec grep -iln \"$str\" {} /dev/null \;"
echo "find $dir -type f -mtime $mtime -name $name -exec grep -iln \"$str\" {} /dev/null \;" | bash
But the last line, echoing a command into bash, seems outright barbaric, even though it works.
Is there a better way to do this? Somehow, trying to execute the find command directly gives no output, while running the one echoed into bash works fine.
-name $name -exec
It's still not quoted. Check your script with shellcheck.
find "$dir" -type f -mtype "$mtime" -name "$name" -exec grep -iln "$str" {} ';'
You might want to take a few steps back and do some research about quoting and expansions in the shell, find, and globs. find expects a literal glob pattern, but unquoted variable expansions undergo filename expansion, turning *.smt into the list of words representing matching filenames; find wants the pattern, not the result of expanding it (see the short illustration below).
I can point you to: man find, man 7 glob, https://www.gnu.org/software/bash/manual/html_node/Quoting.html https://mywiki.wooledge.org/BashFAQ/050
https://mywiki.wooledge.org/BashGuide/Parameters#Parameter_Expansion
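A quick illustration of the difference, assuming a directory that contains a.smt and b.smt:
$ echo *.smt     # unquoted: the shell expands the glob before the command runs
a.smt b.smt
$ echo '*.smt'   # quoted: find would receive the literal pattern
*.smt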
Before you start deciding how to pass a variable number of arguments to find, I encourage you to research Bash arrays. I would do:
#!/bin/bash
fatal() {
echo "$0: ERROR: $*" >&2
exit 1
}
args=$(getopt -o abc: -l name:,iname:,mtime: -- "$@") || exit 1
eval "set -- $args"
findargs=() # bash array
while (($#)); do
case $1 in
-t|--mtime) findargs+=(-mtime "$2"); shift; ;;
-n|--name) findargs+=(-name "$2"); shift; ;;
--iname) findargs+=(-iname "$2"); shift; ;;
--) shift; break; ;;
-*) fatal "unrecognized option $1"; ;;
*) break; ;;
esac
shift
done
if (($# == 2)); then
dir="$1"
str="$2"
elif (($# == 1)); then
dir="."
str="$1"
else
fatal "Need a search string"
fi
set -x
find "$dir" -type f "${findargs[#]}" -exec grep -iln "$str" /dev/null {} +
I want to automatically create a directory without entering data from the keyboard.
Where should I place my *.war file for backup? I then have to copy this file to another directory, where I should remove the existing file and copy in the new one.
You can use the rsync command with the --delete argument. Example:
folder a: 2019-05-21.war
folder b: 2019-05-15.war
When you run rsync, --delete will erase anything in the destination folder that no longer exists in the source folder.
script examples:
#!/bin/bash
origin_dir="/opt/a"
dest_dir="/opt/b"
log=$(date +"/tmp/%F-bkp.log" -u)
rsync -avz --delete "$origin_dir"/ "$dest_dir"/ >> "$log" 2>&1
#if you want to keep backups for less than a week, delete the older files in origin
[ -d "$origin_dir/" ] && find "$origin_dir"/ -type f -name '*.war' -mtime +6 -exec rm {} \;
A slightly more verbose example, showing typical things that you can do easily in a shell script.
#!/bin/bash
trap f_cleanup 2 # clean-up when getting signal
PRG=`basename $0` # get the name of this script without path
DEST=$HOME/dest # XXX customize this: the target directory
#
# F U N C T I O N S
#
function f_usage()
{
echo "$PRG - copy a file to destination directory ($DEST)"
echo "Usage: $PRG filename"
exit 1
}
function f_cleanup()
{
echo ">>> Caught Signal, cleaning up.."
rm -f "$DEST/$FILE" # $FILE is set in MAIN; a trap handler gets no positional arguments
exit 1
}
#
# M A I N
#
case $# in
1)
FILE=$1 # command line argument is the file to be copied
;;
*)
echo "$PRG: wrong number of arguments ($#), expected 1"
f_usage
;;
esac
while getopts "h?" opt; do
case "$opt" in
h|\?)
f_usage
;;
esac
done
if [ ! -f $FILE ]; then
echo "$PRG: error: file not found ($FILE)" && exit 1
fi
if [ ! -d $DEST ]; then
echo "$PRG: warning: dest dir ($DEST) does not exist, trying to create it.."
mkdir -p $DEST && echo "$PRG: dest dir ($DEST) successfully created"
if [ $? -ne 0 ]; then
echo "$PRG: error: dest dir ($DEST) could not be created"
exit 1
fi
fi
cp -p $FILE $DEST
RET=$? # return status of copy command
case $RET in
0) echo "$PRG: copying $FILE to $DEST was successful"
rm $FILE
;;
*) echo "$PRG: copying $FILE to $DEST was not successful"
exit 1
;;
esac
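Assuming the script is saved as copyfile.sh (the name is an assumption) and DEST is left at $HOME/dest, a run would look roughly like this (with /home/user standing in for $HOME); note that on success the script also removes the source file:
$ ./copyfile.sh myapp.war
copyfile.sh: copying myapp.war to /home/user/dest was successful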
I would like to make a bash function that lists all the directories (and files) inside a given directory.
searchInRepo(){
file_list=`ls $1`
#echo $file_list
for aFile in $file_list; do
echo "$aFile --"
# case : directory
if [ -d $aFile ]
then
echo "$aFile ***"
cd $aFile
searchInRepo $aFile
cd ..
# case : file
elif [ -f $aFile ]
then
echo "$aFile is a regular file"
fi
done
}
As you see, this is a recursive function. When I call it with ls $1 (listing the parameter's files), it doesn't recognize the directories as directories. When I just use ls (no argument involved), everything works fine.
Any suggestions here ?
Cheers !
Why use ls when bash can do it for you? This will check to make sure the argument has a trailing /* so it will work with bare directory names.
if [[ ! "$1" =~ /\*$ ]]
then
if [[ ! "$1" =~ /$ ]]
then
searchpath="$1/*"
else
searchpath="$1*"
fi
fi
echo "Searching $searchpath"
for f in $searchpath; do
if [ -d "$f" ]
then
echo "Directory -> $f"
else
echo "File -> $f"
fi
done
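If the recursion from the original question is still wanted, the same glob-based idea can be wrapped in a function; this is only a sketch, reusing the question's searchInRepo name:
searchInRepo() {
    local f
    for f in "$1"/*; do
        if [ -d "$f" ]; then
            echo "Directory -> $f"
            searchInRepo "$f"   # recurse without changing directory
        elif [ -f "$f" ]; then
            echo "File -> $f"
        fi
    done
}
searchInRepo "$HOME/projects"   # example call; the path is arbitrary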
Why even use a for loop? find is made for this.
find . # List all files and directories recursively
find . -type f # List only files
find . -maxdepth 1 # List all files and directories in current directory
If you are using git, you can also use git ls-files
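With GNU find, the question's labelled output can also be produced directly via -printf (a sketch):
find . -mindepth 1 -type d -printf 'Directory -> %p\n'
find . -type f -printf 'File -> %p\n'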
if [ ! -f ./* ]; then
for files in $(find . -maxdepth 1 -type f); do
echo $files
else
echo Nothing here
fi
Returns
syntax error near unexpected token `else'
New to this. Can anyone point me to what I did wrong?
You forgot done!
if [ ! -f ./* ]; then
for files in $(find . -maxdepth 1 -type f); do
echo $files
done
else
echo Nothing here
fi
The reason you get a syntax error is that you are not ending the loop with the "done" statement. You should also be using a while loop instead of a for loop here, since the for loop will split any filenames that contain spaces and cannot cope with newlines in names.
Also, the test command you have used will itself fail if the glob expands to more than one file:
$ [ ! -f ./* ]
bash: [: too many arguments
Here is a better way to check if the directory contains any files:
files=(./*) # populate an array with file or directory names
hasfile=false
for file in "${files[@]}"; do
if [[ -f $file ]]; then
hasfile=true
break
fi
done
if $hasfile; then
while read -r file; do
echo "$file"
done < <(find . -maxdepth 1 -type f)
fi
Also, you could simply replace the while loop with find's -print action:
if $hasfile; then
find . -maxdepth 1 -type f -print
fi
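Both the existence check and the listing could also be collapsed into a single find call; a sketch of that idea (it assumes filenames without embedded newlines):
found=$(find . -maxdepth 1 -type f)
if [ -n "$found" ]; then
    printf '%s\n' "$found"   # the names collected by find, one per line
else
    echo "Nothing here"
fi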
The syntax for "for" is
for: for NAME [in WORDS ... ;] do COMMANDS; done
You are missing the "done"
Try
if [ ! -f ./* ]; then
for files in $(find . -maxdepth 1 -type f); do
echo $files
done
else
echo Nothing here
fi
BTW, did you mean echo with lowercase rather than ECHO?
I saw a question on Stack Overflow about parsing arguments. I tried to write this, but it's not working and now it's getting on my nerves.
The usual way of running a script on the terminal is ./scriptname, but I later introduced the argument -d. So, if I put ./scriptname it will not run. If I put ./scriptname -d it will.
Now I want to add another argument for the path (where the files are moved to, in this case "/home/elg19/documents") such that when I do not include the path, the script won't run. But if I put ./scriptname -d path, I want $To in the existing script to be replaced with the argument following -d.
#!/bin/bash
From="/home/mark/doc"
To=$2
if [ $1 = -d ]; then
cd "$From"
for i in pdf txt doc; do
find . -type f -name "*.${i}" -exec mv "{}" "$To" \;
done
fi
Your desired usage isn't completely clear, but it seems to be:
scriptname -d path
So, you can do it the extensible way, or the brute force way. Since you're changing directories willy-nilly, you also need to ensure that the paths are absolute, not relative.
Brute force
#!/bin/bash
From="/home/mark/doc"
if [ $# = 2 ] && [ "$1" = '-d' ] && [ -d $2 ]
then
case "$2" in
(/*) cd "$From" &&
for extn in pdf txt doc
do find . -type f -name "*.$extn" -exec mv {} "$2" \;
done;;
(*) echo "$0: path name must be absolute ($2 is not)" 1>&2; exit 1;;
esac
else
echo "Usage: $0 -d /absolute/dirname" 1>&2; exit 1
fi
Extensible
#!/bin/bash
From="/home/mark/doc"
To=""
usage()
{
echo "Usage: $(basename $0 .sh) -d /absolute/dirname" 1>&2
exit 1
}
while getopts d: opt
do
case "$opt" in
(d) if [ ! -d "$OPTARG" ]
then echo "$0: $OPTARG is not a directory" 1>&2; exit 1
else
case "$OPTARG" in
(/*) To="$OPTARG";;
(*) echo "$0: path name must be absolute ($2 is not)" 1>&2; exit 1;;
esac
fi;;
(*) usage;;
esac
done
shift $(($OPTIND - 1))
if [ $# != 0 ] || [ -z "$To" ]
then usage
fi
cd "$From" &&
for extn in pdf txt doc
do find . -type f -name "*.$extn" -exec mv {} "$To" \;
done
For example, it would be very easy to add a -f option to change the source ('from') directory of the files.
Note that you could also use:
for extn in pdf txt doc
do find "$From" -type f -name "*.$extn" -exec mv {} "$To" \;
done
This would allow you to permit relative names for the 'from' and 'to' directories because it does not change directory.
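For reference, the intended invocation from the question would then be (path taken from the question):
./scriptname -d /home/elg19/documents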
I assume you want to do some input validation on your command line arguments. I guess the following would be somewhat useful:
#!/bin/bash
usage() {
echo "USAGE :"
echo "./move -d <to-directory>"
}
if [ $# -ne 2 ] ; then
usage
exit
fi
case $1 in
-d ) shift
To=$1
;;
* ) usage
exit
esac
From="/tmp/From/"
cd "$From"
for i in pdf txt doc; do
find . -type f -name "*.${i}" -exec mv "{}" "$To" \;
done
Moreover to debug your script, you may use the following command:
bash -x ./move.sh -d /tmp/To/
You may add more error checking (and informative echos) for the following cases:
Source/destination directory does not exist
N files have been copied from the source to the destination
No files were available at the source
You can take the type of files as arguments, e.g. -t doc xls pdf
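A sketch of that last suggestion, collecting the types given after -t into an array; the option handling here is an assumption, not part of the answer's script:
#!/bin/bash
# assumed usage: ./move.sh -d /tmp/To -t doc xls pdf
To=""
types=()
while [ $# -gt 0 ]; do
    case $1 in
        -d ) To=$2 ; shift ; shift ;;
        -t ) shift
             # everything up to the next option is treated as a file type
             while [ $# -gt 0 ] && [[ $1 != -* ]]; do
                 types+=("$1") ; shift
             done ;;
        *  ) echo "unknown argument: $1" >&2 ; exit 1 ;;
    esac
done
for i in "${types[@]}"; do
    find . -type f -name "*.${i}" -exec mv "{}" "$To" \;
done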