md5 all files in a directory tree - bash

I have a directory with a structure like so:
.
├── Test.txt
├── Test1
│   ├── Test1.txt
│   ├── Test1_copy.txt
│   └── Test1a
│       ├── Test1a.txt
│       └── Test1a_copy.txt
└── Test2
    ├── Test2.txt
    ├── Test2_copy.txt
    └── Test2a
        ├── Test2a.txt
        └── Test2a_copy.txt
I would like to create a bash script that makes an md5 checksum of every file in this directory. I want to be able to type the script name in the CLI, then the path to the directory I want to hash, and have it work. I'm sure there are many ways to accomplish this. Currently I have:
#!/bin/bash
for file in "$1" ; do
md5 >> "${1}__checksums.md5"
done
This just hangs and is not working. Perhaps I should use find?
One caveat - the directories I want to hash will have files with different extensions and may not always have this exact same tree structure. I want something that will work in these different situations, as well.

Using md5deep
md5deep -r path/to/dir > sums.md5
Using find and md5sum
find relative/path/to/dir -type f -exec md5sum {} + > sums.md5
Be aware that when you check your MD5 sums with md5sum -c sums.md5, you need to run it from the same directory from which you generated the sums.md5 file. This is because find outputs paths relative to your current location, which are then put into the sums.md5 file.
If this is a problem, you can make relative/path/to/dir absolute (e.g. by putting $PWD/ in front of your path). This way you can run a check on sums.md5 from any location. The disadvantage is that sums.md5 now contains absolute paths, which makes it bigger.
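For example, a sketch reusing the paths from above:
find "$PWD"/relative/path/to/dir -type f -exec md5sum {} + > sums.md5
The sums.md5 it produces can then be verified with md5sum -c from any working directory, since every entry carries an absolute path.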
Fully featured function using find and md5sum
You can put this function in your .bashrc file (located in your $HOME directory):
function md5sums {
  if [ "$#" -lt 1 ]; then
    echo -e "At least one parameter is expected\n" \
      "Usage: md5sums [OPTIONS] dir"
  else
    local OUTPUT="checksums.md5"
    local CHECK=false
    local MD5SUM_OPTIONS=""

    while [[ $# -gt 1 ]]; do
      local key="$1"
      case $key in
        -c|--check)
          CHECK=true
          ;;
        -o|--output)
          OUTPUT="$2"
          shift
          ;;
        *)
          MD5SUM_OPTIONS="$MD5SUM_OPTIONS $1"
          ;;
      esac
      shift
    done
    local DIR="$1"

    if [ -d "$DIR" ]; then            # if $DIR directory exists
      cd "$DIR"                       # change to $DIR directory
      if [ "$CHECK" = true ]; then    # if -c or --check option specified
        # check MD5 sums stored in the $OUTPUT file
        md5sum --check $MD5SUM_OPTIONS "$OUTPUT"
      else
        # calculate MD5 sums for files in the current directory and
        # subdirectories, excluding $OUTPUT itself, and save them in $OUTPUT
        find . -type f ! -name "$OUTPUT" -exec md5sum $MD5SUM_OPTIONS {} + > "$OUTPUT"
      fi
      cd - > /dev/null                # change back to previous directory
    else
      cd "$DIR"                       # if $DIR doesn't exist, cd to it to generate a localized error message
    fi
  fi
}
After you run source ~/.bashrc, you can use md5sums like a normal command:
md5sums path/to/dir
will generate a checksums.md5 file in the path/to/dir directory, containing MD5 sums of all files in this directory and its subdirectories. Use:
md5sums -c path/to/dir
to check sums from path/to/dir/checksums.md5 file.
Note that path/to/dir can be relative or absolute; md5sums will work fine either way. The resulting checksums.md5 file always contains paths relative to path/to/dir.
You can use a different file name than the default checksums.md5 by supplying the -o or --output option. All options other than -c, --check, -o and --output are passed through to md5sum.
The first half of the md5sums function definition is responsible for parsing options; see this answer for more information about it. The second half contains explanatory comments.

How about:
find /path/you/need -type f -exec md5sum {} \; > checksums.md5
Update #1: Improved the command based on @twalberg's recommendation to handle white space in file names.
Update #2: Improved based on @jil's suggestion to remove the unnecessary xargs call and use find's -exec option instead.
Update #3: @Blake, a naive implementation of your script would look something like this:
#!/bin/bash
# Usage: checksumchecker.sh <path>
find "$1" -type f -exec md5sum {} \; > "$1"__checksums.md5

Updated Answer
If you like the answer below, or any of the others, you can make a function that does the command for you. So, to test it, type the following into Terminal to declare a function:
function sumthem(){ find "$1" -type f -print0 | parallel -0 -X md5 > checksums.md5; }
Then you can just use:
sumthem /Users/somebody/somewhere
If that works how you like, you can add that line to the end of your "bash profile" and the function will be declared and available whenever you are logged in. Your "bash profile" is probably in $HOME/.profile
Original Answer
Why not get all your CPU cores working in parallel for you?
find . -type f -print0 | parallel -0 -X md5sum
This finds all the files (-type f) in the current directory (.) and prints them with a null byte at the end. These are then passed into GNU Parallel, which is told that the filenames end with a null byte (-0) and that it should do as many files as possible at a time (-X) to save creating a new process for each file, and it should md5sum the files.
This approach will pay the largest bonus, in terms of speed, with big images like Photoshop files.

#!/bin/bash
shopt -s globstar
md5sum "$1"/** > "${1}__checksums.md5"
Explanation: shopt -s globstar enables the ** recursive glob wildcard. It means that "$1"/** will expand to the list of all files recursively under the directory given as parameter $1. The script then simply calls md5sum with this file list as parameters, and > "${1}__checksums.md5" redirects the output to the file.
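Two caveats worth knowing about, handled in the variant sketch below (assumes bash 4+): plain ** also expands to directory names themselves, which makes md5sum print an "Is a directory" warning for each one, and hidden files are skipped unless dotglob is set.
#!/bin/bash
shopt -s globstar dotglob            # ** recursion, plus hidden files
for f in "$1"/**; do
    [ -f "$f" ] && md5sum "$f"       # hash regular files only, skip directories
done > "${1}__checksums.md5"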

To reduce an entire tree to a single digest, hash the sorted list of per-file checksums:
md5deep -r "$your_directory" | awk '{print $1}' | sort | md5sum | awk '{print $1}'
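A rough equivalent without md5deep, as a sketch (sorting the per-file hashes makes the aggregate digest independent of traversal order):
find "$your_directory" -type f -exec md5sum {} + | awk '{print $1}' | sort | md5sum | awk '{print $1}'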

Use the find command to list all files in the directory tree, then use xargs to provide the input to the md5sum command:
find dirname -type f | xargs md5sum > checksums.md5
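Note that this plain pipe breaks on filenames containing spaces or quotes; a null-delimited sketch of the same idea is safer:
find dirname -type f -print0 | xargs -0 md5sum > checksums.md5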

In case you prefer to have separate checksum files in every directory, rather than a single file, you can
find all subdirectories
keep only those which actually contain files (not only other subdirs)
cd to each of them and create a checksums.md5 file inside that directory
Here is an example script which does that:
#!/bin/bash
# Do separate md5 files in each subdirectory

md5_filename=checksums.md5

dir="$1"
[ -z "$dir" ] && dir="."

# Check OS to select md5 command
if [[ "$OSTYPE" == "linux-gnu"* ]]; then
    is_linux=1
    md5cmd="md5sum"
elif [[ "$OSTYPE" == "darwin"* ]]; then
    md5cmd="md5 -r"
else
    echo "Error: unknown OS '$OSTYPE'. Don't know correct md5 command."
    exit 1
fi

# go to base directory after saving where we started
start_dir="$PWD"
cd "$dir"

# if we're in a symlink, cd to the real path
if [ ! "$dir" = "$(pwd -P)" ]; then
    dir="$(pwd -P)"
    cd "$dir"
fi

if [ "$PWD" = "/" ]; then
    echo "Refusing to do it on system root '$PWD'" >&2
    exit 1
fi

# Find all folders to process
declare -a subdirs=()
declare -a wanted=()

# find all non-hidden subdirectories (skip names beginning with "." like ".Trashes", ".Spotlight-V100", etc.)
while IFS= read -r; do subdirs+=("$PWD/$REPLY"); done < <(find . -type d -not -name ".*" | LC_ALL=C sort)

# count files and if there are any, add dir to "wanted" array
echo "Counting files and sizes to process ..."
for d in "$dir" "${subdirs[@]}"; do  # include "$dir" itself, not only its subdirs
    files_here=0
    while IFS= read -r; do
        (( files_here += 1 ))
    done < <(find "$d" -maxdepth 1 -type f -not -name "*.md5")
    (( files_here )) && wanted+=("$d")
done

echo "Found ${#wanted[@]} folders to process:"
printf " * %s\n" "${wanted[@]}"
if [ "${#wanted[@]}" = 0 ]; then
    echo "Nothing to do. Exiting."
    exit 0
fi

for d in "${wanted[@]}"; do
    cd "$d"
    find . -maxdepth 1 -type f -not -name "$md5_filename" -print0 \
        | LC_ALL=C sort -z \
        | while IFS= read -rd '' f; do
            $md5cmd "$f" | tee -a "$md5_filename"
        done
    cd "$dir"
done
cd "$start_dir"
(This is actually a very simplified version of this "md5dirs" script on Github. The original is quite specific and more complex, making it less illustrative as an example and more difficult to adapt to other needs.)

I wanted something similar to calculate the SHA256 of an entire directory, so I wrote this "checksum" script:
#!/bin/sh
cd "$1" || exit 1
find . -type f | LC_ALL=C sort |
(
    while IFS= read -r name; do
        sha256sum "$name"
    done
) | sha256sum
Example usage:
patrick@pop-os:~$ checksum tmp
d36bebfa415da8e08cbfae8d9e74f6606e86d9af9505c1993f5b949e2befeef0 -
In an earlier version I was feeding the file names to "xargs", but that wasn't working when file names had spaces.
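For what it's worth, a null-delimited sketch avoids the spaces problem without the explicit while loop (assumes GNU sort and xargs):
find . -type f -print0 | LC_ALL=C sort -z | xargs -0 sha256sum | sha256sum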

Related

Bash : Verify empty folder before ls -A

I'm starting to study some sh implementations and I'm running into some trouble when trying to do some actions with files inside some folders.
Here is the scenario:
I have a list of TXT files inside two different subfolders:
├── Folder A
│   ├── randomFile1.txt
│   └── randomFile2.txt
├── Folder B
│   └── File1.txt
└── Folder C
    └── File2.txt
Depending on the folder the file resides in, I should take a specific action.
Note: the files from Folder A should not be processed.
Basically, I tried two different approaches.
The first one:
files_b="$incoming/$origin"/FolderB/*.txt
files_c="$incoming/$origin"/FolderC/*.txt
if [ "$(ls -A $files_b)" ]; then
for file in $files_b
do
#take action
done
else
echo -e "\033[1;33mWarning: No files\033[0m"
fi
if [ "$(ls -A $files_c)" ]; then
for file in $files_c
do
#take action
done
else
echo -e "\033[1;33mWarning: No files\033[0m"
fi
The problem with this one is that when I run the ls -A command, if one of the folders (B or C) is empty, it throws an error because of the *.txt at the end of the path.
The second:
path="$incoming/$origin"/*.txt
find $path -type f -name "*.txt" | while read txt; do
for file in $txt
do
name=$(basename "$file")
dir=$(basename $(dirname $file))
if [ "$dir" == FolderB]; then
# Do something to files"
elif [ "$dir" == FolderC]; then
# Do something to files"
fi
done
done
For that approach the problem is that I'm picking up the files from Folder A and I don't want that (it will decrease performance due to the "if" statements), and I don't know how to verify whether a folder is empty using the find command.
Can anyone help me?
Thank you all.
I would write the code like this:
No unquoted parameter expansions
Don't use ls to check if the directory is empty
Use printf instead of echo.
# You cannot safely expand a parameter so that it does file globbing
# but does *not* do word-splitting. Put the glob directly in the loop
# or use an array.
shopt -s nullglob

found=
for file in "$incoming/$origin"/FolderB/*.txt; do
    found=1
    : # take action
done

if [ -z "$found" ]; then
    printf '\033[1;33mWarning: No files\033[0m\n'
fi
In the first solution you can simply hide the error messages.
if [ "$(ls -A $files_b 2>/dev/null)" ]; then
In the second solution, start find at the subdirectories you actually want, instead of the parent directory:
path="$incoming/$origin/FolderB $incoming/$origin/FolderC"
I think using find would be better:
files_b="${incoming}/${origin}/FolderB"
files_c="${incoming}/${origin}/FolderC"
find "$files_b" -name "*.txt" -exec action1 {} \;
find "$files_c" -name "*.txt" -exec action2 {} \;
or even just find
find "${incoming}/${origin}/FolderB" -name "*.txt" -exec action1 {} \;
find "${incoming}/${origin}/FolderC" -name "*.txt" -exec action2 {} \;
Of course you should think about your action, but you can make a function or a separate script which accepts file name(s).
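For instance, a minimal sketch (action1 here is a hypothetical placeholder; export -f makes the function visible to the bash -c child that find spawns):
action1() {
    printf 'processing %s\n' "$1"   # replace with the real action
}
export -f action1
find "${incoming}/${origin}/FolderB" -name "*.txt" -exec bash -c 'action1 "$1"' _ {} \;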

Copy specific files from subdirectories into one directory

I have a directory:
❯ find ./images -name *150x150.jpg
./images/2060511653921052666.images/thumb-150x150.jpg
./images/1777759401031970571.images/thumb-150x150.jpg
./images/1901716489977597520.images/thumb-150x150.jpg
./images/2008758225324557620.images/thumb-150x150.jpg
./images/1988762968386208381.images/thumb-150x150.jpg
./images/1802341648716075239.images/thumb-150x150.jpg
./images/2051017760380879322.images/thumb-150x150.jpg
./images/1974813836146304123.images/thumb-150x150.jpg
./images/2003120002653201215.images/thumb-150x150.jpg
./images/1911925394312129508.images/thumb-150x150.jpg
(...)
I would like to copy all those files (thumb-150x150.jpg) into one directory.
❯ find ./images -name *150x150.jpg -exec cp {} ./another-directory \;
But of course every file will be overwritten by the next one.
So how could I copy them to either:
1) 1.jpg, 2.jpg, 3.jpg... etc
or
2) use the subdirectory id (./images/2060511653921052666.images/thumb-150x150.jpg) as the target filename (2060511653921052666.jpg in this example) ?
you can use a loop:
i=1
find ./images -name '*150x150.jpg' | while read -r line; do
    cp "$line" /anotherdir/$i.jpg
    i=$((i+1))
done
A simple bash script, using $RANDOM to generate a random number for each image copied. A random number is embedded in the new name of each file.
Note that $RANDOM generates a random number between 0 and 32767, so the same number could be produced more than once. This is only likely if you have tens of thousands of images to copy.
It is fairly easy to improve the randomness of each number generated, if necessary.
#!/bin/bash
cd images
for d in *
do
    [ -d "$d" ] && cd "$d"
    for image in *150x150.jpg
    do
        cp -pv "$image" /another-dir/thumb${RANDOM}.jpg
    done
    [ -d "$d" ] && cd ..
done
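One easy sketch of that improvement: combine two $RANDOM draws, giving roughly 30 bits instead of 15, so collisions become far less likely. The cp line inside the loop would become:
r=$(( RANDOM * 32768 + RANDOM ))   # ~30 bits of randomness instead of 15
cp -pv "$image" /another-dir/thumb${r}.jpg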
If you have GNU parallel it's possible:
find ./images -name '*150x150.jpg' | \
    parallel 'mv {} ./another-directory/`stat -c%i {}`_{/}'
the stat -c%i {} gets the inode for each file
I tested the command below on my system:
find -iname "*.mp3" | parallel 'cp {} ~/tmp/{/.}_`stat -c%i {}`.mp3'
{/} .......... gets just the filename with no full path ("basename")
{.} .......... removes the extension
Before running the actual command you can put an echo in front of cp or mv in order to test your command.
source: https://stackoverflow.com/a/16074963/2571881

Rename files based on their parent directory in Bash

Been trying to piece together a couple of previous posts for this task.
The directory tree looks like this:
TEST
|ABC_12345678
3_XYZ
|ABC_23456789
3_XYZ
etc
Each folder within the parent folder named "TEST" always starts with ABC_\d{8} - the 8 digits are always different. Within the folder ABC_\d{8} there is always a folder entitled 3_XYZ that always has a file named "MD2_Phd.txt". The goal is to rename each "MD2_PhD.txt" file with the specific 8-digit ID found in the ABC folder name, i.e. "\d{8}_PhD.txt".
After several iterations on various bits of code from different posts, this is the best I can come up with:
cd /home/etc/Desktop/etc/TEST
find -type d -name 'ABC_(\d{8})' |
find $d -name "*_PhD.txt" -execdir rename 's/MD2$/$d/' "{}" \;
done
find + bash solution:
find -type f -regextype posix-egrep -regex ".*/TEST/ABC_[0-9]{8}/3_XYZ/MD2_Phd\.txt" \
    -exec bash -c 'abc="${0%/*/*}"; fp="${0%/*}/";
                   mv "$0" "$fp${abc##*_}_PhD.txt"' {} \;
Viewing results:
$ tree TEST/ABC_*
TEST/ABC_12345678
└── 3_XYZ
    └── 12345678_PhD.txt
TEST/ABC_1234ss5678
└── 3_XYZ
    └── MD2_Phd.txt
TEST/ABC_23456789
└── 3_XYZ
    └── 23456789_PhD.txt
You are piping find output to another find. That won't work.
Use a loop instead:
dir_re='^.+_([[:digit:]]{8})/'
for file in *_????????/3_XYZ/MD2_PhD.txt; do
    [[ -f $file ]] || continue
    if [[ $file =~ $dir_re ]]; then
        dir_num="${BASH_REMATCH[1]}"
        new_name="${file%MD2_PhD.txt}${dir_num}_PhD.txt"   # replace the MD2_PhD at the end
        echo mv "$file" "$new_name"                        # remove echo from here once tested
    fi
done

Split a folder into multiple subfolders in terminal/bash script

I have several folders, each with between 15,000 and 40,000 photos. I want each of these to be split into sub folders - each with 2,000 files in them.
What is a quick way to do this that will create each folder I need on the go and move all the files?
Currently I can only find how to move the first x items in a folder into a pre-existing directory. In order to use this on a folder with 20,000 items... I would need to create 10 folders manually, and run the command 10 times.
ls -1 | sort -n | head -2000 | xargs -i mv "{}" /folder/
I tried putting it in a for-loop, but am having trouble getting it to make folders properly with mkdir. Even after I get around that, I need the program to only create folders for every 20th file (start of a new group). It wants to make a new folder for each file.
So... how can I easily move a large number of files into folders of an arbitrary number of files in each one?
Any help would be very... well... helpful!
Try something like this:
for i in `seq 1 20`; do mkdir -p "folder$i"; find . -maxdepth 1 -type f | head -n 2000 | xargs -i mv "{}" "folder$i"; done
Full script version:
#!/bin/bash
dir_size=2000
dir_name="folder"
n=$((`find . -maxdepth 1 -type f | wc -l`/$dir_size+1))
for i in `seq 1 $n`;
do
    mkdir -p "$dir_name$i";
    find . -maxdepth 1 -type f | head -n $dir_size | xargs -i mv "{}" "$dir_name$i"
done
For dummies:
create a new file: vim split_files.sh
update the dir_size and dir_name values to match your desires
note that the dir_name will have a number appended
navigate into the desired folder: cd my_folder
run the script: sh ../split_files.sh
This solution worked for me on MacOS:
i=0; for f in *; do d=dir_$(printf %03d $((i/100+1))); mkdir -p $d; mv "$f" $d; let i++; done
It creates subfolders of 100 elements each.
This solution can handle names with whitespace and wildcards and can be easily extended to support less straightforward tree structures. It will look for files in all direct subdirectories of the working directory and sort them into new subdirectories of those. New directories will be named 0, 1, etc.:
#!/bin/bash
maxfilesperdir=20

# loop through all top level directories:
while IFS= read -r -d $'\0' topleveldir
do
    # enter top level subdirectory:
    cd "$topleveldir"

    declare -i filecount=0   # number of moved files per dir
    declare -i dircount=0    # number of subdirs created per top level dir

    # loop through all files in that directory and below
    while IFS= read -r -d $'\0' filename
    do
        # whenever file counter is 0, make a new dir:
        if [ "$filecount" -eq 0 ]
        then
            mkdir "$dircount"
        fi

        # move the file into the current dir:
        mv "$filename" "${dircount}/"
        filecount+=1

        # whenever our file counter reaches its maximum, reset it, and
        # increase dir counter:
        if [ "$filecount" -ge "$maxfilesperdir" ]
        then
            dircount+=1
            filecount=0
        fi
    done < <(find -type f -print0)

    # go back to top level:
    cd ..
done < <(find -mindepth 1 -maxdepth 1 -type d -print0)
The find -print0/read combination with process substitution has been stolen from another question.
It should be noted that simple globbing can handle all kinds of strange directory and file names as well. It is however not easily extensible for multiple levels of directories.
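A globbing-based sketch of that idea (assuming bash; it handles one directory level, matching the limitation just described, and nullglob makes empty directories expand to nothing):
#!/bin/bash
shopt -s nullglob
maxfilesperdir=20
for topleveldir in */; do
    cd "$topleveldir"
    filecount=0
    dircount=0
    for filename in *; do
        [ -f "$filename" ] || continue           # skip the numbered dirs we create
        (( filecount == 0 )) && mkdir "$dircount"
        mv "$filename" "$dircount/"
        (( filecount += 1 ))
        if (( filecount >= maxfilesperdir )); then
            (( dircount += 1 ))
            filecount=0
        fi
    done
    cd ..
done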
The code below assumes that the filenames do not contain linefeeds, spaces, tabs, single quotes, double quotes, or backslashes, and that filenames do not start with a dash. It also assumes that IFS has not been changed, because it uses while read instead of while IFS= read, and because variables are not quoted. Add setopt shwordsplit in Zsh.
i=1;while read l;do mkdir $i;mv $l $((i++));done< <(ls|xargs -n2000)
The code below assumes that filenames do not contain linefeeds and that they do not start with a dash. -n2000 takes 2000 arguments at a time and {#} is the sequence number of the job. Replace {#} with '{=$_=sprintf("%04d",$job->seq())=}' to pad numbers to four digits.
ls|parallel -n2000 mkdir {#}\;mv {} {#}
The command below assumes that filenames do not contain linefeeds. It uses the implementation of rename by Aristotle Pagaltzis which is the rename formula in Homebrew, where -p is needed to create directories, where --stdin is needed to get paths from STDIN, and where $N is the number of the file. In other implementations you can use $. or ++$::i instead of $N.
ls|rename --stdin -p 's,^,1+int(($N-1)/2000)."/",e'
I would go with something like this:
#!/bin/bash
# outnum generates the name of the output directory
outnum=1
# n is the number of files we have moved
n=0

# Go through all JPG files in the current directory
for f in *.jpg; do
    # Create new output directory if first of new batch of 2000
    if [ $n -eq 0 ]; then
        outdir=folder$outnum
        mkdir $outdir
        ((outnum++))
    fi
    # Move the file to the new subdirectory
    mv "$f" "$outdir"
    # Count how many we have moved to there
    ((n++))
    # Start a new output directory if we have sent 2000
    [ $n -eq 2000 ] && n=0
done
The answer above is very useful, but there is a very important point for the Mac (10.13.6) terminal. Because the xargs -i argument is not available there, I have changed the command from above to the one below.
ls -1 | sort -n | head -2000 | xargs -I '{}' mv {} /folder/
Then, I use the below shell script (referencing tmp's answer):
#!/bin/bash
dir_size=500
dir_name="folder"
n=$((`find . -maxdepth 1 -type f | wc -l`/$dir_size+1))
for i in `seq 1 $n`;
do
    mkdir -p "$dir_name$i";
    find . -maxdepth 1 -type f | head -n $dir_size | xargs -I '{}' mv {} "$dir_name$i"
done
This is a tweak of Mark Setchell's answer.
Usage:
bash splitfiles.bash $PWD/directoryoffiles splitsize
It doesn't require the script to be located in the same dir as the files for splitting, it operates on all files rather than just .jpg, and it allows you to specify the split size as an argument.
#!/bin/bash
# outnum generates the name of the output directory
outnum=1
# n is the number of files we have moved
n=0

if [ "$#" -ne 2 ]; then
    echo Wrong number of args
    echo Usage: bash splitfiles.bash $PWD/directoryoffiles splitsize
    exit 1
fi

# Go through all files in the specified directory
for f in "$1"/*; do
    # Create new output directory if first of new batch
    if [ $n -eq 0 ]; then
        outdir=$1/$outnum
        mkdir "$outdir"
        ((outnum++))
    fi
    # Move the file to the new subdirectory
    mv "$f" "$outdir"
    # Count how many we have moved to there
    ((n++))
    # Start a new output directory if current new dir is full
    [ $n -eq $2 ] && n=0
done
Can be directly run in the terminal
i=0
for f in *;
do
    d=picture_$(printf %03d $((i/2000+1)))
    mkdir -p $d
    mv "$f" $d
    let i++
done
This script will chunk all files within the current directory into newly created subdirectories picture_001, picture_002, ... and so on, with each folder containing 2000 files.
2000 is the chunk size
%03d is the numeric suffix padding you can adjust (currently 001, 002, 003)
picture_ is the folder prefix
You'll certainly have to write a script for that.
Hints of things to include in your script:
First count the number of files within your source directory:
NBFILES=$(find . -type f -name '*.jpg' | wc -l)
Divide this count by 2000 and add 1 to determine the number of directories to create:
NBDIR=$(( NBFILES / 2000 + 1 ))
Finally, loop through your files and move them across the subdirs.
You'll have to use two nested loops: one to pick and create the destination directory, the other to move 2000 files into this subdir, then create the next subdir and move the next 2000 files into it, and so on, as sketched below.
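A minimal sketch of those nested loops (assuming the .jpg files sit directly in the current directory and that no filename contains a newline, reusing the NBFILES/NBDIR names from above):
#!/bin/bash
# count the files, derive the number of destination directories
NBFILES=$(find . -maxdepth 1 -type f -name '*.jpg' | wc -l)
NBDIR=$(( NBFILES / 2000 + 1 ))
# outer loop: one destination directory per batch
for (( d = 1; d <= NBDIR; d++ )); do
    mkdir -p "dir$d"
    # inner loop: move the next batch of (up to) 2000 files into dir$d
    find . -maxdepth 1 -type f -name '*.jpg' | head -n 2000 | while read -r f; do
        mv "$f" "dir$d/"
    done
done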

How do I copy directory structure containing placeholders

I have a situation where a template directory - containing files and links (!) - needs to be copied recursively to a destination directory, preserving all attributes. The template directory contains any number of placeholders (__NOTATION__) that need to be renamed to certain values.
For example template looks like this:
./template/__PLACEHOLDER__/name/__PLACEHOLDER__/prog/prefix___FILENAME___blah.txt
Destination becomes like this:
./destination/project1/name/project1/prog/prefix_customer_blah.txt
What I tried so far is this:
# first create dest directory structure
while read line; do
    dest="$(echo "$line" | sed -e 's#__PLACEHOLDER__#project1#g' -e 's#__FILENAME__#customer#g' -e 's#template#destination#')"
    if ! [ -d "$dest" ]; then
        mkdir -p "$dest"
    fi
done < <(find ./template -type d)

# now copy files
while read line; do
    dest="$(echo "$line" | sed -e 's#__PLACEHOLDER__#project1#g' -e 's#__FILENAME__#customer#g' -e 's#template#destination#')"
    cp -a "$line" "$dest"
done < <(find ./template -type f)
However, I realized that if I want to take care of permissions and links, this is going to be endless and very complicated. Is there a better way to replace __PLACEHOLDER__ with "value", maybe using cp, find or rsync?
I suspect that your script will already do what you want, if only you replace
find ./template -type f
with
find ./template ! -type d
Otherwise, the obvious solution is to use cp -a to make an "archive" copy of the template, complete with all links, permissions, etc, and then rename the placeholders in the copy.
cp -a ./template ./destination
while read path; do
    dir=`dirname "$path"`
    file=`basename "$path"`
    mv -v "$path" "$dir/${file//__PLACEHOLDER__/project1}"
done < <(find ./destination -depth -name '*__PLACEHOLDER__*')
Note that you'll want to use -depth or else renaming files inside renamed directories will break.
If it's very important to you that the directory tree is created with the names already changed (i.e. you must never see placeholders in the destination), then I'd recommend simply using an intermediate location.
First copy with rsync, preserving all the properties and links etc.
Then change the placeholder strings in the destination filenames:
#!/bin/bash
TEMPL="$PWD/template"   # somewhere else
DEST="$PWD/dest"        # wherever it is

mkdir "$DEST"
(cd "$TEMPL"; rsync -Hra . "$DEST")

MyRen=$(mktemp)
trap "rm -f $MyRen" 0 1 2 3 13 15
cat >$MyRen <<'EOF'
#!/bin/bash
fn="$1"
newfn="$(echo "$fn" | sed -e 's#__PLACEHOLDER__#project1#g' -e 's#__FILENAME__#customer#g' -e 's#template#destination#')"
test "$fn" != "$newfn" && mv "$fn" "$newfn"
EOF
chmod +x $MyRen

find "$DEST" -depth -execdir $MyRen {} \;
