Sort files into sub folders by date - bash

Basically my HDD crashed. I was able to recover all the files, and they have kept their metadata (some have even kept their names), but I now have 274,000 images that need to be sorted into folders by date.
The script would start with the first file, read its date, and create a sub folder; until the date changes it would keep moving files into that folder, and once the date changes it would create a new folder and keep doing the same thing.
I'm sure this is possible, and I really don't want to do this manually, as it would take weeks...
Let's say I have a target folder /target/.
Target contains 274,000 files and no sub folders at all.
The folder structure should be /target/YY/DD_MM/filenames.
I would like to create a bash script for this, but I'm not really sure how to proceed from here.
I've found this:
#!/bin/bash
DIR=/home/data
target=$DIR
cd "$DIR"
for file in *; do
    dname="$( date -d "${file%-*}" "+$target/%Y/%b_%m" )"
    mkdir -vp "${dname%/*}"
    mv -vt "$dname" "$file"
done
Would creating a folder without checking whether it exists delete the files inside it?
I'm also not quite sure what the asterisk after the directory path does.
I'm not very familiar with bash, but I'd love to get this working if someone could explain to me a little more what's going on.
Thank you!
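(On the two worries above: mkdir never deletes anything. Without -p it simply fails if the directory already exists, and with -p it silently succeeds; the bare * after cd "$DIR" is a glob that expands to every non-hidden entry in the current directory. A quick check in a scratch directory:)
mkdir -p demo && touch demo/keep.txt
mkdir -p demo   # succeeds again; demo/keep.txt is untouched
ls demo         # -> keep.txt
echo *          # prints every non-hidden entry in the current directory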

I seem to have found an answer that suits me. This worked on OSX just fine on three files; before I run it on the massive folder, can you guys just check that this isn't going to fail somewhere?
#!/bin/bash
DIR=/Users/limeworks/Downloads/target
target=$DIR
cd "$DIR"
for file in *; do
    # Top tier folder name (year of last modification)
    year=$(stat -f "%Sm" -t "%Y" "$file")
    # Secondary folder name (day-month-year)
    subfolderName=$(stat -f "%Sm" -t "%d-%m-%Y" "$file")
    if [ ! -d "$target/$year" ]; then
        mkdir "$target/$year"
        echo "starting new year: $year"
    fi
    if [ ! -d "$target/$year/$subfolderName" ]; then
        mkdir "$target/$year/$subfolderName"
        echo "starting new day & month folder: $subfolderName"
    fi
    echo "moving file $file"
    mv "$file" "$target/$year/$subfolderName"
done
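Side note: stat -f "%Sm" -t ... is the BSD/macOS form of stat. To run the same script on Linux, the two calls could be replaced with GNU date -r, roughly like this (a sketch):
# GNU/Linux equivalents of the two stat calls above
year=$(date -r "$file" +%Y)                 # year of last modification
subfolderName=$(date -r "$file" +%d-%m-%Y)  # day-month-year folder name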

I've had issues with the performance of the other solutions, since my filesystem is remotely mounted and access times are high.
I've worked out some improved solutions in bash and python:
Bash version:
record # cat test.sh
for each in *.mkv
do
    date=$(date +%Y-%d-%m -r "$each")
    _DATES+=("$date")
    FILES+=("$each")
done
DATES=$(printf "%s\n" "${_DATES[@]}" | sort -u)
for date in ${DATES[@]}; do
    if [ ! -d "$date" ]; then
        mkdir "$date"
    fi
done
for i in "${FILES[@]}"; do
    dest=$(date +%Y-%d-%m -r "$i")
    mv "$i" "$dest/$i"
done
record # time bash test.sh
real 0m3.785s
record #
Python version:
import os, datetime, errno, argparse, sys

def create_file_list(CWD):
    """ takes string as path, returns tuple(files,date) """
    # note: relies on the global `ext` defined in __main__ below
    files_with_mtime = []
    for filename in [f for f in os.listdir(CWD) if os.path.splitext(f)[1] in ext]:
        files_with_mtime.append((filename, datetime.datetime.fromtimestamp(os.stat(filename).st_mtime).strftime('%Y-%m-%d')))
    return files_with_mtime

def create_directories(files):
    """ takes tuple(file,date) from create_file_list() """
    m = []
    for i in files:
        m.append(i[1])
    for i in set(m):
        try:
            os.makedirs(os.path.join(CWD, i))
        except OSError as exception:
            if exception.errno != errno.EEXIST:
                raise

def move_files_to_folders(files):
    """ gets tuple(file,date) from create_file_list() """
    for i in files:
        try:
            os.rename(os.path.join(CWD, i[0]), os.path.join(CWD, (i[1] + '/' + i[0])))
        except Exception as e:
            raise
    return len(files)

if __name__ == '__main__':
    parser = argparse.ArgumentParser(prog=sys.argv[0], usage='%(prog)s [options]')
    parser.add_argument("-e", "--extension", action='append', help="File extensions to match", required=True)
    args = parser.parse_args()
    ext = ['.' + e for e in args.extension]
    print "Moving files with extensions:", ext
    CWD = os.getcwd()
    files = create_file_list(CWD)
    create_directories(files)
    print "Moved %i files" % move_files_to_folders(files)
record # time python sort.py -e mkv
Moving files with extensions: ['.mkv']
Moved 319 files
real 0m1.543s
record #
Both scripts were tested on 319 mkv files modified in the last 3 days.

I worked on a little script and tested it. Hope this helps.
#!/bin/bash
pwd=`pwd`
# List all files, cut out the date column, remove duplicates (already sorted by ls).
dates=`ls -l --time-style=long-iso | grep -e '^-.*' | awk '{print $6}' | uniq`
# For each unique date, find all files modified on that date and copy them into a folder in your pwd.
for date in $dates; do
    if [ ! -d "$date" ]; then
        mkdir "$date"
    fi
    # find matches all files modified on that particular date, ignoring hidden files.
    forward_date=`date -d "$date + 1 day" +%F`
    find "$pwd" -maxdepth 1 -not -path '*/\.*' -type f -newermt "$date" ! -newermt "$forward_date" -exec cp -f {} "$pwd/$date" \;
done
You must run it from the working directory where the files to be copied by date are present.
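A hypothetical invocation (the script name sort_by_date.sh is my own label, not part of the answer):
cd /path/to/photos              # the directory holding the files to sort
bash /path/to/sort_by_date.sh   # dated YYYY-MM-DD folders are created in place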

check whether a directory contains all (and only) the listed files

I'm writing unit-tests to test file-IO functions.
There's no formalized test-framework in my target language, so my idea is to run a little test program that somehow manipulates files in a test-directory, and after that checks the results in a little shell script.
To evaluate the output, I want to check a given directory whether all expected files are there and no other files have been created during the test.
My first attempt goes like this:
set -e
test -e "${test_dir}/a.txt"
test -e "${test_dir}/b.txt"
test -d "${test_dir}/dir"
find "${test_dir}" -mindepth 1 \
-not -wholename "${test_dir}/a.txt" \
-not -wholename "${test_dir}/b.txt" \
-not -wholename "${test_dir}/dir" \
| grep . && exit 1 || true
This properly detects whether there are two files a.txt and b.txt, and a subdirectory dir/ in the ${test_dir}.
If there happens to be a file c.txt, the test should and will fail.
However, this doesn't scale well.
There are dozens of unit-tests and each has a different set of files/directories, so I find myself repeating lines very similar to the above again and again.
So I'd rather wrap the above into a function call like so:
if checkdirectory "${test_dir}" a.txt b.txt dir/ dir/subdir/ dir/.hidden.txt; then
    echo "ok"
else
    echo "ko"
fi
Unfortunately I have no clue how to implement checkdirectory (especially the find invocation with multiple -not -wholename ... stanzas gives me a headache).
To add a bit of fun, the constraints are:
support both (and differentiate between) files and directories
must (edited from "should") run on Linux, macOS & MSYS2/MinGW, therefore:
POSIX if possible (in reality it will be bash, but probably bash < 4, so no fancy features)
EDIT
some more constraints (these didn't make it into my original late-night question, so just consider them "extra constraints for bonus points"):
the test-directory may contain subdirectories and files in subdirectories (up to an arbitrary depth), so any check needs to operate on more than just the top-level directory
ideally, the paths may contain weird characters like spaces, linebreaks, ... (this really is unit-testing; we do want to test such cases)
the testdir is more often than not some randomly generated directory using mktemp -d, so it would be nice if we could avoid hardcoding it in the tests
no assumptions about the underlying filesystem can be made.
Assuming we have a directory tree as an example:
$test_dir/a.txt
$test_dir/b.txt
$test_dir/dir/c.txt
$test_dir/dir/"d e".txt
$test_dir/dir/subdir/
then would you please try:
#!/bin/sh
checkdirectory() {
    local i
    local count
    local testdir=$1
    shift
    for i in "$@"; do
        case "$i" in
            */) [ -d "$testdir/$i" ] || return 1 ;; # a trailing slash: check the directory exists
            *)  [ -f "$testdir/$i" ] || return 1 ;; # otherwise: check the file exists
        esac
    done
    # print one newline per entry found, then count the lines
    count=`find "$testdir" -mindepth 1 -printf "\n" | wc -l`
    [ "$count" -eq "$#" ] || return 1
    return 0
}
if checkdirectory "$test_dir" a.txt b.txt dir/ dir/c.txt "dir/d e.txt" dir/subdir/; then
echo "ok"
else
echo "ko"
fi
One easy, fast way would be to compare the output of find with a reference string.
Let's start with an expected directory and file structure:
d/FolderA/filexx.csv
d/FolderA/filexx.doc
d/FolderA/Sub1
d/FolderA/Sub2
testassert
#!/usr/bin/env bash
assertDirContent() {
    read -r -d '' s < <(find "$1" -printf '%y %p\n')
    [ "$2" = "$s" ]
}
testref='d d/FolderA/
f d/FolderA/filexx.csv
f d/FolderA/filexx.doc
d d/FolderA/Sub1
d d/FolderA/Sub2'
if assertDirContent 'd/FolderA/' "$testref"; then
    echo 'ok'
else
    echo 'Directory content assertion failed'
fi
Testing it:
$ ./testassert
ok
$ touch d/FolderA/unwantedfile
$ ./testassert
Directory content assertion failed
$ rm d/FolderA/unwantedfile
$ ./testassert
ok
$ rmdir d/FolderA/Sub1
$ ./testassert
Directory content assertion failed
$ mkdir d/FolderA/Sub1
$ ./testassert
ok
$ rmdir d/FolderA/Sub2
# Replace Sub2 with a file instead of a directory
$ touch d/FolderA/Sub2
$ ./testassert
Directory content assertion failed
Now, if you add timestamps and other info like permissions, owner, or group to the find -printf output, you can also check that all of these match the asserted string.
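For instance (a sketch; %m, %u, %g and %T@ are GNU find directives for permission bits, owner, group and modification time, and the reference string would then need the same extra fields):
assertDirContent() {
    read -r -d '' s < <(find "$1" -printf '%y %m %u %g %T@ %p\n')
    [ "$2" = "$s" ]
}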
I don't know what you mean by differentiating between files and directories since your last if statement is somehow binary. Here's what worked for me:
#! /bin/bash
function checkdirectory()
{
    test_dir="${1}"
    shift
    content="$@"
    for file in ${content}
    do
        [[ ! -e "${test_dir}/${file}" ]] && return 1
    done
    # -I is appended to "ls" once per expected name, so ls only reports unexpected files.
    matched=" -I ${content// / -I } -I ${test_dir}"
    [[ -e `ls $matched` ]] && return 1
    return 0
}
if checkdirectory /some/directory a.txt b.txt dir; then
    echo "ok"
else
    echo "ko"
fi
Here's a possible solution I dreamed up during the night.
It destroys the test data, so it might not be usable in many cases (though it might just work for paths generated on the fly during unit tests):
checkdirectory() {
    local i
    local testdir=$1
    shift
    # try to remove all the listed files
    for i in "$@"; do
        if [ "x${i}" = "x${i%/}" ]; then
            rm "${testdir}/${i}" || return 1
        fi
    done
    # the directories should now be empty,
    # so try to remove those dirs that are listed
    for i in "$@"; do
        if [ "x${i}" != "x${i%/}" ]; then
            rmdir "${testdir}/${i}" || return 1
        fi
    done
    # finally ensure that no files are left
    if find "${testdir}" -mindepth 1 | grep . >/dev/null ; then
        return 1
    fi
    return 0
}
When invoking the checkdirectory function, deeper directories must come first (that is checkdirectory foo/bar/ foo/ rather than checkdirectory foo/ foo/bar/).
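For example, with the directory tree from above, dir/subdir/ has to be listed before dir/ (the files can go anywhere in the list, since they are all removed in the first pass):
checkdirectory "$test_dir" a.txt b.txt dir/c.txt "dir/d e.txt" dir/subdir/ dir/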

Bash simple script copying files to specific folder + renaming to today's effective date

Good day,
I need your help in creating the following script.
Every day a teacher uploads files in this format:
STUDENT_ACCOUNTS_20200217074343-20200217.xlsx
STUDENT_MARKS_20200217074343-20200217.xlsx
STUNDENT_HOMEWORKS_20200217074343-20200217.xlsx
STUDENT_PHYSICAL_20200217074343-20200217.xlsx
SUBSCRIBED_STUDENTS_20200217074343-20200217.xlsx
[file_name+todaydatetime-todaydate.xlsx]
But sometimes the teacher doesn't upload these files, and we need to manually rename the files received for the previous date and then copy each file to its own folder, like:
cp STUDENT_ACCOUNTS_20200217074343-20200217.xlsx /incoming/A1/STUDENT_ACCOUNTS_20200318074343-20200318.xlsx
cp STUDENT_MARKS_20200217074343-20200217.xlsx /incoming/B1/STUDENT_ACCOUNTS_20200318074343-20200318.xlsx
.............
cp SUBSCRIBED_STUDENTS_20200217074343-20200217.xlsx /incoming/F1/SUBSCRIBED_STUDENTS_20200318074343-20200318.xlsx.
In short: take the files from the previous date and copy them to a specific folder with a new timestamp.
#!/bin/bash
cd /home/incoming/
date=$(date '+%Y%m%d')
previousdate="$( date --date=yesterday '+%Y%m%d' )"
cp /home/incoming/SUBSCRIBED_STUDENTS_"$previousdate".xlsx /incoming/F1/SUBSCRIBED_STUDENTS_"$date".xlsx
Also, there could be a case where the teacher uploads one file and not the others; how do I check for existing files?
Thanks for reading this; if you can help me I will be really thankful - you will save me plenty of manual work.
The process can be automated completely if your directory structure is known. If it follows some kind of pattern, do mention it here.
For the timing, this may be helpful:
Filename "tscp"
#
# Stands for timestamped cp
#
tscp() {
    local file1=$1      ; shift
    local to_dir=$1     ; shift
    local force_copy=$1 ; shift
    local current_date="$(date '+%Y%m%d')"
    if [ "${force_copy}" == "--force" ] ; then
        cp "${file1}" "${to_dir}/$(basename "${file1%-*}")-${current_date}.xlsx"
    else
        cp -n "${file1}" "${to_dir}/$(basename "${file1%-*}")-${current_date}.xlsx"
    fi
}
tscp "$@"
Its usage is as follows:
tscp source to_directory [--force]
Basically the script takes 2 arguments, and the 3rd one is optional.
The first arg is the source file path and the second is the directory path to copy to (. if the same directory).
By default the copy is made if and only if the destination file doesn't exist.
If you want to overwrite the destination file, pass --force as the third arg.
Again, this can be refined much much more based on details provided.
Sample usage for now:
bash tscp SUBSCRIBED_STUDENTS_20200217074343-20200217.xlsx /incoming/F1/
will copy SUBSCRIBED_STUDENTS_20200217074343-20200217.xlsx to directory /incoming/F1/ with updated date if it doesn't exist yet.
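To overwrite an existing destination file instead, the optional third argument from the function above is passed:
bash tscp SUBSCRIBED_STUDENTS_20200217074343-20200217.xlsx /incoming/F1/ --force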
UPDATE:
Give this a go:
#! /usr/bin/env bash
printf_err() {
    ERR_COLOR='\033[0;31m'
    NORMAL_COLOR='\033[0m'
    printf "${ERR_COLOR}$1${NORMAL_COLOR}" ; shift
    printf "${ERR_COLOR}%s${NORMAL_COLOR}\n" "$@" >&2
}
alias printf_err='printf_err "Line ${LINENO}: " '
shopt -s expand_aliases
usage() {
    printf_err \
        "" \
        "usage: ${BASH_SOURCE##*/} " \
        "    -f copy_data_file" \
        "    -d days_before" \
        "    -m months_before" \
        "    -o" \
        "    -y years_before" \
        "    -r " \
        "    -t to_dir" \
        >&2
    exit 1
}
fullpath() {
    local path="$1" ; shift
    local abs_path
    if [ -z "${path}" ] ; then
        printf_err "${BASH_SOURCE}: Line ${LINENO}: param1(path) is empty"
        return 1
    fi
    abs_path="$( cd "$( dirname "${path}" )" ; pwd )/$( basename "${path}" )"
    printf "${abs_path}"
}
OVERWRITE=0
REVIEW=0
COPYSCRIPT="$( mktemp "/tmp/copyscriptXXXXX" )"
while getopts 'f:d:m:y:t:or' option
do
    case "${option}" in
        d)
            DAYS="${OPTARG}"
            ;;
        f)
            INPUT_FILE="${OPTARG}"
            ;;
        m)
            MONTHS="${OPTARG}"
            ;;
        t)
            TO_DIR="${OPTARG}"
            ;;
        y)
            YEARS="${OPTARG}"
            ;;
        o)
            OVERWRITE=1
            ;;
        r)
            REVIEW=1
            COPYSCRIPT="copyscript"
            ;;
        *)
            usage
            ;;
    esac
done
INPUT_FILE=${INPUT_FILE:-$1}
TO_DIR=${TO_DIR:-$2}
if [ ! -f "${INPUT_FILE}" ] ; then
    printf_err "No such file ${INPUT_FILE}"
    usage
fi
DAYS="${DAYS:-1}"
MONTHS="${MONTHS:-0}"
YEARS="${YEARS:-0}"
if date -v -1d > /dev/null 2>&1; then
    # BSD date
    previous_date="$( date -v -${DAYS}d -v -${MONTHS}m -v -${YEARS}y '+%Y%m%d' )"
else
    # GNU date
    previous_date="$( date --date="-${DAYS} days -${MONTHS} months -${YEARS} years" '+%Y%m%d' )"
fi
current_date="$( date '+%Y%m%d' )"
tmpfile="$( mktemp "/tmp/dstnamesXXXXX" )"
# rewrite the previous date to the current date in every line of the input file
awk -v to_replace="${previous_date}" -v replaced="${current_date}" '{
    gsub(to_replace, replaced, $0)
    print
}' ${INPUT_FILE} > "${tmpfile}"
paste ${INPUT_FILE} "${tmpfile}" |
while IFS=$'\t' read -r -a arr
do
    src=${arr[0]}
    dst=${arr[1]}
    opt=${arr[2]}
    if [ -n "${opt}" ] ; then
        if [ ! -d "${dst}" ] ; then
            printf_err "No such directory ${dst}"
            usage
        fi
        dst="${dst}/$( basename "${opt}" )"
    else
        if [ ! -d "${TO_DIR}" ] ; then
            printf_err "No such directory ${TO_DIR}"
            usage
        fi
        dst="${TO_DIR}/$( basename "${dst}" )"
    fi
    src=$( fullpath "${src}" )
    dst=$( fullpath "${dst}" )
    if [ "${OVERWRITE}" -eq 1 ] ; then
        echo "cp ${src} ${dst}"
    else
        echo "cp -n ${src} ${dst}"
    fi
done > "${COPYSCRIPT}"
if [ "${REVIEW}" -eq 0 ] ; then
    ${BASH} "${COPYSCRIPT}"
    rm "${COPYSCRIPT}"
fi
rm "${tmpfile}"
Steps:
Store the above script in a file, say `tscp`.
Now you need to create the input file for it.
From you example, a sample input file can be like:
STUDENT_ACCOUNTS_20200217074343-20200217.xlsx /incoming/A1/
STUDENT_MARKS_20200217074343-20200217.xlsx /incoming/B1/
STUNDENT_HOMEWORKS_20200217074343-20200217.xlsx
STUDENT_PHYSICAL_20200217074343-20200217.xlsx
SUBSCRIBED_STUDENTS_20200217074343-20200217.xlsx /incoming/FI/
Where the first part is the source file name and, after a "tab" (it really must be a tab), you mention the destination directory. These paths should be either absolute or relative to the directory where you execute the script. You may omit the destination directory if all files are to be sent to the same directory (discussed later).
Let's say you named this file `file`.
Also, you don't really have to type all that. If you have these files in the current directory, just do this:
ls -1 > file
(the above is ls "one", not "l".)
Now we have the `file` from above, in which we mentioned the destination directory only for some entries.
Let's say we want to move all the others to `/incoming/x`, which exists.
Now script is to be executed like:
bash tscp -f file -t /incoming/x -r
Where `/incoming/x` is the default directory, i.e. when no other directory is mentioned in `file`, your files are moved to this directory.
Now a script named `copyscript` will be generated in the current directory, containing the `cp` commands to copy all files. You can open and review `copyscript`, and if the copying looks right, go ahead and:
bash copyscript
which will copy all the files and then you can:
rm copyscript
You need not generate `copyscript`; you can go straight for the copy like:
bash tscp -f file -t /incoming/x
which won't generate any copyscript and copies straight away.
As described above, `-r` causes the generation of `copyscript`.
I would recommend using the version with `-r`, because it is a little safer and you will be sure that the right copies are being made.
By default it would check for the previous day and rename to current date, but you can override that behaviour as:
bash tscp -f file -t /incoming/x -d 3
`-d 3` would look for 3 days back files in `file`.
By default copies won't overwrite, i.e. if a file already exists at the destination, the copy won't be made.
If you want to overwrite, add flag `-o`.
In conclusion, I would advise using:
bash tscp -f file -r
where `file` contains tab-separated values, like above, for all entries.
Also, adding `tscp` to your PATH would be a good idea once you are sure it works OK.
Also, the script was made on a Mac, and there is always a chance of version clashes among the tools used. I would suggest trying the script on some sample data first to make sure it works right on your machine.

Bash script overwriting existing .tar.gz file

I have written a small bash file to back up a repository. If the archive file doesn't exist, it should be created. Thereafter, files found should be appended to the existing file.
For some reason, the archive keeps getting (re)created/overwritten. This is my script below. Can anyone see where the logic error is coming from?
#!/bin/bash
REPOSITORY_DIR=$PWD/../repository
STORAGE_DAYS=30
# https://stackoverflow.com/questions/48585148/reading-names-of-imediate-child-folders-into-an-array-and-iterating-over-it?noredirect=1#comment84167181_48585148
while IFS= read -rd '' file; do
    fbname=$(basename "$file")
    # Find all persisted (.CSV) files that are older than $STORAGE_DAYS
    files_to_process=($(find $REPOSITORY_DIR/$fbname -type f -name '*.csv' -mtime +$STORAGE_DAYS))
    backup_datafile=$REPOSITORY_DIR/$fbname/$fbname.data.tar.gz
    #echo $backup_datafile
    # If the tar.gz file does not exist, we want to create it
    # else (i.e. file exists), we want to add files to it
    # Solution from: https://stackoverflow.com/questions/28185012/how-to-create-tar-for-files-older-than-7-days-using-linux-shell-scripting
    NUM_FILES_FOUND=${#files_to_process[@]}
    if [ $NUM_FILES_FOUND -gt 0 ]; then
        echo "Found ${#files_to_process[@]} files to process ..."
        if [ ! -f backup_datafile ]; then
            # Creating a new tar.gz file, since file was not found
            echo "Creating new backup file: $backup_datafile"
            tar cvfz $backup_datafile "${files_to_process[@]}"
        else
            echo "Adding files to existing backup file: $backup_datafile"
            # https://unix.stackexchange.com/questions/13093/add-update-a-file-to-an-existing-tar-gz-archive
            gzip -dc $backup_datafile | tar -r "${files_to_process[@]}" | gzip >$backup_datafile.new
            mv $backup_datafile.new $backup_datafile
        fi
        # remove processed files
        for filename in "${files_to_process[@]}"; do
            rm -f "$filename"
        done
    else
        echo "Skipping directory: $REPOSITORY_DIR/$fbname/. No files found"
    fi
done < <(find "$REPOSITORY_DIR" -maxdepth 1 -mindepth 1 -type d -print0)
This is where it went wrong:
if [ ! -f backup_datafile ]; then
I guess it should have been
if [ ! -f $backup_datafile ]; then
^
Or better yet, put that in quotes:
if [ ! -f "$backup_datafile" ]; then
if [ ! -f backup_datafile ]; then
I think you might be missing a "$" there.
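As a further side note, beyond the missing "$": tar cannot append (-r) to a gzipped archive through a pipe like that, since appending needs a seekable, uncompressed archive passed via -f. A common workaround, sketched here with a hypothetical .tar path, is to append uncompressed and compress only once at the end:
# sketch: keep the working archive uncompressed while appending
backup_tar=$REPOSITORY_DIR/$fbname/$fbname.data.tar
if [ ! -f "$backup_tar" ]; then
    tar -cvf "$backup_tar" "${files_to_process[@]}"   # create
else
    tar -rvf "$backup_tar" "${files_to_process[@]}"   # append
fi
# gzip -f "$backup_tar"   # compress once, after the last append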

Bash: Recreate recursively sub-directories

I've got a lot of files in a lot of sub-directories.
I would like to perform some task on them and write the result to a new file, in an output directory which has exactly the same sub-directories as the input.
I already tried this:
#!/bin/bash
########################################################
# $1 = "../benchmarks/k"
# $2 = Output Folder
# $3 = Path to access the solver
InputFolder=$1
OutputFolder=$2
Solver=$3
mkdir -p $2
########################################
#
# Send the command to the cluster
# to run the solver on the instance.
#
########################################
solveInstance() {
    instance=$1
    # $3 $instance > $2/$i.out
}
########################################
#
# Loop over benchmark folders recursively
#
########################################
loop_folder_recurse() {
    for i in "$1"/*;
    do
        if [ -d "$i" ]; then
            echo "dir: $i"
            mkdir -p "$2/$i"
            loop_folder_recurse "$i"
        elif [ -f "$i" ]; then
            solveInstance "$i"
        fi
    done
}
########################################
#
# Main of the Bash script.
#
########################################
echo "Dir: $1"
loop_folder_recurse $1
########################################################
The problem is my line mkdir -p "$2/$i". $2 is the name of a directory created at the beginning, so no problem there. But $i can be an absolute path, in which case the command tries to create all the sub-directories leading up to the file: not possible. Or it can contain .., and the same kind of problem appears...
I don't know exactly how to fix this bug. I tried some things with sed, but I did not succeed.
The easiest way is to use find:
for i in `find "$1" -type d`  # find all the subfolders and loop
do
    mkdir "${i/$1/$2}"        # replace the old root with the new root and create the dir
done
In this way you recreate the folder structure of $1 inside $2. You can even avoid the loop if you use sed to replace the old folder path with the new one.
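A loop-free variant along those lines (a sketch; it assumes GNU xargs, directory names without newlines, and paths without characters special to sed):
find "$1" -type d | sed "s|^$1|$2|" | xargs -d '\n' mkdir -p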

How to split the file path to extract the various subfolders into variables? (Ubuntu Bash)

I need help with an Ubuntu Precise bash script.
I have several tiff files in various folders:
masterFOlder --masterSub1   --masterSub1-1   --file1.tif
             |--masterSub1-2 --masterSub1-2-1 --file2.tif
             |
             |--masterSub2   --masterSub1-2 .....
I need to run an ImageMagick command on them and save the results to a new folder "converted" while retaining the sub-folder tree, i.e. the new tree will be:
converted --masterSub1   --masterSub1-1   --file1.png
          |--masterSub1-2 --masterSub1-2-1 --file2.png
          |
          |--masterSub2   --masterSub1-2 .....
How do I split the file path into folders, replace the first folder ("masterFOlder" with "converted") and recreate the new file path?
Thanks to everyone reading this.
This script should work.
#!/bin/bash
shopt -s extglob && [[ $# -eq 2 && -n $1 && -n $2 ]] || exit
MASTERFOLDER=${1%%+(/)}/
CONVERTFOLDER=$2
OFFSET=${#MASTERFOLDER}
while read -r FILE; do
    CPATH=${FILE:OFFSET}
    CPATH=${CONVERTFOLDER}/${CPATH%.???}.png
    CDIR=${CPATH%/*}
    echo "Converting $FILE to $CPATH."
    [[ -d $CDIR ]] || mkdir -p "$CDIR" && echo convert "$FILE" "$CPATH" || echo "Conversion failed."
done < <(exec find "${MASTERFOLDER}" -mindepth 1 -type f -iname '*.tif')
Just replace echo convert "$FILE" "$CPATH" with the actual command you use and run: bash script.sh masterfolder convertedfolder
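For reference, the parameter expansions doing the heavy lifting behave like this on a hypothetical path:
FILE=masterFOlder/masterSub1/masterSub1-1/file1.tif
OFFSET=13                    # length of "masterFOlder/", i.e. ${#MASTERFOLDER}
echo "${FILE:OFFSET}"        # masterSub1/masterSub1-1/file1.tif (root stripped)
CPATH="converted/${FILE:OFFSET}"
CPATH="${CPATH%.???}.png"    # drop the 3-character extension, append .png
echo "${CPATH%/*}"           # converted/masterSub1/masterSub1-1 (directory part)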
