How to search for keywords in metadata across all files in a folder recursively? - bash

I need to search all subdirectories and files recursively from a starting location and print the name of any file whose metadata matches one of my specified keywords.
e.g. If John Smith was listed as the author of hello.js in the metadata and one of my keywords was 'john' I would want the script to print hello.js.
I think the solution could be a combination of mdls and grep but I have not used bash much before so am a bit stuck.
I have tried the following command, but it only prints the matching line when 'john' is found, not the filename.
mdls hello.js | grep john
Thanks in advance.
(For reference I am using macOS.)

Piping the output of mdls into grep as you show in your question doesn't carry forward the filename. The following script iterates recursively over the files in the selected directory and checks to see if one of the attributes matches the desired pattern (using regex). If it does, the filename is output.
#!/bin/bash
shopt -s globstar    # expand ** recursively
shopt -s nocasematch # ignore case
pattern="john"
attrib=kMDItemAuthors
for file in /Users/me/myfiles/**/*.js
do
    attrib_value=$(mdls -name "$attrib" "$file")
    if [[ $attrib_value =~ $pattern ]]
    then
        printf 'Pattern: %s found in file %s\n' "$pattern" "$file"
    fi
done
You can use a literal test instead of a regular expression:
if [[ $attrib_value == *$pattern* ]]
In order to use globstar you will need a later version of Bash than the one installed by default on macOS. If that's not possible then you can use find, but there are challenges in dealing with filenames that contain newlines. This script takes care of that; note that both the function and the variables it reads must be exported so that the bash invoked by xargs can see them.
#!/bin/bash
dir=/Users/me/myfiles/

check_file () {
    shopt -s nocasematch # ignore case (set inside the function so the bash -c subshell gets it too)
    local attrib=$1
    local pattern=$2
    local file=$3
    local attrib_value=$(mdls -name "$attrib" "$file")
    if [[ $attrib_value =~ $pattern ]]
    then
        printf 'Pattern: %s found in file %s\n' "$pattern" "$file"
    fi
}
export -f check_file
export pattern="john"
export attrib=kMDItemAuthors
find "$dir" -name '*.js' -print0 | xargs -0 -I {} bash -c 'check_file "$attrib" "$pattern" "$1"' _ {}
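On macOS there is also mdfind, which queries the same Spotlight index that mdls reads, recursing and matching case-insensitively in one step. A sketch, assuming the author is indexed under the standard kMDItemAuthors attribute:
# Print every .js file under the directory whose Authors metadata
# contains "john"; the trailing 'c' makes the comparison case-insensitive
mdfind -onlyin /Users/me/myfiles 'kMDItemAuthors == "*john*"c && kMDItemFSName == "*.js"'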

Related

Remove numbers at beginning of filenames in directory in bash

In an attempt to rename the files in one directory with numbers at the front, I made an error in my script so that this happened in the wrong directory. Therefore I now need to remove these numbers from the beginning of all of my filenames in a directory. These range from 1 to 3 digits. Examples of the filenames I am working with are:
706terrain_Slope1000m_Minimum_all_25PCs_bolt_all_25PCs_qq_bolt.png
680met_sfcWind_all_25PCs_bolt_number.txt
460greenness_NDVI_500m_min_all_25PCs_bolt_number.txt
I was thinking of using mv but I'm not really sure how to do it with varying numbers of digits at the beginning, so any advice would be appreciated!
A simple way in bash is making use of a regular expression test:
for file in *; do
    [[ -f "${file}" ]] && [[ "${file}" =~ (^[0-9]+) ]] && mv "${file}" "${file/${BASH_REMATCH[1]}}"
done
This does the following:
[[ -f "${file}" ]]: test if file is a file, if so
[[ "${file}" =~ (^[0-9]+) ]]: check if file starts with a number
${file/${BASH_REMATCH[1]}}: remove the number from the string file by using BASH_REMATCH, an array variable that holds the groups captured by the regex match.
If you've got perl's rename installed, the following should work :
rename 's/^[0-9]{1,3}//' /path/to/files
/path/to/files can be a list of specific files, or probably in your case a glob (e.g. *.{png,txt}). You don't need to select only the files starting with digits, as rename won't modify those that do not match.
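If you're unsure what a rename invocation will do, perl's rename has a dry-run switch (the common -n/--nono option), so you can preview the renames without touching anything:
rename -n 's/^[0-9]{1,3}//' *.png *.txt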
Using bash parameter expansion:
shopt -s extglob
for i in +([0-9])*.{txt,png}; do
    mv -- "$i" "${i##+([0-9])}"
done
This removes any number of leading digits from filenames with a png or txt extension.
The ## removes the longest matching prefix pattern.
The +(...) is extended pattern-matching syntax for one or more occurrences of the enclosed pattern.
And [0-9] matches a single digit.
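To see the expansion in isolation before trusting it with mv, you can try it on one of the example names:
$ shopt -s extglob
$ f=680met_sfcWind_all_25PCs_bolt_number.txt
$ echo "${f##+([0-9])}"
met_sfcWind_all_25PCs_bolt_number.txt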
Alternate method using GNU find:
#!/usr/bin/env bash
find ./ \
  -maxdepth 1 \
  -type f \
  -name '[[:digit:]]*' \
  -exec bash -c 'shopt -s extglob; f="${1##*/}"; d="${1%%/*}"; mv -- "$1" "${d}/${f##+([[:digit:]])}"' _ {} \;
Find all actual files in the current directory whose names start with a digit.
For each found file, execute the Bash script below:
shopt -s extglob # need for extended pattern syntax
f="${1##*/}" # Get file name without directory path
d="${1%%/*}" # Get directory path without file name
mv -- "$1" "${d}/${f##+([[:digit:]])}" # Rename without the leading digits
Using basic features of a POSIX-compliant shell:
#!/bin/sh
for f in [[:digit:]]*; do
    if [ -f "$f" ]; then
        pf="${f%${f#???}}"          # pf = the first three characters of the name
        pf="${pf##*[[:digit:]]}"    # strip the digits, keep any trailing non-digits
        mv "$f" "$pf${f#???}"       # non-digit remainder + everything after char 3
    fi
done

How to use bash string formatting to reverse date format?

I have a lot of files that are named as: MM-DD-YYYY.pdf. I want to rename them as YYYY-MM-DD.pdf. I'm sure there is some bash magic to do this. What is it?
For files in the current directory:
for name in ./??-??-????.pdf; do
    if [[ "$name" =~ (.*)/([0-9]{2})-([0-9]{2})-([0-9]{4})\.pdf ]]; then
        echo mv "$name" "${BASH_REMATCH[1]}/${BASH_REMATCH[4]}-${BASH_REMATCH[2]}-${BASH_REMATCH[3]}.pdf"
    fi
done
Recursively, in or under the current directory:
find . -type f -name '??-??-????.pdf' -exec bash -c '
    for name do
        if [[ "$name" =~ (.*)/([0-9]{2})-([0-9]{2})-([0-9]{4})\.pdf ]]; then
            echo mv "$name" "${BASH_REMATCH[1]}/${BASH_REMATCH[4]}-${BASH_REMATCH[2]}-${BASH_REMATCH[3]}.pdf"
        fi
    done' bash {} +
Enabling the globstar shell option in bash lets us do the following (will also, like the above solution, handle all files in or below the current directory):
shopt -s globstar
for name in **/??-??-????.pdf; do
    if [[ "$name" =~ (.*)/([0-9]{2})-([0-9]{2})-([0-9]{4})\.pdf ]]; then
        echo mv "$name" "${BASH_REMATCH[1]}/${BASH_REMATCH[4]}-${BASH_REMATCH[2]}-${BASH_REMATCH[3]}.pdf"
    fi
done
All three of these solutions use a regular expression to pick out the relevant parts of the filenames, and then rearrange these parts into the new name. The only difference between them is how the list of pathnames is generated.
The code prefixes mv with echo for safety. To actually rename files, remove the echo (but run at least once with echo to see that it does what you want).
A direct approach example from the command line:
$ ls
10-01-2018.pdf 11-01-2018.pdf 12-01-2018.pdf
$ ls [0-9]*-[0-9]*-[0-9]*.pdf|sed -r 'p;s/([0-9]{2})-([0-9]{2})-([0-9]{4})/\3-\1-\2/'|xargs -n2 mv
$ ls
2018-10-01.pdf 2018-11-01.pdf 2018-12-01.pdf
The ls output is piped to sed, where the p command prints the input unmodified (the original name of the file) and the s command performs and outputs the conversion.
The combined ls + sed output is thus a sequence of old_file_name, new_file_name pairs.
Finally we pipe this through xargs -n2 mv, which runs mv on each pair to perform the actual rename.
From xargs man:
-n number Execute command using as many standard input arguments as possible, up to number arguments maximum.
You can use the following command, very close to klashxx's:
for f in *.pdf; do echo "$f"; mv "$f" "$(echo "$f" | sed 's#\(..\)-\(..\)-\(....\)#\3-\1-\2#')"; done
before:
ls *.pdf
12-01-1998.pdf 12-03-2018.pdf
after:
ls *.pdf
1998-12-01.pdf 2018-12-03.pdf
Also, if your folder contains other pdf files that do not follow this format, you can select only the files that match MM-DD-YYYY.pdf. To do so, use the following command:
for f in `find . -maxdepth 1 -type f -regextype sed -regex './[0-9]\{2\}-[0-9]\{2\}-[0-9]\{4\}.pdf' | xargs -n1 basename`; do echo "$f"; mv "$f" "$(echo "$f" | sed 's#\(..\)-\(..\)-\(....\)#\3-\1-\2#')"; done
Explanations:
find . -maxdepth 1 -type f -regextype sed -regex './[0-9]\{2\}-[0-9]\{2\}-[0-9]\{4\}.pdf': this find command looks only for files in the current working directory that match your format, and xargs -n1 basename strips the leading ./ (directories and other types of files with similar names are not taken into account, and other *.pdf files are ignored).
For each such file you do a move, where the resulting file name is computed by sed using back-references to the three groups for MM, DD and YYYY.
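You can test the sed substitution on its own before wiring it into a loop:
$ echo 12-01-1998.pdf | sed 's#\(..\)-\(..\)-\(....\)#\3-\1-\2#'
1998-12-01.pdf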
For these simple filenames, using a more verbose pattern, you can simplify the body of the loop a bit:
twodigit=[[:digit:]][[:digit:]]
fourdigit="$twodigit$twodigit"
for f in $twodigit-$twodigit-$fourdigit.pdf; do
    IFS=- read month day year <<< "${f%.pdf}"
    mv "$f" "$year-$month-$day.pdf"
done
This is basically @Kusalananda's answer, but without the verbosity of regular-expression matching.

how many files find found?

I'm writing a script where I want to error out if the file I'm searching for exists in multiple locations, and tell the user the locations (the find results). So I've got a find like:
file_location=$(find $dir -name $file -print)
I'm thinking it should be simple to see if the file is found in multiple places, but I must not be matching what find uses to separate results with (seems like space sometimes, and a newline others). As such, rather than matching on that, I want to see if there are any characters after $file in $file_location.
I'm checking with
if echo "$file_location" | grep -q "${file}."; then
and this still doesn't work. So I guess I don't care what I use, except I want to capture $file_location as a result of the find, and then check that. Can you suggest a good way?
Something like the following, if you want to avoid problems with newlines and such in file names:
files=()
while IFS= read -d $'\0' -r match; do
    files+=("$match")
done < <(find "$dir" -name "$file" -print0)
((${#files[@]} > 1)) && printf '%s\n' "${files[@]}"
Or in bash 4+
shopt -s globstar dotglob
files=("$dir"/**/"$file")
((${#files[@]} > 1)) && printf '%s\n' "${files[@]}"
found=$(find "$dir" -name "$file" -ls)
count=$(wc -l <<< "$found")
if [ "$count" -gt 1 ]
then
    echo "I found more than one:"
    echo "$found"
fi
For zero matches you will still get a count of 1 because of the opaque way the shell strips a trailing newline inside $(): one line of output and zero lines of output both end up as one line. See xxd <<< "" for a demonstration of the newline that gets appended when the string is used as input again. A simple way to circumvent this is to prepend a fake newline so the captured string can never be empty, found=$(echo; find …), and then subtract one from the number of lines.
EDIT: I changed the usage of -printf "%p\n" in my answer to -ls which performs a proper quoting of newlines. Otherwise file names with newlines in them would mess up the counting.
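A sketch of that workaround, combining the fake leading newline with the line count:
found=$(echo; find "$dir" -name "$file" -ls)
count=$(( $(wc -l <<< "$found") - 1 ))
if [ "$count" -gt 1 ]
then
    echo "I found more than one:"
    echo "$found"
fi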
If you specify the full name in the find command, the matches on name will be unique. That is, if you say find -name "hello.txt", just files named hello.txt will be found.
What you can do is something like
find "$dir" -name "$file" -printf '.'
which prints one . for every match. Then, to see how many files are found with this name, it is just a matter of counting the dots in the output.
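Since each match contributes exactly one byte of output, counting them is a one-liner:
count=$(find "$dir" -name "$file" -printf '.' | wc -c)
echo "$count matches"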
No need for find here if you're running a new (4.0+) bash which can do recursive globbing itself; just load glob results directly into a shell array, and check its length:
shopt -s nullglob globstar # enable recursive globbing, and null results
file_locations=( "$dir"/**/"$file" )
echo "${#file_locations[@]} files named $file found under $dir; they are:"
printf ' %q\n' "${file_locations[@]}"
If you don't want to mess with nullglob, then:
shopt -s globstar # enable recursive globbing
file_locations=( "$dir"/**/"$file" )
# without nullglob, a failed match will return the glob expression itself
# to test for this, see if our first entry actually exists
if [[ ! -e ${file_locations[0]} ]]; then
    echo "No instances of $file found under $dir"
else
    echo "${#file_locations[@]} files named $file found under $dir; they are:"
    printf ' %q\n' "${file_locations[@]}"
fi
You can still use an array to unambiguously read find results on old versions of bash; unlike more naive approaches, this will work even when file or directory names contain literal newlines:
file_locations=( )
while IFS= read -r -d '' filename; do
    file_locations+=( "$filename" )
done < <(find "$dir" -type f -name "$file" -print0)
echo "${#file_locations[@]} files named $file found under $dir; they are:"
printf ' %q\n' "${file_locations[@]}"
I recommend using:
find . -name blong.txt -print0
Which tells find to join its output together with NUL (\0) characters instead of newlines. That makes the output unambiguous and easier to consume, e.g. with xargs and its -0 flag, or with awk given a suitable separator.
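For counting matches specifically, one NUL-safe possibility is to reduce the output to its separators and count those, which stays correct even when file names contain newlines:
# delete every byte except the NUL separators, then count the bytes left
find . -name blong.txt -print0 | tr -dc '\0' | wc -c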
Try:
N=0
for i in $(find "$dir" -name "$file" -printf '. ')
do
    N=$((N+1))
done
echo $N

find command with filename coming from bash printf builtin not working

I'm trying to write a script which lists the files in one directory and then searches for each of them, one by one, in another directory. To deal with spaces and special characters like "[" or "]" I'm using $(printf %q "$FILENAME") as input for the find command: find /directory/to/search -type f -name $(printf %q "$FILENAME").
It works like a charm for every filename except when there are multibyte (UTF-8) characters. In that case the output of printf is an ANSI-C quoted string, i.e. $'file name with blank spaces and escaped characters in the form \NNN\NNN', and that string is not expanded unless the $'' quoting is interpreted by the shell, so find searches for a file whose name includes the literal quoting: «$'filename'».
Is there an alternative solution in order to be able to pass to find any kind of filename?
My script is like follows (I know some lines can be deleted, like the "RESNAME="):
#!/bin/bash
if [ -d $1 ] && [ -d $2 ]; then
    IFSS=$IFS
    IFS=$'\n'
    FILES=$(find $1 -type f )
    for FILE in $FILES; do
        BASEFILE=$(printf '%q' "$(basename "$FILE")")
        RES=$(find $2 -type f -name "$BASEFILE" -print )
        if [ ${#RES} -gt 1 ]; then
            RESNAME=$(printf '%q' "$(basename "$RES")")
        else
            RESNAME=
        fi
        if [ "$RESNAME" != "$BASEFILE" ]; then
            echo "FILE NOT FOUND: $FILE"
        fi
    done
else
    echo "Directories do not exist"
fi
IFS=$IFSS
As an answer said, I've used associative arrays, but with no luck; maybe I'm not using the arrays correctly, but echoing it (array[@]) returns nothing. This is the script I've written:
#!/bin/bash
if [ -d "$1" ] && [ -d "$2" ]; then
    declare -A files
    find "$2" -type f -print0 | while read -r -d $'\0' FILE;
    do
        BN2="$(basename "$FILE")"
        files["$BN2"]="$BN2"
    done
    echo "${files[@]}"
    find "$1" -type f -print0 | while read -r -d $'\0' FILE;
    do
        BN1="$(basename "$FILE")"
        if [ "${files["$BN1"]}" != "$BN1" ]; then
            echo "File not found: "$BN1""
        fi
    done
fi
Don't use for loops. First, it is slower. Your find has to complete before the rest of your program can run. Second, it is possible to overload the command line: the entire for command must fit in the command line buffer.
Most importantly of all, for sucks at handling funky file names. You're going through conniptions trying to get around this. However:
find $1 -type f -print0 | while read -r -d $'\0' FILE
will work much better. It handles file names, even file names that contain \n characters. The -print0 tells find to separate file names with the NUL character. The while read -r -d $'\0' FILE will read each file name (separated by NUL characters) into $FILE.
If you put quotes around the file name in the find command, you don't have to worry about special characters in the file names.
Your script is running find once for each file found. If you have 100 files in your first directory, you're running find 100 times.
Do you know about associative (hash) arrays in BASH? You are probably better off using associative arrays. Run find on the first directory, and store those files names in an associative array.
Then, run find (again using the find | while read syntax) for your second directory. For each file you find in the second directory, see if you have a matching entry in your associative array. If you do, you know that file is in both arrays.
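A minimal sketch of that two-pass approach follows. One pitfall worth naming: in the question's second script, find | while read runs the loop in a subshell, so the array it fills disappears when the pipeline ends, which is why echoing it showed nothing. Feeding the loop from process substitution instead keeps the array in the current shell (dir1 and dir2 stand for your two directories):
declare -A files
# First pass: remember every basename found under dir2
while IFS= read -r -d '' f; do
    files["${f##*/}"]=1
done < <(find "$dir2" -type f -print0)
# Second pass: report files under dir1 with no same-named file under dir2
while IFS= read -r -d '' f; do
    [[ ${files["${f##*/}"]} ]] || echo "File not found: ${f##*/}"
done < <(find "$dir1" -type f -print0)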
Addendum
I've been looking at the find command. It appears there's no real way to prevent it from using pattern matching except through a lot of work (like you were doing with printf). I've tried using the -regex matching and using \Q and \E to remove the special meaning of pattern characters. I haven't been successful.
There comes a time that you need something a bit more powerful and flexible than shell to implement your script, and I believe this is the time.
Perl, Python, and Ruby are three fairly ubiquitous scripting languages found on almost all Unix systems and are available on other non-POSIX platforms (cough! ...Windows!... cough!).
Below is a Perl script that takes two directories, and searches them for matching files. It uses the find command once and uses associative arrays (called hashes in Perl). I key the hash to the name of my file. In the value portion of the hash, I store an array of the directories where I found this file.
I only need to run the find command once per directory. Once that is done, I can print out all the entries in the hash that contain more than one directory.
I know it's not shell, but this is one of the cases where you can spend a lot more time trying to figure out how to get shell to do what you want than it's worth.
#! /usr/bin/env perl
use strict;
use warnings;
use feature qw(say);
use File::Find;
use constant DIRECTORIES => qw( dir1 dir2 );
my %files;
#
# Perl version of the find command. You give it a list of
# directories and a subroutine for filtering what you find.
# I am basically rejecting all non-file entries, then pushing
# them into my %files hash as an array.
#
find (
    sub {
        return unless -f;
        $files{$_} = [] if not exists $files{$_};
        push @{ $files{$_} }, $File::Find::dir;
    }, DIRECTORIES
);
#
# All files are found and in %files hash. I can then go
# through all the entries in my hash, and look for ones
# with more than one directory in the array reference.
# If there is more than one, the file is located in multiple
# directories, and I print them.
#
for my $file ( sort keys %files ) {
    if ( @{ $files{$file} } > 1 ) {
        say "File: $file: " . join ", ", @{ $files{$file} };
    }
}
Try something like this:
find "$DIR1" -printf "%f\0" | xargs -0 -i find "$DIR2" -name \{\}
How about this one-liner?
find dir1 -type f -exec bash -c 'read < <(find dir2 -name "${1##*/}" -type f)' _ {} \; -printf "File %f is in dir2\n" -o -printf "File %f is not in dir2\n"
Absolutely 100% safe regarding files with funny symbols, newlines and spaces in their name.
How does it work?
find (the main one) will scan through directory dir1 and for each file (-type f) will execute
read < <(find dir2 -name "${1##*/}" -type f)
with argument the name of the current file given by the main find. This argument is at position $1. The ${1##*/} removes everything before the last / so that if $1 is path/to/found/file the find statement is:
find dir2 -name "file" -type f
This outputs something if file is found, otherwise has no output. That's what is read by the read bash command. read's exit status is true if it was able to read something, and false if there wasn't anything read (i.e., in case nothing is found). This exit status becomes bash's exit status which becomes -exec's status. If true, the next -printf statement is executed, and if false, the -o -printf part will be executed.
If your dirs are given in variables $dir1 and $dir2 do this, so as to be safe regarding spaces and funny symbols that could occur in $dir2:
find "$dir1" -type f -exec bash -c 'read < <(find "$0" -name "${1##*/}" -type f)' "$dir2" {} \; -printf "File %f is in $dir2\n" -o -printf "File %f is not in $dir2\n"
Regarding efficiency: this is of course not an efficient method at all! the inner find will be executed as many times as there are found files in dir1. This is terrible, especially if the directory tree under dir2 is deep and has many branches (you can rely a little bit on caching, but there are limits!).
Regarding usability: you have fine-grained control on how both find's work and on the output, and it's very easy to add many more tests.
So, hey, tell me how to compare files from two directories? Well, if you agree on losing a little bit of control, this will be the shortest and most efficient answer:
diff dir1 dir2
Try it, you'll be amazed!
Since you are only using find for its recursive directory following, it will be easier to simply use the globstar option in bash. (You're using associative arrays, so your bash is new enough).
#!/bin/bash
shopt -s globstar
declare -A files
if [[ -d $1 && -d $2 ]]; then
    for f in "$2"/**/*; do
        [[ -f "$f" ]] || continue
        BN2=$(basename "$f")
        files["$BN2"]=$BN2
    done
    echo "${files[@]}"
    for f in "$1"/**/*; do
        [[ -f "$f" ]] || continue
        BN1=$(basename "$f")
        if [[ ${files[$BN1]} != $BN1 ]]; then
            echo "File not found: $BN1"
        fi
    done
fi
** will match zero or more directories, so $1/**/* will match all the files and directories in $1, all the files and directories in those directories, and so forth all the way down the tree.
If you want to use associative arrays, here's one possibility that will work well with files with all sorts of funny symbols in their names (this script does more than needed, just to show what's possible, but it is usable as is; remove the parts you don't want and adapt it to your needs):
#!/bin/bash
die() {
    printf "%s\n" "$@"
    exit 1
}
[[ -n $1 ]] || die "Must give two arguments (none found)"
[[ -n $2 ]] || die "Must give two arguments (only one given)"
dir1=$1
dir2=$2
[[ -d $dir1 ]] || die "$dir1 is not a directory"
[[ -d $dir2 ]] || die "$dir2 is not a directory"
declare -A dir1files
declare -A dir2files
while IFS= read -r -d '' file; do
    dir1files[${file##*/}]=1
done < <(find "$dir1" -type f -print0)
while IFS= read -r -d '' file; do
    dir2files[${file##*/}]=1
done < <(find "$dir2" -type f -print0)
# Which files in dir1 are in dir2?
for i in "${!dir1files[@]}"; do
    if [[ -n ${dir2files[$i]} ]]; then
        printf "File %s is both in %s and in %s\n" "$i" "$dir1" "$dir2"
        # Remove it from the dir2 hash
        unset dir2files["$i"]
    else
        printf "File %s is in %s but not in %s\n" "$i" "$dir1" "$dir2"
    fi
done
# Which files in dir2 are not in dir1?
# Since I unset them from the dir2files hash table, the only keys remaining
# correspond to files in dir2 but not in dir1
for i in "${!dir2files[@]}"; do
    printf "File %s is in %s but not in %s\n" "$i" "$dir2" "$dir1"
done
Remark. The identification of files is only based on their filenames, not their contents.

Recursive Shell Script and file extensions issue

I have a problem with this script. The script is supposed to go through all the files and all sub-directories and sub-files (recursively). If a file ends with the extension .txt I need to replace a char/word in the text with a new char/word and then copy it into an existing directory. The first argument is the directory in which to start the search, the second is the old char/word, the third the new char/word and the fourth the directory to copy the files to. The script goes through the files but only does the replacement and copies the files from the original directory. Here is the script:
#!/bin/bash
funk(){
    for file in `ls $1`
    do
        if [ -f $file ]
        then
            ext=${file##*.}
            if [ "$ext" = "txt" ]
            then
                sed -i "s/$2/$3/g" $file
                cp $file $4
            fi
        elif [ -d $file ]
        then
            funk $file $2 $3 $4
        fi
    done
}
if [ $# -lt 4 ]
then
    echo "Need more arg"
    exit 2;
fi
cw=$1
a=$2
b=$3
od=$4
funk $cw $a $b $od
You're using a lot of bad practices here: lack of quoting, parsing the output of ls... all this will break as soon as a filename contains a space or other funny symbol.
You don't need recursion if you either use bash's globstar optional behavior, or find.
Here's a possibility with the former, that will hopefully show you better practices:
#!/bin/bash
shopt -s globstar
shopt -s nullglob
funk() {
    local search=${2//\//\\/}
    local replace=${3//\//\\/}
    for f in "$1"/**/*.txt; do
        sed -i "s/$search/$replace/g" -- "$f"
        cp -nvt "$4" -- "$f"
    done
}
if (( $# != 4 )); then
    echo >&2 "Need 4 arguments"
    exit 1
fi
funk "$@"
The same function funk using find:
#!/bin/bash
funk() {
    local search=${2//\//\\/}
    local replace=${3//\//\\/}
    find "$1" -name '*.txt' -type f -exec sed -i "s/$search/$replace/g" -- {} \; -exec cp -nvt "$4" -- {} \;
}
if (( $# != 4 )); then
    echo >&2 "Need 4 arguments"
    exit 1
fi
funk "$@"
In cp I'm using
the -n switch: no clobber, so as not to overwrite an existing file. Use it if your version of cp supports it, unless you actually want to overwrite files.
the -v switch: verbose, will show you the moved files (optional).
the -t switch: -t followed by a directory tells to copy into this directory. It's a very good thing to use cp this way: imagine instead of giving an existing directory, you give an existing file: without this feature, this file will get overwritten several times (well, this will be the case if you omit the -n option)! with this feature the existing file will remain safe.
Also notice the use of --. If your cp and sed support it (it's the case for GNU sed and cp), use it always! It means 'end of options'. Without it, a filename starting with a hyphen would confuse the command, which would try to interpret it as an option. With --, we're safe to pass a filename that may start with a hyphen.
Notice that in the search and replace patterns I replaced all slashes / by their escaped form \/ so as not to clash with the separator in sed if a slash happens to appear in search or replace.
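The effect of that slash-escaping substitution is easy to check on its own:
$ search='some/path'
$ echo "${search//\//\\/}"
some\/path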
Enjoy!
As pointed out, looping over find output is not a good idea. It also doesn't support slashes in search&replace.
Check gniourf_gniourf's answer.
How about using find for that?
#!/bin/bash
funk () {
    local dir=$1; shift
    local search=$1; shift
    local replace=$1; shift
    local dest=$1; shift
    mkdir -p "$dest"
    for file in `find "$dir" -name '*.txt'`; do
        sed -i "s/$search/$replace/g" "$file"
        cp "$file" "$dest"
    done
}
if [[ $# -lt 4 ]] ; then
    echo "Need 4 arguments"
    exit 2;
fi
funk "$@"
Note, though, that if files in different subdirectories share the same name, they will overwrite each other in the destination directory. Is that an issue in your case?
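If it is, one mitigation (assuming a cp that supports the -n no-clobber switch mentioned in the earlier answer) is to refuse to overwrite:
cp -n "$file" "$dest"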
