Spaces in path names giving trouble with Find in Bash. Any *simple* work-around? - bash

Is there any way to change the following string so I don't get any problems when there are files/folders with spaces in them?
files=`find ~/$folder -name "*@*" -type f`
I'd prefer a solution that only changes this line of code and leaves the rest untouched, as everything else seems to be working correctly, apart from this minor detail.
Thanks
EDIT: Here is the code in a bit more detail:
abc=( $(find "$pasta" -name "$ficheiro_original@*" -type f) )
abc_length=${#abc[@]}

If you are not using those file names later in your script, just iterate over them and process them on the fly.
find ~/$folder -name "*@*" -type f | while read -r FILE
do
echo "do you stuff"
done
otherwise, you can set IFS
IFS=$'\n'
files=$(find ~/$folder -name "*@*" -type f)
Update:
$ IFS=$'\n'
$ a=($(find . -type f ))
$ echo ${#a[@]}
14
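If you take the IFS route, it's worth saving the old value and restoring it afterwards so the changed separator doesn't leak into the rest of the script. A minimal sketch (same folder and pattern as above; note that glob characters in names would still be expanded, as a later answer explains):
OLD_IFS="$IFS"                                      # remember the default separators
IFS=$'\n'                                           # split find's output on newlines only
files=( $(find ~/"$folder" -name "*@*" -type f) )   # one array element per line
IFS="$OLD_IFS"                                      # restore normal word splitting
echo "found ${#files[@]} files"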

You'd have to make some changes, but to deal with arbitrary names, think in terms of using the GNU Find option -print0, etc.
find ~/$folder -name "*@*" -type f -print0 |
while read -d '^@' file
do
echo "<<$file>>"
done
(Where the single byte represented as '^@' is actually an ASCII NUL ('\0'); enter it with Control-V Control-Shift-@.)
find ~/$folder -name "*@*" -type f -print0 |
while read -d '' file
do
echo "<<$file>>"
done
The empty string for the delimiter means 'use the zero byte, ASCII NUL, as the delimiter' and is appropriate for parsing 'find ... -print0' output. (Thanks Dennis Williamson for the tip.)
This allows you to read any arbitrary names. You should probably use a bash array to hold the names, but that implies some changes further down the script.
(Given the comment response that only spaces have to be worried about, this might be overkill, though using read to process lines with the names is a key part of the operation, and using an array would probably make life simpler.)
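For instance, a minimal sketch of that array variant (process substitution keeps the loop in the current shell, so the array is still set afterwards):
files=()
while read -r -d '' file
do
files+=("$file")    # append each NUL-delimited name
done < <(find ~/"$folder" -name "*@*" -type f -print0)
echo "collected ${#files[@]} names"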

If you need a list of files that might have spaces in the names, you pretty much have to store them as an array, rather than just a string. Create the array with something like this:
saveIFS="$IFS"; IFS=$'\n'; files=( $(find ~/"$folder" -name "*#*" -type f) ); IFS="$saveIFS"
and then you'll have to modify the rest of the script to use files as an array rather than a string, and it (and other filenames) should always be in double-quotes to keep spaces from being mistaken as separators. For instance, anyplace you're currently using $files, replace that with "${files[@]}"
ls "${files[@]}"
for f in "${files[@]}"; do
ls "$f"
done
echo "found ${#files[@]} files"

Here is another way to get around it without changing the rest of the code:
# files=($(find))
eval "files=($(find -printf '"%h/%f" '))"
for f in "${files[#]}"; do
echo "$f"
done
It's dirty and will not work for filenames with special characters, e.g. ". It uses eval to evaluate a string as a Bash array, and find's -printf to form that string.
I personally prefer changing $IFS, just FYI.

To read file names with spaces into a Bash array variable, you could use the "read" builtin command as well:
printf '%q\n' "$IFS"
IFS=$'\n' read -r -d "" -a abc <<< "$(find ~/$folder -name "*#*" -type f)"
IFS=$'\n' read -r -d "" -a abc < <(find ~/$folder -name "*#*" -type f) # alternative
abc_length=${#abc[#]}
for ((i=1; i <= ${#abc[#]}; i++)); do echo "$i: ${abc[i-1]}"; done
printf '%q\n' "$IFS"
Note that the scope of the newly set IFS variable is limited to the execution of the read command (which leaves the original IFS variable intact).
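A quick illustration of that scoping behaviour (the strings here are made up for the demonstration):
IFS=':' read -r first rest <<< "a b:c d"   # ':' is the separator for this read only
echo "$first"                              # prints: a b
printf '%q\n' "$IFS"                       # still the default $' \t\n'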

Related

How to find files with specific extensions recursively using the for/in syntax? [duplicate]


Renaming directories in bash using sed

I have several directories which I want to rename, e.g.:
"duedate-year" directory to "duedate" (just removing -year)
"start-year" directory to "start"
This is what I've tried:
for CACHE in `find ${DESTINATION_REPO} -maxdepth 1 -type d -name "*year" ` ;
do
set UPDATE="awk -F"-year" '{print $1}' $CACHE" ;
mv $CACHE $UPDATE
done
However, it doesn't succeed. Is there a way to rename a directory using the "sed" command?
You're assigning the result of awk incorrectly. It should be inside backticks or $(...). And to process a variable, you need to pipe echo $CACHE to it, not use $CACHE as the filename argument (that would process the contents of the file). Also, variables aren't assigned using set; you just write var=value.
So that line should be:
UPDATE=$(echo "$CACHE" | awk -F-year '{print $1}')
But there's no need to use awk for this at all, you can use shell variable expansion operators:
UPDATE=${CACHE%%-year*}
%%-year* means to remove the longest trailing part of the value that matches the wildcard -year*.
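A quick illustration with one of the directory names from the question:
CACHE="duedate-year"
echo "${CACHE%%-year*}"   # -> duedate (longest trailing match of -year* removed)
echo "${CACHE%-year}"     # -> duedate (shortest trailing match of exactly -year)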
Many shell solutions will "work" for a given sample input set and then blow up disastrously later, usually due to unquoted variables, incorrect processing of blanks, etc. This should be safe unless your file name contains newlines (in which case see find -print0 and xargs -0):
find "$DESTINATION_REPO" -maxdepth 1 -type d -name "*-year" |
while IFS= read -r CACHE
do
mv -- "$CACHE" "${CACHE%-year}"
done
Or use the rename command
rename 's/-year//' *year
Yes you can use a pipe. I do this:
for DIR in $(find ${DESTINATION_REPO} -maxdepth 1 -type d -name "*year"); do
mv "${DIR}" $(echo "${DIR}" | sed -E 's/year//')
done
It should be noted that I am very much self taught and sometimes have bad habits...
After consulting gniourf_gniourf I am posting a more robust version (read more) which is a "code lift" of Ed Morton's answer below.
find ${DESTINATION_REPO} -maxdepth 1 -type d -name "*-year"|
while IFS= read -r DIR
do
mv "${DIR}" $(echo "${DIR}" | sed -E 's/year//')
done

Understanding (multiple and automatic) filename quoting

I'm starting out with bash scripting and I can't handle filenames with spaces. I searched other questions about my problem here, like:
how to output file names surrounded with quotes in SINGLE line?
In my script I need to take all directories and, for each one, do some work. So the skeleton of my script is:
for directory in `find "$DIR" -type d`
do
# ... some work ...
done
This script doesn't work with filenames containing spaces. So I tried using find in combination with exec, which seems an elegant and easy solution, but nothing changed. More precisely, I tried:
(I) find "$DIR" -type d -exec echo -n '"{}" ' \;
(II) find "$DIR" -type d -exec echo -n '"{}" ' +
Before using -exec I read something about it, but some things are still not clear to me:
What is '"{}" ' \; and how does it work?
What is '"{}" ' + and how does it work?
How can I fix my script so it also works for filenames with spaces?
What is '"{}" ' \; and how does it work?
For each filename, this will basically execute echo -n "\"filename\" ", producing all the names, each wrapped in literal quotes, on one line.
Adding literal quotes doesn't help though, because bash doesn't start interpreting data as shell script code.
If you read foo; ls $foo, you can't just type in $(rm /) to run a command, and you can't just type in "foo bar" to avoid splitting a filename. Data is data, not code, and shell syntax isn't respected.
What is '"{}" ' + and how does it work?
+ runs the command once for groups of filenames (for efficiency, instead of one at a time). Due to the way substitution works, it's identical to {} +, since the argument is just removed and replaced with a bunch of filenames.
How can I fix my script so it also works for filenames with spaces?
You can make it work in 95% of cases by simply changing which characters bash uses for splitting, through the use of the IFS variable:
IFS=$'\n'
for dir in $(find "$DIR" -type d)
do
echo "Directory: >> $dir <<"
done
This makes bash split on just line feeds rather than the default of all whitespace (this can have some interesting side effects in the rest of your code, so you might want to set IFS back to its original value afterwards).
Filenames can contain line feeds too though. The best approach is to use:
while IFS= read -d '' -r dir
do
echo "Directory: >> $dir <<"
done < <(find "$DIR" -type d -print0)
which makes find output files split by NUL bytes, which filenames can never contain.
I use this format & it works fine:
find '/path/to/directories' -type d |\
while read DIRNAME
do
echo "${DIRNAME}"
done
This may work better for you:
while IFS= read -r adir ; do
echo "Directory: $adir"
done < <(find "$DIR" -type d)

How to loop through file names returned by find?

x=$(find . -name "*.txt")
echo $x
if I run the above piece of code in Bash shell, what I get is a string containing several file names separated by blanks, not a list.
Of course, I can further separate them by blanks to get a list, but I'm sure there is a better way to do it.
So what is the best way to loop through the results of a find command?
TL;DR: If you're just here for the most correct answer, you probably want my personal preference (see the bottom of this post):
# execute `process` once for each file
find . -name '*.txt' -exec process {} \;
If you have time, read through the rest to see several different ways and the problems with most of them.
The full answer:
The best way depends on what you want to do, but here are a few options. As long as no file or folder in the subtree has whitespace in its name, you can just loop over the files:
for i in $x; do # Not recommended, will break on whitespace
process "$i"
done
Marginally better, cut out the temporary variable x:
for i in $(find -name \*.txt); do # Not recommended, will break on whitespace
process "$i"
done
It is much better to glob when you can. White-space safe, for files in the current directory:
for i in *.txt; do # Whitespace-safe but not recursive.
process "$i"
done
By enabling the globstar option, you can glob all matching files in this directory and all subdirectories:
# Make sure globstar is enabled
shopt -s globstar
for i in **/*.txt; do # Whitespace-safe and recursive
process "$i"
done
In some cases, e.g. if the file names are already in a file, you may need to use read:
# IFS= makes sure it doesn't trim leading and trailing whitespace
# -r prevents interpretation of \ escapes.
while IFS= read -r line; do # Whitespace-safe EXCEPT newlines
process "$line"
done < filename
read can be used safely in combination with find by setting the delimiter appropriately:
find . -name '*.txt' -print0 |
while IFS= read -r -d '' line; do
process "$line"
done
For more complex searches, you will probably want to use find, either with its -exec option or with -print0 | xargs -0:
# execute `process` once for each file
find . -name \*.txt -exec process {} \;
# execute `process` once with all the files as arguments*:
find . -name \*.txt -exec process {} +
# using xargs*
find . -name \*.txt -print0 | xargs -0 process
# using xargs with arguments after each filename (implies one run per filename)
find . -name \*.txt -print0 | xargs -0 -I{} process {} argument
find can also cd into each file's directory before running a command by using -execdir instead of -exec, and can be made interactive (prompt before running the command for each file) using -ok instead of -exec (or -okdir instead of -execdir).
*: Technically, both find and xargs (by default) will run the command with as many arguments as they can fit on the command line, as many times as it takes to get through all the files. In practice, unless you have a very large number of files it won't matter, and if you exceed the length but need them all on the same command line, you're SOL; find a different way.
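For example, with the same placeholder process command as above, the interactive and per-directory variants look like this (a sketch, not tied to any particular answer):
# prompt before running the command for each file
find . -name '*.txt' -ok process {} \;
# run the command from within each file's own directory
find . -name '*.txt' -execdir process {} \;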
Whatever you do, don't use a for loop:
# Don't do this
for file in $(find . -name "*.txt")
do
…code using "$file"
done
Three reasons:
For the for loop to even start, the find must run to completion.
If a file name has any whitespace (including space, tab or newline) in it, it will be treated as two separate names.
Although now unlikely, you can overrun your command line buffer. Imagine if your command line buffer holds 32KB, and your for loop returns 40KB of text. That last 8KB will be dropped right off your for loop and you'll never know it.
Always use a while read construct:
find . -name "*.txt" -print0 | while read -d $'\0' file
do
…code using "$file"
done
The loop will execute while the find command is executing. Plus, this command will work even if a file name is returned with whitespace in it. And, you won't overflow your command line buffer.
The -print0 will use the NULL as a file separator instead of a newline and the -d $'\0' will use NULL as the separator while reading.
find . -name "*.txt"|while read fname; do
echo "$fname"
done
Note: this method and the (second) method shown by bmargulies are safe to use with white space in the file/folder names.
In order to also have the - somewhat exotic - case of newlines in the file/folder names covered, you will have to resort to the -exec predicate of find like this:
find . -name '*.txt' -exec echo "{}" \;
The {} is the placeholder for the found item and the \; is used to terminate the -exec predicate.
And for the sake of completeness let me add another variant - you gotta love the *nix ways for their versatility:
find . -name '*.txt' -print0|xargs -0 -n 1 echo
This separates the printed items with a \0 character that, to my knowledge, isn't allowed in file or folder names in any file system, and therefore should cover all bases. xargs then picks them up one by one ...
Filenames can include spaces and even control characters. Spaces are (by default) delimiters for shell expansion in bash, and as a result x=$(find . -name "*.txt") from the question is not recommended at all. If find returns a filename with spaces, e.g. "the file.txt", you will get 2 separate strings for processing if you process x in a loop. You can improve this by changing the delimiter (the bash IFS variable), e.g. to \r\n, but filenames can also include control characters - so this is not a (completely) safe method.
From my point of view, there are 2 recommended (and safe) patterns for processing files:
1. Use for loop & filename expansion:
for file in ./*.txt; do
[[ ! -e $file ]] && continue # continue, if file does not exist
# single filename is in $file
echo "$file"
# your code here
done
2. Use find-read-while & process substitution
while IFS= read -r -d '' file; do
# single filename is in $file
echo "$file"
# your code here
done < <(find . -name "*.txt" -print0)
Remarks
on Pattern 1:
bash returns the search pattern ("*.txt") if no matching file is found - so the extra line "continue, if file does not exist" is needed. see Bash Manual, Filename Expansion
shell option nullglob can be used to avoid this extra line.
"If the failglob shell option is set, and no matches are found, an error message is printed and the command is not executed." (from Bash Manual above)
shell option globstar: "If set, the pattern ‘**’ used in a filename expansion context will match all files and zero or more directories and subdirectories. If the pattern is followed by a ‘/’, only directories and subdirectories match." see Bash Manual, Shopt Builtin
other options for filename expansion: extglob, nocaseglob, dotglob & shell variable GLOBIGNORE
on Pattern 2:
filenames can contain blanks, tabs, spaces, newlines, ... to process filenames in a safe way, find with -print0 is used: filename is printed with all control characters & terminated with NUL. see also Gnu Findutils Manpage, Unsafe File Name Handling, safe File Name Handling, unusual characters in filenames. See David A. Wheeler below for detailed discussion of this topic.
There are some possible patterns to process find results in a while loop. Others (kevin, David W.) have shown how to do this using pipes:
files_found=1
find . -name "*.txt" -print0 |
while IFS= read -r -d '' file; do
# single filename in $file
echo "$file"
files_found=0 # not working example
# your code here
done
[[ $files_found -eq 0 ]] && echo "files found" || echo "no files found"
When you try this piece of code, you will see that it does not work: files_found is always "true" and the code will always echo "no files found". The reason is that each command of a pipeline is executed in a separate subshell, so a variable changed inside the loop (in its separate subshell) does not change the variable in the main shell script. This is why I recommend using process substitution as the "better", more useful, more general pattern. See I set variables in a loop that's in a pipeline. Why do they disappear... (from Greg's Bash FAQ) for a detailed discussion on this topic.
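As an aside, bash 4.2+ offers a shopt that makes the pipeline version work as well; a sketch, assuming a non-interactive shell (lastpipe only takes effect when job control is off, which is the default in scripts):
#!/bin/bash
shopt -s lastpipe      # run the last segment of a pipeline in the current shell
files_found=1
find . -name "*.txt" -print0 |
while IFS= read -r -d '' file; do
files_found=0          # now visible after the loop ends
done
[[ $files_found -eq 0 ]] && echo "files found" || echo "no files found"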
Additional References & Sources:
Gnu Bash Manual, Pattern Matching
Filenames and Pathnames in Shell: How to do it Correctly, David A. Wheeler
Why you don't read lines with "for", Greg's Wiki
Why you shouldn't parse the output of ls(1), Greg's Wiki
Gnu Bash Manual, Process Substitution
(Updated to include @Socowi's excellent speed improvement)
With any $SHELL that supports it (dash/zsh/bash...):
find . -name "*.txt" -exec $SHELL -c '
for i in "$#" ; do
echo "$i"
done
' {} +
Done.
Original answer (shorter, but slower):
find . -name "*.txt" -exec $SHELL -c '
echo "$0"
' {} \;
If you can assume the file names don't contain newlines, you can read the output of find into a Bash array using the following command:
readarray -t x < <(find . -name '*.txt')
Note:
-t causes readarray to strip newlines.
It won't work if readarray is in a pipe, hence the process substitution.
readarray is available since Bash 4.
Bash 4.4 and up also supports the -d parameter for specifying the delimiter. Using the null character, instead of newline, to delimit the file names works also in the rare case that the file names contain newlines:
readarray -d '' x < <(find . -name '*.txt' -print0)
readarray can also be invoked as mapfile with the same options.
Reference: https://mywiki.wooledge.org/BashFAQ/005#Loading_lines_from_a_file_or_stream
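Once the array is filled, the usual quoted expansion applies; for example (process is a placeholder command, as elsewhere on this page):
for f in "${x[@]}"; do
process "$f"      # each name arrives intact, spaces and all
done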
# Doesn't handle whitespace
for x in `find . -name "*.txt" -print`; do
process_one $x
done
or
# Handles whitespace and newlines
find . -name "*.txt" -print0 | xargs -0 -n 1 process_one
I like to assign the find output to a variable first, with IFS switched to newline, as follows:
FilesFound=$(find . -name "*.txt")
IFSbkp="$IFS"
IFS=$'\n'
counter=1;
for file in $FilesFound; do
echo "${counter}: ${file}"
let counter++;
done
IFS="$IFSbkp"
As commented by @Konrad Rudolph, this will not work with newlines in file names. I still think it is handy, as it covers most of the cases where you need to loop over command output.
As already posted in the top answer by Kevin, the best solution is to use a for loop with a bash glob, but as bash globs are not recursive by default, this can be fixed by a recursive bash function:
#!/bin/bash
set -x
set -eu -o pipefail
all_files=();
function get_all_the_files()
{
directory="$1";
for item in "$directory"/* "$directory"/.[^.]*;
do
if [[ -d "$item" ]];
then
get_all_the_files "$item";
else
all_files+=("$item");
fi;
done;
}
get_all_the_files "/tmp";
for file_path in "${all_files[@]}"
do
printf 'My file is "%s"\n' "$file_path";
done;
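For comparison, on bash 4+ the same traversal can arguably be done without the recursion by combining globstar with dotglob (a sketch, not the author's original code; nullglob avoids a literal pattern when nothing matches):
shopt -s globstar dotglob nullglob
all_files=()
for item in /tmp/**; do
[[ -f $item ]] && all_files+=("$item")   # keep plain files only
done
printf 'My file is "%s"\n' "${all_files[@]}"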
Related questions:
Bash loop through directory including hidden file
Recursively list files from a given directory in Bash
ls command: how can I get a recursive full-path listing, one line per file?
List files recursively in Linux CLI with path relative to the current directory
Recursively List all directories and files
bash script, create array of all files in a directory
How can I creates array that contains the names of all the files in a folder?
How to get the list of files in a directory in a shell script?
Based on other answers and a comment of @phk, using fd #3 (which still allows stdin to be used inside the loop):
while IFS= read -r f <&3; do
echo "$f"
done 3< <(find . -iname "*filename*")
You can put the filenames returned by find into an array like this:
array=()
while IFS= read -r -d ''; do
array+=("$REPLY")
done < <(find . -name '*.txt' -print0)
Now you can just loop through the array to access individual items and do whatever you want with them.
Note: It's white space safe.
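For example, to walk the result:
for f in "${array[@]}"; do
printf 'found: %s\n' "$f"
done
echo "total: ${#array[@]} files"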
I think using this piece of code (feeding the command's output to the loop with a here-string after done):
while read fname; do
echo "$fname"
done <<< "$(find . -name "*.txt")"
is better than this answer, because there the while loop is executed in a subshell (according to here), so if you modify variables inside the loop, the changes cannot be seen after the loop finishes.
You can store your find output in an array if you wish to use the output later:
array=($(find . -name "*.txt"))
Now, to print each element on a new line, either iterate over the array with a for loop or use printf:
for i in "${array[@]}"; do echo "$i"; done
or
printf '%s\n' "${array[@]}"
You can also use:
for file in "`find . -name "*.txt"`"; do echo "$file"; done
This will print each filename on a new line.
To only print the find output in list form, you can use either of the following:
find . -name "*.txt" -print 2>/dev/null
or
find . -name "*.txt" -print | grep -v 'Permission denied'
This will remove the error messages and give only the filenames as output, one per line.
If you wish to do something with the filenames, storing them in an array is good; otherwise there is no need to consume that space and you can print the output from find directly.
function loop_through(){
length_="$(find . -name '*.txt' | wc -l)"
length_="${length_#"${length_%%[![:space:]]*}"}"
length_="${length_%"${length_##*[![:space:]]}"}"
for ((i=1; i<=length_; i++))   # brace expansion {1..$var} does not expand variables
do
x=$(find . -name '*.txt' | sort | head -$i | tail -1)
echo "$x"
done
}
To grab the length of the list of files for the loop, I used "wc -l".
That command's output is assigned to a variable.
Then I strip the leading and trailing whitespace from the variable so the loop can read it.
find <path> -xdev -type f -name '*.txt' -exec ls -l {} \;
This will list the files and give details about attributes.
Another alternative is to not use bash, but call Python to do the heavy lifting. I resorted to this because bash solutions like my other answer were too slow.
With this solution, we build a bash array of files from inline Python script:
#!/bin/bash
set -eu -o pipefail
dsep=":" # directory_separator
base_directory=/tmp
all_files=()
all_files_string="$(python3 -c '#!/usr/bin/env python3
import os
import sys
dsep="'"$dsep"'"
base_directory="'"$base_directory"'"
def log(*args, **kwargs):
print(*args, file=sys.stderr, **kwargs)
def check_invalid_character(file_path):
for thing in ("\\", "\n"):
if thing in file_path:
raise RuntimeError(f"It is not allowed {thing} on \"{file_path}\"!")
def absolute_path_to_relative(base_directory, file_path):
relative_path = os.path.commonprefix( [ base_directory, file_path ] )
relative_path = os.path.normpath( file_path.replace( relative_path, "" ) )
# if you use Windows Python, it accepts / instead of \\
# if you have \ on your files names, rename them or comment this
relative_path = relative_path.replace("\\", "/")
if relative_path.startswith( "/" ):
relative_path = relative_path[1:]
return relative_path
for directory, directories, files in os.walk(base_directory):
for file in files:
local_file_path = os.path.join(directory, file)
local_file_name = absolute_path_to_relative(base_directory, local_file_path)
log(f"local_file_name {local_file_name}.")
check_invalid_character(local_file_name)
print(f"{base_directory}{dsep}{local_file_name}")
' | dos2unix)";
if [[ -n "$all_files_string" ]];
then
readarray -t temp <<< "$all_files_string";
all_files+=("${temp[#]}");
fi;
for item in "${all_files[#]}";
do
OLD_IFS="$IFS"; IFS="$dsep";
read -r base_directory local_file_name <<< "$item"; IFS="$OLD_IFS";
printf 'item "%s", base_directory "%s", local_file_name "%s".\n' \
"$item" \
"$base_directory" \
"$local_file_name";
done;
Related:
os.walk without hidden folders
How to do a recursive sub-folder search and return files in a list?
How to split a string into an array in Bash?
How about if you use grep instead of find?
ls | grep .txt$ > out.txt
Now you can read this file and the filenames are in the form of a list.

How can I escape white space in a bash loop list?

I have a bash shell script that loops through all child directories (but not files) of a certain directory. The problem is that some of the directory names contain spaces.
Here are the contents of my test directory:
$ ls -F test
Baltimore/ Cherry Hill/ Edison/ New York City/ Philadelphia/ cities.txt
And the code that loops through the directories:
for f in `find test/* -type d`; do
echo $f
done
Here's the output:
test/Baltimore
test/Cherry
Hill
test/Edison
test/New
York
City
test/Philadelphia
Cherry Hill and New York City are treated as 2 or 3 separate entries.
I tried quoting the filenames, like so:
for f in `find test/* -type d | sed -e 's/^/\"/' | sed -e 's/$/\"/'`; do
echo $f
done
but to no avail.
There's got to be a simple way to do this.
The answers below are great. But to make this more complicated - I don't always want to use the directories listed in my test directory. Sometimes I want to pass in the directory names as command-line parameters instead.
I took Charles' suggestion of setting the IFS and came up with the following:
dirlist="${#}"
(
[[ -z "$dirlist" ]] && dirlist=`find test -mindepth 1 -type d` && IFS=$'\n'
for d in $dirlist; do
echo $d
done
)
and this works just fine unless there are spaces in the command line arguments (even if those arguments are quoted). For example, calling the script like this: test.sh "Cherry Hill" "New York City" produces the following output:
Cherry
Hill
New
York
City
First, don't do it that way. The best approach is to use find -exec properly:
# this is safe
find test -type d -exec echo '{}' +
The other safe approach is to use NUL-terminated list, though this requires that your find support -print0:
# this is safe
while IFS= read -r -d '' n; do
printf '%q\n' "$n"
done < <(find test -mindepth 1 -type d -print0)
You can also populate an array from find, and pass that array later:
# this is safe
declare -a myarray
while IFS= read -r -d '' n; do
myarray+=( "$n" )
done < <(find test -mindepth 1 -type d -print0)
printf '%q\n' "${myarray[#]}" # printf is an example; use it however you want
If your find doesn't support -print0, your result is then unsafe -- the below will not behave as desired if files exist containing newlines in their names (which, yes, is legal):
# this is unsafe
while IFS= read -r n; do
printf '%q\n' "$n"
done < <(find test -mindepth 1 -type d)
If one isn't going to use one of the above, a third approach (less efficient in terms of both time and memory usage, as it reads the entire output of the subprocess before doing word-splitting) is to use an IFS variable which doesn't contain the space character. Turn off globbing (set -f) to prevent strings containing glob characters such as [], * or ? from being expanded:
# this is unsafe (but less unsafe than it would be without the following precautions)
(
IFS=$'\n' # split only on newlines
set -f # disable globbing
for n in $(find test -mindepth 1 -type d); do
printf '%q\n' "$n"
done
)
Finally, for the command-line parameter case, you should be using arrays if your shell supports them (i.e. it's ksh, bash or zsh):
# this is safe
for d in "$#"; do
printf '%s\n' "$d"
done
will maintain separation. Note that the quoting (and the use of $@ rather than $*) is important. Arrays can be populated in other ways as well, such as glob expressions:
# this is safe
entries=( test/* )
for d in "${entries[#]}"; do
printf '%s\n' "$d"
done
find . -type d | while read file; do echo "$file"; done
However, this doesn't work if the file name contains newlines. The above is the only solution I know of when you actually want to have the directory name in a variable. If you just want to execute some command, use xargs.
find . -type d -print0 | xargs -0 echo 'The directory is: '
Here is a simple solution which handles tabs and/or whitespaces in the filename. If you have to deal with other strange characters in the filename like newlines, pick another answer.
The test directory
ls -F test
Baltimore/ Cherry Hill/ Edison/ New York City/ Philadelphia/ cities.txt
The code to go into the directories
find test -type d | while read f ; do
echo "$f"
done
The filename must be quoted ("$f") if used as argument. Without quotes, the spaces act as argument separator and multiple arguments are given to the invoked command.
And the output:
test/Baltimore
test/Cherry Hill
test/Edison
test/New York City
test/Philadelphia
This is exceedingly tricky in standard Unix, and most solutions run foul of newlines or some other character. However, if you are using the GNU tool set, then you can exploit the find option -print0 and use xargs with the corresponding option -0 (minus-zero). There are two characters that cannot appear in a simple filename; those are slash and NUL '\0'. Obviously, slash appears in pathnames, so the GNU solution of using a NUL '\0' to mark the end of the name is ingenious and fool-proof.
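For instance, against the test directory above (a sketch relying on GNU find and xargs):
find test -mindepth 1 -type d -print0 | xargs -0 ls -d
Each directory name arrives at ls as a single argument, spaces and all.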
You could use IFS (internal field separator) temporarily, using:
OLD_IFS=$IFS # Stores Default IFS
IFS=$'\n' # Set it to line break
for f in `find test/* -type d`; do
echo $f
done
IFS=$OLD_IFS
Why not just put
IFS=$'\n'
in front of the for command? This changes the field separator from <space><tab><newline> to just <newline>. (Note the $'\n' quoting: a plain '\n' would set IFS to the two characters backslash and n.)
find . -print0|while read -d $'\0' file; do echo "$file"; done
I use
SAVEIFS=$IFS
IFS=$(echo -en "\n\b")
for f in $( find "$1" -type d ! -path "$1" )
do
echo $f
done
IFS=$SAVEIFS
Wouldn't that be enough?
Idea taken from http://www.cyberciti.biz/tips/handling-filenames-with-spaces-in-bash.html
Don't store lists as strings; store them as arrays to avoid all this delimiter confusion. Here's an example script that'll either operate on all subdirectories of test, or the list supplied on its command line:
#!/bin/bash
if [ $# -eq 0 ]; then
# if no args supplied, build a list of subdirs of test/
dirlist=() # start with empty list
for f in test/*; do # for each item in test/ ...
if [ -d "$f" ]; then # if it's a subdir...
dirlist=("${dirlist[#]}" "$f") # add it to the list
fi
done
else
# if args were supplied, copy the list of args into dirlist
dirlist=("$#")
fi
# now loop through dirlist, operating on each one
for dir in "${dirlist[#]}"; do
printf "Directory: %s\n" "$dir"
done
Now let's try this out on a test directory with a curve or two thrown in:
$ ls -F test
Baltimore/
Cherry Hill/
Edison/
New York City/
Philadelphia/
this is a dirname with quotes, lfs, escapes: "\''?'?\e\n\d/
this is a file, not a directory
$ ./test.sh
Directory: test/Baltimore
Directory: test/Cherry Hill
Directory: test/Edison
Directory: test/New York City
Directory: test/Philadelphia
Directory: test/this is a dirname with quotes, lfs, escapes: "\''
'
\e\n\d
$ ./test.sh "Cherry Hill" "New York City"
Directory: Cherry Hill
Directory: New York City
PS: if it is only about spaces in the input, then some double quotes worked smoothly for me...
read artist;
find "/mnt/2tb_USB_hard_disc/p_music/$artist" -type f -name *.mp3 -exec mpg123 '{}' \;
To add to what Jonathan said: use the -print0 option for find in conjunction with xargs as follows:
find test/* -type d -print0 | xargs -0 command
That will execute the command command with the proper arguments; directories with spaces in them will be properly quoted (i.e. they'll be passed in as one argument).
#!/bin/bash
dirtys=()
for folder in *
do
if [ -d "$folder" ]; then
dirtys=("${dirtys[#]}" "$folder")
fi
done
for dir in "${dirtys[#]}"
do
for file in "$dir"/\*.mov # <== *.mov
do
#dir_e=`echo "$dir" | sed 's/[[:space:]]/\\\ /g'` -- This line will replace each space into '\ '
out=`echo "$file" | sed 's/\(.*\)\/\(.*\)/\2/'` # These two line code can be written in one line using multiple sed commands.
out=`echo "$out" | sed 's/[[:space:]]/_/g'`
#echo "ffmpeg -i $out_e -sameq -vcodec msmpeg4v2 -acodec pcm_u8 $dir_e/${out/%mov/avi}"
`ffmpeg -i "$file" -sameq -vcodec msmpeg4v2 -acodec pcm_u8 "$dir"/${out/%mov/avi}`
done
done
The above code will convert .mov files to .avi. The .mov files are in different folders, and the folder names contain white spaces too. My script converts the .mov files to .avi files in the same folder. I don't know whether it helps you.
Case:
[sony@localhost shell_tutorial]$ ls
Chapter 01 - Introduction Chapter 02 - Your First Shell Script
[sony@localhost shell_tutorial]$ cd Chapter\ 01\ -\ Introduction/
[sony@localhost Chapter 01 - Introduction]$ ls
0101 - About this Course.mov 0102 - Course Structure.mov
[sony@localhost Chapter 01 - Introduction]$ ./above_script
... successfully executed.
[sony@localhost Chapter 01 - Introduction]$ ls
0101_-_About_this_Course.avi 0102_-_Course_Structure.avi
0101 - About this Course.mov 0102 - Course Structure.mov
[sony@localhost Chapter 01 - Introduction]$
Cheers!
I had to deal with whitespace in pathnames, too. What I finally did was use recursion and for item in /path/*:
function recursedir {
local item
for item in "${1%/}"/*
do
if [ -d "$item" ]
then
recursedir "$item"
else
command
fi
done
}
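Usage is then simply (starting directory hypothetical, and with command replaced by whatever should run per file):
recursedir /some/start/dir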
Convert the file list into a Bash array. This uses Matt McClure's approach for returning an array from a Bash function:
http://notes-matthewlmcclure.blogspot.com/2009/12/return-array-from-bash-function-v-2.html
The result is a way to convert any multi-line input to a Bash array.
#!/bin/bash
# This is the command where we want to convert the output to an array.
# Output is: fileSize fileNameIncludingPath
multiLineCommand="find . -mindepth 1 -printf '%s %p\\n'"
# This eval converts the multi-line output of multiLineCommand to a
# Bash array. To convert stdin, remove: < <(eval "$multiLineCommand" )
eval "declare -a myArray=`( arr=(); while read -r line; do arr[${#arr[#]}]="$line"; done; declare -p arr | sed -e 's/^declare -a arr=//' ) < <(eval "$multiLineCommand" )`"
for f in "${myArray[#]}"
do
echo "Element: $f"
done
This approach appears to work even when bad characters are present, and is a general way to convert any input to a Bash array. The disadvantage is if the input is long you could exceed Bash's command line size limits, or use up large amounts of memory.
Approaches where the loop that eventually works on the list also has the list piped in have the disadvantage that reading stdin is not easy (such as when asking the user for input), and that the loop runs in a new process, so you may be wondering why variables you set inside the loop are not available after it finishes.
I also dislike setting IFS, it can mess up other code.
Well, I see too many complicated answers. I don't want to pass the output of the find utility or write a loop, because find has an "exec" option for this.
My problem was that I wanted to move all files with dbf extension to the current folder and some of them contained white space.
I tackled it so:
find . -name \*.dbf -print0 -exec mv '{}' . ';'
Looks much simpler to me.
I just found out there are some similarities between my question and yours. Apparently, if you want to pass arguments into commands, like
test.sh "Cherry Hill" "New York City"
to print them out in order
for SOME_ARG in "$#"
do
echo "$SOME_ARG";
done;
notice the $@ is surrounded by double quotes, some notes here
I needed the same concept to compress several directories or files sequentially from a certain folder. I solved it using awk to parse the list from ls and to avoid the problem of blank spaces in names.
source="/xxx/xxx"
dest="/yyy/yyy"
n_max=`ls . | wc -l`
echo "Loop over items..."
i=1
while [ $i -le $n_max ];do
item=`ls . | awk 'NR=='$i'' `
echo "File selected for compression: $item"
tar -cvzf $dest/"$item".tar.gz "$item"
i=$(( i + 1 ))
done
echo "Done!!!"
what do you think?
find Downloads -type f | while read file; do printf "%q\n" "$file"; done
For me this works, and it is pretty much "clean":
for f in "$(find ./test -type d)" ; do
echo "$f"
done
Just had a simple variant problem... converting files of type .flv to .mp3 (yawn).
for file in `find . -name '*.flv'`; do ffmpeg -i "$file" -acodec copy "$file".mp3; done
recursively find all the Macintosh user Flash files and turn them into audio (copy, no transcode) ... it's like the while loops above, noting that a while read loop, rather than a bare 'for file in', is what escapes the spaces safely.
