How can I use multiple Bash arguments in a loop dynamically without using long regex strings? - bash

I have a directory with the following files:
file1.jpg
file2.jpg
file3.jpg
file1.png
file2.png
file3.png
I have a bash function named filelist and it looks like this:
filelist() {
if [ "$1" ]
then
shopt -s nullglob
for filelist in *."$@" ; do
echo "$filelist" >> created-file-list.txt;
done
echo "file created listing: " $#;
else
filelist=`find . -type f -name "*.*" -exec basename \{} \;`
echo "$filelist" >> created-file-list.txt
echo "file created listing: All Files";
fi
}
Goal: Be able to type as many arguments as I want, for example filelist jpg png, and create a file with a list of files of only the extensions I used as arguments. So if I type filelist jpg, it would only show a list of files that have .jpg.
Currently: My code works great with one argument thanks to $@, but when I use both jpg and png it creates the following list:
file1.jpg
file2.jpg
file3.jpg
png
It looks like my for loop is only running once and only using the first argument. My suspicion is I need to count how many arguments and run the loop on each one.
An obvious fix for this is to create a long regex check like (jpg|png|jpeg|html|css) and all of the different extensions one could ever think to type. This is not ideal because I want other people to be free to type their file extensions without breaking it if they type one that I don't have identified in my regex. Dynamic is key.

You can rewrite your function as shown below - just loop through each extension and append the list of matching files to the output file:
filelist() {
if [ $# -gt 0 ]; then
shopt -s nullglob
for ext in "$@"; do
printf '%s\n' *."$ext" >> created-file-list.txt
echo "created listing for extension $ext"
done
else
find . -type f -name "*.*" -exec basename \{} \; >> created-file-list.txt
echo "created listing for all files"
fi
}
And you can invoke your function as:
filelist jpg png
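The fix hinges on the quoted "$@", which expands to one word per argument, whereas the original *."$@" attaches the glob prefix to the first argument only. A minimal sketch of the different expansions (run in a directory with no matching files, nullglob unset):
count() { echo "$# word(s):" "$@"; }
set -- jpg png
count "$@"     # 2 word(s): jpg png    -- one word per argument
count "$*"     # 1 word(s): jpg png    -- all arguments joined into one word
count *."$@"   # 2 word(s): *.jpg png  -- the glob attaches to the first word only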

Try this
#!/bin/bash
while [ -n "$1" ]
do
echo "Current Parameter: $1 , Remaining $#"
#Pass $1 to some bash function or do whatever
shift
done
Using shift, you shift the arguments left and read the next one from the $1 variable.
See man bash on what shift does.
shift [n]
The positional parameters from n+1 ... are renamed to $1 .... Parameters represented by the numbers $# down to $#-n+1 are unset. n must be a non-negative number less than or equal to $#. If n is 0, no parameters are changed. If n is not given, it is assumed to be 1. If n is greater than $#, the positional parameters are not changed. The return status is greater than zero if n is greater than $# or less than zero; otherwise 0.
Or you can iterate like as follows
for this in "$@"
do
echo "Param = $this";
done
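Applied to the original problem, a minimal sketch of filelist using shift (same output file as in the question; nullglob keeps unmatched extensions from producing literal globs):
filelist() {
shopt -s nullglob
while [ -n "$1" ]; do
for f in *."$1"; do
echo "$f" >> created-file-list.txt
done
echo "file created listing: $1"
shift
done
}
filelist jpg png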

Related

bash: using find with multiple file types, provided as an array

In a bash function, I want to list all the files in a given folder which correspond to a given set of file types. In pseudo-code, I am imagining something like this:
getMatchingFiles() {
output=$1
directory=$2
shift 2
_types_=("$@")
file_array=find $directory -type f where-name-matches-item-in-_types_
# do other stuff with $file_array, such as trimming file names to
# just the basename with no extension
eval $output="${file_array[@]}"
}
dir=/path/to/folder
types=(ogg mp3)
getMatchingFiles result dir types
echo "${result[#]}"
For your amusement, here are the multiple workarounds, based on my current knowledge of bash, that I am using to achieve this. I have a problem with the way the function returns the array of files: the final command tries to execute each file, rather than to set the output parameter.
getMatchingFiles() {
local _output=$1
local _dir=$2
shift 2
local _type=("$@")
local _files=($_dir/$_type/*)
local -i ii=${#_files[@]}
local -a _filetypes
local _file _regex
case $_type in
audio )
_filetypes=(ogg mp3)
;;
images )
_filetypes=(jpg png)
;;
esac
_regex="^.*\.("
for _filetype in "${_filetypes[@]}"
do
_regex+=$_filetype"|"
done
_regex=${_regex:0:-1}
_regex+=")$"
for (( ; ii-- ; ))
do
_file=${_files[$ii]}
if ! [[ $_file =~ $_regex ]];then
unset _files[ii]
fi
done
echo "${_files[#]}"
# eval $_output="${_files[#]}" # tries to execute the files
}
dir=/path/to/parent
getMatchingFiles result $dir audio
echo "${result[#]}"
As a matter of fact, it is possible to use nameref (note that you need bash 4.3 or later) to reference an array. If you want to put the output of find to an array specified by a name, you can reference it like this:
#!/usr/bin/env bash
getMatchingFiles() {
local -n output=$1
local dir=$2
shift 2
local types=("$@")
local ext file
local -a find_ext
[[ ${#types[@]} -eq 0 ]] && return 1
for ext in "${types[@]}"; do
find_ext+=(-o -name "*.${ext}")
done
unset 'find_ext[0]'
output=()
while IFS= read -r -d $'\0' file; do
output+=("$file")
done < <(find "$dir" -type f \( "${find_ext[#]}" \) -print0)
}
dir=/some/path
getMatchingFiles result "$dir" mp3 txt
printf '%s\n' "${result[#]}"
getMatchingFiles other_result /some/other/path txt
printf '%s\n' "${other_result[#]}"
Don't pass your variable $dir as a reference, pass it as a value instead. You will be able to pass a literal as well.
Update: namerefs can indeed be arrays (see PesaThe's answer)
Without spaces in file and directory names
I first assume you do not have spaces in your file and directory names. See the second part of this answer if you have spaces in your file and directory names.
In order to pass result, dir and types by name to your function, you need to use namerefs (local -n or declare -n, available only in recent versions of bash).
Another difficulty is to build the find command based on the types you passed but this is not a major one. Pattern substitutions can do this. All in all, something like this should do about what you want:
#!/usr/bin/env bash
getMatchingFiles() {
local -n output=$1
local -n directory=$2
local -n _types_=$3
local filter
filter="${_types_[#]/#/ -o -name *.}"
filter="${filter# -o }"
output=( $( find "$directory" -type f \( $filter \) ) )
# do other stuff with $output, such as trimming file names to
# just the basename with no extension
}
declare dir
declare -a types
declare -a result=()
dir=/path/to/folder
types=(ogg mp3)
getMatchingFiles result dir types
for f in "${result[#]}"; do echo "$f"; done
With spaces in file and directory names (but not in file suffixes)
If you have spaces in your file and directory names, things are a bit more difficult because you must assign your array such that names are not split in words; one possibility to do this is to use \0 as file names separator, instead of a space, thanks to the -print0 option of find and the -d $'\0' option of read:
#!/usr/bin/env bash
getMatchingFiles() {
local -n output=$1
local -n directory=$2
local -n _types_=$3
local filter
filter="${_types_[#]/#/ -o -name *.}"
filter="${filter# -o }"
while read -d $'\0' file; do
output+=( "$file" )
done < <( find "$directory" -type f \( $filter \) -print0 )
# do other stuff with $output, such as trimming file names to
# just the basename with no extension
}
declare dir
declare -a types
declare -a result=()
dir=/path/to/folder
types=(ogg mp3)
getMatchingFiles result dir types
for f in "${result[@]}"; do echo "$f"; done
With spaces in file and directory names, even in file suffixes
Well, you deserve what happens to you... Still possible but left as an exercise.
Supporting the original, unmodified calling convention, and correctly handling extensions with whitespace or glob characters:
#!/usr/bin/env bash
getMatchingFiles() {
declare -g -a "$1=()"
declare -n gMF_result="$1" # variables are namespaced to avoid conflicts w/ targets
declare -n gMF_dir="$2"
declare -n gMF_types="$3"
local gMF_args=( -false ) # empty type list not a special case
local gMF_type gMF_item
for gMF_type in "${gMF_types[@]}"; do
gMF_args+=( -o -name "*.$gMF_type" )
done
while IFS= read -r -d '' gMF_item; do
gMF_result+=( "$gMF_item" )
done < <(find "$gMF_dir" '(' "${gMF_args[#]}" ')' -print0)
}
dir=/path/to/folder
types=(ogg mp3)
getMatchingFiles result dir types
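As with the other answers, the populated result array can then be inspected, e.g.:
printf '%s\n' "${result[@]}"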

find -iname not working in script

The following find command:
find Work/Linux4/test/test/test_goal/spyglass_reports/clock-reset/Ac_coherency06/ -iname "Ac_coherency*.csv"
works fine when run in the shell, but when run from the script below it returns nothing.
#!/bin/bash
REPORT_DIR=$1
FIND_CMD=$2
echo "##";
echo $REPORT_DIR ;
echo $FIND_CMD ;
LIST_OF_CSV=$(find $REPORT_DIR $FIND_CMD)
echo $LIST_OF_CSV
if [ "X" == "X${LIST_OF_CSV}" ]; then
echo "No files Found for : '$FIND_CMD' in directory ";
echo " '$REPORT_DIR' " | sed -e 's;Work/.*/test_reports;Work/PLATFORM/test_reports;g';
echo;
exit 0;
fi
Output of script:
##
Work/$PLATFORM_SPECIES/test_reports/clock-reset/Ac_coherency06 -iname "Ac_coherency06*.csv"
No files Found for : '-iname "Ac_coherency06*.csv"' in directory 'Work/PLATFORM/test_reports/clock-reset/Ac_coherency06'
If you're allowing a list of find predicates to be passed, keep them in list form, one argument to find per argument to your script. As an example implemented in this manner:
#!/bin/bash
# read report_dir off the command line, and shift it from arguments
report_dir=$1; shift
# generate a version of report_dir for human consumption
re='Work/.*/test_reports'
replacement='Work/PLATFORM/test_reports'
if [[ $report_dir =~ $re ]]; then
report_dir_name=${report_dir//${BASH_REMATCH[0]}/$replacement}
else
report_dir_name=$report_dir
fi
# read results from find -- stored NUL-delimited -- into an array
# using NUL-delimited inputs ensure that even unusual filenames work correctly
declare -a list_of_csv
while IFS= read -r -d '' filename; do
list_of_csv+=( "$filename" )
done < <(find "$report_dir" '(' "$#" ')' -print0)
# Use the length of that array to determine whether we found contents
echo "Found ${#list_of_csv[#]} files" >&2
if (( ${#list_of_csv[#]} == 0 )); then
echo "No files found in $report_dir_name" >&2
fi
Here, shift consumes the first argument from your list, and "$@" refers to all the others that remain after that point. This means that the items you want to have passed as separate, individual arguments to find can (and must) be passed as separate, individual arguments to your script.
Thus, with usage yourscript "/path/to/report/dir" -name '*.txt', initially, $1 will be /path/to/report/dir, $2 will be -name, and $3 will be *.txt. However, after shift is run, $1 will be -name, and $2 will be *.txt; and "$@" will refer to both of those, each passed as a separate word.
For details on the use of a while read loop to read items off of a stream, see BashFAQ #001.
For details on the syntax used for bash-native string replacement, see BashFAQ #100 or http://wiki.bash-hackers.org/syntax/pe
For details on shell arrays, including ${#arrayname[@]} to check their length or "${arrayname[@]}" to expand to their contents, see BashFAQ #005.
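To see why the original script failed: the quotes inside $FIND_CMD are data, not syntax; after word splitting, find receives a pattern that begins with a literal quote character. A minimal reproduction sketch:
FIND_CMD='-iname "Ac_coherency06*.csv"'
find . $FIND_CMD
# find receives the two words: -iname and "Ac_coherency06*.csv"
# (quote characters included), so only names literally starting with
# a double quote could ever match.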
If you have a command that is running well on the shell but not on your script, the first thing I would try would be to specify Bash on the command being called, see if this works:
bash -c 'find Work/Linux4/test/test/test_goal/spyglass_reports/clock-reset/Ac_coherency06/ -iname "Ac_coherency*.csv"'
Or even better:
/bin/bash -c 'find Work/Linux4/test/test/test_goal/spyglass_reports/clock-reset/Ac_coherency06/ -iname "Ac_coherency*.csv"'
You could also store the result on a variable or other data structure as needed, and pass it later to the script, for example:
ResultCommand="$(bash -c 'find Work/Linux4/test/test/test_goal/spyglass_reports/clock-reset/Ac_coherency06/ -iname "Ac_coherency*.csv"')"
Edit: this answer was edited more than once to fix possible issues.

Creating a which command in bash script

For an assignment, I'm supposed to create a script called my_which.sh that will "do the same thing as the Unix command, but do it using a for loop over an if." I am also not allowed to call which in my script.
I'm brand new to this, and have been reading tutorials, but I'm pretty confused on how to start. Doesn't which just list the path name of a command?
If so, how would I go about displaying the correct path name without calling which, and while using a for loop and an if statement?
For example, if I run my script, it will echo % and wait for input. But then how do I translate that to finding the directory? So it would look like this?
#!/bin/bash
path=(`echo $PATH`)
echo -n "% "
read ans
for i in $path
do
if [ -d $i ]; then
echo $i
fi
done
I would appreciate any help, or even any starting tutorials that can help me get started on this. I'm honestly very confused on how I should implement this.
Split your PATH variable safely. This is a general method to split a string at delimiters, that is 100% safe regarding any possible characters (including newlines):
IFS=: read -r -d '' -a paths < <(printf '%s:\0' "$PATH")
We artificially added : because if PATH ends with a trailing :, then it is understood that current directory should be in PATH. While this is dangerous and not recommended, we must also take it into account if we want to mimic which. Without this trailing colon, a PATH like /bin:/usr/bin: would be split into
declare -a paths='( [0]="/bin" [1]="/usr/bin" )'
whereas with this trailing colon the resulting array is:
declare -a paths='( [0]="/bin" [1]="/usr/bin" [2]="" )'
This is one detail that other answers miss. Of course, we'll do this only if PATH is set and non-empty.
With this split PATH, we'll use a for loop to check whether the argument can be found in the given directory. Note that this should be done only if the argument doesn't contain a / character! This is also something other answers missed.
My version of which handles a single option -a that prints all matching pathnames of each argument. Otherwise, only the first match is printed. We'll have to take this into account too.
My version of which handles the following exit status:
0 if all specified commands are found and executable
1 if one or more specified commands is nonexistent or not executable
2 if an invalid option is specified
We'll handle that too.
I guess the following mimics rather faithfully the behavior of my which (and it's pure Bash):
#!/bin/bash
show_usage() {
printf 'Usage: %s [-a] args\n' "$0"
}
illegal_option() {
printf >&2 'Illegal option -%s\n' "$1"
show_usage
exit 2
}
check_arg() {
if [[ -f $1 && -x $1 ]]; then
printf '%s\n' "$1"
return 0
else
return 1
fi
}
# manage options
show_only_one=true
while (($#)); do
[[ $1 = -- ]] && { shift; break; }
[[ $1 = -?* ]] || break
opt=${1#-}
while [[ $opt ]]; do
case $opt in
(a*) show_only_one=false; opt=${opt#?} ;;
(*) illegal_option "${opt:0:1}" ;;
esac
done
shift
done
# If no arguments left or empty PATH, exit with return code 1
(($#)) || exit 1
[[ $PATH ]] || exit 1
# split path
IFS=: read -r -d '' -a paths < <(printf '%s:\0' "$PATH")
ret=0
# loop on arguments
for arg; do
# Check whether arg contains a slash
if [[ $arg = */* ]]; then
check_arg "$arg" || ret=1
else
this_ret=1
for p in "${paths[#]}"; do
if check_arg "${p:-.}/$arg"; then
this_ret=0
"$show_only_one" && break
fi
done
((this_ret==1)) && ret=1
fi
done
exit "$ret"
To test whether an argument is executable or not, I'm checking whether it's a regular file1 which is executable with:
[[ -f $arg && -x $arg ]]
I guess that's close to my which's behavior.
1 As #mklement0 points out (thanks!) the -f test, when applied against a symbolic link, tests the type of the symlink's target.
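A small sketch of the footnote's point; -f applied to a symlink tests the symlink's target (assuming /bin/ls exists and is executable):
ln -s /bin/ls mylink
[[ -f mylink && -x mylink ]] && echo "treated as a regular executable file (the target)"
rm mylink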
#!/bin/bash
#Get the user's first argument to this script
exe_name=$1
#Set the field separator to ":" (this is what the PATH variable
# uses as its delimiter), then read the contents of the PATH
# into the array variable "paths" -- at the same time splitting
# the PATH by ":"
IFS=':' read -a paths <<< $PATH
#Iterate over each of the paths in the "paths" array
for e in ${paths[*]}
do
#Check for the $exe_name in this path
find $e -name $exe_name -maxdepth 1
done
This is similar to the accepted answer with the difference that it does not set the IFS and checks if the execute bits are set.
#!/bin/bash
for i in $(echo "$PATH" | tr ":" "\n")
do
find "$i" -name "$1" -perm +111 -maxdepth 1
done
Save this as my_which.sh (or some other name) and run it as ./my_which java etc.
However if there is an "if" required:
#!/bin/bash
for i in $(echo "$PATH" | tr ":" "\n")
do
# this is a one liner that works. However the user requires an if statment
# find "$i" -name "$1" -perm +111 -maxdepth 1
cmd=$i/$1
if [[ ( -f "$cmd" || -L "$cmd" ) && -x "$cmd" ]]
then
echo "$cmd"
break
fi
done
You might want to take a look at this link to figure out the tests in the "if".
For a complete, rock-solid implementation, see gniourf_gniourf's answer.
Here's a more concise alternative that makes do with a single invocation of find [per name to investigate].
The OP later clarified that an if statement should be used in a loop, but the question is general enough to warrant considering other approaches.
A naïve implementation would even work as a one-liner, IF you're willing to make a few assumptions (the example uses 'ls' as the executable to locate):
find -L ${PATH//:/ } -maxdepth 1 -type f -perm -u=x -name 'ls' 2>/dev/null
The assumptions - which will hold in many, but not all situations - are:
$PATH must not contain entries that when used unquoted result in shell expansions (e.g., no embedded spaces that would result in word splitting, no characters such as * that would result in pathname expansion)
$PATH must not contain an empty entry (which must be interpreted as the current dir).
Explanation:
-L tells find to investigate the targets of symlinks rather than the symlinks themselves - this ensures that symlinks to executable files are also recognized by -type f
${PATH//:/ } replaces all : chars. in $PATH with a space each, causing the result - due to being unquoted - to be passed as individual arguments split by spaces.
-maxdepth 1 instructs find to only look directly in each specified directory, not also in subdirectories
-type f matches only files, not directories.
-perm -u=x matches only files and directories that the current user (u) can execute (x).
2>/dev/null suppresses error messages that may stem from non-existent directories in the $PATH or failed attempts to access files due to lack of permission.
Here's a more robust script version:
Note:
For brevity, only handles a single argument (and no options).
Does NOT handle the case where entries or result paths may contain embedded \n chars - however, this is extremely rare in practice and likely leads to bigger problems overall.
#!/bin/bash
# Assign argument to variable; error out, if none given.
name=${1:?Please specify an executable filename.}
# Robustly read individual $PATH entries into a bash array, splitting by ':'
# - The additional trailing ':' ensures that a trailing ':' in $PATH is
# properly recognized as an empty entry - see gniourf_gniourf's answer.
IFS=: read -r -a paths <<<"${PATH}:"
# Replace empty entries with '.' for use with `find`.
# (Empty entries imply '.' - this is legacy behavior mandated by POSIX).
for (( i = 0; i < "${#paths[@]}"; i++ )); do
[[ "${paths[i]}" == '' ]] && paths[i]='.'
done
# Invoke `find` with *all* directories and capture the 1st match, if any, in a variable.
# Simply remove `| head -n 1` to print *all* matches.
match=$(find -L "${paths[@]}" -maxdepth 1 -type f -perm -u=x -name "$name" 2>/dev/null |
head -n 1)
# Print result, if found, and exit with appropriate exit code.
if [[ -n $match ]]; then
printf '%s\n' "$match"
exit 0
else
exit 1
fi
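A usage sketch, assuming the script is saved as my_which.sh and made executable (output depends on your PATH):
chmod +x my_which.sh
./my_which.sh ls    # prints e.g. /bin/ls and exits 0; exits 1 if nothing is found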

How to test filename expansion result in bash?

I want to check whether a directory has files or not in bash.
My code is here.
for d in {,/usr/local}/etc/bash_completion.d ~/.bash/completion.d
do
[ -d "$d" ] && [ -n "${d}/*" ] &&
for f in $d/*; do
[ -f "$f" ] && echo "$f" && . "$f"
done
done
The problem is that "~/.bash/completion.d" contains no files.
So $d/* is treated as the plain string "~/.bash/completion.d/*", not the empty string that would result from filename expansion.
As a result of that code, bash tries to run
. "~/.bash/completion.d/*"
and of course, it generates error message.
Can anybody help me?
If you set the nullglob bash option, through
shopt -s nullglob
then globbing will drop patterns that don't match any file.
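A minimal sketch of the original loop with nullglob enabled; the inner loop simply never runs for directories with no matching files:
shopt -s nullglob
for d in {,/usr/local}/etc/bash_completion.d ~/.bash/completion.d; do
[ -d "$d" ] || continue
for f in "$d"/*; do
[ -f "$f" ] && echo "$f" && . "$f"
done
done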
# NOTE: using only bash builtins
# Assuming $d contains directory path
shopt -s nullglob
# Assign matching files to array
files=( "$d"/* )
if [ ${#files[@]} -eq 0 ]; then
echo 'No files found.'
else
# Whatever
fi
Assignment to an array has other benefits, including desirable (correct!) handling of filenames/paths containing white-space, and simple iteration without using a sub-shell, as the following code does:
find "$d" -type f |
while read; do
# Process $REPLY
done
Instead, you can use:
for file in "${files[#]}"; do
# Process $file
done
with the benefit that the loop is run by the main shell, meaning that side-effects (such as variable assignment, say) made within the loop are visible for the remainder of script. Of course, it's also way faster, if performance is an issue.
Finally, an array can also be inserted in command line arguments (without splitting arguments containing white-space):
$ md5sum fileA "${files[@]}" fileZ
You should always attempt to correctly handle files/paths containing white-space, because one day, they will happen!
You could use find directly in the following way:
for f in $(find {,/usr/local}/etc/bash_completion.d ~/.bash/completion.d -maxdepth 1 -type f);
do echo $f; . $f;
done
But find will print a warning if one of the directories isn't found; you can either add 2>/dev/null or put the find call after testing that the directories exist (as in your code).
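For example, with the warning suppressed (a sketch of the variant just described; it inherits the same word-splitting caveats as the loop above):
for f in $(find {,/usr/local}/etc/bash_completion.d ~/.bash/completion.d -maxdepth 1 -type f 2>/dev/null); do
echo "$f"; . "$f";
done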
recurse() {
for files in "$1"/*;do
if [ -d "$files" ];then
numfile=$(ls $files|wc -l)
if [ "$numfile" -eq 0 ];then
echo "dir: $files has no files"
continue
fi
recurse "$files"
elif [ -f "$files" ];then
echo "file: $files";
:
fi
done
}
recurse /path
Another approach
# prelim stuff to set up d
files=`/bin/ls $d`
if [ ${#files} -eq 0 ]
then
echo "No files were found"
else
# do processing
fi

Checking from shell script if a directory contains files

From a shell script, how do I check if a directory contains files?
Something similar to this
if [ -e /some/dir/* ]; then echo "huzzah"; fi;
but which works if the directory contains one or several files (the above one only works with exactly 0 or 1 files).
Three best tricks
shopt -s nullglob dotglob; f=your/dir/*; ((${#f}))
This trick is 100% bash and invokes (spawns) a sub-shell. The idea is from Bruno De Fraine and improved by teambob's comment.
files=$(shopt -s nullglob dotglob; echo your/dir/*)
if (( ${#files} ))
then
echo "contains files"
else
echo "empty (or does not exist or is a file)"
fi
Note: no difference between an empty directory and a non-existing one (and even when the provided path is a file).
There is a similar alternative and more details (and more examples) on the 'official' FAQ for #bash IRC channel:
if (shopt -s nullglob dotglob; f=(*); ((${#f[@]})))
then
echo "contains files"
else
echo "empty (or does not exist, or is a file)"
fi
[ -n "$(ls -A your/dir)" ]
This trick is inspired by nixCraft's article posted in 2007. Add 2>/dev/null to suppress the error output "No such file or directory".
See also Andrew Taylor's answer (2008) and gr8can8dian's answer (2011).
if [ -n "$(ls -A your/dir 2>/dev/null)" ]
then
echo "contains files (or is a file)"
else
echo "empty (or does not exist)"
fi
or the one-line bashism version:
[[ $(ls -A your/dir) ]] && echo "contains files" || echo "empty"
Note: ls returns $?=2 when the directory does not exist. But no difference between a file and an empty directory.
[ -n "$(find your/dir -prune -empty)" ]
This last trick is inspired by gravstar's answer, where -maxdepth 0 is replaced by -prune, and improved by phils's comment.
if [ -n "$(find your/dir -prune -empty 2>/dev/null)" ]
then
echo "empty (directory or file)"
else
echo "contains files (or does not exist)"
fi
a variation using -type d:
if [ -n "$(find your/dir -prune -empty -type d 2>/dev/null)" ]
then
echo "empty directory"
else
echo "contains files (or does not exist or is not a directory)"
fi
Explanation:
find -prune is similar to find -maxdepth 0 but uses fewer characters
find -empty prints the empty directories and files
find -type d prints directories only
Note: You could also replace [ -n "$(find your/dir -prune -empty)" ] by just the shortened version below:
if [ `find your/dir -prune -empty 2>/dev/null` ]
then
echo "empty (directory or file)"
else
echo "contains files (or does not exist)"
fi
This last code works in most cases, but be aware that malicious paths could be interpreted as a command...
The solutions so far use ls. Here's an all bash solution:
#!/bin/bash
shopt -s nullglob dotglob # To include hidden files
files=(/some/dir/*)
if [ ${#files[@]} -gt 0 ]; then echo "huzzah"; fi
How about the following:
if find /some/dir/ -maxdepth 0 -empty | read v; then echo "Empty dir"; fi
This way there is no need for generating a complete listing of the contents of the directory. The read is both to discard the output and make the expression evaluate to true only when something is read (i.e. /some/dir/ is found empty by find).
Try:
if [ ! -z `ls /some/dir/*` ]; then echo "huzzah"; fi
Take care with directories with a lot of files! It could take some time to evaluate the ls command.
IMO the best solution is the one that uses
find /some/dir/ -maxdepth 0 -empty
# Works on hidden files, directories and regular files
### isEmpty()
# This function takes one parameter:
# $1 is the directory to check
# Echoes "huzzah" if the directory has files
function isEmpty(){
if [ "$(ls -A $1)" ]; then
echo "huzzah"
else
echo "has no files"
fi
}
DIR="/some/dir"
if [ "$(ls -A $DIR)" ]; then
echo 'There is something alive in here'
fi
Could you compare the output of this?
ls -A /some/dir | wc -l
This may be a really late response, but here is a solution that works. This line only recognizes the existence of files! It will not give you a false positive if directories exist.
if find /path/to/check/* -maxdepth 0 -type f | read
then echo "Files Exist"
fi
# Checks whether a directory contains any nonhidden files.
#
# usage: if isempty "$HOME"; then echo "Welcome home"; fi
#
isempty() {
for _ief in $1/*; do
if [ -e "$_ief" ]; then
return 1
fi
done
return 0
}
Some implementation notes:
The for loop avoids a call to an external ls process. It still reads all the directory entries once. This can only be optimized away by writing a C program that uses readdir() explicitly.
The test -e inside the loop catches the case of an empty directory, in which case the variable _ief would be assigned the value "somedir/*". Only if that file exists will the function return "nonempty"
This function will work in all POSIX implementations. But be aware that the Solaris /bin/sh doesn't fall into that category. Its test implementation doesn't support the -e flag.
This tells me if the directory is empty or if it's not, the number of files it contains.
directory="/some/dir"
number_of_files=$(ls -A $directory | wc -l)
if [ "$number_of_files" == "0" ]; then
echo "directory $directory is empty"
else
echo "directory $directory contains $number_of_files files"
fi
ZSH
I know the question was marked for bash; but, just for reference, for zsh users:
Test for non-empty directory
To check if foo is non-empty:
$ for i in foo(NF) ; do ... ; done
where, if foo is non-empty, the code in the for block will be executed.
Test for empty directory
To check if foo is empty:
$ for i in foo(N/^F) ; do ... ; done
where, if foo is empty, the code in the for block will be executed.
Notes
We did not need to quote the directory foo above, but we can do so if we need to:
$ for i in 'some directory!'(NF) ; do ... ; done
We can also test more than one object, even if it is not a directory:
$ mkdir X # empty directory
$ touch f # regular file
$ for i in X(N/^F) f(N/^F) ; do echo $i ; done # echo empty directories
X
Anything that is not a directory will just be ignored.
Extras
Since we are globbing, we can use any glob (or brace expansion):
$ mkdir X X1 X2 Y Y1 Y2 Z
$ touch Xf # create regular file
$ touch X1/f # directory X1 is not empty
$ touch Y1/.f # directory Y1 is not empty
$ ls -F # list all objects
X/ X1/ X2/ Xf Y/ Y1/ Y2/ Z/
$ for i in {X,Y}*(N/^F); do printf "$i "; done; echo # print empty directories
X X2 Y Y2
We can also examine objects that are placed in an array. With the directories as above, for example:
$ ls -F # list all objects
X/ X1/ X2/ Xf Y/ Y1/ Y2/ Z/
$ arr=(*) # place objects into array "arr"
$ for i in ${^arr}(N/^F); do printf "$i "; done; echo
X X2 Y Y2 Z
Thus, we can test objects that may already be set in an array parameter.
Note that the code in the for block is, obviously, executed on every directory in turn. If this is not desirable then you can simply populate an array parameter and then operate on that parameter:
$ for i in *(NF) ; do full_directories+=($i) ; done
$ do_something $full_directories
Explanation
For zsh users there is the (F) glob qualifier (see man zshexpn), which matches "full" (non-empty) directories:
$ mkdir X Y
$ touch Y/.f # Y is now not empty
$ touch f # create a regular file
$ ls -dF * # list everything in the current directory
f X/ Y/
$ ls -dF *(F) # will list only "full" directories
Y/
The qualifier (F) lists objects that match: is a directory AND is not empty. So, (^F) matches: not a directory OR is empty. Thus, (^F) alone would also list regular files, for example. Thus, as explained on the zshexp man page, we also need the (/) glob qualifier, which lists only directories:
$ mkdir X Y Z
$ touch X/f Y/.f # directories X and Y now not empty
$ for i in *(/^F) ; do echo $i ; done
Z
Thus, to check if a given directory is empty, you can therefore run:
$ mkdir X
$ for i in X(/^F) ; do echo $i ; done ; echo "finished"
X
finished
and just to be sure that a non-empty directory would not be captured:
$ mkdir Y
$ touch Y/.f
$ for i in Y(/^F) ; do echo $i ; done ; echo "finished"
zsh: no matches found: Y(/^F)
finished
Oops! Since Y is not empty, zsh finds no matches for (/^F) ("directories that are empty") and thus spits out an error message saying that no matches for the glob were found. We therefore need to suppress these possible error messages with the (N) glob qualifier:
$ mkdir Y
$ touch Y/.f
$ for i in Y(N/^F) ; do echo $i ; done ; echo "finished"
finished
Thus, for empty directories we need the qualifier (N/^F), which you can read as: "don't warn me about failures, directories that are not full".
Similarly, for non-empty directories we need the qualifier (NF), which we can likewise read as: "don't warn me about failures, full directories".
dir_is_empty() {
[ "${1##*/}" = "*" ]
}
if dir_is_empty /some/dir/* ; then
echo "huzzah"
fi
Assuming you don't have a file named * in /any/dir/you/check, it should work on bash, dash, posh, busybox sh and zsh, but (for zsh) it requires unsetopt nomatch.
Performance should be comparable to any ls that uses a * glob; I guess it will be slow on directories with many nodes (my /usr/bin with 3000+ files went not that slow), and it will use at least enough memory to allocate all dirs/filenames (and more), as they are all passed (resolved) to the function as arguments; some shells probably have limits on the number and/or length of arguments.
A portable fast O(1) zero resources way to check if a directory is empty would be nice to have.
update
The version above doesn't account for hidden files/dirs, in case some more test is required, like the is_empty from Rich’s sh (POSIX shell) tricks:
is_empty () (
cd "$1"
set -- .[!.]* ; test -f "$1" && return 1
set -- ..?* ; test -f "$1" && return 1
set -- * ; test -f "$1" && return 1
return 0 )
But, instead, I'm thinking about something like this:
dir_is_empty() {
[ "$(find "$1" -name "?*" | dd bs=$((${#1}+3)) count=1 2>/dev/null)" = "$1" ]
}
There is some concern about differences in trailing slashes between the argument and the find output when the dir is empty, and about trailing newlines (but this should be easy to handle); sadly, my busybox sh shows what is probably a bug in the find -> dd pipe, with the output truncated randomly (if I used cat, the output was always the same; it seems to be dd with the count argument).
Taking a hint (or several) from olibre's answer, I like a Bash function:
function isEmptyDir {
[ -d $1 -a -n "$( find $1 -prune -empty 2>/dev/null )" ]
}
Because while it creates one subshell, it's as close to an O(1) solution as I can imagine and giving it a name makes it readable. I can then write
if isEmptyDir somedir
then
echo somedir is an empty directory
else
echo somedir does not exist, is not a dir, is unreadable, or is not empty
fi
As for O(1) there are outlier cases: if a large directory has had all or all but the last entry deleted, "find" may have to read the whole thing to determine whether it's empty. I believe that expected performance is O(1) but worst-case is linear in the directory size. I have not measured this.
I am surprised the wooledge guide on empty directories hasn't been mentioned. This guide, and all of wooledge really, is a must read for shell type questions.
Of note from that page:
Never try to parse ls output. Even ls -A solutions can break (e.g. on HP-UX, if you are root, ls -A does the exact opposite of what it does if you're not root -- and no, I can't make up something that incredibly stupid).
In fact, one may wish to avoid the direct question altogether. Usually people want to know whether a directory is empty because they want to do something involving the files therein, etc. Look to the larger question. For example, one of these find-based examples may be an appropriate solution:
# Bourne
find "$somedir" -type f -exec echo Found unexpected file {} \;
find "$somedir" -maxdepth 0 -empty -exec echo {} is empty. \; # GNU/BSD
find "$somedir" -type d -empty -exec cp /my/configfile {} \; # GNU/BSD
Most commonly, all that's really needed is something like this:
# Bourne
for f in ./*.mpg; do
test -f "$f" || continue
mympgviewer "$f"
done
In other words, the person asking the question may have thought an explicit empty-directory test was needed to avoid an error message like mympgviewer: ./*.mpg: No such file or directory when in fact no such test is required.
Small variation of Bruno's answer:
files=$(ls -1 /some/dir| wc -l)
if [ $files -gt 0 ]
then
echo "Contains files"
else
echo "Empty"
fi
It works for me
With a small workaround I could find a simple way to find out whether there are files in a directory. This can be extended with more grep commands to check specifically for .xml or .txt files, etc. Ex: ls /some/dir | grep xml | wc -l | grep -w "0"
#!/bin/bash
if ([ $(ls /some/dir | wc -l | grep -w "0") ])
then
echo 'No files'
else
echo 'Found files'
fi
if [[ -s somedir ]]; then
echo "Files present"
fi
In my testing with bash 5.0.17, [[ -s somedir ]] will return true if somedir has any children. The same is true of [ -s somedir ]. Note that this will also return true if there are hidden files or subdirectories. It may also be filesystem-dependent.
It really feels like there should be an option to test for an empty directory.
I'll leave that editorial comment as a suggestion to the maintainers of the test command, but the counterpart exists for empty files.
In the trivial use case that brought me here, I'm not worried about looping through a huge number of files, nor am I worried about .files. I was hoping to find the aforementioned "missing" operand to test. C'est la guerre.
In the example below directory empty is empty, and full has files.
$ for f in empty/*; do test -e $f; done
$ echo $?
1
$ for f in full/*; do test -e $f; done
$ echo $?
0
Or, shorter and uglier still, but again only for relatively trivial use cases:
$ echo empty/*| grep \*
$ echo $?
1
$ echo full/* | grep \*
$ echo $?
0
So far I haven't seen an answer that uses grep, which I think would give a simpler answer (without too many weird symbols!). Here is how I would check if any files exist in the directory using the Bourne shell:
this returns the number of files in a directory:
ls -l <directory> | egrep -c "^-"
You can fill in the directory path where <directory> is written. The first half of the pipe ensures that the first character of output is "-" for each file. egrep then counts the number of lines that start with that symbol using regular expressions. Now all you have to do is store the number you obtain and compare it using backquotes:
#!/bin/sh
fileNum=`ls -l <directory> | egrep -c "^-"`
if [ $fileNum == x ]
then
#do what you want to do
fi
x is a variable of your choice.
Mixing the prune approach with previous answers, I got to:
find "$some_dir" -prune -empty -type d | read && echo empty || echo "not empty"
It works for paths with spaces too.
Simple answer with bash:
if [[ $(ls /some/dir/) ]]; then echo "huzzah"; fi;
I would go for find:
if [ -z "$(find $dir -maxdepth 1 -type f)" ]; then
echo "$dir has NO files"
else
echo "$dir has files"
This checks the output of looking for just files in the directory, without going through the subdirectories. Then it checks the output using the -z option taken from man test:
-z STRING
the length of STRING is zero
See some outcomes:
$ mkdir aaa
$ dir="aaa"
Empty dir:
$ [ -z "$(find aaa/ -maxdepth 1 -type f)" ] && echo "empty"
empty
Just dirs in it:
$ mkdir aaa/bbb
$ [ -z "$(find aaa/ -maxdepth 1 -type f)" ] && echo "empty"
empty
A file in the directory:
$ touch aaa/myfile
$ [ -z "$(find aaa/ -maxdepth 1 -type f)" ] && echo "empty"
$ rm aaa/myfile
A file in a subdirectory:
$ touch aaa/bbb/another_file
$ [ -z "$(find aaa/ -maxdepth 1 -type f)" ] && echo "empty"
empty
Without calling utils like ls, find, etc.:
POSIX safe, i.e. not dependent on your Bash / xyz shell / ls / etc. version:
dir="/some/dir"
[ "$(echo $dir/*)x" != "$dir/*x" ] || [ "$(echo $dir/.[^.]*)x" != "$dir/.[^.]*x" ] || echo "empty dir"
The idea:
echo * lists non-dot files
echo .[^.]* lists dot files except of "." and ".."
if echo finds no matches, it returns the search expression itself, i.e. here * or .[^.]* - neither of which is a real string, so each has to be concatenated with e.g. a letter to coerce it into a string
|| alternates the possibilities in a short circuit: there is at least one non-dot file or dir OR at least one dot file or dir OR the directory is empty - on execution level: "if first possibility fails, try next one, if this fails, try next one"; here technically Bash "tries to execute" echo "empty dir", put your action for empty dirs here (eg. exit).
Checked with symlinks, yet to check with more exotic possible file types.
In another thread, How to test if a directory is empty with find, I proposed this:
[ "$(cd $dir;echo *)" = "*" ] && echo empty || echo non-empty
The rationale: $dir does exist, because the question is "Checking from shell script if a directory contains files", and * even on a big dir is not that big; on my system /usr/bin/* is just 12Kb.
Update: thanks @hh skladby, the fixed one:
[ "$(cd $dir;echo .* *)" = ". .. *" ] && echo empty || echo non-empty
if ls /some/dir/* >/dev/null 2>&1 ; then echo "huzzah"; fi;
to test a specific target directory
if [ -d $target_dir ]; then
ls_contents=$(ls -1 $target_dir | xargs);
if [ ! -z "$ls_contents" -a "$ls_contents" != "" ]; then
echo "is not empty";
else
echo "is empty";
fi;
else
echo "directory does not exist";
fi;
Try it with the find command.
Specify the directory hardcoded or as an argument.
Then have find search for all files inside the directory.
Check whether the result of find is null.
Echo the data from find.
#!/bin/bash
_DIR="/home/user/test/"
#_DIR=$1
_FIND=$(find $_DIR -type f )
if [ -n "$_FIND" ]
then
echo -e "$_DIR contains files or subdirs with files \n\n "
echo "$_FIND"
else
echo "empty (or does not exist)"
fi
I dislike the ls -A solutions posted. Most likely you wish to test whether the directory is empty because you don't wish to delete it. The following does that. If, however, you just wish to log an empty file, surely deleting and recreating it is quicker than listing possibly infinite files?
This should work...
if ! rmdir ${target}
then
echo "not empty"
else
echo "empty"
mkdir ${target}
fi
This works well for me (when the dir exists):
some_dir="/some/dir with whitespace & other characters/"
if find "`echo "$some_dir"`" -maxdepth 0 -empty | read v; then echo "Empty dir"; fi
With full check:
if [ -d "$some_dir" ]; then
if find "`echo "$some_dir"`" -maxdepth 0 -empty | read v; then echo "Empty dir"; else "Dir is NOT empty" fi
fi
