Finding empty directories - bash

I need to find empty directories for a given list of directories.
Some directories have directories inside them.
If the inner directories are also empty, I can say the main directory is empty; otherwise it's not empty.
How can I test this?
For example:
A>A1(file1),A2 this is not empty because of file1
B>B1(no file) this is empty
C>C1,C2 this is empty

It depends a little on what you want to do with the empty directories. I use the command below when I wish to delete all empty directories within a tree, say a test directory.
find test -depth -empty -delete
One thing to notice about the command above is that it will also remove empty files, so use the -type d option to avoid that.
find test -depth -type d -empty -delete
Drop -delete to see the files and directories matched.
If your definition of an empty directory tree is that it contains no files, then you should be able to stick something together based on whether find test -type f returns anything.
find is a great utility, and RTFM early and often to really understand how much it can do :-)

You can use the following command:
find . -type d -empty

Check whether find <dir> -type f outputs anything. Here's an example:
for dir in A B C; do
    [ -z "$(find "$dir" -type f)" ] && echo "$dir is empty"
done

find directory -mindepth 1 -type d -empty -delete
This is the version that I found most interesting. If executed from inside directory, it will delete all empty directories below (a directory is considered empty if it only contains empty directories).
The -mindepth 1 option prevents the directory itself from being deleted if it happens to be empty.

find . -type d -empty
finds and lists empty directories and sub-directories in the current tree.
E.g. resulting list of empty dirs and subdirs:
./2047
./2032
./2049
./2063
./NRCP26LUCcct1/2039
./NRCP26LUCcct1/2054
./NRCP26LUCcct1/2075
./NRCP26LUCcct1/2070
No operation is made on the directories. They are simply listed.
This works for me.

Just find empty dirs
In order to just find empty directories (as specified in the question title), mosg's answer is correct:
find -type d -empty
But -empty may not be available on very old versions of find (this is the case on HP-UX, for example). If this is your case, see the techniques described in the section Is a directory empty? below.
Delete empty dirs
This is a bit tricky: Suppose a directory MyDir contains empty directories. After removing these empty directories, MyDir will become an empty directory and should also be removed. Therefore I use the command rmdir with the option --parents (or -p) that also removes parent directories when possible:
find -type d -empty -exec rmdir -vp --ignore-fail-on-non-empty {} +
On older find versions the + terminator is not yet supported, so you can use an escaped semicolon instead:
find -type d -empty -exec rmdir -vp --ignore-fail-on-non-empty {} \;
Is a directory empty?
Most of the answers here check whether a directory is empty in one way or another; here are the three different techniques I know:
[ $(find your/dir -prune -empty) = your/dir ]
d=your/dir
if [ x$(find "$d" -prune -empty) = x"$d" ]
then
echo "empty (directory or file)"
else
echo "contains files (or does not exist)"
fi
a variation:
d=your/dir
if [ x$(find "$d" -prune -empty -type d) = x"$d" ]
then
echo "empty directory"
else
echo "contains files (or does not exist or is not a directory)"
fi
Explanation:
find -prune is similar to find -maxdepth 0 but uses fewer characters
find -type d prints directories only
find -empty prints the empty directories and files
> mkdir -v empty1 empty2 not_empty
mkdir: created directory 'empty1'
mkdir: created directory 'empty2'
mkdir: created directory 'not_empty'
> touch not_empty/file
> find empty1 empty2 not_empty -prune -empty
empty1
empty2
(( ${#files} ))
This trick is 100% bash but invokes (spawns) a sub-shell. The idea is from Bruno De Fraine and was improved by teambob's comment. I advise this one if you use bash and if your script does not have to be portable.
files=$(shopt -s nullglob dotglob; echo your/dir/*)
if (( ${#files} ))
then
echo "contains files"
else
echo "empty (or does not exist or is a file)"
fi
Note: this test cannot distinguish an empty directory from a non-existing one (or even from a path that is a plain file).
[ $(ls -A your/dir) ]
This trick was inspired by nixCraft's article posted in 2007. Andrew Taylor answered in 2008 and gr8can8dian in 2011.
if [ "$(ls -A your/dir)" ]
then
echo "contains files"
else
echo "empty (or does not exist or is a file)"
fi
or the one-line bashism version:
[[ "$(ls -A your/dir)" ]] && echo "contains files" || echo "empty or ..."
Note: ls returns $?=2 when the directory does not exist, but it makes no difference between a file and an empty directory.

How about rmdir *? That command will fail on non-empty directories.

This recursive function would seem to do the trick:
# Bash
findempty() {
find "${1:-.}" -mindepth 1 -maxdepth 1 -type d | while read -r dir
do
if [[ -z "$(find "$dir" -mindepth 1 -type f)" ]] >/dev/null
then
findempty "$dir"
echo "$dir"
fi
done
}
Given this example directory structure:
.
|-- dir1/
|-- dir2/
| `-- dirB/
|-- dir3/
| `-- dirC/
| `-- file5
|-- dir4/
| |-- dirD/
| `-- file4
`-- dir5/
`-- dirE/
`-- dir_V/
The result of running that function would be:
./dir1
./dir5/dirE/dir_V
./dir5/dirE
./dir5
./dir2/dirB
./dir2
which misses /dir4/dirD. If you move the recursive call findempty "$dir" after the fi, the function will include that directory in its results.

The following command returns 1 if a directory is empty (or does not exist) and 0 otherwise (so it is possible to invert the return code with ! in a shell script):
find "$dir" -type d -prune -empty -exec false {} +

I created a simple structure as follows:
test/
test/test2/
test/test2/test2.2/
test/test3/
test/test3/file
The test/test3/file contains some junk text.
Issuing find test -empty returns "test/test2/test2.2" as the only empty directory.

A simple approach would be:
$ [ "$(ls -A /path/to/directory)" ] && echo "not empty" || echo "it's empty"
also,
if [ "$(ls -A /path/to/directory)" ]; then
    echo "it's not empty"
else
    echo "empty directory"
fi
find . -type d -ls | awk '($2==0){print $11}'
(In -ls output, field $2 is the allocated block count; an empty directory shows 0 blocks on some filesystems, so this may not work everywhere.)


Bash : Verify empty folder before ls -A

I'm starting to study some sh implementations and I'm running into some trouble when trying to perform actions on files inside certain folders.
Here is the scenario:
I have a list of TXT files inside two different subfolders:
├── Folder A
│   ├── randomFile1.txt
│   └── randomFile2.txt
├── Folder B
│   └── File1.txt
└── Folder C
    └── File2.txt
And depending on the folder a file resides in, I should take a specific action.
Note: the files from Folder A should not be processed.
Basically I tried two different approaches.
The first one:
files_b="$incoming/$origin"/FolderB/*.txt
files_c="$incoming/$origin"/FolderC/*.txt
if [ "$(ls -A $files_b)" ]; then
for file in $files_b
do
#take action
done
else
echo -e "\033[1;33mWarning: No files\033[0m"
fi
if [ "$(ls -A $files_c)" ]; then
for file in $files_c
do
#take action
done
else
echo -e "\033[1;33mWarning: No files\033[0m"
fi
The problem with this one is that when one of the folders (B or C) is empty, the ls -A command throws an error because of the *.txt at the end of the path.
The second :
path="$incoming/$origin"/*.txt
find $path -type f -name "*.txt" | while read txt; do
for file in $txt
do
name=$(basename "$file")
dir=$(basename $(dirname $file))
if [ "$dir" == FolderB]; then
# Do something to files"
elif [ "$dir" == FolderC]; then
# Do something to files"
fi
done
done
The problem with that approach is that I'm also picking up the files from Folder A, which I don't want (it costs performance in the extra if tests), and I don't know how to verify whether a folder is empty using the find command.
Can anyone help me?
Thank you all.
I would write the code like this:
No unquoted parameter expansions
Don't use ls to check if the directory is empty
Use printf instead of echo.
# You cannot safely expand a parameter so that it does file globbing
# but does *not* do word-splitting. Put the glob directly in the loop
# or use an array.
shopt -s nullglob
found=
for file in "$incoming/$origin"/FolderB/*.txt; do
do
found=1
#take action
done
if [ "$found" ]; then
printf "\033[1;33mWarning: No files\033[0m\n"
fi
In the first solution you can simply hide the error messages.
if [ "$(ls -A $files_b 2>/dev/null)" ]; then
In the second solution, start find at the subdirectories instead of the parent directory:
path="$incoming/$origin/FolderA $incoming/$origin/FolderB"
I think using find would be better:
files_b="${incoming}/${origin}/FolderB"
files_c="${incoming}/${origin}/FolderC"
find "$files_b" -name "*.txt" -exec action1 {} \;
find "$files_c" -name "*.txt" -exec action2 {} \;
or even just:
find "${incoming}/${origin}/FolderB" -name "*.txt" -exec action1 {} \;
find "${incoming}/${origin}/FolderC" -name "*.txt" -exec action2 {} \;
Of course you should think about your action, but you can make a function or a separate script which accepts file name(s).

Find all directories that don't contain other directories

Currently:
$ find -type d
./a
./a/sub
./b
./b/sub
./b/sub/dub
./c/sub
./c/bub
I need:
$ find -type d -not -contains -type d
./a/sub
./b/sub/dub
./c/sub
./c/bub
How do I exclude directories that contain other (sub)directories, but are not empty (i.e. contain files)?
You can find the leaf directories that have only 2 links (or fewer) and then check whether each found directory contains some files.
Something like this:
# find leaf directories
find -type d -links -3 -print0 | while IFS= read -r -d '' dir
do
# check if it contains some files
if ls -1qA "$dir" | grep -q .
then
echo "$dir"
fi
done
Or simply:
find -type d -links -3 ! -empty
Note that you may need the find option -noleaf on some filesystems, like CD-ROM or some MS-DOS filesystems. It works without it in WSL2 though.
In the btrfs filesystem the directories always have 1 link so using -links won't work there.
A much slower, but filesystem agnostic, find based version:
prev='///' # some impossible dir
# A depth first find to collect non-empty directories
readarray -d '' dirs < <(find -depth -type d ! -empty -print0)
for dir in "${dirs[#]}"
do
dirterm=$dir'/'
# skip if it matches the previous dir
[[ $dirterm == "${prev:0:${#dirterm}}" ]] && continue
# skip if it has sub directories
[[ $(find "$dir" -mindepth 1 -maxdepth 1 -type d -print -quit) != '' ]] && continue
echo "$dir"
prev=$dir
done # add "| sort" if you want the same order as a "find" without "-depth"
You didn't show us which of these directories do and do not contain files. You specify files, so I'm working on the assumption that you only want directories that have no subdirectories but do have files.
shopt -s dotglob nullglob globstar # customize glob evaluation
for d in **/ # loop directories only
do for s in "${d}"*/ # check subdirs in each
do [[ -h "$s" ]] || continue 2 # skip dirs with subdirs
done
for f in "${d}"* # check for nondirs in each
do echo "$d" # there's something here!
continue 2 # done with this dir, check next
done
done
dotglob includes "hidden" files whose names start with a "dot" (.foo)
nullglob makes no*such return nothing instead of the string 'no*such'.
globstar makes **/ match arbitrary depth - e.g., ./x/, ./x/y/, and ./x/y/z/.
for d in **/ loops over all subdirectories, including subdirectories of subdirectories, though the trailing / means it will only report directories, not files.
for s in "${d}"*/ loops over all the subdirectories of $d if there are any. nullglob means if there are none, the loop won't execute at all. If we see a subdirectory, [[ -h "$s" ]] || continue 2 says if it entered this loop at all, symlinks are ok, but anything else disqualifies $d, so skip up 2 enclosing loops and advance the top level to the next dir.
if it gets this far, there are no invalidating real subdirectories, so we have to confirm there are files of some sort, even if they are just symlinks to other directories. for f in "${d}"* loops through anything else in the directory, since we know there aren't subdirs. It won't even enter the loop if the directory doesn't have something because of the nullglob, so if it goes in at all, anything there is a reason to report the dir (echo "$d") as non-empty. Once that's done, there's no reason to keep checking, so continue 2 again advances the top loop to the next dir to check!
I expected **/ to work, but it fails to get any subdirectories at all on my Windows/Git Bash emulation. **/*/ ignores subdirectories of the current directory, which is why I originally used */ **/*/, but **/ avoids redundancies when run on a proper CentOS VM. Use that.

Move files from several folders to subfolders

I have many folders with files in them, and in each of them I have a subfolder called wvfm. What I want to do is move all the files into their folder's wvfm subfolder.
I tried doing the line below but it is not working
for i in "./*"; mv $i "./wvfm"; done
but that didn't work quite right
Use a for loop (for i in */; do) to iterate over the folders, then save the list of files in each folder into a variable (F=$(ls)).
Then move all your files, excluding your target folder (mv ${F/wvfm/} wvfm), like this:
#!/bin/bash
subdirectory="wvfm"
for i in */; do
cd "$i"
F=$(ls)
mv ${F/$subdirectory/} $subdirectory
cd ../
done
find * -maxdepth 0 -type d | xargs -n 1 -I {} mv {}/* {}/wvfm
should do the trick; quick and dirty. Not a macOS user, but it works in bash.
Explanation:
find * -maxdepth 0 -type d finds all directories at depth 0 (i.e. it does not descend into dirs),
pipe to xargs, using options -n 1, operate on one value at a time, -I replace-str, string replacement (see man xargs).
action command: mv {}/* {}/wvfm substitutes to mv dirA/* dirA/wvfm for each dir match.
You will get an "error", mv: cannot move 'dirA/wvfm' to a subdirectory of itself, 'dirA/wvfm/wvfm', but you can ignore / take advantage of it (quick and dirty).
You could cover all bases with:
for entry in $(find * -maxdepth 0 -type d); do
(cd "$entry"; [[ -d wvfm ]] && \
find * -maxdepth 0 ! -name wvfm -exec mv {} wvfm \;
)
done
find * -maxdepth 0 -type d, again, find only the top-level directories,
in a subshell, change to the input directory, if there's a directory wvfm,
look for all contents except the wvfm directory and -exec the mv command
exiting the subshell leaves you back in the starting (root) directory, ready for next input.

Sorting loop function creates infinite subdirectories

I routinely have to sort large numbers of images in multiple folders into two file types, ".JPG" and ".CR2". I'm fairly new to bash but have created a basic script that will sort through one individual folder successfully and divide these file types into distinct folders.
I'm having problems scaling this up to automatically loop through subdirectories. My current script creates an infinite loop of new subfolders until terminal times out.
How can I use the loop function without having it cycle through new folders?
function f {
cd "$1"
count=`ls -1 *.JPG 2>/dev/null | wc -l`
if [ $count != 0 ]; then
echo true
mkdir JPG; mv *.JPG jpg
else
echo false
fi
count=`ls -1 *.CR2 2>/dev/null | wc -l`
if [ $count != 0 ]; then
echo true
mkdir RAW; mv *.CR2 raw;
else
echo false
fi
for d in * .[!.]* ..?*; do
cd "$1"
test -d "$1/$d" && f "$1/$d"
done
}
f "`pwd`"
I still advise people to use find instead of globbing with * in scripts. Globs may not always work reliably, and can fail and confuse.
First we create directories to move to:
mkdir -p ./jpg ./cr2
Note that -p in mkdir will make mkdir not fail in case the directory already exists.
Use find. Find all files named *.JPG and move each file to jpg :
find . -maxdepth 1 -mindepth 1 -name '*.JPG' -exec mv {} ./jpg \;
# similarly:
find . -maxdepth 1 -mindepth 1 -name '*.CR2' -exec mv {} ./cr2 \;
The -maxdepth 1 -mindepth 1 options are there so that find does not scan the directory recursively (recursive descent is find's default). You can remove them if you want the recursion, and you can add -type f to include files only.
Notes to your script:
Don't parse ls output
You can use find . -mindepth 1 -maxdepth 1 -name '*.JPG' | wc -l to get the number of matching files in a directory instead.
for d in * .[!.]* ..?*; do — I have a vague idea what this is supposed to do: some kind of recursive scan of the directory. But if the directory JPG is inside $(pwd), then you will scan into yourself infinitely and move files into yourself, etc. If the destination folder is outside the current directory, just modify the find commands by removing -maxdepth 1; they will then scan recursively.
Don't use backticks; they are less readable and considered deprecated. Use $( ... ) instead.

bash if then cp as hardlinks

I would like the following to hardlink all files to the destination, except the directories named. The find piece is working, but it won't copy any files.
#!/bin/sh
tag_select=$1
source=$3
dest="/backup/"
{
if [[ "$1" = "backup" ]]; then
find . -mindepth 1 -maxdepth 1 ! -name "dir1" ! -name "dir2" | while read line
do
cp -lr "$3" "$dest"
done
fi
}
Please note, I do not want to use rsync as I would like to create hardlinks in the destination. Thank you in advance!
I guess you know why "$2" doesn't appear anywhere, so we will just presume you are correct. You also understand that every file found will be linked from source (e.g. "$3") to $dest no matter what filenames find discovers, because you make no use of the "$line" variable from your while read line loop. It appears from the question that you want to link all files in source into dest (you must confirm this is your intent). If so, find itself is all you need, e.g.
find source -maxdepth 1 ! -name "dir1" ! -name "dir2" -execdir cp -lr '{}' "$dest" \;
which will find all files (and directories) one level deep and hard-link each of them into dest. If that wasn't your intent, please let me know and I'm happy to help further. Your original post was somewhat an opaque pot of shell stew...
Replace your find command with a simple glob; this also has the benefit of working for any valid file name, not just the ones that don't have newlines in them.
#!/bin/sh
tag_select=$1
source=$3
dest="/backup/"
if [ "$1" = "backup" ]; then
for f in "$source"/*; do
case ${f##*/} in  # match on the basename; $f includes the "$source"/ prefix
dir1|dir2) continue ;;
esac
cp -lr "$f" "$dest"
done
fi
Try this:
#!/bin/sh
tag_select=$1;
source=$2;
dest="/backup/";
if [ "$1" = "backup" ]; then
find "$source" -mindepth 1 -maxdepth 1 ! -name "dir1" ! -name "dir2" -exec cp -lr {} "$dest" \;
fi
your command should be
./code.sh backup source_folder_path
example
./code.sh backup ~/Desktop
Try the code below for only the files in the dir:
find $source -maxdepth 1 -type f -exec sh -c "ln -f \"\$(realpath {})\" \"$dest\$(basename {})\"" \;
You can't hard-link folders.
