Counting the total number of files and directories in a provided folder, including subdirectories and their files - shell

I want to count all the files and directories in a provided folder, including files and directories in any subdirectory. I have written a script which accurately counts the number of files and directories, but it does not handle the subdirectories. Any ideas?
I want to do this without using the find command.
#!/bin/bash
givendir=$1
cd "$givendir" || exit
file=0
directories=0
for d in *
do
    if [ -d "$d" ]; then
        directories=$((directories+1))
    else
        file=$((file+1))
    fi
done
echo "Number of directories :" $directories
echo "Number of files :" $file

Use find:
echo "Number of directories: $(find "$1" -type d | wc -l)"
echo "Number of files/symlinks/sockets: $(find "$1" ! -type d | wc -l)"
Using plain shell and recursion:
#!/bin/bash
countdir() {
    cd "$1"
    dirs=1
    files=0
    for f in *
    do
        if [[ -d $f ]]
        then
            read subdirs subfiles <<< "$(countdir "$f")"
            (( dirs += subdirs, files += subfiles ))
        else
            (( files++ ))
        fi
    done
    echo "$dirs $files"
}

shopt -s dotglob nullglob
read dirs files <<< "$(countdir "$1")"
echo "There are $dirs dirs and $files files"

find "$1" -type f | wc -l will give you the files, find "$1" -type d | wc -l the directories
My quick-and-dirty shellscript would read
#!/bin/bash
test -d "$1" || exit
files=0
# Start with 1 to count the starting dir (as find does), else with 0
directories=1
function docount () {
    for d in "$1"/*; do
        if [ -d "$d" ]; then
            directories=$((directories+1))
            docount "$d"
        else
            files=$((files+1))
        fi
    done
}
docount "$1"
echo "Number of directories :" $directories
echo "Number of files :" $files
But mind it: on my build folder for a project, there were quite some differences:
find: 6430 dirs, 74377 non-dirs
my script: 6032 dirs, 71564 non-dirs
@thatotherguy's script: 6794 dirs, 76862 non-dirs
I assume that has to do with the legions of links, hidden files, etc., but I am too lazy to investigate: find is the tool of choice.

Here are some one-line commands that work without find:
Number of directories: ls -Rl ./ | grep ":$" | wc -l
Number of files: ls -Rl ./ | grep "[0-9]:[0-9]" | wc -l
Explanation:
ls -Rl lists all files and directories recursively, one line each.
grep ":$" finds just the results whose last character is ':'. These are all of the directory names.
grep "[0-9]:[0-9]" matches on the HH:MM part of the timestamp. The timestamp only shows up on file, not directories. If your timestamp format is different then you will need to pick a different grep.
wc -l counts the number of lines that matched from the grep.
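If you would rather not depend on the timestamp format, a minimal variant of the same idea (assuming standard ls -l long-listing output, where regular-file lines start with '-') is:
# count directories: each recursive section header ends with ':'
ls -Rl ./ | grep -c ":$"
# count regular files: long-listing lines for regular files start with '-'
ls -Rl ./ | grep -c "^-"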

Related

Separate folders into subfolders according to numbering in bash

I have the following directory tree:
1_loc
2_buzdfg
4_foodga
5_bardfg
6_loc
8_buzass
9_foossd
12_bardaf
There may be numbers missing in the folder ordering.
I want to separate these folders into subfolders according to their numbers, so that all folders with a number smaller than 6 (before the second _loc folder) would go to folder1 and all folders with a number equal to or greater than 6 would go to folder2.
I can solve the problem very easily using the mouse, of course, but I wanted a suggestion of how to do this automatically from the terminal.
Any ideas?
while read -r line; do
    # Regex match the start of the directory name for a digit between 1 and 5
    # followed by '_' - change this as you'd please to any regex
    FOLDERNUMBER=""
    [[ "$line" =~ ^\./[1-5]_ ]] && FOLDERNUMBER="1" || FOLDERNUMBER="2"
    # So FOLDERPATH="./folder1" if FOLDERNUMBER=1
    FOLDERPATH="./folder$FOLDERNUMBER"
    # Check the folder exists; if not, create it
    [[ ! -d "$FOLDERPATH" ]] && mkdir "$FOLDERPATH"
    # Finally, move the directory to FOLDERPATH
    mv "$line" "$FOLDERPATH/$(basename "$line")"
done < <(find . -mindepth 1 -maxdepth 1 -type d -name "[0-9]*_*")
# This iterates through each line of the command in the brackets - in this case,
# each directory path from the `find` command.
I think the solution is to loop through the files and check the number before the first _.
Firstly, let's check how to get the number before _:
$ d="1_loc_b"
$ echo "${d%%_*}"
1
OK, so this works. Then, let's loop:
for file in *
do
    echo "$file"
    (( ${file%%_*} > 5 )) && echo "moving to dir2/" || echo "moving to dir1/"
done
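Once the output looks right, a minimal sketch that actually moves the folders (assuming dir1 and dir2 already exist and every matching name starts with its number) would be:
for file in *_*
do
    # numbers greater than 5 (i.e. 6 and up) go to dir2, the rest to dir1
    (( ${file%%_*} > 5 )) && mv "$file" dir2/ || mv "$file" dir1/
done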
Supposing folder1 and folder2 exist in the same directory, I would do it like this:
for d in *_*; do  # to avoid folder1 and folder2
    # check if the first field separated by _ is less than 6
    if (( `echo $d | cut -d"_" -f1` < 6 )); then
        mv "$d" folder1/
    else
        mv "$d" folder2/
    fi
done
(more about cut)
You can cd to the directory and run these simple commands:
mv {1,2,3,4,5}_* folder1/
mv {6,7,8,9,12}_* folder2/
This assumes no other files/directories start with these number prefixes.
Another pure bash, parameter-expansion solution:
#!/bin/bash
# 'find' returns folders having a '_' in their names; the -print0 flag
# preserves special characters in names.
# The folders are named like './1_folder', './2_folder'; parameter expansion
# strips everything but the leading number.
# '-v' flag in 'mv' for verbose action
while IFS= read -r -d '' folder; do
    folderName="${folder%%_*}"     # Strip everything from the first '_' onwards
    finalName="${folderName##*/}"  # Strip everything up to the last '/'
    (( finalName > 5 )) && mv -v "$folder" folder2 || mv -v "$folder" folder1
done < <(find . -maxdepth 1 -mindepth 1 -name "*_*" -type d -print0)
You can create a script with the following code, and when you run it, the folders will be moved as desired.
#!/bin/bash
# Separate the folders into 2 folders.
# This is a generic solution for any folder that starts with a number.
for file in *
do
    prefix=`echo $file | awk 'BEGIN{FS="_"};{print $1}'`
    # skip entries whose prefix is not a number
    if [[ $prefix != ?(-)+([0-9]) ]]
    then continue
    fi
    if [ $prefix -le 5 ]
    then mv "$file" folder1
    elif [ $prefix -ge 6 ]
    then mv "$file" folder2
    fi
done

How to locate the directory where the sum of the number of lines of regular file is greatest (in bash)

Hi, I'm new to Unix and bash and I'd like to ask how I can do this:
The specified directories are given as arguments. Locate the directory
where the sum of the number of lines of the regular files is greatest.
Browse all specified directories and their subdirectories. The sums
count only files that are directly in the directory.
I tried something, but it's not working properly.
while [ $# -ne 0 ]; do
    case "$1" in
        -h) show_help ;;
        -*) echo "Error: Wrong arguments" 1>&2; exit 1 ;;
        *) directories=("$@"); break ;;
    esac
    shift
done
IFS='
'
amount=0
for direct in "${directories[@]}"; do
    for subdirect in `find $direct -type d`; do
        temp=`find "$subdirect" -type f -exec cat {} \; | wc -l | tr -s " "`
        if [ $amount -lt $temp ]; then
            amount=$temp
            subdirect2=$subdirect
        fi
    done
    echo Output: "'"$subdirect2 $amount"'"
done
The problem is here: when I use as argument this directory (just an example),
/home/usr/first, and it contains these files:
/home/usr/first/tmp/first.txt (50 lines)
/home/usr/first/tmp/second.txt (30 lines)
/home/usr/first/tmp1/one.txt (20 lines)
it gives me as output /home/usr/first/tmp1 100, and this is wrong; it should be /home/usr/first/tmp 80.
I'd like to scan all directories and all their subdirectories in depth. Also, if multiple directories meet the maximum, it should list them all.
Given your sample files, I'm going to assume you only want to look at the immediate subdirectories, not recurse down several levels:
max=-1
# the trailing slash limits the wildcard to directories only
for dir in */; do
    count=0
    for file in "$dir"/*; do
        [[ -f "$file" ]] && (( count += $(wc -l < "$file") ))
    done
    if (( count > max )); then
        max=$count
        maxdir="$dir"
    fi
done
echo "files in $maxdir have $max lines"
files in tmp/ have 80 lines
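If, as the question also asks, every directory that ties for the maximum should be listed, a minimal sketch extending the same loop (same immediate-subdirectory assumption) could collect the winners in an array:
max=-1
maxdirs=()
for dir in */; do
    count=0
    for file in "$dir"/*; do
        [[ -f "$file" ]] && (( count += $(wc -l < "$file") ))
    done
    if (( count > max )); then
        max=$count
        maxdirs=("$dir")        # new maximum: start a fresh list
    elif (( count == max )); then
        maxdirs+=("$dir")       # tie: remember this directory as well
    fi
done
echo "maximum is $max lines, found in: ${maxdirs[*]}"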
In the spirit of Unix (cough), here's an absolutely disgusting chain of pipes that I personally hate, but it was a lot of fun to construct:
find . -mindepth 1 -maxdepth 1 -type d -exec sh -c 'find "$1" -maxdepth 1 -type f -print0 | wc -l --files0-from=- | tail -1 | { read a _ && echo "$a $1"; }' _ {} \; | sort -nr | head -1
Of course, don't actually use this; use glenn jackman's nice answer instead.
You do get great control over find's unlimited filtering possibilities this way, too. Yay. But use glenn's answer!

Why is my while loop not working?

AIM: To find files with a word count less than 1000 and move them to another folder. Loop until all under-1k files are moved.
STATUS: It will only move one file, then error with "Unable to move file as it doesn't exist." For some reason $INPUT_SMALL doesn't seem to update with the new file name.
What am I doing wrong?
Current Script:
Check for input files already under 1k and move to Split folder
INPUT_SMALL=$( ls -S /folder1/ | grep -i reply | tail -1 )
INPUT_COUNT=$( cat /folder1/$INPUT_SMALL 2>/dev/null | wc -l )
function moveSmallInput() {
    while [[ $INPUT_SMALL != "" ]] && [[ $INPUT_COUNT -le 1003 ]]
    do
        echo "Files smaller than 1k have been found in input folder, these will be moved to the split folder to be processed."
        mv /folder1/$INPUT_SMALL /folder2/
    done
}
I assume you are looking for files that have the word reply somewhere in the path. My solution is:
wc -w $(find /folder1 -type f -path '*reply*') | \
while read wordcount filename
do
    if [[ $wordcount -lt 1003 ]]
    then
        printf "%4d %s\n" $wordcount $filename
        #mv "$filename" /folder2
    fi
done
Run the script once, if the output looks correct, then uncomment the mv command and run it for real this time.
Update
The above solution has trouble with files with embedded spaces. The problem occurs when the find command hands its output to the wc command. After a little bit of thinking, here is my revised solution:
find /folder1 -type f -path '*reply*' | \
while read filename
do
    set $(wc -w "$filename")   # $1 = word count, $2 = filename
    wordcount=$1
    if [[ $wordcount -lt 1003 ]]
    then
        printf "%4d %s\n" $wordcount $filename
        #mv "$filename" /folder2
    fi
done
A somewhat shorter version
#!/bin/bash
find ./folder1 -type f | while read f
do
    (( $(wc -w "$f" | awk '{print $1}') < 1000 )) && cp "$f" folder2
done
I left cp instead of mv for safety reasons. Change it to mv after validating.
If you also want to filter on reply, use @Hai's version of the find command.
Your variables INPUT_SMALL and INPUT_COUNT are not functions; they're just values you assigned once. You either need to move those assignments inside your while loop or turn them into functions and evaluate them each time (rather than just expanding the variable values, as you do now).
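A minimal sketch of the first option, re-evaluating both values at the top of every pass (keeping the question's paths and 1003-line threshold as given):
function moveSmallInput() {
    while true
    do
        # recompute on every pass so the next file is picked up
        INPUT_SMALL=$( ls -S /folder1/ | grep -i reply | tail -1 )
        INPUT_COUNT=$( cat "/folder1/$INPUT_SMALL" 2>/dev/null | wc -l )
        [[ -z $INPUT_SMALL || $INPUT_COUNT -gt 1003 ]] && break
        echo "Moving $INPUT_SMALL to the split folder."
        mv "/folder1/$INPUT_SMALL" /folder2/
    done
}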

Need a bash script to move files to subfolders automatically

I have a folder with 320G of images, and I want to move the images into 5 subfolders randomly (they just need to be spread across the 5 subfolders). But I know nothing about bash scripts. Could someone please help? Thanks!
You could move the files to different directories based on their first letter:
mv [A-Fa-f]* dir1
mv [G-Kg-k]* dir2
mv [^A-Ka-k]* dir3
Here is my take on this. In order to use it, place the script somewhere else (not in your folder) but run it from your folder. If you call your script file rmove.sh, you can place it in, say, ~/scripts/, then cd to your folder and run:
source ~/scripts/rmove.sh
#!/bin/bash
ndirs=$((`find -type d | wc -l` - 1))
for file in *; do
    if [ -f "${file}" ]; then
        rand=`dd if=/dev/random bs=1 count=1 2>/dev/null | hexdump -b | head -n1 | cut -d" " -f2`
        rand=$((rand % ndirs))
        i=0
        for directory in `find -type d`; do
            if [ "${directory}" = . ]; then
                continue
            fi
            if [ $i -eq $rand ]; then
                mv "${file}" "${directory}"
            fi
            i=$((i + 1))
        done
    fi
done
Here's my stab at the problem:
#!/usr/bin/env bash
sdprefix=subdir
dirs=5
# pre-create all possible sub dirs
for n in {1..5} ; do
    mkdir -p "${sdprefix}$n"
done
fcount=$(find . -maxdepth 1 -type f | wc -l)
while IFS= read -r -d $'\0' file ; do
    subdir="${sdprefix}"$(expr \( $RANDOM % $dirs \) + 1)
    mv -f "$file" "$subdir"
done < <(find . -maxdepth 1 -type f -print0)
Works with huge numbers of files
Does not break if a file is not moveable
Creates subdirectories if necessary
Does not break on unusual file names
Relatively cheap
Any scripting language will do so I'll write in Python here:
#!/usr/bin/python
import os
import random

new_paths = ['/path1', '/path2', '/path3', '/path4', '/path5']
image_directory = '/path/to/images'

for file_path in os.listdir(image_directory):
    full_path = os.path.abspath(os.path.join(image_directory, file_path))
    random_subdir = random.choice(new_paths)
    new_path = os.path.abspath(os.path.join(random_subdir, file_path))
    os.rename(full_path, new_path)
mv `ls | while read x; do echo "\`expr $RANDOM % 1000\`:$x"; done \
    | sort -n | sed 's/[0-9]*://' | head -1` ./DIRNAME
Run it in your current image directory; this command selects one file at a time and moves it to ./DIRNAME. Repeat the command until there are no more files to move.
Pay attention that ` is a backquote, not a regular quote character, and that the inner backquotes have to be escaped when nested.
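The same idea is easier to read with $( ) command substitution, which nests without escaping; a minimal sketch (assuming file names without newlines):
mv "$(ls | while read -r x; do echo "$((RANDOM % 1000)):$x"; done \
    | sort -n | sed 's/[0-9]*://' | head -1)" ./DIRNAME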

Bash script to list files not found

I have been looking for a way to list files that do not exist, from a list of files that are required to exist. The files can exist in more than one location. What I have now:
#!/bin/bash
fileslist="$1"
while read fn
do
    if [ ! -f `find . -type f -name $fn` ]
    then
        echo $fn
    fi
done < $fileslist
If a file does not exist, the find command will not print anything and the test does not work. Removing the not and creating an if/then/else condition does not resolve the problem.
How can I print the filenames that are not found from a list of file names?
New script:
#!/bin/bash
fileslist="$1"
foundfiles="~/tmp/tmp`date +%Y%m%d%H%M%S`.txt"
touch $foundfiles
while read fn
do
    `find . -type f -name $fn | sed 's:./.*/::' >> $foundfiles`
done < $fileslist
cat $fileslist $foundfiles | sort | uniq -u
rm $foundfiles
#!/bin/bash
fileslist="$1"
while read fn
do
    FPATH=`find . -type f -name $fn`
    if [ "$FPATH." = "." ]
    then
        echo $fn
    fi
done < $fileslist
You were close!
Here is test.bash:
#!/bin/bash
fn=test.bash
exists=`find . -type f -name $fn`
if [ -n "$exists" ]
then
echo Found it
fi
It sets $exists to the result of the find. The -n test checks whether the result is non-empty.
Try replacing the loop body with [[ -z "$(find . -type f -name $fn)" ]] && echo $fn (note that this code is bound to have problems with filenames containing spaces).
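Dropped into the question's loop, that suggestion would look roughly like this (same caveat about file names containing spaces):
#!/bin/bash
fileslist="$1"
while read fn
do
    # print the name only when find returns nothing for it
    [[ -z "$(find . -type f -name "$fn")" ]] && echo "$fn"
done < "$fileslist"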
More efficient bashism:
diff <(sort $fileslist|uniq) <(find . -type f -printf %f\\n|sort|uniq)
I think you can handle diff output.
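If you only want the names from the list that were not found, a sketch of the same idea using comm instead of diff (so there are no diff markers to parse) could be:
# lines only in the first input (the required list) are the missing files
comm -23 <(sort "$fileslist" | uniq) <(find . -type f -printf '%f\n' | sort | uniq)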
Give this a try:
find -type f -print0 | grep -Fzxvf - requiredfiles.txt
The -print0 and -z protect against filenames which contain newlines. If your utilities don't have these options and your filenames don't contain newlines, you should be OK.
The repeated find to filter one file at a time is very expensive. If your file list is directly compatible with the output from find, run a single find and remove any matches from your list:
find . -type f |
fgrep -vxf - "$1"
If not, maybe you can massage the output from find in the pipeline before the fgrep so that it matches the format in your file; or, conversely, massage the data in your file into a find-compatible format.
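For instance, if your list holds bare file names, one way to massage the find output first (a sketch assuming GNU find's -printf) is:
# print bare file names, then keep only the list entries that match none of them
find . -type f -printf '%f\n' |
fgrep -vxf - "$1"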
I use this script and it works for me
#!/bin/bash
fileslist="$1"
found="Found:"
notfound="Not found:"
len=`cat $1 | wc -l`
n=0
while read fn
do
    # don't worry about this, I use it to display the file list progress
    n=$((n + 1))
    echo -en "\rLooking $(echo "scale=0; $n * 100 / $len" | bc)% "
    if [ $(find / -name $fn | wc -l) -gt 0 ]
    then
        found=$(printf "$found\n\t$fn")
    else
        notfound=$(printf "$notfound\n\t$fn")
    fi
done < $fileslist
printf "\n$found\n$notfound\n"
This line counts the number of matches; if it is greater than 0, the find was a success. It searches everything on the hard drive; you could replace / with . to search just the current directory.
$(find / -name $fn | wc -l) -gt 0
Then I simply run it with the files in the file list separated by newlines:
./search.sh files.list
