How do I add numbers from result? - bash

I want to make a bash script that counts how many files there are in specific folders.
Example:
#!/usr/bin/env bash
sum=0
find /home/user/Downloads | wc -l
find /home/user/Documents | wc -l
find /home/user/videos | wc -l
echo "files are $sum "
Downloads folder has 5 files, Documents has 10 files and videos has 10 files.
I want to add all files from above directories and print the number of files.
echo "files are $sum "
Please note that I would like to use "only" the find command, because my script deletes some files; my goal is to know how many files I deleted.

Anything that's piping to wc -l isn't counting how many files you have, it's counting how many newlines are present, and so it'll fail if any of your file names contain newlines (a very real possibility, especially given the specific directories you're searching). You could do this instead using GNU tools:
find \
/home/user/Downloads \
/home/user/Documents \
/home/user/videos \
-print0 |
awk -v RS='\0' 'END{print NR}'
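If you also want to restrict the count to regular files only (which better matches counting deletions), here's a sketch along the same lines, still assuming GNU tools; each file contributes exactly one NUL byte, and wc -c counts those bytes:
find /home/user/Downloads /home/user/Documents /home/user/videos \
-type f -print0 |   # one NUL-terminated record per regular file
tr -dc '\0' |       # delete everything except the NUL separators
wc -c               # count the remaining bytes = number of files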

I'm not sure about this, but you can try this code.
#!/usr/bin/env bash
sum=0
downloads=$(find /home/user/Downloads | wc -l)
((sum += downloads))
documents=$(find /home/user/Documents | wc -l)
((sum += documents))
videos=$(find /home/user/videos | wc -l)
((sum += videos))
echo "files are $sum "

You can save the individual results and sum them, as per FS-GSW's answer.
That's what I would do in Java or C, but shell scripts have their own patterns. Whenever you find yourself with lots of variables or explicit looping, take a step back. That imperative style is not super idiomatic in shell scripts.
Can you pass multiple file names to the same command?
Can that loop become a pipe?
In this case, I would pass all three directories to a single find command. Then you only need to call wc -l once on the combined listing:
find /home/user/Downloads /home/user/Documents /home/user/videos | wc -l
Or using tilde and brace expansion:
find ~user/{Downloads,Documents,videos} | wc -l
And if the list of files is getting long you could store it in an array:
files=(
/home/user/Downloads
/home/user/Documents
/home/user/videos
)
find "${files[#]}" | wc -l

Just use ..
find /home/user/ -type f | wc -l
.. to count the files inside the user directory recursively.
With tree you could also find/count (or whatever else you want to do) hidden files.
e.g. tree -a /home/user/ - the output ends with: XXX directories, XXX files - which then wouldn't apply to your question, though.

Related

file and number of lines matching

I am trying to count the lines in files that match a given pattern. The problem is that it gives me only the number of lines. How can I get the file location or name along with the number of matching lines?
The command I am using now is:
for i in $(find . -name 'foo.txt' | sed 's/\.\///g');
do
grep -l && -c '^>' $i;
done
The output I am expecting is like "file location/name: number of lines matching".
grep can show you the file name if you specify it as a command-line argument. You can use xargs to invoke grep for each batch of filenames. It'll read the names from standard input and use them as command line arguments for grep.
find . | xargs grep -cH '^>'
Using your find command:
find . -name 'foo.txt' | sed 's/\.\///g' | xargs grep -cH '^>'
You can capture the number of matching lines in a variable and test it:
n=$(grep -c '^>' "$i")
(( n > 0 )) && echo "$i:$n"
Actually, you don't even need the test: grep exits unsuccessfully if no matches are found, so
n=$(grep -c '^>' "$i") && echo "$i:$n"
Actually, that's too much work. With GNU grep at least, use the -H option. A demo with a file I have lying around:
$ grep -c zero note.xml
16
$ grep -Hc zero note.xml
note.xml:16

using "wc -l" on script counts more than using on terminal

I'm making a bash script and it's like this:
#!/bin/bash
DNUM=$(ls -lAR / 2> /dev/null | grep '^d' | wc -l)
echo there are $DNUM directories.
the problem is, that when I run this line directly on the terminal:
ls -lAR / 2> /dev/null | grep '^d' | wc -l
I get a number.
But when I run the script it displays me a greater number, like 30 to 50 more.
What is the problem here?
Why is the "wc" command counting more lines when running it from a script?
You may have different directory roots for the two runs. Instead of ls, to find only the directories you can use this:
find parent_directory -type d
and pipe to wc -l to count.
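For example, to count every directory under /home/user (a sketch; substitute your own root):
find /home/user -type d | wc -l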
The /proc directory contains per-process entries that are treated as directories and change from run to run. To exclude it from the count, use
find / -path /proc -prune -o -type d -print | wc -l
To find the differences in your exact case I would suggest to run
#!/bin/bash
for r in 1 2; do
ls -lAR / 2> /dev/null | grep '^d' > out${r}.txt
done
diff -Nura out1.txt out2.txt
rm -f out1.txt out2.txt
But as most people already said, it would make sense to exclude directories like /sys, /proc, etc.
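For example, a sketch that prunes both /proc and /sys before counting (add others like /run as needed):
find / \( -path /proc -o -path /sys \) -prune -o -type d -print | wc -l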

Bash: count occurrences of a char in all files of folder

I need to count occurrences of a char in all files of folder. I am using this script:
TEMPFILE=/tmp/1.tmp
echo 0 > $TEMPFILE
y=0
cat $TEMPFILE
for file in `find -name "*.*"`
do
grep -o c $file | y=$(cat $TEMPFILE)+$(wc -l);
echo $y > $TEMPFILE
done
echo $(cat $TEMPFILE)
But the value of y is always 0. Why?
One possibility to count the number of a given character in a file is to delete all the other characters and count the remaining ones. tr is a good choice for this:
tr -cd X < file
will only output the characters X from file file. Then to count the number of X in file file:
tr -cd X < file | wc -m
For several files, using find and no external arithmetic:
find . -type f -exec cat {} + | tr -cd X | wc -m
The trick here is to have find spit out the content of all files with -exec cat {} + and then do the filtering with tr and the counting with wc.
Edit: Misunderstood the question :)
Try this, it should work:
find /etc/ -type f -print0 | xargs -0 grep -o c | wc -l
PS: replace /etc/ with whatever folder you like.
But the value of y is always 0. Why?
Whenever you connect multiple commands using pipes |, each command is run in a subshell, meaning that it gets a separate copy of the execution environment (variables, shell functions, working directory, and so on). So when you assign something to y inside one of the commands in a pipeline, you're actually assigning something to a copy of y — a separate variable named y that the surrounding script can never see.
There are some other problems with your script as well, but that's the main one.
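A minimal demonstration of that subshell behavior, assuming bash's default configuration (no shopt -s lastpipe):
y=0
echo hello | { y=42; }  # this assignment happens in a subshell
echo "$y"               # prints 0: the parent shell's y is unchanged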
To be honest, it's probably simplest to dispense with the variable y, as well as the temporary file, and just use grep's --recursive flag to have it search the whole directory for you. Then all you need to do is pipe its output to wc -l to count the occurrences it finds. Your whole script can be written as:
grep -o --recursive -h c . | wc -l

Counting the contents of a directory

So I know how I would approach counting the number of files in a directory - I would use a for filename in * loop and then test the file names to fit my purpose - but I'm having trouble figuring out how to loop through a directory and then count how many (sub)directories are in it.
Could anyone point me in the right direction?
You can test if it's a directory by using -d.
You can use find: find . -mindepth 1 -maxdepth 1 -type d
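Pipe it to wc -l to get the count directly:
find . -mindepth 1 -maxdepth 1 -type d | wc -l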
((n=0))
for fn in *
do
[[ -d "${fn}" ]] && ((n=1+${n}))
done
Keep a counter and only increment it for directories...
What are you trying to do? Take a look at the wc command. Specifically wc -l which counts the number of lines in the output. You can use a whole array of commands that generate output and then pipe that to the wc -l. Be careful of commands that add headers and footers to the files (like ls -l).
Here are some examples:
This will count all files and directories that don't start with .:
$ ls | wc -l
It's the same as the for loop you had in your question.
This will count all files and directories including those hidden ones. Note the ls -A instead of ls -a. The first won't list . and .. as files while the second will:
$ ls -A | wc -l
This will count all files and directories in the entire directory tree
$ find . | wc -l
This will only count the directories in the whole directory tree
$ find . -type d| wc -l
This will count all the files in the whole directory tree
$ find . -type f | wc -l
This will limit you to the number of directories in the current directory
$ find . -mindepth 1 -maxdepth 1 -type d | wc -l
And, you can use this to assign it to a variable:
$ num_of_files=$(find . -type f | wc -l)
Here is how to count directories or do stuff with directory names.
#!/bin/bash
old_IFS=$IFS
IFS=$'\n'
array=($(ls -F /foo/bar/ | grep '/$')) # this creates an array named "array" that holds
IFS=$old_IFS # all the directory names located in /foo/bar/
echo ${#array[@]} # this will give you the number of directories in /foo/bar/
for ((i=0; i<${#array[@]}; i++))
do
echo ${array[$i]} # this will output a list of all the directories
done
alternatively you could:
ls -F /foo/bar/ | grep '/$' | cat > directorynames.txt
and then count the number of lines. Or you could get rid of the cat and just put the above in a for loop that counts up for every newline character.
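Alternatively, grep can do the counting itself with -c, which avoids both the cat and the temporary file; a sketch using the same hypothetical /foo/bar/ directory:
ls -F /foo/bar/ | grep -c '/$'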

Delete all but the most recent X files in bash

Is there a simple way, in a pretty standard UNIX environment with bash, to run a command to delete all but the most recent X files from a directory?
To give a bit more of a concrete example, imagine some cron job writing out a file (say, a log file or a tar-ed up backup) to a directory every hour. I'd like a way to have another cron job running which would remove the oldest files in that directory until there are fewer than, say, 5.
And just to be clear: if there's only one file present, it should never be deleted.
The problems with the existing answers:
inability to handle filenames with embedded spaces or newlines.
in the case of solutions that invoke rm directly on an unquoted command substitution (rm `...`), there's an added risk of unintended globbing.
inability to distinguish between files and directories (i.e., if directories happened to be among the 5 most recently modified filesystem items, you'd effectively retain fewer than 5 files, and applying rm to directories will fail).
wnoise's answer addresses these issues, but the solution is GNU-specific (and quite complex).
Here's a pragmatic, POSIX-compliant solution that comes with only one caveat: it cannot handle filenames with embedded newlines - but I don't consider that a real-world concern for most people.
For the record, here's the explanation for why it's generally not a good idea to parse ls output: http://mywiki.wooledge.org/ParsingLs
ls -tp | grep -v '/$' | tail -n +6 | xargs -I {} rm -- {}
Note: This command operates in the current directory; to target a directory explicitly, use a subshell ((...)) with cd:
(cd /path/to && ls -tp | grep -v '/$' | tail -n +6 | xargs -I {} rm -- {})
The same applies analogously to the commands below.
The above is inefficient, because xargs has to invoke rm separately for each filename.
However, your platform's specific xargs implementation may allow you to solve this problem:
A solution that works with GNU xargs is to use -d '\n', which makes xargs consider each input line a separate argument, yet passes as many arguments as will fit on a command line at once:
ls -tp | grep -v '/$' | tail -n +6 | xargs -d '\n' -r rm --
Note: Option -r (--no-run-if-empty) ensures that rm is not invoked if there's no input.
A solution that works with both GNU xargs and BSD xargs (including on macOS) - though technically still not POSIX-compliant - is to use -0 to handle NUL-separated input, after first translating newlines to NUL (0x0) chars., which also passes (typically) all filenames at once:
ls -tp | grep -v '/$' | tail -n +6 | tr '\n' '\0' | xargs -0 rm --
Explanation:
ls -tp prints the names of filesystem items sorted by how recently they were modified, in descending order (most recently modified items first) (-t), with directories printed with a trailing / to mark them as such (-p).
Note: It is the fact that ls -tp always outputs file / directory names only, not full paths, that necessitates the subshell approach mentioned above for targeting a directory other than the current one ((cd /path/to && ls -tp ...)).
grep -v '/$' then weeds out directories from the resulting listing, by omitting (-v) lines that have a trailing / (/$).
Caveat: Since a symlink that points to a directory is technically not itself a directory, such symlinks will not be excluded.
tail -n +6 skips the first 5 entries in the listing, in effect returning all but the 5 most recently modified files, if any.
Note that in order to exclude N files, N+1 must be passed to tail -n +.
xargs -I {} rm -- {} (and its variations) then invokes rm on all these files; if there are no matches at all, xargs won't do anything.
xargs -I {} rm -- {} defines placeholder {} that represents each input line as a whole, so rm is then invoked once for each input line, but with filenames with embedded spaces handled correctly.
-- in all cases ensures that any filenames that happen to start with - aren't mistaken for options by rm.
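For convenience, here's a sketch that parameterizes the number of files to keep (n is a variable introduced for illustration), using the N+1 rule for tail noted above:
n=5  # number of newest files to keep
ls -tp | grep -v '/$' | tail -n +$((n + 1)) | xargs -I {} rm -- {}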
A variation on the original problem, in case the matching files need to be processed individually or collected in a shell array:
# One by one, in a shell loop (POSIX-compliant):
ls -tp | grep -v '/$' | tail -n +6 | while IFS= read -r f; do echo "$f"; done
# One by one, but using a Bash process substitution (<(...),
# so that the variables inside the `while` loop remain in scope:
while IFS= read -r f; do echo "$f"; done < <(ls -tp | grep -v '/$' | tail -n +6)
# Collecting the matches in a Bash *array*:
IFS=$'\n' read -d '' -ra files < <(ls -tp | grep -v '/$' | tail -n +6)
printf '%s\n' "${files[#]}" # print array elements
Remove all but 5 (or whatever number) of the most recent files in a directory.
rm `ls -t | awk 'NR>5'`
(ls -t|head -n 5;ls)|sort|uniq -u|xargs rm
This version supports names with spaces:
(ls -t|head -n 5;ls)|sort|uniq -u|sed -e 's,.*,"&",g'|xargs rm
Simpler variant of thelsdj's answer:
ls -tr | head -n -5 | xargs --no-run-if-empty rm
ls -tr displays all the files, oldest first (-t newest first, -r reverse).
head -n -5 displays all but the 5 last lines (i.e. the 5 newest files).
xargs rm calls rm for each selected file.
find . -maxdepth 1 -type f -printf '%T@ %p\0' | sort -r -z -n | awk 'BEGIN { RS="\0"; ORS="\0"; FS="" } NR > 5 { sub("^[0-9]*(\\.[0-9]*)? ", ""); print }' | xargs -0 rm -f
Requires GNU find for -printf, and GNU sort for -z, and GNU awk for "\0", and GNU xargs for -0, but handles files with embedded newlines or spaces.
All these answers fail when there are directories in the current directory. Here's something that works:
find . -maxdepth 1 -type f | xargs -x ls -t | awk 'NR>5' | xargs -L1 rm
This:
works when there are directories in the current directory
tries to remove each file even if the previous one couldn't be removed (due to permissions, etc.)
fails safe when the number of files in the current directory is excessive and xargs would normally screw you over (the -x)
doesn't cater for spaces in filenames (perhaps you're using the wrong OS?)
ls -tQ | tail -n+4 | xargs rm
List filenames by modification time, quoting each filename. Exclude first 3 (3 most recent). Remove remaining.
EDIT after helpful comment from mklement0 (thanks!): corrected -n+3 argument, and note this will not work as expected if filenames contain newlines and/or the directory contains subdirectories.
Ignoring newlines is ignoring security and good coding. wnoise had the only good answer. Here is a variation on his that puts the filenames in an array $x
while IFS= read -rd ''; do
x+=("${REPLY#* }");
done < <(find . -maxdepth 1 -printf '%T@ %p\0' | sort -r -z -n )
For Linux (GNU tools), an efficient & robust way to keep the n newest files in the current directory while removing the rest:
n=5
find . -maxdepth 1 -type f -printf '%T@ %p\0' |
sort -z -nrt ' ' -k1,1 |
sed -z -e "1,${n}d" -e 's/[^ ]* //' |
xargs -0r rm -f
For BSD, find doesn't have the -printf predicate, stat can't output NULL bytes, and sed + awk can't handle NULL-delimited records.
Here's a solution that doesn't support newlines in paths but that safeguards against them by filtering them out:
#!/bin/bash
n=5
find . -maxdepth 1 -type f ! -path $'*\n*' -exec stat -f '%.9Fm %N' {} + |
sort -nrt ' ' -k1,1 |
awk -v n="$n" -F'^[^ ]* ' 'NR > n {printf "%s%c", $2, 0}' |
xargs -0 rm -f
note: I'm using bash because of the $'\n' notation. For sh you can define a variable containing a literal newline and use it instead.
Solution for UNIX & Linux (inspired by AIX/HP-UX/SunOS/BSD/Linux ls -b):
Some platforms don't provide find -printf, nor stat, nor support NUL-delimited records with stat/sort/awk/sed/xargs. That's why using perl is probably the most portable way to tackle the problem, because it is available by default in almost every OS.
I could have written the whole thing in perl but I didn't. I only use it for substituting stat and for encoding-decoding-escaping the filenames. The core logic is the same as the previous solutions and is implemented with POSIX tools.
note: perl's default stat has a resolution of a second, but starting from perl-5.8.9 you can get sub-second resolution with the stat function of the module Time::HiRes (when both the OS and the filesystem support it). That's what I'm using here; if your perl doesn't provide it then you can remove the -MTime::HiRes=stat from the command line.
n=5
find . '(' -name '.' -o -prune ')' -type f -exec \
perl -MTime::HiRes=stat -le '
foreach (@ARGV) {
@st = stat($_);
if ( @st > 0 ) {
s/([\\\n])/sprintf( "\\%03o", ord($1) )/ge;
print sprintf( "%.9f %s", $st[9], $_ );
}
else { print STDERR "stat: $_: $!"; }
}
' {} + |
sort -nrt ' ' -k1,1 |
sed -e "1,${n}d" -e 's/[^ ]* //' |
perl -l -ne '
s/\\([0-7]{3})/chr(oct($1))/ge;
s/(["\n])/"\\$1"/g;
print "\"$_\"";
' |
xargs -E '' sh -c '[ "$#" -gt 0 ] && rm -f "$#"' sh
Explanations:
For each file found, the first perl gets the modification time and outputs it along with the encoded filename (newline and backslash characters are replaced with the literals \012 and \134 respectively).
Now each timestamp/filename pair is guaranteed to be single-line, so POSIX sort and sed can safely work with this stream.
The second perl decodes the filenames and escapes them for POSIX xargs.
Lastly, xargs calls rm for deleting the files. The sh command is a trick that prevents xargs from running rm when there are no files to delete.
I realize this is an old thread, but maybe someone will benefit from this. This command will find files in the current directory :
for F in $(find . -maxdepth 1 -type f -name "*_srv_logs_*.tar.gz" -printf '%T@ %p\n' | sort -r -n | tail -n+5 | awk '{ print $2; }'); do rm $F; done
This is a little more robust than some of the previous answers, as it allows you to limit your search domain to files matching expressions. First, find files matching whatever conditions you want, and print those files with the timestamps next to them:
find . -maxdepth 1 -type f -name "*_srv_logs_*.tar.gz" -printf '%T@ %p\n'
Next, sort them by the timestamps:
sort -r -n
Then, knock off the 4 most recent files from the list:
tail -n+5
Grab the 2nd column (the filename, not the timestamp):
awk '{ print $2; }'
And then wrap that whole thing up into a for statement:
for F in $(); do rm $F; done
This may be a more verbose command, but I had much better luck being able to target conditional files and execute more complex commands against them.
If the filenames don't have spaces, this will work:
ls -C1 -t| awk 'NR>5'|xargs rm
If the filenames do have spaces, something like
ls -C1 -t | awk 'NR>5' | sed -e "s/^/rm '/" -e "s/$/'/" | sh
Basic logic:
get a listing of the files in time order, one column
get all but the first 5 (n=5 for this example)
first version: send those to rm
second version: gen a script that will remove them properly
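Note that the quote-wrapping in the second version still breaks if a filename itself contains a single quote. If GNU xargs is available, -d '\n' (mentioned in an earlier answer) avoids generating a script at all; a sketch:
ls -C1 -t | awk 'NR>5' | xargs -d '\n' -r rm --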
With zsh
Assuming you don't care about present directories and you will not have more than 999 files (choose a bigger number if you want, or create a while loop).
[ 6 -le `ls *(.)|wc -l` ] && rm *(.om[6,999])
In *(.om[6,999]), the . means plain files, o sorts the matches by the following qualifier (with m, by date of modification, newest first; use a for access time or c for inode change), and [6,999] chooses a range of files, so it doesn't rm the first 5 (the 5 newest).
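If you'd rather not hard-code the 999 upper bound, zsh glob-qualifier subscripts also accept negative indices counting from the end; a sketch that keeps the 5 newest plain files:
[ 6 -le `ls *(.)|wc -l` ] && rm -- *(.om[6,-1])  # -1 means the last match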
Adaptation of @mklement0's excellent answer with some parameters and without needing to navigate to the folder containing the files to be deleted...
TARGET_FOLDER="/my/folder/path"
FILES_KEEP=5
ls -tp "$TARGET_FOLDER"**/* | grep -v '/$' | tail -n +$((FILES_KEEP+1)) | xargs -d '\n' -r rm --
[Ref(s).: https://stackoverflow.com/a/3572628/3223785 ]
Thanks! 😉
Found an interesting command in Sed One-Liners - Delete last 3 lines - and found it perfect for another way to skin the cat (okay, not really), but here's the idea:
#!/bin/bash
# in the sed command below, change the 2 to the number of files you wish to retain
cd /opt/depot
ls -1 MyMintFiles*.zip > BigList
sed -n -e :a -e '1,2!{P;N;D;};N;ba' BigList > DeList
for i in `cat DeList`
do
echo "Deleted $i"
rm -f $i
#echo "File(s) gonzo "
#read junk
done
exit 0
Removes all but the 10 latest (most recent) files:
ls -t1r | head -n $(echo $(ls -1 | wc -l) - 10 | bc) | xargs rm
If there are fewer than 10 files, no file is removed, but you will get the error:
head: illegal line count -- 0
I needed an elegant solution for busybox (on a router); all the xargs and array solutions were useless to me - no such commands are available there. find with -mtime is not the proper answer, as we are talking about 10 items and not necessarily 10 days. Espo's answer was the shortest and cleanest and likely the most universal one.
Errors with spaces and with no files to delete are both simply solved the standard way:
rm "$(ls -td *.tar | awk 'NR>7')" 2>&-
A bit more educational version: we can do it all if we use awk differently. Normally, I use this method to pass (return) variables from awk to the shell. As we read all the time that this cannot be done, I beg to differ: here is the method.
Example for .tar files with no problem regarding the spaces in the filename. To test, replace "rm" with the "ls".
eval $(ls -td *.tar | awk 'NR>7 { print "rm \"" $0 "\""}')
Explanation:
ls -td *.tar lists all .tar files sorted by the time. To apply to all the files in the current folder, remove the "d *.tar" part
awk 'NR>7... skips the first 7 lines
print "rm \"" $0 "\"" constructs a line: rm "file name"
eval executes it
Since we are using rm, I would not use the above command in a script! Wiser usage is:
(cd /FolderToDeleteWithin && eval $(ls -td *.tar | awk 'NR>7 { print "rm \"" $0 "\""}'))
Using the ls -t command will not do any harm on such silly examples as: touch 'foo " bar' and touch 'hello * world'. Not that we ever create files with such names in real life!
Sidenote. If we wanted to pass a variable to the sh this way, we would simply modify the print (simple form, no spaces tolerated):
print "VarName="$1
to set the variable VarName to the value of $1. Multiple variables can be created in one go. This VarName becomes a normal sh variable and can be normally used in a script or shell afterwards. So, to create variables with awk and give them back to the shell:
eval $(ls -td *.tar | awk 'NR>7 { print "VarName=\""$1"\"" }'); echo "$VarName"
leaveCount=5
fileCount=$(ls -1 *.log | wc -l)
tailCount=$((fileCount - leaveCount))
# avoid negative tail argument
(( tailCount < 0 )) && tailCount=0
ls -t *.log | tail -n "$tailCount" | xargs rm -f
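Alternatively, the count arithmetic can be skipped by letting tail do the slicing; a sketch assuming GNU xargs (for -d and -r), with the same caveat about newlines in names:
ls -t *.log | tail -n +$((leaveCount + 1)) | xargs -d '\n' -r rm -f --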
I made this into a bash shell script. Usage: keep NUM DIR where NUM is the number of files to keep and DIR is the directory to scrub.
#!/bin/bash
# Keep last N files by date.
# Usage: keep NUMBER DIRECTORY
echo ""
if [ $# -lt 2 ]; then
echo "Usage: $0 NUMFILES DIR"
echo "Keep last N newest files."
exit 1
fi
if [ ! -e "$2" ]; then
echo "ERROR: directory '$2' does not exist"
exit 1
fi
if [ ! -d "$2" ]; then
echo "ERROR: '$2' is not a directory"
exit 1
fi
pushd "$2" > /dev/null
ls -tp | grep -v '/$' | tail -n +"$(( $1 + 1 ))" | xargs -I {} rm -- {}
popd > /dev/null
echo "Done. Kept $1 most recent files in $2."
ls "$2" | wc -l
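Hypothetical usage (the path is made up): keep the 5 newest files in /var/log/myapp:
./keep 5 /var/log/myapp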
Modified version of @Fabien's answer if you want to specify a path. Useful if you're running the script elsewhere.
ls -tr /path/foo/ | head -n -5 | xargs -I% --no-run-if-empty rm /path/foo/%
