I am looking for assistance in creating a bash script that will run several similar commands, sum up the totals and output that total to the screen. I want to run the following commands:
find /var/log/audit -xdev -type f -printf '%i\n' | sort -u | wc -l
find /boot -xdev -type f -printf '%i\n' | sort -u | wc -l
find /home -xdev -type f -printf '%i\n' | sort -u | wc -l
And so on. I have a few others. What I am basically doing is counting up all of the files in each mount point on my system; I then need the script to sum up the output of each command's "wc -l" and print the grand total to the screen. Any help is greatly appreciated.
this should work:
a=$(find /var/log/audit -xdev -type f -printf '%i\n' | sort -u | wc -l)
b=$(find /boot -xdev -type f -printf '%i\n' | sort -u | wc -l)
c=$(find /home -xdev -type f -printf '%i\n' | sort -u | wc -l)
final=$(($a+$b+$c))
echo $final
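If you have more than a handful of mount points, a loop keeps the script short. This is just a sketch; the paths in the array are stand-ins for your real mount points:

```shell
#!/usr/bin/env bash
# Mount points to scan -- replace with your own list.
mounts=(/var/log/audit /boot /home)

total=0
for m in "${mounts[@]}"; do
    # Count unique inodes (hard links counted once) on this filesystem only.
    n=$(find "$m" -xdev -type f -printf '%i\n' | sort -u | wc -l)
    echo "$m: $n"
    total=$((total + n))
done
echo "grand total: $total"
```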
This will work without naming variables; replace the echo commands with your own pipelines:
awk '{sum+=$1} END{print "total: "sum}' < <(echo 4; echo 5; echo 6)
Alternatively, if the individual counts are not required, you can pass more than one path to find:
find path1 path2 path3 ...
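One caveat for the inode-counting pipelines in this question: %i inode numbers are only unique within a single filesystem, so a single sort -u across several mount points may merge unrelated files that happen to share an inode number. A sketch of the combined call:

```shell
# One find, several starting points. Fine for a plain file count, but
# 'sort -u' on inode numbers across different filesystems can undercount,
# since inode numbers are only unique per filesystem.
find /var/log/audit /boot /home -xdev -type f -printf '%i\n' | sort -u | wc -l
```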
This might be a good place for dc
{
for mnt in /var/log/audit /boot /home; do
find "$mnt" -xdev -type f -printf '%i\n' | sort -u | wc -l
done
echo "+"
echo "+"
echo "p"
} | dc
You need one less "+" than your number of mountpoints.
I would redirect each command's output to a file
your_command >> results.txt
and sum them up
awk '{ sum += $1 } END { print sum }' results.txt
I have this script. I would like to print only the non-zero results.
My environment is OS X.
find /PATH/ -type f -exec basename "{}" \; | grep -i "Word" | wc -l
First, here is a much faster find command that will do the same thing:
find /PATH/ -type f -iname '*Word*' | wc -l
Now, you can put this optimized command into an if statement:
if [[ $(find /PATH/ -type f -iname '*Word*' | wc -l) -gt 0 ]]; then
find /PATH/ -type f -iname '*Word*' | wc -l
fi
To run the command just once, save the result into a variable:
count=$(find /PATH/ -type f -iname '*Word*' | wc -l)
if [[ $count -gt 0 ]]; then
echo $count
fi
You can use grep -v to remove output that consists of just zero (with spaces before it, 'cause that's what wc prints). With #joanis' optimization of the search, that gives:
find /PATH/ -type f -iname '*Word*' | wc -l | grep -v '^ *0$'
When you count selected records, you do not have to filter on 0 hits.
This command shows all basenames that appear once or more.
find . -type f -iname '*Word*' -printf "%f\n" | sort | uniq -c
You might want to add | sort -n at the end to see which file name occurs most.
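For example, extending the command above so the most frequent basename prints last (GNU find assumed for -printf):

```shell
# Count duplicate basenames, then order by frequency, rarest first.
find . -type f -iname '*Word*' -printf "%f\n" | sort | uniq -c | sort -n
```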
Maybe you wanted something else: how often the word occurs in different files.
grep -Rci "Word" | grep -v ":0$"
Hi, hoping someone can help. I have some directories on disk and I want to count the number of files in them (as well as the dir size, if possible) and then strip info from the output. So far I have this:
find . -type d -name "*,d" -print0 | xargs -0 -I {} sh -c 'echo -e $(find "{}" | wc -l) "{}"' | sort -n
This gets me all the dir's that match my pattern as well as the number of files - great!
This gives me something like
2 ./bob/sourceimages/psd/dzv_body.psd,d
2 ./bob/sourceimages/psd/dzv_body_nrm.psd,d
2 ./bob/sourceimages/psd/dzv_body_prm.psd,d
2 ./bob/sourceimages/psd/dzv_eyeball.psd,d
2 ./bob/sourceimages/psd/t_zbody.psd,d
2 ./bob/sourceimages/psd/t_gear.psd,d
2 ./bob/sourceimages/psd/t_pupil.psd,d
2 ./bob/sourceimages/z_vehicles_diff.tga,d
2 ./bob/sourceimages/zvehiclesa_diff.tga,d
5 ./bob/sourceimages/zvehicleswheel_diff.jpg,d
From that I would like to filter based on max number of files, so > 4 for example, and I would like to capture the filetype as a variable for each remaining result, e.g. ./bob/sourceimages/zvehicleswheel_diff.jpg,d
I guess I could use awk for this?
Then, finally, I would like to remove all the results from disk. With find I normally just do something like -exec rm -rf {} \; but I'm not clear how it would work here.
Thanks a lot
EDITED
While this is clearly not the answer, these commands get me the info I want in the form I want it. I just need a way to put it all together and not search multiple times, as that's total rubbish.
filetype=$(find . -type d -name "*,d" | awk 'BEGIN { FS = "." }; { print $3 }' | cut -d',' -f1)
filesize=$(find . -type d -name "*,d" -print0 | xargs -0 -I {} sh -c 'du -h "{}"' | awk '{ print $1 }')
filenumbers=$(find . -type d -name "*,d" -print0 | xargs -0 -I {} sh -c 'echo -e $(find "{}" | wc -l)')
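One way to put it all together in a single scan. This is a sketch; the threshold of 4 and the deletion step are assumptions based on the question, and the rm is left commented out so you can check the output first:

```shell
#!/usr/bin/env bash
# Walk each "*,d" directory once, collect its file count and size,
# and act on the ones with more than 4 files.
find . -type d -name "*,d" -print0 |
while IFS= read -r -d '' dir; do
    count=$(find "$dir" -type f | wc -l)
    size=$(du -sh "$dir" | cut -f1)
    if [ "$count" -gt 4 ]; then
        echo "$count $size $dir"
        # Uncomment once the output looks right:
        # rm -rf "$dir"
    fi
done
```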
files_count=$(ls | nl)
For instance:
ls | nl
nl numbers the lines of its input, so the last number gives the file count.
I'm trying to calculate the number of words written in a project. There are a few levels of folders and lots of text files within them.
Can anyone help me find out a quick way to do this?
bash or vim would be good!
Thanks
Use find to scan the dir tree; wc will do the rest:
$ find path -type f -print0 | xargs -0 wc -w | tail -1
The last line gives the totals.
tldr;
$ find . -type f -exec wc -w {} + | awk '/total/{print $1}' | paste -sd+ | bc
Explanation:
The find . -type f -exec wc -w {} + will run wc -w on all the files (recursively) contained by . (the current working directory). find will execute wc as few times as possible but as many times as is necessary to comply with ARG_MAX --- the system command length limit. When the quantity of files (and/or their constituent lengths) exceeds ARG_MAX, then find invokes wc -w more than once, giving multiple total lines:
$ find . -type f -exec wc -w {} + | awk '/total/{print $0}'
8264577 total
654892 total
1109527 total
149522 total
174922 total
181897 total
1229726 total
2305504 total
1196390 total
5509702 total
9886665 total
Isolate these partial sums by printing only the first whitespace-delimited field of each total line:
$ find . -type f -exec wc -w {} + | awk '/total/{print $1}'
8264577
654892
1109527
149522
174922
181897
1229726
2305504
1196390
5509702
9886665
paste the partial sums with a + delimiter to give an infix summation:
$ find . -type f -exec wc -w {} + | awk '/total/{print $1}' | paste -sd+
8264577+654892+1109527+149522+174922+181897+1229726+2305504+1196390+5509702+9886665
Evaluate the infix summation using bc, which supports both infix expressions and arbitrary precision:
$ find . -type f -exec wc -w {} + | awk '/total/{print $1}' | paste -sd+ | bc
30663324
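The paste and bc stages can also be folded into the awk script itself, at the cost of the step-by-step readability above:

```shell
# Sum every "total" line that wc emits across its invocations.
# Caveat: a file whose name ends in "total" would also match this pattern.
find . -type f -exec wc -w {} + | awk '/total$/ {sum += $1} END {print sum}'
```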
References:
https://www.cyberciti.biz/faq/argument-list-too-long-error-solution/
https://www.in-ulm.de/~mascheck/various/argmax/
https://linux.die.net/man/1/find
https://linux.die.net/man/1/wc
https://linux.die.net/man/1/awk
https://linux.die.net/man/1/paste
https://linux.die.net/man/1/bc
You could find and print all the content and pipe to wc:
find path -type f -exec cat {} \; -exec echo \; | wc -w
Note: the -exec echo \; is needed in case a file doesn't end with a newline character, in which case the last word of one file and the first word of the next will not be separated.
Or you could find and wc and use awk to aggregate the counts:
find . -type f -exec wc -w {} \; | awk '{ sum += $1 } END { print sum }'
If there's one thing I've learned from all the bash questions on SO, it's that a filename with a space will mess you up. This script will work even if you have whitespace in the file names.
#!/usr/bin/env bash
shopt -s globstar
count=0
for f in **/*.txt
do
words=$(wc -w "$f" | awk '{print $1}')
count=$(($count + $words))
done
echo $count
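The per-file awk call can be dropped: when wc reads from stdin it prints just the number, with no filename to strip. A variant of the same script (nullglob is an addition here, so the loop is simply skipped when nothing matches):

```shell
#!/usr/bin/env bash
shopt -s globstar nullglob
count=0
for f in **/*.txt; do
    # wc reading stdin prints only the count, no filename
    count=$((count + $(wc -w < "$f")))
done
echo "$count"
```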
Assuming you don't need to recursively count the words and that you want to include all the files in the current directory , you can use a simple approach such as:
wc -w *
10 000292_0
500 000297_0
510 total
If you want to count the words of only the files with a specific extension in the current directory, you could try:
cat *.txt | wc -w
I made the following script to find files based on a 'find' command and then print out the results:
#!/bin/bash
loc_to_look='./'
file_list=$(find $loc_to_look -type f -name "*.txt" -size +5M)
total_size=`du -ch $file_list | tail -1 | cut -f 1`
echo 'total size of all files is: '$total_size
for file in $file_list; do
size_of_file=`du -h $file | cut -f 1`
echo $file" "$size_of_file
done
...which gives me output like:
>>> ./file_01.txt 12.0M
>>> ./file_04.txt 24.0M
>>> ./file_06.txt 6.0M
>>> ./file_02.txt 6.2M
>>> ./file_07.txt 84.0M
>>> ./file_09.txt 55.0M
>>> ./file_10.txt 96.0M
What I would like to do first, though, is sort the list by file size before printing it out. What is the best way to go about doing this?
Easy to do if you grab the file size in bytes; just pipe to sort:
find $loc_to_look -type f -name "*.txt" -size +5M -printf "%f %s\n" | sort -n -k 2
If you wanted to make the file sizes print in MB, you could finally pipe to awk:
find $loc_to_look -type f -printf "%f %s\n" | sort -n -k 2 | awk '{ printf "%s %.1fM\n", $1, $2/1024/1024}'
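If your sort has the -h flag (GNU coreutils), it can order human-readable sizes directly, which sidesteps the bytes-to-MB conversion:

```shell
# du -h prints sizes like 6.2M; sort -h understands those suffixes.
find "$loc_to_look" -type f -name "*.txt" -size +5M -exec du -h {} + | sort -h
```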
I want to get the total count of the number of lines from all the files returned by the following command:
shell> find . -name *.info
All the .info files are nested in sub-directories so I can't simply do:
shell> wc -l *.info
Am sure this should be in any bash users repertoire, but am stuck!
Thanks
wc -l `find . -name '*.info'`
If you just want the total, use
wc -l `find . -name '*.info'` | tail -1
Edit: piping to xargs also works, and hopefully avoids the 'command line too long' error.
find . -name '*.info' | xargs wc -l
You can use xargs like so:
find . -name '*.info' -print0 | xargs -0 cat | wc -l
some googling turns up
find /topleveldirectory/ -type f -exec wc -l {} \; | awk '{total += $1} END{print total}'
which seems to do the trick
#!/bin/bash
# bash 4.0
shopt -s globstar
sum=0
for file in **/*.info
do
if [ -f "$file" ];then
s=$(wc -l< "$file")
sum=$((sum+s))
fi
done
echo "Total: $sum"
find . -name "*.info" -exec wc -l {} \;
Note to self - read the question
find . -name "*.info" -exec cat {} \; | wc -l
# for a speed-up use: find ... -exec ... '{}' + | ...
find . -type f -name "*.info" -exec sed -n '$=' '{}' + | awk '{total += $0} END{print total}'