I have this script, and I would like to print only the non-zero results.
My environment is OS X.
find /PATH/ -type f -exec basename "{}" | grep -i "Word" | wc -l
First, here is a much faster find command that will do the same thing:
find /PATH/ -type f -iname '*Word*' | wc -l
Now, you can put this optimized command into an if statement:
if [[ $(find /PATH/ -type f -iname '*Word*' | wc -l) -gt 0 ]]; then
find /PATH/ -type f -iname '*Word*' | wc -l
fi
To run the command just once, save the result into a variable:
count=$(find /PATH/ -type f -iname '*Word*' | wc -l)
if [[ $count -gt 0 ]]; then
echo "$count"
fi
You can use grep -v to remove output that consists of just a zero (with leading spaces, because that's what wc prints). With @joanis' optimization of the search, that gives:
find /PATH/ -type f -iname '*Word*' | wc -l | grep -v '^ *0$'
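To see the filter in action on wc-style output, here is a standalone demo with printf standing in for wc:

```shell
# A wc-style zero (with leading spaces) is suppressed; non-zero counts pass through
printf '       0\n' | grep -v '^ *0$'    # prints nothing
printf '       3\n' | grep -v '^ *0$'    # prints "       3"
```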
When you count selected records, you do not have to filter out zero counts. This command shows all basenames that appear once or more:
find . -type f -iname '*Word*' -printf "%f\n" | sort | uniq -c
You might want to add | sort -n at the end to see which filename occurs most often.
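Putting it together, here is a sketch of the same pipeline with the sort appended (note that -printf is a GNU find feature; macOS's BSD find does not have it):

```shell
# Count duplicate basenames, least frequent first (GNU find assumed)
find . -type f -iname '*Word*' -printf "%f\n" | sort | uniq -c | sort -n
```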
Or maybe you wanted something different: how often Word occurs in the different files.
grep -Rci "Word" . | grep -v ":0$"
I use find . -type d -name "Financials" to find all the directories called "Financials" under the current directory.

Since I am on Mac, I can use the following (which I found in another Stack Overflow question) to find the most recently modified file in the current directory:

find . -type f -print0 | xargs -0 stat -f "%m %N" | sort -rn | head -1 | cut -f2- -d" "

What I would like to do is pipe the results of the first command into the second, i.e. find the most recently modified file in each "Financials" directory. Is there a way to do this?
I think you could:
find . -type d -name "Financials" -print0 |
xargs -0 -I{} find {} -type f -print0 |
xargs -0 stat -f "%m %N" | sort -rn | head -1 | cut -f2- -d" "
But if you want it separately for each directory, then why not just loop over them:
find . -type d -name "Financials" |
while IFS= read -r dir; do
echo "newest file in $dir is $(
find "$dir" -type f -print0 |
xargs -0 stat -f "%m %N" | sort -rn | head -1 | cut -f2- -d" "
)"
done
Nest the second find+xargs inside the first find+xargs:
find . -type d -name "Financials" -print0 \
| xargs -0 sh -c '
for d in "$@"; do
find "$d" -type f -print0 \
| xargs -0 stat -f "%m %N" \
| sort -rn \
| head -1 \
| cut -f2- -d" "
done
' sh
Note the trailing "sh" in sh -c '...' sh -- that word becomes "$0" inside the shell script so the for-loop can iterate over all the directories.
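That $0 behaviour is easy to check in isolation:

```shell
# The word after the script text becomes $0; remaining words become $1, $2, ...
sh -c 'echo "name=$0 args=$@"' sh a b c
# → name=sh args=a b c
```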
A robust way that will also avoid problems with funny filenames that contain special characters is:
find all files within this particular subdirectory and extract the inode number and modification time
$ find . -type f -ipath '*/Financials/*' -printf "%T# %i\n"
extract the youngest file's inode number
$ ... | awk '($1>t){t=$1;i=$2}END{print i}'
search file information by inode number
$ find . -type f -inum "$inode" -ipath '*/Financials/*'
So this gives you:
$ inode="$(find . -type f -ipath '*/Financials/*' -printf "%T# %i\n" | awk '($1>t){t=$1;i=$2}END{print i}')"
$ find . -type f -inum "$inode" -ipath '*/Financials/*'
I have, for example, 3 files (it could be 1 or it could be 30) like this:
name_date1.tgz
name_date2.tgz
name_date3.tgz
When extracted, they will look like:
name_date1/data/info/
name_date2/data/info/
name_date3/data/info/
Here is how it looks inside each folder:
name_date1/data/info/
you.log
you.log.1.gz
you.log.2.gz
you.log.3.gz
name_date2/data/info/
you.log
name_date3/data/info/
you.log
you.log.1.gz
you.log.2.gz
What I want to do is concatenate all the you.log files within each folder, and then concatenate all of those results into one single file.
1st step: extract all the folder
for a in *.tgz
do
a_dir=${a%.tgz}
mkdir "$a_dir" 2>/dev/null
tar -xvzf "$a" -C "$a_dir" >/dev/null
done
2nd step: execute an if statement on each available folder and cat everything
myarray=(`find */data/info/ -maxdepth 1 -name "you.log.*.gz"`)
ls -d */ | xargs -I {} bash -c "cd '{}' &&
if [ ${#myarray[@]} -gt 0 ];
then
find data/info -name "you.log.*.gz" -print0 | sort -z -rn -t. -k4 | xargs -0 zcat | cat - data/info/you.log > youfull1.log
else
cat - data/info/you.log > youfull1.log
fi "
cat */youfull1.log > youfull.log
My issue: when I use multiple name_date*.tgz files, it gives me this error:
gzip: stdin: unexpected end of file
Despite the error, all my files are still concatenated, so why the error message?
When I use only one .tgz file, there is no issue, regardless of the number of you.log files.
Any suggestions, please?
Try something simpler. There is no need for myarray. Pass the files one at a time as they are found and decide what to do with each one:
find */data/info -type f -maxdepth 1 -name "you.log*" -print0 |
sort -z |
xargs -0 -n1 bash -c '
if [[ "${1##*.}" == "gz" ]]; then
zcat "$1";
else
cat "$1";
fi
' --
If you have to iterate over directories, don't use ls; still use find.
find . -maxdepth 1 -type d -name 'name_date*' -print0 |
sort -z |
while IFS= read -r -d '' dir; do
cat "$dir"/data/info/you.log
find "$dir"/data/info -type f -maxdepth 1 -name 'you.log.*.gz' -print0 |
sort -z -t'.' -n -k3 |
xargs -r -0 zcat
done
or (if you have to) with xargs, which should give you an idea of how it's used:
find . -maxdepth 1 -type d -name 'name_date*' -print0 |
sort -z |
xargs -0 -n1 bash -c '
cat "$1"/data/info/you.log
find "$1"/data/info -type f -maxdepth 1 -name "you.log.*.gz" -print0 |
sort -z -t"." -n -k3 |
xargs -r -0 zcat
' --
Use the -t option with xargs to see what it's doing.
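For instance, -t echoes each constructed command line to stderr before running it, which makes the -n1 batching visible. A minimal standalone demo:

```shell
# xargs -t traces each command on stderr; stdout carries the normal output
printf 'dir1\0dir2\0' | xargs -0 -t -n1 echo processing
```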
I have a bash script that counts compressed files by file extension and prints the count.
#!/bin/bash
FIND_COMPRESSED=$(find . -type f | sed -e 's/.*\.//' | sort | uniq -c | sort -rn | grep -Ei '(deb|tgz|tar|gz|zip)$')
COUNT_LINES=$($FIND_COMPRESSED | wc -l)
if [[ $COUNT_LINES -eq 0 ]]; then
echo "No archived files found!"
else
echo "$FIND_COMPRESSED"
fi
However, the script works only if there are NO files with .deb, .tar, .gz, .tgz, or .zip extensions.
If there are some, say test.zip and test.tar in the current folder, I get this error:
./arch.sh: line 5: 1: command not found
Yet, if I copy the command from the FIND_COMPRESSED variable into COUNT_LINES, it all works fine.
#!/bin/bash
FIND_COMPRESSED=$(find . -type f | sed -e 's/.*\.//' | sort | uniq -c | sort -rn | grep -Ei '(deb|tgz|tar|gz|zip)$')
COUNT_LINES=$(find . -type f | sed -e 's/.*\.//' | sort | uniq -c | sort -rn | grep -Ei '(deb|tgz|tar|gz|zip)$'| wc -l)
if [[ $COUNT_LINES -eq 0 ]]; then
echo "No archived files found!"
else
echo "$FIND_COMPRESSED"
fi
What am I missing here?
When you expand the variable like that, the shell tries to execute its contents as a command, which is why it fails when the variable has contents. When it's empty, wc simply reads nothing, returns 0, and the script marches on.
Thus, you need to change that line to this:
COUNT_LINES=$(echo "$FIND_COMPRESSED" | wc -l)
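The double quotes around the expansion matter: unquoted, the newlines in the variable are collapsed to spaces during word splitting, so wc -l always reports one line. A quick illustration with a throwaway variable:

```shell
v=$(printf 'one\ntwo\nthree')   # a value spanning three lines
echo $v | wc -l                 # unquoted: newlines collapse to spaces → 1
echo "$v" | wc -l               # quoted: newlines survive → 3
```

Even with quoting, an empty variable still makes echo emit one empty line, so wc -l reports 1; testing [[ -z $FIND_COMPRESSED ]] instead of comparing the count with 0 sidesteps that edge case.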
But while we're at it, you can also simplify the other line with something like this:
FIND_COMPRESSED=$(find . -type f -iname "*deb" -or -iname "*tgz" -or -iname "*tar*") #etc
You can do:
mapfile -t FIND_COMPRESSED < <(find . -type f -regextype posix-extended -regex ".*\.(deb|tgz|tar|gz|zip)$" -exec bash -c '[[ "$(file "$1")" =~ compressed ]] && echo "$1"' _ {} \;)
COUNT_LINES=${#FIND_COMPRESSED[@]}
I am looking for assistance in creating a bash script that will run several similar commands, sum up the totals and output that total to the screen. I want to run the following commands:
find /var/log/audit -xdev -type f -printf '%i\n' | sort -u | wc -l
find /boot -xdev -type f -printf '%i\n' | sort -u | wc -l
find /home -xdev -type f -printf '%i\n' | sort -u | wc -l
And so on; I have a few others. What I am basically doing is counting up all of the files in each mount point on my system. I then need the script to sum the output of each command's wc -l and print the grand total to the screen. Any help is greatly appreciated.
this should work:
a=$(find /var/log/audit -xdev -type f -printf '%i\n' | sort -u | wc -l)
b=$(find /boot -xdev -type f -printf '%i\n' | sort -u | wc -l)
c=$(find /home -xdev -type f -printf '%i\n' | sort -u | wc -l)
final=$(($a+$b+$c))
echo $final
This will work without naming individual variables; replace the echo commands with your find pipelines:
awk '{sum+=$1} END{print "total: "sum}' < <(echo 4; echo 5; echo 6)
alternatively if the individual counts are not required you can pass more than one path to find
find path1 path2 path3 ...
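With the question's mount points that would look like the sketch below. One caveat: inode numbers are only unique per filesystem, so a combined sort -u across several mount points can merge unrelated files that happen to share an inode number; summing per-mount counts avoids that.

```shell
# One traversal over all three mount points, one combined count
find /var/log/audit /boot /home -xdev -type f -printf '%i\n' | sort -u | wc -l
```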
This might be a good place for dc
{
for mnt in /var/log/audit /boot /home; do
find "$mnt" -xdev -type f -printf '%i\n' | sort -u | wc -l
done
echo "+"
echo "+"
echo "p"
} | dc
You need one fewer "+" than your number of mount points.
I would redirect each command's output to a file:
your_command >> results.txt
and sum them up
awk '{ sum += $1 } END { print sum }' results.txt
I want to get the total count of the number of lines from all the files returned by the following command:
shell> find . -name *.info
All the .info files are nested in sub-directories so I can't simply do:
shell> wc -l *.info
I'm sure this is in any bash user's repertoire, but I'm stuck!
Thanks
wc -l `find . -name '*.info'`
If you just want the total, use
wc -l `find . -name '*.info'` | tail -1
Edit: Piping to xargs also works, and hopefully avoids the "command line too long" problem.
find . -name '*.info' | xargs wc -l
You can use xargs like so:
find . -name '*.info' -print0 | xargs -0 cat | wc -l
Some googling turns up
find /topleveldirectory/ -type f -exec wc -l {} \; | awk '{total += $1} END{print total}'
which seems to do the trick
#!/bin/bash
# bash 4.0
shopt -s globstar
sum=0
for file in **/*.info
do
if [ -f "$file" ];then
s=$(wc -l < "$file")
sum=$((sum+s))
fi
done
echo "Total: $sum"
find . -name "*.info" -exec wc -l {} \;
Note to self - read the question
find . -name "*.info" -exec cat {} \; | wc -l
# for a speed-up use: find ... -exec ... '{}' + | ...
find . -type f -name "*.info" -exec sed -n '$=' '{}' + | awk '{total += $0} END{print total}'