How to store the grep output into a variable inside a loop? - shell

I would like to count the number of outputs produced by this loop
cd /System/Library/Extensions
find *.kext -prune -type d | while read d; do
    codesign -v "$d" 2>&1 | grep "invalid signature"
done
How can I store or count the outputs? I tried with arrays, counters, etc., but it seems I can't get anything out of that loop.

To obtain the number of lines produced by the while loop, pipe its output to the wc word-count utility:
cd /System/Library/Extensions
find *.kext -prune -type d | while read d; do
    codesign -v "$d" 2>&1 | grep "invalid signature"
done | wc -l
The -l option makes wc count the number of lines in its input, which here is the output piped from the while loop.
Now if you need the number of grep matches in each iteration of the while loop, the -c option of grep is useful:
cd /System/Library/Extensions
find *.kext -prune -type d | while read d; do
    codesign -v "$d" 2>&1 | grep -c "invalid signature"
done
-c      Suppress normal output; instead print a count of matching lines for each input file.
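If what you ultimately want is the total of those per-iteration counts, one minimal sketch is to sum the loop's output with awk (assuming each iteration prints a bare number, as above):
find *.kext -prune -type d | while read d; do
    codesign -v "$d" 2>&1 | grep -c "invalid signature"
done | awk '{ total += $1 } END { print total }'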

A classic difficulty with subshells is passing variables back to the parent shell.
Here is one way to get this information back:
cd /System/Library/Extensions
RESULT=$(find *.kext -prune -type d | {
    # we are in a sub-subshell ...
    GCOUNT=0
    DIRCOUNT=0
    FAILDIR=0
    while read d; do
        COUNT=$(codesign -v "$d" 2>&1 | grep -c "invalid signature")
        if [[ -n $COUNT ]]
        then
            if [[ $COUNT -gt 0 ]]   # -gt: numeric, not lexicographic, comparison
            then
                echo "[ERROR] $COUNT invalid signature found in $d" >&2
                GCOUNT=$(( GCOUNT + COUNT ))
                FAILDIR=$(( FAILDIR + 1 ))
            fi
        else
            echo "[ERROR] wrong invalid signature count for $d" >&2
        fi
        DIRCOUNT=$(( DIRCOUNT + 1 ))
    done
    # this is the actual result; that's why all other output of this subshell is redirected to stderr
    echo "$DIRCOUNT $FAILDIR $GCOUNT"
})
# parse result: three integers separated by spaces
if [[ $RESULT =~ ([0-9]+)\ ([0-9]+)\ ([0-9]+) ]]
then
    DIRCOUNT=${BASH_REMATCH[1]}
    FAILDIR=${BASH_REMATCH[2]}
    COUNT=${BASH_REMATCH[3]}
else
    echo "[ERROR] Invalid result format. Please check your script $0" >&2
fi
if [[ -n $COUNT ]]
then
    echo "$COUNT errors found in $FAILDIR/$DIRCOUNT directories"
fi
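If your shell is bash specifically (an assumption; the question only says "shell"), a minimal alternative sketch sidesteps the subshell problem entirely: run the loop in the current shell and feed it through process substitution, so the counters survive the loop.
cd /System/Library/Extensions
GCOUNT=0 DIRCOUNT=0 FAILDIR=0
while read -r d; do
    COUNT=$(codesign -v "$d" 2>&1 | grep -c "invalid signature")
    if [[ $COUNT -gt 0 ]]; then
        GCOUNT=$(( GCOUNT + COUNT ))
        FAILDIR=$(( FAILDIR + 1 ))
    fi
    DIRCOUNT=$(( DIRCOUNT + 1 ))
done < <(find *.kext -prune -type d)
echo "$GCOUNT errors found in $FAILDIR/$DIRCOUNT directories"
Here it is find that runs in a subshell rather than the loop, so no result-parsing step is needed.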

Related

searching for file unix script

My script is as shown:
It searches a directory and provides info about it; however, I am having trouble handling the exceptional cases.
if [ -d "$1" ];
then
    directories=$(find "$1" -type d | wc -l)
    files=$(find "$1" -type f | wc -l)
    sym=$(find "$1" -type l | wc -l)
    printf "%s %'d\n" "Directories" $directories
    printf "%s %'d\n" "Files" $files
    printf "%s %'d\n" "Sym links" $sym
    exit 0
else
    echo "Must provide one argument"
    exit 1
fi
How do I make it so that if the argument is a file, it tells me that a directory needs to be supplied? I'm stuck on it; I've tried test commands but I don't know what else to do.
You're missing your shebang in the first line of your script:
#!/bin/bash
I get correct results from your script if I add it:
Directories 1,991
Files 13,363
Sym links 0
You may also have to set execute permissions: chmod +x scriptname.sh.
Entire script looks like this:
#!/bin/bash
if [ -z "$1" ];
then
    echo "Please provide at least one argument!"
    exit 1
elif [ -d "$1" ];
then
    directories=$(find "$1" -type d | wc -l)
    files=$(find "$1" -type f | wc -l)
    sym=$(find "$1" -type l | wc -l)
    printf "%s %'d\n" "Directories" $directories
    printf "%s %'d\n" "Files" $files
    printf "%s %'d\n" "Sym links" $sym
    exit 0
else
    echo "This is a file, not a directory"
    exit 1
fi
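If you also want to distinguish an existing file from a path that does not exist at all, a possible refinement of the final else branch (a sketch; the messages here are my own, not from the original answer) is:
elif [ -f "$1" ]; then
    echo "This is a file, not a directory"
    exit 1
else
    echo "No such file or directory: $1"
    exit 1
fi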

How to exit/break from nested if-else statement once conditions are met

I have a script that uses nested if-else statements to search for files. I want the script to exit once the conditions are met for any one of the nested statements.
But the script still continues to run through all the remaining if-else statements.
I have tested using exit 0 and return 0 but neither works.
Here's the script:
#!/bin/sh
PATH1=/filer1_vol1_dir1
PATH2=/filer2_vol1_dir1
PATH3=/filer3_vol1_dir2
PATTERN=fruits
find $PATH1 -type f -name "*$PATTERN*" -exec ls -l {} \; >> /tmp/${PATTERN}_search
if [[ -s /tmp/${PATTERN}_search && `grep -i apples /tmp/${PATTERN}_search` ]]
then
    echo "Matching files have been found under $PATH1"
    cat /tmp/${PATTERN}_search
    return 0
else
    echo "No matching files, proceeding to search $PATH2"
    find $PATH2 -type f -name "*$PATTERN*" -exec ls -l {} \; >> /tmp/${PATTERN}_search
    if [[ -s /tmp/${PATTERN}_search && `grep -i apples /tmp/${PATTERN}_search` ]]
    then
        echo "Matching files have been found under $PATH2"
        cat /tmp/${PATTERN}_search
        return 0
    else
        echo "No matching files, proceeding to search $PATH3"
        find $PATH3 -type f -name "*$PATTERN*" -exec ls -l {} \; >> /tmp/${PATTERN}_search
        if [[ -s /tmp/${PATTERN}_search && `grep -i apples /tmp/${PATTERN}_search` ]]
        then
            echo "Matching files have been found under $PATH3"
            cat /tmp/${PATTERN}_search
            return 0
        else
            echo "No file matches, please search elsewhere"
            return 0
        fi
    fi
fi
exit 0
I have found that a better way is to use a while loop that iterates through each search path. Within each iteration, an if-else condition tests whether files matching the find pattern were found; once that condition is true, a break statement stops the loop.
Sample script below:
#!/bin/sh
PATH1=/filer1_vol1_dir1
PATH2=/filer2_vol1_dir1
PATH3=/filer3_vol1_dir2
PATTERN=fruits
echo $PATH1 > /tmp/PATH.list
echo $PATH2 >> /tmp/PATH.list
echo $PATH3 >> /tmp/PATH.list
echo /tmp/PATH.list contains
cat /tmp/PATH.list
echo
cat /dev/null > /tmp/${PATTERN}_search.list
# Note: the loop variable shadows the PATH environment variable, which is why
# find and grep are invoked by absolute path below.
while read PATH
do
    echo "Searching under the following parameters"
    echo PATTERN = $PATTERN
    echo PATH = $PATH
    echo
    /usr/bin/find $PATH -type f -name "*$PATTERN*" -exec ls -l {} \; >> /tmp/${PATTERN}_search.list
    /usr/bin/grep -i apples /tmp/${PATTERN}_search.list
    if [ $? -eq 0 ]
    then
        echo "All matching files have been found."
        break
    else
        echo "No matches found, continuing search in next directory."
    fi
done < /tmp/PATH.list
exit 0
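Since the three paths are known up front, a simpler sketch (same idea, my own variant) iterates with a for loop and breaks on the first match. It also avoids reusing the name PATH, which is what forces the script above to call /usr/bin/find and /usr/bin/grep by absolute path:
#!/bin/sh
PATTERN=fruits
for dir in /filer1_vol1_dir1 /filer2_vol1_dir1 /filer3_vol1_dir2; do
    echo "Searching under $dir"
    find "$dir" -type f -name "*$PATTERN*" -exec ls -l {} \; >> /tmp/${PATTERN}_search.list
    if grep -qi apples /tmp/${PATTERN}_search.list; then
        echo "Matching files have been found under $dir"
        break
    fi
    echo "No matches under $dir, continuing with the next directory."
done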

How to find files and count them (storing the info into a variable)?

I want to have a conditional behavior depending on the number of files found:
found=$(find . -type f -name "$1")
numfiles=$(printf "%s\n" "$found" | wc -l)
if [ $numfiles -eq 0 ]; then
    echo "cannot access $1: No such file" > /dev/stderr; exit 2;
elif [ $numfiles -gt 1 ]; then
    echo "cannot access $1: Duplicate file found" > /dev/stderr; exit 2;
else
    echo "File: $(ls $found)"
    head $found
fi
EDITED CODE (to reflect more precisely what I need)
However, numfiles isn't equal to 2 (or more) when there are duplicate files found...
All the filenames end up on one line, separated by a space.
On the other hand, this works correctly:
find . -type f -name "$1" | wc -l
but I don't want to do twice the recursive search in the if/then/else construct...
Adding -print0 doesn't help either.
What would?
PS- Simplifications or improvements are always welcome!
You want to find files and count those whose name matches "$1":
find . 2>/dev/null | grep -c "/${1}$"
And store the result in a var. In one command:
numfiles=$(grep -c "/${1}$" <(find . 2>/dev/null))
Using $() to store data in a variable trims trailing whitespace. Since the final newline does not appear in the variable found, wc miscounts by one. You can recover the trailing newline with:
numfiles=$(printf "%s\n" "$found" | wc -l)
This miscounts if found is empty (and if any filenames contain a newline), emphasizing the fact that this entire approach is faulty. If you really want to go this way, you can try:
numfiles=$(test -z "$found" && echo 0 || printf "%s\n" "$found" | wc -l)
or pipe the output of find to a script that counts the output and prints a count along with the first filename:
find . -type f -name "$1" | tr '\n' ' ' |
awk '{c=NF; f=$1} END {print c, f; exit c!=1}' c=0 |
while read count name; do
    case $count in
        0) echo no files >&2;;
        1) echo 1 file $name;;
        *) echo Duplicate files >&2;;
    esac;
done
All of these solutions fail miserably if any pathnames contain whitespace. If that matters, you could change the awk to a perl script to make it easier to handle null separators and use -print0, but really I think you should stop worrying about special cases. (find -exec and find | xargs both fail to handle the zero-files-matching case cleanly. Arguably this awk solution also doesn't handle it cleanly.)
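If arbitrary filenames do matter, a hedged sketch (assuming your find and tr support -print0 and NUL bytes, as the GNU and BSD tools do) is to count NUL separators instead of lines; since NUL can never appear in a pathname, the count is exact:
numfiles=$(find . -type f -name "$1" -print0 | tr -dc '\0' | wc -c)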

Bash script loop through subdirectories and write to file

I have spent a lot of hours dealing with this problem and have no idea how to solve it. I need to write a script that loops recursively through the subdirectories of the current directory and checks the file count in each directory. If the file count is greater than 10, it should write all the names of those files into a file named "BigList"; otherwise it should write them into a file named "ShortList". The output should look like this:
---<directory name>
<filename>
<filename>
<filename>
<filename>
....
---<directory name>
<filename>
<filename>
<filename>
<filename>
....
My script only works if the subdirectories don't themselves contain subdirectories.
I am confused, because it doesn't work as I expect; writing this in any other programming language would take me less than 5 minutes.
Please help me solve this problem, because I have no idea how to do it.
Here is my script:
#!/bin/bash
parent_dir=""
if [ -d "$1" ]; then
    path=$1;
else
    path=$(pwd)
fi
parent_dir=$path
loop_folder_recurse() {
    local files_list=""
    local cnt=0
    for i in "$1"/*; do
        if [ -d "$i" ]; then
            echo "dir: $i"
            parent_dir=$i
            echo before recursion
            loop_folder_recurse "$i"
            echo after recursion
            if [ $cnt -ge 10 ]; then
                echo -e "---"$parent_dir >> BigList
                echo -e $file_list >> BigList
            else
                echo -e "---"$parent_dir >> ShortList
                echo -e $file_list >> ShortList
            fi
        elif [ -f "$i" ]; then
            echo file $i
            if [ $cur_fol != $main_pwd ]; then
                file_list+=$i'\n'
                cnt=$((cnt + 1))
            fi
        fi
    done
}
echo "Base path: $path"
loop_folder_recurse $path
I believe that this does what you want:
find . -type d -exec env d={} bash -c 'out=ShortList; [ $(ls "$d" | wc -l) -ge 10 ] && out=BigList; { echo "--$d"; ls "$d"; echo; } >>"$out"' ';'
If we want neither to count subdirectories toward the cut-off nor to list them in the output, use this version:
find . -type d -exec env d={} bash -c 'out=ShortList; [ $(ls -p "$d" | grep -v "/$" | wc -l) -ge 10 ] && out=BigList; { echo "--$d"; ls -p "$d"; echo; } | grep -v "/$" >>"$out"' ';'
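For completeness, here is a sketch of how the recursive function from the question could be repaired (my own reconstruction, not part of the answer above): keep the list and counter local to each call, use one consistent variable name (the original mixes files_list and file_list), and report each directory after its own loop finishes rather than inside the parent's loop.
#!/bin/bash
loop_folder_recurse() {
    local dir=$1 file_list="" cnt=0 entry
    for entry in "$dir"/*; do
        if [ -d "$entry" ]; then
            loop_folder_recurse "$entry"    # recurse; this call's locals are untouched
        elif [ -f "$entry" ]; then
            file_list+="$entry"$'\n'
            cnt=$((cnt + 1))
        fi
    done
    # report this directory once its own files are counted
    if [ "$cnt" -ge 10 ]; then
        { echo "---$dir"; printf '%s' "$file_list"; } >> BigList
    else
        { echo "---$dir"; printf '%s' "$file_list"; } >> ShortList
    fi
}
loop_folder_recurse "${1:-$PWD}"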

Bash: Native way to check if an entry is one line?

I have a find script that automatically opens a file if just one file is found. The way I currently handle it is doing a word count on the number of lines of the search results. Is there an easier way to do this?
if [ "$( cat "$temp" | wc -l | xargs echo )" == "1" ]; then
edit `cat "$temp"`
fi
EDITED - here is the context of the whole script.
term="$1"
temp=".aafind.txt"
find src sql common -iname "*$term*" | grep -v 'src/.*lib' >> "$temp"
if [ ! -s "$temp" ]; then
echo "ΓΈ - including lib..." 1>&2
find src sql common -iname "*$term*" >> "$temp"
fi
if [ "$( cat "$temp" | wc -l | xargs echo )" == "1" ]; then
# just open it in an editor
edit `cat "$temp"`
else
# format output
term_regex=`echo "$term" | sed "s%\*%[^/]*%g" | sed "s%\?%[^/]%g" `
cat "$temp" | sed -E 's%//+%/%' | grep --color -E -i "$term_regex|$"
fi
rm "$temp"
Unless I'm misunderstanding, the variable $temp contains one or more filenames, one per line, and if there is only one filename it should be edited?
[ $(wc -l <<< "$temp") = "1" ] && edit "$temp"
If $temp is a file containing filenames:
[ $(wc -l < "$temp") = "1" ] && edit "$(cat "$temp")"
Several of the results here will read through an entire file, whereas one can stop and have an answer after one line and one character:
if { IFS='' read -r result && ! read -n 1 _; } <file; then
    echo "Exactly one line: $result"
else
    echo "Either no valid content at all, or more than one line"
fi
For safely reading from find, if you have GNU find and bash as your shell, replace <file with < <(find ...) in the above. Even better, in that case, is to use NUL-delimited names, such that filenames with newlines (yes, they're legal) don't trip you up:
if { IFS='' read -r -d '' result && ! read -r -d '' -n 1 _; } \
        < <(find ... -print0); then
    printf 'Exactly one file: %q\n' "$result"
else
    echo "Either no results, or more than one"
fi
Well, given that you are storing these results in the file $temp, this is a little easier:
[ "$( wc -l < "$temp" )" -eq 1 ] && edit "$( cat "$temp" )"
Instead of $( cat "$temp" ) you can write $( < "$temp" ), but it might take away some readability if you are not very familiar with redirection 8)
If you want to test whether the file is empty or not, test -s does that.
if [ -s "$temp" ]; then
    edit `cat "$temp"`
fi
(A non-empty file by definition contains at least one line. You should find that wc -l agrees.)
If you genuinely want a line count of exactly one, then yes, it can be simplified substantially:
if [ $( wc -l <"$temp" ) = 1 ]; then
    edit `cat "$temp"`
fi
You can use arrays:
x=($(find . -type f))
[ "${#x[*]}" -eq 1 ] && echo "just one || echo "many"
But you might have problems in case of filenames with whitespace, etc.
Still, something like this would be a native way.
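A sketch that sidesteps that whitespace caveat, assuming bash 4.4+ for mapfile -d and a find with -print0:
mapfile -d '' x < <(find . -type f -print0)
[ "${#x[@]}" -eq 1 ] && echo "just one" || echo "many"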
No, this is the way, though you're making it over-complicated:
if [ "`wc -l $temp | cut -d' ' -f1`" = "1" ]; then
    edit "$temp";
fi
what's complicating it is:
useless use of cat,
unnecessary use of xargs,
and I'm not sure if you really want the edit `cat $temp`, which edits the file named by the content of $temp.
