Preventing wildcard expansion in bash script

I've searched here, but still can't find the answer to my globbing problems.
We have files "file.1" through "file.5", and each one should contain the string "completed" if our overnight processing went ok.
I figure it's a good thing to first check that there are some files, then I want to grep them to see if I find 5 "completed" strings. The following innocent approach doesn't work:
FILES="/mydir/file.*"
if [ -f "$FILES" ]; then
    COUNT=`grep completed $FILES`
    if [ $COUNT -eq 5 ]; then
        echo "found 5"
    else
        echo "no files?"
    fi
fi
Thanks for any advice....Lyle

Per http://mywiki.wooledge.org/BashFAQ/004, the best approach to counting files is to use an array (with the nullglob option set):
shopt -s nullglob
files=( /mydir/file.* )
count=${#files[@]}
If you want to collect the names of those files, you can do it like so (assuming GNU grep):
completed_files=()
while IFS='' read -r -d '' filename; do
    completed_files+=( "$filename" )
done < <(grep -l -Z completed /dev/null /mydir/file.*)
(( ${#completed_files[@]} == 5 )) && echo "Exactly 5 files completed"
This approach is somewhat verbose, but guaranteed to work even with highly unusual filenames.
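Putting the two pieces together for the question's goal, here is a hedged sketch (paths follow the question; GNU grep assumed for -Z):
#!/bin/bash
shopt -s nullglob
files=( /mydir/file.* )
# Bail out early if the overnight run produced no files at all.
(( ${#files[@]} > 0 )) || { echo "no files?"; exit 1; }
# Collect the files containing "completed", NUL-delimited so unusual
# filenames survive; /dev/null forces grep to print filenames even
# when only one file matches the glob.
completed_files=()
while IFS='' read -r -d '' filename; do
    completed_files+=( "$filename" )
done < <(grep -l -Z completed /dev/null "${files[@]}")
(( ${#completed_files[@]} == 5 )) && echo "found 5"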

try this:
[[ $(grep -l 'completed' /mydir/file.* | grep -c .) == 5 ]] || echo "Something is wrong"
It will print "Something is wrong" if it doesn't find exactly 5 completed files.
Corrected the missing "-l"; here is the explanation:
$ grep -c completed file.*
file.1:1
file.2:1
file.3:0
$ grep -l completed file.*
file.1
file.2
$ grep -l completed file.* | grep -c .
2
$ grep -l completed file.* | wc -l
2

You can prevent globbing by quoting the expansion:
echo "$FILES"
but it seems you have a different problem.
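For completeness, pathname expansion can also be switched off shell-wide with set -f; a small sketch:
set -f              # disable pathname expansion (globbing)
FILES="/mydir/file.*"
echo $FILES         # prints the literal pattern even though it is unquoted
set +f              # re-enable it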

Related

Delete empty files - Improve performance of logic

I need to find & remove empty files. The definition of an empty file in my use case is a file which has zero lines.
I did try testing the file to see if it's empty. However, this behaves strangely: even though the file is empty, it doesn't detect it as such.
Hence, the best thing I could write up is the below script, which is way too slow given it has to test several hundred thousand files:
#!/bin/bash
LOOKUP_DIR="/path/to/source/directory"
cd ${LOOKUP_DIR} || { echo "cd failed"; exit 0; }
for fname in $(realpath */*)
do
    if [[ $(wc -l "${fname}" | awk '{print $1}') -eq 0 ]]
    then
        echo "${fname}" is empty
        rm -f "${fname}"
    fi
done
Is there a better way to do what I'm after or alternatively, can the above logic be re-written in a way that brings better performance please?
Your script is slow because wc reads every file to the end, which is not needed for your purpose. This might be what you're looking for:
#!/bin/bash
lookup_dir='/path/to/source/directory'
cd "$lookup_dir" || exit
for file in *; do
    if [[ -f "$file" && -r "$file" && ! -L "$file" ]]; then
        read < "$file" || echo rm -f -- "$file"
    fi
done
Drop the echo after making sure it works as intended.
Another version, calling the rm only once, could be:
#!/bin/bash
lookup_dir='/path/to/source/directory'
cd "$lookup_dir" || exit
for file in *; do
    if [[ -f "$file" && -r "$file" && ! -L "$file" ]]; then
        read < "$file" || files_to_be_deleted+=("$file")
    fi
done
rm -f -- "${files_to_be_deleted[@]}"
Explanation:
The core logic is in the line
read < "$file" || rm -f -- "$file"
The read < "$file" command attempts to read a line from the $file. If it succeeds, that is, a line is read, then the rm command on the right-hand side of the || won't be executed (that's how the || works). If it fails then the rm command will be executed. In any case, at most one line will be read. This has great advantage over the wc command because wc would read the whole file.
if ! read < "$file"; then rm -f -- "$file"; fi
could be used instead. The two lines are equivalent.
To check whether "$fname" is a file and whether it is empty, use [ -s "$fname" ]:
#!/usr/bin/env sh
LOOKUP_DIR="/path/to/source/directory"
for fname in "$LOOKUP_DIR"/*/*; do
    if ! [ -s "$fname" ]; then
        echo "${fname}" is empty
        # remove echo when output is what you want
        echo rm -f "${fname}"
    fi
done
See: help test:
File operators:
...
-s FILE True if file exists and is not empty.
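As a side note, -s tests size, not line count, so it disagrees with the question's wc -l definition for a file lacking a trailing newline; a quick illustration (demo.txt is just an example name):
printf 'abc' > demo.txt   # content, but no newline
[ -s demo.txt ] && echo "non-empty by -s"
wc -l < demo.txt          # prints 0, since wc -l counts newlines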
Yet another method
wc -l ~/tmp/* 2>/dev/null | awk '$1 == 0 {print $2}' | xargs echo rm
This will break if any of your files have whitespace in the name.
To work around that, with awk still
wc -l ~/tmp/* 2>/dev/null \
| awk 'sub(/^[[:blank:]]+0[[:blank:]]+/, "")' \
| xargs echo rm
This works because the sub function returns the number of substitutions made, which can be treated as a boolean zero/not-zero condition.
Remove the echo to actually delete the files.
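If GNU find is available, its -empty test does the size check in one process and can delete in the same pass; note it matches zero-byte files (the -s definition) rather than zero-line files, so treat this as a hedged alternative:
# Print first; append -delete after -print once the list looks right.
find /path/to/source/directory -mindepth 2 -maxdepth 2 -type f -empty -print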

find multiple patterns in multiple files bash

I'm trying to find multiple patterns (I have a file of them) in multiple different files within a tree with a lot of subdirs.
I'm trying to use exit codes so that I don't output all the patterns found (because I need only the ones which are NOT found), but the exit codes don't work the way I understand them.
while read pattern; do
    grep -q -n -r $pattern ./dir/
    if [ $? -eq 0 ]; then
        : #echo $pattern ' exists'
    else
        echo $pattern " doesn't exist"
    fi
done <strings.tmp
You can use this in bash:
while read -r pattern; do
    grep -F -q -r "$pattern" ./dir/ || echo "$pattern doesn't exist"
done < strings.tmp
Use read -r to safely read regex patterns
Quote "$pattern" to avoid word splitting and globbing
No need for -n since you're using the -q (quiet) flag
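If the pattern list is large, a single pass may beat one grep per pattern. This sketch assumes GNU grep and that the patterns are fixed strings, not regexes:
sort -u strings.tmp > all.tmp
# -o prints each match, -h drops filename prefixes, -F means literal strings
grep -r -h -o -F -f all.tmp ./dir/ | sort -u > found.tmp
# patterns present in all.tmp but absent from found.tmp never matched
comm -23 all.tmp found.tmp | sed "s/$/ doesn't exist/"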
@anubhava's solution should work. If it doesn't for some reason, try the following:
while read -r pattern; do
    lines=`grep -r "$pattern" ./dir/ | wc -l`
    if [ "$lines" -eq 0 ]; then
        echo "$pattern doesn't exist"
    else
        echo "$pattern exists"
    fi
done < strings.tmp
(Note: -q must not be combined with wc -l here, since -q suppresses all output and the count would always be 0.)

Bash: Native way to check if an entry is one line?

I have a find script that automatically opens a file if just one file is found. The way I currently handle it is doing a word count on the number of lines of the search results. Is there an easier way to do this?
if [ "$( cat "$temp" | wc -l | xargs echo )" == "1" ]; then
edit `cat "$temp"`
fi
EDITED - here is the context of the whole script.
term="$1"
temp=".aafind.txt"
find src sql common -iname "*$term*" | grep -v 'src/.*lib' >> "$temp"
if [ ! -s "$temp" ]; then
    echo "ø - including lib..." 1>&2
    find src sql common -iname "*$term*" >> "$temp"
fi
if [ "$( cat "$temp" | wc -l | xargs echo )" == "1" ]; then
    # just open it in an editor
    edit `cat "$temp"`
else
    # format output
    term_regex=`echo "$term" | sed "s%\*%[^/]*%g" | sed "s%\?%[^/]%g" `
    cat "$temp" | sed -E 's%//+%/%' | grep --color -E -i "$term_regex|$"
fi
rm "$temp"
Unless I'm misunderstanding, the variable $temp contains one or more filenames, one per line, and if there is only one filename it should be edited?
[ $(wc -l <<< "$temp") = "1" ] && edit "$temp"
If $temp is a file containing filenames:
[ $(wc -l < "$temp") = "1" ] && edit "$(cat "$temp")"
Several of the results here will read through an entire file, whereas one can stop and have an answer after one line and one character:
if { IFS='' read -r result && ! read -n 1 _; } <file; then
    echo "Exactly one line: $result"
else
    echo "Either no valid content at all, or more than one line"
fi
For safely reading from find, if you have GNU find and bash as your shell, replace <file with < <(find ...) in the above. Even better, in that case, is to use NUL-delimited names, such that filenames with newlines (yes, they're legal) don't trip you up:
if { IFS='' read -r -d '' result && ! read -r -d '' -n 1 _; } \
        < <(find ... -print0); then
    printf 'Exactly one file: %q\n' "$result"
else
    echo "Either no results, or more than one"
fi
Well, given that you are storing these results in the file $temp this is a little easier:
[ "$( wc -l < $temp )" -eq 1 ] && edit "$( cat $temp )"
Instead of 'cat $temp' you can do '< $temp', but it might take away some readability if you are not very familiar with redirection 8)
If you want to test whether the file is empty or not, test -s does that.
if [ -s "$temp" ]; then
    edit `cat "$temp"`
fi
(A non-empty file contains at least some data, but note that wc -l counts newlines, so a non-empty file without a trailing newline still reports 0 lines.)
If you genuinely want a line count of exactly one, then yes, it can be simplified substantially;
if [ $( wc -l <"$temp" ) = 1 ]; then
    edit `cat "$temp"`
fi
You can use arrays:
x=($(find . -type f))
[ "${#x[*]}" -eq 1 ] && echo "just one" || echo "many"
But you might have problems in case of filenames with whitespace, etc.
Still, something like this would be a native way
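A safer variant of the array idea, assuming bash 4.4+ (for mapfile -d '') and GNU find (for -print0), survives whitespace and even newlines in names:
mapfile -d '' x < <(find . -type f -print0)
[ "${#x[@]}" -eq 1 ] && echo "just one" || echo "many"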
No, this is the way, though you're making it over-complicated:
if [ "`wc -l $temp | cut -d' ' -f1`" = "1" ]; then
    edit "$temp";
fi
what's complicating it is:
a useless use of cat,
an unuseful use of xargs,
and I'm not sure you really want the edit `cat "$temp"`, which edits the file named by the content of $temp.

Bash - How to count C source file function calls

I want to find, for each function defined in a C source file, how many times it's called and on which lines.
Should I search for patterns which look like function definitions in C and then count how many times each function name occurs? If so, how can I do it? Regular expressions?
Any help will be highly appreciated!
#!/bin/bash
if [ -r $1 ]; then
    #??????
else
    echo The file \"$1\" does NOT exist
fi
The final result is: (please report any bugs)
if [ -r $1 ]; then
    functs=`grep -n -e "\(void\|double\|char\|int\) \w*(.*)" $1 | sed 's/^.*\(void\|double\|int\) \(\w*\)(.*$/\2/g'`
    for f in $functs; do
        echo -n $f\(\) is called:
        grep -n $f $1 > temp.txt
        echo -n `grep -c -v -e "\(void\|double\|int\) $f(.*)" -e"//" temp.txt`
        echo " times"
        echo -n on lines:
        echo -n `grep -v -e "\(void\|double\|int\) $f(.*)" -e"//" temp.txt | sed -n 's/^\([0-9]*\)[:].*/\1/p'`
        echo
        echo
    done
else
    echo The file \"$1\" does not exist
fi
This might sort of work. The first bit finds function definitions like
<datatype> <name>(<stuff>)
and pulls out the <name>. Then grep for that string. There are loads of situations where this won't work, but it might be a good place to start if you're trying to make a simple shell script that works on some programs.
functions=`grep -e "\(void\|double\|int\) \w*(.*)$" input.c | sed 's/^.*\(void\|double\|int\) \(\w*\)(.*$/\2/g'`
for func in $functions
do
    echo "Counting references for $func:"
    grep "$func" input.c | wc -l
done
(Note: input.c must be a positional argument here; grep -f input.c would read it as a pattern file instead.)
You can try this regex:
(^|[^[:alnum:]_])functionName[[:space:]]*\(
For example, to search for all printf occurrences:
(^|[^[:alnum:]_])printf[[:space:]]*\(
To use this expression with grep you have to use the option -E, like this:
grep -E "(^|[^[:alnum:]_])printf[[:space:]]*\(" the_file.txt
(The POSIX classes are used because \d and bracketed \w are not supported by grep -E.)
A final note: what this solution misses is skipping occurrences inside comment blocks.
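One hedged way to skip comment blocks is to let the C preprocessor strip them before counting; this assumes gcc (the -fpreprocessed trick removes comments without expanding macros, -P drops line markers, and the file name is illustrative):
gcc -fpreprocessed -dD -E -P the_file.c \
    | grep -E -c "(^|[^[:alnum:]_])printf[[:space:]]*\("
This counts lines containing a printf call rather than individual occurrences.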

In a unix box, I am taking a list of files as input. If a file is found, return the path; otherwise return the message "filename file not found"

I have used the find command for this, but it doesn't return any message when a file is not found.
I want the search to be recursive and to return a message "not found" when a file is not found.
Here's the code I have done so far. Here "input.txt" contains the list of files to be searched.
set `cat input.txt`
echo $#
for i in $@
do
    find $HOME -name $i
done
Try this:
listfile=input.txt
exec 3>&1
find | \
    grep -f <( sed 's|.*|/&$|' "$listfile" ) | \
    tee /dev/fd/3 | \
    sed 's|.*/\([^/]*\)$|\1|' | \
    grep -v -f - "$listfile" | \
    sed 's/$/ Not found/'
exec 3>&-
open file descriptor 3
find the files
see if they're on the list (sed turns each name into an anchored pattern for grep)
send a copy of the found ones to file descriptor 3 (tee)
strip off the directory name
get a list of the ones that don't appear
add the "Not found" message
close file descriptor 3
Output looks like:
/path/to/file1
/path/somewhere/file2
foo Not found
bar Not found
No loops necessary.
What's wrong with using a script? I hope this will do.
#!/bin/bash -f
for i in "$@"
do
    var=`find $HOME -name $i`
    if [ -z "$var" ]
    then
        var="File not found"
    fi
    echo "$var"
done
You can use the shell builtin 'test' to test the existence of a file. There is also an alternative syntax using square brackets:
if [ -f "$a" ]; then # Don't forget the semicolon.
    echo "$a"
else
    echo 'Not Found'
fi
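Tying that test to the question's recursive search, a minimal sketch (GNU find assumed for -print -quit, which stops at the first match; input.txt is the question's list):
while IFS= read -r name; do
    found=$(find "$HOME" -name "$name" -print -quit)
    if [ -n "$found" ]; then
        echo "$found"
    else
        echo "$name file not found"
    fi
done < input.txt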
Here is one way - create a list of all the files to grep against. If your implementation supports grep -q, use it; otherwise send grep's output to /dev/null (grep [pattern] >/dev/null 2>&1).
find $HOME -type f |
while read fname
do
    echo "$(basename $fname) $fname"
done > /tmp/chk.lis
while read fname
do
    grep -q "^$fname " /tmp/chk.lis
    [ $? -eq 0 ] && echo "$fname found" || echo "$fname not found"
done < input.txt
All of this is needed because POSIX find does not return an error when a file is not found
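That claim is easy to verify: find's exit status reflects errors, not whether anything matched:
find /tmp -name 'no-such-file-xyz'   # prints nothing
echo $?                              # 0: finding nothing is not an error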
perl -nlE 'say -f $_ ? $_ : "not found: $_"' file
