grep -c kills script when no match using set -e - bash

Basic example:
#!/bin/bash
set -e
set -x
NUM_LINES=$(printf "Hello\nHi" | grep -c "How$")
echo "Number of lines: ${NUM_LINES}" # never prints 0
Output:
++ grep -c 'How$'
++ printf 'Hello\nHi'
+ NUM_LINES=0
If there are matches, it prints the correct count. Note that grep "How$" | wc -l also works in place of grep -c "How$", because the pipeline's exit status is then that of wc -l, which always succeeds.

You can suppress grep's exit code by running : when it "fails". : always succeeds.
NUM_LINES=$(printf "Hello\nHi" | grep -c "How$" || :)
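If you also want to branch on whether anything matched, a minimal sketch of the same idea (reusing the printf/grep example above) is to run the assignment inside an if condition, which set -e does not abort on:
#!/bin/bash
set -e
if NUM_LINES=$(printf "Hello\nHi" | grep -c "How$"); then
    echo "Matches found: ${NUM_LINES}"
else
    echo "Number of lines: ${NUM_LINES}"   # grep -c already printed 0 to stdout
fi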

Related

Prevent grep from exiting in case of no match

Prevent grep from returning an error when the input doesn't match. I would like the script to keep running rather than exiting with exit code 1.
set -euo pipefail
numbr_match=$(find logs/log_proj | grep "$name" | wc -l);
How could I solve this?
In this individual case, you should probably use
find logs/log_proj -name "*$name*" | wc -l
More generally, you can run grep in a subshell and trap the error.
find logs/log_proj | ( grep "$name" || true) | wc -l
... though of course grep | wc -l is separately an antipattern;
find logs/log_proj | grep -c "$name" || true
I don't know why you are using -e and pipefail when you don't want this behaviour. If your goal is to treat grep's exit code 2 as an error but exit code 1 (no match) as success, you could write a wrapper script around grep and call it instead of grep:
#!/bin/bash
# This script behaves exactly like grep, except
# that it returns exit code 0 if there are no
# matching lines (grep's exit code 1)
grep "$@"
rc=$?
exit $((rc == 1 ? 0 : rc))
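For example, if the wrapper above is saved as grep0 somewhere on your PATH (the name is just an illustration) and made executable, the original snippet keeps working under set -e and pipefail:
# assuming the wrapper was saved as "grep0" and made executable
set -euo pipefail
numbr_match=$(find logs/log_proj | grep0 "$name" | wc -l)
echo "$numbr_match"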

Different output of command substitution

Why does adding | wc -l alter the result, as in the following?
tst:
#!/bin/bash
pgrep tst | wc -l
echo $(pgrep tst | wc -l)
echo $(pgrep tst) | wc -l
$ ./tst
1
2
1
and even
$ bash -x tst
+ wc -l
+ pgrep tst
0
++ pgrep tst
++ wc -l
+ echo 0
0
++ pgrep tst
+ echo
pgrep and subshells can have weird interactions, but in this case that's just a red herring; the actual cause is missing double-quotes around the command substitution:
$ cat tst2
#!/bin/bash
pgrep tst | wc -l
echo "$(pgrep tst | wc -l)"
echo "$(pgrep tst)" | wc -l
$ ./tst2
1
2
2
What's going on in the original script is that in the command
echo $(pgrep tst) | wc -l
pgrep prints two process IDs (the main shell running the script, and a subshell created to handle the echo part of the pipeline). It prints each one as a separate line, something like:
11730
11736
The command substitution captures that, but since it's not in double-quotes the newline between them gets converted to an argument break, so the whole thing becomes equivalent to:
echo 11730 11736 | wc -l
As a result, echo prints both IDs on a single line, and wc -l correctly reports 1.
The command substitution induces an additional process that has tst in its name, which is included in the input to wc -l.
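The same effect can be reproduced without pgrep at all; this small sketch uses a fixed two-line string, so the result is deterministic:
s=$(printf 'line1\nline2')
echo $s | wc -l      # 1: unquoted, the newline becomes a word separator
echo "$s" | wc -l    # 2: quoted, the newline is preserved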

Set a command to a variable in bash script problem

I am trying to run a command stored in a variable, but I am getting strange results.
Expected result "1" :
grep -i nosuid /etc/fstab | grep -iq nfs
echo $?
1
Unexpected result as a variable command:
cmd="grep -i nosuid /etc/fstab | grep -iq nfs"
$cmd
echo $?
0
It seems it returns 0 because the command string was valid, not based on the actual outcome. How can I do this better?
You can only execute exactly one command stored in a variable. The pipe is passed as an argument to the first grep.
Example
$ printArgs() { printf %s\\n "$@"; }
# Two commands. The 1st command has parameters "a" and "b".
# The 2nd command prints stdin from the first command.
$ printArgs a b | cat
a
b
$ cmd='printArgs a b | cat'
# Only one command with parameters "a", "b", "|", and "cat".
$ $cmd
a
b
|
cat
How to do this better?
Don't execute the command using variables.
Use a function.
$ cmd() { grep -i nosuid /etc/fstab | grep -iq nfs; }
$ cmd
$ echo $?
1
Solution to the actual problem
I see three options to your actual problem:
Use a DEBUG trap and the BASH_COMMAND variable inside the trap (a minimal sketch follows this list).
Enable bash's history feature for your script and use the history builtin.
Use a function which takes a command string and executes it using eval.
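A minimal sketch of the first option (the log file path and format are illustrative assumptions, not taken from your script):
#!/bin/bash
# Log every simple command just before bash runs it.
logfile=/tmp/audit.log
trap 'printf "RUN: %s\n" "$BASH_COMMAND" >> "$logfile"' DEBUG

grep -iq nosuid /etc/fstab
grep -iq nfs /etc/fstab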
Regarding your comment on the last approach: You only need one function. Something like
execAndLog() {
    description="$1"
    shift
    if eval "$*"; then
        info="PASSED: $description: $*"
        passed+=("${FUNCNAME[1]}")
    else
        info="FAILED: $description: $*"
        failed+=("${FUNCNAME[1]}")
    fi
}
You can use this function as follows
execAndLog 'Scanned system' 'grep -i nfs /etc/fstab | grep -iq noexec'
The first argument is the description for the log, the remaining arguments are the command to be executed.
Using bash -x or set -x will let you see what bash actually executes:
> cmd="grep -i nosuid /etc/fstab | grep -iq nfs"
> set -x
> $cmd
+ grep -i nosuid /etc/fstab '|' grep -iq nfs
As you can see, the pipe | is passed as a literal argument to the first grep command.
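If you really do need the command text in a variable, one alternative (with the usual caveats about feeding untrusted strings to eval) is to let the shell re-parse it:
cmd="grep -i nosuid /etc/fstab | grep -iq nfs"
eval "$cmd"
echo $?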

How to print the number of occurrences of a word in a file in unix

This is my shell script.
Given a directory and a word, search the directory and print the absolute path of the file that has the most occurrences of the word, and also print the number of occurrences.
I have written the following script
#!/bin/bash
if [[ -n $(find / -type d -name $1 2> /dev/null) ]]
then
echo "Directory exists"
x=` echo " $(find / -type d -name $1 2> /dev/null)"`
echo "$x"
cd $x
y=$(find . -type f | xargs grep -c $2 | grep -v ":0"| grep -o '[^/]*$' | sort -t: -k2,1 -n -r )
echo "$y"
else
echo "Directory does does not exists"
fi
Usage: scriptname directoryname word
output: /somedirectory/vtb/wordsearch : 4
/foo/bar: 3
Is there any way to replace xargs grep -c $2? grep -c prints the number of lines that contain the word, but I need the exact number of occurrences of the word in the files in the given directory.
Using grep's -c count feature:
grep -c "SEARCH" /path/to/files* | sort -r -t : -k 2 | head -n 1
The grep command will output each file in a /path/name:count format, the sort will numerically (-n) sort by the 2nd (-k 2) field as delimited by a colon (-t :) in reverse order (-r). We then use head to keep the first result (-n 1).
Try This:
grep -o -w 'foo' bar.txt | wc -w
OR
grep -o -w 'word' /path/to/file | wc -w
grep -Fwor "$word" "$dir" | sed "s/:${word}\$//" | sort | uniq -c | sort -n | tail -1
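Putting it together, here is a rough sketch of the whole task, counting true word occurrences per file and keeping the maximum; it assumes $1 is the directory, $2 is the word, and that GNU readlink -f is available for the absolute path:
#!/bin/bash
dir=$1 word=$2
best_file='' best_count=0
while IFS= read -r -d '' f; do
    count=$(grep -ow "$word" "$f" 2>/dev/null | wc -l)   # occurrences, not matching lines
    if [ "$count" -gt "$best_count" ]; then
        best_count=$count
        best_file=$f
    fi
done < <(find "$dir" -type f -print0)
[ -n "$best_file" ] && printf '%s : %s\n' "$(readlink -f "$best_file")" "$best_count"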

[: : bad number on the bash script

This is my bash script:
#!/usr/local/bin/bash -x
touch /usr/local/p
touch /usr/local/rec
DATA_FULL=`date +%Y.%m.%d.%H`
CHECK=`netstat -an | grep ESTAB | egrep '(13001|13002|13003|13004|13061|13099|16001|16002|16003|16004|16061|16099|18001|18002|18003|18004|18061|18099|20001|20002|20003|20004|20061|20099|13000|16000|18000|20000)' | awk '{ print $5 }' | sort -u | wc -l`
netstat -an | grep ESTAB | egrep '(13001|13002|13003|13004|13061|13099|16001|16002|16003|16004|16061|16099|18001|18002|18003|18004|18061|18099|20001|20002|20003|20004|20061|20099|13000|16000|18000|20000)' | awk '{ print $5 }' | sort -u | wc -l > /usr/local/www/p
STAT=`cat /usr/local/www/rec`
if [ "$CHECK" -gt "$STAT" ]; then
echo $CHECK"\n"$DATA_FULL > /usr/local/p
fi
Of course I ran chmod +x script.sh and then sh script.sh, and I receive the following message: [: : bad number.
Why does this happen?
Run your script using
sh -x script.sh
It'll print every line it executes and the variable output.
Run the netstat command and the cat command outside the script and check their output.
If these are definitely integers, use this syntax:
if [ "0$(echo $CHECK|tr -d ' ')" -gt "0$(echo $STAT|tr -d ' ')" ];
It is a simple hack and only works if $STAT is always either empty or a positive number.
Are you sure that both STAT and CHECK are numbers that can be compared with -gt?
Probably your /usr/local/www/rec is empty. Try
STAT=`cat /usr/local/www/rec 2>/dev/null || echo 0`
maybe.
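Note also that cat on an existing but empty file still exits 0, so the || echo 0 above will not fire in that case. A small sketch that also covers the empty-file case is to default the value with parameter expansion:
STAT=$(cat /usr/local/www/rec 2>/dev/null)
STAT=${STAT:-0}    # empty or missing file counts as 0
if [ "$CHECK" -gt "$STAT" ]; then
    printf '%s\n%s\n' "$CHECK" "$DATA_FULL" > /usr/local/p    # printf handles the newline that echo "\n" does not
fi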
