Bash conditional on command exit code

In bash, I want to say "if a file doesn't contain XYZ, then" do a bunch of things. The most natural way to transpose this into code is something like:
if [ ! grep --quiet XYZ "$MyFile" ] ; then
... do things ...
fi
But of course, that's not valid Bash syntax. I could use backticks, but then I'll be testing the output of the file. The two alternatives I can think of are:
grep --quiet XYZ "$MyFile"
if [ $? -ne 0 ]; then
... do things ...
fi
And
grep --quiet XYZ "$MyFile" ||
( ... do things ...
)
I kind of prefer the second one: it's more Lispy, and || for control flow isn't that uncommon in scripting languages. I can see arguments for the first one too, although when someone reads the first line, they don't know why you're executing grep; it looks like you're running it for its main effect, rather than just to control a branch in the script.
Is there a third, more direct way which uses an if statement and has the grep in the condition?

Yes there is:
if grep --quiet .....
then
# If grep finds something
fi
or, if the grep fails:
if ! grep --quiet .....
then
# If grep doesn't find something
fi

You don't need the [ ] (test) to check the return value of a command. Just try:
if ! grep --quiet XYZ "$MyFile" ; then
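Putting that together, a minimal self-contained sketch (the file contents and the XYZ pattern are just placeholders):

```shell
# Create a throwaway file that does not contain XYZ.
MyFile=$(mktemp)
printf 'alpha\nbeta\n' > "$MyFile"

# grep's exit status drives the branch directly; no [ ] is needed.
if ! grep --quiet XYZ "$MyFile"; then
    echo "XYZ is missing"
fi

rm -f "$MyFile"
```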

This is a matter of taste, since there are obviously multiple working solutions. When I deal with a problem like this, I usually apply wc -l after grep to count the matching lines. That gives you a single integer you can evaluate in a test condition. If the only question is whether there is a match at all (the number of matching lines does not matter), then applying wc is probably overkill, and evaluating grep's return code seems to be the best solution:
Normally, the exit status is 0 if selected lines are found and 1
otherwise. But the exit status is 2 if an error occurred, unless the
-q or --quiet or --silent option is used and a selected line is found. Note, however, that POSIX only mandates, for programs such as grep,
cmp, and diff, that the exit status in case of error be greater than
1; it is therefore advisable, for the sake of portability, to use
logic that tests for this general condition instead of strict equality
with 2.
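Following that advice, a hedged sketch (the file name is a placeholder) that separates "no match" from "error" by testing for a status greater than 1 rather than strict equality with 2:

```shell
f=$(mktemp)
printf 'hello\n' > "$f"

grep --quiet XYZ "$f"
status=$?
if [ "$status" -eq 0 ]; then
    echo "match"
elif [ "$status" -eq 1 ]; then
    echo "no match"
else
    # Per POSIX, any status greater than 1 signals an error.
    echo "error"
fi

rm -f "$f"
```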

Related

Constructing an If statement based on the return of the environment $PATH piped through grep (with bash)

I've answered my own question in writing this, but it might be helpful for others as I couldn't find a straightforward answer anywhere else. Please delete if inappropriate.
I'm trying to construct an if statement depending on whether some <STRING> is found inside the environment variable $PATH.
When I pipe $PATH through grep I get a successful hit:
echo $PATH | grep -i "<STRING>"
But I was really struggling to find the syntax required to construct an if statement around this. It appears that the line below works. I know that $(...) substitutes the output of the inner commands, but I'm not sure why the [[...]] double brackets are needed:
if [[ $(echo $PATH | grep -i "<STRING>") ]]; then echo "HEY"; fi
Maybe someone could explain that for me to have a better understanding.
Thanks.
You could make better use of shell syntax. Something like this:
$ STRING="bin"
$ grep -i $STRING <<< $PATH && echo "HEY"
That is: first, save the search string in a variable (which I called STRING so it's easy to remember), and use that as the search pattern. Then, use the <<< redirection to input a "here string" - namely, the PATH variable.
Or, if you don't want a variable for the string:
$ grep -i "bin" <<< $PATH && echo "HEY"
Then, the construct && <some other command> means: IF the exit status of grep is 0 (meaning at least one successful match), THEN execute the "other command" (otherwise do nothing - just exit as soon as grep completes). This is the more common, more natural form of an "if... then..." statement, exactly of the kind you were trying to write.
Question for you, though: why the -i flag? That means case-insensitive matching. But in Unix and Linux, file names, command names, etc. are case sensitive. Why do you care if the PATH matches the string BIN? It will, because bin is somewhere on the path, but if you then search for the BIN directory you won't find one. (The more interesting question is how to match complete words only, so that for example a directory name bin must be present to match bin; sbin shouldn't match bin. But that's about writing regular expressions -- not quite what you were asking about.)
The following version - which doesn't even use grep - is based on the same idea, but it won't do case insensitive matching:
$ [[ $PATH == *$STRING* ]] && echo "HEY"
[[ ... ]] evaluates a Boolean expression (here, an equality using the * wildcard on the right-hand side); if true, && causes the execution of the echo command.
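For the "complete words only" question raised above, one common trick (a sketch of my own, not from the original answer) is to wrap the path in colons and test for a whole component between colons:

```shell
# PATH_DEMO stands in for $PATH so the example is self-contained.
PATH_DEMO="/usr/local/sbin:/usr/bin"

has_component() {
    # Wrapping both sides in ":" makes every entry look like ":dir:",
    # so "sbin" can never match an entry that is really "bin".
    case ":$PATH_DEMO:" in
        *":$1:"*) echo "HEY" ;;
        *)        echo "no"  ;;
    esac
}

has_component /usr/bin    # a complete component: matches
has_component /usr/sbin   # not a component here: no match
```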
You don't need to use [[ ]]; just:
if echo $PATH | grep -qi "<STRING>"; then echo "HEY"; fi

Multiple simultaneous patterns for grep

I need to see whether a user exists in /etc/passwd. I'm using grep, but I'm having a hard time passing multiple patterns to grep.
I tried
if [[ ! $( cat /etc/passwd | egrep "$name&/home" ) ]];then
#user doesn't exist, do something
fi
I used ampersand instead of | because both conditions must be true, but it's not working.
Try doing this:
$ getent passwd foo bar base
Finally:
if getent &>/dev/null passwd user_X; then
do_something...
else
do_something_else...
fi
Contrary to your assumptions, regex does not recognize & for intersection, even though it would be a logical extension.
To locate lines which match multiple patterns, try
grep -e 'pattern1.*pattern2' -e 'pattern2.*pattern1' file
to match the patterns in any order, or switch to e.g. Awk:
awk '/pattern1/ && /pattern2/' file
(though in your specific example, just "$name.*/home" ought to suffice because the matches must always occur in this order).
As an aside, your contorted if condition can be refactored to just
if grep -q pattern file; then ...
The if conditional takes a command as its argument, runs it, and examines its exit code. Any properly written Unix command follows this convention: it returns zero on success and a nonzero exit code otherwise. (Notice also the absence of a useless cat -- almost all commands accept a file name argument, and those which don't can be handled with redirection.)
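Applied back to the original /etc/passwd question, a self-contained sketch (using a stand-in passwd file rather than the real one, so the example is reproducible anywhere):

```shell
# A stand-in passwd file with one sample entry.
pwfile=$(mktemp)
printf 'alice:x:1000:1000::/home/alice:/bin/bash\n' > "$pwfile"
name="alice"

# One pattern covers both conditions, in the order they must occur,
# and grep's exit code drives the if directly -- no cat, no [[ ]].
if grep -q "^$name:.*:/home/" "$pwfile"; then
    echo "user exists"
else
    echo "user missing"
fi

rm -f "$pwfile"
```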

How to interrupt bash pipeline on error?

In the following example echo statement gets executed regardless of exit code of previous command in pipeline:
asemenov@cpp-01-ubuntu:~$
asemenov@cpp-01-ubuntu:~$ false|echo 123
123
asemenov@cpp-01-ubuntu:~$ true|echo 123
123
asemenov@cpp-01-ubuntu:~$
I want echo command to execute only on zero exit code of previous command, that is I want to achieve this behavior:
asemenov@cpp-01-ubuntu:~$ false|echo 123
asemenov@cpp-01-ubuntu:~$
Is it possible in bash?
Here is a more practical example:
asemenov@cpp-01-ubuntu:~$ find SomeNotExistingDir|xargs ls -1
find: `SomeNotExistingDir': No such file or directory
..
..
files list from my current directory
..
..
asemenov@cpp-01-ubuntu:~$
There is no reason to execute xargs ls -1 if find failed.
The components of a pipeline are always run unconditionally and logically in parallel; you cannot make the second (or a later) process in the pipeline run only if the first (or an earlier) process completes successfully.
In the specific case you show with find, you have at least two options:
find SomeNotExistingDir ... -exec ls -1 {} +
Or you can use a very useful feature of GNU xargs (not present in POSIX):
find SomeNotExistingDir ... | xargs -r ls -1
The -r option is equivalent to --no-run-if-empty, which describes fairly precisely what it does. If you're using GNU find and GNU xargs, you should use the extensions -print0 and -0:
find SomeNotExistingDir ... -print0 | xargs -r -0 ls -1
This handles every character that can appear in a file name correctly.
In terms of command flow, the easiest way to do what you want would be to use the logical OR operator, like this:
[pierrep@DEVELOPMENT8 ~]: false || echo 123
123
[pierrep@DEVELOPMENT8 ~]: true || echo 123
[pierrep@DEVELOPMENT8 ~]:
This works since the || operator is evaluated lazily, meaning that the right-hand command is only run when the left-hand command fails, i.e. exits with a nonzero status.
Note: commands return exit status 0 when successful and something other than 0 when they are not. In your example with find:
[pierrep@DEVELOPMENT8 ~]: find somedir || echo 123
find: `somedir': No such file or directory
123
[pierrep@DEVELOPMENT8 ~]: find .profile || echo 123
.profile
Using || won't redirect any kind of output from the command on the left of the ||.
If you want to run some command only when one succeeds you should just do a basic exit code check and temporarily store the output of one command in a variable in your script in order to feed it to the next command, like so:
result=( $(find SomeNotExistingDir) )
exit_code=$?
if [ $exit_code -eq 0 ]; then
    for path in "${result[@]}"; do
        # do some stuff with the find results here...
        echo "$path"
    done
fi
What this does: when find is run, it puts its results into the result array. $? holds the exit code of the last command run, so here it is the find command. If find found SomeNotExistingDir, then loop through its results (since it might have found multiple instances of it) and do stuff with those paths; else do nothing. The else branch is triggered when an error occurred during the execution of find or when the file/dir could not be found.
You can't do that with pipes, because pipe creation does not wait for completion -- otherwise, how could cat | tr 'a-z' 'A-Z' work? Simulating pipes with test and temp files:
file1=$(mktemp)
file2=$(mktemp)
false > "$file1" && (echo 123 > "$file2") < "$file1" && (prog3 > "$file1") < "$file2" && #....
rm "$file1" "$file2"
The point is that when the first command fails, there is no output for the second command and no reason to execute it; the current behavior gives an unexpected result.
If there is no output on stdout when the exit code is nonzero, then that fact by itself can be used to drive the pipeline; there is no need to check the exit code (except for the optimization part, of course).
For example, ignoring the optimization part and considering only correctness,
find SomeNotExistingDir|xargs ls -1
Can be changed to
find SomeNotExistingDir| while read x; do ls -1 "$x"; done
If find produces no output, the commands inside the while loop are never executed. The downside of this approach is that some information (like line numbers) will be lost for commands like awk/sed/head used in place of ls. Also, ls will be executed N times, instead of once as with the xargs approach.
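If filenames may contain spaces or other odd characters, a more defensive variant of the same loop (a sketch assuming GNU find's -print0 and bash's read -d ''):

```shell
# Fixture: a directory with two files, one with a space in its name.
dir=$(mktemp -d)
touch "$dir/a file" "$dir/b"

# NUL-delimited names survive intact; the loop body never runs
# if find prints nothing at all.
find "$dir" -type f -print0 |
while IFS= read -r -d '' x; do
    ls -1 "$x"
done

rm -rf "$dir"
```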
For the particular example you give, it's sufficient to simply check whether there is any data on the pipe. The problem you experience is that xargs is getting no input, so it invokes ls with no arguments, and ls by default prints the contents of the current directory. anishsane's solution is almost sufficient, but it is not quite the same, since it will invoke ls for each line of output, which is not at all what xargs does. However, you can do:
find /bad/path | xargs sh -c 'test $# = 0 || exec ls -1 "$#"'
Now, this pipeline will always succeed, and perhaps that is not desirable (this is the same behavior you get with just find /bad/path | xargs ls -1, though). To ensure that the pipeline fails, you can do:
find /bad/path | xargs sh -c 'test $# = 0 && exit 1; exec ls -1 "$#"'
There are some concerns, however. xargs will quite happily invoke its command with many arguments (that is the point of it!), but some shells handle a much smaller number of arguments than xargs, so it is quite possible that the shell will truncate them. However, that is possibly an academic concern.

Test file existence with exit status from gnu-find

When test -e file is not flexible enough, I tend to use the following Bash idiom to check the existence of a file:
if [ -n "$(find ${FIND_ARGS} -print -quit)" ] ; then
echo "pass"
else
echo "fail"
fi
But since I am only interested in a boolean value, are there any ${FIND_ARGS} that will let me do instead:
if find ${FIND_ARGS} ; ...
I'd say no. man find...
find exits with status 0 if all files are processed successfully, greater than 0 if errors occur. This is deliberately a very broad description, but if the return value is non-zero, you should not rely on the correctness of the results of find.
Testing for output is probably fine for find. That isn't a "Bash idiom". If that's not good enough and you have Bash available then you can use extglobs and possibly globstar for file matching tests with [[. Find should only be used for complex recursive file matching, or actual searching for files, and other things that can't easily be done with Bash features.
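A sketch of that Bash-only alternative, using globstar, nullglob, and an array (the directory layout is a placeholder; requires Bash 4+):

```shell
# Fixture: a small tree with two .txt files.
dir=$(mktemp -d)
mkdir -p "$dir/sub"
touch "$dir/a.txt" "$dir/sub/b.txt"

shopt -s globstar nullglob
matches=("$dir"/**/*.txt)      # recursive glob; empty array if no match
if (( ${#matches[@]} )); then
    echo "pass"
else
    echo "fail"
fi

rm -rf "$dir"
```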

How to prevent code/option injection in a bash script

I have written a small bash script called "isinFile.sh" for checking if the first term given to the script can be found in the file "file.txt":
#!/bin/bash
FILE="file.txt"
if [ `grep -w "$1" $FILE` ]; then
echo "true"
else
echo "false"
fi
However, running the script like
> ./isinFile.sh -x
breaks the script, since -x is interpreted by grep as an option.
So I improved my script
#!/bin/bash
FILE="file.txt"
if [ `grep -w -- "$1" $FILE` ]; then
echo "true"
else
echo "false"
fi
using -- as an argument to grep. Now running
> ./isinFile.sh -x
false
works. But is using -- the correct and only way to prevent code/option injection in bash scripts? I have not seen it in the wild, only found it mentioned in ABASH: Finding Bugs in Bash Scripts.
grep -w -- ...
prevents anything that follows the -- from being interpreted as an option.
EDIT
(I did not read the last part, sorry.) Yes, -- is the standard way. The other option is to keep the pattern from starting with a hyphen; e.g. ".{0}-x" works too (with extended regexps), though it is odd. So, e.g.
grep -wE ".{0}$1" ...
should work too.
There's actually another code injection (or whatever you want to call it) bug in this script: it simply hands the output of grep to the [ (aka test) command, and assumes that'll return true if it's not empty. But if the output is more than one "word" long, [ will treat it as an expression and try to evaluate it. For example, suppose the file contains the line 0 -eq 2 and you search for "0" -- [ will decide that 0 is not equal to 2, and the script will print false despite the fact that it found a match.
The best way to fix this is to use Ignacio Vazquez-Abrams' suggestion (as clarified by Dennis Williamson) -- this completely avoids the parsing problem, and is also faster (since -q makes grep stop searching at the first match). If that option weren't available, another method would be to protect the output with double-quotes: if [ "$(grep -w -- "$1" "$FILE")" ]; then (note that I also used $() instead of backquotes 'cause I find them much easier to read, and quotes around $FILE just in case it contains anything funny, like whitespace).
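Putting those fixes together, the whole script might look like this (a sketch; it keeps the file name from the question and relies only on grep's exit status):

```shell
#!/bin/bash
FILE="file.txt"
# -q: no output to parse, just an exit status;
# --: everything after it is a pattern, never an option.
if grep -qw -- "$1" "$FILE"; then
    echo "true"
else
    echo "false"
fi
```

Run as ./isinFile.sh -x: the -- keeps grep from treating -x as a flag, and -q sidesteps the [ parsing problem entirely.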
Though not applicable in this particular case, another technique can be used to prevent filenames that start with hyphens from being interpreted as options:
rm ./-x
or
rm /path/to/-x
