Error check function in bash

In bash scripting, you can check for errors by doing something like this:
if some_command; then
printf 'some_command succeeded\n'
else
printf 'some_command failed\n'
fi
If you have a large script, constantly having to do this becomes very tedious and verbose. It would be nice to write a function so that all you have to do is pass the command, have it report whether the command succeeded or failed, and exit if it failed.
I thought about doing something like this:
function cmdTest() {
if $($@); then
echo OK
else
echo FAIL
exit
fi
}
So then it would be called by doing something like cmdTest ls -l, but when I do this I get:
ls -l: Command not found.
What is going on? How can I fix this?
Thank you.

Your cmdTest function is limited to simple commands (no pipes, no &&/||, no redirections) and their arguments. It's only a little more typing to use the far more robust
{ ... ;} && echo OK || { echo Fail; exit 1; }
where ... is to be replaced by any command line you might want to execute. For simple commands, the braces and trailing semi-colon are unnecessary, but in general you will need
them to ensure that your command line is treated as a single command for purposes of using &&.
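For example, a pipeline (file.txt here is just a placeholder) would be wrapped as:
{ grep "foo" file.txt | sort ;} && echo OK || { echo Fail; exit 1; }
The braces make the whole pipeline the left-hand operand of &&, so OK/Fail reflects the pipeline's overall exit status.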
If you tried to run something like
cmdTest grep "foo" file.txt || echo "foo not found"
your function would only run grep "foo" file.txt, and if your function failed, then echo "foo not found" would run. An attempt to pass the entire x || y construct to your function like
cmdTest 'grep "foo" file.txt || echo "foo not found"'
would fail even worse: the entire command string would be treated as a command name, which is not what you intended. Likewise, you cannot pass pipelines
cmdTest grep "foo" file.txt | grep "bar"
because the pipeline consists of two commands: cmdTest grep "foo" file.txt and grep "bar".
Shell (whether bash, zsh, POSIX sh or nearly any other kind) is simply not a language that is useful for passing arbitrary command lines as an argument to another function to be executed. The only workaround, using the eval command, is fragile at best and a security risk at worst, and cannot be discouraged strongly enough. (And no, I will not provide an example of how to use eval, even in a limited capacity.)

Use a function like this:
function cmdTest() { if "$@"; then echo "OK"; else return $?; fi; }
OR, to print Fail as well:
function cmdTest() { if "$@"; then echo "OK"; else ret=$?; echo "Fail"; return $ret; fi; }
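For example (assuming /nonexistent does not exist on your machine):
cmdTest ls -l            # prints the listing, then OK
cmdTest ls /nonexistent  # prints ls's error; the second variant also prints Fail and returns ls's exit code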

Related

bash: error handling and functions

I am trying to call a function in a loop and gracefully handle and continue when it throws.
If I omit the || handle_error it just stops the entire script as one would expect.
If I leave || handle_error there it will print foo is fine after the error and will not execute handle_error at all. This is also expected behavior; it's just how it works.
#!/bin/bash
set -e
things=(foo bar)
function do_something {
echo "param: $1"
# just throw on first loop run
# this statement is just a way to selectively throw
# not part of a real use case scenario where the command(s)
# may or may not throw
if [[ $1 == "foo" ]]; then
throw_error
fi
# this line should not be executed when $1 is "foo"
echo "$1 is fine."
}
function handle_error {
echo "$1 failed."
}
for thing in ${things[@]}; do
do_something $thing || handle_error $thing
done
echo "done"
yields
param: foo
./test.sh: line 12: throw_error: command not found
foo is fine.
param: bar
bar is fine.
done
what I would like to have is
param: foo
./test.sh: line 12: throw_error: command not found
foo failed.
param: bar
bar is fine.
done
Edit:
do_something doesn't really have to return anything. It's just an example of a function that throws. I could potentially remove it from the example source code because I will have no control over its content, nor do I want to, and testing each command in it for failure is not viable.
Edit:
You are not allowed to touch do_something logic. I stated this before, it's just a function containing a set of instructions that may throw an error. It may be a typo, it may be make failing in a CI environment, it may be a network error.
The solution I found is to save the function in a separate file and execute it in a sub-shell. The downside is that we lose all locals.
do-something.sh
#!/bin/bash
set -e
echo "param: $1"
if [[ $1 == "foo" ]]; then
throw_error
fi
echo "$1 is fine."
my-script.sh
#!/bin/bash
set -e
things=(foo bar)
function handle_error {
echo "$1 failed."
}
for thing in "${things[#]}"; do
./do-something.sh "$thing" || handle_error "$thing"
done
echo "done"
yields
param: foo
./do-something.sh: line 8: throw_error: command not found
foo failed.
param: bar
bar is fine.
done
If there is a more elegant way I will mark that as correct answer. Will check again in 48h.
Edit
Thanks to @PeterCordes' comment and this other answer, I found another solution that doesn't require separate files.
#!/bin/bash
set -e
things=(foo bar)
function do_something {
echo "param: $1"
if [[ $1 == "foo" ]]; then
throw_error
fi
echo "$1 is fine."
}
function handle_error {
echo "$1 failed with code: $2"
}
for thing in "${things[#]}"; do
set +e; (set -e; do_something "$thing"); error=$?; set -e
((error)) && handle_error "$thing" $error
done
echo "done"
correctly yields
param: foo
./test.sh: line 11: throw_error: command not found
foo failed with code: 127
param: bar
bar is fine.
done
#!/bin/bash
set -e
things=(foo bar)
function do_something() {
echo "param: $1"
ret_value=0
if [[ $1 == "foo" ]]; then
ret_value=1
elif [[ $1 == "fred" ]]; then
ret_value=2
fi
echo "$1 is fine."
return $ret_value
}
function handle_error() {
echo "$1 failed."
}
for thing in ${things[@]}; do
do_something $thing || handle_error $thing
done
echo "done"
See my comment above for the explanation. You can't test for a return value without creating a return value, which should be somewhat obvious. And || tests for a non-zero return value, basically, just as && tests for zero. The return value must be an integer between 0 and 255; it can't be a string, a float, etc.
http://tldp.org/LDP/abs/html/complexfunct.html
Functions return a value, called an exit status. This is analogous to
the exit status returned by a command. The exit status may be
explicitly specified by a return statement, otherwise it is the exit
status of the last command in the function (0 if successful, and a
non-zero error code if not). This exit status may be used in the
script by referencing it as $?. This mechanism effectively permits
script functions to have a "return value" similar to C functions.
So actually, foo there would have returned a 127 command not found error. I think. I'd have to test to see for sure.
[updated]
No, echo is the last command, as you can see from your output. And the outcome of the last echo is 0, so the function returns 0. So you want to dump this notion and go with something like trap, that's assuming you can't touch the internals of the function, which is odd.
echo fred; echo retval: $?
fred
retval: 0
What does set -e mean in a bash script?
-e Exit immediately if a command exits with a non-zero status.
But it's not very reliable and is considered bad practice; better to use:
trap 'do_something' ERR
http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_12_02.html see trap, that may do what you want. But not ||, unless you add returns.
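A minimal sketch of the trap approach (false is just a stand-in for a failing command):
#!/bin/bash
set -e
trap 'echo "command failed with status $?"' ERR
false               # the ERR trap fires here, then set -e exits the script
echo "not reached"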
try
if [[ "$1" == "foo" ]]; then
foo
fi
I wonder if it was trying to execute the command foo within the condition test?
from bash reference:
-e
Exit immediately if a pipeline (see Pipelines), which may consist of a single simple command (see Simple Commands), a list (see Lists),
or a compound command (see Compound Commands) returns a non-zero
status. The shell does not exit if the command that fails is part of
the command list immediately following a while or until keyword, part
of the test in an if statement, part of any command executed in a &&
or || list except the command following the final && or ||, any
command in a pipeline but the last, or if the command’s return status
is being inverted with !.
as you can see, if the error occurs within the test condition, then the script will continue, oblivious to it, and return 0.
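A two-line demonstration of that exemption:
set -e
if false; then echo "never"; fi
echo "still running"   # reached: a failure inside an if test does not trigger set -e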
--
Edit
So in response, I note that the docs continue:
If a compound command other than a subshell returns a non-zero status
because a command failed while -e was being ignored, the shell does
not exit.
Well, because your for loop is followed by the echo, there's no reason for an error to be thrown!

How do I conditionally redirect output to a file, depending on whether some script argument exists?

I have a script which I would like to use to process some data and input it into a chosen file.
(do some stuff)>$(hostname)_$1
This works fine if my argument is a file name, but what should I pass (or how should I change the script) if I want to output to the terminal, i.e. stdout?
This answer assumes that the question is:
How do I conditionally redirect output to a file, depending on whether some script argument exists?
In other words, how to reduce duplication in a bash function like this:
do_it() {
if [[ -z $1 ]]; then
some very
complicated
commands
else
{ some very
complicated
commands
} > "$(logdir)/$1"
fi
}
If that were the question, a simple solution is to conditionally redirect stdout inside a subshell:
do_it() {
(
if [[ -n $1 ]]; then exec > "$(logdir)/$1"; fi
some very
complicated
commands
)
}
The subshell (created with the parentheses) is necessary in order to limit the effect of the exec command to the scope of the function; otherwise, the redirect would continue when the function returned.
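For example, calling it both ways (logdir and build.log are the question's placeholders):
do_it              # output stays on the terminal
do_it build.log    # everything is redirected to "$(logdir)/build.log"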
This works well for me. If DUMP_FILE is empty things go to stdout otherwise to the file. It does the job without using explicit redirection, but just uses pipes and existing applications.
function stdout_or_file
{
local DUMP_FILE=${1:-}
if [ -z "${DUMP_FILE}" ]; then
cat
else
sed -n "w ${DUMP_FILE}"
fi
}
function foo()
{
local MSG=$1
echo "info: ${MSG}"
}
foo "bar" | stdout_or_file ${DUMP_FILE}
Of course, you can squeeze this also in one line
foo "bar" | if [ -z "${DUMP_FILE}" ]; then cat; else sed -n "w ${DUMP_FILE}"; fi
Besides sed -n "w ${DUMP_FILE}" another command that does the same is dd status=none of=${DUMP_FILE}

Nicer one-liner for exiting bash script with error message

I'd like to achieve the test with a cleaner, less ugly one-liner:
#!/bin/bash
test -d "$1" || (echo "Argument 1: '$1' is not a directory" 1>&2 ; exit 1) || exit 1
# ... script continues if $1 is directory...
Basically I am after something which does not duplicate the exit, and preferably does not spawn a sub-shell (and as a result should also look less ugly), but still fits in one line.
Without a subshell and without duplicating exit:
test -d "$1" || { echo "Argument 1: '$1' is not a directory" 1>&2 ; exit 1; }
You may also want to refer to Grouping Commands:
{}
{ list; }
Placing a list of commands between curly braces causes the list to be executed in the current shell context. No subshell is created. The
semicolon (or newline) following list is required.

Get the exit code for a command in Bash and KornShell (ksh)

I want to write code like this:
command="some command"
safeRunCommand $command
safeRunCommand() {
cmnd=$1
$($cmnd)
if [ $? != 0 ]; then
printf "Error when executing command: '$command'"
exit $ERROR_CODE
fi
}
But this code does not work the way I want. Where did I make the mistake?
Below is the fixed code:
#!/bin/ksh
safeRunCommand() {
typeset cmnd="$*"
typeset ret_code
echo cmnd=$cmnd
eval $cmnd
ret_code=$?
if [ $ret_code != 0 ]; then
printf "Error: [%d] when executing command: '$cmnd'" $ret_code
exit $ret_code
fi
}
command="ls -l | grep p"
safeRunCommand "$command"
Now if you look into this code, the few things that I changed are:
use of typeset is not necessary, but it is a good practice. It makes cmnd and ret_code local to safeRunCommand
use of ret_code is not necessary, but it is a good practice to store the return code in some variable (and store it ASAP), so that you can use it later like I did in printf "Error: [%d] when executing command: '$command'" $ret_code
pass the command with quotes surrounding the command like safeRunCommand "$command". If you don’t then cmnd will get only the value ls and not ls -l. And it is even more important if your command contains pipes.
you can use typeset cmnd="$*" instead of typeset cmnd="$1" if you want to keep the spaces. You can try both, depending upon how complex your command argument is.
eval is used to evaluate the command string, so that a command containing pipes can work fine
Note: Do remember that some commands return 1 even though there isn't any error, like grep: if grep found something it returns 0, else 1.
I had tested with KornShell and Bash. And it worked fine. Let me know if you face issues running this.
Try
safeRunCommand() {
"$#"
if [ $? != 0 ]; then
printf "Error when executing command: '$1'"
exit $ERROR_CODE
fi
}
It should be $cmnd instead of $($cmnd). It works fine with that on my box.
Your script works only for one-word commands, like ls. It will not work for "ls cpp". For this to work, replace cmd="$1"; $cmd with "$@". And, do not run your script as command="some cmd"; safeRun command. Run it as safeRun some cmd.
Also, when you have to debug your Bash scripts, execute with '-x' flag. [bash -x s.sh].
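For example, a trace looks roughly like this, with each command printed after expansion and prefixed by +:
$ bash -x s.sh
+ command='ls -l'
+ safeRunCommand ls -l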
There are several things wrong with your script.
Functions (subroutines) should be declared before attempting to call them. You probably want to return() but not exit() from your subroutine to allow the calling block to test the success or failure of a particular command. That aside, you don't capture 'ERROR_CODE' so that is always zero (undefined).
It's good practice to surround your variable references with curly braces, too. Your code might look like:
#!/bin/sh
command="/bin/date -u" #...Example Only
safeRunCommand() {
cmnd="$#" #...insure whitespace passed and preserved
$cmnd
ERROR_CODE=$? #...so we have it for the command we want
if [ ${ERROR_CODE} != 0 ]; then
printf "Error when executing command: '${command}'\n"
exit ${ERROR_CODE} #...consider 'return()' here
fi
}
safeRunCommand $command
command="cp"
safeRunCommand $command
The normal idea would be to run the command and then use $? to get the exit code. However, sometimes you have multiple cases in which you need to get the exit code. For example, you might need to hide its output, but still return the exit code, or print both the exit code and the output.
ec() { [[ "$1" == "-h" ]] && { shift && eval $* > /dev/null 2>&1; ec=$?; echo $ec; } || eval $*; ec=$?; }
This will give you the option to suppress the output of the command you want the exit code for. When the output is suppressed for the command, the exit code will directly be returned by the function.
I personally like to put this function in my .bashrc file.
Below I demonstrate a few ways in which you can use this:
# In this example, the output for the command will be
# normally displayed, and the exit code will be stored
# in the variable $ec.
$ ec echo test
test
$ echo $ec
0
# In this example, the exit code is output
# and the output of the command passed
# to the `ec` function is suppressed.
$ echo "Exit Code: $(ec -h echo test)"
Exit Code: 0
# In this example, the output of the command
# passed to the `ec` function is suppressed
# and the exit code is stored in `$ec`
$ ec -h echo test
$ echo $ec
0
Solution to your code using this function
#!/bin/bash
if [[ "$(ec -h 'ls -l | grep p')" != "0" ]]; then
echo "Error when executing command: 'grep p' [$ec]"
exit $ec;
fi
You should also note that the exit code you will be seeing will be for the grep command that's being run, as it is the last command being executed. Not the ls.
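If you do need the exit status of an earlier stage, bash records each stage's status in the PIPESTATUS array (it must be read immediately after the pipeline):
ls -l | grep p
echo "ls: ${PIPESTATUS[0]}, grep: ${PIPESTATUS[1]}"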

How to get exit status of piped command from inside the pipeline?

Consider I have the following command line: do-things arg1 arg2 | progress-meter "Doing things...";, where progress-meter is a bash function I want to implement. It should print Doing things... before running do-things arg1 arg2 or in parallel (so it will be printed at the very beginning in any case), record the stdout+stderr of the do-things command, and check its exit status. If the exit status is 0, it should print [ OK ]; otherwise it should print [FAIL] and dump the recorded output.
Currently I have things done using progress-meter "Doing things..." "do-things arg1 arg2";, and evaluating second argument inside, which is clumsy and I don't like that and believe there is better solution.
The problem with the pipe syntax is that I don't know how I can get do-things' exit status from inside the pipeline. $PIPESTATUS seems to be useful only after all commands in the pipeline have finished.
Maybe process substitution like progress-meter "Doing things..." <(do-things arg1 arg2); will be fine, but in this case I also don't know how I can get the exit status of do-things.
I'll be happy to hear if there is some other neat syntax to achieve the same task without escaping the command to be executed like in my example.
I greatly hope for the community's help.
UPD1: As the question seems not to be clear enough, I'll paraphrase it:
I want a bash function that can be fed a command, which will execute in parallel to the function; the function will receive its stdout+stderr, wait for completion, and get its exit status.
Example implementation using evals:
progress_meter() {
local cmd="$2";   # the command string to evaluate; this assignment is implied by the usage below but missing from the snippet
local output;
local errcode;
echo -n -e "$1";
output=$( {
eval "${cmd}";
} 2>&1; );
errcode=$?;
if (( errcode )); then {
echo '[FAIL]';
echo "Output was: ${output}"
} else {
echo '[ OK ]';
}; fi;
}
So this can be used as progress_meter "Do things..." "do-things arg1 arg2". I want the same without eval.
Why eval things? Assuming you have one fixed argument to progress-meter, you can do something like:
#!/bin/bash
# progress meter
prompt="$1"
shift
echo "$prompt"
"$#" # this just executes a command made up of
# arguments 2, 3, ... of the script
# the real script should actually read its input,
# display progress meter etc.
and call it
$ progress-meter "Doing stuff" do-things arg1 arg2
If you insist on putting progress-meter in a pipeline, I'm afraid your best bet is something like
(do-things arg1 arg2 ; echo $?) | progress-meter "Doing stuff"
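A sketch of how the receiving function could consume that trailing status line (the name follows the question's progress_meter example; capturing stderr too would need 2>&1 inside the parentheses):
progress_meter() {
    echo "$1"                        # printed immediately, before the blocking cat
    local output status
    output=$(cat)                    # read everything the left side of the pipe wrote
    status=${output##*$'\n'}         # the last line is the echoed exit code
    if [[ $output == *$'\n'* ]]; then
        output=${output%$'\n'*}      # strip the status line off the recorded output
    else
        output=''                    # the command produced no output of its own
    fi
    if (( status == 0 )); then
        echo '[ OK ]'
    else
        echo '[FAIL]'
        echo "Output was: ${output}"
    fi
}
It would be called as (do-things arg1 arg2 2>&1; echo $?) | progress_meter "Doing things...".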
I'm not sure I understand what exactly you're trying to achieve,
but you could check the pipefail option:
pipefail
If set, the return value of a pipeline is the
value of the last (rightmost) command to exit
with a non-zero status, or zero if all commands
in the pipeline exit successfully. This option
is disabled by default.
For example:
bash-4.1 $ ls no_such_a_file 2>&- | : && echo ok: $? || echo ko: $?
ok: 0
bash-4.1 $ set -o pipefail
bash-4.1 $ ls no_such_a_file 2>&- | : && echo ok: $? || echo ko: $?
ko: 2
Edit: I just read your comment on the other post. Why don't you just handle the error?
bash-4.1 $ ls -d /tmp 2>&- || echo failed | while read; do [[ $REPLY == failed ]] && echo failed || echo "$REPLY"; done
/tmp
bash-4.1 $ ls -d /tmpp 2>&- || echo failed | while read; do [[ $REPLY == failed ]] && echo failed || echo "$REPLY"; done
failed
Have your scripts in the pipeline communicate by proxy (much like the Blackboard Pattern: some guy writes on the blackboard, another guy reads it):
Modify your do-things script so that it reports its exit status to a file somewhere.
Modify your progress-meter script to read that file, using command line switches if you like so as not to hardcode the name of the blackboard file, for reporting the exit status of the program that it is reporting the progress for.
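A minimal sketch of that arrangement (the temp file is an arbitrary choice here; the parent shell reads the status only after the pipeline finishes, so there is no read-before-write race):
status_file=$(mktemp)
{ do-things arg1 arg2; echo $? > "$status_file"; } | progress-meter "Doing things..."
read -r status < "$status_file"
rm -f "$status_file"
echo "do-things exited with $status"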
