Exception handling in shell scripting? - bash

I'm looking for an exception handling mechanism in shell scripts. Is there any try/catch equivalent mechanism in shell scripting?

There is not really a try/catch in bash (I assume you're using bash), but you can achieve quite similar behaviour using && or ||.
In this example, you want to run fallback_command if a_command fails (returns a non-zero value):
a_command || fallback_command
And in this example, you want to execute second_command if a_command is successful (returns 0):
a_command && second_command
They can easily be mixed together by using a subshell. For example, the following command will execute a_command; if it succeeds it will then run other_command, but if a_command or other_command fails, fallback_command will be executed:
(a_command && other_command) || fallback_command
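For instance (a hypothetical example; the directory and archive names are made up), this mimics a try/catch around two steps:
# "try" to enter a directory and unpack an archive, "catch" any failure with a message
(cd /tmp/build && tar -xf sources.tar.gz) || echo "ERROR: build setup failed" >&2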

The if/else structure and exit codes can help you fake some of it. This should work in Bash or Bourne (sh).
if foo ; then
    :   # success, nothing to do (the colon is a no-op)
else
    e=$?   # exit code from foo
    if [ "${e}" -eq 1 ]; then
        echo "foo returned exit code 1"
    elif [ "${e}" -gt 1 ]; then
        echo "foo returned BAD exit code ${e}"
    fi
fi

{
    command_which_may_fail      # command which may fail and give an error
} || {
    fallback_command            # command which should be run instead of the failing command above
}
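A concrete (hypothetical) use of this pattern, where a privileged directory is tried first and a temporary one is used as the fallback:
{
    mkdir /var/lib/myapp
} || {
    echo "falling back to /tmp/myapp" >&2
    mkdir -p /tmp/myapp
}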

Here are two simple bash functions which enable event handling in bash.
You could use them for basic exception handling like this:
onFoo(){
    echo "onFoo() called with arg $1!"
}
foo(){
    [[ -f /tmp/somefile ]] || throw EXCEPTION_FOO_OCCURED "some arg"
}
addListener EXCEPTION_FOO_OCCURED onFoo
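The throw and addListener helpers themselves are not shown above; a minimal sketch of how such functions might be written (an assumption, not the original implementation; requires bash 4+ for associative arrays):
declare -A LISTENERS    # event name -> space-separated list of handler functions

addListener() {
    # register handler function $2 for event name $1
    LISTENERS["$1"]="${LISTENERS["$1"]} $2"
}

throw() {
    # call every handler registered for event $1, passing the remaining args
    local event="$1"; shift
    local handler
    for handler in ${LISTENERS["$event"]}; do
        "$handler" "$@"
    done
}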
Exception handling using try/catch blocks is not supported in bash; however, you might want to look at the BANGSH framework, which supports it (it's a bit like jQuery for bash).
However, exception handling without cascading try/catch blocks is similar to event handling, which is possible in almost any language with array support.
If you want to keep your code nice and tidy (without if/else verbosity), I would recommend using events.
The approach MatToufoutu suggests (using || and &&) is fine for simple commands, but not recommended for functions (see BashPitfalls about the risks).
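One classic example of such a risk, for illustration: in A && B || C, the fallback C also runs when A succeeds but B fails.
true && false || echo "fallback runs here too"   # echoes, even though true succeeded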

Use the following to handle errors properly; error_exit is a function that accepts one argument. If the argument is not passed, it reports "Unknown Error" instead, and $LINENO supplies the line number where the error actually occurred. Please experiment before using this in production:
#!/bin/bash
PROGNAME=$(basename "$0")

error_exit()
{
    echo "${PROGNAME}: ${1:-"Unknown Error"}" 1>&2
    exit 1
}

echo "Example of error with line number and message"
error_exit "$LINENO: An error has occurred."
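In practice you attach error_exit to the commands that may fail; a hedged example (the paths are purely illustrative):
cp /nonexistent/source.txt /tmp/dest.txt || error_exit "$LINENO: copy failed."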

Related

How to interpret if "$@" in bash?

I'm trying to understand this bash code, but I'm pretty new to this. I'm not sure how to interpret the following snippet; more specifically, I have doubts about the if "$@"; then line.
check() {
    LABEL=$1
    shift
    echo -e "\n🧪 Testing $LABEL"
    if "$@"; then
        echo "✅ Passed!"
        return 0
    else
        echoStderr "❌ $LABEL check failed."
        FAILED+=("$LABEL")
        return 1
    fi
}
I think it is just like the Python syntax for testing an empty list: the if succeeds when the tested command returns something. But I have doubts, because bash relies heavily on exit codes and I may be missing something. The use case is:
check "distro" lsb_release -c
check "color" [ $(cat /tmp/color.txt | grep red) ]
The snippets were taken from this repository.
Related question:
What does $@ mean in a bash script?

bash one-line conditional fails when using set -e

I started using set -e in my bash scripts,
and discovered that the short form of a conditional expression breaks script execution.
For example, the following line should check that $var is not empty:
[ -z "$var" ] && die "result is empty"
But it causes a silent exit from the script when $var has non-zero length.
I used this form of conditional expression in many places...
What should I do to make it run correctly? Rewrite everything with the if construct (which would be ugly)? Or abandon set -e?
Edit: Everybody is asking for the code. Here is a full [non-]working example:
#!/bin/bash
set -e

function check_me()
{
    ws="smth"
    [ -z "$ws" ] && echo " fail" && exit 1
}

echo "checking wrong thing"
check_me
echo "check finished"
I'd expect it to print both echoes, before and after the function call.
But it silently fails in the check_me function. Output is:
checking wrong thing
Use
[ -n "$var" ] || die "result is empty"
This way, the return value of the entire statement is true if $var is non-empty, so the ERR trap is not triggered.
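Note that die is not a bash built-in; it is presumably a helper defined elsewhere in the script. A minimal sketch of such a helper:
die() {
    # print the message to stderr and abort the script
    echo "$*" >&2
    exit 1
}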
I'm afraid you will have to rewrite everything so no false statements occur.
The definition of set -e is clear:
-e Exit immediately if a simple command (see SHELL GRAMMAR above) exits with a non-zero status. The shell does not exit if the command that fails is part of the command list immediately following a while or until keyword, part of the test in an if statement, part of a && or || list, or if the command's return value is being inverted via !. A trap on ERR, if set, is executed before the shell exits.
You are using the "optimization" system of Bash: because a false statement will cause an AND (&&) statement never to be true, bash knows it doesn't have to execute the second part of the line. However, this is a clever "abuse" of the system, not intended behaviour and therefore incompatible with set -e. You will have to rewrite everything so it is using proper ifs.
You should write your script such that no command ever exits with non-zero status.
In your command, [ -z "$var" ] can be true, in which case you call die, or false, in which case -e does its thing.
Either write it with if, as you say, or use something like this:
[ -z "$var" ] && die "result is empty" || true
I'd recommend if though.
What the bash help isn't very clear on is that only the last statement in an && or || chain is subject to causing an exit under set -e. foo && bar will exit if bar returns false, but not if foo returns false.
So your script should work... but it doesn't. Why?
It's not because of the failed -z test. It's because that failure makes the function return a non-zero status:
#!/bin/bash
set -e

function check_me()
{
    ws="smth"
    [ -z "$ws" ] && echo " fail" && exit 1
    # The line above fails, setting $? to 1.
    # The function now returns, returning 1!
}

echo "checking wrong thing"
check_me          # function returns 1, causing exit here
echo "check finished"
So there are multiple ways to fix this. You could add || true to the conditional inside the function, or to the line that calls check_me. But as others have pointed out, using || true has its own problems.
In this specific scenario, where the desired postcondition of check_me is "either this thing is valid or the script has exited", the straightforward thing to do is to write it like that, i.e. [[ -n "$ws" ]] || die "whatever".
But using && conditions will actually work fine with set -e in general, as long as you don't use such a conditional as the last thing in a function. You need to add an explicit true or return 0 or even : as a statement following such a conditional, unless you intend the function to return false when the condition fails.
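Applied to the example above, that advice might look like this (a sketch, not the original poster's code):
#!/bin/bash
set -e

check_me() {
    ws="smth"
    [ -z "$ws" ] && echo " fail" && exit 1
    return 0   # without this, the skipped && list makes the function return 1
}

echo "checking wrong thing"
check_me          # now returns 0, so set -e does not abort the script
echo "check finished"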

Elegant construct to test & branch according to the previous command's result

I am looking for an elegant way of testing the result of the last executed command and branching depending on the result. The construct I want to build upon is the following:
if /bin/true ; then
    do_stuff
fi
This works great when we want to test for success. But how can I test for failure? When I'm doing intensive error checking, what I want to do is the equivalent of this:
if ! /bin/true ; then
    do_error_handling
fi
The best solution I have found so far is to define two simple macros in my scripts:
onSuccess() { [[ $? -eq 0 ]]; }
onError () { [[ $? -ne 0 ]]; }
Then I can do stuff like this:
some_command
if onError ; then
    # error handling block
fi
But I would much prefer using something built into bash, so I don't have to replicate these macros in every script I write.
This "problem" of mine is largely about the style of code I want to write. I want to escape the repetitive if [ $? -ne 0 ] ugliness.
If you want to build upon:
if /bin/true ; then
    do_stuff
fi
Try:
if /bin/true ; then    # success
    :                  # do nothing (the colon is a no-op)
else                   # failure / error
    do_stuff
fi
Sometimes you can use this:
first_command && command_if_true || command_if_false
Refer to this for the associated pitfalls. Basically: command_if_true returning false breaks it.
Here is a workaround (I think):
( first_command && command_if_true || /bin/true ) || command_if_false
But whether it is worth it, or better looking than the straightforward tests, is subjective.
Basically,
if something
then :
    # nothing (the colon on the previous line is a no-op command)
else
    abcdef
fi
is equivalent to
( something || abcdef )
Found it. I already knew the || operator but I hadn't yet thought of combining it with braces.
/bin/false || {
    echo err handling
    echo more handling
}
or
/bin/true && {
    echo stuff on success
    echo more stuff
}
This construct does not aim to replace an if..else. It is only useful when you're interested in either success OR failure.

Get the exit code for a command in Bash and KornShell (ksh)

I want to write code like this:
command="some command"
safeRunCommand $command

safeRunCommand() {
    cmnd=$1
    $($cmnd)
    if [ $? != 0 ]; then
        printf "Error when executing command: '$command'"
        exit $ERROR_CODE
    fi
}
But this code does not work the way I want. Where did I make the mistake?
Below is the fixed code:
#!/bin/ksh
safeRunCommand() {
    typeset cmnd="$*"
    typeset ret_code

    echo cmnd=$cmnd
    eval $cmnd
    ret_code=$?
    if [ $ret_code != 0 ]; then
        printf "Error: [%d] when executing command: '$cmnd'" $ret_code
        exit $ret_code
    fi
}

command="ls -l | grep p"
safeRunCommand "$command"
Now if you look into this code, the few things that I changed are:
The use of typeset is not necessary, but it is good practice. It makes cmnd and ret_code local to safeRunCommand.
The use of ret_code is not necessary, but it is good practice to store the return code in some variable (and store it as soon as possible), so that you can use it later, as I did in printf "Error: [%d] when executing command: '$cmnd'" $ret_code.
Pass the command with quotes surrounding it, like safeRunCommand "$command". If you don't, then cmnd will get only the value ls and not ls -l. This is even more important if your command contains pipes.
You can use typeset cmnd="$*" instead of typeset cmnd="$1" if you want to keep the spaces. You can try both, depending on how complex your command argument is.
eval is used to evaluate the command string, so that a command containing pipes works correctly.
Note: do remember that some commands, such as grep, return 1 even though there isn't any error. If grep found something it will return 0, else 1.
I tested with both KornShell and Bash, and it worked fine. Let me know if you face issues running it.
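Regarding the note above about grep: a hedged sketch of one way to treat exit status 1 (no match) as non-fatal while still catching real failures (status greater than 1):
rc=0
out=$(ls -l | grep p) || rc=$?
if [ "$rc" -gt 1 ]; then
    echo "grep reported an error ($rc)" >&2
elif [ "$rc" -eq 1 ]; then
    echo "no match found (not treated as an error)"
else
    echo "$out"
fi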
Try
safeRunCommand() {
    "$@"
    if [ $? != 0 ]; then
        printf "Error when executing command: '$1'"
        exit $ERROR_CODE
    fi
}
It should be $cmd instead of $($cmd). It works fine with that on my box.
Your script works only for one-word commands, like ls. It will not work for "ls cpp". For this to work, replace cmd="$1"; $cmd with "$@". And do not run your script as command="some cmd"; safeRun command. Run it as safeRun some cmd.
Also, when you have to debug your Bash scripts, execute with '-x' flag. [bash -x s.sh].
There are several things wrong with your script.
Functions (subroutines) should be declared before attempting to call them. You probably want to return() but not exit() from your subroutine to allow the calling block to test the success or failure of a particular command. That aside, you don't capture 'ERROR_CODE' so that is always zero (undefined).
It's good practice to surround your variable references with curly braces, too. Your code might look like:
#!/bin/sh

command="/bin/date -u"      #...example only

safeRunCommand() {
    cmnd="$@"               #...ensure whitespace is passed and preserved
    $cmnd
    ERROR_CODE=$?           #...so we have it for the command we want
    if [ ${ERROR_CODE} != 0 ]; then
        printf "Error when executing command: '${command}'\n"
        exit ${ERROR_CODE}  #...consider 'return()' here
    fi
}

safeRunCommand $command
command="cp"
safeRunCommand $command
The normal idea would be to run the command and then use $? to get the exit code. However, sometimes you have multiple cases in which you need to get the exit code. For example, you might need to hide its output, but still return the exit code, or print both the exit code and the output.
ec() { [[ "$1" == "-h" ]] && { shift && eval $* > /dev/null 2>&1; ec=$?; echo $ec; } || eval $*; ec=$?; }
This will give you the option to suppress the output of the command you want the exit code for. When the output is suppressed for the command, the exit code will directly be returned by the function.
I personally like to put this function in my .bashrc file.
Below I demonstrate a few ways in which you can use this:
# In this example, the output for the command will be
# normally displayed, and the exit code will be stored
# in the variable $ec.
$ ec echo test
test
$ echo $ec
0
# In this example, the exit code is output
# and the output of the command passed
# to the `ec` function is suppressed.
$ echo "Exit Code: $(ec -h echo test)"
Exit Code: 0
# In this example, the output of the command
# passed to the `ec` function is suppressed
# and the exit code is stored in `$ec`
$ ec -h echo test
$ echo $ec
0
Solution to your code using this function
#!/bin/bash

if [[ "$(ec -h 'ls -l | grep p')" != "0" ]]; then
    echo "Error when executing command: 'grep p' [$ec]"
    exit $ec;
fi
You should also note that the exit code you see will be for the grep command, since it is the last command executed in the pipeline, not for the ls.
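If you also need the exit status of an earlier pipeline stage (the ls here), bash's PIPESTATUS array provides it; a brief sketch:
ls -l | grep p
statuses=("${PIPESTATUS[@]}")   # copy immediately; the next command overwrites PIPESTATUS
echo "ls exited with ${statuses[0]}, grep exited with ${statuses[1]}"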

Return an exit code without closing shell

I'd like to return an exit code from a BASH script that is called within another script, but could also be called directly. It roughly looks like this:
#!/bin/bash

dq2-get $1
if [ $? -ne 0 ]; then
    echo "ERROR: ..."
    # EXIT HERE
fi

# extract, do some stuff
# ...
Now in the line EXIT HERE the script should exit and return exit code 1. The problem is that
I cannot use return, because when I forget to source the script instead of calling it, return will not exit, and the rest of the script will be executed and mess things up.
I cannot use exit, because this closes the shell.
I cannot use the nice trick kill -SIGINT $$, because it doesn't allow returning an exit code.
Is there any viable alternative that I have overlooked?
The answer to the question title (rather than the body, which other answers have addressed) is:
Return an exit code without closing shell
(exit 33)
If you need to have -e active and still avoid exiting the shell with a non-zero exit code, then do:
(exit 33) && true
The true command is never executed but is used to build a compound command that is not exited by the -e shell flag.
That sets the exit code without exiting the shell (nor a sourced script).
For the more complex question of exiting (with a specific exit code) whether executed or sourced:
#!/bin/bash
#!/bin/bash
[ "$BASH_SOURCE" == "$0" ] &&
    echo "This file is meant to be sourced, not executed" &&
    exit 30

return 88
Will set an exit code of 30 (with an error message) if executed.
And an exit code of 88 if sourced.
It ends either the execution or the sourcing, in both cases without affecting the calling shell.
Use this instead of exit or return:
[ $PS1 ] && return || exit;
Works whether sourced or not.
You can use x"${BASH_SOURCE[0]}" == x"$0" to test if the script was sourced or called (false if sourced, true if called) and return or exit accordingly.
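A short sketch of that test (exit when executed directly, return when sourced):
if [ "x${BASH_SOURCE[0]}" == "x$0" ]; then
    exit 1      # the file was executed as a script
else
    return 1    # the file was sourced into the current shell
fi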
Another option is to use a function that holds the return values, and then either source the script (source processStatus.sh) or call it (./processStatus.sh). For example, consider a processStatus.sh script that needs to return a value to a stopProcess.sh script, but also needs to be callable separately from, say, the command line without using source (only the relevant parts are included):
Eg:
check_process ()
{
    if [ $1 -eq "50" ]
    then
        return 1
    else
        return 0
    fi
}
and
source processStatus.sh $1
RET_VALUE=$?
if [ $RET_VALUE -ne "0" ]
then
    exit 0
fi
You can use return if you use set -e in the beginning of the script.
If you just want to check if the function returned no errors, I'd rather suggest rewriting your code like this:
#!/bin/bash
set -e # exit the program if errors are encountered

dq2-get ()
{
    # define the function here
    # ...
    if [ $1 -eq 0 ]
    then
        return 0
    else
        return 255
        # Note that nothing will execute from this point on,
        # because `return` terminates the function.
    fi
}
# ...
# lots of code ...
# ...
# Now, the test:
# This won't exit the program.
if $(dq2-get $1); then
    echo "No errors, everything's fine"
else
    echo "ERROR: ..."
fi
# These commands execute anyway, no matter what
# `dq2-get $1` returns (i.e. {0..255}).
# extract, do some stuff
# ...
Now, the code above won't leave the program if the function dq2-get $1 returns an error. But invoking the function on its own will exit the program because of the set -e. The code below describes this situation:
# The function below will stop the program and exit
# if it returns anything other than `0`
# since `set -e` means stop if encountered any errors.
$(dq2-get $1)
# These commands execute ONLY if `dq2-get $1` returns `0`
# extract, do some stuff
# ...
Thanks for the question; my case was to source a file for some setup, but end the script and skip the setup actions if certain conditions were not met.
I had hit the issue of an attempt to use exit actually closing my terminal, and found myself here :D
After reviewing the options, for my specific case I just went with something like the below. I also think Deepak's answer is worth reviewing, in case that approach works for you.
if [ -z "$REQUIRED_VAR" ]; then
    echo "please check/set \$REQUIRED_VAR ..."
    echo "skipping logic"
else
    echo "starting logic"
    doStuff    # call the function (doStuff() would start a function definition)
    echo "completed logic"
fi
