Determine if a function exists in bash

Currently I'm doing some unit tests which are executed from bash. The unit tests are initialized, executed and cleaned up in a bash script. This script usually contains init(), execute() and cleanup() functions, but they are not mandatory. I'd like to test whether or not they are defined.
I did this previously by grepping and sed-ing the source, but that seemed wrong. Is there a more elegant way to do this?
Edit: The following snippet works like a charm:
fn_exists()
{
    LC_ALL=C type $1 | grep -q 'shell function'
}

Like this: [[ $(type -t foo) == function ]] && echo "Foo exists"
The built-in type command will tell you whether something is a function, a shell builtin, an external command, or simply not defined.
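For the question's use case, a minimal reusable check built on type -t might look like this (a sketch; init stands in for one of the optional hook functions):

fn_exists() {
    # type -t prints "function" for shell functions and nothing for unknown names
    [[ $(type -t "$1") == function ]]
}

fn_exists init && init    # run the hook only if the test script defined it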
Additional examples:
$ LC_ALL=C type foo
bash: type: foo: not found
$ LC_ALL=C type ls
ls is aliased to `ls --color=auto'
$ which type
$ LC_ALL=C type type
type is a shell builtin
$ LC_ALL=C type -t rvm
function
$ if [ -n "$(LC_ALL=C type -t rvm)" ] && [ "$(LC_ALL=C type -t rvm)" = function ]; then echo rvm is a function; else echo rvm is NOT a function; fi
rvm is a function

The bash builtin declare has an option -F that displays all defined function names. If given name arguments, it displays which of those functions exist, and sets its exit status accordingly:
$ fn_exists() { declare -F "$1" > /dev/null; }
$ unset f
$ fn_exists f && echo yes || echo no
no
$ f() { return; }
$ fn_exists f && echo yes || echo no
yes

If declare is 10x faster than test, this would seem the obvious answer.
Edit: The -f option below is superfluous in bash; feel free to leave it out. Personally, I have trouble remembering which option does which, so I just use both: -f shows functions, and -F shows function names.
#!/bin/bash
function_exists() {
    declare -f -F "$1" > /dev/null
    return $?
}
function_exists function_name && echo Exists || echo No such function
The "-F" option to declare causes it to only return the name of the found function, rather than the entire contents.
There shouldn't be any measurable performance penalty for redirecting to /dev/null, but if it worries you that much:
fname=`declare -f -F $1`
[ -n "$fname" ] && echo Declare -f says $fname exists || echo Declare -f says $1 does not exist
Or combine the two, for your own pointless enjoyment. They both work.
fname=`declare -f -F $1`
errorlevel=$?
(( ! errorlevel )) && echo Errorlevel says $1 exists || echo Errorlevel says $1 does not exist
[ -n "$fname" ] && echo Declare -f says $fname exists || echo Declare -f says $1 does not exist

Borrowing from other solutions and comments, I came up with this:
fn_exists() {
    # appended double quote is an ugly trick to make sure we do get a string -- if $1 is not a known command, type does not output anything
    [ `type -t $1`"" == 'function' ]
}
Used as ...
if ! fn_exists $FN; then
    echo "Hey, $FN does not exist! Duh."
    exit 2
fi
It checks if the given argument is a function, and avoids redirections and other grepping.
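A variant sketch that avoids the appended-quote trick by quoting the command substitution instead (an empty expansion then compares as an empty string rather than disappearing):

fn_exists() {
    [ "$(type -t "$1")" = function ]
}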

Dredging up an old post ... but I recently had use of this, and benchmarked the two alternatives described with:
test_declare () {
    a () { echo 'a' ;}
    declare -f a > /dev/null
}
test_type () {
    a () { echo 'a' ;}
    type a | grep -q 'is a function'
}
echo 'declare'
time for i in $(seq 1 1000); do test_declare; done
echo 'type'
time for i in $(seq 1 1000); do test_type; done
this generated:
declare
real 0m0.064s
user 0m0.040s
sys 0m0.020s
type
real 0m2.769s
user 0m1.620s
sys 0m1.130s
declare is a helluvalot faster!

Testing different solutions:
#!/bin/bash
test_declare () {
    declare -f f > /dev/null
}
test_declare2 () {
    declare -F f > /dev/null
}
test_type () {
    type -t f | grep -q 'function'
}
test_type2 () {
    [[ $(type -t f) = function ]]
}
funcs=(test_declare test_declare2 test_type test_type2)
test () {
    for i in $(seq 1 1000); do $1; done
}
f () {
    echo 'This is a test function.'
    echo 'This has more than one command.'
    return 0
}
post='(f is function)'
for j in 1 2 3; do
    for func in "${funcs[@]}"; do
        echo $func $post
        time test $func
        echo exit code $?; echo
    done
    case $j in
        1) unset -f f
           post='(f unset)'
           ;;
        2) f='string'
           post='(f is string)'
           ;;
    esac
done
outputs e.g.:
test_declare (f is function)
real 0m0,055s user 0m0,041s sys 0m0,004s exit code 0
test_declare2 (f is function)
real 0m0,042s user 0m0,022s sys 0m0,017s exit code 0
test_type (f is function)
real 0m2,200s user 0m1,619s sys 0m1,008s exit code 0
test_type2 (f is function)
real 0m0,746s user 0m0,534s sys 0m0,237s exit code 0
test_declare (f unset)
real 0m0,040s user 0m0,029s sys 0m0,010s exit code 1
test_declare2 (f unset)
real 0m0,038s user 0m0,038s sys 0m0,000s exit code 1
test_type (f unset)
real 0m2,438s user 0m1,678s sys 0m1,045s exit code 1
test_type2 (f unset)
real 0m0,805s user 0m0,541s sys 0m0,274s exit code 1
test_declare (f is string)
real 0m0,043s user 0m0,034s sys 0m0,007s exit code 1
test_declare2 (f is string)
real 0m0,039s user 0m0,035s sys 0m0,003s exit code 1
test_type (f is string)
real 0m2,394s user 0m1,679s sys 0m1,035s exit code 1
test_type2 (f is string)
real 0m0,851s user 0m0,554s sys 0m0,294s exit code 1
So declare -F f seems to be the best solution.

It boils down to using declare to check either the output or the exit code.
Output style:
isFunction() { [[ "$(declare -Ff "$1")" ]]; }
Usage:
isFunction some_name && echo yes || echo no
However, if memory serves, redirecting to /dev/null is faster than command substitution (speaking of which, the awful and outdated `cmd` form should be banished in favor of $(cmd)). And since declare returns true/false depending on whether the function was found, since a function returns the exit code of the last command it ran (so an explicit return is usually unnecessary), and since checking the exit code is faster than checking a string value (even a null string):
Exit status style:
isFunction() { declare -Ff "$1" >/dev/null; }
That's probably about as succinct and benign as you can get.
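Applied to the original init()/execute()/cleanup() scenario, one hedged way to use it is to install no-op defaults for whichever hooks are missing (the eval defines a stub dynamically; the hook names come from the question):

for hook in init execute cleanup; do
    isFunction "$hook" || eval "$hook() { :; }"
done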

From my comment on another answer (which I keep missing when I come back to this page):
$ fn_exists() { test x$(type -t $1) = xfunction; }
$ fn_exists func1 && echo yes || echo no
no
$ func1() { echo hi from func1; }
$ func1
hi from func1
$ fn_exists func1 && echo yes || echo no
yes

Invocation of a function if defined.
Known function name. Let's say the name is my_function, then use
[[ "$(type -t my_function)" == 'function' ]] && my_function;
# or
[[ "$(declare -fF my_function)" ]] && my_function;
Function's name is stored in a variable. If we declare func=my_function, then we can use
[[ "$(type -t $func)" == 'function' ]] && $func;
# or
[[ "$(declare -fF $func)" ]] && $func;
The same results with || instead of &&
(Such a logic inversion could be useful during coding)
[[ "$(type -t my_function)" != 'function' ]] || my_function;
[[ ! "$(declare -fF my_function)" ]] || my_function;
func=my_function
[[ "$(type -t $func)" != 'function' ]] || $func;
[[ ! "$(declare -fF $func)" ]] || $func;
Strict mode and precondition checks
Suppose we have set -e as a strict mode, and we use || return in a precondition inside a function. If the precondition fails, return propagates the non-zero status, and under set -e that terminates the whole shell process.
# Set a strict mode for script execution. The essence here is "-e"
set -euf +x -o pipefail

function run_if_exists(){
    my_function=$1
    [[ "$(type -t $my_function)" == 'function' ]] || return;
    $my_function
}
run_if_exists non_existing_function
echo "you will never reach this code"
The above is equivalent to
set -e
function run_if_exists(){
    return 1;
}
run_if_exists
which kills your process.
Use || { true; return; } instead of || return; in preconditions to fix this.
[[ "$(type -t my_function)" == 'function' ]] || { true; return; }

This tells you if it exists, but not whether it's a function:
fn_exists()
{
    type $1 >/dev/null 2>&1;
}

fn_exists()
{
    [[ $(type -t $1) == function ]] && return 0
}
update
isFunc ()
{
    [[ $(type -t $1) == function ]]
}
$ isFunc isFunc
$ echo $?
0
$ isFunc dfgjhgljhk
$ echo $?
1
$ isFunc psgrep && echo yay
yay
$

I would improve it to:
fn_exists()
{
    type $1 2>/dev/null | grep -q 'is a function'
}
And use it like this:
fn_exists test_function
if [ $? -eq 0 ]; then
    echo 'Function exists!'
else
    echo 'Function does not exist...'
fi

I particularly liked the solution from Grégory Joseph, but I've modified it a little to overcome the "double quote ugly trick":
function is_executable()
{
    typeset TYPE_RESULT="`type -t $1`"
    if [ "$TYPE_RESULT" == 'function' ]; then
        return 0
    else
        return 1
    fi
}

It is possible to use 'type' without any external commands, but you have to call it twice, so it still ends up about twice as slow as the 'declare' version:
test_function () {
    ! type -f $1 >/dev/null 2>&1 && type -t $1 >/dev/null 2>&1
}
Plus this doesn't work in POSIX sh, so it's totally worthless except as trivia!

You can check them in four ways (note that the first, type -t, succeeds for any existing command, not only functions):
fn_exists() { type -t $1 >/dev/null && echo 'exists'; }
fn_exists() { declare -F $1 >/dev/null && echo 'exists'; }
fn_exists() { typeset -F $1 >/dev/null && echo 'exists'; }
fn_exists() { compgen -A function $1 >/dev/null && echo 'exists'; }
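Usage against the question's optional hooks might look like this (assuming any of the definitions above; stdout is discarded since they also echo):

for fn in init execute cleanup; do
    fn_exists "$fn" >/dev/null || echo "$fn is not defined"
done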

Related

Bash script checkpoints

I am developing a big script whose skeleton looks like below:
#!/bin/bash
load_variables()
function_1()
function_2()
function_3()
[...]
function_n()
On each run, user flags are first loaded in the load_variables() function.
Then the script continues execution: function_1() => function_2() => [...] => function_n()
I need to implement checkpoints which will be stored in log.txt.
Let's say that the script has been stopped or crashed at function_2().
I want to save progress before each function starts, store it in log.txt, and when I re-run the script, call load_variables() and then jump to the crash point/checkpoint stored in log.txt.
How can I achieve that using bash?
I want to save progress before each function starts, store it in log.txt, and when I re-run the script, call load_variables() and then jump to the crash point/checkpoint stored in log.txt.
Do exactly that. But you can't "jump" in bash scripts. Instead of jumping, just skip functions that have already run, which you can track. Basically, in pseudocode:
load_variables() {
    if [[ -e log.txt ]]; then
        . log.txt
    fi
}
already_run_functions=()
checkpoint() {
    # if the function was already run (pseudocode)
    if "$1" in already_run_functions[@]; then
        # skip this function
        return
    fi
    {
        # save state with function name
        declare -f
        declare -p
    } > log.txt
    # run it
    if ! "$@"; then
        # handle errors
        exit 1
    fi
    already_run_functions+=("$1")
}
load_variables
checkpoint function1
checkpoint function2
checkpoint function3
Overall, it's shell; keep it simple. That said, it would be way better to use a build system: plain Make is more than enough to track dependencies between multiple shell scripts and to parallelize the work. Store the result in files after each task and distribute the functions across multiple files.
So here is a real example:
rm -f log.txt
script() {
    load_variables="
        if [[ -e log.txt ]]; then
            . log.txt
            cd \"\$PWD\"
        else
            already_run_functions=()
        fi
    "
    oneof() {
        local i
        for i in "${@:2}"; do
            if [[ "$1" = "$i" ]]; then
                return 0
            fi
        done
        return 1
    }
    checkpoint() {
        # if the function was already run
        if oneof "$1" "${already_run_functions[@]}"; then
            # skip this function
            return
        fi
        {
            # save state
            declare -f
            declare -p $(compgen -v | grep -Ev '^(BASH.*|EUID|FUNCNAME|GROUPS|PPID|SHELLOPTS|UID|SHELL|SHLVL|USER|TERM|RANDOM|PIPESTATUS|LINENO|COLUMN|LC_.*|LANG)$')
        } > log.txt
        # run it
        if ! "$@"; then
            # handle errors
            echo "checkpoint: $1 failed"
            exit 1
        fi
        already_run_functions+=("$1")
    }
    function1() {
        echo function1
    }
    function2() {
        echo function2
        if [[ -e file ]]; then
            return 1
        fi
    }
    function3() {
        echo function3
    }
    eval "$load_variables"
    checkpoint function1
    checkpoint function2
    checkpoint function3
}
touch file
( script )
rm file
( script )
outputs:
function1
function2
checkpoint: function2 failed
function2
function3
This is an example with trap: it saves the failing function's name on error using trap and $LINENO. The logfile is removed if everything exits OK.
#!/bin/bash
trap 'clear_log' EXIT
trap 'log_checkpoint $LINENO' ERR

CHECKLOG=checkpoints.log

clear_log() {
    if [ $? -eq 0 ] ; then
        if [ -f "$CHECKLOG" ]; then
            rm "$CHECKLOG"
        fi
    fi
}

log_checkpoint() {
    func=$(sed -n $1p $0)
    echo "Error on line $1: $func"
    echo "$func" > $CHECKLOG
    exit 1
}

retry(){
    [ ! -f $CHECKLOG ] && return 0
    if grep -q "$1" "$CHECKLOG"; then
        echo retry "$1"; rm "$CHECKLOG"; return 0
    else
        echo skip "$1"; return 1
    fi
}

func1(){
    retry ${FUNCNAME[0]} || return 0
    echo hello | grep hello
}
func2(){
    retry ${FUNCNAME[0]} || return 0
    echo hello | grep hello
}
func3(){
    retry ${FUNCNAME[0]} || return 0
    echo hello | grep foo
}

func1
func2
func3
exit 0
Here is my proposal, based on the above answers.
It can be helpful for people who need to trap Ctrl+C and other kinds of crashes, as opposed to errors:
#!/bin/bash
### Catch crashes in traps and save the current function name in anchor.log
trap 'echo $anchor > anchor.log && exit 1' SIGINT
trap 'echo $anchor > anchor.log && exit 1' SIGHUP
trap 'echo $anchor > anchor.log && exit 1' SIGKILL   # note: SIGKILL cannot actually be trapped; this line has no effect
trap 'echo $anchor > anchor.log && exit 1' SIGTERM
### ---

clear_log() {
    ### REMOVE LOG IF PROGRAM EXITS NORMALLY
    if [ -f "anchor.log" ]; then
        rm "anchor.log"
    fi
}

anchor_check(){
    ### RETRY FUNCTION IF IT'S NOT IN anchor.log
    anchor=$1
    # Check if the function name is inside "anchor.log"
    [ ! -f "anchor.log" ] && return 0
    if grep -q "$1" "anchor.log"; then
        rm "anchor.log"; return 0
    else
        return 1
    fi
}

### EACH FUNCTION STARTS WITH anchor_check
function_1() {
    anchor_check "${FUNCNAME[0]}" || return 0
    echo "test $anchor"
    sleep 2
}
function_2() {
    anchor_check "${FUNCNAME[0]}" || return 0
    echo "test $anchor"
    sleep 2
}
function_3() {
    anchor_check "${FUNCNAME[0]}" || return 0
    echo "test $anchor"
}

function_1
function_2
function_3
clear_log
Thank you guys for your help!

Shorthand for declaring a boolean value in bash

Say I have this bash function:
foobar_force(){
    foobar "$@" --force
}
foobar () {
    local is_force=$(test "$2" == "--force");
}
I am looking for two things. I would like to test whether the last argument is "--force", not just the second argument, if possible.
Also I am looking for some shorthand I can use to declare is_force as true or false.
I guess the only way I know how to do that is to use:
if [[ "$2" == "--force" ]]; then
local is_force="yes" ;
fi
Try this...
$ cat example.sh
foobar () {
    [[ "${@: -1}" == "--force" ]] && local is_force="true" || local is_force="false";
}
foobar_force(){
    foobar "$@" --force
}
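Since is_force then holds the literal strings true or false, which are also shell commands, a hedged follow-up sketch (not part of the original answer) is to branch on it directly:

foobar () {
    local is_force=false
    [[ "${@: -1}" == "--force" ]] && is_force=true
    if $is_force; then
        echo "force mode enabled"
    fi
}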

python3 argparse autocomplete fail

I found a strange thing when I use argparse in Python 3.
#!/usr/bin/env python3
import argparse

def create_parser():
    p = argparse.ArgumentParser(add_help=True)
    p.add_argument('-i', help='i parameter', required=True)
    p.add_argument('-m', help='m parameter', required=True)
    return p

if __name__ == '__main__':
    p = create_parser()
    n = p.parse_args()
    print(n)
When I launch it with
python3 ./script.py -i ./some_folder/some_file -m ./
bash autocompletion works with the '-i' parameter, but not with '-m'. If I rename '-m' to '-me', for example, everything works fine.
In bash I tried launching other commands with a '-m' parameter and they work; it fails only with argparse. Where can the mistake be here?
What happens here is that the autocompletion for the python3 command kicks in:
$ complete | grep python
complete -F _python python2
complete -F _python python3
complete -F _python python
The function _python that handles it should look like this:
$ type _python
_python is a function
_python ()
{
    local cur prev words cword;
    _init_completion || return;
    case $prev in
        -'?' | -h | --help | -V | --version | -c)
            return 0
        ;;
        -m)
            _python_modules "$1";
            return 0
        ;;
        -Q)
            COMPREPLY=($( compgen -W "old new warn warnall" -- "$cur" ));
            return 0
        ;;
        -W)
            COMPREPLY=($( compgen -W "ignore default all module once error" -- "$cur" ));
            return 0
        ;;
        !(?(*/)python*([0-9.])|-?))
            [[ $cword -lt 2 || ${words[cword-2]} != -@(Q|W) ]] && _filedir
        ;;
    esac;
    local i;
    for ((i=0; i < ${#words[@]}-1; i++ ))
    do
        if [[ ${words[i]} == -c ]]; then
            _filedir;
        fi;
    done;
    if [[ "$cur" != -* ]]; then
        _filedir 'py?([co])';
    else
        COMPREPLY=($( compgen -W '$( _parse_help "$1" -h )' -- "$cur" ));
    fi;
    return 0
}
The completion function treats the -m flag the same whether it shows up as an argument to python or to the script, so it tries to complete with a list of module names.
One way around this would be to use an alias for the python3 command that does not trigger the completion, e.g.:
$ alias py3=python3
To make this persistent you can put it in your ~/.bashrc. Then you can use
$ py3 ./script.py -i ./some_folder/some_file -m ./[TAB]
which will use filename completion.
Or rename the -m flag to something else.
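If you would rather keep typing python3, another option (an assumption about your setup, not from the answer above) is to remove the registered compspec so bash falls back to its default filename completion:

complete -r python3    # drop the _python completion registered for python3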

BASH: Why does `printf` return 1?

I have a problem that gaslights me.
Here comes my bash script "foo" reduced to the problem:
#!/bin/bash
function Args()
{
    [[ "$1" == "-n" ]] && [[ -d "$2" ]] && printf "%s\n" "new ${3}"
    [[ "$1" == "-p" ]] && [[ -d "$2" ]] && printf "%s\n" "page ${3}"
}
[[ $# -eq 3 ]] && Args "$@"
echo $?
Now when I execute this code, the following happens:
$ ./foo -n / bar
new bar
1
This, however, works:
$ ./foo -p / bar
page bar
0
Please, can anybody explain?
Sorry if this is a known "thing" and my googling skills need improvement...
It returns 1 in the first case only because the second condition:
[[ "$1" == "-p" ]] && [[ -d "$2" ]] && printf "%s\n" "page ${3}"
won't match/apply when you call your script as:
./foo -n / bar
Since that second set of conditions doesn't match, it returns 1: $? holds the most recent command's exit status, which here is the exit status of that second set of conditions.
When you call your script as:
./foo -p / bar
it returns status 0, since the second line gets executed and is also the most recently executed one.
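One hedged way to fix the script itself is to make the branches mutually exclusive, so the exit status of the matching printf is what the function reports (a sketch; note it also returns 0 when nothing matches):

function Args()
{
    if   [[ "$1" == "-n" && -d "$2" ]]; then printf "%s\n" "new ${3}"
    elif [[ "$1" == "-p" && -d "$2" ]]; then printf "%s\n" "page ${3}"
    fi
}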

Checking Bash exit status of several commands efficiently

Is there something similar to pipefail for multiple commands, like a 'try' statement but within bash? I would like to do something like this:
echo "trying stuff"
try {
command1
command2
command3
}
And at any point, if any command fails, drop out and echo out the error of that command. I don't want to have to do something like:
command1
if [ $? -ne 0 ]; then
    echo "command1 borked it"
fi
command2
if [ $? -ne 0 ]; then
    echo "command2 borked it"
fi
And so on... or anything like:
set -o pipefail
command1 "arg1" "arg2" | command2 "arg1" "arg2" | command3
because the arguments of each command, I believe (correct me if I'm wrong), will interfere with each other. These two methods seem horribly long-winded and nasty to me, so I'm here appealing for a more efficient method.
You can write a function that launches and tests the command for you. Assume command1 and command2 are environment variables that have been set to a command.
function mytest {
    "$@"
    local status=$?
    if (( status != 0 )); then
        echo "error with $1" >&2
    fi
    return $status
}
mytest "$command1"
mytest "$command2"
What do you mean by "drop out and echo the error"? If you mean you want the script to terminate as soon as any command fails, then just do
set -e # DON'T do this. See commentary below.
at the start of the script (but note warning below). Do not bother echoing the error message: let the failing command handle that. In other words, if you do:
#!/bin/sh
set -e # Use caution. eg, don't do this
command1
command2
command3
and command2 fails, while printing an error message to stderr, then it seems that you have achieved what you want. (Unless I misinterpret what you want!)
As a corollary, any command that you write must behave well: it must report errors to stderr instead of stdout (the sample code in the question prints errors to stdout) and it must exit with a non-zero status when it fails.
However, I no longer consider this to be a good practice. set -e has changed its semantics across different versions of bash, and although it works fine for a simple script, there are so many edge cases that it is essentially unusable. (Consider things like: set -e; foo() { false; echo should not print; }; foo && echo ok. The semantics here are somewhat reasonable, but if you refactor code into a function that relied on the option setting to terminate early, you can easily get bitten.) IMO it is better to write:
#!/bin/sh
command1 || exit
command2 || exit
command3 || exit
or
#!/bin/sh
command1 && command2 && command3
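For reference, the set -e edge case mentioned above is runnable as written (the comments describe the behavior under bash's documented && rules):

#!/bin/bash
set -e
foo() { false; echo "should not print"; }
# Because foo runs on the left-hand side of &&, set -e is suspended inside it,
# so the false does not abort the function. This prints both lines.
foo && echo ok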
I have a set of scripting functions that I use extensively on my Red Hat system. They use the system functions from /etc/init.d/functions to print green [ OK ] and red [FAILED] status indicators.
You can optionally set the $LOG_STEPS variable to a log file name if you want to log which commands fail.
Usage
step "Installing XFS filesystem tools:"
try rpm -i xfsprogs-*.rpm
next
step "Configuring udev:"
try cp *.rules /etc/udev/rules.d
try udevtrigger
next
step "Adding rc.postsysinit hook:"
try cp rc.postsysinit /etc/rc.d/
try ln -s rc.d/rc.postsysinit /etc/rc.postsysinit
try echo $'\nexec /etc/rc.postsysinit' >> /etc/rc.sysinit
next
Output
Installing XFS filesystem tools: [ OK ]
Configuring udev: [FAILED]
Adding rc.postsysinit hook: [ OK ]
Code
#!/bin/bash

. /etc/init.d/functions

# Use step(), try(), and next() to perform a series of commands and print
# [ OK ] or [FAILED] at the end. The step as a whole fails if any individual
# command fails.
#
# Example:
#     step "Remounting / and /boot as read-write:"
#     try mount -o remount,rw /
#     try mount -o remount,rw /boot
#     next

step() {
    echo -n "$@"
    STEP_OK=0
    [[ -w /tmp ]] && echo $STEP_OK > /tmp/step.$$
}

try() {
    # Check for `-b' argument to run command in the background.
    local BG=
    [[ $1 == -b ]] && { BG=1; shift; }
    [[ $1 == -- ]] && { shift; }

    # Run the command.
    if [[ -z $BG ]]; then
        "$@"
    else
        "$@" &
    fi

    # Check if command failed and update $STEP_OK if so.
    local EXIT_CODE=$?
    if [[ $EXIT_CODE -ne 0 ]]; then
        STEP_OK=$EXIT_CODE
        [[ -w /tmp ]] && echo $STEP_OK > /tmp/step.$$
        if [[ -n $LOG_STEPS ]]; then
            local FILE=$(readlink -m "${BASH_SOURCE[1]}")
            local LINE=${BASH_LINENO[0]}
            echo "$FILE: line $LINE: Command \`$*' failed with exit code $EXIT_CODE." >> "$LOG_STEPS"
        fi
    fi
    return $EXIT_CODE
}

next() {
    [[ -f /tmp/step.$$ ]] && { STEP_OK=$(< /tmp/step.$$); rm -f /tmp/step.$$; }
    [[ $STEP_OK -eq 0 ]] && echo_success || echo_failure
    echo
    return $STEP_OK
}
For what it's worth, a shorter way to write code to check each command for success is:
command1 || echo "command1 borked it"
command2 || echo "command2 borked it"
It's still tedious but at least it's readable.
An alternative is simply to join the commands together with && so that the first one to fail prevents the remainder from executing:
command1 &&
command2 &&
command3
This isn't the syntax you asked for in the question, but it's a common pattern for the use case you describe. In general the commands should be responsible for printing failures so that you don't have to do so manually (maybe with a -q flag to silence errors when you don't want them). If you have the ability to modify these commands, I'd edit them to yell on failure, rather than wrap them in something else that does so.
Notice also that you don't need to do:
command1
if [ $? -ne 0 ]; then
You can simply say:
if ! command1; then
And when you do need to check return codes, use an arithmetic context instead of [ ... -ne:
ret=$?
# do something
if (( ret != 0 )); then
Instead of creating runner functions or using set -e, use a trap:
trap 'echo "error"; do_cleanup failed; exit' ERR
trap 'echo "received signal to stop"; do_cleanup interrupted; exit' SIGQUIT SIGTERM SIGINT
do_cleanup () { rm tempfile; echo "$1 $(date)" >> script_log; }
command1
command2
command3
The trap even has access to the line number and the command line of the command that triggered it. The variables are $BASH_LINENO and $BASH_COMMAND.
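A minimal runnable sketch of that, using $LINENO inside the trap together with $BASH_COMMAND:

#!/bin/bash
trap 'echo "error on line $LINENO: $BASH_COMMAND" >&2; exit 1' ERR
echo "starting"
false    # fires the ERR trap, which reports this command and line
echo "never reached"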
Personally I much prefer to use a lightweight approach, as seen here:
yell() { echo "$0: $*" >&2; }
die() { yell "$*"; exit 111; }
try() { "$@" || die "cannot $*"; }
asuser() { sudo su - "$1" -c "${*:2}"; }
Example usage:
try apt-fast upgrade -y
try asuser vagrant "echo 'uname -a' >> ~/.profile"
I've developed an almost flawless try & catch implementation in bash that allows you to write code like:
try
echo 'Hello'
false
echo 'This will not be displayed'
catch
echo "Error in $__EXCEPTION_SOURCE__ at line: $__EXCEPTION_LINE__!"
You can even nest the try-catch blocks inside themselves!
try {
echo 'Hello'
try {
echo 'Nested Hello'
false
echo 'This will not execute'
} catch {
echo "Nested Caught (# $__EXCEPTION_LINE__)"
}
false
echo 'This will not execute too'
} catch {
echo "Error in $__EXCEPTION_SOURCE__ at line: $__EXCEPTION_LINE__!"
}
The code is a part of my bash boilerplate/framework. It further extends the idea of try & catch with things like error handling with backtrace and exceptions (plus some other nice features).
Here's the code that's responsible just for try & catch:
set -o pipefail
shopt -s expand_aliases
declare -ig __oo__insideTryCatch=0

# if try-catch is nested, then set +e before so the parent handler doesn't catch us
alias try="[[ \$__oo__insideTryCatch -gt 0 ]] && set +e;
           __oo__insideTryCatch+=1; ( set -e;
           trap \"Exception.Capture \${LINENO}; \" ERR;"
alias catch=" ); Exception.Extract \$? || "

Exception.Capture() {
    local script="${BASH_SOURCE[1]#./}"

    if [[ ! -f /tmp/stored_exception_source ]]; then
        echo "$script" > /tmp/stored_exception_source
    fi
    if [[ ! -f /tmp/stored_exception_line ]]; then
        echo "$1" > /tmp/stored_exception_line
    fi
    return 0
}

Exception.Extract() {
    if [[ $__oo__insideTryCatch -gt 1 ]]
    then
        set -e
    fi

    __oo__insideTryCatch+=-1

    __EXCEPTION_CATCH__=( $(Exception.GetLastException) )

    local retVal=$1
    if [[ $retVal -gt 0 ]]
    then
        # BACKWARDS COMPATIBLE WAY:
        # export __EXCEPTION_SOURCE__="${__EXCEPTION_CATCH__[(${#__EXCEPTION_CATCH__[@]}-1)]}"
        # export __EXCEPTION_LINE__="${__EXCEPTION_CATCH__[(${#__EXCEPTION_CATCH__[@]}-2)]}"
        export __EXCEPTION_SOURCE__="${__EXCEPTION_CATCH__[-1]}"
        export __EXCEPTION_LINE__="${__EXCEPTION_CATCH__[-2]}"
        export __EXCEPTION__="${__EXCEPTION_CATCH__[@]:0:(${#__EXCEPTION_CATCH__[@]} - 2)}"
        return 1 # so that we may continue with a "catch"
    fi
}

Exception.GetLastException() {
    if [[ -f /tmp/stored_exception ]] && [[ -f /tmp/stored_exception_line ]] && [[ -f /tmp/stored_exception_source ]]
    then
        cat /tmp/stored_exception
        cat /tmp/stored_exception_line
        cat /tmp/stored_exception_source
    else
        echo -e " \n${BASH_LINENO[1]}\n${BASH_SOURCE[2]#./}"
    fi

    rm -f /tmp/stored_exception /tmp/stored_exception_line /tmp/stored_exception_source
    return 0
}
Feel free to use, fork and contribute - it's on GitHub.
run() {
    "$@"
    local status=$?
    if [ $status -ne 0 ]; then
        echo "$* failed with exit code $status"
        return 1
    fi
    return 0
}

run command1 && run command2 && run command3
Sorry that I cannot comment on the first answer, but you should use a new instance to execute the command: cmd_output=$("$@")
#!/bin/bash
function check_exit {
    cmd_output=$("$@")
    local status=$?
    echo $status
    if [ $status -ne 0 ]; then
        echo "error with $1" >&2
    fi
    return $status
}

function run_command() {
    exit 1
}

check_exit run_command
For fish shell users who stumble on this thread.
Let foo be a function that does not "return" (echo) a value, but it sets the exit code as usual.
To avoid checking $status after calling the function, you can do:
foo; and echo success; or echo failure
And if it's too long to fit on one line:
foo; and begin
    echo success
end; or begin
    echo failure
end
You can use @john-kugelman's awesome solution found above on non-Red Hat systems by commenting out this line in his code:
. /etc/init.d/functions
Then paste the code below at the end. Full disclosure: this is just a direct copy & paste of the relevant bits of the above-mentioned file, taken from CentOS 7.
Tested on macOS and Ubuntu 18.04.
BOOTUP=color
RES_COL=60
MOVE_TO_COL="echo -en \\033[${RES_COL}G"
SETCOLOR_SUCCESS="echo -en \\033[1;32m"
SETCOLOR_FAILURE="echo -en \\033[1;31m"
SETCOLOR_WARNING="echo -en \\033[1;33m"
SETCOLOR_NORMAL="echo -en \\033[0;39m"

echo_success() {
    [ "$BOOTUP" = "color" ] && $MOVE_TO_COL
    echo -n "["
    [ "$BOOTUP" = "color" ] && $SETCOLOR_SUCCESS
    echo -n $" OK "
    [ "$BOOTUP" = "color" ] && $SETCOLOR_NORMAL
    echo -n "]"
    echo -ne "\r"
    return 0
}

echo_failure() {
    [ "$BOOTUP" = "color" ] && $MOVE_TO_COL
    echo -n "["
    [ "$BOOTUP" = "color" ] && $SETCOLOR_FAILURE
    echo -n $"FAILED"
    [ "$BOOTUP" = "color" ] && $SETCOLOR_NORMAL
    echo -n "]"
    echo -ne "\r"
    return 1
}

echo_passed() {
    [ "$BOOTUP" = "color" ] && $MOVE_TO_COL
    echo -n "["
    [ "$BOOTUP" = "color" ] && $SETCOLOR_WARNING
    echo -n $"PASSED"
    [ "$BOOTUP" = "color" ] && $SETCOLOR_NORMAL
    echo -n "]"
    echo -ne "\r"
    return 1
}

echo_warning() {
    [ "$BOOTUP" = "color" ] && $MOVE_TO_COL
    echo -n "["
    [ "$BOOTUP" = "color" ] && $SETCOLOR_WARNING
    echo -n $"WARNING"
    [ "$BOOTUP" = "color" ] && $SETCOLOR_NORMAL
    echo -n "]"
    echo -ne "\r"
    return 1
}
When I use ssh I need to distinguish between problems caused by connection issues and error codes of the remote command in errexit (set -e) mode. I use the following function:
# prepare environment on calling site:

rssh="ssh -o ConnectionTimeout=5 -l root $remote_ip"

function exit255 {
    local flags=$-
    set +e
    "$@"
    local status=$?
    set -$flags
    if [[ $status == 255 ]]
    then
        exit 255
    else
        return $status
    fi
}
export -f exit255
# callee:
set -e
set -o pipefail
[[ $rssh ]]
[[ $remote_ip ]]
[[ $( type -t exit255 ) == "function" ]]
rjournaldir="/var/log/journal"
if exit255 $rssh "[[ ! -d '$rjournaldir/' ]]"
then
    $rssh "mkdir '$rjournaldir/'"
fi
rconf="/etc/systemd/journald.conf"
if [[ $( $rssh "grep '#Storage=auto' '$rconf'" ) ]]
then
    $rssh "sed -i 's/#Storage=auto/Storage=persistent/' '$rconf'"
fi
$rssh systemctl reenable systemd-journald.service
$rssh systemctl is-enabled systemd-journald.service
$rssh systemctl restart systemd-journald.service
sleep 1
$rssh systemctl status systemd-journald.service
$rssh systemctl is-active systemd-journald.service
Checking status in a functional manner
assert_exit_status() {
    lambda() {
        local val_fd=$(echo $@ | tr -d ' ' | cut -d':' -f2)
        local arg=$1
        shift
        shift
        local cmd=$(echo $@ | xargs -E ':')
        local val=$(cat $val_fd)
        eval $arg=$val
        eval $cmd
    }
    local lambda=$1
    shift
    eval $@
    local ret=$?
    $lambda : <(echo $ret)
}
Usage:
assert_exit_status 'lambda status -> [[ $status -ne 0 ]] && echo Status is $status.' lls
Output
Status is 127
suppose
alias command1='grep a <<<abc'
alias command2='grep x <<<abc'
alias command3='grep c <<<abc'
either
{ command1 1>/dev/null || { echo "cmd1 fail"; /bin/false; } } && echo "cmd1 succeed" &&
{ command2 1>/dev/null || { echo "cmd2 fail"; /bin/false; } } && echo "cmd2 succeed" &&
{ command3 1>/dev/null || { echo "cmd3 fail"; /bin/false; } } && echo "cmd3 succeed"
or
{ { command1 1>/dev/null && echo "cmd1 succeed"; } || { echo "cmd1 fail"; /bin/false; } } &&
{ { command2 1>/dev/null && echo "cmd2 succeed"; } || { echo "cmd2 fail"; /bin/false; } } &&
{ { command3 1>/dev/null && echo "cmd3 succeed"; } || { echo "cmd3 fail"; /bin/false; } }
yields
cmd1 succeed
cmd2 fail
Tedious it is. But the readability isn't bad.
