Bash exit and process substitution

I would like to exit from a script if a function fails. Normally this is not a problem, but it becomes a problem if you use process substitution.
$ cat test.sh
#!/bin/bash
foo(){
[ "$1" ] && echo pass || exit
}
read < <(foo 123)
read < <(foo)
echo 'SHOULD NOT SEE THIS'
$ ./test.sh
SHOULD NOT SEE THIS
Based on CodeGnome’s answer, this seems to work:
$ cat test.sh
#!/bin/bash
foo(){
[ "$1" ] && echo pass || exit
}
read < <(foo 123) || exit
echo 'SHOULD SEE THIS'
read < <(foo) || exit
echo 'SHOULD NOT SEE THIS'
$ ./test.sh
SHOULD SEE THIS

You can use set -e to exit the script on any failure. This is often sufficient on smaller scripts, but lacks granularity.
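For instance, a minimal sketch of the set -e approach (the commands here are hypothetical):
#!/bin/bash
set -e                     # exit immediately when a command fails (outside of conditions)
cp config.txt /tmp/backup/ # hypothetical command; if this fails, the script stops here
echo "backup done"         # only reached if the copy succeeded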
You can also check the return status of any command or function directly, or inspect the $? variable. For example:
foo () {
return 1
}
foo || { echo "Return status: $?"; exit 1; }

Related

Parse multiple echo values in bash script

I am trying to return a value from one script to another. However, the child script has multiple echos, so I am not sure how to retrieve a specific one in the parent script: if I try return_val=$(./script.sh), then return_val will hold multiple values. Any solution here?
script 1:
status=$(script2.sh)
if [ $status == "hi" ]; then
echo "success"
fi
script 2:
echo "blah"
status="hi"
echo $status
Solution 1) For this specific case, you could get the last line printed by script 2, using the tail -1 command, like this:
script1.sh
#!/bin/bash
status=$( ./script2.sh | tail -1 )
if [ $status == "hi" ]; then
echo "success"
fi
script2.sh
#!/bin/bash
echo "blah"
status="hi"
echo $status
The restriction is that it will only work for the cases where you need to check the last string printed by the called script.
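For example, tail -1 would pick up the wrong value if the called script printed anything after the status (the extra "done" line here is hypothetical):
$ printf 'blah\nhi\ndone\n' | tail -1
done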
Solution 2) If the previous solution doesn't apply to your case, you could prefix the specific string that you want to check with an identifier, as shown below:
script1.sh
#!/bin/bash
status=$( ./script2.sh | grep "^IDENTIFIER: " | cut -d':' -f 2 )
if [ $status == "hi" ]; then
echo "success"
fi
script2.sh
#!/bin/bash
echo "blah"
status="hi"
echo "IDENTIFIER: $status"
The grep "^IDENTIFIER: " command will filter the strings from the called script, and the cut -d':' -f 2 will split the "IDENTIFIER: hi" string and get the second field, separated by the ':' character.
You might capture the output of script2 into a bash array and access the element in the array you are interested in.
Contents of script2:
#!/bin/bash
echo "blah"
status="hi"
echo $status
Contents of script1:
#!/bin/bash
output_arr=( $(./script2) )
if [[ "${output_arr[1]}" == "hi" ]]; then
echo "success"
fi
Output:
$ ./script1
success
Script1:-
#!/bin/sh
cat > ed1 <<EOF
1p
q
EOF
# poll the status file: wait until "hi" shows up on its first line
next () {
[[ -z $(ed -s status < ed1 | grep "hi") ]] && main
[[ -n $(ed -s status < ed1 | grep "hi") ]] && end
}
# not ready yet: sleep a second and check again
main () {
sleep 1
next
}
# done: print the first line of the status file and stop
end () {
echo $(ed -s status < ed1)
exit 0
}
# kick off the polling loop
next
Script2:-
#!/bin/sh
echo "blah"
echo "hi" >> status

Bash script checkpoints

I am developing a big script whose skeleton looks like this:
#!/bin/bash
load_variables()
function_1()
function_2()
function_3()
[...]
function_n()
Each time the script starts, user flags are first loaded in the load_variables() function.
Then the script continues execution: function_1() => function_2() => [...] => function_n().
I need to implement checkpoints, which will be stored in log.txt.
Let's say the script has been stopped or crashed at function_2().
I want to save progress before each function starts, store it in log.txt, and when I re-run the script, call load_variables() and then jump to the crash point/checkpoint stored in log.txt.
How can I achieve that using bash?
I want to save progress before each function starts, store it in log.txt, and when I re-run the script, call load_variables() and then jump to the crash point/checkpoint stored in log.txt.
Do exactly that. But you can't "jump" in bash scripts; instead of jumping, just skip the functions that have already run, which you can track. Basically, in pseudocode:
load_variables() {
if [[ -e log.txt ]]; then
. log.txt
fi
}
already_run_functions=()
checkpoint() {
# if the function was already run
if "$1" in already_run_functions[#]; then
# skip this function
return
fi
{
# save state with function name
declare -f
declare -p
} > log.txt
# run it
if ! "$#"; then
# handle errors
exit 1
fi
already_run_functions+=("$1")
}
load_variables
checkpoint function1
checkpoint function2
checkpoint function3
Overall, it's shell; keep it simple. That said, it's often better to use a build system: plain Make is more than enough to track dependencies between multiple shell scripts and parallelize the work. Store the results in files after each task and split the functions across multiple files.
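As a rough sketch of that stamp-file idea in plain bash (the task names and commands below are hypothetical, not from the question):
#!/bin/bash
set -e
run_once() {                            # run a task only if its stamp file is missing
    local task=$1; shift
    [[ -e "stamp.$task" ]] && return 0  # already done on a previous run: skip it
    "$@"                                # run the task; set -e aborts the script if it fails
    touch "stamp.$task"                 # record success so a re-run skips this task
}
run_once download curl -o data.txt https://example.com/data.txt
run_once process ./process.sh data.txt
run_once report ./report.sh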
Back to the checkpoint approach, here is a real example:
rm -f log.txt
script() {
load_variables="
if [[ -e log.txt ]]; then
. log.txt
cd \"\$PWD\"
else
already_run_functions=()
fi
"
oneof() {
local i
for i in "${@:2}"; do
if [[ "$1" = "$i" ]]; then
return 0
fi
done
return 1
}
checkpoint() {
# if the function was already run
if oneof "$1" "${already_run_functions[@]}"; then
# skip this function
return
fi
{
# save state
declare -f
declare -p $(compgen -v | grep -Ev '^(BASH.*|EUID|FUNCNAME|GROUPS|PPID|SHELLOPTS|UID|SHELL|SHLVL|USER|TERM|RANDOM|PIPESTATUS|LINENO|COLUMN|LC_.*|LANG)$')
} > log.txt
# run it
if ! "$#"; then
# handle errors
echo "checkpoint: $1 failed"
exit 1
fi
already_run_functions+=("$1")
}
function1() {
echo function1
}
function2() {
echo function2
if [[ -e file ]]; then
return 1
fi
}
function3() {
echo function3
}
eval "$load_variables"
checkpoint function1
checkpoint function2
checkpoint function3
}
touch file
( script )
rm file
( script )
outputs:
function1
function2
checkpoint: function2 failed
function2
function3
This is an example with trap.
It saves the failing line to the log on error, using an ERR trap and $LINENO.
The log file is removed if everything finishes OK.
#!/bin/bash
trap 'clear_log' EXIT
trap 'log_checkpoint $LINENO' ERR
CHECKLOG=checkpoints.log
clear_log() {
if [ $? -eq 0 ] ; then
if [ -f "$CHECKLOG" ]; then
rm "$CHECKLOG"
fi
fi
}
log_checkpoint() {
func=$(sed -n $1p $0)
echo "Error on line $1: $func"
echo "$func" > $CHECKLOG
exit 1
}
retry(){
[ ! -f $CHECKLOG ] && return 0
if grep -q "$1" "$CHECKLOG"; then
echo retry "$1"; rm "$CHECKLOG"; return 0
else
echo skip "$1"; return 1
fi
}
func1(){
retry ${FUNCNAME[0]} || return 0
echo hello | grep hello
}
func2(){
retry ${FUNCNAME[0]} || return 0
echo hello | grep hello
}
func3(){
retry ${FUNCNAME[0]} || return 0
echo hello | grep foo
}
func1
func2
func3
exit 0
Here is my proposal, based on the answers above.
It can be helpful for people who need to trap Ctrl+C and other interruptions, as opposed to plain command errors:
#!/bin/bash
### Catch crash in trap and save the function name in anchor.log
trap 'echo $anchor > anchor.log && exit 1' SIGINT
trap 'echo $anchor > anchor.log && exit 1' SIGHUP
trap 'echo $anchor > anchor.log && exit 1' SIGKILL # note: SIGKILL cannot actually be trapped
trap 'echo $anchor > anchor.log && exit 1' SIGTERM
### ---
clear_log() {
### REMOVING LOG IF PROGRAM EXIT NORMALLY
if [ -f "anchor.log" ]; then
rm "anchor.log"
fi
}
anchor_check(){
### RETRY FUNCTION IF IT'S NOT IN anchor.log
anchor=$1
# Check if function name is inside "anchor.log"
[ ! -f "anchor.log" ] && return 0
if grep -q "$1" "anchor.log"; then
rm "anchor.log"; return 0
else
return 1
fi
}
### EACH FUNCTION STARTS WITH anchor_check
function_1() {
anchor_check "${FUNCNAME[0]}" || return 0
echo "test $anchor"
sleep 2
}
function_2() {
anchor_check "${FUNCNAME[0]}" || return 0
echo "test $anchor"
sleep 2
}
function_3() {
anchor_check "${FUNCNAME[0]}" || return 0
echo "test $anchor"
}
function_1
function_2
function_3
clear_log
Thank you guys for your help!

Shell Script: Exit call not working on using back quotes

I am using exit 1 to stop the shell script's execution when an error occurs.
Shell Script
test() {
mod=$(($1 % 10))
if [ "$mod" = "0" ]
then
echo "$i";
exit 1;
fi
}
for i in `seq 100`
do
val=`test "$i"`
echo "$val"
done
echo "It's still running"
Why is it not working? How can I stop the shell script execution?
The shell that exit is exiting is the one started by the command substitution, not the shell that starts the command substitution.
Try this:
for i in `seq 100`
do
val=`test "$i"` || exit
echo "$val"
done
echo "It's still running"
You need to explicitly check the exit code of the command substitution (which is passed through by the variable assignment) and call exit again if it is non-zero.
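A quick way to see that a plain assignment passes the command substitution's exit status through:
$ val=$(false); echo $?
1
$ val=$(true); echo $?
0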
Incidentally, you probably want to use return in a function rather than exit. Let the function caller decide what to do, unless the error is so severe that there is no logical alternative to exiting the shell:
test () {
if (( $1 % 10 == 0 )); then
echo "$i"
return 1
fi
}
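A minimal caller sketch for that version (the caller, not the function, decides that a non-zero return should end the script):
for i in $(seq 100)
do
val=$(test "$i") || { echo "$val"; exit 1; }
done
echo "not reached: the loop exits at i=10"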
The exit command terminates only the (sub)shell in which it is executed.
If you want to terminate the entire script, you have to check the exit status
($?) of the function and react accordingly:
#!/bin/bash
test() {
mod=$(($1 % 10))
if [ "$mod" -eq "0" ]
then
echo "$i";
exit 1;
fi
}
for i in `seq 100`
do
val=`test "$i"`
if [[ $? -eq 1 ]]
then
exit 1;
fi
echo "$val"
done
echo "It's still running"

Two styles of check return value in ksh

In the ksh shell, I want to check the return value after running a command. I've written two styles:
if [ $? -ne 0 ] ; then
echo "failed!"
exit 1
else
exit 0
fi
[ $? -ne 0 ] && echo "failed!" && exit 1
Are they equivalent? If not, what could I do if I want to write it in one line?
They're close, but not the same. First, the if will execute the exit 1 even if the echo failed for some reason; the chained expression won't. Also, the chained version lacks an equivalent of the else exit 0.
A better equivalent would be this:
[ $? -ne 0 ] && { echo "failed!"; exit 1; } || exit 0
This is tagged ksh, so you might find the numeric expression syntax cleaner:
(( $? )) && { echo "failed!"; exit 1; } || exit 0
But you can also write an if on one line, if you like:
if (( $? )); then echo "failed!"; exit 1; else exit 0; fi
If the command that you just ran above this expression in order to set $? is short, you may want to just use it directly as the if expression - with reversed clauses, since exit code 0 is true:
if grep -q "pattern" /some/filename; then exit 0; else echo "failed!"; exit 1; fi
It doesn't matter for this simple case, but in general you probably want to avoid echo. Instead, use printf - or if you don't mind being ksh-only, you can use print. The problem with echo is that it doesn't provide a portable way to deal with weird strings in variables:
$ x=-n
$ echo "$x"
$
While both printf and print do:
$ printf '%s\n' "$x"
-n
$ print - "$x"
-n
Again, not a problem here, or any time you're just printing out a literal string, but I found it was easier to train myself out of the echo habit entirely.

Determine if a function exists in bash

Currently I'm doing some unit tests which are executed from bash. Unit tests are initialized, executed and cleaned up in a bash script. This script usually contains init(), execute() and cleanup() functions. But they are not mandatory. I'd like to test whether they are defined or not.
I did this previously by grepping and sed-ing the source, but it seemed wrong. Is there a more elegant way to do this?
Edit: The following snippet works like a charm:
fn_exists()
{
LC_ALL=C type $1 | grep -q 'shell function'
}
Like this: [[ $(type -t foo) == function ]] && echo "Foo exists"
The built-in type command will tell you whether something is a function, built-in function, external command, or just not defined.
Additional examples:
$ LC_ALL=C type foo
bash: type: foo: not found
$ LC_ALL=C type ls
ls is aliased to `ls --color=auto'
$ which type
$ LC_ALL=C type type
type is a shell builtin
$ LC_ALL=C type -t rvm
function
$ if [ -n "$(LC_ALL=C type -t rvm)" ] && [ "$(LC_ALL=C type -t rvm)" = function ]; then echo rvm is a function; else echo rvm is NOT a function; fi
rvm is a function
The builtin bash command declare has an option -F that displays all defined function names. If given name arguments, it will display which of those functions exist, and if all do it will set status accordingly:
$ fn_exists() { declare -F "$1" > /dev/null; }
$ unset f
$ fn_exists f && echo yes || echo no
no
$ f() { return; }
$ fn_exists f && echo yes || echo no
yes
If declare is 10x faster than test, this would seem the obvious answer.
Edit: Below, the -f option is superfluous with BASH, feel free to leave it out. Personally, I have trouble remembering which option does which, so I just use both. -f shows functions, and -F shows function names.
#!/bin/bash
function_exists() {
declare -f -F $1 > /dev/null
return $?
}
function_exists function_name && echo Exists || echo No such function
The "-F" option to declare causes it to only return the name of the found function, rather than the entire contents.
There shouldn't be any measurable performance penalty for using /dev/null, and if it worries you that much:
fname=`declare -f -F $1`
[ -n "$fname" ] && echo Declare -f says $fname exists || echo Declare -f says $1 does not exist
Or combine the two, for your own pointless enjoyment. They both work.
fname=`declare -f -F $1`
errorlevel=$?
(( ! errorlevel )) && echo Errorlevel says $1 exists || echo Errorlevel says $1 does not exist
[ -n "$fname" ] && echo Declare -f says $fname exists || echo Declare -f says $1 does not exist
Borrowing from other solutions and comments, I came up with this:
fn_exists() {
# appended double quote is an ugly trick to make sure we do get a string -- if $1 is not a known command, type does not output anything
[ `type -t $1`"" == 'function' ]
}
Used as ...
if ! fn_exists $FN; then
echo "Hey, $FN does not exist ! Duh."
exit 2
fi
It checks if the given argument is a function, and avoids redirections and other grepping.
Dredging up an old post ... but I recently had use of this and tested both of the alternatives described above with:
test_declare () {
a () { echo 'a' ;}
declare -f a > /dev/null
}
test_type () {
a () { echo 'a' ;}
type a | grep -q 'is a function'
}
echo 'declare'
time for i in $(seq 1 1000); do test_declare; done
echo 'type'
time for i in $(seq 1 1000); do test_type; done
this generated:
declare
real 0m0.064s
user 0m0.040s
sys 0m0.020s
type
real 0m2.769s
user 0m1.620s
sys 0m1.130s
declare is a helluvalot faster!
Testing different solutions:
#!/bin/bash
test_declare () {
declare -f f > /dev/null
}
test_declare2 () {
declare -F f > /dev/null
}
test_type () {
type -t f | grep -q 'function'
}
test_type2 () {
[[ $(type -t f) = function ]]
}
funcs=(test_declare test_declare2 test_type test_type2)
test () {
for i in $(seq 1 1000); do $1; done
}
f () {
echo 'This is a test function.'
echo 'This has more than one command.'
return 0
}
post='(f is function)'
for j in 1 2 3; do
for func in "${funcs[@]}"; do
echo $func $post
time test $func
echo exit code $?; echo
done
case $j in
1) unset -f f
post='(f unset)'
;;
2) f='string'
post='(f is string)'
;;
esac
done
outputs e.g.:
test_declare (f is function)
real 0m0,055s user 0m0,041s sys 0m0,004s exit code 0
test_declare2 (f is function)
real 0m0,042s user 0m0,022s sys 0m0,017s exit code 0
test_type (f is function)
real 0m2,200s user 0m1,619s sys 0m1,008s exit code 0
test_type2 (f is function)
real 0m0,746s user 0m0,534s sys 0m0,237s exit code 0
test_declare (f unset)
real 0m0,040s user 0m0,029s sys 0m0,010s exit code 1
test_declare2 (f unset)
real 0m0,038s user 0m0,038s sys 0m0,000s exit code 1
test_type (f unset)
real 0m2,438s user 0m1,678s sys 0m1,045s exit code 1
test_type2 (f unset)
real 0m0,805s user 0m0,541s sys 0m0,274s exit code 1
test_declare (f is string)
real 0m0,043s user 0m0,034s sys 0m0,007s exit code 1
test_declare2 (f is string)
real 0m0,039s user 0m0,035s sys 0m0,003s exit code 1
test_type (f is string)
real 0m2,394s user 0m1,679s sys 0m1,035s exit code 1
test_type2 (f is string)
real 0m0,851s user 0m0,554s sys 0m0,294s exit code 1
So declare -F f seems to be the best solution.
It boils down to using 'declare' to either check the output or exit code.
Output style:
isFunction() { [[ "$(declare -Ff "$1")" ]]; }
Usage:
isFunction some_name && echo yes || echo no
However, if memory serves, redirecting to /dev/null is faster than output substitution (speaking of which, the awful and outdated `cmd` form should be banished in favor of $(cmd)). And since declare returns true/false depending on whether the name was found, since functions return the exit code of their last command (so an explicit return is usually unnecessary), and since checking the exit code is faster than checking a string value (even a null string):
Exit status style:
isFunction() { declare -Ff "$1" >/dev/null; }
That's probably about as succinct and benign as you can get.
From my comment on another answer (which I keep missing when I come back to this page)
$ fn_exists() { test x$(type -t $1) = xfunction; }
$ fn_exists func1 && echo yes || echo no
no
$ func1() { echo hi from func1; }
$ func1
hi from func1
$ fn_exists func1 && echo yes || echo no
yes
Invocation of a function if defined.
Known function name. Let's say the name is my_function, then use
[[ "$(type -t my_function)" == 'function' ]] && my_function;
# or
[[ "$(declare -fF my_function)" ]] && my_function;
Function's name is stored in a variable. If we declare func=my_function, then we can use
[[ "$(type -t $func)" == 'function' ]] && $func;
# or
[[ "$(declare -fF $func)" ]] && $func;
The same results with || instead of &&
(Such a logic inversion could be useful during coding)
[[ "$(type -t my_function)" != 'function' ]] || my_function;
[[ ! "$(declare -fF my_function)" ]] || my_function;
func=my_function
[[ "$(type -t $func)" != 'function' ]] || $func;
[[ ! "$(declare -fF $func)" ]] || $func;
Strict mode and precondition checks
We have set -e as strict mode.
We use || return in a precondition inside our function.
This forces our shell process to be terminated when the precondition fails.
# Set a strict mode for script execution. The essence here is "-e"
set -euf +x -o pipefail
function run_if_exists(){
my_function=$1
[[ "$(type -t $my_function)" == 'function' ]] || return;
$my_function
}
run_if_exists non_existing_function
echo "you will never reach this code"
The above is equivalent to
set -e
function run_if_exists(){
return 1;
}
run_if_exists
which kills your process.
Use || { true; return; } instead of || return; in preconditions to fix this.
[[ "$(type -t my_function)" == 'function' ]] || { true; return; }
This tells you whether the name exists, but not that it's a function:
fn_exists()
{
type $1 >/dev/null 2>&1;
}
This version checks that it really is a function:
fn_exists()
{
[[ $(type -t $1) == function ]] && return 0
}
update
isFunc ()
{
[[ $(type -t $1) == function ]]
}
$ isFunc isFunc
$ echo $?
0
$ isFunc dfgjhgljhk
$ echo $?
1
$ isFunc psgrep && echo yay
yay
$
I would improve it to:
fn_exists()
{
type $1 2>/dev/null | grep -q 'is a function'
}
And use it like this:
fn_exists test_function
if [ $? -eq 0 ]; then
echo 'Function exists!'
else
echo 'Function does not exist...'
fi
I particularly liked the solution from Grégory Joseph.
But I've modified it a little bit to avoid the ugly double-quote trick:
function is_executable()
{
typeset TYPE_RESULT="`type -t $1`"
if [ "$TYPE_RESULT" == 'function' ]; then
return 0
else
return 1
fi
}
It is possible to use 'type' without any external commands, but you have to call it twice, so it still ends up about twice as slow as the 'declare' version:
test_function () {
! type -f $1 >/dev/null 2>&1 && type -t $1 >/dev/null 2>&1
}
Plus this doesn't work in POSIX sh, so it's totally worthless except as trivia!
You can check for a function in 4 ways:
fn_exists() { type -t $1 >/dev/null && echo 'exists'; }
fn_exists() { declare -F $1 >/dev/null && echo 'exists'; }
fn_exists() { typeset -F $1 >/dev/null && echo 'exists'; }
fn_exists() { compgen -A function $1 >/dev/null && echo 'exists'; }
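Usage is the same for all four (my_func is a hypothetical name):
$ my_func() { :; }
$ fn_exists my_func
exists
$ fn_exists no_such_thing; echo $?
1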
