I have a bash script that runs a series of python scripts. It always runs all of them, but exits with a failure code if any script exited with a failure code. At least that's what I hope it does. Here it is ...
#!/bin/bash
res=0
for f in scripts/*.py
do
python "$f";
res=$(( $res | $? ))
done
exit $res
I'd like to run this as a bash command in the terminal, but I can't work out how to replace exit so that the command fails like a failed script rather than exiting the terminal. How do I do that?
Replace your last line exit $res with
$(exit ${res})
This exits the spawned subshell with the exit value of ${res} and because it is the last statement, this is also the exit value of your script.
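For example, the whole loop can be run as a one-liner in an interactive shell; a minimal sketch, assuming a scripts/ directory of .py files as in the question:
res=0
for f in scripts/*.py; do python "$f"; res=$(( res | $? )); done
$(exit $res)   # sets $? to $res without closing the terminal
echo $?        # the combined status, as a failed script would report it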
Bash doesn't have a concept of anonymous functions (as in, e.g., Go) that you can define inline and get the return value of; you need to do it explicitly. Wrap the whole code in a function, say f():
f() {
local res=0
for f in scripts/*.py
do
python "$f";
res=$(( $res | $? ))
done
return $res
}
and use the exit code on the command line:
if ! f; then
printf '%s\n' "one more python scripts failed"
fi
If it's true that the value of the error code doesn't matter, then I have another solution:
#!/bin/bash
total=0
errcount=0
for f in scripts/*.py
do
let total++
python "$f" || let errcount++
done
if ((errcount))
then
printf '%u of %u scripts failed\n' $errcount $total >&2
exit 1
fi
exit 0
#!/bin/bash
for f in scripts/*.py
do
python "$f" && echo "1" >> stack || echo "0" >> stack
done
[ $(grep -o stack) -eq 0 ] && rm -v ./stack && exit 1
I am rather stoned at the moment, so I apologise if I am misinterpreting, but I believe that this will do what you need. Every time a python script returns an exit code of 0 (which means it worked), a 1 is echoed into a stack file; otherwise a 0 is echoed. At the end of the loop, the stack file is checked for the presence of a 0, and if one is found, the script exits with an error code of 1, which is the code for general errors.
I am trying to detect whenever the following script (random_fail.sh) fails --which happens rarely-- by running it inside a while loop in the second script (catch_error.sh):
#!/usr/bin/env bash
# random_fail.sh
n=$(( RANDOM % 100 ))
if [[ $n -eq 42 ]]; then
echo "Something went wrong"
>&2 echo "The error was using magic numbers"
exit 1
fi
echo "Everything went according to plan"
#!/usr/bin/env bash
# catch_error.sh
count=0 # The number of times before failing
error=0 # assuming everything initially ran fine
while [ "$error" != 1 ]; do
# running till non-zero exit
# writing the error code from the random_fail script into /tmp/error
bash ./random_fail.sh 1>/tmp/msg 2>/tmp/error
# reading from the file, assuming 0 written inside most of the times
error="$(cat /tmp/error)"
echo "$error"
# updating the count
count=$((count + 1))
done
echo "random_fail.sh failed!: $(cat /tmp/msg)"
echo "Error code: $(cat /tmp/error)"
echo "Ran ${count} times, before failing"
I was expecting that the catch_error.sh will read from /tmp/error and come out of the loop once a particular run of random_fail.sh exits with 1.
Instead, the catch script seems to be running forever. I think this is because the error code is not being redirected to the /tmp/error file at all.
Please help.
You aren't catching the error code in the proper/usual manner. Also, there is no need to prefix the execution with the bash command when the script already contains a shebang. Lastly, I'm curious why you don't simply use #!/bin/bash instead of #!/usr/bin/env bash.
Your second script should be modified to look like this:
#!/usr/bin/env bash
# catch_error.sh
count=0 # The number of times before failing
error=0 # assuming everything initially ran fine
while [ "$error" != 1 ]; do
# running till non-zero exit
# run the script, then capture its exit code from $?
./random_fail.sh 1>/tmp/msg 2>/tmp/error
error=$?
echo "$error"
# updating the count
count=$((count + 1))
done
echo "random_fail.sh failed!: $(cat /tmp/msg)"
echo "Error code: ${error}"
echo "Ran ${count} times, before failing"
[ "$error" != 1 ] is true if random_fail.sh prints a lone digit 1 to stderr. As long as this doesn't happen, your script will loop. You could instead test whether there has been written anything to stderr. There are several possibilities to achieve this:
printf '' >/tmp/error
while [[ ! -s /tmp/error ]]
or
error=
while (( ${#error} == 0 ))
or
error=
while [[ -z $error ]]
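Putting the first variant to work, the loop could look like this sketch (same /tmp paths as in the question):
#!/usr/bin/env bash
count=0
printf '' >/tmp/error               # truncate the error file first
while [[ ! -s /tmp/error ]]; do     # loop while /tmp/error is still empty
    ./random_fail.sh 1>/tmp/msg 2>/tmp/error
    count=$((count + 1))
done
echo "Ran ${count} times, before failing"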
/tmp/error will always be either empty or will contain the line "The error was using magic numbers". It will never contain 0 or 1. If you want to know the exit value of the script, just check it directly:
if ./random_fail.sh 1>/tmp/msg 2>/tmp/error; then error=1; else error=0; fi
Or, you can do:
./random_fail.sh 1>/tmp/msg 2>/tmp/error
error=$?
But don't do either of those. Just do:
while ./random_fail.sh; do ...; done
As long as random_fail.sh (please read https://www.talisman.org/~erlkonig/documents/commandname-extensions-considered-harmful/ and stop naming your scripts with a .sh suffix) returns 0, the loop body will be entered. When it returns non-zero, the loop terminates.
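Applied to the question's files, the whole of catch_error.sh then shrinks to something like this sketch:
#!/usr/bin/env bash
# catch_error.sh
count=0
while ./random_fail.sh 1>/tmp/msg 2>/tmp/error; do
    count=$((count + 1))    # counts the successful runs
done
echo "random_fail.sh failed!: $(cat /tmp/msg)"
echo "Error message: $(cat /tmp/error)"
echo "Ran ${count} times, before failing"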
The following script calls another program reading its output in a while loop (see Bash - How to pipe input to while loop and preserve variables after loop ends):
while read -r col0 col1; do
# [...]
done < <(other_program [args ...])
How can I check for the exit code of other_program to see if the loop was executed properly?
Note: ls -d / /nosuch is used as an example command below, because it fails (exit code 1) while still producing stdout output (/) (in addition to stderr output).
Bash v4.2+ solution:
ccarton's helpful answer works well in principle, but by default the while loop runs in a subshell, which means that any variables created or modified in the loop will not be visible to the current shell.
In Bash v4.2+, you can change this by turning the lastpipe option on, which makes the last segment of a pipeline run in the current shell;
as in ccarton's answer, the pipefail option must be set to have $? reflect the exit code of the first failing command in the pipeline:
shopt -s lastpipe # run the last segment of a pipeline in the current shell (effective only when job control is off, as it is in scripts)
shopt -so pipefail # reflect a pipeline's first failing command's exit code in $?
ls -d / /nosuch | while read -r line; do
result=$line
done
echo "result: [$result]; exit code: $?"
The above yields (stderr output omitted):
result: [/]; exit code: 1
As you can see, the $result variable, set in the while loop, is available, and the ls command's (nonzero) exit code is reflected in $?.
Bash v3+ solution:
ikkachu's helpful answer works well and shows advanced techniques, but it is a bit cumbersome.
Here is a simpler alternative:
while read -r line || { ec=$line && break; }; do # Note the `|| { ...; }` part.
result=$line
done < <(ls -d / /nosuch; printf $?) # Note the `; printf $?` part.
echo "result: [$result]; exit code: $ec"
By appending the value of $?, the ls command's exit code, to the output without a trailing \n (printf $?), read reads it in the last loop operation, but indicates failure (exit code 1), which would normally exit the loop.
We can detect this case with ||, and assign the exit code (that was still read into $line) to variable $ec and exit the loop then.
On the off chance that the command's output doesn't have a trailing \n, more work is needed:
while read -r line ||
{ [[ $line =~ ^(.*)/([0-9]+)$ ]] && ec=${BASH_REMATCH[2]} && line=${BASH_REMATCH[1]};
[[ -n $line ]]; }
do
result=$line
done < <(printf 'no trailing newline'; ls /nosuch; printf "/$?")
echo "result: [$result]; exit code: $ec"
The above yields (stderr output omitted):
result: [no trailing newline]; exit code: 1
At least one way would be to redirect the output of the background process through a named pipe. This allows us to pick up its PID and then get the exit status by waiting on the PID.
#!/bin/bash
mkfifo pipe || exit 1
(echo foo ; exit 19) > pipe &
pid=$!
while read x ; do echo "read: $x" ; done < pipe
wait $pid
echo "exit status of bg process: $?"
rm pipe
If you can use a direct pipe (i.e. don't mind the loop being run in a subshell), you could use Bash's PIPESTATUS, which contains the exit codes of all commands in the pipeline:
(echo foo ; exit 19) | while read x ; do echo "read: $x" ; done
echo "status: ${PIPESTATUS[0]}"
A simple way is to use the bash pipefail option to propagate the first error code from a pipeline.
set -o pipefail
other_program | while read x; do
echo "Read: $x"
done || echo "Error: $?"
Another way is to use coproc (requires 4.0+).
coproc other_program [args ...]
while read -r -u ${COPROC[0]} col0 col1; do
# [...]
done
wait $COPROC_PID || echo "Error exit status: $?"
coproc frees you from having to set up the asynchronous execution and stdin/stdout redirection that you'd otherwise need to do with an equivalent mkfifo.
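For reference, a runnable sketch of the coproc approach, substituting ls -d / /nosuch for other_program (its stderr is left on the terminal):
#!/bin/bash
coproc ls -d / /nosuch                # stand-in for: coproc other_program [args ...]
pid=$COPROC_PID                       # save the PID; COPROC_PID is cleared when the coprocess exits
fd=${COPROC[0]}                       # save the read end of the pipe for the same reason
while read -r -u "$fd" line; do
    result=$line
done
wait "$pid" || echo "Error exit status: $?"
echo "result: [$result]"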
I am new to shell scripting and stuck on a problem. In my shell function, if any validation issue is found, the rest of the program should not execute, and the user should see a message. The validation itself works, but when I use exit, it only comes out of the validation loop, not out of the full method.
config_wuigm_parameters () {
echo "Starting to config parameters for WUIGM....." | tee -a $log
prepare_wuigm_conf_file
echo "Configing WUIGM parameters....." | tee -a $log
local parafile=`dirname $0`/wuigm.conf
local pname=""
local pvalue=""
create_preference_template
cat ${parafile} |while read -r line;do
pname=`echo $line | egrep -e "^([^#]*)=(.*)" | cut -d '=' -f 1`
if [ -n "$pname" ] ; then
lsearch=`echo $line | grep "[<|>|\"]" `
if [ -n "$lsearch" ] ; then
echo validation=$lsearch
echo "< or > character present, replace < with &lt; and > with &gt;"
exit 1;
else
pvalue=`echo $line | egrep -e "^([^#]*)=(.*)" | cut -d '=' -f 2- `
echo "<entry key=\"$pname\" value=\"$pvalue\"/>" >> $prefs
echo "Configured : ${pname} = ${pvalue} " | tee -a $log
fi
fi
done
echo $validation
echo "</map>" >> $prefs
# Copy the file to the original location
cp -f $prefs /root/.java/.userPrefs/com/ericsson/pgm/xwx
# removing the local temp file
rm -f $prefs
reboot_server
}
Any help would be great
It is because the construction
cat file | while read ...
starts a new (sub)shell.
In the following you can see the difference:
echoline() {
cat "$1" | while read -r line
do
echo ==$line==
exit 1
done
echo "Still here after the exit"
}
echoline "$@"
and compare with this
echoline() {
while read -r line
do
echo ==$line==
exit 1
done < "$1"
echo "This is not printed after the exit"
}
echoline "$@"
Using return doesn't help either (because of the subshell). The
echoline() {
cat "$1" | while read -r line
do
echo ==$line==
return 1
done
echo "Still here"
}
echoline "$@"
will still print "Still here".
So, if you want to exit the script, use
while read ...
do
...
done < input #this not starts a new subshell
If you want to exit just the method (return from it), you must check the exit status of the previous command, like:
echoline() {
cat "$1" | while read -r line
do
echo ==$line==
exit 1
done || return 1
echo "In case of exit (or return), this is not printed"
}
echoline "$@"
echo "After the function call"
Instead of ||, you can use
[ $? != 0 ] && return 1
just after the while.
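For example, the subshell variant could equally be written as this sketch:
echoline() {
    cat "$1" | while read -r line
    do
        echo ==$line==
        exit 1
    done
    [ $? != 0 ] && return 1
    echo "Printed only if the loop finished without the exit"
}
echoline "$@"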
You use the return instruction to exit a function with a value.
return [n]
Causes a function to exit with the return value specified by n. If n is omitted, the return status is that of the last command executed in the function body. If used outside a function, but during execution of a script by the . (source) command, it causes the shell to stop executing that script and return either n or the exit status of the last command executed within the script as the exit status of the script. If used outside a function and not during execution of a script by ., the return status is false. Any command associated with the RETURN trap is executed before execution resumes after the function or script.
If you want to exit a loop, use the break instruction instead:
break [n]
Exit from within a for, while, until, or select loop. If n is specified, break n levels. n must be ≥ 1. If n is greater than the number of enclosing loops, all enclosing loops are exited. The return value is 0 unless n is not greater than or equal to 1.
The exit instruction exits the current shell instead, so the current program as a whole. If you use sub-shells, code written between parenthesis, then only that sub-shell exits.
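A compact sketch contrasting the three:
#!/bin/bash
demo() {
    for i in 1 2 3; do
        break                  # leaves only the enclosing loop
    done
    echo "after break"         # printed
    return 0                   # leaves the function
    echo "after return"        # never printed
}
demo
echo "after the function call" # printed; an exit above would have ended the whole script instead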
I generally have -e set in my Bash scripts, but occasionally I would like to run a command and get the return value.
Without doing the set +e; some-command; res=$?; set -e dance, how can I do that?
From the bash manual:
The shell does not exit if the command that fails is [...] part of any command executed in a && or || list [...].
So, just do:
#!/bin/bash
set -eu
foo() {
# exit code will be 0, 1, or 2
return $(( RANDOM % 3 ))
}
ret=0
foo || ret=$?
echo "foo() exited with: $ret"
Example runs:
$ ./foo.sh
foo() exited with: 1
$ ./foo.sh
foo() exited with: 0
$ ./foo.sh
foo() exited with: 2
This is the canonical way of doing it.
As an alternative:
ans=0
some-command || ans=$?
Maybe try running the commands in question in a subshell, like this?
res=$(some-command > /dev/null; echo $?)
Given the behavior of the shell described in this question, it's possible to use the following construct:
#!/bin/sh
set -e
{ custom_command; rc=$?; } || :
echo $rc
Another option is to use a simple if. It is a bit longer, but fully supported by Bash: the command can return a non-zero value, yet the script doesn't exit even with set -e. See this simple script:
#! /bin/bash -eu
f () {
return 2
}
if f; then
echo Command succeeded
else
echo Command failed, returned: $?
fi
echo Script still continues.
When we run it, we can see that script still continues after non-zero return code:
$ ./test.sh
Command failed, returned: 2
Script still continues.
Use a wrapper function to execute your commands:
function __e {
set +e
"$#"
__r=$?
set -e
}
__e yourcommand arg1 arg2
And use $__r instead of $?:
if [[ $__r -eq 0 ]]; then
echo "success"
else
echo "failed"
fi
Another method lets you call commands in a pipe; the only catch is that you have to quote the pipe. This does a safe eval.
function __p {
set +e
local __A=() __I
for (( __I = 1; __I <= $#; ++__I )); do
if [[ "${!__I}" == '|' ]]; then
__A+=('|')
else
__A+=("\"\$$__I\"")
fi
done
eval "${__A[#]}"
__r=$?
set -e
}
Example:
__p echo abc '|' grep abc
And I actually prefer this syntax:
__p echo abc :: grep abc
Which I could do with
...
if [[ ${!__I} == '::' ]]; then
...
I want to write code like this:
command="some command"
safeRunCommand $command
safeRunCommand() {
cmnd=$1
$($cmnd)
if [ $? != 0 ]; then
printf "Error when executing command: '$command'"
exit $ERROR_CODE
fi
}
But this code does not work the way I want. Where did I make the mistake?
Below is the fixed code:
#!/bin/ksh
safeRunCommand() {
typeset cmnd="$*"
typeset ret_code
echo cmnd=$cmnd
eval $cmnd
ret_code=$?
if [ $ret_code != 0 ]; then
printf "Error: [%d] when executing command: '$cmnd'" $ret_code
exit $ret_code
fi
}
command="ls -l | grep p"
safeRunCommand "$command"
Now if you look into this code, the few things that I changed are:
use of typeset is not necessary, but it is a good practice. It makes cmnd and ret_code local to safeRunCommand
use of ret_code is not necessary, but it is a good practice to store the return code in some variable (and store it ASAP), so that you can use it later like I did in printf "Error: [%d] when executing command: '$command'" $ret_code
pass the command with quotes surrounding the command like safeRunCommand "$command". If you don’t then cmnd will get only the value ls and not ls -l. And it is even more important if your command contains pipes.
you can use typeset cmnd="$*" instead of typeset cmnd="$1" if you want to keep the spaces. You can try with both depending upon how complex is your command argument.
'eval' is used to evaluate so that a command containing pipes can work fine
Note: do remember that some commands, like grep, give 1 as the return code even though there isn't any error. If grep found something it will return 0, else 1 (see the short demo below).
I tested with KornShell and Bash, and it worked fine. Let me know if you face issues running this.
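A quick illustration of the grep behavior mentioned above:
$ grep -q p <<< "apple"; echo $?
0
$ grep -q z <<< "apple"; echo $?
1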
Try
safeRunCommand() {
"$#"
if [ $? != 0 ]; then
printf "Error when executing command: '$1'"
exit $ERROR_CODE
fi
}
It should be $cmnd instead of $($cmnd). It works fine with that change on my box.
Your script works only for one-word commands, like ls. It will not work for "ls cpp". For this to work, replace cmnd=$1; $cmnd with "$@". And do not run your script as command="some cmd"; safeRunCommand $command. Run it as safeRunCommand some cmd.
Also, when you have to debug your Bash scripts, execute with '-x' flag. [bash -x s.sh].
There are several things wrong with your script.
Functions (subroutines) should be declared before attempting to call them. You probably want to return rather than exit from your subroutine, to allow the calling block to test the success or failure of a particular command. That aside, you never set 'ERROR_CODE', so it is always zero (undefined).
It's good practice to surround your variable references with curly braces, too. Your code might look like:
#!/bin/sh
command="/bin/date -u" #...Example Only
safeRunCommand() {
cmnd="$#" #...insure whitespace passed and preserved
$cmnd
ERROR_CODE=$? #...so we have it for the command we want
if [ ${ERROR_CODE} != 0 ]; then
printf "Error when executing command: '${command}'\n"
exit ${ERROR_CODE} #...consider 'return()' here
fi
}
safeRunCommand $command
command="cp"
safeRunCommand $command
The normal idea would be to run the command and then use $? to get the exit code. However, sometimes you have multiple cases in which you need to get the exit code. For example, you might need to hide its output, but still return the exit code, or print both the exit code and the output.
ec() { [[ "$1" == "-h" ]] && { shift && eval $* > /dev/null 2>&1; ec=$?; echo $ec; } || eval $*; ec=$?; }
This will give you the option to suppress the output of the command you want the exit code for. When the output is suppressed for the command, the exit code will directly be returned by the function.
I personally like to put this function in my .bashrc file.
Below I demonstrate a few ways in which you can use this:
# In this example, the output for the command will be
# normally displayed, and the exit code will be stored
# in the variable $ec.
$ ec echo test
test
$ echo $ec
0
# In this example, the exit code is output
# and the output of the command passed
# to the `ec` function is suppressed.
$ echo "Exit Code: $(ec -h echo test)"
Exit Code: 0
# In this example, the output of the command
# passed to the `ec` function is suppressed
# and the exit code is stored in `$ec`
$ ec -h echo test
$ echo $ec
0
A solution to your code using this function (since $(...) runs in a subshell, $ec is not visible afterwards, so capture the echoed code instead):
#!/bin/bash
rc="$(ec -h 'ls -l | grep p')"
if [[ "$rc" != "0" ]]; then
echo "Error when executing command: 'grep p' [$rc]"
exit $rc
fi
You should also note that the exit code you will be seeing will be for the grep command that's being run, as it is the last command executed, not the ls.