I believe I am calling exit in a subshell that causes my program to continue:
#!/bin/bash
grep str file | while read line
do
exit 0
done
echo "String that should not really show up!"
Any idea how I can get out of the main program?
You can trivially restructure to avoid the subshell -- or, rather, to run the grep inside the subshell rather than the while read loop.
#!/bin/bash
while read line; do
exit 1
done < <(grep str file)
Note that <() (process substitution) is not POSIX syntax and does not work with a plain /bin/sh.
In general, you can check the return code of the spawned subshell to see whether the main script should continue or not.
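If you must stay on /bin/sh, one portable workaround is a temporary file in place of the process substitution. A minimal sketch (the input file and the str pattern are made up for the demo):

```shell
#!/bin/sh
# POSIX-sh sketch: a temp file plays the role of bash's <(grep str file).
# The loop reads from a redirection, not a pipe, so it runs in the
# current shell and exiting/returning works as expected.
scan() {
  printf 'has str here\nother line\n' > input.txt   # sample input (made up)
  tmp=$(mktemp) || return 1
  grep str input.txt > "$tmp"
  while read -r line; do
    echo "found: $line"
    rm -f "$tmp" input.txt
    return 0        # in a real script this would be: exit 0
  done < "$tmp"
  rm -f "$tmp" input.txt
  echo "no match"
  return 1
}
scan
```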
For instance:
#!/bin/bash
grep str file | while read line
do
exit 1
done
if [[ $? == 1 ]]; then
exit 1
fi
echo "String that should not really show up!"
This will not print the message, because the subshell exited with code 1.
You can "exit" your shell by sending a signal to it from your subshell: replace exit 0 with kill -1 $PPID.
But I don't recommend this approach. I suggest having your subshell return a value with a special meaning, like exit 1:
#!/bin/bash
grep str file | while read line
do
exit 1
done
exit 0
Then you can check your subshell's return value via $?:
subshell.sh ; if [[ $? == 1 ]]; then exit 1 ; fi
or simply:
subshell.sh || exit
Related
Here's a sample from .sh script:
#!/bin/sh
........
if [ "$(tail -n 1 log_file.txt)" = *"FAIL"* ]; then
exit 1
else
# some command here
exit 0
fi
It should match the last line of some file against the pattern "FAIL": if it matches, the script should return exit code 1, otherwise 0.
As it stands, the script always terminates with exit code 0, even for last lines that do contain the FAIL substring.
Please help me to fix if statement.
P.S. shebang must be #!/bin/sh not #!/bin/bash
If you want fnmatch/glob-style matches in sh, use case, not if.
case "$(tail -n 1 log_file.txt)" in
*"FAIL"*) exit 1;;
*) : "some command here"; exit 0;;
esac
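A self-contained way to sanity-check the case logic, with a hard-coded string standing in for $(tail -n 1 log_file.txt):

```shell
#!/bin/sh
# check_last_line: prints what matched; returns 1 if the line contains
# FAIL, 0 otherwise (return stands in for the script's exit)
check_last_line() {
  case "$1" in
    *"FAIL"*) echo "matched FAIL"; return 1;;
    *)        echo "no FAIL";      return 0;;
  esac
}
check_last_line "RESULT: FAIL (3 of 7 tests)"   # stand-in for the real tail output
echo "status: $?"
```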
You don't need an if statement, as you want to exit immediately in either case. Just make the check the last command of the script.
In this case, the exit status of the script would be the negation of the exit status of grep (which exits 0 if it finds a match and 1 if not).
! tail -n 1 log_file.txt | grep '.*FAIL.*'
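To convince yourself of the negated pipeline's behavior, here is a small sketch with printf standing in for tail over the real log file:

```shell
#!/bin/sh
# The ! negates the whole pipeline's status:
# grep finds FAIL -> pipeline 0 -> negated to 1; no match -> negated to 0.
check() { ! printf '%s\n' "$1" | grep 'FAIL' > /dev/null; }

check "all tests passed" && echo "clean line: status 0"
check "RESULT: FAIL"     || echo "FAIL line: status 1"
```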
The following script calls another program reading its output in a while loop (see Bash - How to pipe input to while loop and preserve variables after loop ends):
while read -r col0 col1; do
# [...]
done < <(other_program [args ...])
How can I check for the exit code of other_program to see if the loop was executed properly?
Note: ls -d / /nosuch is used as an example command below, because it fails (exit code 1) while still producing stdout output (/) (in addition to stderr output).
Bash v4.2+ solution:
ccarton's helpful answer works well in principle, but by default the while loop runs in a subshell, which means that any variables created or modified in the loop will not be visible to the current shell.
In Bash v4.2+, you can change this by turning on the lastpipe option, which makes the last segment of a pipeline run in the current shell (note that lastpipe only takes effect when job control is off, as it is by default in scripts);
as in ccarton's answer, the pipefail option must be set to have $? reflect the exit code of the first failing command in the pipeline:
shopt -s lastpipe # run the last segment of a pipeline in the current shell
shopt -so pipefail # reflect a pipeline's first failing command's exit code in $?
ls -d / /nosuch | while read -r line; do
result=$line
done
echo "result: [$result]; exit code: $?"
The above yields (stderr output omitted):
result: [/]; exit code: 1
As you can see, the $result variable, set in the while loop, is available, and the ls command's (nonzero) exit code is reflected in $?.
Bash v3+ solution:
ikkachu's helpful answer works well and shows advanced techniques, but it is a bit cumbersome.
Here is a simpler alternative:
while read -r line || { ec=$line && break; }; do # Note the `|| { ...; }` part.
result=$line
done < <(ls -d / /nosuch; printf $?) # Note the `; printf $?` part.
echo "result: [$result]; exit code: $ec"
By appending the value of $?, the ls command's exit code, to the output without a trailing \n (printf $?), read reads it in the last loop operation, but indicates failure (exit code 1), which would normally exit the loop.
We can detect this case with ||, assign the exit code (which was read into $line) to the variable $ec, and then break out of the loop.
On the off chance that the command's output doesn't have a trailing \n, more work is needed:
while read -r line ||
{ [[ $line =~ ^(.*)/([0-9]+)$ ]] && ec=${BASH_REMATCH[2]} && line=${BASH_REMATCH[1]};
[[ -n $line ]]; }
do
result=$line
done < <(printf 'no trailing newline'; ls /nosuch; printf "/$?")
echo "result: [$result]; exit code: $ec"
The above yields (stderr output omitted):
result: [no trailing newline]; exit code: 1
At least one way would be to redirect the output of the background process through a named pipe. This allows you to pick up its PID and then get the exit status by waiting on that PID.
#!/bin/bash
mkfifo pipe || exit 1
(echo foo ; exit 19) > pipe &
pid=$!
while read x ; do echo "read: $x" ; done < pipe
wait $pid
echo "exit status of bg process: $?"
rm pipe
If you can use a direct pipe (i.e. don't mind the loop being run in a subshell), you can use Bash's PIPESTATUS array, which contains the exit codes of all commands in the pipeline (read it immediately after the pipeline, before any other command overwrites it):
(echo foo ; exit 19) | while read x ; do
echo "read: $x"
done
echo "status: ${PIPESTATUS[0]}"
A simple way is to use the bash pipefail option to propagate the first error code from a pipeline.
set -o pipefail
other_program | while read x; do
echo "Read: $x"
done || echo "Error: $?"
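A concrete, runnable version of the same pattern, with a subshell standing in for other_program (it prints one line, then fails with status 3):

```shell
#!/bin/bash
set -o pipefail
# With pipefail, the pipeline's status is the subshell's 3 (the while
# loop itself returns 0), so the || branch fires after the loop ends.
(echo data; exit 3) | while read -r x; do
  echo "Read: $x"
done || echo "Error: $?"
```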
Another way is to use coproc (requires 4.0+).
coproc other_program [args ...]
while read -r -u ${COPROC[0]} col0 col1; do
# [...]
done
wait $COPROC_PID || echo "Error exit status: $?"
coproc frees you from having to set up the asynchronous execution and the stdin/stdout redirection that you would otherwise need in an equivalent mkfifo solution.
I need to run several child processes in background and pipe data between them. When the script exits, I want to kill any remaining of them, so I added
trap cleanup EXIT
cleanup()
{
echo "Cleaning up!"
pkill -TERM -P $$
}
Since I need to react if one of the processes reports an error, I created wrapper functions. Anything that ends with fd is a previously opened file descriptor, connected to a FIFO pipe.
run_gui()
{
"$GAME_BIN" $args <&$gui_infd >&$gui_outfd # redirecting IO to some file descriptors
if [[ $? == 0 ]]; then
echo exiting ok
exit $OK_EXITCODE
else
exit $ERROR_EXITCODE
fi
}
The functions run_ai1() and run_ai2() are analogous:
run_ai1()
{
"$ai1" <&$ai1_infd >&$ai1_outfd
if [[ $? == 0 || $? == 1 || $? == 2 ]]; then
exit $OK_EXITCODE
else
exit $ERROR_EXITCODE
fi
}
run_ai2()
{
"$ai2" <&$ai2_infd >&$ai2_outfd
if [[ $? == 0 || $? == 1 || $? == 2 ]]; then
exit $OK_EXITCODE
else
exit $ERROR_EXITCODE
fi
}
Then I run the functions and do the needed piping
printinit 1 >&$ai1_infd
printinit 2 >&$ai2_infd
run_gui &
run_ai1 &
run_ai2 &
while true; do
echo "Started the loop"
while true; do
read -u $ai1_outfd line || echo "Nothing read"
echo $line
if [[ $line ]]; then
echo "$line" >&$gui_infd
echo "$line" >&$ai2_infd
if [[ "$line" == "END_TURN" ]]; then
break
fi
fi
done
sleep $turndelay
while true; do
read -u $ai2_outfd line || echo "nothing read"
echo $line
if [[ $line ]]; then
echo "$line" >&$gui_infd
echo "$line" >&$ai1_infd
if [[ "$line" == "END_TURN" ]]; then
break
fi
fi
done
sleep $turndelay
done
When $GAME_BIN exits, i.e. the GUI is closed by the close button, I can see the exiting ok message on the stdout, but the cleanup function is not called at all. When I add a manual call to cleanup before calling exit $OK_EXITCODE, although the processes are killed:
./game.sh: line 309: 9193 Terminated run_gui
./game.sh: line 309: 9194 Terminated run_ai1
./game.sh: line 309: 9195 Terminated run_ai2
./game.sh: line 309: 9203 Terminated sleep $turndelay
the loop runs anyway and the script doesn't exit, as it should (exit $OK_EXITCODE). The AI scripts are simple:
#!/bin/sh
while true; do
echo END_TURN
done
There is no wait call anywhere in my script. What am I doing wrong?
What's interesting: when I call jobs -p right after run_ai2 &, then I get 3 pids listed. On the other hand, when I invoke this command from the cleanup function - the output is empty.
Besides, why is the sleep $turndelay process terminated? It's not a child invoked process.
An EXIT trap fires when the trapping script exits. Your toplevel script isn't exiting here.
The trap isn't inherited by the sub-shells that your run_* functions are running under (from being run in the background), so it never fires when those sub-shells exit.
What you want is most likely what you did manually (though it sounds like you did it slightly incorrectly).
You want the cleanup function called from run_gui when $GAME_BIN has exited. Something like this:
run_gui() {
"$GAME_BIN" $args <&$gui_infd >&$gui_outfd # redirecting IO to some file descriptors
ret=$?
cleanup
exit $ret
}
Then you'll just need to make sure that cleanup gets the right value of $$ (which, in bash, it will for your usage, even in a sub-shell, since $$ in a sub-shell is the parent's process ID). Alternatively, to make this more explicit, set up a handler for a signal in your main script and have run_gui signal the main script when it terminates.
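A sketch of the signal-based variant (all names are illustrative; a real script would exit inside the trap handler, but here a flag keeps the demo self-contained):

```shell
#!/bin/bash
cleanup() { echo "Cleaning up!"; done_flag=1; }
trap cleanup USR1                 # the main script owns the cleanup

done_flag=0
( sleep 0.2; kill -USR1 $$ ) &    # wrapper: the game exits, then signal parent
while [ "$done_flag" -eq 0 ]; do  # stand-in for the main read loop
  sleep 0.1
done
echo "main loop exited"
```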
I'd guess you are getting some child processes kicked off by a child process. To verify, run ps -ft pts/1 (or whatever your tty is) in another window.
Also change the pkill to a kill $(jobs -p) and see if that works.
Say I have two scripts that just print back the return code from a useless subscript:
script1
(echo; exit 0)
echo $?
script2
(echo)
echo $?
Both give back 0. But is there a way to tell that the first subscript explicitly uses the exit command?
After some research I made a breakthrough: you can set up an exit handler that can tell whether there was an explicit exit call by examining the last command.
#! /bin/bash
exit_handler () {
ret=$?
if echo "$BASH_COMMAND" | grep -q "^exit "
then
echo "it was an explicit exit"
else
echo "it was an implicit exit"
fi
exit $ret
}
trap "exit_handler" EXIT
exit 22
This will print
it was an explicit exit
Now, in order to tell the parent, instead of echoing we can write to a file, a named pipe, or similar.
As choroba noted, a bare exit (without an argument) would be classified as implicit by the check above, which is wrong, since exit without an argument is the same as exit $?. For that reason the regex has to take that case into consideration:
#! /bin/bash
exit_handler () {
ret=$?
if echo "$BASH_COMMAND" | grep -q "^exit \|^exit$"
then
echo "it was an explicit exit"
else
echo "it was an implicit exit"
fi
exit $ret
}
trap "exit_handler" EXIT
exit 22
This question already has answers here:
Aborting a shell script if any command returns a non-zero value
(10 answers)
Closed 1 year ago.
I have a shell script that executes a number of commands. How do I make the shell script exit if any of the commands exit with a non-zero exit code?
After each command, the exit code can be found in the $? variable so you would have something like:
ls -al file.ext
rc=$?; if [[ $rc != 0 ]]; then exit $rc; fi
You need to be careful with piped commands, since $? only gives you the return code of the last element in the pipe. So the code:
ls -al file.ext | sed 's/^/xx: /'
will not return an error code if the file doesn't exist, since the sed part of the pipeline actually works, returning 0.
The bash shell actually provides an array which can assist in that case, that being PIPESTATUS. This array has one element for each of the pipeline components, that you can access individually like ${PIPESTATUS[0]}:
pax> false | true ; echo ${PIPESTATUS[0]}
1
Note that this is getting you the result of the false command, not the entire pipeline. You can also get the entire list to process as you see fit:
pax> false | true | false; echo ${PIPESTATUS[*]}
1 0 1
If you wanted to get the largest error code from a pipeline, you could use something like:
true | true | false | true | false
rcs=${PIPESTATUS[*]}; rc=0; for i in ${rcs}; do rc=$(($i > $rc ? $i : $rc)); done
echo $rc
This goes through each of the PIPESTATUS elements in turn, storing it in rc if it was greater than the previous rc value.
If you want to work with $?, you'll need to check it after each command, since $? is updated after each command exits. This means that if you execute a pipeline, you'll only get the exit code of the last process in the pipeline.
Another approach is to do this:
set -e
set -o pipefail
If you put this at the top of the shell script, it looks like Bash will take care of this for you. As a previous poster noted, "set -e" will cause Bash to exit with an error on any simple command. "set -o pipefail" will cause Bash to exit with an error on any command in a pipeline as well.
See here or here for a little more discussion on this problem. Here is the Bash manual section on the set builtin.
"set -e" is probably the easiest way to do this. Just put that before any commands in your program.
If you just call exit in Bash without any parameters, it will return the exit code of the last command. Combined with ||, Bash will only invoke exit if the previous command fails. But I haven't tested this.
command1 || exit;
command2 || exit;
Bash will also store the exit code of the last command in the variable $?. Note that the [ test itself overwrites $?, so capture the value first:
rc=$?; [ $rc -eq 0 ] || exit $rc; # Exit for nonzero return code
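Since [ is itself a command, checking $? consumes it; a short sketch of the capture-first pattern:

```shell
#!/bin/bash
(exit 7)       # stand-in for a command that fails with status 7
rc=$?          # capture immediately; the [ test below would overwrite $?
if [ "$rc" -ne 0 ]; then
  echo "failing with status $rc"
fi
```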
http://cfaj.freeshell.org/shell/cus-faq-2.html#11
How do I get the exit code of cmd1 in cmd1|cmd2
First, note that cmd1's exit code could be non-zero and still not mean an error. This happens, for instance, in
cmd | head -1
You might observe a 141 (or 269 with ksh93) exit status for cmd, but it's because cmd was interrupted by a SIGPIPE signal when head -1 terminated after having read one line.
To know the exit status of the elements of a pipeline
cmd1 | cmd2 | cmd3
a. with Z shell (zsh):
The exit codes are provided in the pipestatus special array.
cmd1 exit code is in $pipestatus[1], cmd3 exit code in
$pipestatus[3], so that $? is always the same as
$pipestatus[-1].
b. with Bash:
The exit codes are provided in the PIPESTATUS special array.
cmd1 exit code is in ${PIPESTATUS[0]}, cmd3 exit code in
${PIPESTATUS[2]}, so that $? is always the same as
${PIPESTATUS: -1}.
...
For more details see Z shell.
For Bash:
# This will trap any errors or commands with non-zero exit status
# by calling function catch_errors()
# (the function must be defined before any command can fail)
function catch_errors() {
# Do whatever on errors
#
#
echo "script aborted, because of errors";
exit 1;
}
trap catch_errors ERR;
#
# ... the rest of the script goes here
#
In Bash this is easy. Just tie them together with &&:
command1 && command2 && command3
You can also use the nested if construct:
if command1
then
if command2
then
do_something
else
exit
fi
else
exit
fi
#
#------------------------------------------------------------------------------
# purpose: to run a command, log cmd output, exit on error
# usage:
# set -e; do_run_cmd_or_exit "$cmd" ; set +e
#------------------------------------------------------------------------------
do_run_cmd_or_exit(){
cmd="$*" ;
do_log "DEBUG running cmd or exit: \"$cmd\""
msg=$($cmd 2>&1)
export exit_code=$?
# If an error occurred during the execution, exit with error
error_msg="Failed to run the command:
\"$cmd\" with the output:
\"$msg\" !!!"
if [ $exit_code -ne 0 ] ; then
do_log "ERROR $msg"
do_log "FATAL $msg"
do_exit "$exit_code" "$error_msg"
else
# If no errors occurred, just log the message
do_log "DEBUG : cmd output : \"$msg\""
fi
}