false | true; echo $? [duplicate] - bash

I currently have a script that does something like
./a | ./b | ./c
I want to modify it so that if any of a, b, or c exit with an error code I print an error message and stop instead of piping bad output forward.
What would be the simplest/cleanest way to do so?

In bash you can use set -e and set -o pipefail at the top of your script. A subsequent command ./a | ./b | ./c will then fail when any of the three scripts fails, and the pipeline's return code will be that of the rightmost command that exited with a non-zero status.
Note that pipefail isn't available in standard sh.
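A minimal sketch of how that looks, using ./a, ./b and ./c from the question:
#!/bin/bash
set -e
set -o pipefail
# With pipefail, the pipeline's exit status is the rightmost non-zero
# status, so set -e stops the script here if any of the stages fails.
./a | ./b | ./c
echo "all three commands succeeded"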

You can also check the ${PIPESTATUS[@]} array after the full execution, e.g. if you run:
./a | ./b | ./c
Then ${PIPESTATUS[@]} will be an array of the exit codes from each command in the pipe, so if the middle command failed, echo ${PIPESTATUS[@]} would print something like:
0 1 0
and something like this run after the command:
test ${PIPESTATUS[0]} -eq 0 -a ${PIPESTATUS[1]} -eq 0 -a ${PIPESTATUS[2]} -eq 0
will allow you to check that all commands in the pipe succeeded.
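If the pipeline grows, a loop over the array scales better than chaining -a tests; a rough sketch (PIPESTATUS must be copied immediately, because the next command overwrites it):
./a | ./b | ./c
statuses=("${PIPESTATUS[@]}")   # copy right away; any later command resets it
for i in "${!statuses[@]}"; do
    if [ "${statuses[$i]}" -ne 0 ]; then
        echo "command $((i + 1)) in the pipeline failed with status ${statuses[$i]}" >&2
        exit 1
    fi
done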

If you really don't want the second command to proceed until the first is known to be successful, then you probably need to use temporary files. The simple version of that is:
tmp=${TMPDIR:-/tmp}/mine.$$
if ./a > $tmp.1
then
    if ./b <$tmp.1 >$tmp.2
    then
        if ./c <$tmp.2
        then : OK
        else echo "./c failed" 1>&2
        fi
    else echo "./b failed" 1>&2
    fi
else echo "./a failed" 1>&2
fi
rm -f $tmp.[12]
The '1>&2' redirection can also be abbreviated '>&2'; however, an old version of the MKS shell mishandled the error redirection without the preceding '1' so I've used that unambiguous notation for reliability for ages.
This leaks files if you interrupt something. Bomb-proof (more or less) shell programming uses:
tmp=${TMPDIR:-/tmp}/mine.$$
trap 'rm -f $tmp.[12]; exit 1' 0 1 2 3 13 15
...if statement as before...
rm -f $tmp.[12]
trap 0 1 2 3 13 15
The first trap line says: run the commands 'rm -f $tmp.[12]; exit 1' when any of the signals 1 SIGHUP, 2 SIGINT, 3 SIGQUIT, 13 SIGPIPE or 15 SIGTERM occurs, or on 0 (when the shell exits for any reason).
If you're writing a shell script, the final trap only needs to remove the trap on 0, which is the shell exit trap (you can leave the other signals in place since the process is about to terminate anyway).
In the original pipeline, it is feasible for 'c' to be reading data from 'b' before 'a' has finished - this is usually desirable (it gives multiple cores work to do, for example). If 'b' is a 'sort' phase, then this won't apply - 'b' has to see all its input before it can generate any of its output.
If you want to detect which command(s) fail, you can use:
(./a || echo "./a exited with $?" 1>&2) |
(./b || echo "./b exited with $?" 1>&2) |
(./c || echo "./c exited with $?" 1>&2)
This is simple and symmetric - it is trivial to extend to a 4-part or N-part pipeline.
Simple experimentation with 'set -e' didn't help.

Unfortunately, the answer by Johnathan requires temporary files and the answers by Michel and Imron require bash (even though this question is tagged shell). As pointed out by others already, it is not possible to abort the pipe before later processes are started: all processes are started at once and will thus all run before any errors can be communicated. But the title of the question also asks about error codes. These can be retrieved and investigated after the pipe has finished, to figure out whether any of the involved processes failed.
Here is a solution that catches all errors in the pipe and not only errors of the last component. So this is like bash's pipefail, just more powerful in the sense that you can retrieve all the error codes.
res=$( { (./a 2>&1 || echo "1st failed with $?" >&2) |
         (./b 2>&1 || echo "2nd failed with $?" >&2) |
         (./c 2>&1 || echo "3rd failed with $?" >&2) > /dev/null; } 2>&1 )
if [ -n "$res" ]; then
    echo pipe failed
fi
To detect whether anything failed, an echo command prints on standard error whenever a command fails, and the combined standard error output of the pipeline is saved in $res and investigated afterwards. This is also why the standard error of each process is redirected to its standard output: that way each command's own error output travels down the pipe as data instead of landing in $res. You can also send that output to /dev/null, or leave it unredirected as yet another indicator that something went wrong. You can replace the last redirect to /dev/null with a file if you need to store the output of the last command somewhere.
To play more with this construct and to convince yourself that this really does what it should, I replaced ./a, ./b and ./c by subshells which execute echo, cat and exit. You can use this to check that this construct really forwards all the output from one process to another and that the error codes get recorded correctly.
res=$( { (sh -c "echo 1st out; exit 0" 2>&1 || echo "1st failed with $?" >&2) |
         (sh -c "cat; echo 2nd out; exit 0" 2>&1 || echo "2nd failed with $?" >&2) |
         (sh -c "echo start; cat; echo end; exit 0" 2>&1 || echo "3rd failed with $?" >&2) > /dev/null; } 2>&1 )
if [ -n "$res" ]; then
    echo pipe failed
fi

This answer is in the spirit of the accepted answer, but using shell variables instead of temporary files.
if TMP_A="$(./a)"
then
    if TMP_B="$(echo "$TMP_A" | ./b)"
    then
        if TMP_C="$(echo "$TMP_B" | ./c)"
        then
            echo "$TMP_C"
        else
            echo "./c failed"
        fi
    else
        echo "./b failed"
    fi
else
    echo "./a failed"
fi

Related

Run set of commands and return error code if any failed

In a nodejs project I have a shortcut yarn lint that runs a couple of linters like this:
lint_1 && lint_2 && lint_3
If any of these finds an error, it returns an error code; as a result yarn lint itself returns an error code, and the build fails.
It works reasonably well and catches all the errors, but there is a small issue: if a linter fails with an error code, the rest of the linters won't be executed.
What I would like is to execute all of them (so they all print all their errors) and only then fail.
I know I can create a bash script (to run from yarn lint) that runs each of the linters one by one, collects their return codes, and then exits 1 if any of the codes is non-zero, which will fail yarn lint. But I am wondering whether there is a more elegant way to do it?
You could trap on ERR and set a flag. This would run each of the linters and exit with failure if any one of them fails:
#!/bin/bash
result=0
trap 'result=1' ERR
lint_1
lint_2
lint_3
exit "$result"
What I would like - execute all of them (so they all print all errors) and only then fail
Basically we have a list of exit codes to check. If any of them is nonzero, we need to set a variable to a nonzero value. Written out command by command, that looks like this:
result=0
if ! lint_1; then result=1; fi
if ! lint_2; then result=1; fi
if ! lint_3; then result=1; fi
exit "$result"
As a programmer, I see that we have a pattern here, so we could go with an array of commands; but bash does not have 2D arrays, so preserving quoted parameters needs a workaround with eval. It is doable: you have to use eval to double-evaluate the array "pointer"/name, but it works. Note that eval is evil.
cmds_1=(lint_1 "arg with spaces you pass to lint_1")
cmds_2=(lint_2)
cmds_3=(lint_3)
result=0
# compgen lists the variables whose names start with `cmds_`,
# so the naming is important
for i in $(compgen -v cmds_); do
    # first `$i` is expanded to the array name,
    # then the array itself is expanded: `"${cmds_?[@]}"`
    if ! eval "\"\${$i[@]}\""; then
        result=1
    fi
done
exit "$result"
We can also go with xargs. From the manual, under EXIT STATUS: xargs exits with 123 if any invocation of the command exited with status 1-125. So if you know that your programs exit with a status in the 1-125 range, you can do the following (usually xargs handles other exit statuses sensibly anyway and still returns 123, but let's stay conforming):
xargs -l1 -- bash -c '"$@"' -- <<EOF
lint_1 "arg with spaces you pass to lint_1"
lint_2
lint_3
EOF
result=$? # or just exit "$?"
exit "$result"
which looks surprisingly clean. On a side note, by passing -P <number of jobs> to xargs you can execute all the commands in parallel. You can also work around the 1-125 exit-status restriction by handling the error inside the bash script, e.g.:
xargs -l1 -- bash -c '"$@" || exit 1' -- <<EOF
lint_1 "arg with spaces you pass to lint_1"
lint_2
lint_3
EOF
result=$?
exit "$result"
And I have another idea: after each command, output its return status on a dedicated file descriptor, then filter out the zeros and check whether any other statuses are left on the stream. If there are, exit with a nonzero status. This feels like a workaround and is basically the same as the first code snippet, but the if ! ...; then result=1; fi is simplified to ; echo $? >&10.
tmp=$(mktemp)
(
lint_1 "arg with spaces you pass to lint_1"; echo $? >&10
lint_2; echo $? >&10
lint_3; echo $? >&10
) 10> >(
[ -z "$(grep -v 0)" ]
echo $? > "$tmp"
)
result="$(cat "$tmp"; rm "$tmp")"
exit "$result"
From the options presented, I would go with the other answer ;) or with the second xargs snippet.

posix shell: stdout to file, exitcode to a variable and last line of stderr to another variable

I implemented the following in POSIX shell (not bash):
fail.sh:
#!/bin/sh
echo something useful
echo warning 1 >&2
echo warning 2 >&2
echo an error message >&2
exit 100
The command prints something I want to use on stdout, some warnings on stderr and an error message on stderr as well before failing with exit code 100.
success.sh:
#!/bin/sh
echo something useful
echo warning 1 >&2
echo warning 2 >&2
exit 0
This command prints something to stdout and some warnings to stderr but finishes successfully with exit code 0.
test.sh:
#!/bin/sh -e
script=$1
rm -f success
msg=$({ $script > useful; touch success; } 2>&1 | tail -1;)
if [ -f success ]; then
    echo success
else
    echo failure
    echo last error was: $msg
fi
In this script I want to run either of those two scripts and provide the following functionality:
the output of the scripts must be redirected to a file
the last line of stderr must be saved to a variable so that I can print that last line later in case the command didn't exit successfully
I want to detect whether or not the command exited successfully by checking its exit status
My script test.sh achieves all of that, but it uses an external file. Since I use -e, the touch will only be executed if $script exits successfully. Can I capture the exit code of $script without this technique?
The script must be written in POSIX shell and must use -e.
#!/bin/sh -e
script=$1
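# Note: the order of redirections matters on the next line. 2>&1 is
# processed first, pointing stderr at the command substitution's stdout;
# >useful then sends the script's stdout to the file, so $msg ends up
# holding only the script's stderr.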
if msg=$($script 2>&1 >useful); then
    echo success
else
    echo failure
    msg=$(echo "$msg" | tail -1)
    echo last error was: $msg
fi

Catching errors in Bash with glassfish commands [return code in pipes]

I am writing a bash script to manage deployments to a GF server for several environments. What I would like to know is how I can get the result of a GF command and then determine whether to continue or exit.
For example
Say I want to redeploy; I have this script:
$GF_ASADMIN --port $GF_PORT redeploy --name $EAR_FILE_NAME --keepstate=true $EAR_FILE | tee -a $LOG
The variables are already defined. So GF will start to redeploy and either succeed or fail. I want to check which it does and act accordingly. I have this right after it:
RC=$?
if [[ $RC -eq 0 ]];
then echoInfo "Application Successfully redeployed!" | tee -a $LOG;
else
    echoError "Failed to redeploy application!"
    exit 1
fi;
However, it doesn't really seem to work.
The problem is the pipe
$GF_ASADMIN ... | tee -a $LOG
$? reflects the return code of tee.
You are looking for PIPESTATUS. See man bash:
PIPESTATUS
An array variable (see Arrays below) containing a list of exit
status values from the processes in the most-recently-executed
foreground pipeline (which may contain only a single command).
See also this example to clarify how PIPESTATUS works:
false | true
echo ${PIPESTATUS[@]}
Output is: 1 0
The corrected code is:
RC=${PIPESTATUS[0]}
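Note that PIPESTATUS is overwritten by the very next command you run, so it has to be read on the line immediately after the pipeline. Put together with the snippet from the question (echoInfo and echoError are the question's own helpers), that might look like:
"$GF_ASADMIN" --port "$GF_PORT" redeploy --name "$EAR_FILE_NAME" --keepstate=true "$EAR_FILE" | tee -a "$LOG"
RC=${PIPESTATUS[0]}   # must come directly after the pipeline
if [[ $RC -eq 0 ]]; then
    echoInfo "Application Successfully redeployed!" | tee -a "$LOG"
else
    echoError "Failed to redeploy application!"
    exit 1
fi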
Or try using a code block redirect, for example:
{
if "$GF_ASADMIN" --port $GF_PORT redeploy --name "$EAR_FILE_NAME" --keepstate=true "$EAR_FILE"
then
echo Info "Application Successfully redeployed!"
else
echo Error "Failed to redeploy application!" >&2
exit 1
fi
} | tee -a "$LOG"

Pipe command output, but keep the error code [duplicate]

This question already has answers here:
Pipe output and capture exit status in Bash
How do I get the correct return code from a unix command line application after I've piped it through another command that succeeded?
In detail, here's the situation :
$ tar -cEvhf - -I ${sh_tar_inputlist} | gzip -5 -c > ${sh_tar_file} -- when only the tar command fails $?=0
$ echo $?
0
And, what I'd like to see is:
$ tar -cEvhf - -I ${sh_tar_inputlist} 2>${sh_tar_error_file} | gzip -5 -c > ${sh_tar_file}
$ echo $?
1
Does anyone know how to accomplish this?
Use ${PIPESTATUS[0]} to get the exit status of the first command in the pipe.
For details, see http://tldp.org/LDP/abs/html/internalvariables.html#PIPESTATUSREF
See also http://cfajohnson.com/shell/cus-faq-2.html for other approaches if your shell does not support $PIPESTATUS.
Look at $PIPESTATUS which is an array variable holding exit statuses. So ${PIPESTATUS[0]} holds the exit status of the first command in the pipe, ${PIPESTATUS[1]} the exit status of the second command, and so on.
For example:
$ tar -cEvhf - -I ${sh_tar_inputlist} | gzip -5 -c > ${sh_tar_file}
$ echo ${PIPESTATUS[0]}
To print out all statuses use:
$ echo ${PIPESTATUS[@]}
Here is a general solution using only POSIX shell and no temporary files:
Starting from the pipeline:
foo | bar | baz
exec 4>&1
error_statuses=`((foo || echo "0:$?" >&3) |
(bar || echo "1:$?" >&3) |
(baz || echo "2:$?" >&3)) 3>&1 >&4`
exec 4>&-
$error_statuses contains the status codes of any failed processes, in random order, with indexes to tell which command emitted each status.
# if "bar" failed, output its status:
echo $error_statuses | grep '1:' | cut -d: -f2
# test if all commands succeeded:
test -z "$error_statuses"
# test if the last command succeeded:
echo $error_statuses | grep '2:' >/dev/null
As others have pointed out, some modern shells provide PIPESTATUS to get this info. In classic sh, it's a bit more difficult, and you need to use a fifo:
#!/bin/sh
trap 'rm -rf $TMPDIR' 0
TMPDIR=$( mktemp -d )
mkfifo ${FIFO=$TMPDIR/fifo}
cmd1 > $FIFO &
cmd2 < $FIFO
wait $!
echo The return value of cmd1 is $?
(Well, you don't need to use a fifo. You can have the commands early in the pipe echo a status variable and eval that in the main shell, redirecting file descriptors all over the place and basically bending over backwards to check things, but using a fifo is much, much easier.)
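For completeness, here is a rough sketch of that fd-juggling variant (cmd1 and cmd2 are placeholders): the left side of the pipe writes an assignment to a spare file descriptor that the command substitution captures, and the main shell then evals it.
#!/bin/sh
exec 3>&1                       # keep the real stdout available on fd 3
status=$( { { cmd1; echo "cmd1_rc=$?" >&4; } | cmd2 >&3; } 4>&1 )
exec 3>&-
eval "$status"                  # defines cmd1_rc in the main shell
echo "The return value of cmd1 is $cmd1_rc"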

Bash script not exiting immediately when `exit` is called

I have the following bash script:
tail -F -n0 /private/var/log/system.log | while read line
do
    if [ ! `echo $line | grep -c 'launchd'` -eq 0 ]; then
        echo 'launchd message'
        exit 0
    fi
done
For some reason, it is echoing launchd message, waiting for a full 5 seconds, and then exiting.
Why is this happening and how do I make it exit immediately after it echos launchd message?
Since you're using a pipe, the while loop is being run in a subshell. Run it in the main shell instead.
#!/bin/bash
while ...
do
...
done < <(tail ...)
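Filled in with the loop from the question (same log path and pattern; the grep test is simplified to grep -q), that might look like:
#!/bin/bash
while read -r line
do
    if echo "$line" | grep -q 'launchd'; then
        echo 'launchd message'
        exit 0    # the loop now runs in the main shell, so this exits immediately
    fi
done < <(tail -F -n0 /private/var/log/system.log)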
As indicated by Ignacio, your tail | while creates a subshell. The delay is because it's waiting for the next line to be written to the log file before everything closes.
You can add this line immediately before your exit command if you'd prefer not using process substitution:
kill -SIGPIPE $$
Unfortunately, I don't know of any way to control the exit code using this method. It will be 141 which is 128 + 13 (the signal number of SIGPIPE).
If you're trying to make the startup of a daemon dependent on another one having started, there's probably a better way to do that.
By the way, if you're really writing a Bash script (which you'd have to be to use <() process substitution), you can write your if like this: if [[ $line == *launchd* ]].
You can also exit the subshell with a tell-tale exit code and then test the value of "$?" to get the same effect you're looking for:
tail -F -n0 /private/var/log/system.log | while read line
do
    if [ ! `echo $line | grep -c 'launchd'` -eq 0 ]; then
        echo 'launchd message'
        exit 10
    fi
done
if [ $? -eq 10 ]; then exit 0; fi
