bash set -eo pipefail not immediately exiting

#!/usr/bin/env bash
set -eo pipefail
sha256sum \
    Dockerfile-ci \
    frontend/pnpm-lock.yaml \
    | sha256sum
If frontend/pnpm-lock.yaml does not exist and the script above is run, the output is:
sha256sum: frontend/pnpm-lock.yaml: No such file or directory
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
It correctly logs that the file doesn't exist, but it continues piping that into the next sha256sum. Shouldn't set -eo pipefail immediately exit the script on the first sha256sum command and not pipe into the second sha256sum?

pipefail doesn't cause the pipeline to abort early if a command fails. The pipeline still runs to completion, until all the commands have exited. That's true with or without pipefail.
What pipefail does do is ensure the return status is a failure if any of the commands fail. Without pipefail the pipeline fails only if the final command fails.
From the bash manual (emphasis added):
The exit status of a pipeline is the exit status of the last command in the pipeline, unless the pipefail option is enabled (see The Set Builtin). If pipefail is enabled, the pipeline’s return status is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands exit successfully. If the reserved word ! precedes the pipeline, the exit status is the logical negation of the exit status as described above. The shell waits for all commands in the pipeline to terminate before returning a value.
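A minimal way to see the behaviour, using the script from the question (a sketch; it assumes Dockerfile-ci exists and frontend/pnpm-lock.yaml does not):
#!/usr/bin/env bash
set -eo pipefail
# The pipeline still runs to completion: the first sha256sum prints its error
# and the second sha256sum still prints a hash. But pipefail makes the pipeline
# return non-zero, and -e then exits the script right after the pipeline.
sha256sum Dockerfile-ci frontend/pnpm-lock.yaml | sha256sum
echo "never reached"    # -e has already exited the script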

Related

Difference of -e, -u and -o pipefail in bash exit status

I've been trying to properly guard against non-zero exits in a bash script.
What is the difference between -e, -u and -o pipefail?
-o pipefail is not sufficient to exit with an error code?
set -e: Exit immediately if a command exits with a non-zero status.
set -u: If you try to access an undefined variable, that is an error.
set -o pipefail: If any command in a pipeline returns a non-zero exit code, the return code of the entire pipeline is the exit code of the last failed command.
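A small sketch exercising all three options together (the variable name is just an example; the script stops at the first failure it hits):
#!/usr/bin/env bash
set -euo pipefail
false | true            # with pipefail the pipeline returns 1 even though the
                        # last command succeeded, and -e exits the script here
echo "${not_defined}"   # -u: expanding an unset variable would be a fatal error
echo "never reached"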

How do I capture the exit codes from $PIPESTATUS and use it to exit the script?

I am using $PIPESTATUS to print the exit code of each command in the pipeline. I now know that the commands in a pipeline run in parallel, but once an exit code is non-zero, how do I get the script to exit instead of progressing to the next command? Thanks.
I can't just put set -e at the top: once the error is detected the script exits, but $PIPESTATUS is never displayed, because the echo that prints it comes after the failed pipeline.
set -o pipefail
true | false | true || { declare -p PIPESTATUS; exit; }
echo "whoops"
output:
declare -a PIPESTATUS=([0]="0" [1]="1" [2]="0")
From Bash Reference Manual:
pipefail
If set, the return value of a pipeline is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands in the pipeline exit successfully. This option is disabled by default.
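A variation of the snippet above that also exits with the failing command's own status (a sketch; PIPESTATUS has to be copied straight away because later commands overwrite it, and index 1 is simply where false sits in this particular pipeline):
set -o pipefail
true | false | true || {
    ps=("${PIPESTATUS[@]}")                  # copy immediately; later commands reset PIPESTATUS
    echo "pipeline exit codes: ${ps[*]}" >&2
    exit "${ps[1]}"                          # exit with the status of the failing false
}
echo "whoops"                                # never reached when the pipeline fails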

How to cause a Linux pipeline to fail?

Recently I've been learning about set -e in the POSIX shell on Ubuntu 14.04. My reference material is "IEEE Std 1003.1-2008, 2016 Edition", the "Shell & Utilities" chapter. From this section I see that -e doesn't cause the script to quit when a command fails in a pipeline (unless the failed command is the last one in the pipeline):
The failure of any individual command in a multi-command pipeline shall not cause the shell to exit. Only the failure of the pipeline itself shall be considered.
I then wrote a simple script to confirm this behavior:
(
    set -e
    false | true | false | true
    echo ">> Multi-command pipeline: Last command succeeded."
)
(
    set -e
    false | true | false
    echo ">> Multi-command pipeline: Last command failed."
)
The "Last command succeeded" message is printed out, while the "Last command failed" message is not.
My questions are:
The chained commands false | true | false don't seem to be a failure of the pipeline. It's just the failure of the last command. The pipeline itself still succeeds. Am I right?
Is there a way to simulate a pipeline failure? We can use false to simulate the failure of a command. Is there a similar command for a pipeline?
By default in bash, the success or failure of a pipeline is determined solely by the last command in the pipeline.
You can, however, enable the pipefail option (set -o pipefail), and then the pipeline returns failure if any command in it fails.
Example
This pipeline succeeds:
$ false | true | false | true ; echo $?
0
This pipeline fails:
$ set -o pipefail
$ false | true | false | true ; echo $?
1
Documentation
From man bash:
The return status of a pipeline is the exit status of the last command, unless the pipefail option is enabled. If pipefail is enabled, the pipeline's return status is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands exit successfully.
The chained commands false | true | false don't seem to be a failure of the pipeline. It's just the failure of the last command. The pipeline itself still succeeds. Am I right?
The success of a pipeline is specified to be the success of the last command. They are the same thing.
From §2.9.2 Pipelines:
If the pipeline does not begin with the ! reserved word, the exit status shall be the exit status of the last command specified in the pipeline. Otherwise, the exit status shall be the logical NOT of the exit status of the last command. That is, if the last command returns zero, the exit status shall be 1; if the last command returns greater than zero, the exit status shall be zero.
In bash and ksh you can use set -o pipefail to cause the pipeline to fail if any command in it fails. This is not a POSIX option, unfortunately. It ought to be, but it isn't.
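Applied to the test script from the question, enabling pipefail inside the subshell means the "succeeded" message is no longer printed either (a sketch):
(
    set -e
    set -o pipefail
    false | true | false | true
    echo ">> Not printed: pipefail makes the pipeline fail, and -e exits the subshell."
)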

How to check command status while redirecting standard error output to a file?

I have a bash script with the following command:
rm ${thefile}
To ensure the command executes successfully, I use the $? variable to check the status, but this variable doesn't show the exact error. To capture it, I redirect the standard error output to a log file using the following command:
rm ${thefile} >> ${LOG_FILE} 2>&1
With this command I can't use the $? variable to check the status of the rm command, because what comes after the rm command is executed successfully, so the $? variable seems useless here.
Is there a solution that combines both: checking the status of the rm command while still redirecting the output?
With this command I can't use the $? variable to check the status of the rm command, because what comes after the rm command is executed successfully, so the $? variable seems useless here.
That is simply not true. All of the redirections are part of a single command, and $? contains its exit status.
What you may be thinking of is cases where you have multiple commands arranged in a pipeline:
command-1 | command-2
When you do that, $? is set to the exit status of the last command in the pipeline (in this case command-2), and you need to use the PIPESTATUS array to get the exit status of other commands. (In this example ${PIPESTATUS[0]} is the exit status of command-1 and ${PIPESTATUS[1]} is equivalent to $?.)
What you probably need is the shell option pipefail in bash (from man bash):
The return status of a pipeline is the exit status of the last command, unless the pipefail option is enabled. If pipefail is enabled, the pipeline's return status is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands exit successfully. If the reserved word ! precedes a pipeline, the exit status of that pipeline is the logical negation of the exit status as described above. The shell waits for all commands in the pipeline to terminate before returning a value.
> shopt -s -o pipefail
> true | false
> echo $?
1
> false | true
> echo $?
1
> true | true
> echo $?
0
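Applied back to the rm command from the question (a sketch; thefile and LOG_FILE are assumed to be set earlier in the script):
rm "${thefile}" >> "${LOG_FILE}" 2>&1
status=$?       # exit status of rm itself; the redirections are part of the same command
if [ "${status}" -ne 0 ]; then
    echo "rm failed with status ${status}; details are in ${LOG_FILE}" >&2
    exit "${status}"
fi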

Combining wget and zenity/yad

I'm trying to provide some kind of a GUI for wget download process by using zenity/yad. I have come up with this:
wget http://example.com/ 2>&1 | \
sed -u 's/^[a-zA-Z\-].*//; s/.* \{1,2\}\([0-9]\{1,3\}\)%.*/\1\n#Downloading... \1%/; s/^20[0-9][0-9].*/#Done./' | \
zenity --progress --percentage=0 --title="Download dialog" --text="Starting..." --auto-close --auto-kill
Now, suppose wget runs into an error. I need to inform the user that the download failed. Since the $? variable seems to have a value of 0 regardless of success or failure (perhaps because $? is storing zenity's exit status?), I can't tell if the download failed or succeeded.
How can I rectify the above described problem?
You can say:
set -o pipefail
Saying so causes $? to report the exit code of the last command in the pipeline that exited with a non-zero status.
Quoting from The Set Builtin:
pipefail
If set, the return value of a pipeline is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands in the pipeline exit successfully. This option is disabled by default.
Additionally, the array PIPESTATUS would report the return code of all the commands in the pipeline. Saying:
echo "${PIPESTATUS[@]}"
would list all those. For your example, it'd display 3 numbers, e.g.
1 0 0
if wget failed.
Quoting from the manual:
PIPESTATUS
An array variable (see Arrays) containing a list of exit status values from the processes in the most-recently-executed foreground pipeline (which may contain only a single command).
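Putting the two together for the wget pipeline from the question (a sketch; ${PIPESTATUS[0]} is checked rather than $? because zenity itself can exit non-zero, for example when the user cancels the dialog, and the error dialog text is only an example):
set -o pipefail
wget http://example.com/ 2>&1 | \
sed -u 's/^[a-zA-Z\-].*//; s/.* \{1,2\}\([0-9]\{1,3\}\)%.*/\1\n#Downloading... \1%/; s/^20[0-9][0-9].*/#Done./' | \
zenity --progress --percentage=0 --title="Download dialog" --text="Starting..." --auto-close --auto-kill
if [ "${PIPESTATUS[0]}" -ne 0 ]; then       # exit status of wget specifically
    zenity --error --text="Download failed."
fi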

Resources