Bash piped commands and their return statuses - bash

Is there any way for a command in a pipeline to replicate the previous command's exit status?
For example:
#!/bin/bash
(...)
function customizedLog() {
    # do something with the piped command output
    exit <returned value from the last piped command/script (script.sh)>
}
script.sh | customizedLog
echo ${?} # here I wanna show the script exit value
(...)
I know I could simply check the return using ${PIPESTATUS[0]}, but I really want to do this as if the customizedLog function weren't there.
Any thoughts?

In bash:
set -o pipefail
With this option set, a pipeline returns the exit status of the last (rightmost) command that failed, or zero if every command in the pipeline succeeds.
set -o pipefail
script.sh | customizedLog
echo ${?}
Just make sure customizedLog succeeds (returns 0), and you should pick up the exit status of script.sh. Test with false | customizedLog and true | customizedLog.
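For instance, a minimal sketch (the body of customizedLog is assumed here, since the question elides it): as long as the function itself returns 0, pipefail leaves the pipeline's status at whatever script.sh returned.
#!/bin/bash
set -o pipefail
customizedLog() {
    # hypothetical body: prefix each line of the piped output
    while IFS= read -r line; do
        printf 'LOG: %s\n' "$line"
    done
    return 0    # succeed, so pipefail reports script.sh's status
}
script.sh | customizedLog
echo ${?}    # now shows script.sh's exit status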

script.sh | customizedLog
The above will run as two separate processes (or three, actually: customizedLog will run in a forked bash, as you can verify with something like ps -T --forest). As far as I know, in the UNIX process model the only process that has access to a process's return information is its parent, so there's no way customizedLog will be able to retrieve it.
So no, unless the previous command is run from a wrapper command that passes the exit status through the pipe (e.g., as the last line):
( command ; echo $? ) | piped_command_that_is_aware_of_such_an_arrangement
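A sketch of that arrangement (the function name is illustrative, and it relies on bash 4.3+ for negative array subscripts): the producer appends its exit status as the last line, and the consumer strips it off and re-raises it.
#!/bin/bash
status_aware_log() {
    local lines=()
    mapfile -t lines               # slurp all piped input (bash 4+)
    local status=${lines[-1]}      # the last line is the producer's status
    unset 'lines[-1]'
    (( ${#lines[@]} )) && printf '%s\n' "${lines[@]}"   # do something with the real output
    return "$status"
}
( script.sh ; echo $? ) | status_aware_log
echo ${?}    # reflects script.sh's exit status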

`grep` causes bash script to stop [duplicate]

I'm studying the contents of a preinst file that the package manager executes before the package is unpacked from its Debian archive (.deb) file.
The script has the following code:
#!/bin/bash
set -e
# Automatically added by dh_installinit
if [ "$1" = install ]; then
    if [ -d /usr/share/MyApplicationName ]; then
        echo "MyApplicationName is just installed"
        return 1
    fi
    rm -Rf $HOME/.config/nautilus-actions/nautilus-actions.conf
    rm -Rf $HOME/.local/share/file-manager/actions/*
fi
# End automatically added section
My first query is about the line:
set -e
I think the rest of the script is pretty simple: it checks whether the Debian/Ubuntu package manager is executing an install operation. If it is, it checks whether my application has just been installed on the system. If it has, the script prints the message "MyApplicationName is just installed" and ends (return 1 means it ends with an "error", doesn't it?).
If the user is asking the Debian/Ubuntu package system to install my package, the script also deletes two directories.
Is this right or am I missing something?
From help set:
-e Exit immediately if a command exits with a non-zero status.
But it's considered bad practice by some (bash FAQ and irc freenode #bash FAQ authors). It's recommended to use:
trap 'do_something' ERR
to run do_something function when errors occur.
See http://mywiki.wooledge.org/BashFAQ/105
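A minimal sketch of that approach (the handler name and message are mine, not from the FAQ):
#!/bin/bash
do_something() {
    echo "$0: command failed with status $? near line ${BASH_LINENO[0]}" >&2
}
trap 'do_something' ERR
false              # fires the trap; unlike set -e, the script keeps running
echo "still here"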
set -e stops the execution of a script if a command or pipeline has an error, which is the opposite of the default shell behaviour of ignoring errors in scripts. Type help set in a terminal to see the documentation for this built-in command.
I found this post while trying to figure out what the exit status was for a script that was aborted due to a set -e. The answer didn't appear obvious to me; hence this answer. Basically, set -e aborts the execution of a command (e.g. a shell script) and returns the exit status code of the command that failed (i.e. the inner script, not the outer script).
For example, suppose I have the shell script outer-test.sh:
#!/bin/sh
set -e
./inner-test.sh
exit 62;
The code for inner-test.sh is:
#!/bin/sh
exit 26;
When I run outer-test.sh from the command line, my outer script terminates with the exit code of the inner script:
$ ./outer-test.sh
$ echo $?
26
As per bash - The Set Builtin manual, if -e/errexit is set, the shell exits immediately if a pipeline (which may consist of a single simple command), a list, or a compound command returns a non-zero status.
By default, the exit status of a pipeline is the exit status of the last command in the pipeline, unless the pipefail option is enabled (it's disabled by default).
If so, the pipeline's return status is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands exit successfully.
If you'd like to execute something on exit, try defining a trap, for example:
trap onexit EXIT
where onexit is your function to do something on exit, like below which is printing the simple stack trace:
onexit(){ while caller $((n++)); do :; done; }
There is a similar option, -E/errtrace, which traps on ERR instead, e.g.:
trap onerr ERR
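Putting those pieces together, a small self-contained sketch (assuming nothing beyond bash itself):
#!/bin/bash
set -eE    # -E (errtrace) lets the ERR trap fire inside functions and subshells
onexit(){ while caller $((n++)); do :; done; }
onerr(){ echo "error: status $?" >&2; }
trap onexit EXIT
trap onerr ERR
false      # onerr reports the failure, errexit exits, onexit prints the trace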
Examples
Zero status example:
$ true; echo $?
0
Non-zero status example:
$ false; echo $?
1
Negating status examples:
$ ! false; echo $?
0
$ false || true; echo $?
0
Test with pipefail being disabled:
$ bash -c 'set +o pipefail -e; true | true | true; echo success'; echo $?
success
0
$ bash -c 'set +o pipefail -e; false | false | true; echo success'; echo $?
success
0
$ bash -c 'set +o pipefail -e; true | true | false; echo success'; echo $?
1
Test with pipefail being enabled:
$ bash -c 'set -o pipefail -e; true | false | true; echo success'; echo $?
1
This is an old question, but none of the answers here discuss the use of set -e aka set -o errexit in Debian package handling scripts. The use of this option is mandatory in these scripts, per Debian policy; the intent is apparently to avoid any possibility of an unhandled error condition.
What this means in practice is that you have to understand under what conditions the commands you run could return an error, and handle each of those errors explicitly.
Common gotchas are e.g. diff (returns an error when there is a difference) and grep (returns an error when there is no match). You can avoid the errors with explicit handling:
diff this that ||
echo "$0: there was a difference" >&2
grep cat food ||
echo "$0: no cat in the food" >&2
(Notice also how we take care to include the current script's name in the message, and to write diagnostic messages to standard error instead of standard output.)
If no explicit handling is really necessary or useful, explicitly do nothing:
diff this that || true
grep cat food || :
(The use of the shell's : no-op command is slightly obscure, but fairly commonly seen.)
Just to reiterate,
something || other
is shorthand for
if something; then
: nothing
else
other
fi
i.e. we explicitly say other should be run if and only if something fails. The longhand if (and other shell flow control statements like while, until) is also a valid way to handle an error (indeed, if it weren't, shell scripts with set -e could never contain flow control statements!)
And also, just to be explicit, in the absence of a handler like this, set -e would cause the entire script to immediately fail with an error if diff found a difference, or if grep didn't find a match.
On the other hand, some commands don't produce an error exit status when you'd want them to. Commonly problematic commands are find (exit status does not reflect whether files were actually found) and sed (exit status won't reveal whether the script received any input or actually performed any commands successfully). A simple guard in some scenarios is to pipe to a command which does scream if there is no output:
find things | grep .
sed -e 's/o/me/' stuff | grep ^
It should be noted that the exit status of a pipeline is the exit status of the last command in that pipeline. So the above commands actually completely mask the status of find and sed, and only tell you whether grep finally succeeded.
(Bash, of course, has set -o pipefail; but Debian package scripts cannot use Bash features. The policy firmly dictates the use of POSIX sh for these scripts, though this was not always the case.)
In many situations, this is something to separately watch out for when coding defensively. Sometimes you have to e.g. go through a temporary file so you can see whether the command which produced that output finished successfully, even when idiom and convenience would otherwise direct you to use a shell pipeline.
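For example, a sketch of the temporary-file approach (the file and directory names are only illustrative), in plain POSIX sh as the policy requires:
#!/bin/sh
set -e
tmp=$(mktemp) || exit
trap 'rm -f "$tmp"' EXIT
# Run the producer on its own, so set -e can see its real status
# instead of having a pipeline mask it.
find things -name '*.conf' >"$tmp"
if [ -s "$tmp" ]; then
    cat "$tmp"
else
    echo "$0: nothing found" >&2
    exit 1
fi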
I believe the intention is for the script in question to fail fast.
To test this yourself, simply type set -e at a bash prompt. Now, try running ls. You'll get a directory listing. Now, type lsd. That command is not recognized and returns an error code, so your shell will exit (due to set -e).
Now, to understand this in the context of a 'script', use this simple script:
#!/bin/bash
# set -e
lsd
ls
If you run it as is, you'll get the directory listing from the ls on the last line. If you uncomment the set -e and run again, you won't see the directory listing as bash stops processing once it encounters the error from lsd.
set -e
The set -e option instructs bash to immediately exit if any command has a non-zero exit status. You wouldn't want to set this for your command-line shell, but in a script it's massively helpful. In all widely used general-purpose programming languages, an unhandled runtime error - whether that's a thrown exception in Java, or a segmentation fault in C, or a syntax error in Python - immediately halts execution of the program; subsequent lines are not executed.
By default, bash does not do this. This default behavior is exactly what you want if you are using bash on the command line: you don't want a typo to log you out! But in a script, you really want the opposite.
If one line in a script fails but the last line succeeds, the whole script has a successful exit code. That makes it very easy to miss the error.
Again, what you want when using bash as your command-line shell and using it in scripts are at odds here. Being intolerant of errors is a lot better in scripts, and that's what set -e gives you.
Copied from https://gist.github.com/mohanpedala/1e2ff5661761d3abd0385e8223e16425; this may help you.
Script 1: without set -e
#!/bin/bash
decho "hi"
echo "hello"
This will throw an error at decho, but the program continues to the next line, so hello is still printed.
Script 2: with set -e
#!/bin/bash
set -e
decho "hi"
echo "hello"
# The shell processes up to decho "hi" and then exits; it does not proceed further
It stops execution of a script if a command fails.
A notable exception is an if statement. E.g.:
set -e
false
echo never executed
set -e
if false; then
    echo never executed
fi
echo executed
false
echo never executed
cat a.sh
#! /bin/bash
# going forward, report the subshell's or command's exit value if it errors
#set -e
(cat b.txt)
echo "hi"
./a.sh; echo $?
cat: b.txt: No such file or directory
hi
0
With set -e commented out, we see that the exit status of echo "hi" is reported, and hi is printed.
cat a.sh
#! /bin/bash
# going forward, report the subshell's or command's exit value if it errors
set -e
(cat b.txt)
echo "hi"
./a.sh; echo $?
cat: b.txt: No such file or directory
1
Now we see the b.txt error being reported instead, and hi is not printed.
So the default behaviour of a shell script is to ignore command errors, continue processing, and report the exit status of the last command. If you want to exit on error and report its status, use the -e option.

How do I capture the exit codes from $PIPESTATUS and use them to exit the script?

I am using $PIPESTATUS to print the exit code for each command in the pipeline. I now know that the commands in a pipeline run in parallel, but once one exits with a non-zero code, how do I get the script to exit instead of progressing to the next command? Thanks.
I can't put set -e at the top because once the error is detected the script exits, and $PIPESTATUS is never displayed, since that echo command comes after the failed pipeline.
set -o pipefail
true | false | true || { declare -p PIPESTATUS; exit 1; }
echo "whoops"
output:
declare -a PIPESTATUS=([0]="0" [1]="1" [2]="0")
From Bash Reference Manual:
pipefail
If set, the return value of a pipeline is the value of the
last (rightmost) command to exit with a non-zero status, or zero if
all commands in the pipeline exit successfully. This option is
disabled by default.
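If you want the individual codes rather than just the combined status, copy PIPESTATUS immediately, because every subsequent command overwrites it. A sketch along those lines:
set -o pipefail
true | false | true || {
    codes=("${PIPESTATUS[@]}")    # copy first; any command resets PIPESTATUS
    echo "pipeline failed: ${codes[*]}" >&2
    for c in "${codes[@]}"; do
        [ "$c" -ne 0 ] && exit "$c"    # exit with the first failing stage's code
    done
}
echo "whoops"    # never reached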

Tee resets the exit status: it is always 0

I have a short script like this:
#!/bin/bash
<some_process> | tee -a /tmp/some.log &
wait $(pidof <some_process_name>)
echo $?
The result is always 0, irrespective of the exit status of some_process.
I know PIPESTATUS can be used here, but why does tee break wait?
Well, this is something that, for some peculiar reason, the docs don't mention. The code, however, does:
int wait_for (pid) { /*...*/
/* If this child is part of a job, then we are really waiting for the
job to finish. Otherwise, we are waiting for the child to finish. [...] */
if (job == NO_JOB)
job = find_job (pid, 0, NULL);
So it's actually waiting for the whole job, which, as we know, normally yields the exit status of the last command in the chain.
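A quick way to see this behaviour (nothing here beyond standard tools):
false | tee /dev/null &
wait $!     # $! is tee's PID, but wait reports the whole job's status
echo $?     # prints 0: tee's exit status, not false's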
To make matters worse, $PIPESTATUS only reflects the most recently executed foreground pipeline.
You can, however, utilize $PIPESTATUS in a subshell job, like this:
(<some_process> | tee -a /tmp/some.log; exit ${PIPESTATUS[0]}) &
# somewhere down the line:
wait %%    # %% is the most recent background job; a jobspec like %?<some_process> also works
The trick here is to use $PIPESTATUS but also wait -n:
Check this example:
#!/bin/bash
# start background task
(sleep 5 | false | tee -a /tmp/some.log ; exit ${PIPESTATUS[1]}) &
# foreground code comes here
echo "foo"
# wait for background task to finish
wait -n
echo $?
The reason you always get an exit status of 0 is that the exit status returned is that of the last command in the pipeline, which is tee. By using a pipe, you eliminate the normal detection of exit status of <some_command>.
From the bash man page:
The return status of a pipeline is the exit status of the last command, unless the pipefail option is enabled. If pipefail is enabled, the pipeline's return status is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands exit successfully.
So... The following might be what you want:
#!/usr/bin/env bash
set -o pipefail
<some_command> | tee -a /tmp/some.log &
wait %1
echo $?

How to check command status while redirecting standard error output to a file?

I have a bash script with the following command:
rm ${thefile}
To ensure the command executes successfully, I use the $? variable to check the status, but this variable doesn't show the exact error. To capture that, I redirect the standard error output to a log file using the following command:
rm ${thefile} >> ${LOG_FILE} 2>&1
With this command I can't use the $? variable to check the status of the rm command, because the command behind the rm command executed successfully, so the $? variable is kind of useless here.
Is there a solution that combines both features, where I can check the status of the rm command and at the same time redirect the output?
With this command I can't use the $? variable to check the status of the rm command, because the command behind the rm command executed successfully, so the $? variable is kind of useless here.
That is simply not true. All of the redirections are part of a single command, and $? contains its exit status.
What you may be thinking of is cases where you have multiple commands arranged in a pipeline:
command-1 | command-2
When you do that, $? is set to the exit status of the last command in the pipeline (in this case command-2), and you need to use the PIPESTATUS array to get the exit status of other commands. (In this example ${PIPESTATUS[0]} is the exit status of command-1 and ${PIPESTATUS[1]} is equivalent to $?.)
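A quick illustration, with both expansions happening in the same command so neither gets reset:
false | true
echo "last=$? first=${PIPESTATUS[0]}"    # prints: last=0 first=1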
What you probably need is the shell option pipefail in bash (from man bash):
The return status of a pipeline is the exit status of the last command, unless the pipefail option is enabled. If pipefail is enabled, the pipeline's return status is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands exit successfully. If the reserved word ! precedes a pipeline, the exit status of that pipeline is the logical negation of the exit status as described above. The shell waits for all commands in the pipeline to terminate before returning a value.
> shopt -s -o pipefail
> true | false
> echo $?
1
> false | true
> echo $?
1
> true | true
> echo $?
0
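Applying that to the original rm case, something like this should work (the paths are only placeholders):
#!/bin/bash
LOG_FILE=/tmp/cleanup.log
thefile=/tmp/does-not-exist
rm "${thefile}" >> "${LOG_FILE}" 2>&1
status=$?                      # the redirection does not disturb rm's status
if [ "$status" -ne 0 ]; then
    echo "rm failed with status ${status}; details in ${LOG_FILE}" >&2
fi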

Bash process substitution and exit codes

I'd like to turn the following:
git status --short && (git status --short | xargs -Istr test -z str)
(which gets me the desired result of mirroring the output to stdout and doing a zero-length check on the result) into something closer to:
git status --short | tee >(xargs -Istr test -z str)
which unfortunately returns the exit code of tee (always zero).
Is there any way to get at the exit code of the substituted process elegantly?
[EDIT]
I'm going with the following for now; it prevents running the same command twice but seems to beg for something better:
OUT=$(git status --short) && echo "${OUT}" && test -z "${OUT}"
Look here:
$ echo xxx | tee >(xargs test -n); echo $?
xxx
0
$ echo xxx | tee >(xargs test -z); echo $?
xxx
0
and look here:
$ echo xxx | tee >(xargs test -z; echo "${PIPESTATUS[*]}")
xxx
123
$ echo xxx | tee >(xargs test -n; echo "${PIPESTATUS[*]}")
xxx
0
Is that what you wanted?
See also Pipe status after command substitution
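Another workaround, not covered above, is to smuggle the status out of the substituted process through a temporary file. Note this sketch assumes bash 4.4 or newer, where $! holds the PID of the last process substitution and wait can wait on it:
#!/bin/bash
status_file=$(mktemp)
git status --short | tee >(xargs -Istr test -z str; echo $? >"$status_file")
wait "$!"                      # bash >= 4.4: wait for the process substitution
inner=$(cat "$status_file")
rm -f "$status_file"
exit "$inner"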
I've been working on this for a while, and it seems that there is no way to do that with process substitution, except for resorting to inline signalling, and that can really be used only for input pipes, so I'm not going to expand on it.
However, bash-4.0 provides coprocesses which can be used to replace process substitution in this context and provide clean reaping.
The snippet you provided:
git status --short | tee >(xargs -Istr test -z str)
can be replaced by something like this:
coproc GIT_XARGS { xargs -Istr test -z str; }
{ git status --short | tee; } >&${GIT_XARGS[1]}
exec {GIT_XARGS[1]}>&-
wait ${GIT_XARGS_PID}
Now, for some explanation:
The coproc call creates a new coprocess, naming it GIT_XARGS (you can use any name you like), and running the command in braces. A pair of pipes is created for the coprocess, redirecting its stdin and stdout.
The coproc call sets two variables:
${GIT_XARGS[@]} containing the file descriptors of the pipes to the process' stdin and stdout ([0] to read from its stdout, [1] to write to its stdin),
${GIT_XARGS_PID} containing the coprocess' PID.
Afterwards, your command is run and its output is directed to the second pipe (i.e. the coprocess' stdin). The cryptic-looking >&${GIT_XARGS[1]} part expands to something like >&60, which is a regular output-to-fd redirection.
Please note that I needed to put your command in braces. This is because a pipeline causes subprocesses to be spawned, and they don't inherit file descriptors from the parent process. In other words, the following:
git status --short | tee >&${GIT_XARGS[1]}
would fail with an invalid file descriptor error, since the relevant fd exists in the parent process and not in the spawned tee process. Putting it in braces causes bash to apply the redirection to the whole pipeline.
The exec call is used to close the pipe to your coprocess. When you used process substitution, the process was spawned as part of output redirection and the pipe to it was closed immediately after the redirection no longer had effect. Since coprocess' pipe's lifetime extends beyond a single redirection, we need to close it explicitly.
Closing the output pipe should cause the process to get EOF condition on stdin and terminate gracefully. We use wait to wait for its termination and reap it. wait returns the coprocess' exit status.
As a last note, please note that in this case, you can't use kill to terminate the coprocess since that would alter its exit status.
#!/bin/bash
if read q < <(git status -s)
then
    echo "$q"
    exit 1    # there was output, so the tree is not clean
fi
