Tee resets exit status: $? is always 0 - bash

I have a short script like this:
#!/bin/bash
<some_process> | tee -a /tmp/some.log &
wait $(pidof <some_process_name>)
echo $?
The result is always 0, irrespective of the exit status of some_process.
I know PIPESTATUS can be used here, but why does tee break wait?

Well, this is something that, for some peculiar reason, the docs don't mention. The code, however, does:
int wait_for (pid) { /*...*/
  /* If this child is part of a job, then we are really waiting for the
     job to finish.  Otherwise, we are waiting for the child to finish. [...] */
  if (job == NO_JOB)
    job = find_job (pid, 0, NULL);
So it's actually waiting for the whole job, which, as we know, normally yields the exit status of the last command in the chain.
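To see this concretely, here is a minimal repro, with false standing in for a failing <some_process>:
#!/bin/bash
false | tee -a /tmp/some.log &
wait $!    # $! is tee's PID, but bash waits for the whole job
echo $?    # prints 0: the job's status is tee's, not false's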
To make matters worse, $PIPESTATUS only reflects the most recently executed foreground pipeline, so you can't query it for a background job directly.
You can, however, utilize $PIPESTATUS in a subshell job, like this:
(<some_process> | tee -a /tmp/some.log; exit ${PIPESTATUS[0]}) &
# somewhere down the line:
wait %?<some_process>   # or simply: wait %%, the most recent background job
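A concrete, runnable version of that pattern, again with false standing in for the failing process:
#!/bin/bash
(false | tee -a /tmp/some.log; exit ${PIPESTATUS[0]}) &
# ... other foreground work ...
wait %%
echo $?    # prints 1: false's status, carried out by the subshell's exit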

The trick here is to combine $PIPESTATUS with wait -n. Check this example:
#!/bin/bash
# start background task
(sleep 5 | false | tee -a /tmp/some.log ; exit ${PIPESTATUS[1]}) &
# foreground code comes here
echo "foo"
# wait for background task to finish
wait -n
echo $?
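Running this prints foo immediately and then, once the background job finishes after about five seconds, 1 -- the status of false, preserved by the subshell's exit ${PIPESTATUS[1]}. Note that wait -n requires bash 4.3 or newer.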

The reason you always get an exit status of 0 is that the exit status of a pipeline is that of its last command, which here is tee. By adding the pipe, you give up the normal way of detecting the exit status of <some_command>.
From the bash man page:
The return status of a pipeline is the exit status of the last command, unless the pipefail option is enabled. If pipefail is enabled, the pipeline's return status is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands exit successfully.
So... The following might be what you want:
#!/usr/bin/env bash
set -o pipefail
<some_command> | tee -a /tmp/some.log &
wait %1
echo $?
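A quick check of this behavior, once more with false as the failing command:
#!/usr/bin/env bash
set -o pipefail
false | tee -a /tmp/some.log &
wait %1
echo $?    # prints 1: with pipefail, tee's success no longer masks the failure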

Related

bash's behavior with exits in a pipeline: `exit 1 | exit 2`

I am curious about bash's behavior and the exit status when I enter a command of the form
exit [exit status] | exit [exit status] | .. [repetitions of exit with an exit status]
It gives me the output below and then doesn't exit.
Is this undefined behavior?
bash-3.2$ exit 1 | exit 2
bash-3.2$ echo $?
2
From the bash man page:
Each command in a pipeline is executed as a separate process (i.e., in a subshell).
So, even the first exit will not exit your shell, as it only exits the subshell.
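You can see the same effect with an explicit subshell:
bash-3.2$ (exit 3)
bash-3.2$ echo $?
3
The shell that ran (exit 3) is still alive; only the subshell exited, leaving its status in $?.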
As for the exit codes:
The return status of a pipeline is the exit status of the last command, unless the pipefail option is enabled. If pipefail is enabled, the pipeline's return status is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands exit successfully.
You can activate pipefail like this:
$ set -o pipefail
$ exit 1 | exit 2 | exit 0
$ echo $?
2
exit 1 | exit 2 is not sequential but concurrent, even though the last command reads the first command's STDOUT.
What is a simple explanation for how pipes work in Bash?
Moreover, each of those commands is executed in a subshell, so your main shell, where you type commands, does not exit.
A pipeline is a single composite command, not one command executed after another.
If you want the shell to exit, make it sequential: exit 1 || exit 2.
Finally, by default, $? holds the exit status of the most recent foreground pipeline.
What are the special dollar sign shell variables?
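A quick illustration of that last point:
$ false | true
$ echo $?
0
$ true | false
$ echo $?
1
Without pipefail, $? reflects only the rightmost command of the most recent foreground pipeline.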

How do I capture the exit codes from $PIPESTATUS and use it to exit the script?

I am using $PIPESTATUS to print the exit code of each command in the pipeline. I now know that piped commands run in parallel, but once an exit code is non-zero, how do I get the script to exit instead of progressing to the next command? Thanks.
I can't put set -e at the top, because once the error is detected the script exits, and $PIPESTATUS isn't displayed since that echo command comes after the failed pipeline.
set -o pipefail
true | false | true || { declare -p PIPESTATUS; exit; }
echo "whoops"
output:
declare -a PIPESTATUS=([0]="0" [1]="1" [2]="0")
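One caveat: a bare exit exits with the status of declare -p, which is 0. If you need the script's exit code to reflect the failure, copy the array before it is clobbered -- a minimal variation:
set -o pipefail
true | false | true || { st=("${PIPESTATUS[@]}"); declare -p st; exit "${st[1]}"; }
echo "whoops"
Here exit "${st[1]}" re-uses false's status (index 1 in this example); the copy is needed because any subsequent command, including a plain assignment, resets PIPESTATUS.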
From Bash Reference Manual:
pipefail
If set, the return value of a pipeline is the value of the
last (rightmost) command to exit with a non-zero status, or zero if
all commands in the pipeline exit successfully. This option is
disabled by default.

Bash piped commands and their return values

Is there any way for a piped command to replicate the previous command's exit status?
For example:
#!/bin/bash
(...)
function customizedLog() {
    # do something with the piped command output
    exit <returned value from the last piped command/script (script.sh)>
}
script.sh | customizedLog
echo ${?} # here I want to show script.sh's exit value
(...)
I know I could simply check the return status using ${PIPESTATUS[0]}, but I really want to do this as if the customizedLog function weren't there.
Any thoughts?
In bash:
set -o pipefail
This makes a pipeline return the status of the last (rightmost) command to exit with a non-zero status, or zero if all commands in the pipeline succeed.
set -o pipefail
script.sh | customizedLog
echo ${?}
Just make sure customizedLog succeeds (returns 0), and you should pick up the exit status of script.sh. Test with false | customizedLog and true | customizedLog.
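A self-contained sketch of that test; the customizedLog body here is made up for illustration (it just tags each line) and deliberately returns 0:
#!/bin/bash
set -o pipefail
customizedLog() {
    while IFS= read -r line; do
        echo "LOG: $line"
    done
    return 0    # succeed, so pipefail reports the producer's status
}
false | customizedLog
echo ${?}    # prints 1 -- false's status, as if customizedLog weren't there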
script.sh | customizedLog
The above will run as two separate processes (three, actually -- customizedLog runs in a bash fork, as you can verify with something like ps -T --forest). As far as I know, under the UNIX process model, the only process that has access to a process's exit status is its parent, so there's no way customizedLog can retrieve it.
So no -- unless the previous command is run from a wrapper that passes the exit status through the pipe, e.g. as its last line:
( command ; echo $? ) | piped_command_that_is_aware_of_such_an_arrangement
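A sketch of such an arrangement; status_aware_log is a hypothetical consumer that treats the last line of its input as the producer's exit status:
#!/bin/bash
status_aware_log() {
    local prev= line
    while IFS= read -r line; do
        [ -n "$prev" ] && echo "LOG: $prev"
        prev=$line
    done
    return "$prev"    # the last line was the producer's $?
}
( false; echo $? ) | status_aware_log
echo $?    # prints 1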

How can I get both the process id and the exit code from a bash script?

I need a bash script that does the following:
- Starts a background process with all output directed to a file
- Writes the process's exit code to a file
- Returns the process's pid (right away, not when the process exits)
- The script must exit
I can get the pid but not the exit code:
$ executable >>$log 2>&1 &
pid=`jobs -p`
Or, I can capture the exit code but not the pid:
$ executable >>$log;
# blocked on previous line until process exits
echo $? >>$log;
How can I do all of these at the same time?
The pid is in $!, no need to run jobs. And the return status is returned by wait:
$executable >> $log 2>&1 &
pid=$!
wait $!
echo $? # return status of $executable
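Note that wait $! blocks until $executable finishes, so this covers everything except the requirement that the script return right away -- which is what the edits below address.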
EDIT 1
If I understand the additional requirement as stated in a comment, and you want the script to return immediately (without waiting for the command to finish), then it will not be possible to have the initial script write the exit status of the command. But it is easy enough to have an intermediary write the exit status as soon as the child finishes. Something like:
sh -c "$executable"' & echo pid=$! > pidfile; wait $!; echo $? > exit-status' &
should work.
EDIT 2
As pointed out in the comments, that solution has a race condition: the main script terminates before the pidfile is written. The OP solves this by doing a polling sleep loop, which is an abomination and I fear I will have trouble sleeping at night knowing that I may have motivated such a travesty. IMO, the correct thing to do is to wait until the child is done. Since that is unacceptable, here is a solution that blocks on a read until the pid file exists instead of doing the looping sleep:
{ sh -c "$executable > $log 2>&1 &"'
    echo $! > pidfile
    echo    # Alert parent that the pidfile has been written
    wait $!
    echo $? > exit-status
' & } | read
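The trailing | read is what provides the synchronization: the main script blocks on read until the helper's bare echo arrives, which only happens after pidfile has been written. The helper then keeps running in the background, waits for the child, and writes exit-status when it terminates.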

How do I check the exit code of a command executed by flock?

Greetings all. I'm setting up a cron job to execute a bash script, and I'm worried that the next one may start before the previous one ends. A little googling reveals that a popular way to address this is the flock command, used in the following manner:
flock -n lockfile myscript.sh
if [ $? -eq 1 ]; then
    echo "Previous script is still running! Can't execute!"
fi
This works great. However, what do I do if I want to check the exit code of myscript.sh? Whatever exit code it returns will be overwritten by flock's, so I have no way of knowing if it executed successfully or not.
It looks like you can use the alternate form of flock, flock <fd>, where <fd> is a file descriptor. If you put this into a subshell and redirect that file descriptor to your lock file, then flock will block until it can acquire the lock (or fail immediately if you've passed -n and the lock is already held). You can then do everything in your subshell, including testing the return value of scripts you run:
(
    if flock -n 200
    then
        myscript.sh
        echo $?
    fi
) 200>lockfile
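If you also want that status to propagate out of the subshell, exit with it and pick it up afterwards -- a small variation (the 250 sentinel for a busy lock is an arbitrary choice):
(
    if flock -n 200; then
        myscript.sh
        exit $?    # the subshell's status becomes myscript.sh's status
    else
        exit 250   # arbitrary sentinel: the lock was already held
    fi
) 200>lockfile
echo $?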
According to the flock man page, flock has a -E or --conflict-exit-code flag you can use to set what the exit code of flock should be when a conflict occurs:
-E, --conflict-exit-code number
The exit status used when the -n option is in use, and the conflicting lock exists, or the -w option is in use, and the timeout is reached. The default value is 1. The number has to be in the range of 0 to 255.
The man page also states:
EXIT STATUS
The command uses sysexits.h exit status values for everything, except when using either of the options -n or -w which report a failure to acquire the lock with a exit status given by the -E option, or 1 by default. The exit status given by -E has to be in the range of 0 to 255.
When using the command variant, and executing the child worked, then the exit status is that of the child command.
So, in the case of the -n or -w flags while using the "command" variant, you can see both exit statuses.
Example:
$ flock --exclusive /tmp/flock.lock bash -c 'exit 42'; echo $?
42
$ flock --exclusive /tmp/flock.lock flock --exclusive --nonblock --conflict-exit-code 100 /tmp/flock.lock bash -c 'exit 42'; echo $?
100
In the first example, we see that we get back the exit status of the process we're running with flock. In the second example, we create contention for the lock; in that case, flock itself returns the status code we tell it to (100). If you do not specify a value with the --conflict-exit-code flag, it returns 1 instead. However, I prefer setting less common values to prevent confusion with other processes/scripts which might also return a value of 1.
#!/bin/bash
if ! pgrep myscript.sh; then
    flock -n lockfile myscript.sh
fi
If I understand you right, you want to make sure 'myscript.sh' is not running before cron attempts to run your command again. Assuming that's right, we check whether pgrep failed to find myscript.sh in the process list, and if so we run the flock command again.
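One caveat with this approach: by default pgrep matches against the process name only, which may be the interpreter (e.g. bash) rather than myscript.sh, depending on how the script was started; pgrep -f myscript.sh matches against the full command line and is usually more reliable here.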
Perhaps something like this would work for you.
#!/bin/bash
RETVAL=0
lockfailed()
{
    echo "cannot flock"
    exit 1
}
(
    flock -w 2 42 || lockfailed
    false
    RETVAL=$?
    echo "original retval $RETVAL"
    exit $RETVAL
) 42>|/tmp/flocker
RETVAL=$?
echo "returned $RETVAL"
exit $RETVAL
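Note that the RETVAL=$? assignment inside the parentheses is confined to the subshell; it is the exit $RETVAL there, together with the RETVAL=$? immediately after the closing parenthesis, that actually carries the status (here 1, from false) out to the calling script.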
