In some Bash scripts I execute commands while preserving their real-time output, like this:
exec 5>&1
output=$(ls -1 2>&1 |tee /dev/fd/5; exit ${PIPESTATUS[0]})
status=$?
I moved this piece of code into a function to make it reusable, like this:
execute() {
    # 1 - Execute backup
    echo "Executing command 'very_long_command'..."
    exec 5>&1
    cmd="very_long_command"
    output=$($cmd 2>&1 | tee /dev/fd/5; exit ${PIPESTATUS[0]})
    status=$?
    echo "$output"
    echo "very_long_command exited with status $status."
    return $status
}
When I call the function with exec_output="$(execute)" I can of course get its output, but what I still need is the output of very_long_command during its execution, not all at once at the end.
Could you help me to achieve this?
Thanks to @Charles Duffy, I solved my problem by redirecting FD 5 to stderr:
execute() {
    exec 5>&2
    output=$(ls -1 2>&1 | tee /dev/fd/5; exit ${PIPESTATUS[0]})
    status=$?
    echo "$output"
    return $status
}
output="$(execute)"
echo "Function output:"
printf "%s\n" "$output"
I have the below in a bash script:
python3 run_tests.py 2>&1 | tee tests.log
If I run python3 run_tests.py alone, I can do the below to exit the script:
python3 run_tests.py
if [ $? -ne 0 ]; then
    echo 'ERROR: pytest failed, exiting ...'
    exit $?
fi
However, the above working code doesn't write the output of pytest to a file.
When I run python3 run_tests.py 2>&1 | tee tests.log, the output of pytest goes to the file, but the status is always 0, because $? reflects the exit status of tee, which ran successfully.
I need a way to somehow capture the returned code of the python script tests prior to writing to the file. Either that, or something that accomplishes the same end result of quitting the job if a test fails while also getting the failures in the output file.
Any help would be appreciated! :)
The exit status of a pipeline is the status of the last command, so $? is the status of tee, not pytest.
In bash you can use the $PIPESTATUS array to get the status of each command in the pipeline.
python3 run_tests.py 2>&1 | tee tests.log
status=${PIPESTATUS[0]} # status of run_tests.py
if [ $status -ne 0 ]; then
    echo 'ERROR: pytest failed, exiting ...'
    exit $status
fi
Note that you need to save the status in another variable, because $? and $PIPESTATUS are updated after each command.
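A quick illustration of the clobbering, using false | true as a stand-in pipeline:

```shell
#!/usr/bin/env bash
false | true                # first command fails, last one succeeds
status=${PIPESTATUS[0]}     # save immediately: 1
echo "saved: $status"
# The assignment above was itself a simple command, so PIPESTATUS
# has already been overwritten with its (successful) status:
echo "current: ${PIPESTATUS[0]}"
```

Running this prints "saved: 1" but "current: 0", which is why the status has to be captured in the very next command.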
I don't have python on my system so using awk to produce output and a specific exit status instead:
$ { awk 'BEGIN{print "foo"; exit 1}'; ret="$?"; } > >(tee tests.log)
foo
$ echo "$ret"
1
$ cat tests.log
foo
or if you want a script:
$ cat tst.sh
#!/usr/bin/env bash
#########
exec 3>&1 # save fd 1 (stdout) in fd 3 to restore later
exec > >(tee tests.log) # redirect stdout of this script to go to tee
awk 'BEGIN{print "foo"; exit 1}' # run whatever command you want
ret="$?" # save that command's exit status
exec 1>&3 3>&- # restore stdout and close fd 3
#########
echo "here's the result:"
echo "$ret"
cat tests.log
$ ./tst.sh
foo
here's the result:
1
foo
Obviously just test the value of ret to exit or not, e.g.:
if (( ret != 0 )); then
echo 'the sky is falling' >&2
exit "$ret"
fi
You could also wrap the command call in a function if you like:
$ cat tst.sh
#!/usr/bin/env bash
doit() {
    local ret=0
    exec 3>&1               # save fd 1 (stdout) in fd 3 to restore later
    exec > >(tee tests.log) # redirect stdout of this script to go to tee
    awk 'BEGIN{print "foo"; exit 1}' # run whatever command you want
    ret="$?"                # save that command's exit status
    exec 1>&3 3>&-          # restore stdout and close fd 3
    return "$ret"
}
doit
echo "\$?=$?"
cat tests.log
$ ./tst.sh
foo
$?=1
foo
I'm trying to understand why, whenever I use function 2>&1 | tee -a $LOG, tee creates a subshell for the function that can't be exited by a simple exit 1 (without tee it works fine). Below is an example:
#!/bin/bash
LOG=/root/log.log
function first()
{
    echo "Function 1 - I WANT to see this."
    exit 1
}

function second()
{
    echo "Function 2 - I DON'T WANT to see this."
    exit 1
}
first 2>&1 | tee -a $LOG
second 2>&1 | tee -a $LOG
Output:
[root@linuxbox ~]# ./1.sh
Function 1 - I WANT to see this.
Function 2 - I DON'T WANT to see this.
So, if I remove the | tee -a $LOG part, it works as expected (the script exits in the first function).
Can you, please, explain how to overcome this and exit properly in the function while being able to tee output?
If you create a pipeline, the function is run in a subshell, and if you exit from a subshell, only the subshell will be affected, not the parent shell.
printPid(){ echo $BASHPID; }
printPid #some value
printPid #same value
printPid | tee #an implicit subshell -- different value
( printPid ) #an explicit subshell -- also a different value
If, instead of aFunction | tee you do:
aFunction > >(tee)
it'll be essentially the same, except aFunction won't run in a subshell, and thus will be able to affect the current environment (set variables, call exit, etc.).
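A minimal sketch of that difference (the function and variable names here are made up for illustration):

```shell
#!/usr/bin/env bash
setvar() { myvar="changed"; }

myvar="original"
setvar | cat                # pipeline: the function runs in a subshell
echo "after pipe:  $myvar"  # still "original"

setvar > >(cat)             # process substitution: current shell
echo "after > >(): $myvar"  # now "changed"
```

The same mechanism is what lets an exit inside the function terminate the whole script in the process-substitution form.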
Use PIPESTATUS to retrieve the exit status of the first command in the pipeline.
first 2>&1 | tee -a $LOG; test ${PIPESTATUS[0]} -eq 0 || exit ${PIPESTATUS[0]}
second 2>&1 | tee -a $LOG; test ${PIPESTATUS[0]} -eq 0 || exit ${PIPESTATUS[0]}
You can tell bash to fail if anything in the pipeline fails with set -e -o pipefail:
$ cat test.sh
#!/bin/bash
LOG=~/log.log
set -e -o pipefail
function first()
{
    echo "Function 1 - I WANT to see this."
    exit 1
}

function second()
{
    echo "Function 2 - I DON'T WANT to see this."
    exit 1
}
first 2>&1 | tee -a $LOG
second 2>&1 | tee -a $LOG
$ ./test.sh
Function 1 - I WANT to see this.
The group command { list; } should execute list in the current shell environment.
This allows things like variable assignments to be visible outside of the command group (http://mywiki.wooledge.org/BashGuide/CompoundCommands).
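For example, an assignment made inside a group command survives, while the same assignment in a subshell does not:

```shell
#!/usr/bin/env bash
x=0
{ x=42; }      # group command: runs in the current shell
echo "$x"      # 42
( x=99 )       # subshell: the assignment is lost
echo "$x"      # still 42
```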
I use it to send output to a logfile as well as terminal:
{ { echo "Result is 13"; echo "ERROR: division by 0" 1>&2; } | tee -a stdout.txt; } 3>&1 1>&2 2>&3 | tee -a stderr.txt;
On the topic "pipe stdout and stderr to two different processes in shell script?" read here: pipe stdout and stderr to two different processes in shell script?.
{ echo "Result is 13"; echo "ERROR: division by 0" 1>&2; }
simulates a command with output to stdout and stderr.
I want to evaluate the exit status also. /bin/true and /bin/false simulate a command that may succeed or fail. So I try to save $? to a variable r:
~$ r=init; { /bin/true; r=$?; } | cat; echo $r;
init
~$ r=init; { /bin/true; r=$?; } 2>/dev/null; echo $r;
0
As you can see, the first (pipeline) construct does not set the variable r, while the second command line gives the expected result. Is it a bug, or is it my fault? Thanks.
I tested Ubuntu 12.04.2 LTS (~$) and Debian GNU/Linux 7.0 (wheezy) (~#) with the following versions of bash:
~$ echo $BASH_VERSION
4.2.25(1)-release
~# echo $BASH_VERSION
4.2.37(1)-release
I think you missed that /bin/true returns 0 and /bin/false returns 1:
$ r='res:'; { /bin/true; r+=$?; } 2>/dev/null; echo $r;
res:0
And
$ r='res:'; { /bin/false; r+=$?; } 2>/dev/null; echo $r;
res:1
I tried a test program:
x=0
{ x=$$ ; echo "$$ $BASHPID $x" ; }
echo $x
x=0
{ x=$$ ; echo "$$ $BASHPID $x" ; } | cat
echo $x
And indeed - it looks like the pipe forces the prior code into another process, but without reinitialising bash, so $BASHPID changes but $$ does not.
See Difference between bash pid and $$ for more details on the difference between $$ and $BASHPID.
Also outputting $BASH_SUBSHELL shows that the second bit is running in a subshell (level 1), and the first is at level 0.
bash executes all elements of a pipeline as subprocesses; if they're shell builtins or command groups, that means they execute in subshells and so any variables they set don't propagate to the parent shell. This can be tricky to work around in general, but if all you need is the exit status of the command group, you can use the $PIPESTATUS array to get it:
$ { false; } | cat; echo "${PIPESTATUS[@]}"
1 0
$ { false; } | cat; r=${PIPESTATUS[0]}; echo $r
1
$ { true; } | cat; r=${PIPESTATUS[0]}; echo $r
0
Note that this only works for getting the exit status of the last command in the group:
$ { false; true; false; uselessvar=$?; } | cat; r=${PIPESTATUS[0]}; echo $r
0
... because uselessvar=$? succeeded.
Using a variable to hold the exit status is not an appropriate method with pipelines:
~$ r=init; { /bin/true; r=$?; } | cat; echo $r;
init
The pipeline creates a subshell. In the pipe the exit status is assigned to a (local) copy of variable r whose value is dropped.
So I want to add my solution to the originating challenge: send output to a logfile as well as the terminal while keeping track of the exit status. I decided to use another file descriptor. Formatted as a single line it may be a bit confusing ...
{ { r=$( { { { echo "Result is 13"; echo "ERROR: division by 0" 1>&2; /bin/false; echo $? 1>&4; } | tee stdout.txt; } 3>&1 1>&2 2>&3 | tee stderr.txt; } 4>&1 1>&2 2>&3 ); } 3>&1; } 1>stdout.term 2>stderr.term; echo r=$r
... so I apply some indentation:
{
{
: # no operation
r=$( {
{
{
echo "Result is 13"
echo "ERROR: division by 0" 1>&2
/bin/false; echo $? 1>&4
} | tee stdout.txt;
} 3>&1 1>&2 2>&3 | tee stderr.txt;
} 4>&1 1>&2 2>&3 );
} 3>&1;
} 1>stdout.term 2>stderr.term; echo r=$r
Do not mind the line "no operation"; it is only there to satisfy the forum's code-formatting checker.
If executed it yields the following output:
r=1
For demonstration purposes I redirected terminal output to the files stdout.term and stderr.term.
root@voipterm1:~# cat stdout.txt
Result is 13
root@voipterm1:~# cat stderr.txt
ERROR: division by 0
root@voipterm1:~# cat stdout.term
Result is 13
root@voipterm1:~# cat stderr.term
ERROR: division by 0
Let me explain:
The following group command simulates some command that yields an error code of 1 along with some error message. File descriptor 4 is declared in step 3:
{
echo "Result is 13"
echo "ERROR: division by 0" 1>&2
/bin/false; echo $? 1>&4
} | tee stdout.txt;
The following code swaps the stdout and stderr streams, using file descriptor 3 as a dummy. This way error messages are sent to the file stderr.txt:
{
...
} 3>&1 1>&2 2>&3 | tee stderr.txt;
The exit status was sent to file descriptor 4 in step 1. It is now redirected to file descriptor 1, which defines the value of variable r. Error messages are redirected to file descriptor 2, while normal output ("Result is 13") is attached to file descriptor 3:
r=$( {
...
} 4>&1 1>&2 2>&3 );
Finally file descriptor 3 is redirected to file descriptor 1. This controls the output "Result is 13":
{
...
} 3>&1;
The outermost curly brace just shows how the command behaves.
Gordon Davisson suggested exploiting the array variable PIPESTATUS, which contains a list of exit status values from the processes in the most-recently-executed foreground pipeline. This may be a promising approach, but it raises the question of how to hand its value over to the enclosing pipeline:
~# r=init; { { echo "Result is 13"; echo "ERROR: division by 0" 1>&2; } | tee -a stdout.txt; r=${PIPESTATUS[0]}; } 3>&1 1>&2 2>&3 | tee -a stderr.txt; echo "Can you tell me the exit status? $r"
ERROR: division by 0
Result is 13
Can you tell me the exit status? init
To redirect (and append) stdout and stderr to a file, while also displaying it on the terminal, I do this:
command 2>&1 | tee -a file.txt
However, is there another way to do this such that I get an accurate value for the exit status?
That is, if I test $?, I want to see the exit status of command, not the exit status of tee.
I know that I can use ${PIPESTATUS[0]} here instead of $?, but I am looking for another solution that would not involve having to check PIPESTATUS.
Perhaps you could put the exit value from PIPESTATUS into $?
command 2>&1 | tee -a file.txt ; ( exit ${PIPESTATUS} )
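For instance, with false standing in for command and /dev/null as the log file:

```shell
#!/usr/bin/env bash
false 2>&1 | tee -a /dev/null ; ( exit ${PIPESTATUS[0]} )
echo "status: $?"   # 1 -- the status of false, replayed by the subshell's exit
```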
Another possibility, with some bash flavours, is to turn on the pipefail option:
pipefail
    If set, the return value of a pipeline is the value of the last
    (rightmost) command to exit with a non-zero status, or zero if all
    commands in the pipeline exit successfully. This option is disabled
    by default.
set -o pipefail
...
command 2>&1 | tee -a file.txt || echo "Command (or tee?) failed with status $?"
This having been said, the only way of achieving PIPESTATUS functionality portably (e.g. so it'd also work with POSIX sh) is a bit convoluted, i.e. it requires a temp file to propagate a pipe exit status back to the parent shell process:
{ command 2>&1 ; echo $? >"/tmp/~pipestatus.$$" ; } | tee -a file.txt
if [ "`cat \"/tmp/~pipestatus.$$\"`" -ne 0 ] ; then
...
fi
or, encapsulating for reuse:
log2file() {
LOGFILE="$1" ; shift
{ "$@" 2>&1 ; echo $? >"/tmp/~pipestatus.$$" ; } | tee -a "$LOGFILE"
MYPIPESTATUS="`cat \"/tmp/~pipestatus.$$\"`"
rm -f "/tmp/~pipestatus.$$"
return $MYPIPESTATUS
}
log2file file.txt command param1 "param 2" || echo "Command failed with status $?"
or, more generically perhaps:
save_pipe_status() {
STATUS_ID="$1" ; shift
"$@"
echo $? >"/tmp/~pipestatus.$$.$STATUS_ID"
}
get_pipe_status() {
STATUS_ID="$1" ; shift
return `cat "/tmp/~pipestatus.$$.$STATUS_ID"`
}
save_pipe_status my_command_id ./command param1 "param 2" | tee -a file.txt
get_pipe_status my_command_id || echo "Command failed with status $?"
...
rm -f "/tmp/~pipestatus.$$."* # do this in a trap handler, too, to be really clean
There is an arcane POSIX way of doing this:
exec 4>&1; R=$({ { command1; echo $? >&3; } | { command2 >&4; }; } 3>&1); exec 4>&-
It will set the variable R to the exit status of command1, and pipe the output of command1 to command2, whose output is redirected to the stdout of the parent shell.
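A concrete sketch with stand-in commands, using printf as command1 and wc -l as command2:

```shell
#!/usr/bin/env bash
exec 4>&1    # FD 4: a copy of the real stdout for command2's output
R=$({ { printf 'a\nb\n'; echo $? >&3; } | { wc -l >&4; }; } 3>&1)
exec 4>&-    # close the copy again
echo "R=$R"  # R=0 -- the status of printf, while wc's count went to the terminal
```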
Use process substitution:
command > >( tee -a "$logfile" ) 2>&1
tee runs in a subshell so $? holds the exit status of command.
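A quick check of this behaviour with a failing command (false as a stand-in, logging to /dev/null):

```shell
#!/usr/bin/env bash
false > >( tee -a /dev/null ) 2>&1
echo "status: $?"   # 1 -- the status of false, not of tee
```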
I've written (well, remixed to arrive at) this Bash script
# pkill.sh
trap onexit 1 2 3 15 ERR
function onexit() {
    local exit_status=${1:-$?}
    echo "Problem killing $kill_this"
    exit $exit_status
}

export kill_this=$1
for X in `ps acx | grep -i $1 | awk '{print $1}'`; do
    kill $X
done
it works fine, but any errors are shown on the display. I only want the "Problem killing..." echo to show in case of error. How can I "catch" (hide) the error when executing the kill statement?
Disclaimer: Sorry for the long example, but when I make them shorter I inevitably have to explain "what I'm trying to do."
# pkill.sh
trap onexit 1 2 3 15 ERR
function onexit() {
    local exit_status=${1:-$?}
    echo "Problem killing $kill_this"
    exit $exit_status
}

export kill_this=$1
for X in `ps acx | grep -i $1 | awk '{print $1}'`; do
    kill $X 2>/dev/null
    status=$?
    if [ $status -ne 0 ]; then
        onexit $status
    fi
done
You can redirect stderr and stdout to /dev/null via something like pkill.sh > /dev/null 2>&1. If you only want to suppress the output from the kill command, only apply it to that line, e.g., kill $X > /dev/null 2>&1;
What this does is send the standard output (stdout) from kill $X to /dev/null (that's the > /dev/null), and additionally send stderr (the 2) into stdout (the 1).
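A small demonstration of the ordering, listing a path that is assumed not to exist:

```shell
#!/usr/bin/env bash
# Redirect stdout to /dev/null first, then duplicate stderr onto it:
ls /no/such/path > /dev/null 2>&1
echo "status: $?"   # nonzero, yet ls printed nothing

# Reversing the order would leak the error message, because stderr
# would be duplicated onto the *old* stdout before the redirection:
#   ls /no/such/path 2>&1 > /dev/null
```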
For my own notes, here's my new code using Paul Creasey's answer:
# pkill.sh: this is dangerous and should not be run as root!
trap onexit 1 2 3 15 ERR

#--- onexit() -----------------------------------------------------
# @param $1 integer (optional) Exit status. If not set, use `$?'
function onexit() {
    local exit_status=${1:-$?}
    echo "Problem killing $kill_this"
    exit $exit_status
}

export kill_this=$1
for X in `ps acx | grep -i "$1" | awk '{print $1}'`; do
    kill $X 2>/dev/null
done
Thanks all!