Setting variables inside a compound command (command group) fails in bash

The group command { list; } should execute list in the current shell environment.
This allows things like variable assignments to be visible outside of the command group (http://mywiki.wooledge.org/BashGuide/CompoundCommands).
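A minimal illustration of the difference (a sketch of my own; compare with a subshell, which does not preserve assignments):
~$ x=old; { x=new; }; echo $x
new
~$ x=old; ( x=new ); echo $x
old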
I use it to send output to a logfile as well as to the terminal:
{ { echo "Result is 13"; echo "ERROR: division by 0" 1>&2; } | tee -a stdout.txt; } 3>&1 1>&2 2>&3 | tee -a stderr.txt;
For background, see the question "pipe stdout and stderr to two different processes in shell script?".
{ echo "Result is 13"; echo "ERROR: division by 0" 1>&2; }
simulates a command with output to stdout and stderr.
I want to evaluate the exit status as well. /bin/true and /bin/false simulate a command that may succeed or fail. So I try to save $? to a variable r:
~$ r=init; { /bin/true; r=$?; } | cat; echo $r;
init
~$ r=init; { /bin/true; r=$?; } 2>/dev/null; echo $r;
0
As you can see, the pipeline construct does not set the variable r, while the second command line yields the expected result. Is this a bug, or is it my fault? Thanks.
I tested Ubuntu 12.04.2 LTS (~$) and Debian GNU/Linux 7.0 (wheezy) (~#) with the following versions of bash:
~$ echo $BASH_VERSION
4.2.25(1)-release
~# echo $BASH_VERSION
4.2.37(1)-release

I think you're missing that /bin/true returns 0 and /bin/false returns 1:
$ r='res:'; { /bin/true; r+=$?; } 2>/dev/null; echo $r;
res:0
And
$ r='res:'; { /bin/false; r+=$?; } 2>/dev/null; echo $r;
res:1

I tried a test program:
x=0
{ x=$$ ; echo "$$ $BASHPID $x" ; }
echo $x
x=0
{ x=$$ ; echo "$$ $BASHPID $x" ; } | cat
echo $x
And indeed, it looks like the pipe forces the preceding code into another process, but without reinitialising bash, so $BASHPID changes but $$ does not.
See "Difference between bash pid and $$" for more details on the difference between $$ and $BASHPID.
Also outputting $BASH_SUBSHELL shows that the second bit is running in a subshell (level 1), and the first is at level 0.
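A quick way to see this (hypothetical session; $BASH_SUBSHELL reports the nesting level):
$ { echo "subshell level: $BASH_SUBSHELL"; }
subshell level: 0
$ { echo "subshell level: $BASH_SUBSHELL"; } | cat
subshell level: 1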

bash executes all elements of a pipeline as subprocesses; if they're shell builtins or command groups, that means they execute in subshells and so any variables they set don't propagate to the parent shell. This can be tricky to work around in general, but if all you need is the exit status of the command group, you can use the $PIPESTATUS array to get it:
$ { false; } | cat; echo "${PIPESTATUS[@]}"
1 0
$ { false; } | cat; r=${PIPESTATUS[0]}; echo $r
1
$ { true; } | cat; r=${PIPESTATUS[0]}; echo $r
0
Note that this only works for getting the exit status of the last command in the group:
$ { false; true; false; uselessvar=$?; } | cat; r=${PIPESTATUS[0]}; echo $r
0
... because uselessvar=$? succeeded.
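If an earlier command's status is the one you need, you can make it the group's own exit status with an explicit exit; since the group runs in a subshell here, exit only leaves that pipeline element (a sketch):
$ { false; s=$?; true; exit $s; } | cat; r=${PIPESTATUS[0]}; echo $r
1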

Using a variable to hold the exit status is not a workable approach with pipelines:
~$ r=init; { /bin/true; r=$?; } | cat; echo $r;
init
The pipeline creates a subshell. Inside the pipe, the exit status is assigned to a subshell-local copy of the variable r, whose value is dropped when the subshell exits.
So I want to add my solution to the original challenge: send output to a logfile as well as to the terminal while keeping track of the exit status. I decided to use another file descriptor. Formatted on a single line it may be a bit confusing ...
{ { r=$( { { { echo "Result is 13"; echo "ERROR: division by 0" 1>&2; /bin/false; echo $? 1>&4; } | tee stdout.txt; } 3>&1 1>&2 2>&3 | tee stderr.txt; } 4>&1 1>&2 2>&3 ); } 3>&1; } 1>stdout.term 2>stderr.term; echo r=$r
... so I apply some indentation:
{
    {
        : # no operation
        r=$( {
                 {
                     {
                         echo "Result is 13"
                         echo "ERROR: division by 0" 1>&2
                         /bin/false; echo $? 1>&4
                     } | tee stdout.txt;
                 } 3>&1 1>&2 2>&3 | tee stderr.txt;
             } 4>&1 1>&2 2>&3 );
    } 3>&1;
} 1>stdout.term 2>stderr.term; echo r=$r
Do not mind the line "no operation": it turned out that the forum's formatting checker relies on it and would otherwise insist: "Your post appears to contain code that is not properly formatted as code. Please indent all code by 4 spaces using the code toolbar button or the CTRL+K keyboard shortcut. For more editing help, click the [?] toolbar icon."
If executed it yields the following output:
r=1
For demonstration purposes I redirected terminal output to the files stdout.term and stderr.term.
root@voipterm1:~# cat stdout.txt
Result is 13
root@voipterm1:~# cat stderr.txt
ERROR: division by 0
root@voipterm1:~# cat stdout.term
Result is 13
root@voipterm1:~# cat stderr.term
ERROR: division by 0
Let me explain:
The following group command simulates some command that yields an error code of 1 along with some error message. File descriptor 4 is declared in step 3:
{
    echo "Result is 13"
    echo "ERROR: division by 0" 1>&2
    /bin/false; echo $? 1>&4
} | tee stdout.txt;
The following code swaps the stdout and stderr streams, using file descriptor 3 as a dummy. This way error messages are sent to the file stderr.txt:
{
...
} 3>&1 1>&2 2>&3 | tee stderr.txt;
The exit status was sent to file descriptor 4 in step 1. It is now redirected to file descriptor 1, which defines the value of variable r. Error messages are redirected to file descriptor 2, while normal output ("Result is 13") is attached to file descriptor 3:
r=$( {
...
} 4>&1 1>&2 2>&3 );
Finally file descriptor 3 is redirected to file descriptor 1. This controls the output "Result is 13":
{
...
} 3>&1;
The outermost curly brace just shows how the command behaves.
Gordon Davisson suggested exploiting the array variable PIPESTATUS, which contains a list of exit status values from the processes in the most-recently-executed foreground pipeline. This may be a promising approach, but it leads to the question of how to hand its value over to the enclosing pipeline:
~# r=init; { { echo "Result is 13"; echo "ERROR: division by 0" 1>&2; } | tee -a stdout.txt; r=${PIPESTATUS[0]}; } 3>&1 1>&2 2>&3 | tee -a stderr.txt; echo "Can you tell me the exit status? $r"
ERROR: division by 0
Result is 13
Can you tell me the exit status? init
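One workaround (an untested sketch, in the spirit of the portable temp-file approach shown in a later answer): write ${PIPESTATUS[0]} to a temporary file inside the subshell and read it back in the parent shell, since a file survives the subshell while a variable copy does not:
~# { { echo "Result is 13"; echo "ERROR: division by 0" 1>&2; } | tee -a stdout.txt; echo ${PIPESTATUS[0]} >/tmp/r.$$; } 3>&1 1>&2 2>&3 | tee -a stderr.txt; r=$(cat /tmp/r.$$); rm -f /tmp/r.$$; echo "Exit status: $r"
ERROR: division by 0
Result is 13
Exit status: 0
(The ordering of the two message lines may vary.)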

Related

How to keep return value while processing a command's output?

In a Bash environment, I have a command and I want to detect if it fails.
However, it does not fail gracefully:
# ./program
do stuff1
do stuff2
error!
do stuff3
# echo $?
0
When it runs without errors (a successful run), it returns 0. When it runs into an error, it can either
return with 1, which is easily detectable, or
return with 0, but print some error messages during the run.
I want to use this program in a script with these goals:
I need the output to be printed to stdout normally (not all at once after the program has finished!)
I need to catch the command's return value via $? or similar
I need to grep for the string "error" in the output and set a variable if it is present
Then I can evaluate the result by checking both the return value and the "error" output.
However, if I add tee, it ruins the return value.
I have tried ${PIPESTATUS[0]} and ${PIPESTATUS[1]}, but they don't seem to work:
program | tee >(grep -i error)
Even if there is no error, ${PIPESTATUS[1]} always returns 0 (true), because the tee command was successful.
So what is the way to do this in bash?
#!/usr/bin/env bash
case $BASH_VERSION in
    ''|[0-3].*|4.[012].*) echo "ERROR: bash 4.3+ required" >&2; exit 1;;
esac
exec {stdout_fd}>&1
if "$@" | tee "/dev/fd/$stdout_fd" | grep -i error >/dev/null; then
    echo "Errors occurred (detected on stdout)" >&2
elif (( ${PIPESTATUS[0]} )); then
    echo "Errors detected (via exit status)" >&2
else
    echo "No errors occurred" >&2
fi
Tested as follows:
$ myfunc() { echo "This is an ERROR"; return 0; }; export -f myfunc
$ ./test-err myfunc
This is an ERROR
Errors occurred (detected on stdout)
$ myfunc() { echo "Everything is not so fine"; return 1; }; export -f myfunc
$ ./test-err myfunc
Everything is not so fine
Errors detected (via exit status)
$ myfunc() { echo "Everything is fine"; }; export -f myfunc
$ ./test-err myfunc
Everything is fine
No errors occurred
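If the {stdout_fd} allocation syntax is unavailable, a fixed descriptor gives the same behaviour (a sketch, assuming descriptor 5 is otherwise unused):
#!/usr/bin/env bash
exec 5>&1
if "$@" | tee /dev/fd/5 | grep -i error >/dev/null; then
    echo "Errors occurred (detected on stdout)" >&2
elif (( ${PIPESTATUS[0]} )); then
    echo "Errors detected (via exit status)" >&2
else
    echo "No errors occurred" >&2
fi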

Bash script: Redirect error of command to function that receives an argument

How can I redirect the error output to a function that receives a string as an argument?
This is the code:
function error {
    echo "[ERROR]: $1"
}
# This does work:
terraform apply myplan || { echo -e '\n[ERROR]: Terraform apply failed. Fix errors and run the script again!' ; exit 1; }
# Output: [ERROR]: Terraform apply failed. Fix errors and run the script again!
# This does NOT work:
terraform apply myplan || { error 'Terraform apply failed. Fix errors and run the script again!' ; exit 1; }
# Output: [ERROR]
I do not understand why.
Example:
#!/bin/bash
# simulate terraform commands
function terraform_ok {
    echo "this is on stdout from terraform_ok"
    exit 0
}
function terraform_warning {
    echo "this is on stdout from terraform_warning"
    echo "this is on stderr from terraform_warning" >&2
    exit 0
}
function terraform_error {
    echo "this is on stdout from terraform_error"
    echo "this is on stderr from terraform_error" >&2
    echo "this is line two on stderr" >&2
    exit 1
}
function catch_error {
    rv=$?
    if [[ $rv != 0 ]]; then
        echo -e "[ERROR] >>>\n$@\n[ERROR] <<<"
    elif [[ "$@" != "" ]]; then
        echo -e "[WARNING] >>>\n$@\n[WARNING] <<<"
    fi
    # exit subshell with the same exit code the terraform command had
    exit $rv
}
function swap_stdout_and_stderr {
    "$@" 3>&2 2>&1 1>&3
}
function perform {
    (catch_error "$(swap_stdout_and_stderr "$@")") 2>&1
}
function die {
    rv=$?
    echo "\"$@\" failed with exit code $rv."
    exit $rv
}
function perform_or_die {
    perform "$@" || die "$@"
}
perform_or_die terraform_ok apply myplan
perform_or_die terraform_warning apply myplan
perform_or_die terraform_error apply myplan
echo "this will never be reached"
echo "this will never be reached"
Output (all on stdout):
this is on stdout from terraform_ok
this is on stdout from terraform_warning
[WARNING] >>>
this is on stderr from terraform_warning
[WARNING] <<<
this is on stdout from terraform_error
[ERROR] >>>
this is on stderr from terraform_error
this is line two on stderr
[ERROR] <<<
"terraform_error apply myplan" failed with exit code 1.
Explanation:
The swapping of stdout and stderr (3>&2 2>&1 1>&3) is done because when you do variable=$(command), the variable is assigned whatever command writes to stdout. The same applies in catch_error "$(command)": whatever comes on stdout from command is assigned to $@ in the function catch_error. In your case I assume you want to catch what comes on stderr instead, hence the swapping.
The final 2>&1 on the line redirects stderr (which is the old stdout) back to stdout, so that grepping in the output of this script works as usual.
Since the catch_error ... command runs in a subshell, I've used || to execute another command in case the subshell returns an error. That command is die "$@", which exits the whole script with the same error code the failing command exited with and shows the command that failed.
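The swap is easy to verify in isolation (hypothetical session; sh -c stands in for any command that writes to both streams):
$ swap() { "$@" 3>&2 2>&1 1>&3; }
$ err=$(swap sh -c 'echo out; echo err >&2' 2>/dev/null)
$ echo "$err"
err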
The simplest way I can think of: this will save all output to a file while still displaying it:
terraform apply --auto-approve -no-color -input=false \
2>&1 | tee /tmp/tf-apply.out
Note that 2> alone would save only the errors to a file, while &> would send both stdout and stderr to the file without the terminal copy.

Execute a command in a function displaying real-time output

In some Bash scripts I am executing some commands preserving the real-time output in this way:
exec 5>&1
output=$(ls -1 2>&1 |tee /dev/fd/5; exit ${PIPESTATUS[0]})
status=$?
I moved this piece of code into a function to make it reusable, like this:
execute() {
    # 1 - Execute backup
    echo "Executing command 'very_long_command'..."
    exec 5>&1
    cmd="very_long_command"
    output=$($cmd 2>&1 |tee /dev/fd/5; exit ${PIPESTATUS[0]})
    status=$?
    echo $output
    echo "very_long_command exited with status $status."
    return $status
}
When I call the function with exec_output="$(execute)" I can of course get its output, but what I still need is the output of very_long_command during its execution, not all at once at the end.
Could you help me to achieve this?
Thanks to @Charles Duffy I solved my problem by redirecting FD 5 to stderr:
execute() {
    exec 5>&2
    output=$(ls -1 2>&1 | tee /dev/fd/5; exit ${PIPESTATUS[0]})
    status=$?
    echo "$output"
    return $status
}
output="$(execute)"
echo "Function output:"
printf "%s\n" "$output"

Pass bash syntax (pipe operator) correctly to function

How can the >> and stream-redirection operators be passed correctly to the function try(), which catches errors and exits?
When I do this :
exitFunc() { echo "EXIIIIIIIIIIIIIIIIT" }
yell() { echo "$0: $*" >&2; }
die() { yell "$*"; exitFunc 111; }
try() { "$#" || die "cannot $*"; }
try commandWhichFails >> "logFile.log" 2>&1
When I run the above, the exitFunction echo also ends up in the logFile...
How do I need to change the above so that the try command basically does this:
try ( whatever comes here >> "logFile.log" 2>&1 )
Can this be achieved with subshells?
If you want to use stderr in yell and not have it lost by your redirection in the body of the script, then you need to preserve it at the start of the script. For example in file descriptor 5:
#!/bin/bash
exec 5>&2
yell() { echo "$0: $*" >&5; }
...
If your bash supports it you can ask it to allocate the new file descriptor for you using a new syntax:
#!/bin/bash
exec {newfd}>&2
yell() { echo "$0: $*" >&$newfd; }
...
If you need to you can close the new fd with exec {newfd}>&-.
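Putting the pieces together, a small self-contained sketch of the {newfd} variant:
#!/bin/bash
exec {newfd}>&2                       # keep a copy of the original stderr
yell() { echo "$0: $*" >&$newfd; }
yell "this still reaches the terminal" >>logFile.log 2>&1
exec {newfd}>&-                       # close it when done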
If I understand you correctly, you can't achieve it with subshells.
If you want the output of commandWhichFails to be sent to logFile.log, but not the errors from try() etc., the problem with your code is that redirections are resolved before command execution, in order of appearance.
Where you've put
try false >> "logFile.log" 2>&1
(using false as a command which fails), the redirections apply to the output of try, not to its arguments (at this point, there is no way to know that try executes its arguments as a command).
There may be a better way to do this, but my instinct is to add a catch function, thus:
last_command=
exitFunc() { echo "EXIIIIIIIIIIIIIIIIT"; } #added ; here
yell() { echo "$0: $*" >&2; }
die() { yell "$*"; exitFunc 111; }
try() { last_command="$@"; "$@"; }
catch() { [ $? -eq 0 ] || die "cannot $last_command"; }
try false >> "logFile.log" 2>&1
catch
Depending on portability requirements, you can always replace last_command with a function like last_command() { history | tail -2 | sed -n '1s/^ *[0-9]* *//p' ;} (bash), which requires set -o history and removes the need for the try() function. You can replace the -2 with -"$1" to get the Nth previous command.
For a more complete discussion, see "BASH: echoing the last command run". I'd also recommend looking at trap for general error handling.

bash: redirect (and append) stdout and stderr to file and terminal and get proper exit status

To redirect (and append) stdout and stderr to a file, while also displaying it on the terminal, I do this:
command 2>&1 | tee -a file.txt
However, is there another way to do this such that I get an accurate value for the exit status?
That is, if I test $?, I want to see the exit status of command, not the exit status of tee.
I know that I can use ${PIPESTATUS[0]} here instead of $?, but I am looking for another solution that would not involve having to check PIPESTATUS.
Perhaps you could put the exit value from PIPESTATUS into $?
command 2>&1 | tee -a file.txt ; ( exit ${PIPESTATUS} )
Another possibility, with some bash flavours, is to turn on the pipefail option:
pipefail
    If set, the return value of a pipeline is the value of the last (rightmost) command to exit with a non-zero status, or zero if all commands in the pipeline exit successfully. This option is disabled by default.
set -o pipefail
...
command 2>&1 | tee -a file.txt || echo "Command (or tee?) failed with status $?"
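A quick demonstration of the difference (hypothetical session):
$ false | tee -a file.txt; echo $?
0
$ set -o pipefail
$ false | tee -a file.txt; echo $?
1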
This having been said, the only way of achieving PIPESTATUS functionality portably (e.g. so it also works with POSIX sh) is a bit convoluted: it requires a temp file to propagate a pipe's exit status back to the parent shell process:
{ command 2>&1 ; echo $? >"/tmp/~pipestatus.$$" ; } | tee -a file.txt
if [ "`cat \"/tmp/~pipestatus.$$\"`" -ne 0 ] ; then
...
fi
or, encapsulating for reuse:
log2file() {
    LOGFILE="$1" ; shift
    { "$@" 2>&1 ; echo $? >"/tmp/~pipestatus.$$" ; } | tee -a "$LOGFILE"
    MYPIPESTATUS="`cat \"/tmp/~pipestatus.$$\"`"
    rm -f "/tmp/~pipestatus.$$"
    return $MYPIPESTATUS
}
log2file file.txt command param1 "param 2" || echo "Command failed with status $?"
or, more generically perhaps:
save_pipe_status() {
    STATUS_ID="$1" ; shift
    "$@"
    echo $? >"/tmp/~pipestatus.$$.$STATUS_ID"
}
get_pipe_status() {
    STATUS_ID="$1" ; shift
    return `cat "/tmp/~pipestatus.$$.$STATUS_ID"`
}
save_pipe_status my_command_id ./command param1 "param 2" | tee -a file.txt
get_pipe_status my_command_id || echo "Command failed with status $?"
...
rm -f "/tmp/~pipestatus.$$."* # do this in a trap handler, too, to be really clean
There is an arcane POSIX way of doing this:
exec 4>&1; R=$({ { command1; echo $? >&3 ; } | { command2 >&4; }; } 3>&1); exec 4>&-
It will set the variable R to the return value of command1, pipe the output of command1 to command2, and send command2's output to the parent shell's stdout.
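A concrete instance of the pattern (my own example, with tr standing in for command2):
$ exec 4>&1
$ R=$({ { echo hello; false; echo $? >&3; } | tr a-z A-Z >&4; } 3>&1); exec 4>&-
HELLO
$ echo $R
1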
Use process substitution:
command > >( tee -a "$logfile" ) 2>&1
tee runs in a subshell so $? holds the exit status of command.
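A quick check that $? now tracks the command (hypothetical session):
$ false > >( tee -a /tmp/file.txt ) 2>&1; echo $?
1
$ true > >( tee -a /tmp/file.txt ) 2>&1; echo $?
0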
