Bash capture stderr into variable without redirect - bash

Specifically, I'm writing a script to make it easier to compile and run my C++ code. It's easy for it to tell if the compilation succeeded or failed, but I also want to add a state where it "compiled with warnings".
$out # to avoid an "ambiguous redirect"
g++ -Wall -Wextra $1 2> out
if [ $? == 0 ]
then
    # this is supposed to test the length of the output string
    # unless there are errors, $out should be length 0
    if [ ${#out} == 0 ]
    then
        # print "Successful"
    else
        # print "Completed with Warnings"
    fi
else
    # print "Failed"
fi
As it is, the failure-case check works fine, but $out is always an empty string: stderr no longer displays on the screen, yet $out is never actually set. If possible, I would also like stderr to still go to the screen.
I hope what I've said makes sense. Cheers.

g++ -Wall -Wextra $1 2> out
This redirects stderr to a file named out, not a variable named $out.
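If a variable is what you want, a minimal sketch (assuming, as in the question, that $1 is the source file) is to capture stderr with command substitution while discarding stdout; note that the order of the redirections matters:
out=$(g++ -Wall -Wextra "$1" 2>&1 >/dev/null)
# 2>&1 first points stderr at the captured stdout,
# then >/dev/null discards the compiler's own stdout
This alone no longer shows the messages on screen, which the named-pipe approach below addresses.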
If you want to run gcc and see stdout and stderr on screen as well as save stderr's output, you could use a named pipe (FIFO). It's a bit roundabout, but it'd get the job done.
mkfifo stderr.fifo
gcc -Wall -o /dev/null /tmp/warn.c 2> stderr.fifo &
tee stderr.log < stderr.fifo >&2
rm -f stderr.fifo
wait
After running these commands, the warnings will be available in stderr.log. Taking advantage of the fact that wait returns gcc's exit code, you could replace the final wait with something like:
if wait; then
    if [[ -s stderr.log ]]; then
        # print "Completed with Warnings"
    else
        # print "Successful"
    fi
else
    # print "Failed"
fi
Annotated:
# Create a named pipe. If one process writes to the pipe, another process can
# read from it to see what was written.
mkfifo stderr.fifo
# Run gcc and redirect its stderr to the pipe. Do it in the background so we can
# read from the pipe in the foreground.
gcc -Wall -o /dev/null /tmp/warn.c 2> stderr.fifo &
# Read from the pipe and write its contents both to the screen (stdout) and to
# the named file (stderr.log).
tee stderr.log < stderr.fifo >&2
# Clean up.
rm -f stderr.fifo
# Wait for gcc to finish and retrieve its exit code. `$?` will be gcc's exit code.
wait

To capture in a variable and display on the screen, use tee:
out=$( g++ -Wall -Wextra "$1" 2>&1 >/dev/null | tee /dev/stderr )
This throws out the standard output of g++ and redirects standard error to standard output. That output is piped to tee, which writes it to the named file (/dev/stderr, so that the messages go back to the original standard error) and standard output, which is captured in the variable out.
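One caveat: after that pipeline, $? is tee's exit status, not g++'s. If you also need the compiler's status, here is a sketch without tee that keeps it, re-printing the captured messages by hand:
if out=$(g++ -Wall -Wextra "$1" 2>&1 >/dev/null); then
    if [ -z "$out" ]; then
        echo "Successful"
    else
        printf '%s\n' "$out" >&2   # show the warnings on screen again
        echo "Completed with Warnings"
    fi
else
    printf '%s\n' "$out" >&2       # show the errors on screen
    echo "Failed"
fi
The exit status of the assignment is the exit status of the command substitution, so the outer if still sees g++'s result.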

Related

How to redirect the command ssh -V to a file? [duplicate]

This question is a duplicate of "How to redirect and append both standard output and standard error to a file with Bash".
I want to redirect both standard output and standard error of a process to a single file. How do I do that in Bash?
It should be:
yourcommand &> filename
It redirects both standard output and standard error to file filename.
do_something 2>&1 | tee -a some_file
This redirects standard error to standard output; tee then appends the combined stream to some_file while also printing it to standard output.
You can redirect stderr to stdout and the stdout into a file:
some_command >file.log 2>&1
See Chapter 20. I/O Redirection
This format is preferred over the more popular &> format, which only works in Bash. In the Bourne shell it could be interpreted as running the command in the background. The format is also more readable: 2 (standard error) is redirected to 1 (standard output).
# Close standard output file descriptor
exec 1<&-
# Close standard error file descriptor
exec 2<&-
# Open standard output as $LOG_FILE file for read and write.
exec 1<>"$LOG_FILE"
# Redirect standard error to standard output
exec 2>&1
echo "This line will appear in $LOG_FILE, not 'on screen'"
Now, a simple echo will write to $LOG_FILE, and it is useful for daemonizing.
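If you might need the original standard output back later, a common variation (a sketch, not part of the answer above) is to stash it on a spare descriptor before redirecting:
exec 3>&1                 # keep a copy of the original stdout on fd 3
exec 1<>"$LOG_FILE" 2>&1  # $LOG_FILE as above
echo "this goes to the log"
echo "this still reaches the terminal" >&3
exec 1>&3 3>&-            # restore stdout and close the spare descriptor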
To the author of the original post,
It depends on what you need to achieve. If you just need to redirect in/out of a command you call from your script, the answers are already given. Mine is about redirecting within the current script, which affects all commands/built-ins (including forks) after the mentioned code snippet.
Another cool solution is redirecting to both standard error and standard output while logging to a log file at once, which involves splitting "a stream" into two. This functionality is provided by the 'tee' command, which can write/append to several file descriptors (files, sockets, pipes, etc.) at once: tee FILE1 FILE2 ... >(cmd1) >(cmd2) ...
exec 3>&1 4>&2 1> >(tee >(logger -i -t 'my_script_tag') >&3) 2> >(tee >(logger -i -t 'my_script_tag') >&4)
trap 'cleanup' INT QUIT TERM EXIT
get_pids_of_ppid() {
    local ppid="$1"
    RETVAL=''
    local pids=`ps x -o pid,ppid | awk "\\$2 == \\"$ppid\\" { print \\$1 }"`
    RETVAL="$pids"
}
# Needed to kill processes running in background
cleanup() {
    local current_pid element
    local pids=( "$$" )
    running_pids=("${pids[@]}")
    while :; do
        current_pid="${running_pids[0]}"
        [ -z "$current_pid" ] && break
        running_pids=("${running_pids[@]:1}")
        get_pids_of_ppid $current_pid
        local new_pids="$RETVAL"
        [ -z "$new_pids" ] && continue
        for element in $new_pids; do
            running_pids+=("$element")
            pids=("$element" "${pids[@]}")
        done
    done
    kill ${pids[@]} 2>/dev/null
}
So, from the beginning. Let's assume we have a terminal connected to /dev/stdout (file descriptor #1) and /dev/stderr (file descriptor #2). In practice, it could be a pipe, socket or whatever.
Create file descriptors (FDs) #3 and #4 and point to the same "location" as #1 and #2 respectively. Changing file descriptor #1 doesn't affect file descriptor #3 from now on. Now, file descriptors #3 and #4 point to standard output and standard error respectively. These will be used as real terminal standard output and standard error.
1> >(...) redirects standard output to command in parentheses
The parentheses (sub-shell) execute 'tee', reading from exec's standard output (a pipe) and redirecting to the 'logger' command via another pipe (the inner process substitution). At the same time, tee copies the same input to file descriptor #3 (the terminal).
The second part, very similar, does the same trick for standard error and file descriptors #2 and #4.
The result of running a script having the above line and additionally this one:
echo "Will end up in standard output (terminal) and /var/log/messages"
...is as follows:
$ ./my_script
Will end up in standard output (terminal) and /var/log/messages
$ tail -n1 /var/log/messages
Sep 23 15:54:03 wks056 my_script_tag[11644]: Will end up in standard output (terminal) and /var/log/messages
If you want to see clearer picture, add these two lines to the script:
ls -l /proc/self/fd/
ps xf
bash your_script.sh 1>file.log 2>&1
1>file.log instructs the shell to send standard output to the file file.log, and 2>&1 tells it to redirect standard error (file descriptor 2) to standard output (file descriptor 1).
Note: The order matters, as liw.fi pointed out; 2>&1 1>file.log doesn't work (standard error still goes to the terminal).
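A quick way to see the difference (a throwaway demo; /nonexistent is just a path guaranteed to produce an error message):
ls /nonexistent 1>out.log 2>&1   # error message ends up in out.log
ls /nonexistent 2>&1 1>out.log   # error message stays on the terminal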
Curiously, this works:
yourcommand &> filename
But in Bash versions before 4 this gives a syntax error:
yourcommand &>> filename
syntax error near unexpected token `>'
There you have to use:
yourcommand 1>> filename 2>&1
(Bash 4 and later accept &>> for appending both streams.)
Short answer: Command >filename 2>&1 or Command &>filename
Explanation:
Consider the following code, which prints the word "stdout" to stdout and the word "stderror" to stderr.
$ (echo "stdout"; echo "stderror" >&2)
stdout
stderror
Note that the '&' operator tells bash that 2 is a file descriptor (which points to stderr) and not a file name. If we left it out, this command would print stdout to stdout and create a file named "2", writing stderror there.
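To see that concretely (a throwaway demo):
echo "stderror" > 2   # no '&': creates a file literally named "2"
cat 2                 # prints: stderror
rm 2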
By experimenting with the code above, you can see for yourself exactly how the redirection operators work. For instance, by changing which of the two descriptors (1 or 2) is redirected to /dev/null, the following two lines of code discard everything from stdout and everything from stderr respectively (printing what remains).
$ (echo "stdout"; echo "stderror" >&2) 1>/dev/null
stderror
$ (echo "stdout"; echo "stderror" >&2) 2>/dev/null
stdout
Now we can explain why the following code produces no output:
(echo "stdout"; echo "stderror" >&2) >/dev/null 2>&1
To truly understand this, I highly recommend you read this webpage on file descriptor tables. Assuming you have done that reading, we can proceed. Note that Bash processes redirections left to right; thus Bash sees >/dev/null first (which is the same as 1>/dev/null) and sets file descriptor 1 to point to /dev/null instead of stdout. Having done this, Bash moves rightwards and sees 2>&1. This sets file descriptor 2 to point to the same file as file descriptor 1 (and not to file descriptor 1 itself! See this resource on pointers for more info). Since file descriptor 1 points to /dev/null, and file descriptor 2 points to the same file as file descriptor 1, file descriptor 2 now also points to /dev/null. Thus both file descriptors point to /dev/null, and this is why no output is rendered.
To test if you really understand the concept, try to guess the output when we switch the redirection order:
(echo "stdout"; echo "stderror" >&2) 2>&1 >/dev/null
stderror
The reasoning here is that, evaluating from left to right, Bash sees 2>&1 and sets file descriptor 2 to point to the same place as file descriptor 1, i.e. stdout. It then sets file descriptor 1 (remember that >/dev/null = 1>/dev/null) to point to /dev/null, thus discarding everything that would usually be sent to standard out. All we are left with is what was not sent to stdout in the subshell (the code in the parentheses): "stderror".
The interesting thing to note here is that even though 1 is just a pointer to stdout, redirecting pointer 2 to 1 via 2>&1 does NOT form a chain of pointers 2 -> 1 -> stdout. If it did, then redirecting 1 to /dev/null in the code 2>&1 >/dev/null would give the pointer chain 2 -> 1 -> /dev/null, and the code would generate nothing, in contrast to what we saw above.
Finally, I'd note that there is a simpler way to do this:
From section 3.6.4 here, we see that we can use the operator &> to redirect both stdout and stderr. Thus, to redirect both the stderr and stdout output of any command to /dev/null (which deletes the output), we simply type
$ command &> /dev/null
or in case of my example:
$ (echo "stdout"; echo "stderror" >&2) &>/dev/null
Key takeaways:
File descriptors behave like pointers (although file descriptors are not the same as file pointers)
Redirecting a file descriptor "a" to a file descriptor "b" which points to file "f", causes file descriptor "a" to point to the same place as file descriptor b - file "f". It DOES NOT form a chain of pointers a -> b -> f
Because of the above, order matters, 2>&1 >/dev/null is != >/dev/null 2>&1. One generates output and the other does not!
Finally have a look at these great resources:
Bash Documentation on Redirection, An Explanation of File Descriptor Tables, Introduction to Pointers
LOG_FACILITY="local7.notice"
LOG_TOPIC="my-prog-name"
LOG_TOPIC_OUT="$LOG_TOPIC-out[$$]"
LOG_TOPIC_ERR="$LOG_TOPIC-err[$$]"
exec 3>&1 > >(tee -a /dev/fd/3 | logger -p "$LOG_FACILITY" -t "$LOG_TOPIC_OUT" )
exec 2> >(logger -p "$LOG_FACILITY" -t "$LOG_TOPIC_ERR" )
It is related: Writing standard output and standard error to syslog.
It almost works, but not from xinetd ;(
For the situation when "piping" is necessary, you can use |&.
For example:
echo -ne "15\n100\n" | sort -c |& tee >sort_result.txt
or
TIMEFORMAT=%R;for i in `seq 1 20` ; do time kubectl get pods | grep node >>js.log ; done |& sort -h
In both of these Bash-based examples, |& pipes the command's standard output and standard error together into the next command (the standard error of "sort -c" in the first case, and of the timed loop into "sort -h" in the second).
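For reference, in Bash 4 and later cmd1 |& cmd2 is shorthand for cmd1 2>&1 | cmd2, so for example:
gcc -Wall prog.c |& tee build.log   # both streams on screen and in build.log (prog.c is a placeholder)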
I wanted a solution to have the output from stdout plus stderr written into a log file and stderr still on console. So I needed to duplicate the stderr output via tee.
This is the solution I found:
command 3>&1 1>&2 2>&3 1>>logfile | tee -a logfile
First swap stderr and stdout
then append the stdout to the log file
pipe stderr to tee and append it also to the log file
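A quick way to convince yourself that it works (a sketch; cmd is a hypothetical stand-in for the real command):
cmd() { echo out; echo err >&2; }
cmd 3>&1 1>&2 2>&3 1>>logfile | tee -a logfile
# the terminal shows only "err"; logfile ends up containing both lines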
Adding to what Fernando Fabreti did, I changed the functions slightly and removed the &- closing and it worked for me.
function saveStandardOutputs {
    if [ "$OUTPUTS_REDIRECTED" == "false" ]; then
        exec 3>&1
        exec 4>&2
        trap restoreStandardOutputs EXIT
    else
        echo "[ERROR]: ${FUNCNAME[0]}: Cannot save standard outputs because they have been redirected before"
        exit 1
    fi
}

# Parameters: $1 => logfile to write to
function redirectOutputsToLogfile {
    if [ "$OUTPUTS_REDIRECTED" == "false" ]; then
        LOGFILE=$1
        if [ -z "$LOGFILE" ]; then
            echo "[ERROR]: ${FUNCNAME[0]}: logfile empty [$LOGFILE]"
        fi
        if [ ! -f "$LOGFILE" ]; then
            touch "$LOGFILE"
        fi
        if [ ! -f "$LOGFILE" ]; then
            echo "[ERROR]: ${FUNCNAME[0]}: creating logfile [$LOGFILE]"
            exit 1
        fi
        saveStandardOutputs
        exec 1>>"${LOGFILE}"
        exec 2>&1
        OUTPUTS_REDIRECTED="true"
    else
        echo "[ERROR]: ${FUNCNAME[0]}: Cannot redirect standard outputs because they have been redirected before"
        exit 1
    fi
}

function restoreStandardOutputs {
    if [ "$OUTPUTS_REDIRECTED" == "true" ]; then
        exec 1>&3  # restore stdout
        exec 2>&4  # restore stderr
        OUTPUTS_REDIRECTED="false"
    fi
}
LOGFILE_NAME="tmp/one.log"
OUTPUTS_REDIRECTED="false"
echo "this goes to standard output"
redirectOutputsToLogfile $LOGFILE_NAME
echo "this goes to logfile"
echo "${LOGFILE_NAME}"
restoreStandardOutputs
echo "After restore this goes to standard output"
The "easiest" way (Bash 4 only):
ls * 2>&- 1>&-
In situations where you consider using things like exec 2>&1, I find it easier to read, if possible, to rewrite the code using Bash functions like this:
function myfunc(){
    [...]
}
myfunc &>mylog.log
The following functions can be used to automate the process of toggling outputs between stdout/stderr and a logfile.
#!/bin/bash
#set -x

# global vars
OUTPUTS_REDIRECTED="false"
LOGFILE=/dev/stdout

# "private" function used by redirect_outputs_to_logfile()
function save_standard_outputs {
    if [ "$OUTPUTS_REDIRECTED" == "true" ]; then
        echo "[ERROR]: ${FUNCNAME[0]}: Cannot save standard outputs because they have been redirected before"
        exit 1
    fi
    exec 3>&1
    exec 4>&2
    trap restore_standard_outputs EXIT
}

# Params: $1 => logfile to write to
function redirect_outputs_to_logfile {
    if [ "$OUTPUTS_REDIRECTED" == "true" ]; then
        echo "[ERROR]: ${FUNCNAME[0]}: Cannot redirect standard outputs because they have been redirected before"
        exit 1
    fi
    LOGFILE=$1
    if [ -z "$LOGFILE" ]; then
        echo "[ERROR]: ${FUNCNAME[0]}: logfile empty [$LOGFILE]"
    fi
    if [ ! -f "$LOGFILE" ]; then
        touch "$LOGFILE"
    fi
    if [ ! -f "$LOGFILE" ]; then
        echo "[ERROR]: ${FUNCNAME[0]}: creating logfile [$LOGFILE]"
        exit 1
    fi
    save_standard_outputs
    exec 1>>"${LOGFILE%.log}.log"
    exec 2>&1
    OUTPUTS_REDIRECTED="true"
}

# "private" function used by save_standard_outputs()
function restore_standard_outputs {
    if [ "$OUTPUTS_REDIRECTED" == "false" ]; then
        echo "[ERROR]: ${FUNCNAME[0]}: Cannot restore standard outputs because they have NOT been redirected"
        exit 1
    fi
    exec 1>&-  # closes FD 1 (logfile)
    exec 2>&-  # closes FD 2 (logfile)
    exec 2>&4  # restore stderr
    exec 1>&3  # restore stdout
    OUTPUTS_REDIRECTED="false"
}
Example of usage inside script:
echo "this goes to stdout"
redirect_outputs_to_logfile /tmp/one.log
echo "this goes to logfile"
restore_standard_outputs
echo "this goes to stdout"
For tcsh, I have to use the following command:
command >& file
If you use command &> file instead, tcsh gives an "Invalid null command" error.

testing a program in bash

I wrote a program in c++ and now I have a binary. I have also generated a bunch of tests for testing. Now I want to automate the process of testing with bash. I want to save three things in one execution of my binary:
execution time
exit code
output of the program
Right now I am stuck with a script that only tests that the binary does its job and returns 0, and doesn't save any of the information I mentioned above. My script looks like this:
#!/bin/bash
if [ "$#" -ne 2 ]; then
echo "Usage: testScript <binary> <dir_with_tests>"
exit 1
fi
binary="$1"
testsDir="$2"
for test in $(find $testsDir -name '*.txt'); do
testname=$(basename $test)
encodedTmp=$(mktemp /tmp/encoded_$testname)
decodedTmp=$(mktemp /tmp/decoded_$testname)
printf 'testing on %s...\n' "$testname"
if ! "$binary" -c -f $test -o $encodedTmp > /dev/null; then
echo 'encoder failed'
rm "$encodedTmp"
rm "$decodedTmp"
continue
fi
if ! "$binary" -u -f $encodedTmp -o $decodedTmp > /dev/null; then
echo 'decoder failed'
rm "$encodedTmp"
rm "$decodedTmp"
continue
fi
if ! diff "$test" "$decodedTmp" > /dev/null ; then
echo "result differs with input"
else
echo "$testname passed"
fi
rm "$encodedTmp"
rm "$decodedTmp"
done
I want to save the output of $binary in a variable instead of sending it to /dev/null. I also want to record the run time using the Bash time keyword.
Since you asked for the output to be saved in a shell variable, I tried answering this without using output redirection, which would save output in (temporary) text files that then have to be cleaned up.
Saving the command output
You can replace this line
if ! "$binary" -c -f $test -o $encodedTmp > /dev/null; then
with
if ! output=$("$binary" -c -f $test -o $encodedTmp); then
Using command substitution saves the program output of $binary in the shell variable. Command substitution (combined with shell variable assignment) also allows exit codes of programs to be passed up to the calling shell so the conditional if statement will continue to check if $binary executed without error.
You can view the program output by running echo "$output".
Saving the time
Without a more sophisticated form of inter-process communication, there's no way for a shell that's a sub-process of another shell to change the variables or the environment of its parent process, so the only way I could save both the time and the program output was to combine them in one variable:
if ! time_output=$( (time "$binary" -c -f $test -o $encodedTmp) 2>&1 ); then
Since time prints its profiling information to stderr, I use the parentheses operator to run the command in a subshell whose stderr can be redirected to stdout. (Note the underscore in the variable name: hyphens are not valid in Bash variable names.) The program output and the output of time can be viewed by running echo "$time_output", which should return something similar to:
<program output>
<blank line>
real 0m0.041s
user 0m0.000s
sys 0m0.046s
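If you later need just the timing back out of the combined variable, one option (a sketch relying on time's default output format) is:
real_time=$(printf '%s\n' "$time_output" | awk '/^real/ { print $2 }')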
You can get the exit status in bash by using $? and print it out with echo $?.
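Since $? is overwritten by every subsequent command, save it right away if you need it later (a sketch using the question's variables):
"$binary" -c -f "$test" -o "$encodedTmp" > /dev/null
status=$?   # capture before anything else runs
echo "exit code: $status"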
And to catch the output of time, you could use something like this:
{ time sleep 1 ; } 2> time.txt
Or you can save the output of the program and execution time at once
(time ls) > out.file 2>&1
You can save output to a file using output redirection. Just change first /dev/null line:
if ! "$binary" -c -f $test -o $encodedTmp > /dev/null; then
to
if ! "$binary" -c -f $test -o $encodedTmp > prog_output; then
then change second and third /dev/null lines respectively:
if ! "$binary" -u -f $encodedTmp -o $decodedTmp >> prog_output; then
if ! diff "$test" "$decodedTmp" >> prog_output; then
To measure program execution time, put
start=$(date +%s)
on the first line
then
end=$(date +%s)
echo "Execution time in seconds: " $((end-start)) >> prog_output
at the end.
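date +%s only has one-second resolution; if your system has GNU date (an assumption about your platform), %N gives sub-second precision:
start=$(date +%s%3N)   # milliseconds since the epoch (GNU extension)
# ... run the program under test ...
end=$(date +%s%3N)
echo "Execution time in ms: $((end-start))" >> prog_output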

Why can't I redirect stderr from within a bash -c command line?

I'm trying to log the time for the execution of a command, so I'm doing that by using the builtin time command in bash. I also wish to redirect the stderr and stdout to a logfile at the same time. However, it doesn't seem to be working as the stderr just spills out onto my terminal.
Here is the command:
rm -rf doxygen
mkdir doxygen
bash -c 'time "/cygdrive/d/Program Files/doxygen/bin/doxygen.exe" Doxyfile > doxygen/doxygen.log 1>&2' genfile > doxygen/time 1>&2 &
What am I doing wrong here?
You are using 1>&2 instead of 2>&1.
With the lengths of names reduced, you're trying to run:
bash -c 'time doxygen Doxyfile > doxygen.log 1>&2' genfile > doxygen.time 1>&2 &
The > doxygen.log sends standard output to the file; the 1>&2 then changes your mind and sends standard output to the same place that standard error is going. Similarly with the outer pair of redirections.
If you used:
bash -c 'time doxygen Doxyfile > doxygen.log 2>&1' genfile > doxygen.time 2>&1 &
then you send standard error to the same place that standard output goes — twice.
Incidentally, do you realize that the genfile serves as the $0 for the script run by bash -c '…'? I'm not convinced it is needed in your script. To see this, try:
bash -c 'echo 0=$0; echo 1=$1; echo 2=$2' genfile jarre oxygene
When run, this produces:
0=genfile
1=jarre
2=oxygene

equivalent of pipefail in dash shell

Is there some similar option in dash shell corresponding to pipefail in bash?
Or any other way of getting a non-zero status if one of the commands in the pipe fails (but without exiting on it, as set -e would).
To make it clearer, here is an example of what I want to achieve:
In a sample debugging makefile, my rule looks like this:
set -o pipefail; gcc -Wall $$f.c -o $$f 2>&1 | tee err; if [ $$? -ne 0 ]; then vim -o $$f.c err; ./$$f; fi;
Basically, on error it opens the error file and the source file, and when there is no error it runs the program. Saves me some typing. The above snippet works well in bash, but my newer Ubuntu system uses dash, which doesn't seem to support the pipefail option.
I basically want a FAILURE status if the first part of the below group of commands fail:
gcc -Wall $$f.c -o $$f 2>&1 | tee err
so that I can use that for the if statement.
Are there any alternate ways of achieving it?
Thanks!
I ran into this same issue: the bash set -o pipefail option and ${PIPESTATUS[0]} both failed in the dash shell (/bin/sh) on the Docker image I'm using. I'd rather not modify the image or install another package, but the good news is that using a named pipe worked perfectly for me =)
mkfifo named_pipe
tee err < named_pipe &
gcc -Wall $$f.c -o $$f > named_pipe 2>&1
echo $?
See this answer for where I found the info: https://stackoverflow.com/a/1221844/431296
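A slightly fuller sketch of the same idea with cleanup, in POSIX sh (the $f variable mirrors the question's makefile; file names are illustrative):
fifo=./status_fifo              # illustrative name
mkfifo "$fifo"
tee err < "$fifo" &
gcc -Wall "$f.c" -o "$f" > "$fifo" 2>&1
status=$?                       # gcc's own status, unaffected by tee
wait                            # let tee finish draining the pipe
rm -f "$fifo"
if [ "$status" -eq 0 ]; then
    ./"$f"
else
    vim -o "$f.c" err
fi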
The Q.'s sample problem requires:
I basically want a FAILURE status if the first part of the ... group of commands fail:
Install moreutils, and try the mispipe util, which returns the exit status of the first command in a pipe:
sudo apt install moreutils
Then:
if mispipe "gcc -Wall $$f.c -o $$f 2>&1" "tee err" ; then \
    ./$$f; \
else \
    vim -o $$f.c err; \
fi
While 'mispipe' does the job here, it is not an exact duplicate of the bash shell's pipefail; from man mispipe:
Note that some shells, notably bash, do offer a
pipefail option, however, that option does not
behave the same since it makes a failure of any
command in the pipeline be returned, not just the
exit status of the first.

Suppress "nothing to be done for 'all' "

I am writing a short shell script which calls 'make all'. It's not critical, but is there a way I can suppress the message saying 'nothing to be done for all' if that is the case? I am hoping to find a flag for make which suppresses this (not sure there is one), but an additional line or 2 of code would work too.
FYI I'm using bash.
Edit: to be more clear, I only want to suppress messages saying there is nothing to be done. Otherwise, I want to display the output.
You can make "all" a PHONY target (if it isn't already) which has the real target as a prerequisite, and does something inconspicuous:
.PHONY: all
all: realTarget
	@echo > /dev/null
I would like to improve on the previous solution, just to make it a little bit more efficient...:)
.PHONY: all
all: realTarget
	@:
@true would also work but is a little slower than @: (I've done some performance tests). In any case, both are quite a bit faster than "@echo > /dev/null"...
The flag -s silences make: make -s all
EDIT: I originally answered that the flag -q silenced make. It works for me, although the manpage specifies -s, --silent, --quiet as the valid flags.
The grep solution:
{ make all 2>&1 1>&3 | grep -v 'No rule to make target `all' >&2; } 3>&1
The construct 2>&1 1>&3 sends make's stderr into the pipe and make's stdout to fd 3. grep then reads the diverted stderr from the pipe, removes the offending line, and sends its output to stderr. The trailing 3>&1 is what makes fd 3 point at the original stdout for the whole group.
2022-11-17, a response to @Pryftan's comment:
Ignoring the minor error that I used the wrong message text in the grep pattern, let's create a function that outputs some stuff:
make() {
echo "this is stdout"
echo "this is stderr" >&2
printf 'oops, No rule to make target `%s`, not at all' "$1" >&2
}
Testing my solution:
$ { make foobar 2>&1 1>&3 | grep -v 'No rule to make target `all' >&2; } 3>&1
this is stdout
this is stderr
oops, No rule to make target `foobar`, not at all
$ { make all 2>&1 1>&3 | grep -v 'No rule to make target `all' >&2; } 3>&1
this is stdout
this is stderr
Looks good so far.
What about without the braces?
$ make all 2>&1 1>&3 | grep -v 'No rule to make target `all' >&2 3>&1
bash: 3: Bad file descriptor
In this case, we'd need to explicitly create fd 3
$ exec 3>&1; make all 2>&1 1>&3 | grep -v 'No rule to make target `all' >&2 3>&1
this is stdout
this is stderr
What is it about the braces? I think they delay evaluation of the contents, which allows the trailing 3>&1 to be processed first; that in turn makes the inner 1>&3 valid.
