IO Redirection - Swapping stdout and stderr - shell

Given a shell script:
#!/bin/sh
echo "I'm stdout";
echo "I'm stderr" >&2;
Is there a way to call that script such that only stderr would print out, when the last part of the command is 2>/dev/null, i.e.
$ > sh myscript.sh SOME_OPTIONS_HERE 2>/dev/null
I'm stderr
Or, alternatively:
$ > sh myscript.sh SOME_OPTIONS_HERE >/dev/null
I'm stdout
It's a question at the end of a set of lecture slides, but after nearly a day working at this, I'm nearly certain it's some sort of typo. Pivoting doesn't work. 2>&- doesn't work. I'm out of ideas!

% (sh myscript.sh 3>&2 2>&1 1>&3) 2>/dev/null
I'm stderr
% (sh myscript.sh 3>&2 2>&1 1>&3) >/dev/null
I'm stdout
Explanation of 3>&2 2>&1 1>&3:
3>&2 means make fd 3 (file descriptor 3) a copy of fd 2 (stderr). It copies the file descriptor; it doesn't duplicate the stream the way tee does.
2>&1 means that fd 2 of sh myscript.sh becomes a copy of its fd 1 (stdout). Now, when myscript writes to its stderr (its fd 2), we receive it on stdout (our fd 1).
1>&3 means that fd 1 of sh myscript.sh becomes a copy of fd 3 (the saved stderr). Now, when myscript writes to its stdout (its fd 1), we receive it on stderr (our fd 2).
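To convince yourself that the streams really are swapped, you can pipe the swapped command and watch which line travels through the pipe (a minimal sketch reusing myscript.sh from above; the relative order of the two lines on screen may vary):
% sh myscript.sh 3>&2 2>&1 1>&3 3>&- | sed 's/^/via the pipe: /'
I'm stdout
via the pipe: I'm stderr
Only the line the script wrote to its stderr reaches the pipe, because the swap has made it the command's stdout.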

For the sake of completeness, based on a comment by @200_success above, it is probably better to move file descriptor 3 using 1>&3-:
$ (sh myscript.sh 3>&2 2>&1 1>&3-) 2>/dev/null
I'm stderr
$ (sh myscript.sh 3>&2 2>&1 1>&3-) >/dev/null
I'm stdout
Instead of swapping file descriptors on a per-process basis, using exec you can swap stdout & stderr for all following commands launched by the current shell:
$ (exec 3>&2 2>&1 1>&3- ; sh myscript.sh ; sh myscript.sh ) 2>/dev/null
I'm stderr
I'm stderr
$ (exec 3>&2 2>&1 1>&3- ; sh myscript.sh ; sh myscript.sh ) >/dev/null
I'm stdout
I'm stdout

Moving a file descriptor (1>&3-) is not portable; not all POSIX shell implementations support it. It is a ksh93-ism and bash-ism (more info here: https://unix.stackexchange.com/questions/65000/practical-use-for-moving-file-descriptors).
It is also possible to close FD 3 after performing the redirections, instead of moving it.
ls 3>&2 2>&1 1>&3 3>&-
prints the contents of the current working directory to stderr.
The syntax 3>&- or 3<&- closes file descriptor 3.
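Applied back to the original question, closing fd 3 instead of moving it gives the same behaviour while staying within plain POSIX sh (a sketch, reusing myscript.sh from the question):
$ (sh myscript.sh 3>&2 2>&1 1>&3 3>&-) 2>/dev/null
I'm stderr
$ (sh myscript.sh 3>&2 2>&1 1>&3 3>&-) >/dev/null
I'm stdout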

The Bash Hackers wiki can be very useful for this kind of thing.
There's a way of doing it which is not mentioned among these answers, so I'll put in my two cents.
The semantics of >&N, for numeric N, is: redirect to the target of file descriptor N. The word target is important, since the descriptor can change its target later; but once we have copied that target, we don't care. That's why the order in which we declare the redirections is relevant.
So you can do it as follows:
./myscript.sh 2>&1 >/dev/null
That means:
1. Redirect stderr to stdout's target, that is the stdout output stream. stderr has now copied stdout's target.
2. Change stdout's target to /dev/null. This won't affect stderr, since it "copied" the target before we changed it.
No need for a third file descriptor.
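A quick way to verify this (a minimal sketch, again using myscript.sh) is to pipe the result; only the line originally written to stderr reaches the pipe, because it now occupies stdout:
$ ./myscript.sh 2>&1 >/dev/null | tr '[:lower:]' '[:upper:]'
I'M STDERR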
It is interesting that I can't simply do >&- instead of >/dev/null. That actually closes stdout, so I'm getting an error (on stderr's target, which is the actual stdout, of course :D):
line 3: echo: write error: Bad file descriptor
You can see that order is relevant by trying to swap the redirections:
./myscript.sh >/dev/null 2>&1
This will not work, because:
1. We set the target of stdout to /dev/null.
2. We set the target of stderr to stdout's target, that is /dev/null again.

Related

What does exec 3>&1 4>&2 1>>/tmp/output.log 2>&1 do in a bash script?

I saw something like this in a bash script
exec 3>&1 4>&2 1>>/tmp/output.log 2>&1
As far as I understand, stdout is redirected to a new fd 3 and stderr to fd 4. What do fds 1 and 2 hold then, and what does it mean to redirect 1>> to a file as well as 2>&1?
I see that both the output and the errors from the script are written to /tmp/output.log.
I want the script to write stdout and stderr to /tmp/output.log as well as display them in the console while it is running. How should the redirection look?
exec 3>&1 4>&2 1>>/tmp/output.log 2>&1 does the following:
make file descriptor 3 a copy of stdout
make file descriptor 4 a copy of stderr
redirect-append stdout to file /tmp/output.log
make stderr a copy of stdout
File descriptors 3 and 4 are presumably used here as temporary storage, so that the initial stdout and stderr can be restored with exec 1>&3 2>&4.
The third redirection sends stdout to /tmp/output.log in append mode. The last redirection makes stderr a copy of stdout, that is, also sends stderr to /tmp/output.log in append mode. Note that the order of these 4 redirections matters.
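Put together, the pattern looks roughly like this (a sketch of the idea; the echo lines are hypothetical, added only for illustration):
#!/usr/bin/env bash
# Save stdout/stderr in fds 3/4, then send both streams to the log in append mode.
exec 3>&1 4>&2 1>>/tmp/output.log 2>&1
echo "this line goes to the log"
echo "so does this one" >&2
# Restore the original streams and close the temporary descriptors.
exec 1>&3 2>&4 3>&- 4>&-
echo "back on the terminal"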
I want the script to write stdout and stderr to /tmp/output.log as well as display them in the console while it is running. How should the redirection look?
Redirect stdout to a tee -a /tmp/output.log command using process substitution. Then redirect stderr to stdout. Example:
$ cat foo
#!/usr/bin/env bash
# redirect
exec 1> >( tee -a /tmp/output.log ) 2>&1
ls -z
echo "foo"
$ ./foo
ls: invalid option -- 'z'
Try 'ls --help' for more information.
foo
$ cat /tmp/output.log
ls: invalid option -- 'z'
Try 'ls --help' for more information.
foo
If at one point in your script you want to restore stdout and stderr to their original state you can use the same trick as in the example you found:
$ cat foo
#!/usr/bin/env bash
# redirect
exec 3>&1 4>&2 1> >( tee -a /tmp/output.log ) 2>&1
ls -z
echo "foo"
...
# restore
exec 1>&3 2>&4
...

What's the difference between `command > output` and `command 2>&1 > output`?

I'm somewhat familiar with the common way of redirecting stdout to a file, and then redirecting stderr to stdout.
If I run a command such as ls > output.txt 2>&1, my guess is that under the hood, the shell is executing something like the following C code:
close(1);
open("output.txt", O_WRONLY|O_CREAT|O_TRUNC, 0666);  // lowest free fd, so it becomes fd 1
close(2);
dup2(1, 2);
Since fd 1 has already been replaced with output.txt, anything printed to stderr will be redirected to output.txt.
But, if I run ls 2>&1 > output.txt, I'm guessing that this is instead what happens:
close(2);
dup2(1, 2);   // fd 2 now points wherever fd 1 currently points
close(1);
open("output.txt", O_WRONLY|O_CREAT|O_TRUNC, 0666);  // becomes fd 1
But, since the shell prints out both stdout and stderr by default, is there any difference between ls 2>&1 > output.txt and ls > output.txt? In both cases, stdout will be redirected to output.txt, while stderr will be printed to the console.
With ls >output.txt, the stderr from ls goes to the stderr inherited from the calling process. In contrast, with ls 2>&1 >output.txt, the stderr of ls is sent to the stdout of the calling process.
Let's try this with an example script that prints a line of output to each of stdout and stderr:
$ cat pr.sh
#!/bin/sh
echo "to stdout"
echo "to stderr" 1>&2
$ sh pr.sh >/dev/null
to stderr
$ sh pr.sh 2>/dev/null
to stdout
Now if we insert "2>&1" into the first command line, nothing appears different:
$ sh pr.sh 2>&1 >/dev/null
to stderr
But now let's run both of those inside a context where the inherited stdout is going someplace other than the console:
$ (sh pr.sh 2>&1 >/dev/null) >/dev/null
$ (sh pr.sh >/dev/null) >/dev/null
to stderr
The second command still prints because the inherited stderr is still going to the console. But the first prints nothing because the "2>&1" redirects the inner stderr to the outer stdout, which is going to /dev/null.
Although I've never used this construction, conceivably it could be useful in a situation where (in a script, most likely) you want to run a program, send its stdout to a file, but forward its stderr on to the caller as if it were "normal" output, perhaps because that program is being run along with some other programs and you want the first program's "error" output to be part of the same stream as the other programs' "normal" output. (Perhaps both programs are compilers, and you want to capture all the error messages, but they disagree about which stream errors are sent to.)
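As a hedged sketch of that last scenario (compilerA and compilerB are hypothetical commands; assume compilerA reports errors on stdout while compilerB reports them on stderr but should keep its normal listing in a file):
{
  compilerA file1.src
  compilerB file2.src 2>&1 >compilerB_listing.txt
} > all_errors.log
Inside the group, compilerB's stderr is forwarded to the group's stdout before its own stdout is sent to the listing file, so all_errors.log ends up holding the error messages from both compilers in one stream.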

Why doesn't this redirect to /dev/null?

Perhaps it's just late and I'm having brain farts but shouldn't this
(>&2 echo dying) 2>&1 >/dev/null
produce no output in a normal shell?
Similarly if this is /tmp/x.pl
#!/usr/bin/perl
die "dying"
Then why does this
#> perl /tmp/x.pl 2>&1 >/dev/null
output
dying at /tmp/x.pl line 2.
?
Redirections are processed left-to-right. So you're doing 2>&1 before you do >/dev/null. This redirects FD 2 to the original connection of FD 1 (presumably the terminal), then redirects FD 1 to /dev/null. FD 2 is still connected to the terminal.
To redirect both stdout and stderr to /dev/null, you have to use
(>&2 echo dying) >/dev/null 2>&1
The order in which file descriptor redirections are done is very important.
Just switch the orders:
(>&2 echo dying) >/dev/null 2>&1
perl /tmp/x.pl >/dev/null 2>&1
While you are doing:
(>&2 echo dying) 2>&1 >/dev/null
the STDOUT of the subshell (( )) is first redirected to its STDERR (that's the >&2 inside the subshell). Then, in the parent (main) shell, 2>&1 redirects STDERR to STDOUT, which is still pointing at the terminal at that moment, so the STDERR from the subshell gets printed; only after that does >/dev/null redirect STDOUT to /dev/null, which takes effect from that point of evaluation on, not before.
A similar note applies to the second case too.
So, always maintain order while manipulating file descriptors; the order of evaluation is from left to right.

How do I copy stderr without stopping it writing to the terminal?

I want to write a shell script that runs a command, writing its stderr to my terminal as it arrives. However, I also want to save stderr to a variable, so I can inspect it later.
How can I achieve this? Should I use tee, or a subshell, or something else?
I've tried this:
# Create FD 3 that can be used so stdout still comes through
exec 3>&1
# Run the command, piping stdout to normal stdout, but saving stderr.
{ ERROR=$( "$@" 2>&1 1>&3 ) ; }
echo "copy of stderr: $ERROR"
However, this doesn't write stderr to the console, it only saves it.
I've also tried:
{ "$@"; } 2> >(tee stderr.txt >&2 )
echo "stderr was:"
cat stderr.txt
However, I don't want the temporary file.
I often want to do this, and find myself reaching for /dev/stderr, but there can be problems with this approach; for example, Nix build scripts give "permission denied" errors if they try to write to /dev/stdout or /dev/stderr.
After reinventing this wheel a few times, my current approach is to use process substitution as follows:
myCmd 2> >(tee >(cat 1>&2))
Reading this from the outside in:
This will run myCmd, leaving its stdout as-is. The 2> will redirect the stderr of myCmd to a different destination; the destination here is >(tee >(cat 1>&2)) which will cause it to be piped into the command tee >(cat 1>&2).
The tee command duplicates its input (in this case, the stderr of myCmd) to its stdout and to the given destination. The destination here is >(cat 1>&2), which will cause the data to be piped into the command cat 1>&2.
The cat command just passes its input straight to stdout. The 1>&2 redirects stdout to go to stderr.
Reading from the inside out:
The cat 1>&2 command redirects its stdin to stderr, so >(cat 1>&2) acts like /dev/stderr.
Hence tee >(cat 1>&2) duplicates its stdin to both stdout and stderr, acting like tee /dev/stderr.
We use 2> >(tee >(cat 1>&2)) to get 2 copies of stderr: one on stdout and one on stderr.
We can use the copy on stdout as normal, for example storing it in a variable. We can leave the copy on stderr to get printed to the terminal.
We can combine this with other redirections if we like, e.g.
# Create FD 3 that can be used so stdout still comes through
exec 3>&1
# Run the command, redirecting its stdout to the shell's stdout,
# duplicating its stderr and sending one copy to the shell's stderr
# and using the other to replace the command's stdout, which we then
# capture
{ ERROR=$( "$@" 2> >(tee >(cat 1>&2)) 1>&3 ) ; }
echo "copy of stderr: $ERROR"
Credit goes to @Etan Reisner for the fundamentals of the approach; however, it's better to use tee with /dev/stderr rather than /dev/tty in order to preserve normal behavior (if you send to /dev/tty, the outside world doesn't see it as stderr output, and can neither capture nor suppress it):
Here's the full idiom:
exec 3>&1 # Save original stdout in temp. fd #3.
# Redirect stderr to *captured* stdout, send stdout to *saved* stdout, also send
# captured stdout (and thus stderr) to original stderr.
errOutput=$("$#" 2>&1 1>&3 | tee /dev/stderr)
exec 3>&- # Close temp. fd.
echo "copy of stderr: $errOutput"

How to redirect stdout+stderr to one file while keeping streams separate?

Redirecting stdout+stderr such that both get written to a file while still outputting to stdout is simple enough:
cmd 2>&1 | tee output_file
But then now both stdout/stderr from cmd are coming on stdout. I'd like to write stdout+stderr to the same file (so ordering is preserved assuming cmd is single threaded) but then still be able to also separately redirect them, something like this:
some_magic_tee_variant combined_output cmd > >(command-expecting-stdout) 2> >(command-expecting-stderr)
So combined_output contains the both with order preserved, but the command-expecting-stdout only gets stdout and command-expecting-stderr only gets stderr. Basically, I want to log stdout+stderr while still allowing stdout and stderr to be separately redirected and piped. The problem with the tee approach is it globs them together. Is there a way to do this in bash/zsh?
From what I understand, this is what you are looking for. First I made a little script that writes to stdout and stderr. It looks like this:
$ cat foo.sh
#!/bin/bash
echo foo 1>&2
echo bar
Then I ran it like this:
$ ./foo.sh 2> >(tee stderr | tee -a combined) 1> >(tee stdout | tee -a combined)
foo
bar
The results in my bash look like this:
$ cat stderr
foo
$ cat stdout
bar
$ cat combined
foo
bar
Note that the -a flag is required so the tees don't overwrite each other's content.
{ { cmd | tee out >&3; } 2>&1 | tee err >&2; } 3>&1
Or, to be pedantic:
{ { cmd 3>&- | tee out >&3 2> /dev/null; } 2>&1 | tee err >&2 3>&- 2> /dev/null; } 3>&1
Note that it's futile to try to preserve order. It is basically impossible. The only solution would be to modify "cmd" or use some LD_PRELOAD or gdb hack.
Order can indeed be preserved. Here's an example which captures the standard output and error, in the order in which they are generated, to a logfile, while displaying only the standard error on any terminal screen you like. Tweak to suit your needs.
1. Open two windows (shells)
2. Create some test files
touch /tmp/foo /tmp/foo1 /tmp/foo2
3. In window1:
mkfifo /tmp/fifo
</tmp/fifo cat - >/tmp/logfile
4. Then, in window2:
(ls -l /tmp/foo /tmp/nofile /tmp/foo1 /tmp/nofile /tmp/nofile; echo successful test; ls /tmp/nofile1111) 2>&1 1>/tmp/fifo | tee /tmp/fifo 1>/dev/pts/1
Where /dev/pts/1 can be whatever terminal display you want. The subshell runs some "ls" and "echo" commands in sequence, some succeed (providing stdout) and some fail (providing stderr) in order to generate a mingled stream of output and error messages, so that you can verify the correct ordering in the log file.
Here's how I do it:
exec 3>log ; example_command 2>&1 1>&3 | tee -a log ; exec 3>&-
Worked Example
bash$ exec 3>log ; { echo stdout ; echo stderr >&2 ; } 2>&1 1>&3 | \
tee -a log ; exec 3>&-
stderr
bash$ cat log
stdout
stderr
Here's how that works:
exec 3>log sets up file descriptor 3 to redirect into the file called log, until further notice.
example_command to make this a working example, I used { echo stdout ; echo stderr >&2 ; }. Or you could use ls /tmp doesnotexist to provide output instead.
We need to jump ahead to the pipe | at this point, because bash sets it up first. The pipe redirects file descriptor 1 into it, so STDOUT is now going into the pipe.
Now we can go back to our left-to-right interpretation: 2>&1 says errors from the program are to go to where STDOUT currently points, i.e. into the pipe we just set up.
1>&3 means STDOUT is redirected into file descriptor 3, which we earlier set up to output to the log file. So STDOUT from the command just goes into the log file, not to the terminal's STDOUT.
tee -a log takes its input from the pipe (which, you'll remember, is now the errors from the command), and outputs it to STDOUT and also appends it to the log file.
exec 3>&- closes the file descriptor 3.
Victor Sergienko's comment is what worked for me; adding exec to the front of it makes this work for the entire script (instead of having to put it after individual commands):
exec 2> >(tee -a output_file >&2) 1> >(tee -a output_file)
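For instance, a minimal sketch of using it near the top of a script (output_file and the echo lines are placeholders):
#!/usr/bin/env bash
# From here on, everything written to stdout or stderr is appended to
# output_file and still shown on the console.
exec 2> >(tee -a output_file >&2) 1> >(tee -a output_file)
echo "normal output"           # terminal and output_file
echo "an error message" >&2    # terminal (stderr) and output_file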
