Why does
echo 'foo' 1>&2 2>/dev/null
produce output? foo is sent to file descriptor 2, and file descriptor 2 is redirected to /dev/null, so shouldn't there be no output?
It's about the order in which the redirections are performed: when bash sees several redirections, it processes them from left to right.
The first redirection duplicates stdout onto the current target of stderr (the tty). When stderr is then changed to /dev/null, stdout still points at stderr's previous target (the tty).
Swap the two redirections and it will work:
echo 'foo' 2>/dev/null 1>&2
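A quick way to see the difference at an interactive prompt (expected behaviour, assuming stderr is the terminal):
$ echo 'foo' 1>&2 2>/dev/null   # fd 1 copies fd 2 (the tty) first, so foo still appears
foo
$ echo 'foo' 2>/dev/null 1>&2   # fd 2 goes to /dev/null first, then fd 1 copies it
$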
For more details, see http://www.catonmat.net/blog/bash-one-liners-explained-part-three/
Related
I have the following two bash scripts:
one.bash:
#!/bin/bash
echo "I don't control this line of output to stdout"
echo "I don't control this line of output to stderr" >&2
echo "I do control this line of output to fd 5" >&5
callone.bash:
#!/bin/bash
# here I try to merge stdout and stderr into stderr.
# then direct fd5 into stdout.
bash ./one.bash 1>&2 5>&1
When I run it like this:
bash callone.bash 2>stderr.txt >stdout.txt
The stderr.txt file looks like this:
I don't control this line of output to stdout
I don't control this line of output to stderr
I do control this line of output to fd 5
and stdout is empty.
I would like the "do control" line to be output to only stdout.txt.
The restrictions on making changes are:
I can change anything in callone.bash.
I can change the line in one.bash that I control.
I can add an exec in one.bash related to file descriptor 5.
I have to run the script as indicated.
[EDIT] The use case for this is: I have a script that runs all kinds of other scripts, which can write to stderr and stdout. But I need to ensure that the user sees only the well-controlled message, so I send that message to fd 5, and everything else (stdout & stderr) goes to the log.
Redirections happen in order.
Once you apply 1>&2 you've made fd 1 a copy of fd 2.
So when you then apply 5>&1 you are redirecting fd 5 to where fd 1 points now (not where it pointed when the command started).
You need to invert the two redirections:
bash ./one.bash 5>&1 1>&2
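With the invocation in callone.bash changed to the above, running the script as required should give this (a sketch of the expected result):
$ bash callone.bash 2>stderr.txt >stdout.txt
$ cat stdout.txt
I do control this line of output to fd 5
$ cat stderr.txt
I don't control this line of output to stdout
I don't control this line of output to stderr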
I'm putting together a complex pipeline, where I want to include stderr in the program output for recordkeeping purposes but I also want errors to remain on stderr so I can spot problems.
I found this question that asks how to direct stdout+stderr to a file and still get stderr on the terminal; it's close, but I don't want to redirect stdout to a file yet: The program's output will be consumed by other scripts, so I'd like it to remain on stdout (and same for stderr). So, to summarize:
Script produces output in fd 1, errors in fd 2.
I want the calling program to rearrange things so that output+errors appear in fd 1, errors in fd 2.
Also, errors should be interleaved with output (as much as their own buffering allows), not saved and added at the end.
Due-diligence notes: Capturing stderr is easy enough with 2>&1. Saving and viewing stdout is easy enough by piping through tee. I also know how to divert stdout to a file and direct stderr through a pipe: command 2>&1 1>fileA | tee fileB. But how do I duplicate stderr and put stdout back in fd 1?
As test to generate both stdout and stderr, let's use the following:
{ echo out; echo err >&2; }
The following code demonstrates how both stdout and stderr can be sent to the next step in the pipeline while also sending stderr to the terminal:
$ { echo out; echo err >&2; } 2> >(tee /dev/stderr) | cat >f
err
$ cat f
out
err
How it works
2>
This redirects stderr to the (pseudo) file which follows.
>(tee /dev/stderr)
This is process substitution, and it acts as a pseudo-file that receives whatever is written to stderr. Everything it receives is passed to the tee command, which sends it both to stderr and to stdout.
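You can see the pseudo-file that process substitution creates at a prompt; the exact path varies by system (/dev/fd/63 is typical on Linux), and tr here is just a stand-in consumer:
$ echo >(true)
/dev/fd/63
$ echo hi > >(tr a-z A-Z)
HI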
Consider ./my_script >/var/log/my_log
A single echo statement from this script must still reach the original stdout, despite the redirection.
How could this be accomplished?
So we have a clever little program:
cat print2stdout
#!/bin/sh
echo some words secret and sent to null
echo some words to stdout > /dev/fd/3
The last line sends the echo to file descriptor 3, which must already be open.
When invoking, we map fd 3 to stdout, then redirect the script's own stdout to /dev/null.
The result looks like this:
./print2stdout 3>&1 >/dev/null
some words to stdout
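If the script might also be invoked without fd 3 supplied, a small guard can fall back to the regular stdout (a sketch, not part of the original program; >&3 is effectively the portable equivalent of > /dev/fd/3):
#!/bin/sh
# if the caller did not open fd 3 for us, point it at stdout
if ! { true >&3; } 2>/dev/null; then
  exec 3>&1
fi
echo some words secret and sent to null
echo some words to stdout >&3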
Just use /dev/tty, which refers to your controlling terminal regardless of any redirections.
#!/bin/sh
echo this line goes to the possibly redirected stdout
echo this line shows up on the screen > /dev/tty
When combining stderr with stdout, why does 2>&1 need to come before a | (pipe) but after a > myfile (redirect to file)?
To redirect stderr to stdout for file output:
echo > myfile 2>&1
To redirect stderr to stdout for a pipe:
echo 2>&1 | less
My assumption was that I could just do:
echo | less 2>&1
and it would work, but it doesn't. Why not?
A pipeline is a |-delimited list of commands. Any redirections you specify apply to the constituent commands (simple or compound), but not to the pipeline as a whole. Each pipe chains one command's stdout to the stdin of the next by implicitly applying a redirect to each subshell before any redirects associated with a command are evaluated.
cmd 2>&1 | less
First stdout of the first subshell is redirected to the pipe from which less is reading. Next, the 2>&1 redirect is applied to the first command. Redirecting stderr to stdout works because stdout is already pointing at the pipe.
cmd | less 2>&1
Here, the redirect applies to less. Less's stdout and stderr both presumably started out pointed at the terminal, so 2>&1 in this case has no effect.
If you want a redirect to apply to an entire pipeline, to group multiple commands as part of a pipeline, or to nest pipelines, then use a command group (or any other compound command):
{ { cmd1 >&3; cmd2; } 2>&1 | cmd3; } 3>&2
This might be a typical example. The end result: cmd1's and cmd2's stderr go to cmd3; cmd2's stdout goes to cmd3; and cmd1's stdout, cmd3's stderr, and cmd3's stdout go to the terminal.
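Here is a runnable version of that pattern, with simple echo stand-ins of my own for cmd1, cmd2, and cmd3:
{ { echo cmd1-out >&3; echo cmd2-out; echo cmd2-err >&2; } 2>&1 | sed 's/^/cmd3 saw: /'; } 3>&2
This prints cmd1-out on the terminal's stderr, while "cmd3 saw: cmd2-out" and "cmd3 saw: cmd2-err" arrive on stdout through the pipe.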
If you use the Bash-specific |& pipe, things get stranger, because each of the pipeline's stdout redirects still occur first, but the stderr redirect actually comes last. So for example:
f() { echo out; echo err >&2; }; f >/dev/null |& cat
Now, counterintuitively, all output is hidden. First stdout of f goes to the pipe, next stdout of f is redirected to /dev/null, and finally, stderr is redirected to stdout (/dev/null still).
I recommend never using |& in Bash -- it's used here for demonstration.
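For reference, the bash manual defines |& as shorthand for 2>&1 |, with the implicit stderr redirection performed after the command's own redirections, so the example above behaves like:
f() { echo out; echo err >&2; }; f >/dev/null 2>&1 | cat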
To add to ormaaj's answer:
The reason you need to specify redirection operators in the proper order is that they're evaluated from left to right. Consider these command lists:
# print "hello" on stdout and "world" on stderr
{ echo hello; echo world >&2; }
# Redirect stdout to the file "out"
# Then redirect stderr to the file "err"
{ echo hello; echo world >&2; } > out 2> err
# Redirect stdout to the file "out"
# Then redirect stderr to the (already redirected) stdout
# Result: all output is stored in "out"
{ echo hello; echo world >&2; } > out 2>&1
# Redirect stderr to the current stdout
# Then redirect stdout to the file "out"
# Result: "world" is displayed, and "hello" is stored in "out"
{ echo hello; echo world >&2; } 2>&1 > out
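Running the last two at an interactive prompt shows the difference (expected transcript):
$ { echo hello; echo world >&2; } > out 2>&1
$ cat out
hello
world
$ { echo hello; echo world >&2; } 2>&1 > out
world
$ cat out
hello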
My answer comes from understanding file descriptors. Each process has a set of file descriptors: handles to the files it has opened. By default, number 0 is stdin, number 1 is stdout, and number 2 is stderr.
The i/o redirectors > and < connect by default to their most reasonable file descriptors, stdout and stdin. If you reroute stdout to a file (as with foo > bar), then on starting the process foo, the file bar is opened for writing and hooked onto file descriptor 1. If you want only stderr in bar, you'd use foo 2> bar, which opens the file bar and hooks it onto stderr.
Now the i/o redirector 2>&1. I normally read that as 'point file descriptor 2 at the same place as file descriptor 1'. While reading the command line from left to right, you can do things like foo 1>bar 2>&1 1>/dev/tty. With this, file descriptor 1 is set to the file bar, file descriptor 2 is set to the same as 1 (hence bar), and after that, file descriptor 1 is set to /dev/tty. The running foo sends its output to /dev/tty and its stderr to the file bar.
Now the pipeline comes in: it does not alter the file descriptors themselves; it connects them between processes: stdout of the left process to stdin of the next. Stderr is passed through untouched. There is no operator for piping stderr alone; if you want the pipeline to work on stderr only, you must first move stderr onto stdout and send the original stdout elsewhere, e.g. foo 2>&1 1>/dev/null | bar.
With the above, if you use foo 2>&1 | bar, then since stderr of foo is rerouted to stdout of foo, both stdout and stderr of foo arrive at the stdin of bar.
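A quick check of that claim, counting the lines that reach the pipe reader:
$ { echo out; echo err >&2; } 2>&1 | wc -l
2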
From this perldoc page,
To capture a command's STDERR and STDOUT together:
$output = `cmd 2>&1`;
To capture a command's STDOUT but discard its STDERR:
$output = `cmd 2>/dev/null`;
To capture a command's STDERR but discard its STDOUT (ordering is important here):
$output = `cmd 2>&1 1>/dev/null`;
To exchange a command's STDOUT and STDERR in order to capture the STDERR but leave its STDOUT to come out the old STDERR:
$output = `cmd 3>&1 1>&2 2>&3 3>&-`;
I do not understand how 3 and 4 work, and I am not too sure what I understand about 1 and 2 is right. Below is what I understand. Please correct me where I am wrong.
I know that 0, 1 and 2 symbolize STDIN, STDOUT and STDERR.
1. Redirect 2 to 1, so that both of them use the same stream now (the & before 1 makes sure that STDERR does not get redirected to a file named 1 instead).
2. Redirect 2 (STDERR) to the null stream, so that it gets discarded.
3. I do not understand this one. Shouldn't it be just
$output = `cmd 1>/dev/null`;
Also, if the aim is to get the STDERR messages at STDOUT, won't 1>/dev/null redirect everything to /dev/null?
4. What is happening here? What is stream 3? Is it like a temporary variable?
Really, none of this is Perl -- all of this is handled by the shell that you're invoking by using the backticks operator. So your best reading is man sh, or the Shell chapter of the Unix standard.
In short, though, for #4:
3>&1: Open FD 3 to point to where stdout currently points.
1>&2: Reopen stdout to point to where stderr currently points.
2>&3: Reopen stderr to point to where FD 3 currently points, which is where stdout pointed before the previous step was completed. Now stdout and stderr have been successfully swapped.
3>&-: Close FD 3 because it's not needed anymore.
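A shell demonstration of the swap, using command substitution in place of Perl's backticks (expected behaviour: stderr is captured while stdout comes out on the terminal via the old stderr):
$ output=$( { echo out; echo err >&2; } 3>&1 1>&2 2>&3 3>&- )
out
$ echo "$output"
err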
Though documented in the perldocs, the redirection is all standard shell redirection. You understand 1 and 2 correctly.
3) Backticks capture only STDOUT, so the original STDOUT must be discarded, and STDERR must be sent to STDOUT.
4) cmd 3>&1 1>&2 2>&3 3>&- is equivalent to
var tmp = STDOUT;
STDOUT = STDERR;
STDERR = tmp;
delete tmp;
Normally we have this:
1-->STDOUT
2-->STDERR
2>&1 redirects file descriptor fd2 to fd1:
1 --> STDOUT
     /
2 --'
2>/dev/null redirects fd2 to /dev/null.
1-->STDOUT
2-->/dev/null
2>&1 1>/dev/null redirects fd2 to fd1, and then redirects fd1 to /dev/null:
     /dev/null
    /
1 -'    STDOUT
       /
2 ----'
3>&1 1>&2 2>&3 3>&-
first points a new fd 3 at wherever fd 1 is currently pointing (STDOUT),
then redirects fd 1 to wherever fd 2 is currently pointing (STDERR),
then redirects fd 2 to wherever fd 3 is currently pointing (STDOUT),
and finally closes fd 3 (3>&- means "close file descriptor 3").
The whole thing effectively swaps fd1 and fd2. fd3 acted as a temporary variable.
1 --, ,-- STDOUT
     X
2 --' '-- STDERR
See the docs for more information on IO redirection.
3. No. The ordering matters: first the original stdout is thrown away, then stderr is moved onto stdout.
4. fd 3 is just another file descriptor, like the first three. Processes can typically have hundreds of file descriptors open at once; the exact limit is set by the operating system.