The "|" pipe operator connects the stdout of one process to the stdin of another. Is there any way to create a pipe that connects the stderr of one process to the stdin of another keeping the stdout alive in my terminal? Searching on the internet gave me no information at all...
Thank you in advance,
Michalis.
If you're happy to mix stdout and stderr, then you can first redirect stderr to stdout and then pipe that:
theprogram 2>&1 | otherprogram
If you don't want stdout, you can kill that one:
theprogram 2>&1 1> /dev/null | otherprogram
If you do want to store the original stdout as well, then you have to redirect it either to a file (in place of /dev/null), or to another file descriptor that you opened previously with exec.
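For instance, a minimal sketch using exec that keeps stdout on the terminal while piping only stderr (theprogram and otherprogram are placeholders):
exec 3>&1                            # save the current stdout in fd 3
theprogram 2>&1 1>&3 | otherprogram  # stderr -> pipe, stdout -> saved fd 3
exec 3>&-                            # close fd 3 again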
(Unfortunately there is no direct "pipe this file descriptor" syntax like 2|. That would have been handy.)
You can get this effect with bash's process substitution feature:
somecommand 2> >(errorprocessor)
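For example (a sketch; the sed filter is just an arbitrary stand-in for your error processor, and the two streams may interleave since the substitution runs asynchronously):
ls /etc/passwd /nonexistent 2> >(sed 's/^/stderr: /' >&2)
The listing stays on stdout, while the error message comes back out on stderr with a prefix.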
You could use named pipes:
mkfifo /my/pipe
error-handler </my/pipe &
do-something 2>/my/pipe
This should keep stdin and stdout of do-something in your terminal and redirect stderr to /my/pipe, which is read by error-handler.
(I hope this works; I have no bash at hand to test.)
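A slightly fuller sketch with cleanup (paths and commands are placeholders):
mkfifo /tmp/errpipe              # create the named pipe
grep -i error </tmp/errpipe &    # hypothetical error-handler reading in the background
do-something 2>/tmp/errpipe      # stderr goes into the pipe; stdin/stdout stay on the terminal
wait                             # let the handler finish draining the pipe
rm /tmp/errpipe                  # remove the pipe again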
You may also swap the stdout and stderr streams, i.e. stdout becomes the new stderr and stderr becomes the new stdout.
# after the swap, only the original stderr stream gets upcased
ls -ld / xxx ~/.bashrc yyy 3>&1 1>&2 2>&3 3>&- | tr '[:lower:]' '[:upper:]'
# block original stdout by closing fd 1
ls -ld / xxx ~/.bashrc yyy 2>&1 1>&- | tr '[:lower:]' '[:upper:]'
I'm writing a script to backup a database. I have the following line:
mysqldump --user=$dbuser --password=$dbpswd \
--host=$host $mysqldb | gzip > $filename
I want to assign the stderr to a variable, so that it will send an email to myself letting me know what happened if something goes wrong. I've found solutions to redirect stderr to stdout, but I can't do that as the stdout is already being sent (via gzip) to a file. How can I separately store stderr in a variable $result ?
Try redirecting stderr to stdout and using $() to capture that. In other words:
VAR=$( (your-command-including-redirect) 2>&1)
Since your command redirects stdout somewhere, it shouldn't interfere with stderr. There might be a cleaner way to write it, but that should work. (Note the space after $(: without it, bash may try to parse $(( as the start of an arithmetic expansion.)
Edit:
This really does work. I've tested it:
#!/bin/bash
BLAH=$( (
(
echo out >&1
echo err >&2
) 1>log
) 2>&1)
echo "BLAH=$BLAH"
will print BLAH=err and the file log contains out.
For any generic command in Bash, you can do something like this:
{ error=$(command 2>&1 1>&$out); } {out}>&1
Regular output appears normally, anything to stderr is captured in $error (quote it as "$error" when using it, to preserve newlines). To send stdout to a file instead, put the file redirection first, so that $out is duplicated from it:
{ error=$(ls /etc/passwd /etc/bad 2>&1 1>&$out); } >output {out}>&1
Breaking it down, reading from the outside in, it:
creates a file descriptor $out for the whole block, duplicating stdout
captures the stdout of the whole command in $error (but see below)
the command itself redirects stderr to stdout (which gets captured above) then stdout to the original stdout from outside the block, so only the stderr gets captured
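A run might look like this (requires bash 4.1+ for the {varname} redirection; the exact wording of the error message varies by system):
$ { error=$(ls /etc/passwd /etc/bad 2>&1 1>&$out); } {out}>&1
/etc/passwd
$ echo "$error"
ls: cannot access '/etc/bad': No such file or directory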
You can save the stdout reference from before it is redirected in another file number (e.g. 3) and then redirect stderr to that:
result=$({ mysqldump --user="$dbuser" --password="$dbpswd" \
    --host="$host" "$mysqldb" 2>&3 | gzip > "$filename"; } 3>&1)
The pipe connects mysqldump's stdout to gzip before any per-command redirections are evaluated, so the original stdout has to be saved on a surrounding group: 3>&1 on the { ... } duplicates the command substitution's stdout (the capture pipe) onto file descriptor 3 while file descriptor 1 still points there. Inside the group, 2>&3 then sends mysqldump's stderr to that saved descriptor, so only the error output lands in $result, while the dump itself flows through the pipe into gzip and on into $filename (the redirection on gzip does not affect mysqldump's descriptors).
Edit: Updated the command to redirect stderr from the mysqldump command and not gzip, I was too quick in my first answer.
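You can test the same plumbing with stand-ins for mysqldump and the dump file (hypothetical names):
result=$({ { echo DATA; echo OOPS >&2; } 2>&3 | gzip > /tmp/dump.gz; } 3>&1)
echo "$result"        # prints: OOPS
zcat /tmp/dump.gz     # prints: DATA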
dd writes to both stdout and stderr:
$ dd if=/dev/zero count=50 > /dev/null
50+0 records in
50+0 records out
the two streams are independent and separately redirectable:
$ dd if=/dev/zero count=50 2> countfile | wc -c
25600
$ cat countfile
50+0 records in
50+0 records out
$ mail -s "countfile for you" thornate < countfile
if you really needed a variable:
$ variable=$(cat countfile)
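If you don't need the countfile at all, the same idea captures stderr into the variable directly (a sketch; the exact record-count lines vary by dd version, and the zeroes themselves are discarded):
$ variable=$(dd if=/dev/zero count=50 2>&1 >/dev/null)
$ echo "$variable"
50+0 records in
50+0 records out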
I'm looking for a way to "forward" stdin to stdout in a pipe, while in that step something is written to stderr. The example should clarify this:
echo "before.." | >&2 echo "some logging..."; [[forward stdin>stdout]] | cat
This should put "before.." to stdout, meanwhile "some logging..." to stderr.
How can I do that? Or is there maybe a completely different approach to this?
Here's a solution based on your comments:
cat ~/.bashrc | tee >( cat -n >&2 ) | sort
cat ~/.bashrc represents the start of your pipeline, producing some data.
tee duplicates its input, writing to both stdout and any files listed as arguments.
>( ... ) is a bash construct that runs ... as a pipe subcommand but replaces itself by a filename (something that tee can open and write to).
cat -n represents modifying the input (adding line numbers).
>&2 redirects stdout to stderr.
sort represents the end of your pipeline (normal processing of the unchanged input).
Putting it all together, bash will
run cat ~/.bashrc, putting the contents of ~/.bashrc on stdout
... which is piped to the stdin of tee
run cat -n with stdout redirected to stderr and stdin redirected to a new pipe
run tee /dev/fd/63 (where /dev/fd/63 represents the other end of the cat -n pipe)
this is where it all comes together: tee reads its input and writes it to both its stdout and to the other pipe that goes to cat -n (and from there to stderr)
finally tee's stdout goes into sort
Redirections follow the simple command they refer to, thus
echo "before" >&1
echo "some logging..." >&2
should do the trick, if I understand what you're trying to do.
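If you literally want a middle step of a pipeline that passes stdin through unchanged while logging to stderr, a command group does it (a minimal sketch):
echo "before.." | { echo "some logging..." >&2; cat; } | cat
Here "before.." ends up on stdout and "some logging..." on stderr.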
It seems that newer versions of bash have the &> operator, which (if I understand correctly) redirects both stdout and stderr to a file (&>> appends to the file instead, as Adrian clarified).
What's the simplest way to achieve the same thing, but instead piping to another command?
For example, in this line:
cmd-doesnt-respect-difference-between-stdout-and-stderr | grep -i SomeError
I'd like the grep to match on content both in stdout and stderr (effectively, have them combined into one stream).
Note: this question is asking about piping, not redirecting - so it is not a duplicate of the question it's currently marked as a duplicate of.
(Note that &>>file appends to a file while &> would redirect and overwrite a previously existing file.)
To combine stdout and stderr you redirect the latter to the former using 2>&1, i.e. you point stderr (file descriptor 2) at stdout (file descriptor 1). In the examples below, 1>&2 does the opposite, sending the second echo's output to stderr. Without combining, the streams stay separate, e.g.:
$ { echo "stdout"; echo "stderr" 1>&2; } | grep -v std
stderr
$
stdout goes to stdout, stderr goes to stderr. grep only sees stdout, hence stderr prints to the terminal.
On the other hand:
$ { echo "stdout"; echo "stderr" 1>&2; } 2>&1 | grep -v std
$
After writing to both stdout and stderr, 2>&1 redirects stderr back to stdout and grep sees both strings on stdin, thus filters out both.
You can read more about redirection here.
Regarding your example (POSIX):
cmd-doesnt-respect-difference-between-stdout-and-stderr 2>&1 | grep -i SomeError
or, using >=bash-4:
cmd-doesnt-respect-difference-between-stdout-and-stderr |& grep -i SomeError
Bash has a shorthand for 2>&1 |, namely |&, which pipes both stdout and stderr (see the manual):
cmd-doesnt-respect-difference-between-stdout-and-stderr |& grep -i SomeError
This was introduced in Bash 4.0, see the release notes.
When combining stderr with stdout, why does 2>&1 need to come before a | (pipe) but after a > myfile (redirect to file)?
To redirect stderr to stdout for file output:
echo > myfile 2>&1
To redirect stderr to stdout for a pipe:
echo 2>&1 | less
My assumption was that I could just do:
echo | less 2>&1
and it would work, but it doesn't. Why not?
A pipeline is a |-delimited list of commands. Any redirections you specify apply to the constituent commands (simple or compound), but not to the pipeline as a whole. Each pipe chains one command's stdout to the stdin of the next by implicitly applying a redirect to each subshell before any redirects associated with a command are evaluated.
cmd 2>&1 | less
First stdout of the first subshell is redirected to the pipe from which less is reading. Next, the 2>&1 redirect is applied to the first command. Redirecting stderr to stdout works because stdout is already pointing at the pipe.
cmd | less 2>&1
Here, the redirect applies to less. Less's stdout and stderr both presumably started out pointed at the terminal, so 2>&1 in this case has no effect.
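You can see the difference with a quick test (wc -l counts only what arrives on its stdin; the err line goes straight to the terminal, so its position relative to the count may vary):
$ { echo out; echo err >&2; } 2>&1 | wc -l
2
$ { echo out; echo err >&2; } | wc -l 2>&1
err
1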
If you want a redirect to apply to an entire pipeline, to group multiple commands as part of a pipeline, or to nest pipelines, then use a command group (or any other compound command):
{ { cmd1 >&3; cmd2; } 2>&1 | cmd3; } 3>&2
Might be a typical example. The end result: cmd1's and cmd2's stderr plus cmd2's stdout go to cmd3; cmd1's stdout goes to the terminal's stderr (via the saved fd 3); and cmd3's stdout and stderr go to the terminal.
If you use the Bash-specific |& pipe, things get stranger, because each of the pipeline's stdout redirects still occur first, but the stderr redirect actually comes last. So for example:
f() { echo out; echo err >&2; }; f >/dev/null |& cat
Now, counterintuitively, all output is hidden. First stdout of f goes to the pipe, next stdout of f is redirected to /dev/null, and finally, stderr is redirected to stdout (/dev/null still).
I recommend never using |& in Bash -- it's used here for demonstration.
To add to ormaaj's answer:
The reason you need to specify redirection operators in the proper order is that they're evaluated from left to right. Consider these command lists:
# print "hello" on stdout and "world" on stderr
{ echo hello; echo world >&2; }
# Redirect stdout to the file "out"
# Then redirect stderr to the file "err"
{ echo hello; echo world >&2; } > out 2> err
# Redirect stdout to the file "out"
# Then redirect stderr to the (already redirected) stdout
# Result: all output is stored in "out"
{ echo hello; echo world >&2; } > out 2>&1
# Redirect stderr to the current stdout
# Then redirect stdout to the file "out"
# Result: "world" is displayed, and "hello" is stored in "out"
{ echo hello; echo world >&2; } 2>&1 > out
My answer is based on understanding file descriptors. Each process has a table of file descriptors: handles to the files it has open. By default, number 0 is stdin, number 1 is stdout and number 2 is stderr.
The i/o redirectors > and < connect by default to their most reasonable file descriptors, stdout and stdin. If you re-route stdout to a file (as with foo > bar), then on starting the process foo, the file bar is opened for writing and hooked onto file descriptor number 1. If you want only stderr in bar, you'd use foo 2> bar, which opens the file bar and hooks it to stderr.
Now the i/o redirector 2>&1. I normally read that as 'point file descriptor 2 at the same place as file descriptor 1'. While reading the command line from left to right, you can do things like foo 1>bar 2>&1 1>/dev/tty. With this, file descriptor 1 is set to the file bar, file descriptor 2 is set to the same as 1 (hence bar), and after that, file descriptor 1 is set to /dev/tty. The running foo sends its output to /dev/tty and its stderr to the file bar.
Now the pipeline comes in: it does not alter a command's file descriptors by itself, but it does connect them between the processes: the stdout of the left process to the stdin of the next. Stderr is passed through untouched. Note that there is no foo 2| bar syntax for piping stderr directly; to pipe only stderr, you first have to move it onto stdout, e.g. foo 2>&1 1>/dev/null | bar.
With the above, if you use foo 2>&1 | bar, since stderr of foo is re-routed to stdout of foo, both stdout and stderr of foo arrive at the stdin of bar.
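A quick check of the left-to-right example above, with a command group standing in for foo (assumes a terminal is available at /dev/tty):
{ echo out; echo err >&2; } 1>bar 2>&1 1>/dev/tty
After this, "out" appears on the terminal and the file bar contains "err".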
I was wondering how to redirect stderr to multiple outputs. I tried it with this script, but I couldn't get it to work quite right. The first file should have both stdout and stderr, and the 2nd should just have errors.
perl script.pl &> errorTestnormal.out &2> errorTest.out
Is there a better way to do this? Any help would be much appreciated. Thank you.
perl script.pl 2>&1 >errorTestnormal.out | tee -a errorTestnormal.out > errorTest.out
Will do what you want.
This is a bit messy; let's go through it step by step.
First, 2>&1 says that what used to go to STDERR will now go to STDOUT.
Then >errorTestnormal.out sends what used to go to STDOUT into errorTestnormal.out instead.
So now STDOUT gets printed to a file, and STDERR gets printed to STDOUT (the pipe). We want to put STDERR into 2 different files, which we can do with tee. tee writes the text it's given to a file (appending, with -a) and also echoes it to STDOUT.
We use tee to append to errorTestnormal.out, so it now contains all the STDOUT and STDERR output of script.pl.
Then, we write STDOUT of tee (which contains STDERR from script.pl) into errorTest.out
After this, errorTestnormal.out has all the STDOUT output, and then all the STDERR output. errorTest.out contains only the STDERR output.
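You can check the behaviour with a stand-in for script.pl (a hypothetical one-liner; with a real program the interleaving inside errorTestnormal.out depends on timing and buffering, since two processes write to it):
{ echo normal; echo oops >&2; } 2>&1 >errorTestnormal.out | tee -a errorTestnormal.out > errorTest.out
cat errorTestnormal.out   # normal, then oops
cat errorTest.out         # oops only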
I had to mess around with this for a while as well. In order to get stderr in both files, while only putting stdout into a single file (e.g. stderr into errors.log and output.log and then stdout into just output.log) AND in the order that they happen, this command is better:
((sh test.sh 2>&1 1>&3 | tee errors.log) 3>&1 | tee output.log) > /dev/null 2>&1
The trailing > /dev/null 2>&1 can be omitted if you want stdout and stderr to still be output onto the screen.
I guess that with the second > you are trying to send the error output of errorTestnormal.out (and not that of script.pl) to errorTest.out.