I was wondering how to redirect stderr to multiple outputs. I tried it with this script, but I couldn't get it to work quite right. The first file should get both stdout and stderr, and the second should get only the errors.
perl script.pl &> errorTestnormal.out &2> errorTest.out
Is there a better way to do this? Any help would be much appreciated. Thank you.
perl script.pl 2>&1 >errorTestnormal.out | tee -a errorTestnormal.out > errorTest.out
Will do what you want.
This is a bit messy; let's go through it step by step.
First, 2>&1 says that what used to go to STDERR will now go to STDOUT. The subtle point is that the pipe is set up before the redirections are processed, so at that moment STDOUT is the pipe into tee, not the file.
Then, >errorTestnormal.out says that what used to go to STDOUT will now go to errorTestnormal.out.
So now STDOUT gets printed to a file, and STDERR goes into the pipe. We want to put STDERR into two different files, which we can do with tee: with -a, tee appends the text it is given to a file, and it also echoes it to its own STDOUT.
We use tee -a to append to errorTestnormal.out, so it ends up containing all the STDOUT and STDERR output of script.pl.
Then we write the STDOUT of tee (which carries the STDERR from script.pl) into errorTest.out.
After this, errorTestnormal.out has all the STDOUT output plus all the STDERR output (the two streams can interleave, depending on buffering), and errorTest.out contains only the STDERR output.
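To see this in action, here is a sketch with a one-liner standing in for script.pl (the Perl snippet and file names are just for illustration):

perl -e 'print STDOUT "normal\n"; print STDERR "oops\n";' \
    2>&1 >errorTestnormal.out | tee -a errorTestnormal.out > errorTest.out
cat errorTestnormal.out   # normal, then oops
cat errorTest.out         # oops only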
I had to mess around with this for a while as well. To get stderr into both files while putting stdout into only one of them (e.g. stderr into errors.log and output.log, stdout into just output.log), AND in the order they happen, this command is better:
((sh test.sh 2>&1 1>&3 | tee errors.log) 3>&1 | tee output.log) > /dev/null 2>&1
The last > /dev/null 2>&1 can be omitted if you want the stdout and stderr to still be output onto the screen.
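A quick way to check the behaviour is a stand-in test.sh that writes one line to each stream (the heredoc below is just for illustration):

cat > test.sh <<'EOF'
echo "this goes to stdout"
echo "this goes to stderr" >&2
EOF
((sh test.sh 2>&1 1>&3 | tee errors.log) 3>&1 | tee output.log) > /dev/null 2>&1
cat errors.log   # only the stderr line
cat output.log   # both lines, in the order they were written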
I guess that with the second ">" in your attempt you are trying to send the error output of errorTestnormal.out (and not that of script.pl) to errorTest.out.
I want to start a program and redirect stdout and stderr to one file, and stderr to another. I have read a lot about using tee, but that does not seem to work for cmd.
This already works but I need stderr in a second file as well.
program >> combined.log 2>&1
I have tried something like this, but it didn't work:
program >> combined.log 2>&1 2>> error.log
It would be nice to have your cake and eat it too, but that is not always possible in cmd. The stderr log can be captured on its own, then appended to the combined log:
program >>combined.log 2>err.log
type err.log >>combined.log
I'm writing a script to backup a database. I have the following line:
mysqldump --user=$dbuser --password=$dbpswd \
--host=$host $mysqldb | gzip > $filename
I want to assign the stderr to a variable, so that the script can send me an email letting me know what happened if something goes wrong. I've found solutions to redirect stderr to stdout, but I can't do that, as stdout is already being sent (via gzip) to a file. How can I separately store stderr in a variable, $result?
Try redirecting stderr to stdout and using $() to capture that. In other words:
VAR=$( (your-command-including-redirect) 2>&1 )
Note the space after the first "$(": without it, the shell can mis-parse $(( as the start of an arithmetic expansion. Since your command redirects stdout somewhere, it shouldn't interfere with stderr. There might be a cleaner way to write it, but that should work.
Edit:
This really does work. I've tested it:
#!/bin/bash
BLAH=$( (
    (
        echo out >&1
        echo err >&2
    ) 1>log
) 2>&1 )
echo "BLAH=$BLAH"
will print BLAH=err and the file log contains out.
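Applied to the mysqldump line from the question, the same pattern might look like this (a sketch; the mail notification and $admin_email are hypothetical additions, not part of the question):

result=$( (mysqldump --user="$dbuser" --password="$dbpswd" \
    --host="$host" "$mysqldb" | gzip > "$filename") 2>&1 )
if [ -n "$result" ]; then
    # $admin_email is a placeholder for wherever the report should go
    echo "$result" | mail -s "mysqldump failed" "$admin_email"
fi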
For any generic command in Bash, you can do something like this:
{ error=$(command 2>&1 1>&$out); } {out}>&1
Regular output appears normally; anything sent to stderr is captured in $error (quote it as "$error" when using it, to preserve newlines). To capture stdout to a file as well, just add a redirection at the end, for example:
{ error=$(ls /etc/passwd /etc/bad 2>&1 1>&$out); } {out}>&1 >output
Breaking it down, reading from the outside in, it:
creates a file descriptor $out for the whole block, duplicating stdout
captures the stdout of the whole command in $error (but see below)
the command itself redirects stderr to stdout (which gets captured above) then stdout to the original stdout from outside the block, so only the stderr gets captured
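A minimal demonstration of the pattern (this assumes bash 4.1 or later, where the automatic {out} descriptor allocation was introduced):

{ error=$( { echo out; echo err >&2; } 2>&1 1>&$out); } {out}>&1
echo "captured: $error"   # prints: captured: err, while "out" appeared on normal stdout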
You can make the stdout that the command substitution captures available under another file descriptor (e.g. 3) and then redirect stderr to that:
result=$( { mysqldump --user=$dbuser --password=$dbpswd \
           --host=$host $mysqldb 2>&3 | gzip > $filename; } 3>&1 )
The 3>&1 on the { } group duplicates the command substitution's stdout to file descriptor 3. It has to go on the group rather than on mysqldump itself, because inside the pipeline fd 1 of mysqldump is already the pipe into gzip. Then 2>&3 redirects mysqldump's stderr to that saved descriptor, so its error output is the only thing captured in $result. Finally, mysqldump's stdout flows through the pipe and on through gzip into $filename, unaffected by file numbers 2 and 3.
Edit: Updated the command to redirect stderr from the mysqldump command and not gzip; I was too quick in my first answer.
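To convince yourself the plumbing is right, you can substitute a stand-in command that writes to both streams (the file name /tmp/out.gz is arbitrary):

result=$( { { echo data; echo failure >&2; } 2>&3 | gzip > /tmp/out.gz; } 3>&1 )
echo "result=$result"   # result=failure
zcat /tmp/out.gz        # data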
dd writes to both stdout and stderr:
$ dd if=/dev/zero count=50 > /dev/null
50+0 records in
50+0 records out
the two streams are independent and separately redirectable:
$ dd if=/dev/zero count=50 2> countfile | wc -c
25600
$ cat countfile
50+0 records in
50+0 records out
$ mail -s "countfile for you" thornate < countfile
if you really needed a variable:
$ variable=$(cat countfile)
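You can also skip the intermediate file and capture dd's stderr directly; stdout is discarded here, as in the first example (note that 2>&1 must come before >/dev/null):

$ variable=$(dd if=/dev/zero count=50 2>&1 >/dev/null)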
I'm putting together a complex pipeline, where I want to include stderr in the program output for recordkeeping purposes but I also want errors to remain on stderr so I can spot problems.
I found this question that asks how to direct stdout+stderr to a file and still get stderr on the terminal; it's close, but I don't want to redirect stdout to a file yet: The program's output will be consumed by other scripts, so I'd like it to remain on stdout (and same for stderr). So, to summarize:
Script produces output in fd 1, errors in fd 2.
I want the calling program to rearrange things so that output+errors appear in fd 1, errors in fd 2.
Also, errors should be interleaved with output (as much as their own buffering allows), not saved and added at the end.
Due-diligence notes: Capturing stderr is easy enough with 2>&1. Saving and viewing stdout is easy enough by piping through tee. I also know how to divert stdout to a file and direct stderr through a pipe: command 2>&1 1>fileA | tee fileB. But how do I duplicate stderr and put stdout back in fd 1?
As test to generate both stdout and stderr, let's use the following:
{ echo out; echo err >&2; }
The following code demonstrates how both stdout and stderr can be sent to the next step in the pipeline while also sending stderr to the terminal:
$ { echo out; echo err >&2; } 2> >(tee /dev/stderr) | cat >f
err
$ cat f
out
err
How it works
2>
This redirects stderr to the (pseudo) file which follows.
>(tee /dev/stderr)
This is process substitution, and it acts as a pseudo-file that receives input from stderr. Any input it receives is sent to the tee command, which sends it both to stderr and to stdout.
The "|" pipe operator connects the stdout of one process to the stdin of another. Is there any way to create a pipe that connects the stderr of one process to the stdin of another keeping the stdout alive in my terminal? Searching on the internet gave me no information at all...
Thank you in advance,
Michalis.
If you're happy to mix stdout and stderr, then you can first redirect stderr to stdout and then pipe that:
theprogram 2>&1 | otherprogram
If you don't want stdout, you can kill that one:
theprogram 2>&1 1> /dev/null | otherprogram
If you do want to store the original stdout as well, then you have to redirect it either to a file (in place of /dev/null), or to another file descriptor that you opened previously with exec, as sketched below.
(Unfortunately there is no direct "pipe this file descriptor" syntax like 2|. That would have been handy.)
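For the exec variant, something like this keeps stdout on the terminal while only stderr goes down the pipe (theprogram and otherprogram as above):

exec 3>&1                                  # fd 3 now duplicates the terminal's stdout
theprogram 2>&1 1>&3 3>&- | otherprogram   # stderr -> pipe, stdout -> terminal
exec 3>&-                                  # close fd 3 when done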
You can get this effect with bash's process substitution feature:
somecommand 2> >(errorprocessor)
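For instance, to pass only stderr through a filter while stdout stays on the terminal (the sed prefix is just an illustration of an error processor):

ls /etc/passwd /nonexistent 2> >(sed 's/^/ERR: /' >&2)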
You could use named pipes:
mkfifo /my/pipe
error-handler </my/pipe &
do-something 2>/my/pipe
This should keep stdin and stdout of "do-something" in your terminal and redirect stderr to /my/pipe, which is read by "error-handler".
(I hope this works; I have no bash to test right now.)
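A slightly fuller sketch of the same idea, with cleanup (do-something and error-handler stand in for your own commands):

mkfifo /my/pipe
error-handler < /my/pipe &    # reads whatever do-something writes to stderr
do-something 2> /my/pipe      # stdin and stdout stay on the terminal
wait                          # let error-handler drain the pipe before cleaning up
rm /my/pipe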
You may also swap the stdout and stderr streams, i.e. stdout becomes the new stderr and stderr becomes the new stdout.
# after the swap, the original stderr (ls's error messages) goes down the pipe
# and gets upcased, while the listings arrive on stderr unmodified
ls -ld / xxx ~/.bashrc yyy 3>&1 1>&2 2>&3 3>&- | tr '[:lower:]' '[:upper:]'
# block original stdout by closing fd 1; only stderr reaches tr
ls -ld / xxx ~/.bashrc yyy 2>&1 1>&- | tr '[:lower:]' '[:upper:]'
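Roughly what the first command prints, assuming GNU ls (exact message wording varies by platform):

LS: CANNOT ACCESS 'XXX': NO SUCH FILE OR DIRECTORY
LS: CANNOT ACCESS 'YYY': NO SUCH FILE OR DIRECTORY

with the unmodified listings for / and ~/.bashrc arriving separately on stderr.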
Most of us know that to redirect STDERR to STDOUT we do 2>&1
We also know about FILE redirection using ">" and process redirection using "|"
What I always wondered about was the combination of the above two.
If you want to redirect both STDERR and STDOUT of prog1 to prog2 you place the 2>&1 prior to the |prog2 pipe. On the other hand, if you are redirecting STDERR and STDOUT of prog1 to a file (file.txt), the 2>&1 goes after the > file.txt.
So I know HOW to do it; I am just wondering WHY it is done like that. To me it seems inconsistent, but I may be looking at it the wrong way.
Thanks
Redirections are processed in order, left to right.
So if you do
progname 2>&1 1>out.txt
That diverts stderr from the program to the current destination of the program's stdout, which is the stdout stream of the shell, and diverts stdout of the program to out.txt.
If you do
progname 1>out.txt 2>&1
That diverts the stdout of the program to out.txt, then diverts the stderr from the program to the current destination of the program's stdout, which is out.txt.
It helps if you don't think of a pipe as a redirection. Using 2>&1, you're redirecting stderr to stdout. Only stdout goes through a pipe, and the pipe is attached before your redirections are processed; so if you redirect stdout to a file first, nothing is left to go through the pipe.
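A compact illustration of both placements (prog stands in for any command that writes to both streams):

prog 2>&1 | cat              # stderr joins stdout in the pipe, because fd 1 already
                             # points at the pipe when 2>&1 is processed
prog > file.txt 2>&1         # fd 1 goes to the file first, then fd 2 follows it there
prog > file.txt 2>&1 | cat   # nothing reaches cat: stdout went to the file before
                             # stderr followed it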