How do I log stderr and stdout synchronously, but print stderr to screen only? - bash

This is a task that I try to do pretty often.
I want to log both stderr and stdout to a log file, but I only want to print stderr to the console.
I've tried with tee, but once I've merged stderr and stdout using "2>&1", I can't keep stdout off the screen anymore since both streams are merged.
Here is a simple example of what I tried
./dosomething.sh | tee -a log 2>&1
Now I have both stderr and stdout going to the log and to the screen.
Any ideas?
Based on some reading on this site, this question has been asked before:
Write STDOUT & STDERR to a logfile, also write STDERR to screen
And there is also a very similar question here:
Save stdout, stderr and stdout+stderr synchronously
But neither of them is able to redirect both stdout and stderr to a log and stderr to the screen while stdout and stderr are written to the log file synchronously.

I was able to get this working in bash:
(./tmp.sh 2> >(tee >(cat >&2) >&1)) > tmp.log
This does not work correctly in zsh (the prompt does not wait for the process to exit), and does not work at all in dash. A more portable solution may be to write a simple C program to do it.
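To sanity-check it, here is a minimal sketch; the tmp.sh test script below is an assumption made up for the demonstration, not part of the original answer:
#!/bin/bash
# tmp.sh - writes one line to each stream
echo "this is stdout"
echo "this is stderr" >&2
Running (./tmp.sh 2> >(tee >(cat >&2) >&1)) > tmp.log should leave only "this is stderr" on the terminal, while tmp.log ends up containing both lines.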

I managed to get this working with this script in bash.
mkfifo stdout
mkfifo stderr
rm -f out
# Read the stderr fifo: tee appends it to the log and echoes it to the terminal.
cat stderr | tee -a out &
# Read the stdout fifo: append it to the log only.
cat stdout >> out &
(echo "stdout";
grep;
echo "another stdout";
echo "again stdout";
stat) 2> stderr > stdout
rm -f stdout
rm -f stderr
The order of the output is preserved, and with this script the process ends correctly.
Note: I used grep and stat without parameters to generate output on stderr.

Related

How run a command, show stderr on screen, and at the same time, save stderr in a file

I want to run a command, show stderr on screen and, at the same time, save stderr in a file.
In bash (or zsh), you can do this by redirecting stderr to tee via a process substitution:
somecommand 2> >(tee errors.log)
This command shows stdout and stderr on screen and saves stderr to /tmp/errors:
$ ( ls file_do_not_exist /bin/true 2>&1 1>&3 | tee /tmp/errors 1>&2; ) 3>&1
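Reading the same command with the descriptors spelled out may help; this annotated copy is only a restatement of the line above:
( ls file_do_not_exist /bin/true 2>&1 1>&3 | tee /tmp/errors 1>&2; ) 3>&1
# 3>&1 : inside the subshell, fd 3 points at the original stdout (the terminal)
# 2>&1 : ls's stderr now goes where its stdout currently points, i.e. the pipe to tee
# 1>&3 : ls's stdout is then moved to fd 3, so it goes straight to the terminal
# tee /tmp/errors 1>&2 : tee saves the piped stderr to /tmp/errors and re-emits it on stderr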

Redirect stdout to file and tee stderr to the same file

I am running a command which will (very likely) output text to both stderr and stdout. I want to save both stderr and stdout to the same file, but I only want stderr printing to the terminal.
How can I get this to work? I've tried mycommand 1>&2 | tee file.txt >/dev/null but that doesn't print anything to the terminal.
If You Don't Need Perfect Ordering
Using two separate copies of tee, both writing to the same file in append mode but only one of them subsequently forwarding content to /dev/null, will get you where you need to be:
mycommand \
  2> >(tee -a file.txt >&2) \
  > >(tee -a file.txt >/dev/null)
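A quick way to try this out; the brace group below just stands in for mycommand and is an assumption for illustration:
{ echo "to stdout"; echo "to stderr" >&2; } \
  2> >(tee -a file.txt >&2) \
  > >(tee -a file.txt >/dev/null)
# The terminal shows only "to stderr"; file.txt receives both lines,
# though not necessarily in their original order.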
If You Do Need Perfect Ordering
See Separately redirecting and recombining stderr/stdout without losing ordering

What's the difference between `command > output` and `command 2>&1 > output`?

I'm somewhat familiar with the common way of redirecting stdout to a file, and then redirecting stderr to stdout.
If I run a command such as ls > output.txt 2>&1, my guess is that under the hood, the shell is executing something like the following C code:
close(1)
open("output.txt") // assigned to fd 1
close(2)
dup2(1, 2)
Since fd 1 has already been replaced with output.txt, anything printed to stderr will be redirected to output.txt.
But, if I run ls 2>&1 > output.txt, I'm guessing that this is instead what happens:
close(2)
dup2(1, 2)
close(1)
open("output.txt")
But, since the shell prints out both stdout and stderr by default, is there any difference between ls 2>&1 > output.txt and ls > output.txt? In both cases, stdout will be redirected to output.txt, while stderr will be printed to the console.
With ls >output.txt, the stderr from ls goes to the stderr inherited from the calling process. In contrast, with ls 2>&1 >output.txt, the stderr of ls is sent to the stdout of the calling process.
Let's try this with an example script that prints a line of output to each of stdout and stderr:
$ cat pr.sh
#!/bin/sh
echo "to stdout"
echo "to stderr" 1>&2
$ sh pr.sh >/dev/null
to stderr
$ sh pr.sh 2>/dev/null
to stdout
Now if we insert "2>&1" into the first command line, nothing appears different:
$ sh pr.sh 2>&1 >/dev/null
to stderr
But now let's run both of those inside a context where the inherited stdout is going someplace other than the console:
$ (sh pr.sh 2>&1 >/dev/null) >/dev/null
$ (sh pr.sh >/dev/null) >/dev/null
to stderr
The second command still prints because the inherited stderr is still going to the console. But the first prints nothing because the "2>&1" redirects the inner stderr to the outer stdout, which is going to /dev/null.
Although I've never used this construction, it could conceivably be useful in a script where you want to run a program, send its stdout to a file, but forward its stderr on to the caller as if it were "normal" output. That might happen because the program is being run along with some other programs and you want its "error" output to be part of the same stream as the other programs' "normal" output. (Perhaps both programs are compilers, and you want to capture all the error messages, but they disagree about which stream errors are sent to.)
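As a concrete sketch of that scenario (purely hypothetical; the compiler_a and compiler_b names and the filenames are made up for illustration):
# Keep compiler_a's regular output in its own file, but fold its error
# messages into the same stream as compiler_b's "normal" output.
{ compiler_a 2>&1 >a_output.txt; compiler_b; } > all_messages.txt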

Save stdout and stderr to two different files and append both to a third one

I would like to save the result of my script, run with sh.exe on Windows, into three different files:
stdout to stdout.txt, stderr to stderr.txt, and both stdout and stderr appended to all.txt. I tried to use
foo.sh &> all.txt 2> stderr.txt
or
foo.sh 2>&1 1>logfile | tee -a logfile
but it doesn't even append stderr and stdout.
How can I do it?
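One approach that can work (a sketch, not from the original thread; it relies on bash process substitution, so the sh.exe in use needs to actually be bash) is to tee each stream into its own file and append both to all.txt:
./foo.sh > >(tee stdout.txt >> all.txt) 2> >(tee stderr.txt >> all.txt)
# stdout.txt receives stdout, stderr.txt receives stderr, all.txt receives both;
# note that the relative ordering of the two streams in all.txt is not guaranteed.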

How do I copy stderr without stopping it writing to the terminal?

I want to write a shell script that runs a command, writing its stderr to my terminal as it arrives. However, I also want to save stderr to a variable, so I can inspect it later.
How can I achieve this? Should I use tee, or a subshell, or something else?
I've tried this:
# Create FD 3 that can be used so stdout still comes through
exec 3>&1
# Run the command, piping stdout to normal stdout, but saving stderr.
{ ERROR=$( "$@" 2>&1 1>&3 ) ; }
echo "copy of stderr: $ERROR"
However, this doesn't write stderr to the console; it only saves it.
I've also tried:
{ "$@"; } 2> >(tee stderr.txt >&2)
echo "stderr was:"
cat stderr.txt
However, I don't want the temporary file.
I often want to do this, and find myself reaching for /dev/stderr, but there can be problems with this approach; for example, Nix build scripts give "permission denied" errors if they try to write to /dev/stdout or /dev/stderr.
After reinventing this wheel a few times, my current approach is to use process substitution as follows:
myCmd 2> >(tee >(cat 1>&2))
Reading this from the outside in:
This will run myCmd, leaving its stdout as-is. The 2> will redirect the stderr of myCmd to a different destination; the destination here is >(tee >(cat 1>&2)) which will cause it to be piped into the command tee >(cat 1>&2).
The tee command duplicates its input (in this case, the stderr of myCmd) to its stdout and to the given destination. The destination here is >(cat 1>&2), which will cause the data to be piped into the command cat 1>&2.
The cat command just passes its input straight to stdout. The 1>&2 redirects stdout to go to stderr.
Reading from the inside out:
The cat 1>&2 command redirects its stdin to stderr, so >(cat 1>&2) acts like /dev/stderr.
Hence tee >(cat 1>&2) duplicates its stdin to both stdout and stderr, acting like tee /dev/stderr.
We use 2> >(tee >(cat 1>&2)) to get 2 copies of stderr: one on stdout and one on stderr.
We can use the copy on stdout as normal, for example storing it in a variable. We can leave the copy on stderr to get printed to the terminal.
We can combine this with other redirections if we like, e.g.
# Create FD 3 that can be used so stdout still comes through
exec 3>&1
# Run the command, redirecting its stdout to the shell's stdout,
# duplicating its stderr and sending one copy to the shell's stderr
# and using the other to replace the command's stdout, which we then
# capture
{ ERROR=$( "$@" 2> >(tee >(cat 1>&2)) 1>&3 ) ; }
echo "copy of stderr: $ERROR"
Credit goes to @Etan Reisner for the fundamentals of the approach; however, it's better to use tee with /dev/stderr rather than /dev/tty in order to preserve normal behavior (if you send to /dev/tty, the outside world doesn't see it as stderr output, and can neither capture nor suppress it):
Here's the full idiom:
exec 3>&1 # Save original stdout in temp. fd #3.
# Redirect stderr to *captured* stdout, send stdout to *saved* stdout, also send
# captured stdout (and thus stderr) to original stderr.
errOutput=$("$@" 2>&1 1>&3 | tee /dev/stderr)
exec 3>&- # Close temp. fd.
echo "copy of stderr: $errOutput"
