Can you redirect STDERR for an entire Bash session?

There are lots of questions regarding redirecting stderr and stdout for a single command or script. What I'd like is to redirect any stderr messages from my Bash session to a log file.
I'd like an interactive bash session where all stderr is redirected to a file.

A horrible way to deal with your problem:
exec 3>&2
trap 'exec 2>>/path/to/your_file' DEBUG
PROMPT_COMMAND='exec 2>&3'
exec 3>&2: we first copy fd 2 to a new fd (here fd 3)
trap 'exec 2>>/path/to/your_file' DEBUG: before each simple command is executed, Bash runs the DEBUG trap; here we redirect stderr to the file /path/to/your_file (make sure you give an absolute path).
PROMPT_COMMAND='exec 2>&3': before each prompt is displayed, Bash executes the string in the PROMPT_COMMAND variable; here we point fd 2 back at fd 3 (the copy of fd 2 made while it still pointed to the terminal). This is necessary for the prompt to be printed to the terminal.
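Put together, a copy-paste version of the same trick (the log path is just a placeholder):
exec 3>&2                                        # keep a copy of the terminal's stderr on fd 3
trap 'exec 2>>"$HOME/session-stderr.log"' DEBUG  # before each command: stderr goes to the log
PROMPT_COMMAND='exec 2>&3'                       # before each prompt: stderr back to the terminal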
I wouldn't qualify this as a robust or nice method, yet it might do the job for your purpose.

Yes.
exec 2> elsewhere
or redirect the invoking command.
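For example, either redirect inside the running session, or start the shell with stderr already pointed at a file (the file names are placeholders):
# inside an already-running interactive session
exec 2>> ~/bash-stderr.log

# or redirect the invoking command, i.e. start the session that way
bash -i 2>> ~/bash-stderr.log
Keep in mind that an interactive shell writes its prompt to stderr, which is exactly why the DEBUG/PROMPT_COMMAND approach above switches fd 2 back before each prompt.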

Not an exact answer, but this is what I use. It displays STDERR on the screen while logging STDOUT to one file and STDERR to another:
eval "$1" 2>&1 >>~/max.log | tee --append ~/bash.log
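A sketch of how this line might be wrapped, assuming a hypothetical helper named runlog that takes the command to run as its first argument:
runlog() {
    # stdout of the command is appended to ~/max.log;
    # stderr goes through tee to ~/bash.log and to the screen
    eval "$1" 2>&1 >>~/max.log | tee --append ~/bash.log
}

runlog 'make all'    # example invocation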

Related

SIGPIPE due to file descriptors and process substitution

I am trying to save some logs from bash functions which execute tools (some of them run in subshells). In addition, I would like to print all errors to the terminal.
My code leads to a SIGPIPE and exit code 141 upon hitting Ctrl-C, plus a strange log file. The pipe failure seems to be caused by the redirection of stdout to stderr within the trap, which breaks the stdout stream of the tee command. Interestingly, the code terminates as expected with exit code 130 without the redirection used in the trap or the cat command.
I am still unable to fix or explain the resulting log file. Why do some echoes appear twice, and why are the trap echoes written to the file as well?
Why isn't the SIGPIPE caused earlier by the redirection within the function?
trap '
    echo trap_stdout
    echo trap_stderr >&2
' INT

fun() {
    echo fun_stdout
    echo fun_stderr >&2
    ( sleep 10 | cat )
}
echo > log
fun >> log 2> >(tee -a log)
log file:
fun_stdout
fun_stderr
fun_stderr
trap_stdout
EDIT: working example according to oguz ismail's answer:
exec 3>> log
exec 4> >(tee -ai log >&2)
fun 2>&4 >&3
exec 3>&-
exec 4>&-
Why do some echoes appear twice
fun's stdout is redirected to log before its stderr is redirected to the FIFO created for tee, thus tee inherits a stdout that is redirected to log. I can prove that like so:
$ : > file 2> >(date)
$ cat file
Sat Jul 25 18:46:31 +03 2020
Changing the order of redirections will fix that. E.g.:
fun 2> >(tee -a log) >> log
and why are the trap echoes written to the file as well?
If the trap set for SIGINT is triggered while the shell is still executing fun, it's perfectly normal that the redirections associated with fun take effect.
To connect the trap action's stdout and stderr to those of the main shell, you can do:
exec 3>&1 4>&2
handler() {
    : # handle SIGINT here
} 1>&3 2>&4
trap handler INT
Or something along those lines; the idea is to make copies of the main shell's stdout and stderr.
Why isn't the SIGPIPE caused earlier by the redirection within the function?
Because tee is alive while echo fun_stderr >&2 is being executed. And sleep does not write anything to its stdout, so it cannot trigger a SIGPIPE.
The reason why this script terminates due to a SIGPIPE is that tee receives the SIGINT generated by the keyboard as well and terminates before the trap action associated with SIGINT is executed. As a result, while executing echo trap_stderr >&2, since its stderr is connected to a pipe that was closed moments ago, the shell receives the SIGPIPE.
To avoid this, as already suggested, you can make tee ignore SIGINT. You don't need to set an empty trap for that though, the -i option is enough.
fun 2> >(tee -a -i log) >> log
The source of the SIGPIPE is that the SIGINT (initiated by Ctrl-C) is sent to ALL running processes: both the "main" bash process (executing the fun function) and the subshell executing tee -a. As a result, on Ctrl-C, both get killed. When the main process then tries to send 'trap_stderr' to the tee process, it gets SIGPIPE, because tee has already died.
Given the role of tee -a, it makes sense to protect it from the SIGINT and allow it to run until fun completes (or is killed). Consider the following change to the last line:
fun >> log 2> >(trap '' INT ; tee -a log >&2)
which will produce the following console output and log file.
Console (stderr):
fun_stderr
^Ctrap_stderr
Log file (no duplicates):
fun_stdout
fun_stderr
trap_stdout
trap_stderr
The above also addresses the second question, about duplicate lines in the log file. The duplicates are the result of using tee to send each stderr line to the log file AND to stdout. Given that stdout had just been redirected (by the '>> log') to the log file, both copies of the output went to the log file, and none to the terminal.
Since the redirections are performed sequentially, changing the tee line to send its output to the original stderr (instead of the already-redirected stdout) shows the output on the terminal (or wherever stderr points).
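Putting the pieces together, a minimal sketch of the corrected script (same trap and fun as in the question; tee is shielded from the keyboard SIGINT and writes back to the original stderr):
trap '
    echo trap_stdout
    echo trap_stderr >&2
' INT

fun() {
    echo fun_stdout
    echo fun_stderr >&2
    ( sleep 10 | cat )
}

echo > log
# stderr goes through tee (immune to the keyboard SIGINT) into the log and
# back out on the original stderr; stdout is appended directly to the log
fun >> log 2> >(trap '' INT; tee -a log >&2)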

What's the difference between `command > output` and `command 2>&1 > output`?

I'm somewhat familiar with the common way of redirecting stdout to a file, and then redirecting stderr to stdout.
If I run a command such as ls > output.txt 2>&1, my guess is that under the hood, the shell is executing something like the following C code:
close(1)
open("output.txt") // assigned to fd 1
close(2)
dup2(1, 2)
Since fd 1 has already been replaced with output.txt, anything printed to stderr will be redirected to output.txt.
But, if I run ls 2>&1 > output.txt, I'm guessing that this is instead what happens:
close(2)
dup2(1, 2)
close(1)
open("output.txt")
But, since the shell prints both stdout and stderr by default, is there any difference between ls 2>&1 > output.txt and ls > output.txt? In both cases, stdout will be redirected to output.txt, while stderr will be printed to the console.
With ls >output.txt, the stderr from ls goes to the stderr inherited from the calling process. In contrast, with ls 2>&1 >output.txt, the stderr of ls is sent to the stdout of the calling process.
Let's try this with an example script that prints a line of output to each of stdout and stderr:
$ cat pr.sh
#!/bin/sh
echo "to stdout"
echo "to stderr" 1>&2
$ sh pr.sh >/dev/null
to stderr
$ sh pr.sh 2>/dev/null
to stdout
Now if we insert "2>&1" into the first command line, nothing appears different:
$ sh pr.sh 2>&1 >/dev/null
to stderr
But now let's run both of those inside a context where the inherited stdout is going someplace other than the console:
$ (sh pr.sh 2>&1 >/dev/null) >/dev/null
$ (sh pr.sh >/dev/null) >/dev/null
to stderr
The second command still prints because the inherited stderr is still going to the console. But the first prints nothing because the "2>&1" redirects the inner stderr to the outer stdout, which is going to /dev/null.
Although I've never used this construction, conceivably it could be useful in a situation where (in a script, most likely) you want to run a program, send its stdout to a file, but forward its stderr on to the caller as if it were "normal" output, perhaps because that program is being run along with some other programs and you want the first program's "error" output to be part of the same stream as the other programs' "normal" output. (Perhaps both programs are compilers, and you want to capture all the error messages, but they disagree about which stream errors are sent to.)
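A sketch of that use case (prog_a and prog_b are hypothetical programs; prog_a reports errors on stderr, prog_b reports them on stdout):
{
    # 2>&1 first: prog_a's stderr joins the group's stdout (the log below);
    # only afterwards is prog_a's stdout diverted to its own data file
    prog_a input.txt 2>&1 > prog_a-data.out
    prog_b input.txt
} > all-messages.log
Every "error" line, whichever stream each program uses, ends up in all-messages.log, while prog_a's regular output stays in prog_a-data.out.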

How to undo exec > /dev/null in bash?

I used
exec > /dev/null
to suppress output.
Is there a command to undo this? (Without restarting the script.)
To do it right, you need to copy the original FD 1 somewhere else before repointing it to /dev/null. In this case, I store a backup on FD 5:
exec 5>&1 >/dev/null
...
exec 1>&5
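A minimal round-trip sketch (the echo lines are only illustrative; 5>&- closes the backup descriptor once it has served its purpose):
exec 5>&1 > /dev/null    # back up stdout on fd 5, then silence it
echo "this line is discarded"
exec 1>&5 5>&-           # restore stdout from the backup and close fd 5
echo "this line is visible again"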
Another option is to redirect stdout within a block rather than using exec:
{
...
} >/dev/null
If you just want to get output again at the command prompt, you can do this:
exec >/dev/tty
If you are creating a script, and you want to have the output of a certain group of commands redirected, put those commands in braces:
{
command
command
} >/dev/null
Save the original output targets beforehand.
# $$ = the PID of the running script instance
STDOUT=`readlink -f /proc/$$/fd/1`
STDERR=`readlink -f /proc/$$/fd/2`
And restore them again using exec.
exec 1>"$STDOUT" 2>"$STDERR"
If you use /dev/tty for restoration as in the answers above, then, unlike with this approach, call-level redirections won't be respected; e.g. bash script.sh &>/dev/null won't work.
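A sketch putting these steps together (Linux-specific, since it relies on /proc; do_noisy_work is a hypothetical placeholder):
# $$ = the PID of the running script instance
STDOUT=$(readlink -f /proc/$$/fd/1)
STDERR=$(readlink -f /proc/$$/fd/2)

exec > /dev/null 2>&1            # silence both streams
do_noisy_work                    # placeholder for the commands being silenced
exec 1>"$STDOUT" 2>"$STDERR"     # point the fds back at the saved targets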
Not really, as that would require changing the state of a running process. Even assuming you could, whatever you wrote before resetting standard output is truly, completely gone, as it was sent to the bit bucket.
To restore stdout I use
unset &1

How do I log stderr and stdout synchronously, but print stderr to screen only?

This is a task that I try to do pretty often.
I want to log both stderr and stdout to a log file. But I only want to print to console stderr.
I've tried with tee, but once I've merged stderr and stdout using 2>&1, I can no longer print only stderr to the screen, since the two streams are merged.
Here is a simple example of what I tried
./dosomething.sh | tee -a log 2>&1
Now I have both stderr and stdout to the log and the screen.
Any Ideas?
Based on some reading on this web site, similar questions have been asked:
Write STDOUT & STDERR to a logfile, also write STDERR to screen
And also a question very similar here:
Save stdout, stderr and stdout+stderr synchronously
But neither of them manages to redirect both stdout and stderr to a log and stderr to the screen while stdout and stderr are written to the log file synchronously.
I was able to get this working in bash:
(./tmp.sh 2> >(tee >(cat >&2) >&1)) > tmp.log
This does not work correctly in zsh (the prompt does not wait for the process to exit), and does not work at all in dash. A more portable solution may be to write a simple C program to do it.
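For reference, the same one-liner restated with comments (nothing changed, just annotated):
# stderr of tmp.sh enters the process substitution; there tee writes one copy
# to >(cat >&2), which forwards it to the real stderr (the screen), while
# tee's own stdout points at the same place as tmp.sh's stdout, so both
# streams end up in tmp.log.
( ./tmp.sh 2> >(tee >(cat >&2) >&1) ) > tmp.log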
I managed to get this working with this script in bash.
mkfifo stdout
mkfifo stderr
rm -f out
cat stderr | tee -a out &
cat stdout >> out &
(echo "stdout";
grep;
echo "an other stdout";
echo "again stdout";
stat) 2> stderr > stdout
rm -f stdout
rm -f stderr
The order of the output is preserved. With this script the process ends correctly.
Note: I used grep and stat without parameters to generate stderr (both print a usage error when run with no arguments).

Check whether stderr is a pipe in bash

I have a bash script that prompts the user for input with 'read'. If stdout or stderr is piped to something other than a terminal, I would like to suppress this step. Is that possible?
You can check whether a file descriptor is a tty (attached to a terminal) with the command test -t <fd number>. If it is, you can prompt the user. If it isn't, output is probably piped or redirected somewhere.
if test -t 1 ; then
    echo stdout is a tty
fi
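Since the goal is to skip the prompt when either stream is redirected, a sketch that checks both stdout and stderr before prompting (the variable name and fallback are just placeholders):
if [ -t 1 ] && [ -t 2 ]; then
    read -r -p "Enter a value: " value
else
    value=default    # fall back silently when output is piped or redirected
fi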
