I am trying to save some logs from bash functions which execute tools (some of them run in subshells). In addition, I would like to print all errors to the terminal.
My code leads to a SIGPIPE and exit code 141 upon hitting Ctrl-C, plus a strange log file. The SIGPIPE seems to be caused by the redirection of stdout to stderr within the trap, which breaks the stdout stream of the tee command. Interestingly, the code terminates as expected with exit code 130 without the redirection used in the trap or the cat command.
I am still unable to fix and explain the resulting log file. Why are some echoes written twice, and why are the trap echoes written to the file as well?
Why isn't the SIGPIPE caused earlier by the redirection within the function?
trap '
    echo trap_stdout
    echo trap_stderr >&2
' INT

fun() {
    echo fun_stdout
    echo fun_stderr >&2
    ( sleep 10 | cat )
}
echo > log
fun >> log 2> >(tee -a log)
The resulting log file:
fun_stdout
fun_stderr
fun_stderr
trap_stdout
EDIT: working example according to oguz ismail's answer:
exec 3>> log
exec 4> >(tee -ai log >&2)
fun 2>&4 >&3
exec 3>&-
exec 4>&-
Why are some echoes written twice?
fun's stdout is redirected to log before its stderr is redirected to the FIFO created for tee, so tee inherits a stdout that is already redirected to log. I can demonstrate that like so:
$ : > file 2> >(date)
$ cat file
Sat Jul 25 18:46:31 +03 2020
Changing the order of redirections will fix that. E.g.:
fun 2> >(tee -a log) >> log
and why are the trap echoes written to the file as well?
If the trap set for SIGINT is triggered while the shell is still executing fun, it's perfectly normal that the redirections associated with fun take effect.
To connect the trap action's stdout and stderr to those of the main shell, you can do:
exec 3>&1 4>&2
handler() {
: # handle SIGINT here
} 1>&3 2>&4
trap handler INT
Or something similar; the idea is to make copies of the main shell's stdout and stderr.
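For example, a minimal sketch combining that with the script from the question (the handler body just reprints the original trap messages, and the last line uses the reordered redirections with tee -i discussed below):
exec 3>&1 4>&2          # copies of the main shell's stdout and stderr

handler() {
    echo trap_stdout
    echo trap_stderr >&2
} 1>&3 2>&4             # the handler always writes to the saved descriptors

trap handler INT

fun 2> >(tee -a -i log) >> log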
Why isn't the SIGPIPE caused earlier by the redirection within the function?
Because tee is alive while echo fun_stderr >&2 is being executed. And sleep does not write anything to its stdout, so it cannot trigger a SIGPIPE.
The reason why this script terminates due to a SIGPIPE is that tee receives the SIGINT generated by the keyboard as well and terminates before the trap action associated with SIGINT is executed. As a result, while executing echo trap_stderr >&2, since its stderr is connected to a pipe that was closed moments ago, the shell receives the SIGPIPE.
To avoid this, as already suggested, you can make tee ignore SIGINT. You don't need to set an empty trap for that, though; the -i option is enough.
fun 2> >(tee -a -i log) >> log
The source of the SIGPIPE is that the SIGINT (initiated by Ctrl-C) is sent to ALL processes in the foreground process group: both the "main" bash process (executing the 'fun' function) and the subshell executing the 'tee -a'. As a result, on Ctrl-C, both get killed. When the main process then tries to send 'trap_stderr' to the 'tee' process, it gets SIGPIPE, because 'tee' has already died.
Given the role of the 'tee -a', it makes sense to protect it from the SIGINT and allow it to run until 'fun' completes (or is killed). Consider the following change to the last line:
fun >> log 2> >(trap '' INT ; tee -a log >&2)
which produces the following output:
Console (stderr)
fun_stderr
^Ctrap_stderr
Log File: (no duplicates)
fun_stdout
fun_stderr
trap_stdout
trap_stderr
The above also addresses the second question, about duplicate lines in the log file. They are the result of using tee to send each stderr line to the log file AND to stdout. Since stdout has just been redirected (by the '>> log') to the 'log' file, both copies of the output end up in the log file, and none reaches the terminal.
Because the redirections are performed sequentially, changing the 'tee' line to send its output to the original stderr (instead of the already redirected stdout), as done above with '>&2', shows the output on the terminal (or wherever stderr points).
Related
I'm somewhat familiar with the common way of redirecting stdout to a file, and then redirecting stderr to stdout.
If I run a command such as ls > output.txt 2>&1, my guess is that under the hood, the shell is executing something like the following C code:
close(1)
open("output.txt") // assigned to fd 1
close(2)
dup2(1, 2)
Since fd 1 has already been replaced with output.txt, anything printed to stderr will be redirected to output.txt.
But, if I run ls 2>&1 > output.txt, I'm guessing that this is instead what happens:
close(2)
dup2(1, 2)
close(1)
open("output.txt")
But, since the shell prints both stdout and stderr to the terminal by default, is there any difference between ls 2>&1 > output.txt and ls > output.txt? In both cases, stdout will be redirected to output.txt, while stderr will be printed to the console.
With ls >output.txt, the stderr from ls goes to the stderr inherited from the calling process. In contrast, with ls 2>&1 >output.txt, the stderr of ls is sent to the stdout of the calling process.
Let's try this with an example script that prints a line of output to each of stdout and stderr:
$ cat pr.sh
#!/bin/sh
echo "to stdout"
echo "to stderr" 1>&2
$ sh pr.sh >/dev/null
to stderr
$ sh pr.sh 2>/dev/null
to stdout
Now if we insert "2>&1" into the first command line, nothing appears different:
$ sh pr.sh 2>&1 >/dev/null
to stderr
But now let's run both of those inside a context where the inherited stdout is going someplace other than the console:
$ (sh pr.sh 2>&1 >/dev/null) >/dev/null
$ (sh pr.sh >/dev/null) >/dev/null
to stderr
The second command still prints because the inherited stderr is still going to the console. But the first prints nothing because the "2>&1" redirects the inner stderr to the outer stdout, which is going to /dev/null.
Although I've never used this construction, conceivably it could be useful in a situation where (in a script, most likely) you want to run a program, send its stdout to a file, but forward its stderr on to the caller as if it were "normal" output, perhaps because that program is being run along with some other programs and you want the first program's "error" output to be part of the same stream as the other programs' "normal" output. (Perhaps both programs are compilers, and you want to capture all the error messages, but they disagree about which stream errors are sent to.)
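For instance, here is a sketch of that situation (prog is just a placeholder command): its real output goes to a file, while its error messages travel down the pipe as if they were ordinary output:
# prog's stdout -> out.txt; prog's stderr -> the pipe, i.e. treated as "normal" output downstream
prog 2>&1 >out.txt | grep -i error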
UPDATE: I think the code below may be headed in the wrong direction -- but the question remains: can I open a pipe to log all output (file and console), pause that log and log to a new log (new file and console), and then re-attach to the FD for the original logger just by moving FDs around, without re-opening the original log file?
Trying to improve my knowledge of FDs in bash. I'm trying to log all output of the main "meta" test.sh -- but log to a different file when I get to "sections" -- e.g. functions, sourced scripts, etc. And then go back to appending to the "meta" log.
I know I could pretty easily accomplish this with subshells -- or by opening the 'meta' log again and append from there, but can anyone help accomplish this by switching FDs around?
#!/bin/bash
rm *.log
NAMED_PIPE="$(mktemp -u /tmp/pipe.XXXX)"
mknod $NAMED_PIPE p
tee <$NAMED_PIPE "./meta.log" &
section () {
echo SECTION: stdout
echo SECTION: stderr >&2
}
# link stdout->3 & stderr->4 and save stdout & stderr
exec 3>&1 4>&2 &> "$NAMED_PIPE"
echo METAstr: stdout
echo METAstr: stderr >&2
# restore stdout & stderr
exec 1>&3 2>&4
# sleep 1 # I think an additional delay prevents the possible race condition I'm seeing
# exec 1>&3- 2>&4- ... I think this would restore but close 3 & 4?
# do I need another named pipe here?
section 2>&1 # | tee section.log
# re-link to same pipe
exec 3>&1 4>&2 &> "$NAMED_PIPE"
echo METAend: stdout
echo METAend: stderr >&2
Without trying to log 'section' all the meta output gets printed after the return of the script:
-bash-4.2# ./test.sh
SECTION: stdout
SECTION: stderr
-bash-4.2# METAstr: stdout
METAstr: stderr
METAend: stdout
METAend: stderr
And trying to log 'section' I think fouls up my FDs so the following exec hangs me up:
-bash-4.2# ./test.sh
METAstr: stdout
METAstr: stderr
SECTION: stdout
SECTION: stderr
EDIT1:
Contents of meta.log after running the script without trying to tee section:
[root@master tmp]# cat meta.log
METAstr: stdout
METAstr: stderr
METAend: stdout
METAend: stderr
It logs the ending messages; the tee does not exit until the script does.
EDIT2:
Revision of EDIT1. I think it's a race condition. I think the FDs are being closed -- but they're not closed by the time the final echo commands happen.
I was just going to write the same thing. It's a race condition. The second exec closes the writing end in your process, signalling an EOF to tee. tee will want to exit when it gets the EOF. If it does exit by the time you call the last exec, the last exec will block. If it hasn't exited yet at that point, it will not block, because a reading end of the FIFO will still be open.
Any delay will make it more likely tee will have exited by that time.
Spawning a process makes it very likely. I found with stracing (which slows the program down a little) it's about 50/50.
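One way to sidestep that race (a sketch on top of the original script, not something from the comment itself; fd 5 is my addition) is to keep one extra write descriptor to the FIFO open for the whole run, so tee only sees EOF once, right at the end:
exec 5> "$NAMED_PIPE"          # hold a write end open; tee never sees EOF while fd 5 exists

exec 3>&1 4>&2 &> "$NAMED_PIPE"
echo METAstr: stdout
echo METAstr: stderr >&2
exec 1>&3 2>&4                 # detach stdout/stderr, but fd 5 keeps the FIFO writable

section 2>&1 | tee section.log

exec 3>&1 4>&2 &> "$NAMED_PIPE"
echo METAend: stdout
echo METAend: stderr >&2
exec 1>&3 2>&4

exec 5>&-                      # close the last write end; tee now gets EOF and exits
wait                           # reap the background tee once it has flushed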
I want to write a shell script that runs a command, writing its stderr to my terminal as it arrives. However, I also want to save stderr to a variable, so I can inspect it later.
How can I achieve this? Should I use tee, or a subshell, or something else?
I've tried this:
# Create FD 3 that can be used so stdout still comes through
exec 3>&1
# Run the command, piping stdout to normal stdout, but saving stderr.
{ ERROR=$( "$@" 2>&1 1>&3) ; }
echo "copy of stderr: $ERROR"
However, this doesn't write stderr to the console, it only saves it.
I've also tried:
{ "$@"; } 2> >(tee stderr.txt >&2 )
echo "stderr was:"
cat stderr.txt
However, I don't want the temporary file.
I often want to do this, and find myself reaching for /dev/stderr, but there can be problems with this approach; for example, Nix build scripts give "permission denied" errors if they try to write to /dev/stdout or /dev/stderr.
After reinventing this wheel a few times, my current approach is to use process substitution as follows:
myCmd 2> >(tee >(cat 1>&2))
Reading this from the outside in:
This will run myCmd, leaving its stdout as-is. The 2> will redirect the stderr of myCmd to a different destination; the destination here is >(tee >(cat 1>&2)) which will cause it to be piped into the command tee >(cat 1>&2).
The tee command duplicates its input (in this case, the stderr of myCmd) to its stdout and to the given destination. The destination here is >(cat 1>&2), which will cause the data to be piped into the command cat 1>&2.
The cat command just passes its input straight to stdout. The 1>&2 redirects stdout to go to stderr.
Reading from the inside out:
The cat 1>&2 command redirects its stdin to stderr, so >(cat 1>&2) acts like /dev/stderr.
Hence tee >(cat 1>&2) duplicates its stdin to both stdout and stderr, acting like tee /dev/stderr.
We use 2> >(tee >(cat 1>&2)) to get 2 copies of stderr: one on stdout and one on stderr.
We can use the copy on stdout as normal, for example storing it in a variable. We can leave the copy on stderr to get printed to the terminal.
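As a quick sanity check of the /dev/stderr equivalence described above (a sketch; both lines should print hello twice, once via stdout and once via stderr):
echo hello | tee >(cat 1>&2)
echo hello | tee /dev/stderr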
We can combine this with other redirections if we like, e.g.
# Create FD 3 that can be used so stdout still comes through
exec 3>&1
# Run the command, redirecting its stdout to the shell's stdout,
# duplicating its stderr and sending one copy to the shell's stderr
# and using the other to replace the command's stdout, which we then
# capture
{ ERROR=$( "$@" 2> >(tee >(cat 1>&2)) 1>&3) ; }
echo "copy of stderr: $ERROR"
Credit goes to @Etan Reisner for the fundamentals of the approach; however, it's better to use tee with /dev/stderr rather than /dev/tty in order to preserve normal behavior (if you send to /dev/tty, the outside world doesn't see it as stderr output, and can neither capture nor suppress it):
Here's the full idiom:
exec 3>&1 # Save original stdout in temp. fd #3.
# Redirect stderr to *captured* stdout, send stdout to *saved* stdout, also send
# captured stdout (and thus stderr) to original stderr.
errOutput=$("$@" 2>&1 1>&3 | tee /dev/stderr)
exec 3>&- # Close temp. fd.
echo "copy of stderr: $errOutput"
This is a task that I try to do pretty often.
I want to log both stderr and stdout to a log file. But I only want to print to console stderr.
I've tried with tee, but once I've merged stderr and stdout using 2>&1, I cannot print only stderr to the screen anymore, since both my pipes are merged.
Here is a simple example of what I tried
./dosomething.sh | tee -a log 2>&1
Now I have both stderr and stdout to the log and the screen.
Any Ideas?
Based on some reading on this web site, this question has been asked before:
Write STDOUT & STDERR to a logfile, also write STDERR to screen
And also a very similar question here:
Save stdout, stderr and stdout+stderr synchronously
But neither of them is able to redirect both stdout+stderr to a log and stderr to the screen while stdout and stderr are synchronously written to the log file.
I was able to get this working in bash:
(./tmp.sh 2> >(tee >(cat >&2) >&1)) > tmp.log
This does not work correctly in zsh (the prompt does not wait for the process to exit), and does not work at all in dash. A more portable solution may be to write a simple C program to do it.
I managed to get this working with this script in bash.
mkfifo stdout
mkfifo stderr
rm -f out
cat stderr | tee -a out &
cat stdout >> out &
(echo "stdout";
grep;
echo "an other stdout";
echo "again stdout";
stat) 2> stderr > stdout
rm -f stdout
rm -f stderr
The order of the output is preserved. With this script the process ends correctly.
Note: I used grep and stat without parameters to generate output on stderr.
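One extra precaution worth noting (my addition, not part of the original script): waiting for the two background readers before removing the FIFOs makes sure everything has been flushed to out before the script moves on:
wait                  # let 'cat stderr | tee -a out' and 'cat stdout >> out' drain and exit
rm -f stdout stderr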
I would like to run several commands, and capture all output to a logfile. I also want to print any errors to the screen (or optionally mail the output to someone).
Here's an example. The following command will run three commands, and will write all output (STDOUT and STDERR) into a single logfile.
{ command1 && command2 && command3 ; } > logfile.log 2>&1
Here is what I want to do with the output of these commands:
STDERR and STDOUT for all commands goes to a logfile, in case I need it later--- I usually won't look in here unless there are problems.
Print STDERR to the screen (or optionally, pipe to /bin/mail), so that any error stands out and doesn't get ignored.
It would be nice if the return codes were still usable, so that I could do some error handling. Maybe I want to send email if there was an error, like this:
{ command1 && command2 && command3 ; } > logfile.log 2>&1 || mailx -s "There was an error" stefanl@example.org
The problem I run into is that STDERR loses context during I/O redirection. A '2>&1' merges STDERR into STDOUT, so I can no longer capture the errors separately with something like 2> error.log.
Here are a couple juicier examples. Let's pretend that I am running some familiar build commands, but I don't want the entire build to stop just because of one error so I use the '--keep-going' flag.
{ ./configure && make --keep-going && make install ; } > build.log 2>&1
Or, here's a simple (And perhaps sloppy) build and deploy script, which will keep going in the event of an error.
{ ./configure && make --keep-going && make install && rsync -av --keep-going /foo devhost:/foo ; } > build-and-deploy.log 2>&1
I think what I want involves some sort of Bash I/O Redirection, but I can't figure this out.
(./doit >> log) 2>&1 | tee -a log
This will take stdout and append it to log file.
The stderr will then get converted to stdout, which is piped to tee, which appends it to the log (if you have Bash 4, you can replace 2>&1 | with |&) and sends it to stdout, which will either appear on the tty or can be piped to another command.
I used append mode for both so that regardless of which order the shell redirection and tee open the file, you won't blow away the original. That said, it may be possible that stderr/stdout is interleaved in an unexpected way.
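A sketch of that pattern applied to the build example from the question; since the pipe makes $? reflect tee rather than the build, ${PIPESTATUS[0]} can be used for the error handling:
( { ./configure && make --keep-going && make install ; } >> build.log ) 2>&1 | tee -a build.log
[ "${PIPESTATUS[0]}" -eq 0 ] || mailx -s "There was an error" stefanl@example.org < build.log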
If your system has /dev/fd/* nodes you can do it as:
( exec 5>logfile.txt ; { command1 && command2 && command3 ;} 2>&1 >&5 | tee /dev/fd/5 )
This opens file descriptor 5 to your logfile. Executes the commands with standard error directed to standard out, standard out directed to fd 5 and pipes stdout (which now contains only stderr) to tee which duplicates the output to fd 5 which is the log file.
Here is how to run one or more commands, capturing the standard output and error, in the order in which they are generated, to a logfile, and displaying only the standard error on any terminal screen you like. Works in bash on linux. Probably works in most other environments. I will use an example to show how it's done.
Preliminaries:
Open two windows (shells, tmux sessions, whatever)
I will demonstrate with some test files, so create the test files:
touch /tmp/foo /tmp/foo1 /tmp/foo2
in window1:
mkfifo /tmp/fifo
0</tmp/fifo cat - >/tmp/logfile
Then, in window2:
(ls -l /tmp/foo /tmp/nofile /tmp/foo1 /tmp/nofile /tmp/nofile; echo successful test; ls /tmp/nofile1111) 2>&1 1>/tmp/fifo | tee /tmp/fifo 1>/dev/pts/2
Where you replace /dev/pts/2 with whatever tty you want the stderr to display.
The reason for the various successful and unsuccessful commands in the subshell is simply to generate a mingled stream of output and error messages, so that you can verify the correct ordering in the log file. Once you understand how it works, replace the “ls” and “echo” commands with scripts or commands of your choosing.
With this method, the ordering of output and error is preserved, the syntax is simple and clean, and there is only a single reference to the output file. Plus there is flexibility in putting the extra copy of stderr wherever you want.
Try:
command 2>&1 | tee output.txt
Additionally, you can direct stdout and stderr to different places:
command > stdout.txt 2> stderr.txt
command 2>&1 > stdout.txt | program_for_stderr
So some combination of the above should work for you -- e.g. you could save stdout to a file, and stderr to both a file and piping to another program (with tee).
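For example, a sketch of such a combination (the file and program names are just the placeholders from above): stdout goes to a file, while stderr goes both to a file and onward to another program via tee:
command > stdout.txt 2> >(tee stderr.txt | program_for_stderr)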
add this at the beginning of your script
#!/bin/bash
set -e
outfile=logfile
exec > >(cat >> $outfile)
exec 2> >(tee -a $outfile >&2)
# write your code here
STDOUT and STDERR will be written to $outfile, only STDERR will be seen on the console
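For example (a sketch), with those two exec lines in place, a script body like this behaves as described:
echo "goes only to the logfile"                 # stdout -> $outfile only
echo "goes to the logfile and the console" >&2  # stderr -> $outfile and the screen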