I typically use tee to receive piped data, echo it to standard output, and forward it to the actual intended recipient of the data. But sometimes this fails, and I cannot quite understand why.
I'll try to demonstrate with a series of examples:
$ echo testing with this string | tee
testing with this string
So, data echoed to tee without arguments is replicated/printed on the terminal/stdout. Note that it must be tee printing the output, as the output from echo is now piped/redirected, and therefore no longer present on stdout (the same as happens here:
$ echo aa | echo bb
bb
... i.e. the output of echo aa got redirected to the next command, which, being echo bb, does not care about its input and prints only its own output.)
$ echo testing with this string | tee | python3 -c 'a=1'
$
Now here, piping data into tee without arguments, and then piping from tee to a program that does not print anything to the terminal/stdout, prints nothing. I would have expected tee here to duplicate to stdout and then forward to the next command in the pipeline, but apparently that does not happen.
$ echo testing with this string | tee /dev/stdout
testing with this string
testing with this string
Right, so if we pipe to tee with the command-line argument /dev/stdout, we get the printout twice - and as concluded earlier, it must be tee producing both printed lines. That would mean that, when used without an argument, tee basically does not open any file for duplicating, and simply forwards what it receives on its input to its output; but as it is last in the pipeline, its output is stdout in that case, so we get a single printout.
Here we get a double printout, because:
tee duplicated its input stream to /dev/stdout due to the argument (which ends up as the first printout); and then
it forwarded the same input to its output, which, tee again being last in the pipeline, is stdout, resulting in the second printout.
This also would explain why the previous ...| tee | python3 -c 'a=1' did not print anything: tee without arguments did not open any file for duplication, and merely forwarded to the next command in the pipeline - and as that command does not print any output either, no output is produced at all.
Well, if the above understanding is correct, then this:
$ echo testing with this string | tee /dev/stdout | python3 -c 'a=1'
$
... should print at least one line (from tee copying to /dev/stdout; the "forwarded" part will end up being "gulped" by the final command as it prints nothing), but it does not.
So, why does this happen - where am I going wrong in my understanding of what tee does?
And how can I use tee, to print to stdout, also when its output is forwarded to a command that doesn't print anything to stdout on its own?
You aren't misunderstanding tee; you're misunderstanding what stdout is. In a pipeline like echo testing | tee | python3 -c 'a=1', the tee command's stdout is not the terminal: it's the pipe going to the python command (and the echo command's stdout is the pipe going to tee).
So tee /dev/stdout sends two copies of its input (on stdin) to the exact same place: its stdout, whether that's the terminal, or a pipe, or whatever.
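A quick way to convince yourself (this just re-runs your example with a final command that does echo its input):
$ echo testing with this string | tee /dev/stdout | cat
testing with this string
testing with this string
Both copies travel down the pipe and are printed by cat; nothing was written to the terminal directly. (This assumes /dev/stdout refers to the process's file descriptor 1, which holds on Linux and most BSDs.)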
If you want to send a copy of the input to tee someplace other than down the pipe, you need to send it somewhere other than stdout. Where that is depends on where you actually want to send it (i.e. why you want to copy it). If you specifically want to send it to the terminal, you could do this:
echo testing | tee /dev/tty | python3 -c 'a=1'
...while if you want to send it to the outer context's stdout (which might or might not be a terminal), you can duplicate the outer context's stdout to a different file descriptor (#3 is handy for this), and then have tee write a copy to that:
{ echo testing | tee /dev/fd/3 | python3 -c 'a=1'; } 3>&1
Yet another option is to redirect it to stderr (aka FD #2, which is also the terminal by default, but redirectable separately from stdout) with tee /dev/fd/2.
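For example (assuming your OS provides /dev/fd):
$ echo testing with this string | tee /dev/fd/2 | python3 -c 'a=1'
testing with this string
The copy written to stderr reaches the terminal even though the pipe's stdout is consumed by the python command.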
Note that the various /dev entries I'm using here are supported by most unixish OSes, but they aren't universal. Check to see what your specific OS provides.
I think I got it, but I'm not sure if it is correct: I saw this: 19.8. Forgetting That Pipelines Make Subshells - bash Cookbook [Book].
So, if pipelines make subshells, then
echo testing with this string | tee /dev/stdout | python3 -c 'a=1'
... is conceptually equal to:
echo testing with this string | (tee /dev/stdout | (python3 -c 'a=1'))
Note that the second pipe | redirects stdout of the subshell tee runs in, and as /dev/stdout is just an interface to stdout, it is redirected too, so we get nothing printed.
So, while stdout (and /dev/stdout) is local to the (sub)shell, /dev/tty refers to the controlling terminal itself - and therefore the following:
$ echo testing with this string | tee /dev/tty | python3 -c 'a=1'
testing with this string
... in fact prints a line, as expected.
In bash, calling foo would display any output from that command on stdout.
Calling foo > output would redirect any output from that command to the file specified (in this case 'output').
Is there a way to redirect output to a file and have it display on stdout?
The command you want is named tee:
foo | tee output.file
For example, if you only care about stdout:
ls -a | tee output.file
If you want to include stderr, do:
program [arguments...] 2>&1 | tee outfile
2>&1 redirects channel 2 (stderr/standard error) into channel 1 (stdout/standard output), so that both are written as stdout. Both streams then also end up in the given output file via the tee command.
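A quick sketch to see both streams landing in the file (the name both.log is arbitrary):
$ { echo "to stdout"; echo "to stderr" >&2; } 2>&1 | tee both.log
to stdout
to stderr
$ cat both.log
to stdout
to stderr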
Furthermore, if you want to append to the log file, use tee -a as:
program [arguments...] 2>&1 | tee -a outfile
$ program [arguments...] 2>&1 | tee outfile
2>&1 merges the stderr stream into the stdout stream.
tee outfile takes the stream it gets and writes it to the screen and to the file "outfile".
This is probably what most people are looking for. The likely situation is some program or script is working hard for a long time and producing a lot of output. The user wants to check it periodically for progress, but also wants the output written to a file.
The problem (especially when mixing stdout and stderr streams) is that there is reliance on the streams being flushed by the program. If, for example, all the writes to stdout are not flushed, but all the writes to stderr are flushed, then they'll end up out of chronological order in the output file and on the screen.
It's also bad if the program only outputs 1 or 2 lines every few minutes to report progress. In such a case, if the output was not flushed by the program, the user wouldn't even see any output on the screen for hours, because none of it would get pushed through the pipe for hours.
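You can reproduce the reordering with a small python3 one-liner; on typical CPython builds, stdout becomes block-buffered when piped while stderr is flushed per line, so the stderr line arrives first:
$ python3 -c 'import sys; print("to stdout"); print("to stderr", file=sys.stderr)' 2>&1 | tee demo.log
to stderr
to stdout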
Update: The program unbuffer, part of the expect package, will solve the buffering problem. This will cause stdout and stderr to write to the screen and file immediately and keep them in sync when being combined and redirected to tee. E.g.:
$ unbuffer program [arguments...] 2>&1 | tee outfile
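If unbuffer isn't available, GNU coreutils' stdbuf can often achieve a similar effect for programs that use default stdio buffering (it won't help programs that manage their own buffers):
$ stdbuf -oL -eL program [arguments...] 2>&1 | tee outfile
Here -oL and -eL switch stdout and stderr to line buffering.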
Another way that works for me is,
<command> |& tee <outputFile>
as shown in the GNU Bash manual.
Example:
ls |& tee files.txt
If ‘|&’ is used, command1’s standard error, in addition to its standard output, is connected to command2’s standard input through the pipe; it is shorthand for 2>&1 |. This implicit redirection of the standard error to the standard output is performed after any redirections specified by the command.
For more information, refer to the Bash manual's section on redirection.
You can basically use Zoredache's solution, but if you don't want to overwrite the output file, you should use tee with the -a option, as follows:
ls -lR / | tee -a output.file
Something to add ...
The unbuffer package has support issues under some Fedora and Red Hat Unix releases.
Setting those troubles aside, the following worked for me:
bash myscript.sh 2>&1 | tee output.log
Thank you ScDF & matthew, your inputs saved me a lot of time.
Using tail -f output in another terminal should also work.
In my case I had a Java process with output logs. The simplest solution to display the output logs and redirect them into a file (named logfile here) was:
my_java_process_run_script.sh |& tee logfile
The result was the Java process running, with its output logs being displayed and written into the file named logfile.
You can do that for your entire script by using something like this at the beginning of your script:
#!/usr/bin/env bash
test x$1 = x$'\x00' && shift || { set -o pipefail ; ( exec 2>&1 ; $0 $'\x00' "$@" ) | tee mylogfile ; exit $? ; }
# do whatever you want
This redirects both stderr and stdout to the file called mylogfile while letting everything go to stdout at the same time.
It uses some stupid tricks:
use exec without a command to set up redirections,
use tee to duplicate the output,
restart the script with the wanted redirections,
use a special first parameter (a simple NUL character specified by the $'string' special bash notation) to signal that the script has been restarted (your original script must not take an equivalent first parameter),
try to preserve the original exit status when restarting the script, using the pipefail option.
Ugly but useful for me in certain situations.
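For comparison, a shorter sketch of the same idea using process substitution instead of restarting the script (assumes bash with /dev/fd support; note that tee may still be flushing when the script exits):
#!/usr/bin/env bash
exec > >(tee mylogfile) 2>&1  # from here on, stdout and stderr go both to the screen and to mylogfile
# do whatever you want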
Bonus answer since this use-case brought me here:
In the case where you need to do this as some other user
echo "some output" | sudo -u some_user tee /some/path/some_file
Note that the echo will happen as you, and the file write will happen as "some_user". What will NOT work is running the echo as "some_user" and redirecting the output with >> "some_file", because the file redirect would happen as you.
Hint: tee also supports appending with the -a flag. If you need to replace a line in a file as another user, you could execute sed as the desired user.
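For example, appending as the other user while suppressing the duplicate on your own terminal (same hypothetical names as above):
$ echo "some output" | sudo -u some_user tee -a /some/path/some_file > /dev/null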
<command> |& tee filename # this creates a file "filename" with the command's output as its contents; if the file already exists, its previous contents are overwritten.
<command> | tee >> filename # this appends the output to the file, but does not print it to standard output (the screen), because tee's own stdout has been redirected into the file.
I want to print something using echo on the screen and append that echoed data to a file:
echo "hi there, Have to print this on screen and append to a file"
tee is perfect for this, but the following will also do the job:
ls -lr / > output; cat output
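The tee version of the same thing, using -a so the file is appended to rather than overwritten (some_file is a placeholder name):
$ echo "hi there, Have to print this on screen and append to a file" | tee -a some_file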
In an answer to a question about piping and redirection, robert mentions that piping also captures the stdout of substituted processes in the pipeline, whilst redirection doesn't. Why is this so? What exactly is going on that results in this behavior:
bash-4.1$ echo -e '1\n2' | tee >(head -n1) >redirect
1
bash-4.1$ cat redirect
1
2
bash-4.1$ echo -e '1\n2' | tee >(head -n1) | cat >pipe
bash-4.1$ cat pipe
1
2
1
I would've thought that both forms would produce the same result -- the latter one.
Reading an answer to a different question, it seemed plausible that reordering the redirect in the command might produce the desired result, but no matter the order, the result is always the same:
bash-4.1$ echo -e '1\n2' | tee >redirect >(head -n1)
1
bash-4.1$ cat redirect
1
2
bash-4.1$ echo -e '1\n2' | >redirect tee >(head -n1)
1
bash-4.1$ cat redirect
1
2
Why does the stdout redirect only affect tee, while the pipe captures the output of the substituted process head as well? Simply "by design"?
Just a thought related to the above question: I thought that redirecting to a file and piping the output would never make sense, but it does make sense with process substitution:
bash-4.1$ echo -e '1\n2\n3' | tee >(head -n1) >(tail -n1) >tee_out | cat >subst_out
bash-4.1$ cat tee_out
1
2
3
bash-4.1$ cat subst_out
1
3
The shell that runs head is spawned by the same shell that runs tee, which means tee and head both inherit the same file descriptor for standard output, a file descriptor that is connected to the pipe to cat. That means both tee and head have their output piped to cat, resulting in the behavior you see.
For
echo -e '1\n2' | tee >(head -n1) > redirect
, after |, only tee's stdout is redirected to the file and head still outputs to the tty. To redirect both tee and head's stdout you can write
echo -e '1\n2' | { tee >(head -n1); } > redirect
or
{ echo -e '1\n2' | tee >(head -n1); } > redirect
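For instance, the grouped form should capture head's output in the file as well, much like the pipe variant did (the position of head's line relative to tee's can vary with scheduling):
bash-4.1$ echo -e '1\n2' | { tee >(head -n1); } >redirect
bash-4.1$ cat redirect
1
2
1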
For
echo -e '1\n2' | tee >(head -n1) | cat > pipe
, the stdout of tee >(head -n1) as a whole is piped to cat. It's logically the same as echo -e '1\n2' | { tee >(head -n1); } > redirect.
TL;DR: When executing part of a pipeline, the shell performs pipe-redirection of stdin/stdout first and >/< redirection last. Process substitution happens in between those two, so pipe-redirection of stdin/stdout is inherited, whilst >/< redirection is not. It's a design decision.
To be fair, I accepted chepner's answer because he was first and he was correct. However, I decided to add my own answer to document my process of understanding this issue by reading bash's sources, as chepner's answer doesn't explain why the >/< redirection isn't inherited.
It is helpful to understand the steps involved (grossly simplified) when a complex pipeline is encountered by the shell. I have simplified my original problem to this example:
$ echo x >(echo y) >file
y
$ cat file
x /dev/fd/63
$ echo x >(echo y) | cat >file
$ cat file
x /dev/fd/63
y
Redirection-only
When the shell encounters echo x >(echo y) >file, it first forks once to execute the complex command (this can be avoided for some cases, like builtins), and then the forked shell:
creates a pipe (for process substitution)
forks for second echo
fork: connects its stdin to the pipe's read end, pipe[0]
fork: exec's echo y; the exec'ed echo inherits:
stdin connected to pipe[0]
unchanged stdout
opens file
connects its stdout to file
exec's echo x /proc/<pid>/fd/<pipe id>; the exec'ed echo inherits:
stdin unchanged
stdout connected to file
Here, the second echo inherits the stdout of the forked shell, before that forked shell redirects its stdout to file. I see no absolute necessity for this order of actions in this context, but I assume it makes more sense this way.
Pipe-Redirect
When the shell encounters echo x >(echo y) | cat >file, it detects a pipeline and starts processing it (without forking):
parent: creates a pipe (corresponding to the only actual | in the full command)
parent: forks for left side of pipe
fork1: connects its stdout to the pipe's write end, pipe[1]
fork1: creates a pipe_subst (for process substitution)
fork1: forks for second echo
nested-fork: connects its stdin to pipe_subst's read end, pipe_subst[0]
nested-fork: exec's echo y; the exec'ed echo inherits:
stdin connected to pipe_subst[0] from the inner fork
stdout connected to pipe[1] from the outer fork
fork1: exec's echo x /proc/<pid>/fd/<pipe_subst id>; the exec'ed echo inherits:
stdin unchanged
stdout connected to pipe[1]
parent: forks for right side of pipe (this fork, again, can sometimes be avoided)
fork2: connects its stdin to the pipe's read end, pipe[0]
fork2: opens file
fork2: connects its stdout to file
fork2: exec's cat; the exec'ed cat inherits:
stdin connected to pipe[0]
stdout connected to file
Here, the pipe takes precedence, i.e. redirection of stdin/stdout due to the pipe is performed before any other actions take place in executing the pipeline elements. Thus both echos inherit the stdout connected to the pipe to cat.
All of this is really a design consequence of >file redirection being handled after process substitution. If >file redirection were handled before that (like pipe redirection is), then >file would also have been inherited by the substituted processes.
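A practical consequence: if you do want >file to capture the substituted process too, add a grouping so that the file redirection is already in place when the process substitution is set up, e.g. (the relative order of the lines in file may vary):
$ { echo x >(echo y); } >file
$ cat file
x /dev/fd/63
y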
I'm looking for a way to "forward" stdin to stdout in a pipe, while in that step something is written to stderr. The example should clarify this:
echo "before.." | >&2 echo "some logging..."; [[forward stdin>stdout]] | cat
This should put "before.." to stdout, meanwhile "some logging..." to stderr.
How to do that? Or is there maybe another quite different approach to this?
Here's a solution based on your comments:
cat ~/.bashrc | tee >( cat -n >&2 ) | sort
cat ~/.bashrc represents the start of your pipeline, producing some data.
tee duplicates its input, writing to both stdout and any files listed as arguments.
>( ... ) is a bash construct that runs ... as a pipe subcommand but replaces itself by a filename (something that tee can open and write to).
cat -n represents modifying the input (adding line numbers).
>&2 redirects stdout to stderr.
sort represents the end of your pipeline (normal processing of the unchanged input).
Putting it all together, bash will
run cat ~/.bashrc, putting the contents of ~/.bashrc on stdout
... which is piped to the stdin of tee
run cat -n with stdout redirected to stderr and stdin redirected to a new pipe
run tee /dev/fd/63 (where /dev/fd/63 represents the other end of the cat -n pipe)
this is where it all comes together: tee reads its input and writes it to both its stdout and to the other pipe that goes to cat -n (and from there to stderr)
finally tee's stdout goes into sort
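Applied to your original example, the same pattern would look like this (the numbered copy goes to stderr, the plain copy continues down the pipe; the order in which the two lines appear on the terminal can vary):
$ echo "before.." | tee >( cat -n >&2 ) | cat
before..
     1	before..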
Redirections follow the simple command they refer to, thus
echo "before" >&1
echo "some logging..." >&2
should do the trick, if I understand what you're trying to do.
I have a program that returns answers on stdout and errors on stderr.
Unfortunately the program ends by emitting some text on stderr even if successful.
I would like to store the program output in a variable using command substitution as:
ans=$(prog) 2>&1 | grep -v success
This doesn't work. I tried putting 2>&1 inside the parens, but as I suspected, $ans then gets the success text.
Any ideas?
Not sure what you're trying to get, but this is probably the command you want:
ans=$(prog 2>&1 | grep -v success)
If you want to filter 'success' only from the standard error stream, you could use something like this:
ans=$({ ./foo 3>&2 2>&1 >&3- | grep -v success; } 2>&1)
And just in case, as noted in BashFAQ/002:
What you cannot do is capture stdout in one variable, and stderr in another, using only FD redirections. You must use a temporary file (or a named pipe) to achieve that one.
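A sketch of the temporary-file approach the FAQ alludes to (err.tmp is an arbitrary name; bash's $(<file) reads a file's contents):
out=$(prog 2>err.tmp)   # stdout into $out, stderr into the temp file
err=$(<err.tmp)         # stderr into $err
rm -f err.tmp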
I have a program that outputs to stdout and stderr but doesn't use them in the correct way. Some errors go to stdout, some go to stderr, non-error stuff goes to stderr, and it prints way too much info on stdout. To fix this I want to make a pipeline that does the following:
Save all output of $cmd (from both stderr and stdout) to a file $logfile (don't print it to screen).
Filter out all warning and error messages on stderr and stdout (from warning|error to empty line) and colorize only "error" words (redirect output to stderr).
Save output of step 2 to a file $logfile:r.stderr.
Exit with the correct exit code from the command.
So far I have this:
#!/bin/zsh
# using zsh 4.2.0
setopt no_multios
# Don't error out if sed or grep don't find a match:
alias -g grep_err_warn="(sed -n '/error\|warning/I,/^$/p' || true)"
alias -g color_err="(grep --color -i -C 1000 error 1>&2 || true)"
alias -g filter='tee $logfile | grep_err_warn | tee $logfile:r.stderr | color_err'
# use {} around command to avoid possible race conditions:
{ eval $cmd } 2>&1 | filter
exit $pipestatus[1]
I've tried many things but can't get it to work. I've read "From Bash to Z Shell", many posts, etc. My problems currently are:
Only stdout goes into the filter (stderr bypasses it)
Note: $cmd is a shell script that calls a binary with a /usr/bin/time -p prefix. This seems to cause issues with pipelines, and is why I'm wrapping the command in {…} so that all the output goes into the pipe.
I don't have zsh available.
I did notice that your {..}'d statement is not correct.
You always need a semicolon before the closing `}'.
When I added that in bash, I could prove to my satisfaction that stderr was being redirected to stdout.
Try
{ eval $cmd ; } 2>&1 | filter
# ----------^
Also, you wrote
Save all output of $cmd (from stderr and stdout) to a file $logfile
I don't see any mention of $logfile in your code.
You should be able to get all output into the logfile (while losing the distinction between the stdout and stderr streams) with
yourCommand 2>&1 | tee ${logFile} | ....
I hope this helps.
P.S. as you appear to be a new user, if you get an answer that helps you please remember to mark it as accepted, and/or give it a + (or -) as a useful answer.
Don't use aliases in scripts, use functions (global aliases are especially asking for trouble). Not that you actually need functions here. You also don't need || true (unless you're running under set -e, in which case you should turn it off here). Other than that, your script looks OK; what is it choking on?
{ eval $cmd } |
tee $logfile |
sed -n '/error\|warning/I,/^$/p' |
tee $logfile:r.stderr |
grep --color -i -C 1000 error 1>&2
exit $pipestatus[1]
I'm also not sure what you meant by the sed expression; I don't quite understand your requirement 2.
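If you did want to keep the named stages, the aliases translate directly into functions; a sketch in the same zsh context, reusing $logfile from above:
grep_err_warn() { sed -n '/error\|warning/I,/^$/p'; }
color_err() { grep --color -i -C 1000 error 1>&2; }
filter() { tee $logfile | grep_err_warn | tee $logfile:r.stderr | color_err; }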
The original post was mostly correct, apart from an optimization by Gilles (turning off set -e so the || true's are not needed):
#!/bin/zsh
# using zsh 4.2.0
setopt no_multios
#setopt no_errexit # set -e # don't turn this on
{ eval $cmd } 2>&1 |
tee $logfile |
sed -n '/error\|warning/I,/^$/p' |
tee $logfile:r.stderr |
grep --color -i -C 1000 error 1>&2
exit $pipestatus[1]
The part that confused me was that the mixing of stdout and stderr led to them being interleaved, and the sed -n '/error\|warning/I,/^$/p' (which prints from an error or warning to the next empty line) was printing out a lot more than expected, which made it seem like the command wasn't working.