In Bash, calling foo displays any output from that command on stdout.
Calling foo > output would redirect any output from that command to the file specified (in this case 'output').
Is there a way to redirect output to a file and have it display on stdout?
The command you want is named tee:
foo | tee output.file
For example, if you only care about stdout:
ls -a | tee output.file
If you want to include stderr, do:
program [arguments...] 2>&1 | tee outfile
2>&1 redirects channel 2 (stderr/standard error) into channel 1 (stdout/standard output), so that both are written to stdout. tee then duplicates that combined stream to the given output file.
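A quick way to convince yourself both streams are captured (a minimal sketch; the filename is illustrative):
{ echo "to stdout"; echo "to stderr" >&2; } 2>&1 | tee out.log
Both lines appear on the terminal, and out.log contains both as well.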
Furthermore, if you want to append to the log file, use tee -a as:
program [arguments...] 2>&1 | tee -a outfile
$ program [arguments...] 2>&1 | tee outfile
2>&1 merges the stderr stream into the stdout stream.
tee outfile takes the stream it gets and writes it to the screen and to the file "outfile".
This is probably what most people are looking for. The likely situation is some program or script is working hard for a long time and producing a lot of output. The user wants to check it periodically for progress, but also wants the output written to a file.
The problem (especially when mixing stdout and stderr streams) is that there is reliance on the streams being flushed by the program. If, for example, all the writes to stdout are not flushed, but all the writes to stderr are flushed, then they'll end up out of chronological order in the output file and on the screen.
It's also bad if the program only outputs 1 or 2 lines every few minutes to report progress. In such a case, if the output was not flushed by the program, the user wouldn't even see any output on the screen for hours, because none of it would get pushed through the pipe for hours.
Update: The program unbuffer, part of the expect package, will solve the buffering problem. This will cause stdout and stderr to write to the screen and file immediately and keep them in sync when being combined and redirected to tee. E.g.:
$ unbuffer program [arguments...] 2>&1 | tee outfile
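If installing the expect package is not convenient, GNU coreutils ships stdbuf, which can often achieve the same effect by forcing line buffering (an alternative sketch, not part of the original answer; it only helps programs that use the default C stdio buffering):
$ stdbuf -oL program [arguments...] 2>&1 | tee outfile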
Another way that works for me is:
<command> |& tee <outputFile>
as shown in the GNU Bash manual.
Example:
ls |& tee files.txt
If ‘|&’ is used, command1’s standard error, in addition to its standard output, is connected to command2’s standard input through the pipe; it is shorthand for 2>&1 |. This implicit redirection of the standard error to the standard output is performed after any redirections specified by the command.
For more information, refer to the Redirection section of the Bash manual.
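To verify the equivalence yourself (a small sketch; /nonexistent is just a path guaranteed to produce an error on stderr):
ls /nonexistent |& tee files.txt
ls /nonexistent 2>&1 | tee files.txt
Both commands put the error message on the screen and into files.txt.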
You can primarily use Zoredache's solution, but if you don't want to overwrite the output file, use tee with the -a option, as follows:
ls -lR / | tee -a output.file
Something to add:
The unbuffer package has support issues with some packages under Fedora and Red Hat releases.
Setting those troubles aside, the following worked for me:
bash myscript.sh 2>&1 | tee output.log
Thank you ScDF and matthew, your inputs saved me a lot of time.
Using tail -f output should work.
In my case I had a Java process that produced output logs. The simplest solution to display the output logs and also redirect them into a file (named logfile here) was:
my_java_process_run_script.sh |& tee logfile
The result was the Java process running with its output logs displayed and written to the file named logfile.
You can do that for your entire script by using something like this at the beginning of it:
#!/usr/bin/env bash
test x$1 = x$'\x00' && shift || { set -o pipefail ; ( exec 2>&1 ; $0 $'\x00' "$@" ) | tee mylogfile ; exit $? ; }
# do whatever you want
This redirects both stderr and stdout to the file called mylogfile while letting everything go to stdout at the same time.
It uses some stupid tricks:
use exec without a command to set up redirections,
use tee to duplicate outputs,
restart the script with the wanted redirections,
use a special first parameter (a simple NUL character, specified by the $'string' special bash notation) to mark that the script has been restarted (no equivalent parameter may be used by your original work),
try to preserve the original exit status when restarting the script, using the pipefail option.
Ugly but useful for me in certain situations.
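A simpler sketch of the same idea, using exec with process substitution instead of restarting the script (not part of the original trick; mylogfile is again the illustrative target, and the exit-status preservation above is lost):
#!/usr/bin/env bash
exec > >(tee mylogfile) 2>&1   # duplicate all further stdout and stderr into mylogfile
# do whatever you want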
Bonus answer since this use-case brought me here:
In the case where you need to do this as some other user
echo "some output" | sudo -u some_user tee /some/path/some_file
Note that the echo will happen as you, and the file write will happen as "some_user". What will NOT work is running the echo as "some_user" and redirecting the output with >> "some_file", because the file redirect would happen as you.
Hint: tee also supports appending with the -a flag. If you need to replace a line in a file as another user, you could execute sed as the desired user.
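For example, a sed-as-another-user sketch (the user, path, and pattern are all illustrative):
sudo -u some_user sed -i 's/^setting=.*/setting=new_value/' /some/path/some_file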
<command> |& tee filename   # this creates the file "filename" with the command's output as its content; if the file already exists, its previous content is overwritten
<command> | tee >> filename # this appends the output to the file, but does not print it to standard output (the screen), because tee's own stdout is redirected into the file
I want to print something using echo on the screen and append that echoed data to a file:
echo "hi there, Have to print this on screen and append to a file"
tee is perfect for this, but the following will also do the job:
ls -lr / > output; cat output
Related
I typically use tee to receive piped output data, echo it to standard output, and forward it to the actual intended recipient of the piped data. But sometimes this fails, and I cannot exactly understand why.
I'll try to demonstrate with a series of examples:
$ echo testing with this string | tee
testing with this string
So, just echoing some data to tee without arguments is replicated/printed on the terminal/stdout. Note that this should be tee printing the output, as the output from echo is now "piped"/redirected, and therefore not present in stdout anymore (the same thing that happens here:
$ echo aa | echo bb
bb
... i.e. echo aa's output got redirected to the next command, which, being echo bb, does not care about the input and outputs just its own output.)
$ echo testing with this string | tee | python3 -c 'a=1'
$
Now here, piping data into tee without arguments, and then piping from tee to a program that does not write any output to the terminal/stdout, prints nothing. I would have expected tee here to duplicate to stdout and then forward to the next command in the pipeline, but apparently that does not happen.
$ echo testing with this string | tee /dev/stdout
testing with this string
testing with this string
Right, so if we pipe to tee with the command line argument /dev/stdout, we get the printout twice; and as concluded earlier, it must be tee that produces both printed lines. That means that, when used without an argument, | tee basically does not open any file for duplicating and simply forwards what it receives on its input to its output; but as it is the last command in the pipeline, its output is stdout in that case, so we get a single printout.
Here we get double printout, because
tee duplicated its input stream to /dev/stdout due to the argument (which ends up as the first printout); and then
forwarded the same input to its output, which here, tee again being last in the pipeline, is stdout, resulting in the second printout.
This also would explain why the previous ...| tee | python3 -c 'a=1' did not print anything: tee without arguments did not open any file for duplication, and merely forwarded to next command in the toolchain - and as the next one does not print any output either, no output is generated whatsoever.
Well, if the above understanding is correct, then this:
$ echo testing with this string | tee /dev/stdout | python3 -c 'a=1'
$
... should print at least one line (from tee copying to /dev/stdout; the "forwarded" part will end up being "gulped" by the final command as it prints nothing), but it does not.
So, why does this happen - where am I going wrong in my understanding of what tee does?
And how can I use tee, to print to stdout, also when its output is forwarded to a command that doesn't print anything to stdout on its own?
You aren't misunderstanding tee, you're misunderstanding what stdout is. In a pipe, like echo testing | tee | python3 -c 'a=1', the tee command's stdout is not the terminal, it's the pipe going to the python command (and the echo command's stdout is the pipe going to tee).
So tee /dev/stdout sends two copies of its input (on stdin) to the exact same place: its stdout, whether that's the terminal, or a pipe, or whatever.
If you want to send a copy of the input to tee someplace other than down the pipe, you need to send it somewhere other than stdout. Where that is depends on where you actually want to send it (i.e. why you want to copy it). If you specifically want to send it to the terminal, you could do this:
echo testing | tee /dev/tty | python3 -c 'a=1'
...while if you want to send it to the outer context's stdout (which might or might not be a terminal), you can duplicate the outer context's stdout to a different file descriptor (#3 is handy for this), and then have tee write a copy to that:
{ echo testing | tee /dev/fd/3 | python3 -c 'a=1'; } 3>&1
Yet another option is to redirect it to stderr (aka FD #2, which is also the terminal by default, but redirectable separately from stdout) with tee /dev/fd/2.
Note that the various /dev entries I'm using here are supported by most unixish OSes, but they aren't universal. Check to see what your specific OS provides.
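For instance, the stderr variant mentioned above looks like this (a sketch reusing the earlier example):
$ echo testing | tee /dev/fd/2 | python3 -c 'a=1'
testing
The line is printed once, via stderr, even though tee's stdout is swallowed by the pipe.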
I think I got it, but I'm not sure if it is correct. I saw this: 19.8. Forgetting That Pipelines Make Subshells - bash Cookbook [Book].
So, if pipelines make subshells, then
echo testing with this string | tee /dev/stdout | python3 -c 'a=1'
... is conceptually equal to:
echo testing with this string | (tee /dev/stdout | (python3 -c 'a=1'))
Note that the second pipe | redirects stdout of the subshell tee runs in, and as /dev/stdout is just an interface to stdout, it is redirected too, so we get nothing printed.
So, while stdout (and /dev/stdout) is local to the (sub)shell, /dev/tty is local to the terminal - and therefore the following:
$ echo testing with this string | tee /dev/tty | python3 -c 'a=1'
testing with this string
... in fact prints a line, as expected.
This question already has answers at: How to redirect and append both standard output and standard error to a file with Bash.
I know that in Linux, to redirect output from the screen to a file, I can use either > or tee. However, I'm not sure why part of the output is still shown on the screen and not written to the file.
Is there a way to redirect all output to a file?
That part is written to stderr, use 2> to redirect it. For example:
foo > stdout.txt 2> stderr.txt
or if you want both in the same file:
foo > allout.txt 2>&1
Note: this works in (ba)sh, check your shell for proper syntax
All POSIX operating systems have three standard streams: stdin, stdout, and stderr. stdin is the input stream. stdout is the primary output, which is redirected with >, >>, or |. stderr is the error output; it is handled separately so that errors do not get piped into another command or written into an output file where they could corrupt the expected data. Normally stderr is sent to a log of some kind, or displayed directly, even when stdout is redirected. To redirect both to the same place, use:
command &> /some/file
EDIT: thanks to Zack for pointing out that the above solution is not portable; use instead:
command > file 2>&1
If you want to silence the error, do:
command 2> /dev/null
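A quick experiment to see which stream a message travels on (the paths are illustrative; ls reports errors on stderr):
ls /nonexistent /tmp > out.txt        # the /tmp listing goes to out.txt; the error still hits the terminal
ls /nonexistent /tmp > out.txt 2>&1   # the error lands in out.txt as well
ls /nonexistent /tmp 2> /dev/null     # the error is silenced; the listing still prints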
To get the output on the console AND in a file, file.txt for example:
make 2>&1 | tee file.txt
Note: & (in 2>&1) specifies that 1 is not a file name but a file descriptor.
Use this:
<your command here> > log_file_name 2>&1
Detailed description of the redirection operators in Unix/Linux:
The > operator redirects the output, usually to a file, but it can also be to a device. You can also use >> to append.
If you don't specify a number, then the standard output stream is assumed, but you can also redirect errors:
> file redirects stdout to file
1> file redirects stdout to file
2> file redirects stderr to file
&> file redirects stdout and stderr to file
/dev/null is the null device; it takes any input you want and throws it away. It can be used to suppress any output.
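Putting those operators together (a sketch; make is just an example command):
make > build.log          # stdout to build.log; stderr still on the terminal
make 2> errors.log        # stderr to errors.log; stdout still on the terminal
make &> all.log           # both streams to all.log (bash)
make > /dev/null 2>&1     # discard everything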
Credits to osexp2003 and j.a. …
Instead of putting:
&>> your_file.log
behind a line in:
crontab -e
I use:
#!/bin/bash
exec &>> your_file.log
…
at the beginning of a BASH script.
Advantage: You have the log definitions within your script. Good for Git etc.
You can use the exec command to redirect all stdout/stderr output of any commands that come after it.
Sample script:
exec 2> your_file2 > your_file1
# your other commands...
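A slightly fuller sketch of the same idea (the filenames are illustrative):
#!/bin/bash
exec 2> errors.log > output.log   # everything below is redirected
echo "this line lands in output.log"
echo "this line lands in errors.log" >&2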
It might be the standard error. You can redirect it:
... > out.txt 2>&1
Command:
foo >> output.txt 2>&1
appends to the output.txt file, without replacing the content.
Use >> to append:
command >> file
In my bash script I use grep in different logs like this:
LOGS1=$(grep -E -i 'err|warn' /opt/backup/exports.log /opt/backup/imports.log && grep "tar:" /opt/backup/h2_backups.log /opt/backup/st_backups.log)
if [ -n "$LOGS1" ] ]; then
COLOUR="yellow"
MESSAGE="Logs contain warnings. Backups may be incomplete. Invetigate these warnings:\n$LOGS"
Instead of checking whether each log exists (there are many more logs than this), I want to check stderr while the script runs to see if I get any output. If one of the logs does not exist, grep will produce an error like this: grep: /opt/backup/st_backups.log: No such file or directory
I've tried to read stderr with commands like command 2> >(grep "file" >&2), but that does not seem to work.
I know I can pipe the output to a file, but I'd rather just handle the stderr when there is any output, instead of reading the file. Or is there any reason why piping to a file is better?
Send the standard error (file descriptor 2) to standard output (file descriptor 1) and assign it to the variable Q:
$ Q=$(grep text file 2>&1)
$ echo $Q
grep: file: No such file or directory
This is the default behaviour: stderr is normally connected to your terminal (and unbuffered), so you see errors even while you pipe stdout somewhere. If you want to merge stderr with stdout, then this is the syntax:
command >file 2>&1
There seem to be two bash idioms for redirecting STDOUT and STDERR to a file:
fooscript &> foo
... and ...
fooscript > foo 2>&1
What's the difference? It seems to me that the first one is just a shortcut for the second one, but my coworker contends that the second one will produce no output even if there's an error with the initial redirect, whereas the first one will spit redirect errors to STDOUT.
EDIT: Okay... it seems like people are not understanding what I am asking, so I will try to clarify:
Can anyone give me an example where the two specific lines written above will yield different behavior?
From the bash manual:
There are two formats for redirecting standard output and standard error:
&>word
and
>&word
Of the two forms, the first is preferred. This is semantically equivalent to
>word 2>&1
The phrase "semantically equivalent" should settle the issue with your coworker.
The situation where the two lines have different behavior is when your script is not running in bash but some simpler shell in the sh family, e.g. dash (which I believe is used as /bin/sh in some Linux distros because it is more lightweight than bash). In that case,
fooscript &> foo
is interpreted as two commands: the first one runs fooscript in the background, and the second one truncates the file foo. The command
fooscript > foo 2>&1
runs fooscript in the foreground and redirects its output and standard error to the file foo. In bash I think the lines will always do the same thing.
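You can demonstrate the dash behavior directly (a sketch; fooscript stands for any command, as above):
dash -c 'fooscript &> foo'       # parsed as: fooscript &  > foo  (background the command, then truncate foo)
dash -c 'fooscript > foo 2>&1'   # runs in the foreground with both streams going into foo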
The main reason to use 2>&1, in my experience, is when you want to append all output to a file rather than overwrite the file. With the &> syntax you can't append (although bash 4 added &>> for exactly that). So with 2>&1, you can write something like program >> alloutput.log 2>&1 and get stdout and stderr output appended to the log file.
&>foo is less typing than >foo 2>&1, and less error-prone (you can't get it in the wrong order), but achieves the same result.
2>&1 is confusing, because you need to put it after the 1> redirect, unless stdout is being redirected to a | pipe, in which case it goes before...
$ some-command 2>&1 >foo # does the unexpected
$ some-command >foo 2>&1 # does the same as
$ some-command &>foo # this and
$ some-command >&foo # compatible with other shells, but trouble if the filename is numeric
$ some-command 2>&1 | less # but the redirect goes *before* the pipe here...
&> foo # Will take all and redirect all output to foo.
2>&1 # will redirect stderr to stdout.
2>&1 depends on the order in which it is specified on the command line. Where &> sends both stdout and stderr to wherever, 2>&1 sends stderr to where stdout is currently going at that point in the command line. Thus:
command > file 2>&1
is different than:
command 2>&1 > file
where the former is redirecting both stdout and stderr to file, the latter redirects stderr to where stdout is going before it is redirected to the file (in this case, probably the terminal.) This is useful if you wanted to do something like:
command 2>&1 > file | less
Where you want to use less to page through the output of stderr and store the output of stdout to a file.
2>&1 might be useful for cases where you aren't redirecting stdout to somewhere else, but rather you just want stderr to be sent to the same place (such as the console) as stdout (perhaps if the command you are running is already sending stderr somewhere else other than the console).
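To make the ordering concrete (a sketch; /nonexistent simply forces an error message):
ls /nonexistent > file 2>&1   # both streams end up in file
ls /nonexistent 2>&1 > file   # the error goes to the terminal; only stdout goes into file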
I know it's a pretty old posting, but I'm sharing what could answer the question:
* ..will yield different behavior?
Scenario:
When you are trying to use "tee" and want to preserve the exit code using process substitution in bash...
someScript.sh 2>&1 >( tee "toALog")  # this fails to capture the output to the log
whereas:
someScript.sh >& >( tee "toALog") # works fine!
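An alternative sketch that avoids process substitution entirely: keep the ordinary pipe and set pipefail, so a failure in someScript.sh is not masked by tee succeeding:
set -o pipefail
someScript.sh 2>&1 | tee toALog
echo "exit status: $?"   # non-zero if someScript.sh failed, even though tee succeeded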