I've been trying to experiment and see what the difference would be between
command >file 2> file
and
command >file 2>&1
I haven't been able to see any difference. I understand that the second says to send stderr to where file descriptor 1 (stdout) is already going, and the first would open a new, separate connection to the file for it, but how can this difference actually be seen?
Also, where can I find more information to learn about file descriptors, I/O redirection syntax, and how it all works?
The difference is that >file 2>&1 opens the file just once, but then allows access to that single connection to the file (technically, the "open file description" in the kernel) via both file descriptor #1 (stdout) and #2 (stderr). Since writes to both stdout and stderr are going via the same connection ("open file description"), they write to the file in a consistent, coordinated way (and similar coordination applies to files opened for input on multiple descriptors).
>file 2>file, on the other hand, opens the file twice (creating two separate open file descriptions in the kernel), so writing to the file via the two file descriptors is not coordinated, and they can basically step on each other's feet.
An example may help to clarify what I mean. Here's a short subshell command that prints something to stdout, then a bit to stderr, then more to stdout. Try it first with >file 2>&1 and it does what you'd expect:
$ (echo abc; echo 123456 >&2; echo def) >file 2>&1
$ cat file
abc
123456
def
No surprise there, right? Now let's try it with separate connections to the file:
$ (echo abc; echo 123456 >&2; echo def) >file 2>file
$ cat file
1234def
That's probably not what you were expecting. What's happened here is that the first echo command sent "abc" followed by a newline character to stdout, and it got written into the first four bytes of the file. The second echo then sent "123456" followed by a newline to stderr; since the stderr connection was separate, it was still pointed to the beginning of the file, so it got written into the first seven bytes of the file (overwriting the "abc<newline>" that was already there). Then the third echo sent "def" and a newline to stdout; since the stdout connection was pointed to byte #5 of the file (one byte past where the last write to that connection ended), it gets written starting there, which overwrites the "56<newline>" part of what the second echo wrote there.
So having the same file open multiple times can lead to really confusing results. This is why you should always use >file 2>&1 instead of >file 2>file.
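Incidentally, if you're using bash specifically, &>file is a shorthand for >file 2>&1 and behaves the same way (a single open file description shared by both descriptors). It's a bash extension, though, not portable to POSIX sh:
$ (echo abc; echo 123456 >&2; echo def) &>file
$ cat file
abc
123456
def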
Here is a way of seeing one difference:
$ some-command > file 2> file
# no error, even though the file was opened twice
$ rm file
$ set -o noclobber
$ some-command > file 2> file
bash: file: cannot overwrite existing file
$ rm file
$ some-command > file 2>&1
# no error
With noclobber set, > file creates the file and the second redirection 2> file then fails because the file already exists, which shows that > file 2> file opens the file twice. > file 2>&1 opens the file only once, so it succeeds.
In bash, calling foo would display any output from that command on stdout.
Calling foo > output would redirect any output from that command to the file specified (in this case 'output').
Is there a way to redirect output to a file and have it display on stdout?
The command you want is named tee:
foo | tee output.file
For example, if you only care about stdout:
ls -a | tee output.file
If you want to include stderr, do:
program [arguments...] 2>&1 | tee outfile
2>&1 redirects channel 2 (stderr/standard error) into channel 1 (stdout/standard output), so that both are written as stdout, and both are therefore also written to the given output file by the tee command.
Furthermore, if you want to append to the log file, use tee -a as:
program [arguments...] 2>&1 | tee -a outfile
$ program [arguments...] 2>&1 | tee outfile
2>&1 merges the stderr stream into the stdout stream.
tee outfile takes the stream it gets and writes it to the screen and to the file "outfile".
This is probably what most people are looking for. The likely situation is some program or script is working hard for a long time and producing a lot of output. The user wants to check it periodically for progress, but also wants the output written to a file.
The problem (especially when mixing stdout and stderr streams) is that there is reliance on the streams being flushed by the program. If, for example, all the writes to stdout are not flushed, but all the writes to stderr are flushed, then they'll end up out of chronological order in the output file and on the screen.
It's also bad if the program only outputs 1 or 2 lines every few minutes to report progress. In such a case, if the output was not flushed by the program, the user wouldn't even see any output on the screen for hours, because none of it would get pushed through the pipe for hours.
Update: The program unbuffer, part of the expect package, will solve the buffering problem. This will cause stdout and stderr to write to the screen and file immediately and keep them in sync when being combined and redirected to tee. E.g.:
$ unbuffer program [arguments...] 2>&1 | tee outfile
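If unbuffer isn't available, GNU coreutils' stdbuf can often achieve a similar effect for programs that use the default C stdio buffering (it won't help programs that set their own buffering):
$ stdbuf -oL -eL program [arguments...] 2>&1 | tee outfile
Here -oL and -eL switch stdout and stderr to line buffering, so each line is pushed through the pipe as soon as it's complete.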
Another way that works for me is:
<command> |& tee <outputFile>
as shown in the GNU Bash manual.
Example:
ls |& tee files.txt
If ‘|&’ is used, command1’s standard error, in addition to its standard output, is connected to command2’s standard input through the pipe; it is shorthand for 2>&1 |. This implicit redirection of the standard error to the standard output is performed after any redirections specified by the command.
For more information, refer to the Bash manual's section on redirection.
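To see that |& really carries stderr too, try a command that fails (the exact error text varies with your ls version):
$ ls no-such-file |& tee files.txt
ls: cannot access 'no-such-file': No such file or directory
$ cat files.txt
ls: cannot access 'no-such-file': No such file or directory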
You can primarily use Zoredache's solution, but if you don't want to overwrite the output file, you should invoke tee with the -a option, as follows:
ls -lR / | tee -a output.file
Something to add ...
The unbuffer package has support issues with some packages under Fedora and Red Hat releases.
Setting those troubles aside, the following worked for me:
bash myscript.sh 2>&1 | tee output.log
Thank you ScDF & matthew, your inputs saved me a lot of time.
Using tail -f output should work.
In my case I had a Java process with output logs. The simplest solution to display the output logs and also redirect them into a file (named logfile here) was:
my_java_process_run_script.sh |& tee logfile
The result was the Java process running with its output logs displayed and written into the file named logfile.
You can do that for your entire script by using something like this at the beginning of your script:
#!/usr/bin/env bash
test x$1 = x$'\x00' && shift || { set -o pipefail ; ( exec 2>&1 ; $0 $'\x00' "$@" ) | tee mylogfile ; exit $? ; }
# do whatever you want
This redirects both stderr and stdout to the file called mylogfile and lets everything go to stdout at the same time.
It uses some stupid tricks:
use exec without a command to set up redirections,
use tee to duplicate outputs,
restart the script with the wanted redirections,
use a special first parameter (a NUL character written with the $'string' special bash notation) to signal that the script has been restarted (no equivalent parameter may be used by your original work); note that since bash cannot actually store a NUL byte in a string, the marker in practice expands to an empty string, which still works,
try to preserve the original exit status when restarting the script, using the pipefail option.
Ugly but useful for me in certain situations.
Bonus answer since this use-case brought me here:
In the case where you need to do this as some other user:
echo "some output" | sudo -u some_user tee /some/path/some_file
Note that the echo will happen as you, and the file write will happen as "some_user". What will NOT work is running the echo as "some_user" and redirecting the output with >> "some_file", because the file redirect would still happen as you.
Hint: tee also supports appending with the -a flag; if you need to replace a line in a file as another user, you could execute sed as the desired user.
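For example (the pattern and path here are just placeholders; also note that sed's -i in-place flag behaves slightly differently between GNU and BSD sed):
sudo -u some_user sed -i 's/old text/new text/' /some/path/some_file
Because sed runs as some_user, the in-place rewrite of the file happens with that user's permissions.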
command |& tee filename   # this will create a file "filename" whose content is the command's output; if the file already exists, its existing content is removed and the new output is written.
command | tee >> filename # this will append the output to the file, but it doesn't print the output on standard output (the screen), because tee's own stdout has been redirected into the file.
I want to print something using echo on the screen and also append that echoed data to a file:
echo "hi there, Have to print this on screen and append to a file"
tee is perfect for this, but this will also do the job
ls -lr / > output; cat output
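For the echo case above, tee with the append flag covered earlier does both at once:
echo "hi there, Have to print this on screen and append to a file" | tee -a output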
ftp -v -n <<! > /tmp/ftp$$ 2>&1
open $TARGET_HOST
user $TARGET_USER $TARGET_PWORD
binary
cd $TARGET_PUT_DIR
put $RESULTS_OUT_DIR/$FILE $FILE
bye
!
I understand that <<! is a "here-document" and is passing the commands to ftp until it reaches the delimiter "!", but I can't seem to wrap my head around this redirection:
> /tmp/ftp$$ 2>&1
Could someone please explain what is happening here?
First, the heredoc could be listed elsewhere without affecting what happens. Heredoc bodies are traditionally written last, but the <<NAME redirection itself can actually be written anywhere within the command. The order of << relative to the two > redirections doesn't matter, since the former changes stdin and the latter change stdout and stderr.
It'd be clearer if it were written:
ftp -v -n > /tmp/ftp$$ 2>&1 <<!
...
!
Second, to explain the output redirections:
> /tmp/ftp$$ redirects stdout to a file named /tmp/ftp1234, where 1234 is the PID of the current shell process. It's an ad hoc way of making a temporary file with a relatively unique name. If the shell script were run several times in parallel each copy would write to a different temp file.
2>&1 redirects stderr (fd 2) to stdout (fd 1). In other words, it sends error messages to the same file /tmp/ftp$$.
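Side note: $$-based names are only relatively unique and are predictable. If you were writing this today, mktemp would be a more robust way to create the temp file, e.g.:
tmpfile=$(mktemp)
ftp -v -n > "$tmpfile" 2>&1 <<!
...
!
mktemp guarantees a fresh, uniquely named file.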
I have output which is coming from fd3 from a program of which I am redirecting to a file as such:
program 3> output.log
In this instance I only need the first line provided by the program to be written to the log and do not want to keep a write handle open to this file for the life of the program.
How can I read only the first line? I think I can use the shell command read, but I don't know how to use it for anything other than stdin. Note that I do not want to redirect fd3 to stdout and then use read, as I am capturing stdout to another log.
You can capture the first line of an arbitrary file descriptor in this way:
$ (printf '%s\n' foo bar >&3) 3> >(head -n1)
foo
This prints two lines to FD 3 and redirects that to standard input of head. If you want to store that result to a file simply redirect within the process substitution.
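For example, to store just the first line in a file (first_line.log is a placeholder name):
program 3> >(head -n1 > first_line.log)
head exits after reading one line, closing its end of the pipe; a program that keeps writing to fd 3 after that may receive SIGPIPE, depending on how it handles it.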
When I run the following Bash script, I would expect it to print Hello. Instead, it prints a blank line and exits.
echo 'Hello' | echo
Why doesn't piping output from echo to echo work?
echo prints all of its arguments. It does not read from stdin. So the second echo prints all of its arguments (none) and exits, ignoring the Hello on stdin.
For a program that reads its stdin and prints that to stdout, use cat:
$ echo Hello | cat
Hello
In this case the pipe you are using is more correctly known as an anonymous pipe, because it has no name (there are also named pipes). Anonymous pipes only work between related processes, for example processes with the same parent.
Pipes are part of the I/O system provided by the C runtime library. These streams are buffered by default (there is an exception). Basically a pipe just connects the output buffer of one process to the input buffer of another.
The first three streams used (called file descriptors) are numbered 0, 1, and 2. The first, 0, is known as standard input, or stdin (the name used in C). By default this is connected to the keyboard, but it can be redirected, either using the < symbol or by the program being on the right-hand side of a pipe.
The second, 1, is known as standard output, or stdout. By default this is connected to the terminal screen, but it can be redirected, either using the > symbol or by the program being on the left-hand side of a pipe.
So:
echo 'Hello' | echo
takes the standard output from the first echo and passes it to the standard input of the second echo. But echo does not read stdin! So the Hello is simply discarded, and the second echo, given no arguments, prints just an empty line.
Filter programs process the filenames specified on the command-line. If no filenames are given then they read stdin. Examples include cat, grep, and sed, but not echo. For example:
echo 'Hello' | cat
will display 'Hello', and the cat is useless (it often is).
echo 'Hello' | cat file1
will ignore the output from echo and just display the contents of file1. Remember that stdin is only read if no filename is given.
What do you think this displays?
echo 'Hello' | cat < file1 file2
and why?
Finally, the third stream, 2, is called standard error, or stderr, and this one is unbuffered. It is ignored by pipes, because they only operate between stdin and stdout. However, you can redirect stderr to use stdout (see man dup2):
myprog 2>&1 | anotherprog
The 2>&1 means "redirect file descriptor 2 to the same place as file descriptor 1".
The above is normal behaviour, however a program can override all that if it wants to. It could read from file descriptor 2, for example. I have omitted a lot of other detail, including other forms of redirection such as process substitution and here documents.
Piping only works for commands that take their input from stdin, but echo does not read from stdin: it takes its input from its arguments and prints them. So this won't work. In order to pipe into echo, you can do something like echo $(echo 'hello'), which passes the inner echo's output to the outer one as an argument.
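Another option in the same spirit is xargs, which reads stdin and turns it into arguments for the command you give it:
$ echo 'Hello' | xargs echo
Hello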
It is because echo (both the builtin and /bin/echo) doesn't read anything from stdin.
Use cat instead:
echo 'Hello' | cat
Hello
Or without pipes:
cat <<< 'Hello'
I want to redirect the output of stdout and stderr to a common file:
./foo.sh >stdout_and_stderr.txt 2>&1
But also redirect just stderr to a separate file. I tried variations of:
./foo.sh >stdout_and_stderr.txt 2>stderr.txt 2>&1
but none of them work quite right in bash, e.g. stderr only gets redirected to one of the output files. It's important that the combined file preserves the line ordering of the first code snippet, so no dumping to separate files and later combining.
Is there a neat solution to this in bash?
You can use an additional file descriptor and tee:
{ foo.sh 2>&1 1>&3- | tee stderr.txt; } > stdout_and_stderr.txt 3>&1
Be aware that line buffering may cause the stdout output to appear out of order. If this is a problem, there are ways to overcome that including the use of unbuffer.
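You can sanity-check the construct with an inline command group standing in for foo.sh:
$ { { echo out; echo err >&2; } 2>&1 1>&3- | tee stderr.txt; } > stdout_and_stderr.txt 3>&1
$ cat stderr.txt
err
$ cat stdout_and_stderr.txt
out
err
(The relative order of out and err in the combined file is not guaranteed, for the buffering reasons just mentioned.)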
Using process substitution, you can get a moderate approximation to what you're after:
file1=stdout.stderr
file2=stderr.only
: > $file1 # Zap the file before starting
./foo.sh >> $file1 2> >(tee $file2 >> $file1)
This names the files since one of the names is repeated. The standard output is written to $file1. Standard error is written to the pipeline, which runs tee; tee writes one copy of its input (which was the standard error output) to $file2, and a second copy to $file1. The >> redirections mean that the files are opened with O_APPEND, so each write goes at the current end of the file, regardless of what the other process has written.
As noted in comments, the output will, in general, be interleaved differently in this than it would if you simply ran ./foo.sh at the terminal. There are multiple sets of buffering going on to ensure that is what happens. You might also get partial lines because of the ways lines break over buffer size boundaries.
This comment from @jonathan-leffler should be an answer:
Note that your first command (./foo.sh 2>&1 >file) sends errors to the original standard output, and the standard output (but not the redirected standard error) to the file.
If you wanted both in file, you'd have to use ./foo.sh >file 2>&1, reversing the order of the redirections.
They're interpreted reading left to right.
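A quick demonstration of that left-to-right ordering, with an inline command group standing in for foo.sh:
$ { echo out; echo err >&2; } >file 2>&1    # both lines end up in file
$ { echo out; echo err >&2; } 2>&1 >file    # err appears on the terminal; only out ends up in file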