Weird behavior with busybox grep and wget in alpine linux [duplicate] - shell

To combine stderr and stdout into the stdout stream, we append this to a command:
2>&1
e.g. to see the first few errors from compiling g++ main.cpp:
g++ main.cpp 2>&1 | head
What does 2>&1 mean, in detail?

File descriptor 1 is the standard output (stdout).
File descriptor 2 is the standard error (stderr).
At first, 2>1 may look like a good way to redirect stderr to stdout. However, it will actually be interpreted as "redirect stderr to a file named 1".
& indicates that what follows and precedes is a file descriptor, and not a filename. Thus, we use 2>&1. Consider >& to be a redirect merger operator.
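A quick sketch of the difference (run in a throw-away directory; the nonexistent path is only there to generate an error):

```shell
cd "$(mktemp -d)"            # scratch directory, so nothing gets clobbered
ls /nonexistent 2>1          # "1" is taken as a FILENAME here
cat 1                        # the error message landed in a file named "1"
ls /nonexistent 2>&1 | cat   # "&1" names the DESCRIPTOR: stderr flows into the pipe
```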

To redirect stdout to file.txt:
echo test > file.txt
This is equivalent to:
echo test 1> file.txt
To redirect stderr to file.txt:
echo test 2> file.txt
So >& is the syntax to redirect a stream to another file descriptor:
0 is stdin
1 is stdout
2 is stderr
To redirect stdout to stderr:
echo test 1>&2 # equivalently, echo test >&2
To redirect stderr to stdout:
echo test 2>&1
Thus, in 2>&1:
2> redirects stderr to an (unspecified) file.
&1 redirects stderr to stdout.

Some tricks about redirection
Some syntactic particularities here have important behaviours. Here are some small samples about redirections, STDERR, STDOUT, and argument ordering.
1 - Overwriting or appending?
Symbol > means redirection.
> means send as a whole file, overwriting the target if it exists (see the noclobber bash feature at #3 later).
>> means send in addition, appending to the target if it exists.
In either case, the file is created if it does not exist.
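For instance (log.txt is just an illustrative name):

```shell
cd "$(mktemp -d)"       # scratch directory
echo first  > log.txt   # > creates (or overwrites) the file
echo second > log.txt   # overwrites: the file now holds only "second"
echo third >> log.txt   # appends: the file now holds "second" then "third"
cat log.txt
```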
2 - The shell command line is order dependent!!
For testing this, we need a simple command which will send something on both outputs:
$ ls -ld /tmp /tnt
ls: cannot access /tnt: No such file or directory
drwxrwxrwt 118 root root 196608 Jan 7 11:49 /tmp
$ ls -ld /tmp /tnt >/dev/null
ls: cannot access /tnt: No such file or directory
$ ls -ld /tmp /tnt 2>/dev/null
drwxrwxrwt 118 root root 196608 Jan 7 11:49 /tmp
(Assuming you don't have a directory named /tnt, of course ;). Well, there we have it!
So, let's see:
$ ls -ld /tmp /tnt >/dev/null
ls: cannot access /tnt: No such file or directory
$ ls -ld /tmp /tnt >/dev/null 2>&1
$ ls -ld /tmp /tnt 2>&1 >/dev/null
ls: cannot access /tnt: No such file or directory
The last command line dumps STDERR to the console, which seems not to be the expected behaviour... But...
If you want to make some post filtering about standard output, error output or both:
$ ls -ld /tmp /tnt | sed 's/^.*$/<-- & --->/'
ls: cannot access /tnt: No such file or directory
<-- drwxrwxrwt 118 root root 196608 Jan 7 12:02 /tmp --->
$ ls -ld /tmp /tnt 2>&1 | sed 's/^.*$/<-- & --->/'
<-- ls: cannot access /tnt: No such file or directory --->
<-- drwxrwxrwt 118 root root 196608 Jan 7 12:02 /tmp --->
$ ls -ld /tmp /tnt >/dev/null | sed 's/^.*$/<-- & --->/'
ls: cannot access /tnt: No such file or directory
$ ls -ld /tmp /tnt >/dev/null 2>&1 | sed 's/^.*$/<-- & --->/'
$ ls -ld /tmp /tnt 2>&1 >/dev/null | sed 's/^.*$/<-- & --->/'
<-- ls: cannot access /tnt: No such file or directory --->
Notice that the last command line in this paragraph is exactly the same as in the previous paragraph, where I wrote that it seems not to be the expected behaviour (so this could even be an expected behaviour).
Well, there is a little trick about redirections, for doing different operations on the two outputs:
$ ( ls -ld /tmp /tnt | sed 's/^/O: /' >&9 ) 9>&2 2>&1 | sed 's/^/E: /'
O: drwxrwxrwt 118 root root 196608 Jan 7 12:13 /tmp
E: ls: cannot access /tnt: No such file or directory
Note: descriptor 9 exists inside the parentheses because it is opened by the trailing ) 9>&2.
Addendum: with newer versions of bash (>4.0) there is a feature, process substitution, with a nicer syntax for doing this kind of thing:
$ ls -ld /tmp /tnt 2> >(sed 's/^/E: /') > >(sed 's/^/O: /')
O: drwxrwxrwt 17 root root 28672 Nov 5 23:00 /tmp
E: ls: cannot access /tnt: No such file or directory
And finally for such a cascading output formatting:
$ ((ls -ld /tmp /tnt |sed 's/^/O: /' >&9 ) 2>&1 |sed 's/^/E: /') 9>&1| cat -n
1 O: drwxrwxrwt 118 root root 196608 Jan 7 12:29 /tmp
2 E: ls: cannot access /tnt: No such file or directory
Addendum: the same new syntax, in both directions:
$ cat -n <(ls -ld /tmp /tnt 2> >(sed 's/^/E: /') > >(sed 's/^/O: /'))
1 O: drwxrwxrwt 17 root root 28672 Nov 5 23:00 /tmp
2 E: ls: cannot access /tnt: No such file or directory
Here STDOUT goes through one specific filter, STDERR through another, and finally the two outputs, merged, go through a third command filter.
2b - Using |& instead
Syntax command |& ... could be used as an alias for command 2>&1 | .... Same rules about command line order applies. More details at What is the meaning of operator |& in bash?
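A minimal check that the two spellings behave the same (bash >= 4; the braces just group a command that writes one line to each stream):

```shell
{ echo out; echo err >&2; } 2>&1 | sort   # both lines enter the pipe
{ echo out; echo err >&2; } |& sort       # same result with the shorthand
```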
3 - A word about noclobber option and >| syntax
That's about overwriting:
While set -o noclobber instructs bash not to overwrite any existing file, the >| syntax lets you bypass this limitation:
$ testfile=$(mktemp /tmp/testNoClobberDate-XXXXXX)
$ date > $testfile ; cat $testfile
Mon Jan 7 13:18:15 CET 2013
$ date > $testfile ; cat $testfile
Mon Jan 7 13:18:19 CET 2013
$ date > $testfile ; cat $testfile
Mon Jan 7 13:18:21 CET 2013
The file is overwritten each time, well now:
$ set -o noclobber
$ date > $testfile ; cat $testfile
bash: /tmp/testNoClobberDate-WW1xi9: cannot overwrite existing file
Mon Jan 7 13:18:21 CET 2013
$ date > $testfile ; cat $testfile
bash: /tmp/testNoClobberDate-WW1xi9: cannot overwrite existing file
Mon Jan 7 13:18:21 CET 2013
Pass through with >|:
$ date >| $testfile ; cat $testfile
Mon Jan 7 13:18:58 CET 2013
$ date >| $testfile ; cat $testfile
Mon Jan 7 13:19:01 CET 2013
To unset this option and/or check whether it is set:
$ set -o | grep noclobber
noclobber on
$ set +o noclobber
$ set -o | grep noclobber
noclobber off
$ date > $testfile ; cat $testfile
Mon Jan 7 13:24:27 CET 2013
$ rm $testfile
4 - Last trick and more...
For redirecting both outputs of a given command, we have seen that the right syntax is:
$ ls -ld /tmp /tnt >/dev/null 2>&1
For this special case, there is a shortcut syntax: &> ... or >&
$ ls -ld /tmp /tnt &>/dev/null
$ ls -ld /tmp /tnt >&/dev/null
Note: just as 2>&1 exists, 1>&2 is correct syntax too:
$ ls -ld /tmp /tnt 2>/dev/null 1>&2
4b- Now, I will let you think about:
$ ls -ld /tmp /tnt 2>&1 1>&2 | sed -e s/^/++/
++/bin/ls: cannot access /tnt: No such file or directory
++drwxrwxrwt 193 root root 196608 Feb 9 11:08 /tmp/
$ ls -ld /tmp /tnt 1>&2 2>&1 | sed -e s/^/++/
/bin/ls: cannot access /tnt: No such file or directory
drwxrwxrwt 193 root root 196608 Feb 9 11:08 /tmp/
4c- If you're interested in more information
You could read the fine manual by hitting:
man -Len -Pless\ +/^REDIRECTION bash
in a bash console ;-)

I found this brilliant post on redirection: All about redirections
Redirect both standard output and standard error to a file
$ command &>file
This one-liner uses the &> operator to redirect both output streams - stdout and stderr - from command to file. This is Bash's shortcut for quickly redirecting both streams to the same destination.
Here is what the file descriptor table looks like after Bash has redirected both streams:
As you can see, both stdout and stderr now point to file. So anything written to stdout and stderr gets written to file.
There are several ways to redirect both streams to the same destination. You can redirect each stream one after another:
$ command >file 2>&1
This is a much more common way to redirect both streams to a file. First stdout is redirected to file, and then stderr is duplicated to be the same as stdout. So both streams end up pointing to file.
When Bash sees several redirections it processes them from left to right. Let's go through the steps and see how that happens. Before running any commands, Bash's file descriptor table looks like this:
Now Bash processes the first redirection >file. We've seen this before and it makes stdout point to file:
Next Bash sees the second redirection 2>&1. We haven't seen this redirection before. This one duplicates file descriptor 2 to be a copy of file descriptor 1 and we get:
Both streams have been redirected to file.
However be careful here! Writing
command >file 2>&1
is not the same as writing:
$ command 2>&1 >file
The order of redirects matters in Bash! This command redirects only the standard output to the file. The stderr will still print to the terminal. To understand why that happens, let's go through the steps again. So before running the command, the file descriptor table looks like this:
Now Bash processes redirections left to right. It first sees 2>&1 so it duplicates stderr to stdout. The file descriptor table becomes:
Now Bash sees the second redirect, >file, and it redirects stdout to file:
Do you see what happens here? Stdout now points to file, but the stderr still points to the terminal! Everything that gets written to stderr still gets printed out to the screen! So be very, very careful with the order of redirects!
Also note that in Bash, writing
$ command &>file
is exactly the same as:
$ command >&file

The numbers refer to the file descriptors (fd).
Zero is stdin
One is stdout
Two is stderr
2>&1 redirects fd 2 to 1.
This works for any number of file descriptors if the program uses them.
You can look at /usr/include/unistd.h if you forget them:
/* Standard file descriptors. */
#define STDIN_FILENO 0 /* Standard input. */
#define STDOUT_FILENO 1 /* Standard output. */
#define STDERR_FILENO 2 /* Standard error output. */
That said I have written C tools that use non-standard file descriptors for custom logging so you don't see it unless you redirect it to a file or something.
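A shell-level sketch of the same idea: the caller opens the extra descriptor, and the program only ever writes to it (fd 7 and the file name are arbitrary choices here):

```shell
cd "$(mktemp -d)"
# the caller opens fd 7; the child program just writes with >&7
sh -c 'echo "custom log line" >&7' 7>custom.log
cat custom.log               # the otherwise-hidden output becomes visible
```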

That construct sends the standard error stream (stderr) to the current location of standard output (stdout) - this point about the current location appears to have been neglected by the other answers.
You can redirect any output handle to another by using this method but it's most often used to channel stdout and stderr streams into a single stream for processing.
Some examples are:
# Look for ERROR string in both stdout and stderr.
foo 2>&1 | grep ERROR
# Run the less pager without stderr screwing up the output.
foo 2>&1 | less
# Send stdout/err to file (with append) and terminal.
foo 2>&1 |tee /dev/tty >>outfile
# Send stderr to normal location and stdout to file.
foo >outfile1 2>&1 >outfile2
Note that that last one will not direct stderr to outfile2 - it redirects it to what stdout was when the argument was encountered (outfile1) and then redirects stdout to outfile2.
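To see that in action, here is a sketch with a stand-in foo (hypothetical) that writes one line to each stream:

```shell
cd "$(mktemp -d)"
foo() { echo OUT; echo ERR >&2; }   # hypothetical two-stream command

foo >outfile1 2>&1 >outfile2
cat outfile1   # ERR -- stderr was duplicated while stdout pointed at outfile1
cat outfile2   # OUT -- stdout's final destination
```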
This allows some pretty sophisticated trickery.

I found this very helpful: if you are a beginner, read this.
Update:
In a Linux or Unix system there are two places programs send output to: standard output (stdout) and standard error (stderr). You can redirect either output to a file.
For example, if you do this: ls -a > output.txt
nothing will be printed in the console; all output (stdout) is redirected to the output file.
And if you try to print the contents of a file that does not exist, the output will be an error, e.g. if you print test.txt, which is not present in the current directory:
cat test.txt > error.txt
Output will be
cat: test.txt: No such file or directory
But the error.txt file will be empty, because we are redirecting stdout to a file, not stderr.
So we need a file descriptor (a file descriptor is nothing more than a non-negative integer that represents an open file; you can say a descriptor is the unique id of an open file) to tell the shell which type of output we are sending to the file. In Unix/Linux systems, 1 is for stdout and 2 for stderr.
So now, if you do ls -a 1> output.txt, it means you are sending standard output (stdout) to output.txt,
and if you do cat test.txt 2> error.txt, it means you are sending standard error (stderr) to error.txt.
&1 is used to reference the value of the file descriptor 1 (stdout).
Now to the point: 2>&1 means "redirect stderr to the same place we are redirecting stdout".
Now you can do this:
cat maybefile.txt > output.txt 2>&1
and both standard output (stdout) and standard error (stderr) will be redirected to output.txt.
Thanks to Ondrej K. for pointing this out.

2 is the console standard error.
1 is the console standard output.
This is the Unix standard, and Windows also follows this POSIX convention.
E.g. when you run
perl test.pl 2>&1
the standard error is redirected to standard output, so you can see both outputs together:
perl test.pl > debug.log 2>&1
After execution, you can see all the output, including errors, in the debug.log.
perl test.pl 1>out.log 2>err.log
Then standard output goes to out.log, and standard error to err.log.
I suggest you try these out to understand them.

2>&1 is a POSIX shell construct. Here is a breakdown, token by token:
2: "Standard error" output file descriptor.
>&: Duplicate an Output File Descriptor operator (a variant of Output Redirection operator >). Given [x]>&[y], the file descriptor denoted by x is made to be a copy of the output file descriptor y.
1 "Standard output" output file descriptor.
The expression 2>&1 copies file descriptor 1 to location 2, so any output written to 2 ("standard error") in the execution environment goes to the same file originally described by 1 ("standard output").
Further explanation:
File Descriptor: "A per-process unique, non-negative integer used to identify an open file for the purpose of file access."
Standard output/error: Refer to the following note in the Redirection section of the shell documentation:
Open files are represented by decimal numbers starting with zero. The largest possible value is implementation-defined; however, all implementations shall support at least 0 to 9, inclusive, for use by the application. These numbers are called "file descriptors". The values 0, 1, and 2 have special meaning and conventional uses and are implied by certain redirection operations; they are referred to as standard input, standard output, and standard error, respectively. Programs usually take their input from standard input, and write output on standard output. Error messages are usually written on standard error. The redirection operators can be preceded by one or more digits (with no intervening characters allowed) to designate the file descriptor number.

To answer your question: It takes any error output (normally sent to stderr) and writes it to standard output (stdout).
This is helpful with, for example, more, when you need paging for all output. Some programs like to print usage information to stderr.
To help you remember
1 = standard output (where programs print normal output)
2 = standard error (where programs print errors)
"2>&1" simply sends everything that would go to stderr to stdout instead.
I also recommend reading this post on error redirecting where this subject is covered in full detail.

From a programmer's point of view, it means precisely this:
dup2(1, 2);
See the man page.
Understanding that 2>&1 is a copy also explains why ...
command >file 2>&1
... is not the same as ...
command 2>&1 >file
The first will send both streams to file, whereas the second will send errors to stdout, and ordinary output into file.

People, always remember paxdiablo's hint about the current location of the redirection target... It is important.
My personal mnemonic for the 2>&1 operator is this:
Think of & as meaning 'and' or 'add' (the character is an ampers-and, isn't it?)
So it becomes: 'redirect 2 (stderr) to where 1 (stdout) already/currently is and add both streams'.
The same mnemonic works for the other frequently used redirection too, 1>&2:
Think of & meaning and or add... (you get the idea about the ampersand, yes?)
So it becomes: 'redirect 1 (stdout) to where 2 (stderr) already/currently is and add both streams'.
And always remember: you have to read chains of redirections 'from the end', from right to left (not from left to right).

Redirecting Input
Redirection of input causes the file whose name
results from the expansion of word to be opened for reading on file
descriptor n, or the standard input (file descriptor 0) if n is
not specified.
The general format for redirecting input is:
[n]<word
Redirecting Output
Redirection of output causes the file whose
name results from the expansion of word to be opened for writing on
file descriptor n, or the standard output (file descriptor 1) if n
is not specified. If the file does not exist it is created; if it
does exist it is truncated to zero size.
The general format for redirecting output is:
[n]>word
Moving File Descriptors
The redirection operator,
[n]<&digit-
moves the file descriptor digit to file descriptor n, or the
standard input (file descriptor 0) if n is not specified.
digit is closed after being duplicated to n.
Similarly, the redirection operator
[n]>&digit-
moves the file descriptor digit to file descriptor n, or the
standard output (file descriptor 1) if n is not specified.
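A small bash sketch of the move operator (fd 3 is an arbitrary choice):

```shell
exec 3>&1                  # open fd 3 as a copy of stdout
echo "via fd 3" >&3        # reaches stdout through fd 3
exec 1>&3-                 # MOVE: fd 1 becomes what fd 3 was, and fd 3 is closed
echo "plain stdout"        # still works, through the moved descriptor
echo "again" 2>/dev/null >&3 || echo "fd 3 is now closed"
```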
Ref:
man bash
Type /^REDIRECT to locate to the redirection section, and learn more...
An online version is here: 3.6 Redirections
PS:
A lot of the time, man is the most powerful tool for learning Linux.

unix_commands 2>&1
This is used to print errors to the terminal.
When errors are produced, they are written to the "standard error" stream, which file descriptor 2 refers to.
When normal output is produced, it is written to the "standard output" stream, which file descriptor 1 refers to.
So, going back to the command: any time the program unix_commands produces an error, it writes it to the standard error stream. The 2 names that stream, and > redirects it into the standard output stream, which &1 names. At this point we're done, because anything on standard output is read and printed by the terminal.

Provided that /foo does not exist on your system and /tmp does…
$ ls -l /tmp /foo
will print the contents of /tmp and print an error message for /foo
$ ls -l /tmp /foo > /dev/null
will send the contents of /tmp to /dev/null and print an error message for /foo
$ ls -l /tmp /foo 1> /dev/null
will do exactly the same (note the 1)
$ ls -l /tmp /foo 2> /dev/null
will print the contents of /tmp and send the error message to /dev/null
$ ls -l /tmp /foo 1> /dev/null 2> /dev/null
will send both the listing as well as the error message to /dev/null
$ ls -l /tmp /foo > /dev/null 2>&1
is shorthand for the line above.

This is just like passing the error to stdout or to the terminal.
That is, assuming cmd is not a command:
$ cmd 2>filename
$ cat filename
command not found
The error was sent to the file that way. By contrast, with
$ cmd 2>&1
standard error is sent to the terminal (merged into standard output).

0 for input, 1 for stdout and 2 for stderr.
One Tip:
somecmd >1.txt 2>&1 captures both streams in 1.txt, while somecmd 2>&1 >1.txt does not do what you probably intend: stderr still goes to the terminal, and only stdout ends up in 1.txt!

You need to understand this in terms of pipe.
$ (whoami;ZZZ) 2>&1 | cat
logan
ZZZ: command not found
As you can see, both stdout and stderr of the LHS of the pipe are fed into the RHS (of the pipe).
This is the same as
$ (whoami;ZZZ) |& cat
logan
ZZZ: command not found

Note that 1>&2 cannot be used interchangeably with 2>&1.
Imagine your command depends on piping, for example:
docker logs 1b3e97c49e39 2>&1 | grep "some log"
grepping will happen across both stderr and stdout since stderr is basically merged into stdout.
However, if you try:
docker logs 1b3e97c49e39 1>&2 | grep "some log",
grepping will not really find anything at all, because a Unix pipe connects processes by connecting stdout to stdin, and in the second case stdout was redirected to stderr, in which the Unix pipe has no interest.
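The same effect can be reproduced without docker, using a hypothetical stand-in function that writes one line to each stream:

```shell
two() { echo OUT; echo ERR >&2; }   # hypothetical two-stream command

two 2>&1 | grep -c .   # both lines enter the pipe: prints 2
two 1>&2 | grep -c .   # nothing enters the pipe: prints 0 (both lines hit the terminal via stderr)
```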

Related

What is the meaning of these redirects in Bash: "8>&1" "9>&2" and "1>&9"? [duplicate]

Can someone tell me why this does not work? I'm playing around with file descriptors, but feel a little lost.
#!/bin/bash
echo "This"
echo "is" >&2
echo "a" >&3
echo "test." >&4
The first three lines run fine, but the last two error out. Why?
File descriptors 0, 1 and 2 are for stdin, stdout and stderr respectively.
File descriptors 3, 4, .. 9 are for additional files. In order to use them, you need to open them first. For example:
exec 3<> /tmp/foo #open fd 3.
echo "test" >&3
exec 3>&- #close fd 3.
For more information take a look at Advanced Bash-Scripting Guide: Chapter 20. I/O Redirection.
It's an old question but one thing needs clarification.
While the answers by Carl Norum and dogbane are correct, the assumption is to change your script to make it work.
What I'd like to point out is that you don't need to change the script:
#!/bin/bash
echo "This"
echo "is" >&2
echo "a" >&3
echo "test." >&4
It works if you invoke it differently:
./fdtest 3>&1 4>&1
which means to redirect file descriptors 3 and 4 to 1 (which is standard output).
The point is that the script is perfectly fine in wanting to write to descriptors other than just 1 and 2 (stdout and stderr) if those descriptors are provided by the parent process.
Your example is actually quite interesting because this script can write to 4 different files:
./fdtest >file1.txt 2>file2.txt 3>file3.txt 4>file4.txt
Now you have the output in 4 separate files:
$ for f in file*; do echo $f:; cat $f; done
file1.txt:
This
file2.txt:
is
file3.txt:
a
file4.txt:
test.
What is more interesting about it is that your program doesn't have to have write permissions for those files, because it doesn't actually open them.
For example, when I run sudo -s to change user to root, create a directory as root, and try to run the following command as my regular user (rsp in my case) like this:
# su rsp -c '../fdtest >file1.txt 2>file2.txt 3>file3.txt 4>file4.txt'
I get an error:
bash: file1.txt: Permission denied
But if I do the redirection outside of su:
# su rsp -c '../fdtest' >file1.txt 2>file2.txt 3>file3.txt 4>file4.txt
(note the difference in single quotes) it works and I get:
# ls -alp
total 56
drwxr-xr-x 2 root root 4096 Jun 23 15:05 ./
drwxrwxr-x 3 rsp rsp 4096 Jun 23 15:01 ../
-rw-r--r-- 1 root root 5 Jun 23 15:05 file1.txt
-rw-r--r-- 1 root root 39 Jun 23 15:05 file2.txt
-rw-r--r-- 1 root root 2 Jun 23 15:05 file3.txt
-rw-r--r-- 1 root root 6 Jun 23 15:05 file4.txt
which are 4 files owned by root in a directory owned by root - even though the script didn't have permissions to create those files.
Another example would be using chroot jail or a container and run a program inside where it wouldn't have access to those files even if it was run as root and still redirect those descriptors externally where you need, without actually giving access to the entire file system or anything else to this script.
The point is that you have discovered a very interesting and useful mechanism. You don't have to open all the files inside of your script as was suggested in other answers. Sometimes it is useful to redirect them during the script invocation.
To sum it up, this:
echo "This"
is actually equivalent to:
echo "This" >&1
and running the program as:
./program >file.txt
is the same as:
./program 1>file.txt
The number 1 is just a default number and it is stdout.
But even this program:
#!/bin/bash
echo "This"
can produce a "Bad descriptor" error. How? When run as:
./fdtest2 >&-
The output will be:
./fdtest2: line 2: echo: write error: Bad file descriptor
Adding >&- (which is the same as 1>&-) means closing the standard output. Adding 2>&- would mean closing the stderr.
You can even do a more complicated thing. Your original script:
#!/bin/bash
echo "This"
echo "is" >&2
echo "a" >&3
echo "test." >&4
when run with just:
./fdtest
prints:
This
is
./fdtest: line 4: 3: Bad file descriptor
./fdtest: line 5: 4: Bad file descriptor
But you can make descriptors 3 and 4 work, but number 1 fail by running:
./fdtest 3>&1 4>&1 1>&-
It outputs:
./fdtest: line 2: echo: write error: Bad file descriptor
is
a
test.
If you want descriptors both 1 and 2 fail, run it like this:
./fdtest 3>&1 4>&1 1>&- 2>&-
You get:
a
test.
Why? Didn't anything fail? It did but with no stderr (file descriptor number 2) you didn't see the error messages!
I think it's very useful to experiment this way to get a feeling of how the descriptors and their redirection work.
Your script is a very interesting example indeed - and I argue that it is not broken at all, you were just using it wrong! :)
It's failing because those file descriptors don't point to anything! The normal default file descriptors are the standard input 0, the standard output 1, and the standard error stream 2. Since your script isn't opening any other files, there are no other valid file descriptors. You can open a file in bash using exec. Here's a modification of your example:
#!/bin/bash
exec 3> out1 # open file 'out1' for writing, assign to fd 3
exec 4> out2 # open file 'out2' for writing, assign to fd 4
echo "This" # output to fd 1 (stdout)
echo "is" >&2 # output to fd 2 (stderr)
echo "a" >&3 # output to fd 3
echo "test." >&4 # output to fd 4
And now we'll run it:
$ ls
script
$ ./script
This
is
$ ls
out1 out2 script
$ cat out*
a
test.
$
As you can see, the extra output was sent to the requested files.
To add to the answer from rsp and to respond to the question from @MattClimbs in the comments of that answer:
You can test whether a file descriptor is open by attempting to redirect to it early, and if that fails, open the desired numbered file descriptor to something like /dev/null. I do this regularly within scripts and leverage the additional file descriptors to pass back additional details or responses beyond the return code.
script.sh
#!/bin/bash
2>/dev/null >&3 || exec 3>/dev/null
2>/dev/null >&4 || exec 4>/dev/null
echo "This"
echo "is" >&2
echo "a" >&3
echo "test." >&4
The stderr is redirected to /dev/null to discard the possible bash: #: Bad file descriptor message, and || is used to run the following exec #>/dev/null command when the previous one exits with a non-zero status. In the event that the file descriptor is already open, the two tests return a zero status and the exec ... command is not executed.
Calling the script without any redirections yields:
# ./script.sh
This
is
In this case, the writes to descriptors 3 and 4 ("a" and "test.") are shipped off to /dev/null.
Calling the script with a redirection defined yields:
# ./script.sh 3>temp.txt 4>>temp.txt
This
is
# cat temp.txt
a
test.
The first redirection 3>temp.txt overwrites the file temp.txt while 4>>temp.txt appends to the file.
In the end, you can define default files to redirect to within the script if you want something other than /dev/null or you can change the execution method of the script and redirect those extra file descriptors anywhere you want.

Redirect stdout and stderr to file permanently but keep printing them

By doing the following (1):
exec >> log_file
exec 2>&1
ls
stdout and stderr are permanently redirected for subsequent commands, but are no longer displayed on the terminal. Here, the ls output will go to log_file but will not be displayed in the terminal.
By doing the following (2):
command | tee log_file
the command output will be both in log_file and printed on the terminal, but this works only for that single command and not for subsequent commands, unlike method 1.
How can I permanently redirect stdout and stderr to a file for a given terminal, as in method 1, while also keeping them printed on the terminal, as in method 2?
I currently use this in my scripts:
exec > >(tee -a "${FILE_Log}" )
exec 2> >(tee -a "${FILE_Log}" >&2)
Basically you are telling bash to send the output (both stdout and stderr) to tee's stdin, and since tee is running in the subshell (within the parentheses), it will live as long as your script does.
Put that somewhere near the top; then any and all command output (echo, print, printf, and so on) will be logged.
This saves having to create a LOG() function and constantly pipe commands to tee.
Hope that helps!
Using tee and redirection combined:
(command 2>&1) | tee file.txt
Here's an example
[root@box ~]# (ll 2>&1) | tee tmp.txt
total 4
-rw-------. 1 root root 1007 Apr 26 2017 anaconda-ks.cfg
[root@box ~]# cat tmp.txt
total 4
-rw-------. 1 root root 1007 Apr 26 2017 anaconda-ks.cfg
The reason the command is inside parentheses is to keep the order of the prints, since stdout is buffered. The parentheses cause the ll command to be executed in a subshell, which then returns the stdout + stderr output in order.

What is /dev/null 2>&1? [duplicate]

I found this piece of code in /etc/cron.daily/apf
#!/bin/bash
/etc/apf/apf -f >> /dev/null 2>&1
/etc/apf/apf -s >> /dev/null 2>&1
It's flushing and reloading the firewall.
I don't understand the >> /dev/null 2>&1 part.
What is the purpose of having this in the cron? It's overriding my firewall rules.
Can I safely remove this cron job?
>> /dev/null redirects standard output (stdout) to /dev/null, which discards it.
(The >> seems sort of superfluous, since >> means append while > means truncate and write, and either appending to or writing to /dev/null has the same net effect. I usually just use > for that reason.)
2>&1 redirects standard error (2) to standard output (1), which then discards it as well since standard output has already been redirected.
Let's break the >> /dev/null 2>&1 statement into parts:
Part 1: >> output redirection
This is used to redirect the program output and append the output at the end of the file. More...
Part 2: /dev/null special file
This is a Pseudo-devices special file.
Command ls -l /dev/null will give you details of this file:
crw-rw-rw-. 1 root root 1, 3 Mar 20 18:37 /dev/null
Did you notice the crw? This means it is a pseudo-device file of character-special type, which provides serial access.
/dev/null accepts and discards all input; produces no output (always returns an end-of-file indication on a read). Reference: Wikipedia
Part 3: 2>&1 (Merges output from stream 2 with stream 1)
Whenever you execute a program, the operating system opens three files: standard input, standard output, and standard error. As we know, whenever a file is opened, the operating system (the kernel) returns a non-negative integer called a file descriptor. The file descriptors for these files are 0, 1, and 2, respectively.
So 2>&1 simply says redirect standard error to standard output.
& means whatever follows is a file descriptor, not a filename.
In short, by using this command you are telling your program not to shout while executing.
What is the importance of using 2>&1?
It matters if you don't want to produce any output, even when the command produces an error. To explain more clearly, consider the following example:
$ ls -l > /dev/null
For the above command, no output was printed in the terminal, but what if this command produces an error:
$ ls -l file_doesnot_exists > /dev/null
ls: cannot access file_doesnot_exists: No such file or directory
Even though I'm redirecting output to /dev/null, the message is printed in the terminal, because we are not redirecting the error output to /dev/null. So, to redirect the error output as well, we must add 2>&1:
$ ls -l file_doesnot_exists > /dev/null 2>&1
This is the way to execute a program quietly, and hide all its output.
/dev/null is a special filesystem object that discards everything written into it. Redirecting a stream into it means hiding your program's output.
The 2>&1 part means "redirect the error stream into the output stream", so when you redirect the output stream, error stream gets redirected as well. Even if your program writes to stderr now, that output would be discarded as well.
Let me explain a bit by bit.
0,1,2
0: standard input
1: standard output
2: standard error
>>
>> in command >> /dev/null 2>&1 appends the command output to /dev/null.
command >> /dev/null 2>&1
After command:
command
=> 1 output on the terminal screen
=> 2 output on the terminal screen
After redirect:
command >> /dev/null
=> 1 output to /dev/null
=> 2 output on the terminal screen
After /dev/null 2>&1
command >> /dev/null 2>&1
=> 1 output to /dev/null
=> 2 output is redirected to 1 which is now to /dev/null
/dev/null is a standard file that discards all you write to it, but reports that the write operation succeeded.
1 is standard output and 2 is standard error.
2>&1 redirects standard error to standard output. &1 indicates file descriptor (standard output), otherwise (if you use just 1) you will redirect standard error to a file named 1. [any command] >>/dev/null 2>&1 redirects all standard error to standard output, and writes all of that to /dev/null.
I use >> /dev/null 2>&1 for a silent cronjob. A cronjob will do the job, but not send a report to my email.
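For example, a crontab entry like the following (the script path is just an illustration) runs a nightly job and discards all of its output, so cron has nothing to mail:

```
# m  h  dom mon dow  command
  0  2   *   *   *   /home/me/backup.sh >> /dev/null 2>&1
```

With /dev/null as the target, >> and > behave identically; the append form is just a common habit in crontabs.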
And don't remove /dev/null itself. It's useful, especially on a cPanel server, where it serves as a destination for throw-away cronjob reports.
As described by the others, writing to /dev/null eliminates the output of a program. Usually cron sends an email for every output from the process started by a cronjob, so by writing the output to /dev/null you prevent being spammed if you have specified your address in cron.
Instead of using >/dev/null 2>&1, you could use: wget -O /dev/null -o /dev/null example.com
Here -O sends the downloaded file to /dev/null and -o logs to /dev/null instead of stderr. That way no redirection is needed at all.
Another option is: wget -q --spider mysite.com
See https://serverfault.com/questions/619542/piping-wget-output-to-dev-null-in-cron/619546#619546
I normally use this construct in connection with log files; the purpose is to catch any errors so I can evaluate/troubleshoot issues when running scripts on multiple servers simultaneously:
sh -vxe cmd > cmd.logfile 2>&1
Edit /etc/conf.apf. Set DEVEL_MODE="0". DEVEL_MODE set to 1 will add a cron job to stop apf after 5 minutes.

How do file descriptors work?

Can someone tell me why this does not work? I'm playing around with file descriptors, but feel a little lost.
#!/bin/bash
echo "This"
echo "is" >&2
echo "a" >&3
echo "test." >&4
The first three lines run fine, but the last two error out. Why?
File descriptors 0, 1 and 2 are for stdin, stdout and stderr respectively.
File descriptors 3, 4, .. 9 are for additional files. In order to use them, you need to open them first. For example:
exec 3<> /tmp/foo #open fd 3.
echo "test" >&3
exec 3>&- #close fd 3.
For more information take a look at Advanced Bash-Scripting Guide: Chapter 20. I/O Redirection.
It's an old question but one thing needs clarification.
While the answers by Carl Norum and dogbane are correct, the assumption there is that you have to change your script to make it work.
What I'd like to point out is that you don't need to change the script:
#!/bin/bash
echo "This"
echo "is" >&2
echo "a" >&3
echo "test." >&4
It works if you invoke it differently:
./fdtest 3>&1 4>&1
which means to redirect file descriptors 3 and 4 to 1 (which is standard output).
The point is that the script is perfectly fine in wanting to write to descriptors other than just 1 and 2 (stdout and stderr) if those descriptors are provided by the parent process.
Your example is actually quite interesting because this script can write to 4 different files:
./fdtest >file1.txt 2>file2.txt 3>file3.txt 4>file4.txt
Now you have the output in 4 separate files:
$ for f in file*; do echo $f:; cat $f; done
file1.txt:
This
file2.txt:
is
file3.txt:
a
file4.txt:
test.
What is more interesting about it is that your program doesn't have to have write permissions for those files, because it doesn't actually open them.
For example, when I run sudo -s to change user to root, create a directory as root, and try to run the following command as my regular user (rsp in my case) like this:
# su rsp -c '../fdtest >file1.txt 2>file2.txt 3>file3.txt 4>file4.txt'
I get an error:
bash: file1.txt: Permission denied
But if I do the redirection outside of su:
# su rsp -c '../fdtest' >file1.txt 2>file2.txt 3>file3.txt 4>file4.txt
(note the difference in single quotes) it works and I get:
# ls -alp
total 56
drwxr-xr-x 2 root root 4096 Jun 23 15:05 ./
drwxrwxr-x 3 rsp rsp 4096 Jun 23 15:01 ../
-rw-r--r-- 1 root root 5 Jun 23 15:05 file1.txt
-rw-r--r-- 1 root root 39 Jun 23 15:05 file2.txt
-rw-r--r-- 1 root root 2 Jun 23 15:05 file3.txt
-rw-r--r-- 1 root root 6 Jun 23 15:05 file4.txt
which are 4 files owned by root in a directory owned by root - even though the script didn't have permissions to create those files.
Another example would be using a chroot jail or a container: a program inside has no access to those files even when run as root, yet you can still redirect its descriptors externally wherever you need, without giving the script access to the entire file system or anything else.
The point is that you have discovered a very interesting and useful mechanism. You don't have to open all the files inside of your script as was suggested in other answers. Sometimes it is useful to redirect them during the script invocation.
To sum it up, this:
echo "This"
is actually equivalent to:
echo "This" >&1
and running the program as:
./program >file.txt
is the same as:
./program 1>file.txt
The number 1 is just a default number and it is stdout.
But even this program:
#!/bin/bash
echo "This"
can produce a "Bad descriptor" error. How? When run as:
./fdtest2 >&-
The output will be:
./fdtest2: line 2: echo: write error: Bad file descriptor
Adding >&- (which is the same as 1>&-) means closing the standard output. Adding 2>&- would mean closing the stderr.
You can even do a more complicated thing. Your original script:
#!/bin/bash
echo "This"
echo "is" >&2
echo "a" >&3
echo "test." >&4
when run with just:
./fdtest
prints:
This
is
./fdtest: line 4: 3: Bad file descriptor
./fdtest: line 5: 4: Bad file descriptor
But you can make descriptors 3 and 4 work while number 1 fails by running:
./fdtest 3>&1 4>&1 1>&-
It outputs:
./fdtest: line 2: echo: write error: Bad file descriptor
is
a
test.
If you want both descriptors 1 and 2 to fail, run it like this:
./fdtest 3>&1 4>&1 1>&- 2>&-
You get:
a
test.
Why? Didn't anything fail? It did, but with stderr (file descriptor 2) closed you didn't see the error messages!
I think it's very useful to experiment in this way to get a feeling for how the descriptors and their redirection work.
Your script is a very interesting example indeed - and I argue that it is not broken at all, you were just using it wrong! :)
It's failing because those file descriptors don't point to anything! The normal default file descriptors are the standard input 0, the standard output 1, and the standard error stream 2. Since your script isn't opening any other files, there are no other valid file descriptors. You can open a file in bash using exec. Here's a modification of your example:
#!/bin/bash
exec 3> out1 # open file 'out1' for writing, assign to fd 3
exec 4> out2 # open file 'out2' for writing, assign to fd 4
echo "This" # output to fd 1 (stdout)
echo "is" >&2 # output to fd 2 (stderr)
echo "a" >&3 # output to fd 3
echo "test." >&4 # output to fd 4
And now we'll run it:
$ ls
script
$ ./script
This
is
$ ls
out1 out2 script
$ cat out*
a
test.
$
As you can see, the extra output was sent to the requested files.
To add on to the answer from rsp and respond to the question in the comments of that answer from #MattClimbs:
You can test whether a file descriptor is open by attempting to redirect to it early, and if that fails, open the desired numbered file descriptor on something like /dev/null. I do this regularly within scripts and leverage the additional file descriptors to pass back additional details or responses beyond the return code.
script.sh
#!/bin/bash
2>/dev/null >&3 || exec 3>/dev/null
2>/dev/null >&4 || exec 4>/dev/null
echo "This"
echo "is" >&2
echo "a" >&3
echo "test." >&4
Stderr is redirected to /dev/null to discard the possible bash: #: Bad file descriptor response, and || runs the following exec #>/dev/null command when the previous one exits with a non-zero status. If a file descriptor is already open, its test returns zero status and the corresponding exec ... command is not executed.
Calling the script without any redirections yields:
# ./script.sh
This
is
In this case, the writes to descriptors 3 and 4 (a and test.) are shipped off to /dev/null.
Calling the script with a redirection defined yields:
# ./script.sh 3>temp.txt 4>>temp.txt
This
is
# cat temp.txt
a
test.
The first redirection 3>temp.txt overwrites the file temp.txt while 4>>temp.txt appends to the file.
In the end, you can define default files to redirect to within the script if you want something other than /dev/null or you can change the execution method of the script and redirect those extra file descriptors anywhere you want.
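A sketch of that variation, with an arbitrary log file name: instead of discarding unredirected descriptors, default them to a log file so the extra detail is never lost.

```shell
#!/bin/bash
# If the caller did not open fd 3 for us, open it ourselves on a default log file.
2>/dev/null >&3 || exec 3>>detail.log
echo "main output"             # fd 1: normal output
echo "extra detail" >&3        # fd 3: the caller's file, or detail.log by default
```

Run plainly, the extra line lands in detail.log; run as ./script.sh 3>elsewhere.txt, it goes wherever the caller chose.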

What does " 2>&1 " mean?

To combine stderr and stdout into the stdout stream, we append this to a command:
2>&1
e.g. to see the first few errors from compiling g++ main.cpp:
g++ main.cpp 2>&1 | head
What does 2>&1 mean, in detail?
File descriptor 1 is the standard output (stdout).
File descriptor 2 is the standard error (stderr).
At first, 2>1 may look like a good way to redirect stderr to stdout. However, it will actually be interpreted as "redirect stderr to a file named 1".
& indicates that what follows and precedes is a file descriptor, and not a filename. Thus, we use 2>&1. Consider >& to be a redirect merger operator.
To redirect stdout to file.txt:
echo test > file.txt
This is equivalent to:
echo test 1> file.txt
To redirect stderr to file.txt:
echo test 2> file.txt
So >& is the syntax to redirect a stream to another file descriptor:
0 is stdin
1 is stdout
2 is stderr
To redirect stdout to stderr:
echo test 1>&2 # equivalently, echo test >&2
To redirect stderr to stdout:
echo test 2>&1
Thus, in 2>&1:
2> redirects stderr to a (so far unspecified) target.
&1 makes that target file descriptor 1, i.e. stdout.
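A quick way to see the difference between 2>1 and 2>&1, using a scratch directory and a nonexistent path:

```shell
cd "$(mktemp -d)"               # scratch directory for the demo
ls /nonexistent 2>1 || true     # no "&": creates a file literally named "1"...
cat 1                           # ...which now holds the error message
ls /nonexistent 2>&1 | cat      # with "&": the error goes through the pipe instead
```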
Some tricks about redirection
Some syntax particularities here can have important behaviours. Here are some little samples about redirections, STDERR, STDOUT, and argument ordering.
1 - Overwriting or appending?
The symbol > means redirection.
> means send as a whole completed file, overwriting the target if it exists (see the noclobber bash feature at #3 later).
>> means send in addition to, appending to the target if it exists.
In either case, the file is created if it does not exist.
2 - The shell command line is order dependent!!
For testing this, we need a simple command which will send something on both outputs:
$ ls -ld /tmp /tnt
ls: cannot access /tnt: No such file or directory
drwxrwxrwt 118 root root 196608 Jan 7 11:49 /tmp
$ ls -ld /tmp /tnt >/dev/null
ls: cannot access /tnt: No such file or directory
$ ls -ld /tmp /tnt 2>/dev/null
drwxrwxrwt 118 root root 196608 Jan 7 11:49 /tmp
(Assuming you don't have a directory named /tnt, of course ;). Well, there we have it!
So, let's see:
$ ls -ld /tmp /tnt >/dev/null
ls: cannot access /tnt: No such file or directory
$ ls -ld /tmp /tnt >/dev/null 2>&1
$ ls -ld /tmp /tnt 2>&1 >/dev/null
ls: cannot access /tnt: No such file or directory
The last command line dumps STDERR to the console, and it seems not to be the expected behaviour... But...
If you want to make some post filtering about standard output, error output or both:
$ ls -ld /tmp /tnt | sed 's/^.*$/<-- & --->/'
ls: cannot access /tnt: No such file or directory
<-- drwxrwxrwt 118 root root 196608 Jan 7 12:02 /tmp --->
$ ls -ld /tmp /tnt 2>&1 | sed 's/^.*$/<-- & --->/'
<-- ls: cannot access /tnt: No such file or directory --->
<-- drwxrwxrwt 118 root root 196608 Jan 7 12:02 /tmp --->
$ ls -ld /tmp /tnt >/dev/null | sed 's/^.*$/<-- & --->/'
ls: cannot access /tnt: No such file or directory
$ ls -ld /tmp /tnt >/dev/null 2>&1 | sed 's/^.*$/<-- & --->/'
$ ls -ld /tmp /tnt 2>&1 >/dev/null | sed 's/^.*$/<-- & --->/'
<-- ls: cannot access /tnt: No such file or directory --->
Notice that the last command line in this paragraph is exactly the same as in the previous paragraph, where I wrote that it seems not to be the expected behaviour (so, this could even be an expected behaviour).
Well, there are little tricks about redirections for doing different operations on both outputs:
$ ( ls -ld /tmp /tnt | sed 's/^/O: /' >&9 ) 9>&2 2>&1 | sed 's/^/E: /'
O: drwxrwxrwt 118 root root 196608 Jan 7 12:13 /tmp
E: ls: cannot access /tnt: No such file or directory
Note: descriptor 9 exists inside the subshell because of the trailing ) 9>&2.
Note: with newer versions of bash (≥ 4.0) there is a new feature and nicer syntax for doing this kind of thing:
$ ls -ld /tmp /tnt 2> >(sed 's/^/E: /') > >(sed 's/^/O: /')
O: drwxrwxrwt 17 root root 28672 Nov 5 23:00 /tmp
E: ls: cannot access /tnt: No such file or directory
And finally for such a cascading output formatting:
$ ((ls -ld /tmp /tnt |sed 's/^/O: /' >&9 ) 2>&1 |sed 's/^/E: /') 9>&1| cat -n
1 O: drwxrwxrwt 118 root root 196608 Jan 7 12:29 /tmp
2 E: ls: cannot access /tnt: No such file or directory
Note: same new syntax, in both ways:
$ cat -n <(ls -ld /tmp /tnt 2> >(sed 's/^/E: /') > >(sed 's/^/O: /'))
1 O: drwxrwxrwt 17 root root 28672 Nov 5 23:00 /tmp
2 E: ls: cannot access /tnt: No such file or directory
Where STDOUT go through a specific filter, STDERR to another and finally both outputs merged go through a third command filter.
2b - Using |& instead
The syntax command |& ... can be used as an alias for command 2>&1 | .... The same rules about command-line ordering apply. More details at What is the meaning of operator |& in bash?
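A quick check, reusing the nonexistent /tnt example from above (bash ≥ 4 for the |& form):

```shell
ls -ld /tmp /tnt 2>&1 | grep -c tnt   # counts the error line: prints 1
ls -ld /tmp /tnt |& grep -c tnt       # same result with the shorthand
```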
3 - A word about noclobber option and >| syntax
That's about overwriting:
While set -o noclobber instructs bash not to overwrite any existing file, the >| syntax lets you bypass this limitation:
$ testfile=$(mktemp /tmp/testNoClobberDate-XXXXXX)
$ date > $testfile ; cat $testfile
Mon Jan 7 13:18:15 CET 2013
$ date > $testfile ; cat $testfile
Mon Jan 7 13:18:19 CET 2013
$ date > $testfile ; cat $testfile
Mon Jan 7 13:18:21 CET 2013
The file is overwritten each time, well now:
$ set -o noclobber
$ date > $testfile ; cat $testfile
bash: /tmp/testNoClobberDate-WW1xi9: cannot overwrite existing file
Mon Jan 7 13:18:21 CET 2013
$ date > $testfile ; cat $testfile
bash: /tmp/testNoClobberDate-WW1xi9: cannot overwrite existing file
Mon Jan 7 13:18:21 CET 2013
Pass through with >|:
$ date >| $testfile ; cat $testfile
Mon Jan 7 13:18:58 CET 2013
$ date >| $testfile ; cat $testfile
Mon Jan 7 13:19:01 CET 2013
Unsetting this option and/or checking whether it is set:
$ set -o | grep noclobber
noclobber on
$ set +o noclobber
$ set -o | grep noclobber
noclobber off
$ date > $testfile ; cat $testfile
Mon Jan 7 13:24:27 CET 2013
$ rm $testfile
4 - Last trick and more...
For redirecting both outputs from a given command, we have seen that the right syntax could be:
$ ls -ld /tmp /tnt >/dev/null 2>&1
For this special case, there is a shortcut syntax: &> ... or >&
$ ls -ld /tmp /tnt &>/dev/null
$ ls -ld /tmp /tnt >&/dev/null
Note: if 2>&1 exists, 1>&2 is correct syntax too:
$ ls -ld /tmp /tnt 2>/dev/null 1>&2
4b- Now, I will let you think about:
$ ls -ld /tmp /tnt 2>&1 1>&2 | sed -e s/^/++/
++/bin/ls: cannot access /tnt: No such file or directory
++drwxrwxrwt 193 root root 196608 Feb 9 11:08 /tmp/
$ ls -ld /tmp /tnt 1>&2 2>&1 | sed -e s/^/++/
/bin/ls: cannot access /tnt: No such file or directory
drwxrwxrwt 193 root root 196608 Feb 9 11:08 /tmp/
4c- If you're interested in more information
You could read the fine manual by hitting:
man -Len -Pless\ +/^REDIRECTION bash
in a bash console ;-)
I found this brilliant post on redirection: All about redirections
Redirect both standard output and standard error to a file
$ command &>file
This one-liner uses the &> operator to redirect both output streams - stdout and stderr - from command to file. This is Bash's shortcut for quickly redirecting both streams to the same destination.
After Bash has redirected both streams, the entries for stdout and stderr in the file descriptor table both point to file, so anything written to either stream gets written to file.
There are several ways to redirect both streams to the same destination. You can redirect each stream one after another:
$ command >file 2>&1
This is a much more common way to redirect both streams to a file. First stdout is redirected to file, and then stderr is duplicated to be the same as stdout. So both streams end up pointing to file.
When Bash sees several redirections it processes them from left to right. Let's go through the steps and see how that happens. Before running any commands, stdout, stderr, and stdin all point at the terminal. Bash processes the first redirection, >file; we've seen this before, and it makes stdout point to file. Next Bash sees the second redirection, 2>&1. We haven't seen this one before: it duplicates file descriptor 2 to be a copy of file descriptor 1. Both streams have now been redirected to file.
However be careful here! Writing
command >file 2>&1
is not the same as writing:
$ command 2>&1 >file
The order of redirects matters in Bash! This command redirects only the standard output to the file; stderr still prints to the terminal. To understand why, go through the steps again. Before running the command, both stdout and stderr point at the terminal. Bash processes redirections left to right: it first sees 2>&1, so it duplicates stderr to stdout's current target, the terminal. Then it sees the second redirect, >file, and redirects stdout to file. Do you see what happens here? Stdout now points to file, but stderr still points to the terminal! Everything written to stderr still gets printed to the screen! So be very, very careful with the order of redirects!
Also note that in Bash, writing
$ command &>file
is exactly the same as:
$ command >&file
The numbers refer to the file descriptors (fd).
Zero is stdin
One is stdout
Two is stderr
2>&1 redirects fd 2 to 1.
This works for any number of file descriptors if the program uses them.
You can look at /usr/include/unistd.h if you forget them:
/* Standard file descriptors. */
#define STDIN_FILENO 0 /* Standard input. */
#define STDOUT_FILENO 1 /* Standard output. */
#define STDERR_FILENO 2 /* Standard error output. */
That said I have written C tools that use non-standard file descriptors for custom logging so you don't see it unless you redirect it to a file or something.
That construct sends the standard error stream (stderr) to the current location of standard output (stdout) - this "current location" detail appears to have been neglected by the other answers.
You can redirect any output handle to another by using this method but it's most often used to channel stdout and stderr streams into a single stream for processing.
Some examples are:
# Look for ERROR string in both stdout and stderr.
foo 2>&1 | grep ERROR
# Run the less pager without stderr screwing up the output.
foo 2>&1 | less
# Send stdout/err to file (with append) and terminal.
foo 2>&1 |tee /dev/tty >>outfile
# Send stderr to one file and stdout to another.
foo >outfile1 2>&1 >outfile2
Note that that last one will not direct stderr to outfile2 - it redirects it to what stdout was when the argument was encountered (outfile1) and then redirects stdout to outfile2.
This allows some pretty sophisticated trickery.
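One classic piece of such trickery is swapping stdout and stderr, using a third descriptor as a temporary. A sketch, with a stand-in command foo:

```shell
foo() { echo out; echo err >&2; }   # stand-in: writes one line to each stream

# Save stdout in fd 3, point stdout at stderr, point stderr at the
# saved stdout, then close the temporary descriptor:
foo 3>&1 1>&2 2>&3 3>&-
# Now "err" travels on stdout (e.g. into a pipe) and "out" on stderr.
```

This lets you pipe or filter stderr alone while stdout still reaches the terminal.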
If you are a beginner, this explanation may help.
Update:
In Linux or Unix systems there are two places programs send output to: standard output (stdout) and standard error (stderr). You can redirect these outputs to any file.
For example, if you do ls -a > output.txt, nothing is printed in the console; all output (stdout) is redirected to the output file.
And if you try to print the contents of a file that does not exist, the output will be an error. For example, printing test.txt, which is not present in the current directory:
cat test.txt > error.txt
Output will be
cat: test.txt: No such file or directory
But the error.txt file will be empty, because we are redirecting stdout to the file, not stderr.
So we need a file descriptor (a file descriptor is nothing more than a non-negative integer that represents an open file; you can say a descriptor is the unique id of an open file) to tell the shell which type of output we are sending to the file. In Unix/Linux systems, 1 is for stdout and 2 is for stderr.
So now if you do ls -a 1> output.txt, it means you are sending standard output (stdout) to output.txt.
And if you do cat test.txt 2> error.txt, it means you are sending standard error (stderr) to error.txt.
&1 is used to reference the value of file descriptor 1 (stdout).
Now to the point: 2>&1 means "redirect stderr to the same place we are redirecting stdout".
Now you can do this:
cat maybefile.txt > output.txt 2>&1
Both standard output (stdout) and standard error (stderr) will be redirected to output.txt.
Thanks to Ondrej K. for pointing this out.
2 is standard error.
1 is standard output.
This is standard in Unix, and Windows follows this POSIX convention too.
E.g. when you run
perl test.pl 2>&1
the standard error is redirected to standard output, so you can see both outputs together:
perl test.pl > debug.log 2>&1
After execution, you can see all the output, including errors, in the debug.log.
perl test.pl 1>out.log 2>err.log
Then standard output goes to out.log, and standard error to err.log.
I suggest you experiment with these to understand them.
2>&1 is a POSIX shell construct. Here is a breakdown, token by token:
2: "Standard error" output file descriptor.
>&: Duplicate an Output File Descriptor operator (a variant of Output Redirection operator >). Given [x]>&[y], the file descriptor denoted by x is made to be a copy of the output file descriptor y.
1 "Standard output" output file descriptor.
The expression 2>&1 copies file descriptor 1 to location 2, so any output written to 2 ("standard error") in the execution environment goes to the same file originally described by 1 ("standard output").
Further explanation:
File Descriptor: "A per-process unique, non-negative integer used to identify an open file for the purpose of file access."
Standard output/error: Refer to the following note in the Redirection section of the shell documentation:
Open files are represented by decimal numbers starting with zero. The largest possible value is implementation-defined; however, all implementations shall support at least 0 to 9, inclusive, for use by the application. These numbers are called "file descriptors". The values 0, 1, and 2 have special meaning and conventional uses and are implied by certain redirection operations; they are referred to as standard input, standard output, and standard error, respectively. Programs usually take their input from standard input, and write output on standard output. Error messages are usually written on standard error. The redirection operators can be preceded by one or more digits (with no intervening characters allowed) to designate the file descriptor number.
To answer your question: It takes any error output (normally sent to stderr) and writes it to standard output (stdout).
This is helpful with, for example, more, when you need paging for all output. Some programs prefer to print usage information to stderr.
To help you remember
1 = standard output (where programs print normal output)
2 = standard error (where programs print errors)
"2>&1" simply points everything sent to stderr, to stdout instead.
I also recommend reading this post on error redirecting where this subject is covered in full detail.
From a programmer's point of view, it means precisely this:
dup2(1, 2);
See the man page.
Understanding that 2>&1 is a copy also explains why ...
command >file 2>&1
... is not the same as ...
command 2>&1 >file
The first will send both streams to file, whereas the second will send errors to stdout, and ordinary output into file.
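The difference is easy to demonstrate with a stand-in command that writes one line to each stream (file names are arbitrary):

```shell
cd "$(mktemp -d)"                    # scratch directory for the demo
both() { echo out; echo err >&2; }   # one line to stdout, one to stderr

both > f1 2>&1   # f1 gets both "out" and "err"
both 2>&1 > f2   # f2 gets only "out"; "err" still goes to the terminal
```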
unix_commands 2>&1
This merges errors into the normal output stream.
When errors are produced, a program writes them to the standard error stream, identified by file descriptor 2. When normal output is produced, it is written to the standard output stream, identified by file descriptor 1. (These numbers are file descriptors, handles to open streams, not memory addresses.)
So, going back to the command: any time unix_commands produces an error, it writes it to the error stream. The redirection 2>&1 points descriptor 2 at the same destination as descriptor 1, so the errors end up wherever normal output goes, and the terminal reads and prints both.
People, always remember paxdiablo's hint about the current location of the redirection target... It is important.
My personal mnemonic for the 2>&1 operator is this:
Think of & as meaning 'and' or 'add' (the character is an ampers-and, isn't it?)
So it becomes: 'redirect 2 (stderr) to where 1 (stdout) already/currently is and add both streams'.
The same mnemonic works for the other frequently used redirection too, 1>&2:
Think of & meaning and or add... (you get the idea about the ampersand, yes?)
So it becomes: 'redirect 1 (stdout) to where 2 (stderr) already/currently is and add both streams'.
And always remember: the shell processes a chain of redirections from left to right, and &1 or &2 refers to where that descriptor points at that moment in the chain, not where it may point later.
Redirecting Input
Redirection of input causes the file whose name
results from the expansion of word to be opened for reading on file
descriptor n, or the standard input (file descriptor 0) if n is
not specified.
The general format for redirecting input is:
[n]<word
Redirecting Output
Redirection of output causes the file whose
name results from the expansion of word to be opened for writing on
file descriptor n, or the standard output (file descriptor 1) if n
is not specified. If the file does not exist it is created; if it
does exist it is truncated to zero size.
The general format for redirecting output is:
[n]>word
Moving File Descriptors
The redirection operator,
[n]<&digit-
moves the file descriptor digit to file descriptor n, or the
standard input (file descriptor 0) if n is not specified.
digit is closed after being duplicated to n.
Similarly, the redirection operator
[n]>&digit-
moves the file descriptor digit to file descriptor n, or the
standard output (file descriptor 1) if n is not specified.
Ref:
man bash
Type /^REDIRECT to jump to the redirection section, and learn more...
An online version is here: 3.6 Redirections
PS:
Much of the time, man is the most powerful tool for learning Linux.
Provided that /foo does not exist on your system and /tmp does…
$ ls -l /tmp /foo
will print the contents of /tmp and print an error message for /foo
$ ls -l /tmp /foo > /dev/null
will send the contents of /tmp to /dev/null and print an error message for /foo
$ ls -l /tmp /foo 1> /dev/null
will do exactly the same (note the 1)
$ ls -l /tmp /foo 2> /dev/null
will print the contents of /tmp and send the error message to /dev/null
$ ls -l /tmp /foo 1> /dev/null 2> /dev/null
will send both the listing as well as the error message to /dev/null
$ ls -l /tmp /foo > /dev/null 2>&1
is shorthand for the previous command.
This is just like sending the error to stdout or to the terminal.
That is, suppose cmd is not a valid command:
$ cmd 2> filename
$ cat filename
cmd: command not found
Here the error is sent to the file. With:
$ cmd 2>&1
the standard error is sent to the terminal (wherever stdout is currently going).
0 for input, 1 for stdout and 2 for stderr.
One tip:
somecmd >1.txt 2>&1 sends both streams to 1.txt, but somecmd 2>&1 >1.txt does not: there 2>&1 is processed first, while stdout still points at the terminal, so stderr stays on the terminal and only stdout ends up in the file.
You need to understand this in terms of pipe.
$ (whoami;ZZZ) 2>&1 | cat
logan
ZZZ: command not found
As you can see, both stdout and stderr of the LHS of the pipe are fed into the RHS.
This is the same as
$ (whoami;ZZZ) |& cat
logan
ZZZ: command not found
Note that 1>&2 cannot be used interchangeably with 2>&1.
Imagine your command depends on piping, for example:
docker logs 1b3e97c49e39 2>&1 | grep "some log"
grepping will happen across both stderr and stdout since stderr is basically merged into stdout.
However, if you try:
docker logs 1b3e97c49e39 1>&2 | grep "some log",
grep will have nothing to search, because a Unix pipe connects the stdout of one process to the stdin of the next, and in the second case stdout was redirected to stderr, which the pipe does not carry.
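If what you actually want is to grep only the stderr, a common idiom is to discard stdout after merging. Shown here with a stand-in command, since the docker id above is just an example:

```shell
noisy() { echo normal; echo oops >&2; }   # stand-in command writing to both streams

# 2>&1 happens first, while fd 1 still points at the pipe, so stderr
# joins the pipe; >/dev/null then discards stdout alone.
noisy 2>&1 >/dev/null | grep oops         # prints: oops
```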

Resources