Why is there a difference between >>& and &>>, but NOT >& and &>? - bash

From the bash man pages, under the section "Redirection":
Redirecting Standard Output and Standard Error
This construct allows both the standard output (file descriptor 1) and the standard error output (file descriptor 2) to be redirected to the file whose name is the expansion of word.
There are two formats for redirecting standard output and standard error:
&>word
and
>&word
Of the two forms, the first is preferred.
That made me wonder: why is the first preferred? I see that &>> works but >>& does not, so the preference makes sense. So why does >>& not work? Is it ambiguous?
Here's what I'm running
$ bash --version
GNU bash, version 4.2.46(1)-release (x86_64-redhat-linux-gnu)

&>dest is unambiguous, whereas >&dest looks like fdup() syntax, and is in fact construed as fdup syntax when the filename is numeric.
Bash 4.1 and later allow the destination of an fdup operation to be parameterized: not just 2>&1, but also 2>&$stderr_fd. Putting the & after the > puts us into the namespace of syntax used for fdup() operations; putting it before the > is unambiguous.
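For instance, a minimal sketch of that parameterized fdup form (err.log and some_command are placeholder names, not anything from the question):
exec {stderr_fd}>err.log        # bash allocates a free fd (10 or above) and stores its number in $stderr_fd
some_command 2>&$stderr_fd      # fdup: stderr becomes a copy of that fd for this command only
exec {stderr_fd}>&-             # close the allocated fd when done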
Example: I want to redirect the stdout and stderr of the command cmd into my file. If my file's name is 0, >& causes issues.
cmd >& 0 becomes cmd 1>&0. That is, redirect stdout to fd#0.
cmd &> 0 becomes cmd > 0 2>&1. That is, redirect stdout to the file named 0, then redirect stderr to fd#1.
[n]>&word is POSIX-standardized; &>word is not found anywhere in the POSIX standard.
The only forms for duplicating existing file descriptors defined by the POSIX standard are [n]<&word and [n]>&word. In the grammar, these are given the token names GREATAND and LESSAND.
There are no POSIX-defined tokens for &> or &< – they are merely syntactic sugar for the commonly used operation of "I don't want to see anything on my screen", or "send stdout and stderr to a file".
This is useful, because cmd 2>&1 > myFile (surprisingly to those new to bash) doesn't work as intended for the "clean screen" goal, whereas cmd > myFile 2>&1 does.
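A quick sketch of why the order matters (cmd and myFile as above; redirections are processed left to right):
cmd 2>&1 > myFile    # stderr is duplicated from the current stdout (still the terminal), then stdout moves to myFile
cmd > myFile 2>&1    # stdout moves to myFile first, then stderr duplicates it, so both streams land in myFile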
So, why does &>> work when >>& does not? Because whoever wrote &>> didn't feel the need to create an ambiguity, and consciously chose not to allow >>&.
fdup() synonyms with >> would not add value.
To explain a bit more about why [n]>&word synonyms are pointless -- keep in mind that the difference between >bar and >>bar is the presence of the O_APPEND flag and the absence of O_TRUNC in the flags argument to open(). However, when you're providing a file descriptor number -- and thus performing an fdup() from an old FD number to a new one -- the file is already open; the flags thus cannot be changed. Even the direction -- > vs < -- is purely informational to the reader.
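A sketch of what that means in practice (cmd and log.txt are placeholders): a duplicated descriptor shares the original open file description, O_APPEND included, so there is nothing for a hypothetical >>& to add.
exec 3>>log.txt     # fd 3 is opened once, with O_APPEND
cmd >&3 2>&3        # stdout and stderr both become copies of fd 3, so both append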

Related

Can the pipe operator be used with the stdout redirection operator?

We know that:
The pipe operator | is used to take the standard output of left side command as the standard input for the right side process.
The stdout redirection operator > is used to redirect the stdout to a file
And the question is: why can't ls -la | > file redirect the output of ls -la to file? (I tried, and the file is empty.)
Is it because the stdout redirection operator > is not a process?
Is it because the stdout redirection operator > is not a process?
In short, yes.
In a bit more detail, stdout, stderr and stdin are special file descriptors (FDs), but these remarks hold for every FD: each FD refers to exactly one resource. It can be a file, a directory, a pipe, a device (such as a terminal, a hard drive, etc.) and more. One FD, one resource. It is not possible for stdout to output to both a pipe and a file at the same time. What tee does is take stdin (typically from a pipe, but not necessarily), open a new FD associated with the filename provided as its argument, and write whatever it gets from stdin to both stdout and the new FD. This copying of content from one FD to two is not available from bash directly.
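To connect this back to the question's command (ls -la and file are taken from the question):
ls -la > file        # redirect ls's stdout straight to the file; no pipe is needed
ls -la | tee file    # or, with tee: copy the piped data both to the file and to tee's own stdout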
EDIT: I tried answering the question as originally posted. As it stands now, DevSolar's comment is actually more on point: why does > file, without a command, make an empty file in bash?
The answer is in the Shell Command Language specification, under 2.9.1 Simple Commands. In the first step, the redirection is detected. In the second step, no fields remain, so there is no command to be executed. In the third step, redirections are performed in a subshell environment; however, since there is no command, standard input is simply discarded, and the empty standard output of the non-existent command is used to (try to) create a new file.
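A quick way to see that behaviour at an interactive prompt (newfile is just an illustrative name):
> newfile        # no command word, but the redirection is still performed
ls -l newfile    # shows a zero-byte file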

How to temporarily override a named file descriptor in bash?

I have a quite unusual problem in one of my bash scripts. I want to do something like this (in fact I want to create/remove LVs; this is a minimal working example):
#! /bin/bash
# ...
exec {flock}>/tmp/lock
# Do something with fd ${flock} e.g.
flock -n ${flock} || exit 1
# ...
lvs ${flock}>&-
# ...
The problem is the ${flock}>&- part. Why do I want this? The LVM tools complain with a warning about any open file descriptors other than stdin, stdout and stderr. So when I drop this small redirecting part, the script works but prints a warning message.
Thus I want the fd $flock to be closed only for the LVM command. I do not want to close the file for good, only for this single command invocation.
In my case $flock is set to 10 (the first free fd greater than or equal to 10, see man bash). However, I do not get the corresponding fd remapped as sketched above. Instead, the 10 is treated as an argument to the lvs command, and the >&- then applies to stdout. Of course this is not what I intend.
If I hardcode 10>&- it works, but that is very bad style. For now I have switched to hardcoding the fd throughout the whole file. Nevertheless, I would like to know how it would be done correctly.
Don't use the dollar sign
Your example code has:
lvs ${flock}>&-
The correct syntax is:
lvs {flock}>&-
From the redirection section of the Bash manual:
Each redirection that may be preceded by a file descriptor number may instead be preceded by a word of the form {varname}. In this case, for each redirection operator except >&- and <&-, the shell will allocate a file descriptor greater than 10 and assign it to {varname}. If >&- or <&- is preceded by {varname}, the value of varname defines the file descriptor to close. If {varname} is supplied, the redirection persists beyond the scope of the command, allowing the shell programmer to manage the file descriptor himself.
(emphasis mine)
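Putting that together with the question's MWE (a sketch; lvs and /tmp/lock come from the question):
#! /bin/bash
exec {flock}>/tmp/lock     # bash picks a free fd >= 10 and stores its number in $flock
flock -n ${flock} || exit 1
lvs {flock}>&-             # no dollar sign: closes the fd stored in $flock for this command only
exec {flock}>&-            # later, close it for the rest of the script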

What's the difference between the redirections "1>/dev/null 2>&1" and "2>&1 1>/dev/null"?

What's the difference between the redirections
1>/dev/null 2>&1
and
2>&1 1>/dev/null
It seems the first one displays output to stdout, but not the second one.
Can somebody explain?
Thanks
Expanding on Lokendra26's answer a bit: /dev/null is a special file on your system, a device for discarding anything written to it. It's common to send output there if you don't want to see it. "File" in this case, and in Unix terminology in general, can be either a normal disk file or a device such as the null device or your terminal.
The "1" and "2" are file descriptors, designators for places to send output. Programs use FD 1, "standard output", as the target for ordinary output, and FD 2, "standard error", for error output. These file descriptors can point to different files at different times. Normally they both point at your terminal, so you see output from your programs written there.
The & operator is more than just for disambiguation. It actually means "look up whatever this FD points to at this point".
It is important to understand these details in order to understand the difference between the two redirections you are asking about.
1>/dev/null 2>&1 this is actually two statements, processed in sequence. First, point "standard output" at the null device (thus discarding anything written to it). Second, point "standard error" at whatever "standard output" is pointing to, in this case /dev/null. The end result is that output from both file descriptors will be discarded.
2>&1 1>/dev/null is likewise two statements. First, point "standard error" at whatever "standard output" is pointing to. Normally this will be your terminal, as I wrote above. Second, point "standard output" at /dev/null. End result - only "standard output" is discarded, "standard error" will still print to your terminal.
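A quick way to see the difference at an interactive prompt (a minimal sketch that uses echo to write to both streams):
{ echo out; echo err >&2; } 1>/dev/null 2>&1    # prints nothing: both streams end up in /dev/null
{ echo out; echo err >&2; } 2>&1 1>/dev/null    # prints "err": stderr was pointed at the terminal before stdout moved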
OK. This is a really old post. See a related post here. I wanted to answer the comment in the other post, "If & indicates a file descriptor then why is there no & before 2?" by user6708151, but my reputation is too low. I'll explain the way I interpret the shell redirection syntax; no one has mentioned it yet, to my knowledge.
So, we have the syntax [x][operator][y], where x and y are file descriptors, and operator is a redirection operator. For example, [command] 1>/dev/stderr sends the regular output of [command] to the standard error stream. Here, x is a file descriptor variable, and y its new value. An example will make this clear.
[command] 2>&1 means that the shell copies the value of variable 1 into variable 2, in effect redirecting error output to wherever stdout currently points. The symbol & is necessary here because it signifies that 1 is a variable, not a file name.
To answer the original question, refer to the tables below. Note that the shell parses redirections from left to right, and you also need to read the tables from left to right.
1>/dev/null 2>&1

Variable   Default value   After 1>/dev/null   After 1>/dev/null 2>&1
stdin      0               0                   0
stdout     1               /dev/null           /dev/null
stderr     2               2                   /dev/null

2>&1 1>/dev/null

Variable   Default value   After 2>&1   After 2>&1 1>/dev/null
stdin      0               0            0
stdout     1               1            /dev/null
stderr     2               1            1
The last column is the end result for each redirection.
In Unix, ">" stands for redirecting output to a file or elsewhere.
"1>" - stands for output from the stdout stream
"2>" - stands for output from the stderr (error) stream (errors go into this stream)
So, from your question:
"1>/dev/null": tells the system to direct standard output to the null device file kept at /dev.
"2>&1": tells the system to redirect the output of the stderr stream to the place where the output of the stdout stream is going. So in this case you get both the stdout and the stderr output written into a single file.
"2>&1 1>/dev/null", on the other hand, doesn't make sense to me, as it would be equivalent to just "2>&1".
PS: If you are confused by the &1 part, it is used to resolve the ambiguity which would otherwise occur when 1 is the name of a file.
Hope that makes sense to you.

Capture stdout for a long time in bash

I'm using a script which calls another, like this:
# stuff...
OUT="$(./scriptB)"
# do stuff with the variable OUT
Basically, the scriptB script displays text over time, i.e. it displays a line, 2s later another, 3s later another and so on.
With the snippet I use, I only get the first output of my command; I miss a lot.
How can I get the whole output by capturing stdout for a given time? Something like:
begin capture
./scriptB
stop capture
I don't mind if the output is not shown on screen.
Thanks.
If I understand your question, then I believe you can use the tee command, like
./scriptB | tee $HOME/scriptB.log
It will display the stdout from scriptB and write stdout to the log file at the same time.
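Since the original goal was to keep the output in the variable OUT as well, the same idea can be combined with a command substitution (a sketch using the paths already mentioned):
OUT="$(./scriptB | tee "$HOME/scriptB.log")"    # OUT captures scriptB's stdout; the log file gets a live copy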
Some of your output seems to be coming on the STDERR stream. So we have to redirect that as needed. As in my comment, you can do
{ ./scriptB ; } > /tmp/scriptB.log 2>&1
Which can almost certainly be reduced to
./scriptB > /tmp/scriptB.log 2>&1
And in newer versions of bash, can further be reduced to
./scriptB >& /tmp/scriptB.log
AND finally, as your original question involved storing the output in a variable, you can capture both streams into the variable with
OUT=$(./scriptB 2>&1)
The notation 2>&1 says, take the file descriptor 2 of this process (STDERR) and tie it (&) into the file descriptor 1 of the process (STDOUT).
The alternate notation provided (... >& file) is shorthand for > file 2>&1.
Personally, I'd recommend using the 2>&1 syntax, as this is understood by all Bourne derived shells (not [t]csh).
As an aside, all processes by default have 3 file descriptors created when the process is created, 0=STDIN, 1=STDOUT, 2=STDERR. Manipulation of those streams is usually as simple as illustrated here. More advanced (rare) manipulations are possible. Post a separate question if you need to know more.
IHTH

What does >& mean?

I was a little confused by this expression:
gcc -c -g program.c >& compiler.txt
I know &>filename will redirect both stdout and stderr to file filename. But in this case the ampersand is after the greater than sign. It looks like it's of the form M>&N, where M and N are file descriptors.
In the snippet above, do M=1 and N='compiler.txt'? How exactly is this different from:
gcc -c -g program.c > compiler.txt (ampersand removed)
My understanding is that each open file is associated with a file descriptor greater than 2. Is this correct?
If so, is a file name interchangeable with its file descriptor as the target of redirection?
This is the same as &>. From the bash manpage:
Redirecting Standard Output and Standard Error
This construct allows both the standard output (file descriptor 1) and
the standard error output (file descriptor 2) to be redirected to the
file whose name is the expansion of word.
There are two formats for redirecting standard output and standard
error:
&>word
and
>&word
Of the two forms, the first is preferred. This is semantically equivalent to
>word 2>&1
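Applied to the command from the question (program.c and compiler.txt as given there), that means:
gcc -c -g program.c >& compiler.txt    # both the compiler's stdout and its stderr go into compiler.txt
gcc -c -g program.c > compiler.txt     # only stdout goes into compiler.txt; warnings and errors still reach the terminal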
&> vs >&: the preferred version is &> (clobber)
Regarding:
&>
>&
both will clobber the file - truncate the file to 0 bytes before writing to it, just like > file would do in the stdout-only case.
However, the bash manual Redirections section adds that:
Of the two forms, the first is preferred. This is semantically equivalent to
>word 2>&1
When using the second form, word may not expand to a number or -. If it does, other redirection operators apply (see Duplicating File Descriptors below) for compatibility reasons.
(Note: in zsh both are equivalent.)
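A sketch of the pitfall that warning describes (cmd and the variable out are placeholders):
out=2
cmd >&$out    # the word expands to a number, so this is treated as fdup: stdout is duplicated onto fd 2
cmd &>$out    # unambiguous: creates (or truncates) a file literally named 2 and sends both streams to it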
It's very good practice to train your fingers to use the first (&>) form, because:
Use &>>, as >>& is not supported by bash (append)
There's only one append form:
The format for appending standard output and standard error is:
&>>word
This is semantically equivalent to
>>word 2>&1
(see Duplicating File Descriptors below).
Note:
The clobber usage of &> over >& in the section above is again recommended given that there is only one way for appending in bash.
zsh allows both &>> and >>& forms.
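A minimal sketch of the append form next to its portable long-hand (cmd and all.log are placeholders):
cmd &>> all.log         # bash: append both stdout and stderr to all.log
cmd >> all.log 2>&1     # equivalent long form, and the one that also works in plain POSIX sh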
Slightly off-topic, but this is why I stick to the long form like >/dev/null 2>&1 all the time.
That is confusing enough for people who do it backwards in things like cron without adding >&, &>, >>&, &>>, ... on top of it.
I frequently need to add notes in crontab -l reminding people that "2>&1 > /dev/null" is not a best practice.
As long as you remember that the "final destination" is given first, it should be the same syntax on any Unix or Unix-like system using any Bourne-like shell, appending or otherwise.
Plus, since &> is effectively a Bash-ism (it may have originated in csh, I don't recall, but it is not POSIX either way), it is not always supported on locked-down commercial UNIX systems.

Resources