What does >& mean? - bash

I was a little confused by this expression:
gcc -c -g program.c >& compiler.txt
I know &>filename will redirect both stdout and stderr to file filename. But in this case the ampersand is after the greater than sign. It looks like it's of the form M>&N, where M and N are file descriptors.
In the snippet above, do M=1 and N='compiler.txt'? How exactly is this different from:
gcc -c -g program.c > compiler.txt (ampersand removed)
My understanding is that each open file is associated with a file descriptor greater than 2. Is this correct?
If so, is a file name interchangeable with its file descriptor as the target of redirection?

This is the same as &>. From the bash manpage:
Redirecting Standard Output and Standard Error
This construct allows both the standard output (file descriptor 1) and
the standard error output (file descriptor 2) to be redirected to the
file whose name is the expansion of word.
There are two formats for redirecting standard output and standard
error:
&>word
and
>&word
Of the two forms, the first is preferred. This is semantically equivalent to
>word 2>&1
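Applied to the question's command, the two spellings below do the same thing (a sketch; both streams end up in compiler.txt), while the plain > form behaves quite differently for gcc, whose diagnostics go to stderr:
gcc -c -g program.c >& compiler.txt       # stdout and stderr both go to compiler.txt
gcc -c -g program.c > compiler.txt 2>&1   # same effect, spelled out portably
gcc -c -g program.c > compiler.txt        # only stdout is redirected; warnings and errors
                                          # still appear on the terminal, and compiler.txt
                                          # is usually empty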

&> vs >&: the preferred version is &> (clobber)
Regarding:
&>
>&
both will clobber the file - truncate the file to 0 bytes before writing to it, just like > file would do in the stdout-only case.
However, the bash manual Redirections section adds that:
Of the two forms, the first is preferred. This is semantically equivalent to
>word 2>&1
When using the second form, word may not expand to a number or -. If it does, other redirection operators apply (see Duplicating File Descriptors below) for compatibility reasons.
(Note: in zsh both are equivalent.)
It's very good practice to train your fingers to use the first (&>) form, because:
Use &>>, since >>& is not supported by bash (append)
There's only one append form:
The format for appending standard output and standard error is:
&>>word
This is semantically equivalent to
>>word 2>&1
(see Duplicating File Descriptors below).
Note:
Using &> rather than >& for clobbering, as recommended in the section above, is again the better habit, given that &>> is the only append form bash supports.
zsh allows both &>> and >>& forms.
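For example (a sketch; make just stands in for any command that writes to both streams):
make &>> build.log        # append stdout and stderr to build.log
make >> build.log 2>&1    # portable equivalent that works in any POSIX shell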

Slightly off-topic, but this is why I stick to the long form like >/dev/null 2>&1 all the time.
That is confusing enough for people who do it backwards in things like cron without adding >&, &>, >>&, &>>, ... on top of it.
I frequently need to add notes in crontab -l reminding people that "2>&1 > /dev/null" is not a best practice.
As long as you remember that any "final destination" is given first, then it should be the same syntax on any Unix or Unix-like system using any Bourne-like shell, appending or otherwise.
Plus, since &> is effectively a Bash-ism (the >& spelling comes from csh; neither form is POSIX), it is not always supported on locked-down commercial UNIX systems.
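A quick illustration of why the order matters (ls here is just a stand-in for a command that writes to stderr):
ls /nonexistent > /dev/null 2>&1   # silent: stdout goes to /dev/null first, then stderr follows it
ls /nonexistent 2>&1 > /dev/null   # the error still reaches the terminal: stderr was duplicated
                                   # onto the old stdout before stdout was redirected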

Related

Why is there a difference between >>& and &>>, but NOT >& and &>?

From the bash man pages, under the section "Redirection":
Redirecting Standard Output and Standard Error
This construct allows both the standard output (file descriptor 1) and the standard error output (file descriptor 2) to be redirected to the file whose name is the expansion of word.
There are two formats for redirecting standard output and standard error:
&>word
and
>&word
Of the two forms, the first is preferred.
That made me wonder, why is the first preferred? I see that &>> works, but >>& does not, so the preference makes sense. So why does >>& not work? Is it ambiguous?
Here's what I'm running
$ bash --version
GNU bash, version 4.2.46(1)-release (x86_64-redhat-linux-gnu)
&>dest is unambiguous, whereas >&dest looks like fdup() syntax, and can actually be construed as fdup syntax when you have a numeric filename.
Bash 4.1 and later allow the destination of an fdup operation to be parameterized -- thus, not just 2>&1, but also 2>&$stderr_fd. Putting the & after the > puts us into the namespace of syntax used for fdup() operations; keeping it before is unambiguous.
Example: I want to redirect the stdout and stderr of the command cmd into my file. If my file's name is 0, >& causes issues.
cmd >& 0 becomes cmd 1>&0. That is, redirect stdout to fd#0.
cmd &> 0 becomes cmd > 0 2>&1. That is, redirect stdout to the file named 0, then redirect stderr to fd#1.
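A short sketch of both points (some_cmd, trace.log and log_fd are made-up names for illustration):
exec {log_fd}>trace.log      # bash 4.1+: open trace.log and store the new fd number in $log_fd
some_cmd 2>&$log_fd          # dup stderr onto that fd -- the fdup() namespace that >& lives in
some_cmd &> 0                # unambiguous: both streams go to a file literally named 0
some_cmd >& 0                # parsed as 1>&0: dup stdout onto fd 0, and no file is created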
[n]>&word is POSIX-standardized; &>word is not found anywhere in the POSIX standard.
The only forms for duplicating existing file descriptors defined by the POSIX standard are [n]<&word and [n]>&word. In the grammar, these are given the token names GREATAND and LESSAND.
There are no POSIX-defined tokens for &> or &< – they are merely syntactic sugar for the commonly used operation of "I don't want to see anything on my screen", or "send stdout and stderr to a file".
This is useful, because cmd 2>&1 > myFile - surprisingly to those new to bash - doesn't work as intended for the "clean screen" goal, whereas cmd > myFile 2>&1, does.
So.. why does &>> work when >>& does not? Because whoever wrote &>> didn't feel the need to create an ambiguity, and consciously chose to not allow >>&.
fdup() synonyms with >> would not add value.
To explain a bit more about why [n]>&word synonyms are pointless -- keep in mind that the difference between >bar and >>bar is the presence of the O_APPEND flag and absence of O_TRUNC in the flags argument to open(). However, when you're providing a file descriptor number -- and thus performing an fdup() from an old FD number to a new one -- the file is already open; the flags thus cannot be changed. Even the direction -- > vs < -- is purely informational to the reader.
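A small sketch of that last point (cmd and log.txt are placeholders):
exec 3>>log.txt   # fd 3 is opened once, with O_APPEND
cmd >&3           # stdout is dup'ed onto fd 3; writes still append, because the open()
                  # flags travel with the already-open descriptor, not with the dup
cmd 1>&3          # exactly the same thing -- which is why a ">>&3" form would add nothing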

Capture stdout for a long time in bash

I'm using a script which calls another, like this:
# stuff...
OUT="$(./scriptB)"
# do stuff with the variable OUT
Basically, scriptB prints its output over time: it prints a line, another 2s later, another 3s after that, and so on.
With the snippet I use, I only get the first part of the output; I miss a lot.
How can I get the whole output by capturing stdout for a given time? Something like:
begin capture
./scriptB
stop capture
I don't mind if the output is not shown on screen.
Thanks.
If I understand your question, then I believe you can use the tee command, like
./scriptB | tee $HOME/scriptB.log
It will display the stdout from scriptB and write stdout to the log file at the same time.
Some of your output seems to be coming on the STDERR stream. So we have to redirect that as needed. As in my comment, you can do
{ ./scriptB ; } > /tmp/scriptB.log 2>&1
Which can almost certainly be reduced to
./scriptB > /tmp/scriptB.log 2>&1
And in newer versions of bash, this can be further reduced to
./scriptB >& /tmp/scriptB.log
AND finally, as your original question involved storing the output in a variable, you can do
OUT=$(./scriptB 2>&1)
(Redirecting into the log file inside the $( ) would leave OUT empty, since command substitution only captures what reaches its stdout.)
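If you want the on-disk log as well as the variable, a sketch using tee (which copies its input to the file and passes it through):
OUT=$(./scriptB 2>&1 | tee /tmp/scriptB.log)   # OUT and the log file both get stdout+stderr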
The notation 2>&1 says: take file descriptor 2 of this process (STDERR) and tie it (&) to file descriptor 1 of the process (STDOUT), so both end up going to the same place.
The alternate notation provided (... >& file) is shorthand for > file 2>&1.
Personally, I'd recommend using the 2>&1 syntax, as this is understood by all Bourne derived shells (not [t]csh).
As an aside, all processes by default have 3 file descriptors created when the process is created, 0=STDIN, 1=STDOUT, 2=STDERR. Manipulation of those streams is usually as simple as illustrated here. More advanced (rare) manipulations are possible. Post a separate question if you need to know more.
IHTH

shell >& operator?

I have a question about what I think is an operator or argument passer but google hasn't turned up anything. The script this is contained in is
#!/bin/sh
ln mopac.in FOR005
mopac >& FOR006
mv FOR006 mopac.out
When I call "mopac mopac.in" directly, the program runs fine. For my needs, however, mopac is called from another program using this script, and it seems the input file is not being passed, so mopac is not running. I don't understand what ">&" is supposed to do, so I am having trouble troubleshooting.
Thanks.
>& FILE is csh-style bash shorthand for > FILE 2>&1, that is, redirect both standard output and standard error. (If /bin/sh is not bash, as is true on a number of Linux distributions, this will elicit an error.) Bash still accepts this form, although its manual prefers the &> FILE spelling.
Your script there is not passing mopac.in at all, but appears to be assuming that mopac will read its input from FOR005, so uses ln to make it available there. Perhaps you should change the script to read mopac.in as a parameter, just as you're running it directly.
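A possible rewrite along those lines (a sketch only -- it assumes mopac accepts the input file as an argument, as your direct "mopac mopac.in" invocation suggests):
#!/bin/sh
mopac mopac.in > mopac.out 2>&1   # pass the input explicitly and redirect both streams portably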
Explanation here: http://tldp.org/LDP/abs/html/io-redirection.html
>&j
# Redirects, by default, file descriptor 1 (stdout) to j.
# All stdout gets sent to file pointed to by j.

Do some programs not accept process substitution for input files?

I'm trying to use process substitution for an input file to a program, and it isn't working. Is it because some programs don't allow process substitution for input files?
The following doesn't work:
bash -c "cat meaningless_name"
>sequence1
gattacagattacagattacagattacagattacagattacagattacagattaca
>sequence2
gattacagattacagattacagattacagattacagattacagattacagattaca
bash -c "clustalw -align -infile=<(cat meaningless_name) -outfile=output_alignment.aln -newtree=output_tree.dnd"
(Less verbose output, finishing with:
No sequences in file. No alignment!
But the following controls do work:
bash -c "clustalw -align -infile=meaningless_name -outfile=output_alignment.aln -newtree=output_tree.dnd"
(Verbose output, finishing with:
CLUSTAL-Alignment file created [output_alignment.aln]
bash -c "cat <(cat meaningless_name) > meaningless_name2"
diff meaningless_name meaningless_name2
(No output: the two files are the same)
bash -c "clustalw -align -infile=meaningless_name2 -outfile=output_alignment.aln -newtree=output_tree.dnd"
(Verbose output, finishing with:
CLUSTAL-Alignment file created [output_alignment.aln]
Which suggests that process substitution itself works, but that the clustalw program doesn't like it - perhaps because it creates a non-standard file, or creates files with an unusual filename.
Is it common for programs to not accept process substitution? How would I check whether this is the issue?
I'm running GNU bash version 4.0.33(1)-release (x86_64-pc-linux-gnu) on Ubuntu 9.10. Clustalw is version 2.0.10.
Process substitution creates a named pipe. You can't seek into a named pipe.
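You can see what the program is actually handed (a sketch; the exact path varies by system):
echo <(echo test)     # prints something like /dev/fd/63
ls -l <(echo test)    # on Linux this turns out to be a (symlink to a) pipe, not a regular file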
Yes. I've noticed the same thing in other programs. For instance, it doesn't work in emacs either. It gives "File exists but can not be read". And it's definitely a special file, for me /proc/self/fd/some_number. And it doesn't work reliably in either less or most, with default settings.
For most:
most <(/bin/echo 'abcdef')
and anything that short displays nothing; longer values get their beginning truncated. less apparently works, but only if you specify -f.
I find zsh's = much more useful in practice. It's syntactically the same, except = instead of <. But it just creates a temporary file, so support doesn't depend on the program.
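For example (zsh; a sketch building on the emacs case mentioned above):
emacs =(cat meaningless_name)   # works where emacs <(cat meaningless_name) fails, because
                                # =( ) hands emacs a regular temporary file it can seek in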
EDIT:
I found zsh uses TMPPREFIX to choose the temporary filename. So even if you don't want your real /tmp to be tmpfs, you can mount one for zsh.

Diff output from two programs without temporary files

Say I have two programs a and b that I can run with ./a and ./b.
Is it possible to diff their outputs without first writing to temporary files?
Use <(command) to pass one command's output to another program as if it were a file name. Bash sends the command's output to a pipe and passes a file name like /dev/fd/63 to the outer command.
diff <(./a) <(./b)
Similarly you can use >(command) if you want to pipe something into a command.
This is called "Process Substitution" in Bash's man page.
Adding to both the answers, if you want to see a side by side comparison, use vimdiff:
vimdiff <(./a) <(./b)
One option would be to use named pipes (FIFOs):
mkfifo a_fifo b_fifo
./a > a_fifo &
./b > b_fifo &
diff a_fifo b_fifo
... but John Kugelman's solution is much cleaner.
For anyone curious, this is how you perform process substitution in the Fish shell:
Bash:
diff <(./a) <(./b)
Fish:
diff (./a | psub) (./b | psub)
Unfortunately the implementation in fish is currently deficient; fish will either hang or use a temporary file on disk. You also cannot use psub for output from your command.
Adding a little more to the already good answers (helped me!):
The command docker outputs its help to STD_ERR (i.e. file descriptor 2)
I wanted to see if docker attach and docker attach --help gave the same output
$ docker attach
$ docker attach --help
Having just typed those two commands, I did the following:
$ diff <(!-2 2>&1) <(!! 2>&1)
!! is the same as !-1 which means run the command 1 before this one - the last command
!-2 means run the command two before this one
2>&1 means send file_descriptor 2 output (STD_ERR) to the same place as file_descriptor 1 output (STD_OUT)
Hope this has been of some use.
For zsh, using =(command) automatically creates a temporary file and replaces =(command) with the path of the file itself. Ordinary command substitution, by contrast, replaces $(command) with the output of the command.
This zsh feature is very useful and can be used like so to compare the output of two commands using a diff tool, for example Beyond Compare:
bcomp =(ulimit -Sa | sort) =(ulimit -Ha | sort)
For Beyond Compare, note that you must use bcomp for the above (instead of bcompare), since bcomp launches the comparison and waits for it to complete. If you use bcompare, it launches the comparison and exits immediately, so the temporary files created to store the commands' output disappear before the comparison can read them.
Read more here: http://zsh.sourceforge.net/Intro/intro_7.html
Also notice this:
Note that the shell creates a temporary file, and deletes it when the command is finished.
and the following which is the difference between $(...) and =(...) :
If you read zsh's man page, you may notice that <(...) is another form of process substitution which is similar to =(...). There is an important difference between the two. In the <(...) case, the shell creates a named pipe (FIFO) instead of a file. This is better, since it does not fill up the file system; but it does not work in all cases. In fact, if we had replaced =(...) with <(...) in the examples above, all of them would have stopped working except for fgrep -f <(...). You can not edit a pipe, or open it as a mail folder; fgrep, however, has no problem with reading a list of words from a pipe. You may wonder why diff <(foo) bar doesn't work, since foo | diff - bar works; this is because diff creates a temporary file if it notices that one of its arguments is -, and then copies its standard input to the temporary file.
