What does >| do in bash?

In the article How To Create a Patch File for an RPM, there is this command:
diff -ru base-1.4.4-orig base-1.4.4 >| $HOME/rpmbuild/SOURCES/base-1.4.4-f12.patch
Since the output is written to a file, the simple redirection operator > works fine for me.
Does this operator mean redirect to a pipe? If so, how is a redirect to a pipe different from just a redirect to a file or just a pipe to a process?

By executing the command
set -o noclobber
or the equivalent
set -C
you can cause bash to refuse to write to existing files when redirecting output.
Using >| rather than > overrides that setting.
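A quick demonstration (assuming a writable scratch directory; f is just a throwaway file):
$ set -o noclobber
$ echo one > f          # creating a new file is still allowed
$ echo two > f
bash: f: cannot overwrite existing file
$ echo two >| f         # >| overrides noclobber and overwrites f
$ set +o noclobber      # turn the option back off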
References:
The set builtin
Redirecting Output
Or run info bash (assuming it's installed on your system) and search for >|:
s>\|
(If you're familiar with csh and/or tcsh, bash's >| (greater-than vertical-bar) is similar to csh's >! (greater-than exclamation-mark).)

From the bash manpage:
If the redirection operator is >|, or the redirection operator is > and the noclobber option to the set builtin command is not enabled, the redirection is attempted even if the file named by word exists.

Related

How can I duplicate standard input (stdin) to multiple subprocesses in a bash script?

I want to redirect stdin to multiple scripts in order to test an in-development git hook while leaving the old one in place. I know I should use tee somehow, but I don't see how I can use the basic >, < and pipe | redirection features of bash to do this. Furthermore, how can I redirect the stdin of a script? I don't want to use read, because that only reads one line at a time and I'd have to re-execute all subprocesses for each line.
You could use tee with normal files (possibly temp files via mktemp), then cat those files to your various scripts. More directly, you could replace those normal files with Named Pipes created with mkfifo. But you can do it in one pipe using Bash's powerful Process Substitution >( cmd ) and <( cmd ) features to replace the file tee expects with your subprocesses.
Use <&0 for the first tee to get the script's stdin. Edit: as chepner pointed out, tee by default inherits the shell's stdin.
The final result is this wrapper script:
#!/bin/bash
set +o pipefail
tee >(testscript >> testscript.out.log 2>> testscript.err.log) | oldscript
Some notes:
use set +o pipefail to disable Bash's pipefail option in case it was previously enabled. With pipefail enabled, Bash reports failures from any command in the pipeline; with it disabled, only the last command's exit status counts, which is what we want here to keep testscript invisible to the wrapper (it should behave as if it were just calling oldscript, to avoid disruption).
redirect the stdout of testscript, otherwise it will be forwarded to the next command in the pipeline, which is probably not what you want. Redirect stderr too while you're at it.
Any number of tees can be chained in the pipeline like this to duplicate your input; see the sketch below. (The <&0 stdin redirection mentioned above has been removed from the initial tee, since tee inherits the shell's stdin by default; don't copy it to the chained tees either.)
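For instance, a sketch of a wrapper that duplicates stdin to two test scripts (testscript1 and testscript2 are placeholder names) before handing it to the real hook:
#!/bin/bash
tee >(testscript1 &>/dev/null) | tee >(testscript2 &>/dev/null) | oldscript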

When can I use |& in Bash? Is it usable in other shells?

I often see commands like:
command1 |& command2
So what we are doing here is redirecting stderr into the pipe along with stdout, so that both of them "become" the stdin of the next element in the pipe.
Or better described in the Bash Reference manual -> Pipelines:
A pipeline is a sequence of simple commands separated by one of the control operators ‘|’ or ‘|&’.
...
If |& is used, command1’s standard error, in addition to its standard output, is connected to command2’s standard input through the pipe; it is shorthand for 2>&1 |. This implicit redirection of the standard error to the standard output is performed after any redirections specified by the command.
However, I wonder: is this syntax usable in all Bash versions? That is, is it always interchangeable with 2>&1 |? Can it also be used in any other shell (ksh, csh...)?
This feature is available in Bash 4 and later only:
|& (bash4)
Source
From the bash CHANGES file:
The parser now understands `|&' as a synonym for `2>&1 |', which redirects the standard error for a command through a pipe.
Source
Also note that the checkbashisms script will flag it as well:
'\s\|\&' => q<pipelining is not POSIX>,
Source
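For example, assuming a nonexistent path so that ls complains on stderr, both of these feed the error message to grep (the first form needs Bash 4+):
$ ls /nonexistent |& grep 'No such'
$ ls /nonexistent 2>&1 | grep 'No such'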

bash standard input redirection doesn't support wildcards?

I have some files to process by redirecting them to standard input, but bash complains about the wildcards.
someprogram < data/*
The bash error is bash: data/*: ambiguous redirect. Are there any workarounds to accomplish this other than using cat to read the files and piping the contents to the program?
No, this is not possible without using cat. Bash will open only a single file as stdin for a command. By the way, this is a useful use of cat :)
cat data/* | someprogram
is the way to go here.
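If you actually need the program run separately for each file rather than once over the concatenated data, a plain loop works too (a sketch, with someprogram standing in for your command):
for f in data/*; do
    someprogram < "$f"
done
Note that this runs the program once per file, which is not equivalent to a single run over all the data.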

bash > prohibit to use redirection (>)

My environments:
CentOS 6.5
bash 4.1.2(1)
Sometimes when I intend to add something to a file,
instead of
$ echo "xxx" >> mymemo.txt
I type mistakenly
$ echo "xxx" > mymemo.txt
resulting in losing the memos in mymemo.txt.
I am wondering: is there a way to prohibit the overwriting redirection (>) while still allowing the appending redirection (>>)?
You can use set -o noclobber in your .bashrc or .profile
If set, bash prevents you from overwriting existing files when redirecting.
mint#mint ~ $ echo "foo" > test
bash: test: cannot overwrite existing file
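Appending still works, and a deliberate overwrite remains possible with >|:
mint#mint ~ $ echo "foo" >> test    # appending is unaffected
mint#mint ~ $ echo "foo" >| test    # explicit override when you really mean it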

Is there a command-line shortcut for ">/dev/null 2>&1"

It's really annoying to type this whenever I don't want to see a program's output. I'd love to know if there is a shorter way to write:
$ program >/dev/null 2>&1
Generic shell is the best, but other shells would be interesting to know about too, especially bash or dash.
>& /dev/null
You can write a function for this:
function nullify() {
    "$@" >/dev/null 2>&1
}
To use this function:
nullify program arg1 arg2 ...
Of course, you can name the function whatever you want. It can be a single character for example.
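For instance, a one-character variant (purely a naming choice):
n() { "$@" >/dev/null 2>&1; }
n program arg1 arg2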
By the way, you can use exec to redirect stdout and stderr to /dev/null temporarily. I don't know if this is helpful in your case, but I thought I'd share it.
# Save stdout, stderr to file descriptors 6, 7 respectively.
exec 6>&1 7>&2
# Redirect stdout, stderr to /dev/null
exec 1>/dev/null 2>/dev/null
# Run program.
program arg1 arg2 ...
# Restore stdout, stderr.
exec 1>&6 2>&7
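An alternative that avoids the manual save/restore is to group the commands and redirect the group as a whole (a sketch):
{
    program arg1 arg2 ...
} >/dev/null 2>&1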
In bash, zsh, and dash:
$ program >&- 2>&-
The syntax may also appear to work in other shells. Note, however, that this solution closes the file descriptors rather than redirecting them to /dev/null; a program that then writes to them gets a "bad file descriptor" (EBADF) error, which could potentially cause it to abort.
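You can see the failure mode from bash's own echo builtin:
$ echo hi >&-
bash: echo: write error: Bad file descriptor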
Most shells support aliases. For instance, in my .zshrc I have things like:
alias -g no='2> /dev/null > /dev/null'
Then I just type
program no
If /dev/null is too much to type, you could (as root) do something like:
ln -s /dev/null /n
Then you could just do:
program >/n 2>&1
But of course, scripts you write in this way won't be portable to other systems without setting up that symlink first.
It's also worth noting that oftentimes redirecting output is not really necessary. Many Unix and Linux programs accept a "quiet" or "silent" flag, usually -q or -s, that suppresses output and communicates success or failure through the exit status alone.
For example
grep foo bar.txt >/dev/null 2>&1
if [ $? -eq 0 ]; then
do_something
fi
Can be rewritten as
grep -q foo bar.txt
if [ $? -eq 0 ]; then
do_something
fi
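Since if tests a command's exit status directly, the [ $? -eq 0 ] step can be dropped entirely:
if grep -q foo bar.txt; then
    do_something
fi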
Edit: the >(:)- and |:-based solutions below might cause an error, because : doesn't read its stdin, though that might not be as bad as closing the file descriptor, as proposed in Zaz's answer.
For bash and bash-compliant shells (zsh...):
$ program &>/dev/null
OR
$ program &> >(:) # May actually cause an error or abort the program
For all shells:
$ program 2>&1 >/dev/null
OR
$ program 2>&1|: # May actually cause an error or abort the program
$ program 2>&1 > >(:) does not work in dash, because dash does not support process substitution.
Explanations:
2>&1 redirects stderr (file descriptor 2) to stdout (file descriptor 1).
| is the regular piping of stdout to the stdin of another command.
: is a shell builtin which does nothing (it is equivalent to true).
&> redirects both stdout and stderr outputs to a file.
>(your-command) is process substitution. It is replaced with a path to a special file, for instance: /proc/self/fd/6. This file is used as input file for the command your-command.
Note: A process trying to write to a closed file descriptor will get an EBADF (bad file descriptor) error, which is more likely to make it abort than writing to | true. The latter would cause an EPIPE (broken pipe) error; see Charles Duffy's comment.
Ayman Hourieh's solution works well for one-off invocations of overly chatty programs. But if there's only a small set of commonly called programs for which you want to suppress output, consider silencing them by adding the following to your .bashrc file (or the equivalent, if you use another shell):
CHATTY_PROGRAMS=(okular firefox libreoffice kwrite)
for PROGRAM in "${CHATTY_PROGRAMS[@]}"
do
printf -v eval_str '%q() { command %q "$@" &>/dev/null; }' "$PROGRAM" "$PROGRAM"
eval "$eval_str"
done
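After the loop runs, each listed name resolves to a quiet wrapper; for okular, for example, the generated function is equivalent to:
okular() { command okular "$@" &>/dev/null; }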
This way you can continue to invoke programs using their usual names, but their stdout and stderr output will disappear into the bit bucket.
Note also that certain programs allow you to configure how much logging/debugging output they spew. For KDE applications, you can run kdebugdialog and selectively or globally disable debugging output.
Seems to me that the most portable solution, and the best answer, would be a macro on your terminal (PC).
That way, no matter what server you log in to, it will always be there.
If you happen to run Windows, you can get the desired outcome with AutoHotkey (AHK, which is open source) in two tiny lines of code that translate any string of keys into any other string of keys, in situ.
You type "ugly.sh >>NULL" and it rewrites it as "ugly.sh >/dev/null 2>&1" or whatever you prefer.
Solutions for other platforms are somewhat more difficult. AppleScript can paste in keystrokes, but it can't be triggered as easily.
