Double redirection test - bash

What does this print in your shell?
echo foo | while read line; do echo $line; done < <(echo bar)
I would expect it to evaluate to echo foo | bar or foo < <(bar), both of which would result in an error message.
In Bash 4.1.5 it looks like the pipe is simply discarded:
bar
In Dash:
sh: Syntax error: redirection unexpected

Dash doesn't support process substitution (<()).
The behavior you're seeing is consistent if you use syntax that's supported by each of the shells you're comparing. Try this:
echo hello | cat < inputfile
You should see the contents of "inputfile" and not "hello". Of several shells I tried, only Z shell showed both.
This is what POSIX says regarding pipelines and redirection:
The standard output of command1 shall be connected to the standard input of command2. The standard input, standard output, or both of a command shall be considered to be assigned by the pipeline before any redirection specified by redirection operators that are part of the command (see Redirection ).
I interpret this to mean that in the example above, the pipeline assigns stdin to cat first, and then the redirection overrides it.
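A quick way to see this yourself in Bash (the file name inputfile and its contents are just an example):
$ printf 'from the file\n' > inputfile
$ echo hello | cat < inputfile   # the redirection wins; "hello" is discarded
from the file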

Related

how far does a redirection go in a command line?

I would like to understand where a stdin redirection goes in an expression like < <(cmd)
As a test to learn more about bash I tried to write a bash function with while .. do .. done, and to get it working I had to use trial and error, mainly because I did not know the behavior of the first redirection in < <(find ...):
while read L ;do basename "$L";((k++));done < <(
find -maxdepth 1 -type d -name '.*' |
sort
)
With only <(find ... it does not work. I suppose that's because the stdout of the find command line goes to a temporary file (so I read); so I added one more < to "push" the copy of stdout further. That I can understand, but how can I know that the stdout copied into the temporary file does not stop at the first command encountered (basename) and instead goes all the way to the stdin of the while command?
<(...) by itself is a process substitution. It behaves like a file name, except the "contents" of the file it names are the output of the command. For example,
$ echo <(echo foo)
/dev/fd/63
$ cat <(echo foo; echo bar)
foo
bar
A while loop doesn't take arguments, which is why you get a syntax error without the input redirection.
$ while read x; do echo "$x"; done <(echo foo; echo bar)
bash: syntax error near unexpected token `<(echo foo; echo bar)'
With the input redirection, the while loop (which is a command, and like any other command, has its own standard input) uses the process substitution as its standard input.
$ while read x; do echo "$x"; done < <(echo foo; echo bar)
foo
bar
while doesn't actually use its standard input, but any command in the while loop inherits its standard input from the loop. That includes read, so each execution of read gets a different line from the file until the file is exhausted, at which point read has an exit status of 1 and the loop terminates.
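As a small sketch of that inheritance (assuming Bash, since <() is used): each read call consumes one more line from the loop's redirected standard input until it is exhausted.
$ while read x; do echo "got: $x"; done < <(printf '%s\n' one two three)
got: one
got: two
got: three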
how far does a redirection go in a command line?
The redirection applies for the whole duration of the command it is attached to.
There is a shell grammar which defines the basic pieces that make up a "command line". You can peek at the POSIX shell standard and the Bash documentation.
A command is one of the following:
Simple command (see Simple Commands)
Pipeline (see Pipelines)
List compound-list (see Lists)
Compound command (see Compound Commands)
Function definition (see Function Definition Command)
A command may be a compound command, which may be a looping construct, which may be a while looping construct. A while loop is one single command.
The redirection applies for the whole duration of that command, and is inherited by any commands inside it.
while
redirected here
do
redirected here
done < redirection
if
redirected here
then
redirected here
else
redirected here
fi < redirection
etc.
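A concrete, runnable version of the while pattern above (the file name nums.txt is hypothetical):
$ printf '1\n2\n3\n' > nums.txt
$ while read n; do echo "line $n"; done < nums.txt   # the single redirection on done feeds every read
line 1
line 2
line 3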

Why does input redirection work differently when in command substitution?

Consider the following case:
$ echo "abc" > file
$ var=$(< file)
$ echo "$var"
abc
Inside the command substitution, we use a redirect and a file, and the content of the file is correctly captured by the variable.
However, all the following examples produce no output:
$ < file
$ < file | cat
$ < file > file2
$ cat file2
In all these cases the content of the file is not sent to the output.
So why is there a difference when the redirect is placed inside the command substitution or not? Does the redirect have a different function when inside vs outside a command substitution block?
$(< file) is not a redirection; it is just a special case of a command substitution that uses the same syntax as an input redirection.
In general, an input redirection must be associated with a command. There is one case that arguably could be considered an exception, which is
$ > file
It's not technically a redirection, since nothing is redirected to the file, but file is still opened in write mode, which truncates it to 0 bytes.
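Both points can be checked with a short sketch, assuming Bash (the file names here are just examples):
$ echo abc > file
$ var=$(< file)        # command substitution that reads the file, no cat needed
$ echo "$var"
abc
$ > file               # no command: the file is opened for writing and truncated
$ wc -c < file
0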

Bash script - stdout file descriptor?

I have the following in my script:
OUTFILE=./output.log
echo "foo" >> ${OUTFILE}
It works just fine when OUTFILE is an actual file path. However, sometimes I'd like to see the output on stdout by modifying OUTFILE but it doesn't work. I tried with 1 and &1, both quoted and unquoted, as well as leaving it empty.
It just keeps telling me this:
./foo.sh: line 2: ${OUTFILE}: ambiguous redirect
Use /dev/stdout as your filename for this. (See the related question “Portability of > /dev/stdout”.)
You can't use &1 in the variable because of parsing order issues. The redirection tokens are searched for before variable expansion is performed. So when you use:
$ o='&1'
$ echo >$o
the shell scans for redirection operators and sees the > redirection operator and not the >& operator. (And there isn't a >>& operator to begin with anyway so your appending example wouldn't work regardless. Though newer versions of bash do have an &>> operator for >> file 2>&1 use.)
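To see what the shell actually does with that value, here is a hedged sketch (Bash): the expansion is treated as an ordinary file name, so you end up with a file literally named &1.
$ o='&1'
$ echo hi >$o
$ cat '&1'
hi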
I'm guessing you want to do one of these:
Print to file
OUTFILE=./output.log
echo "foo" >> "${OUTFILE}"
Print to stdout
OUTFILE=/dev/stdout
echo "foo" >> "${OUTFILE}"
or just
echo "foo"
Print to file and stdout
OUTFILE=./output.log
echo "foo" | tee "${OUTFILE}"

Piping not working with echo command [duplicate]

When I run the following Bash script, I would expect it to print Hello. Instead, it prints a blank line and exits.
echo 'Hello' | echo
Why doesn't piping output from echo to echo work?
echo prints all of its arguments. It does not read from stdin. So the second echo prints all of its arguments (none) and exits, ignoring the Hello on stdin.
For a program that reads its stdin and prints that to stdout, use cat:
$ echo Hello | cat
Hello
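If you really do need the piped text to end up as arguments to echo, one common workaround (a sketch, not the only option) is xargs or a command substitution:
$ echo Hello | xargs echo
Hello
$ echo "$(echo Hello)"
Hello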
In this case the pipes you are using are more correctly known as anonymous pipes, because they have no name (there are also named pipes). Anonymous pipes only work between related processes, for example processes with the same parent.
Pipes are part of the IO system provided by the C runtime library. These streams are buffered by default (there is an exception). Basically a pipe just connects the output buffer of one process to the input buffer of another.
The first three streams used (called file descriptors) are numbered 0, 1, and 2. The first, 0, is known as standard input, or stdin (the name used in C). By default this is connected to the keyboard, but it can be redirected either using the < symbol or the program name being on the right side of a pipe.
The second, 1, is known as standard output, or stdout. By default this is connected to the terminal screen, but can be redirected by using the > symbol or the program name being on the left side of a pipe.
So:
echo 'Hello' | echo
takes the standard output from echo and passes it to the standard input of echo. But echo does not read stdin! So nothing happens.
Filter programs process the filenames specified on the command-line. If no filenames are given then they read stdin. Examples include cat, grep, and sed, but not echo. For example:
echo 'Hello' | cat
will display 'Hello', and the cat is useless (it often is).
echo 'Hello' | cat file1
will ignore the output from echo and just display the contents of file1. Remember that stdin is only read if no filename is given.
What do you think this displays?
echo 'Hello' | cat < file1 file2
and why?
Finally, the third stream, 2, is called standard error, or stderr, and this one is unbuffered. It is ignored by pipes, because they only operate between stdin and stdout. However, you can redirect stderr to use stdout (see man dup2):
myprog 2>&1 | anotherprog
The 2>&1 means "redirect file descriptor 2 to the same place as file descriptor 1".
The above is normal behaviour, however a program can override all that if it wants to. It could read from file descriptor 2, for example. I have omitted a lot of other detail, including other forms of redirection such as process substitution and here documents.
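A small sketch of the stderr point (nosuchfile is a hypothetical missing file, and the exact wording of the error message depends on your ls): the error bypasses the pipe unless you add 2>&1.
$ ls nosuchfile | wc -l        # the error goes to the terminal, not through the pipe
ls: cannot access 'nosuchfile': No such file or directory
0
$ ls nosuchfile 2>&1 | wc -l   # now the error message travels through the pipe
1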
Piping only works for commands that take input from stdin. But echo does not read from stdin; it takes its input from its arguments and prints them. So this won't work. In order to echo the piped text you can do something like echo $(echo 'hello')
It is because echo (both the builtin and /bin/echo) doesn't read anything from stdin.
Use cat instead:
echo 'Hello' | cat
Hello
Or without pipes:
cat <<< 'Hello'

Pipe | Redirection < > Precedence

I want to be clear about when a pipe | or a redirection < > takes precedence in a command.
This is my thinking, but I need confirmation that this is how it works.
Example 1:
sort < names | head
The pipe runs first: names | head, then it sorts what is returned from names | head
Example 2:
ls | sort > out.txt
This one seems straightforward from testing: ls | sort, then the result is redirected to out.txt
Example 3:
Fill in the blank? Can you have both a < and a > with a | ???
In terms of syntactic grouping, > and < have higher precedence; that is, these two commands are equivalent:
sort < names | head
( sort < names ) | head
as are these two:
ls | sort > out.txt
ls | ( sort > out.txt )
But in terms of sequential ordering, | is performed first; so, this command:
cat in.txt > out1.txt | cat > out2.txt
will populate out1.txt, not out2.txt, because the > out1.txt is performed after the |, and therefore supersedes it (so no output is piped out to cat > out2.txt).
Similarly, this command:
cat < in1.txt | cat < in2.txt
will print in2.txt, not in1.txt, because the < in2.txt is performed after the |, and therefore supersedes it (so no input is piped in from cat < in1.txt).
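You can verify both cases yourself with something like this (file names are placeholders):
$ printf 'one\n' > in1.txt; printf 'two\n' > in2.txt
$ cat < in1.txt | cat < in2.txt
two
$ cat in1.txt > out1.txt | cat > out2.txt
$ cat out1.txt
one
$ cat out2.txt                 # prints nothing: out2.txt is empty, nothing came through the pipe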
From man bash (as are the other quotes):
SHELL GRAMMAR
Simple Commands
A simple command is a sequence of optional variable assignments followed by
blank-separated words and redirections, and terminated by a control
operator. The first word specifies the command to be executed, and is
passed as argument zero. The remaining words are passed as arguments
to the invoked command.
The return value of a simple command is its exit status, or 128+n if
the command is terminated by signal n.
Pipelines
A pipeline is a sequence of one or more commands separated by one of
the control operators | or |&. The format for a pipeline is:
[time [-p]] [ ! ] command [ [| or |&] command2 ... ]
In other words, you can have any number of redirections for a (simple) command; you can also use that as part of a pipeline. Or, put another way, redirection binds more tightly than pipe.
There are a couple of ways to work around this (although they're rarely either necessary or aesthetic):
1. You can make a "compound command" and redirect into it:
Compound Commands
A compound command is one of the following:
(list) list is executed in a subshell environment (see
COMMAND EXECUTION ENVIRONMENT below). Variable
assignments and builtin commands that affect the
shell's environment do not remain in effect after the
command completes. The return status is the exit status of list.
{ list; }
list is simply executed in the current shell environment. list
must be terminated with a newline or semicolon. This is known as a
group command. The return status is the exit status of list. Note
that unlike the metacharacters ( and ), { and } are reserved words
and must occur where a reserved word is permitted to be recognized.
Since they do not cause a word break, they must be separated from
list by whitespace or another shell metacharacter.
So:
$ echo foo > input
$ { cat | sed 's/^/I saw a line: /'; } < input
I saw a line: foo
2. You can redirect to a pipe using "process substitution":
Process Substitution
Process substitution is supported on systems that support named pipes
(FIFOs) or the /dev/fd method of naming open files. It takes the form of
<(list) or >(list). The process list is run with its input or output
connected to a FIFO or some file in /dev/fd. The name of this file is
passed as an argument to the current command as the result of the
expansion. If the >(list) form is used, writing to the file will provide
input for list. If the <(list) form is used, the file passed as an argument
should be read to obtain the output of list.
So:
rici#...$ cat > >(sed 's/^/I saw a line: /') < <(echo foo; echo bar)
I saw a line: foo
rici#...$ I saw a line: bar
(Why the prompt appears before the output terminates, and what to do about it are left as exercises).
This is pretty much what I understand after doing some reading (including ruakh's answer)
First of all, if you redirect multiple times, all the redirections are performed, but only the last redirection will take effect (assuming none of the earlier redirections cause error)
e.g. cat < in1.txt < in2.txt is equivalent to cat < in2.txt, unless in1.txt does not exist in which case this command will fail (since < in1.txt is performed first)
Similarly, with cat in.txt > out1.txt > out2.txt, only out2.txt would contain the contents of in.txt, but since > out1.txt was performed first, out1.txt would be created if it doesn't exist.
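A quick check of that rule (again with placeholder file names):
$ printf 'data\n' > in.txt
$ cat in.txt > out1.txt > out2.txt
$ wc -c < out1.txt             # created but left empty
0
$ wc -c < out2.txt             # received cat's output
5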
What pipe does is connect the stdout of previous command to the stdin of the next command, and that connection comes before any other redirections (from Bash manual).
So you can think of
cat in1.txt > out1.txt | cat > out2.txt
as
cat in1.txt > pipe > out1.txt; cat < pipe > out2.txt
And applying the multiple redirection rule mentioned before, we can simplify this to
cat in1.txt > out1.txt; cat < pipe > out2.txt
Result: The content of in1.txt is copied to out1.txt, since nothing was written to pipe
Using another of ruakh's examples,
cat < in1.txt | cat < in2.txt
is roughly equivalent to
cat > pipe < in1.txt; cat < pipe < in2.txt
which is effectively
cat > pipe < in1.txt; cat < in2.txt
Result: This time something is written to the pipe, but since the second cat reads from in2.txt instead of the pipe, only the content of in2.txt is printed out. If the pipe competes with a redirection on the same side (> or <), the redirection wins and the pipe is effectively ignored.
It's a little unorthodox, but perfectly legal, to place the < anywhere you like, so I prefer this as it better illustrates the left-to-right data flow:
<input.txt sort | head >output.txt
The only time you cannot do this is with built-in control structure commands (for, if, while).
# Unfortunately, NOT LEGAL
<input.txt while read line; do ...; done
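The usual workaround for that loop case, as shown in other answers here, is to put the redirection after the done keyword instead (a sketch; input.txt is a placeholder):
while read line; do
    echo "$line"
done <input.txt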
Note that all of these are equivalent commands, but to avoid confusion you should use only the first or the last one:
<input.txt grep -l foobar
grep <input.txt -l foobar
grep -l <input.txt foobar
grep -l foobar <input.txt
Because the file name must always come directly after the redirection operator, I prefer to leave out the optional space between the < and the file name.
Corrections:
Example 1:
sort < names | head
In this case, the input redirection happens first (sort reads from and sorts names), then the result of that is piped to head.
In general you can read from left to right. The standard idiom works as follows:
Use of input redirection "<" tells the program to read from a file instead of stdin
Use of output redirection ">" tells the program to output to a file instead of stdout
Use of pipe "program_a | program_b" takes everything that would normally be output by program_a to stdout, and feeds it all directly to program_b as if it was read from stdin.
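Putting all three together in one line, which also answers Example 3 above: sort reads from names, the sorted lines flow through the pipe to head, and head's output is written to out.txt (file names taken from the examples above).
sort < names | head > out.txt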

Resources