Piping not working with echo command [duplicate] - bash

When I run the following Bash script, I would expect it to print Hello. Instead, it prints a blank line and exits.
echo 'Hello' | echo
Why doesn't piping output from echo to echo work?

echo prints its arguments; it does not read from stdin. So the second echo prints its arguments (there are none, so it outputs just a newline) and exits, ignoring the Hello waiting on its stdin.
For a program that reads its stdin and prints that to stdout, use cat:
$ echo Hello | cat
Hello

In this case the pipe you are using is more correctly known as an anonymous pipe, because it has no name (there are also named pipes). Anonymous pipes only work between related processes, for example processes with the same parent.
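As an aside, a named pipe is created explicitly with mkfifo and can connect unrelated processes. A minimal sketch (the path /tmp/mypipe is just an illustrative name):
mkfifo /tmp/mypipe
echo 'Hello' > /tmp/mypipe &    # the writer blocks until a reader opens the pipe
cat /tmp/mypipe                 # prints Hello
rm /tmp/mypipe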
Pipes are part of the IO system provided by the C runtime library. These streams are buffered by default (stderr is the exception, as described below). Basically a pipe just connects the output buffer of one process to the input buffer of another.
The first three streams (identified by file descriptors) are numbered 0, 1, and 2. The first, 0, is known as standard input, or stdin (the name used in C). By default it is connected to the keyboard, but it can be redirected, either with the < symbol or by placing the program on the right-hand side of a pipe.
The second, 1, is known as standard output, or stdout. By default it is connected to the terminal screen, but it can be redirected with the > symbol or by placing the program on the left-hand side of a pipe.
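For example, assuming a file named names.txt exists, these two commands both feed its contents to sort's stdin, one via < and one via a pipe:
sort < names.txt
cat names.txt | sort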
So:
echo 'Hello' | echo
takes the standard output from echo and passes it to the standard input of echo. But echo does not read stdin! So nothing happens.
Filter programs process the filenames specified on the command-line. If no filenames are given then they read stdin. Examples include cat, grep, and sed, but not echo. For example:
echo 'Hello' | cat
will display 'Hello', and the cat is useless (it often is).
echo 'Hello' | cat file1
will ignore the output from echo and just display the contents of file1. Remember that stdin is only read if no filename is given.
What do you think this displays?
echo 'Hello' | cat < file1 file2
and why?
Finally, the third stream, 2, is called standard error, or stderr, and this one is unbuffered. It is ignored by pipes, because they only operate between stdin and stdout. However, you can redirect stderr to use stdout (see man dup2):
myprog 2>&1 | anotherprog
The 2>&1 means "redirect file descriptor 2 to the same place as file descriptor 1".
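A quick way to see both behaviours, using /nonexistent as a path that presumably does not exist on your system:
ls /nonexistent | wc -l        # the error message bypasses the pipe and hits the terminal; wc counts 0 lines
ls /nonexistent 2>&1 | wc -l   # the error message travels through the pipe; wc typically counts 1 line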
The above is normal behaviour, however a program can override all that if it wants to. It could read from file descriptor 2, for example. I have omitted a lot of other detail, including other forms of redirection such as process substitution and here documents.

Piping only works for commands that read their input from stdin. But echo does not read stdin; it takes its input from its arguments and prints them. So this won't work. To get an echo on the right-hand side to print something, you can use command substitution instead: echo $(echo 'hello')

It is because echo (both the shell builtin and /bin/echo) reads nothing from stdin.
Use cat instead:
echo 'Hello' | cat
Hello
Or without pipes:
cat <<< 'Hello'
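The <<< here-string feeds its string to the stdin of any command that reads it, e.g.:
rev <<< 'Hello'
olleH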

Related

output redirection syntax in shell

I've been trying to experiment and see what the difference would be between
command >file 2> file
and
command >file 2>&1
I haven't been able to. I understand that the second says to send error to where file descriptor 1 (stdout) is already going, and the first would create a new empty file for it, but how can this be seen?
Also, where can I find more information to learn about file descriptors/io syntax and how it works?
The difference is that >file 2>&1 opens the file just once, but then allows access to that single connection to the file (technically, the "open file description" in the kernel) via both file descriptor #1 (stdout) and #2 (stderr). Since writes to both stdout and stderr are going via the same connection ("open file description"), they write to the file in a consistent, coordinated way (and similar coordination applies to files opened for input on multiple descriptors).
>file 2>file, on the other hand, opens the file twice (creating two separate open file descriptions in the kernel), so writing to the file via the two file descriptors is not coordinated, and they can basically step on each others' feet.
An example may help to clarify what I mean. Here's a short subshell command that prints something to stdout, then a bit to stderr, then more to stdout. Try it first with >file 2>&1 and it does what you'd expect:
$ (echo abc; echo 123456 >&2; echo def) >file 2>&1
$ cat file
abc
123456
def
No surprise there, right? Now let's try it with separate connections to the file:
$ (echo abc; echo 123456 >&2; echo def) >file 2>file
$ cat file
1234def
That's probably not what you were expecting. What's happened here is that the first echo command sent "abc" followed by a newline character to stdout, and it got written into the first four bytes of the file. The second echo then sent "123456" followed by a newline to stderr; since the stderr connection was separate, it was still pointed to the beginning of the file, so it got written into the first seven bytes of the file (overwriting the "abc<newline>" that was already there). Then the third echo sent "def" and a newline to stdout; since the stdout connection was pointed to byte #5 of the file (one byte past where the last write to that connection ended), it gets written starting there, which overwrites the "56<newline>" part of what the second echo wrote there.
So having the same file open multiple times can lead to really confusing results. This is why you should always use >file 2>&1 instead of >file 2>file.
Here is a way of seeing one difference (some-command stands for any command that writes something to stderr):
$ set -o noclobber
$ some-command > file 2> file
bash: file: cannot overwrite existing file
$ rm file
$ some-command > file 2>&1
# no error
With noclobber set, the > redirection refuses to open a file that already exists. > file 2> file opens the file twice, so the second open fails (the first one has just created it); > file 2>&1 opens it only once and then duplicates the descriptor, so it succeeds.

How to read only the first line of the output from a file descriptor?

I have output which is coming from fd3 from a program of which I am redirecting to a file as such:
program 3> output.log
In this instance I only need the first line provided by the program to be written to the log and do not want to keep a write handle open to this file for the life of the program.
How can I read only the first line? I think I can use the shell builtin read, but I don't know how to use it with anything other than stdin. Note that I do not want to redirect fd3 to stdout to then use read, as I am capturing stdout to another log.
You can capture the first line of an arbitrary file descriptor in this way:
$ (printf '%s\n' foo bar >&3) 3> >(head -n1)
foo
This prints two lines to FD 3 and redirects that to standard input of head. If you want to store that result to a file simply redirect within the process substitution.
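For instance, a minimal sketch of capturing just the first line of FD 3 to a file (output.log is an illustrative name, and program stands for your command):
program 3> >(head -n1 > output.log)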

redirect stdout and stderr to one file, copy of just stderr to another

I want to redirect the output of stdout and stderr to a common file:
./foo.sh >stdout_and_stderr.txt 2>&1
But also redirect just stderr to a separate file. I tried variations of:
./foo.sh >stdout_and_stderr.txt 2>stderr.txt 2>&1
but none of them work quite right in bash, e.g. stderr only gets redirected to one of the output files. It's important that the combined file preserves the line ordering of the first code snippet, so no dumping to separate files and later combining.
Is there a neat solution to this in bash?
You can use an additional file descriptor and tee:
{ foo.sh 2>&1 1>&3- | tee stderr.txt; } > stdout_and_stderr.txt 3>&1
Be aware that line buffering may cause the stdout output to appear out of order. If this is a problem, there are ways to overcome that including the use of unbuffer.
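If you want to try this recipe without a real foo.sh, a subshell that writes one line to each stream can stand in for it:
{ { echo out; echo err >&2; } 2>&1 1>&3- | tee stderr.txt; } > stdout_and_stderr.txt 3>&1
cat stderr.txt               # err
cat stdout_and_stderr.txt    # out and err (possibly reordered by buffering)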
Using process substitution, you can get a moderate approximation to what you're after:
file1=stdout.stderr
file2=stderr.only
: > $file1 # Zap the file before starting
./foo.sh >> $file1 2> >(tee $file2 >> $file1)
This names the files since one of the names is repeated. The standard output is written to $file1. Standard error is written to the pipeline, which runs tee and writes one copy of its input (which was the standard error output) to $file2, plus a second copy to $file1. The >> redirections mean that each file is opened with O_APPEND, so every write goes at the current end of the file, regardless of what the other process has written.
As noted in comments, the output will, in general, be interleaved differently in this than it would if you simply ran ./foo.sh at the terminal. There are multiple sets of buffering going on to ensure that is what happens. You might also get partial lines because of the ways lines break over buffer size boundaries.
This comment from @jonathan-leffler should be an answer:
Note that your first command (./foo.sh 2>&1 >file) sends errors to the original standard output, and the standard output (but not the redirected standard error) to the file.
If you wanted both in file, you'd have to use ./foo.sh >file 2>&1, reversing the order of the redirections.
They're interpreted reading left to right.
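You can see the left-to-right interpretation with a quick experiment (the subshell just writes one line to each stream):
( echo out; echo err >&2 ) 2>&1 > /dev/null   # prints err: stderr was pointed at the terminal before stdout was discarded
( echo out; echo err >&2 ) > /dev/null 2>&1   # prints nothing: stdout was discarded first, and stderr then followed it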

Why this echo construct does not work?

When I do this:
$ /bin/echo 123 | /bin/echo
I get no output. Why is that?
You ask why it doesn't work. In fact, it does work; it does exactly what you told it to do. Apparently that's not what you expected. I think you expected it to print 123, but you didn't actually say so.
(Note: "stdin" is standard input; "stdout" is standard output.)
/bin/echo 123 | /bin/echo
Here's what happens. The echo command is executed with the argument 123. It writes "123", followed by a newline, to its stdout.
stdout is redirected, via the pipe (|) to the stdin of the second echo command. Since the echo command ignores its stdin, the output of the first echo is quietly discarded. Since you didn't give the second echo command any arguments, it doesn't print anything. (Actually, /bin/echo with no arguments typically prints a single blank line; did you see that?)
Normally pipes (|) are used with filters, programs that read from stdin and write to stdout. cat is probably the simplest filter; it just reads its input and writes it, unchanged, to its output (which means that some-command | cat can be written as just some-command).
An example of a non-trivial filter is rev, which copies stdin to stdout while reversing the characters in each line.
echo 123 | rev
prints
321
rev is a filter; echo is not. echo does print to stdout, so it makes sense to have it on the left side of a pipe, but it doesn't read from stdin, so it doesn't make sense to use it on the right side of a pipe.
"echo" reads from command line, not standard input. So pipeline is not working here.
From bash manual:
echo [-neE] [arg ...]
Output the args, separated by spaces, terminated with a newline.
So your first echo did print "123" to standard output; however, the second echo made no use of it, so "123" was dropped. Then an empty line was printed, as if you had run "echo" on its own.
You can use cat as Keith Thompson suggested:
echo 123 | cat
Note that a pipe is not the same as input redirection: /bin/echo 123 < /bin/echo would try to redirect stdin from the file /bin/echo itself, which is not what a pipe does. A pipe connects the output of one command to the input of the next. From http://tldp.org/HOWTO/Bash-Prog-Intro-HOWTO-4.html:
Pipes let you use the output of a program as the input of another one

piping in linux

I have a file called test which contains the word "hello" in it.
Shouldn't
echo test | cat
output hello? Since it's taking the output of echo test, which is test, as the input for cat, essentially I'm doing cat test.
But the actual output is test; I'm really confused.
Your pipe sends test to cat as its input, not as an argument. You could do:
cat `echo test`
to control the argument to cat with echo.
echo prints its arguments. cat prints a file, which by default is standard input. When you pipe, echo's standard output is connected to cat's standard input.
The correct command is simply cat test.
From cat --help
If no FILE or when FILE is -, read standard input.
In your case, cat reads from stdin, which contains test, and outputs that.
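You can see the "when FILE is -" case directly (this assumes the file test from the question exists):
echo foo | cat - test
This prints foo first (read from stdin via -) and then hello (the file's contents).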
In some cases you might want the argument to be passed through the pipe. This is how you would do that:
echo test | xargs cat
which will output the contents of the file named "test".
