I'm looking for a way to "forward" stdin to stdout in a pipe while, in that same step, writing something to stderr. The example should clarify this:
echo "before.." | >&2 echo "some logging..."; [[forward stdin>stdout]] | cat
This should write "before.." to stdout and, in the meantime, "some logging..." to stderr.
How can I do that? Or is there perhaps a quite different approach to this?
Here's a solution based on your comments:
cat ~/.bashrc | tee >( cat -n >&2 ) | sort
cat ~/.bashrc represents the start of your pipeline, producing some data.
tee duplicates its input, writing to both stdout and any files listed as arguments.
>( ... ) is a bash construct (process substitution) that runs ... as a separate command reading from a pipe, and is replaced on the command line by a filename (something that tee can open and write to).
cat -n represents modifying the input (adding line numbers).
>&2 redirects stdout to stderr.
sort represents the end of your pipeline (normal processing of the unchanged input).
Putting it all together, bash will
run cat ~/.bashrc, putting the contents of ~/.bashrc on stdout
... which is piped to the stdin of tee
run cat -n with stdout redirected to stderr and stdin redirected to a new pipe
run tee /dev/fd/63 (where /dev/fd/63 represents the other end of the cat -n pipe)
this is where it all comes together: tee reads its input and writes it to both its stdout and to the other pipe that goes to cat -n (and from there to stderr)
finally tee's stdout goes into sort
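Applied to the example in the question, where the idea is to see the data itself on stderr while it continues down the pipe unchanged, a minimal sketch (echo and the final cat stand in for your real pipeline stages) would be:
echo "before.." | tee >( cat >&2 ) | cat
"before.." then shows up once on stdout (tee's output, passed on to the final cat) and once on stderr (the copy written by the substituted cat >&2).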
Redirections follow the simple command they refer to, thus
echo "before" >&1
echo "some logging..." >&2
should do the trick, if I understand what you're trying to do.
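If the logging has to happen in the middle of a pipeline, one way to combine this with a pass-through of the data is a group command - a sketch, assuming the data should flow through unchanged:
echo "before.." | { echo "some logging..." >&2; cat; } | cat
Here the group's echo writes "some logging..." to stderr, and the cat inside the group copies the piped-in "before.." to stdout for the next stage.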
Related
I typically use tee to receive piped output data, echo it to standard output, and forward it to the actual intended recipient of the piped data. But sometimes this fails, and I cannot quite understand why.
I'll try to demonstrate with a series of examples:
$ echo testing with this string | tee
testing with this string
So data just echoed to tee without arguments is replicated/printed on the terminal/stdout. Note that it must be tee printing the output, since echo's output is now "piped"/redirected and therefore no longer present on stdout (the same thing happens here:
$ echo aa | echo bb
bb
... i.e. echo aa's output got redirected to the next command which, being echo bb, does not care about its input and outputs just its own output.)
$ echo testing with this string | tee | python3 -c 'a=1'
$
Now here, piping data into tee without arguments, and then piping from tee into a program that produces no output of its own, prints nothing. I would have expected tee to duplicate to stdout here and then forward to the next command in the pipeline, but apparently that does not happen.
$ echo testing with this string | tee /dev/stdout
testing with this string
testing with this string
Right, so if we pipe to tee with the command line argument /dev/stdout, we get the printout twice - and as concluded earlier, it must be tee that produces both printed lines. That means that, when used without an argument, | tee does not open any file for duplicating and simply forwards what it receives on its input to its output; and since it is last in the pipeline, its output is stdout in that case, so we get a single printout.
Here we get double printout, because
tee duplicated its input stream to /dev/stdout due to the argument (which ends up as the first printout); and then
forwarded the same input to its own output, which, tee again being last in the pipeline, is stdout - resulting in the second printout.
This would also explain why the previous ...| tee | python3 -c 'a=1' did not print anything: tee without arguments did not open any file for duplication and merely forwarded its input to the next command in the pipeline - and as that one does not print any output either, no output is generated whatsoever.
Well, if the above understanding is correct, then this:
$ echo testing with this string | tee /dev/stdout | python3 -c 'a=1'
$
... should print at least one line (from tee copying to /dev/stdout; the "forwarded" part will end up being "gulped" by the final command as it prints nothing), but it does not.
So, why does this happen - where am I going wrong in my understanding of what tee does?
And how can I use tee, to print to stdout, also when its output is forwarded to a command that doesn't print anything to stdout on its own?
You aren't misunderstanding tee, you're misunderstanding what stdout is. In a pipe, like echo testing | tee | python3 -c 'a=1', the tee command's stdout is not the terminal, it's the pipe going to the python command (and the echo command's stdout is the pipe going to tee).
So tee /dev/stdout sends two copies of its input (on stdin) to the exact same place: its stdout, whether that's the terminal, or a pipe, or whatever.
If you want to send a copy of the input to tee someplace other than down the pipe, you need to send it somewhere other than stdout. Where that is depends on where you actually want to send it (i.e. why you want to copy it). If you specifically want to send it to the terminal, you could do this:
echo testing | tee /dev/tty | python3 -c 'a=1'
...while if you want to send it to the outer context's stdout (which might or might not be a terminal), you can duplicate the outer context's stdout onto a different file descriptor (#3 is handy for this), and then have tee write a copy to that:
{ echo testing | tee /dev/fd/3 | python3 -c 'a=1'; } 3>&1
Yet another option is to redirect it to stderr (aka FD #2, which is also the terminal by default, but redirectable separately from stdout) with tee /dev/fd/2.
Note that the various /dev entries I'm using here are supported by most unixish OSes, but they aren't universal. Check to see what your specific OS provides.
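For example, the stderr variant behaves like this (a sketch, assuming your OS provides /dev/fd/2):
$ echo testing | tee /dev/fd/2 | python3 -c 'a=1'
testing
The visible line is the copy tee wrote to stderr; the copy sent down the pipe goes to the python command, which never prints anything.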
I think I got it, but I am not sure if it is correct. I saw this: 19.8. Forgetting That Pipelines Make Subshells - bash Cookbook [Book].
So, if pipelines make subshells, then
echo testing with this string | tee /dev/stdout | python3 -c 'a=1'
... is conceptually equal to:
echo testing with this string | (tee /dev/stdout | (python3 -c 'a=1'))
Note that the second pipe | redirects stdout of the subshell tee runs in, and as /dev/stdout is just an interface to stdout, it is redirected too, so we get nothing printed.
So, while stdout (and /dev/stdout) is local to the (sub)shell, /dev/tty always refers to the controlling terminal - and therefore the following:
$ echo testing with this string | tee /dev/tty | python3 -c 'a=1'
testing with this string
... in fact prints a line, as expected.
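On Linux (where /dev/stdout resolves through /proc) you can also confirm that both copies made by tee /dev/stdout travel down the pipe rather than to the terminal, by counting the bytes that reach the next command - a sketch, with the input line being 25 bytes including the newline:
$ echo testing with this string | tee /dev/stdout | wc -c
50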
In an answer to a question about piping and redirection, robert mentions that piping also captures the stdout of substituted processes in the pipeline, whilst redirection doesn't. Why is this so? What exactly is going on, that results in this behavior:
bash-4.1$ echo -e '1\n2' | tee >(head -n1) >redirect
1
bash-4.1$ cat redirect
1
2
bash-4.1$ echo -e '1\n2' | tee >(head -n1) | cat >pipe
bash-4.1$ cat pipe
1
2
1
I would've thought that both forms would produce the same result -- the latter one.
Reading an answer to a different question, it seemed plausible that reordering the redirect in the command might produce the desired result, but no matter the order, the result is always the same:
bash-4.1$ echo -e '1\n2' | tee >redirect >(head -n1)
1
bash-4.1$ cat redirect
1
2
bash-4.1$ echo -e '1\n2' | >redirect tee >(head -n1)
1
bash-4.1$ cat redirect
1
2
Why does the stdout redirect only affect tee, but pipe captures the substituted process head as well? Simply "By design"?
Just a thought related to the above question: I thought that redirecting to a file and piping the output would never make sense, but it does make sense with process substitution:
bash-4.1$ echo -e '1\n2\n3' | tee >(head -n1) >(tail -n1) >tee_out | cat >subst_out
bash-4.1$ cat tee_out
1
2
3
bash-4.1$ cat subst_out
1
3
The shell that runs head is spawned by the same shell that runs tee, which means tee and head both inherit the same file descriptor for standard output, and that file descriptor is connected to the pipe to cat. That means both tee and head have their output piped to cat, resulting in the behavior you see.
For
echo -e '1\n2' | tee >(head -n1) > redirect
after the |, only tee's stdout is redirected to the file, while head still writes to the tty. To redirect both tee's and head's stdout you can write
echo -e '1\n2' | { tee >(head -n1); } > redirect
or
{ echo -e '1\n2' | tee >(head -n1); } > redirect
For
echo -e '1\n2' | tee >(head -n1) | cat > pipe
the stdout of tee >(head -n1) as a whole is piped to cat. It's logically the same as echo -e '1\n2' | { tee >(head -n1); } | cat > pipe.
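To see the braced redirect forms above in action, a quick sketch (the relative order of the lines in redirect can vary, since head runs asynchronously):
$ echo -e '1\n2' | { tee >(head -n1); } > redirect
$ cat redirect
1
2
1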
TL;DR: When executing part of a pipeline, the shell performs pipe-redirection of stdin/stdout first and >/< redirection last. Process substitution happens in between those two, so the pipe-redirection of stdin/stdout is inherited, whilst the >/< redirection is not. It's a design decision.
To be fair, I accepted chepner's answer because he was first and he was correct. However, I decided to add my own answer to document my process of understanding this issue by reading bash's sources, as chepner's answer doesn't explain why the >/< redirection isn't inherited.
It is helpful to understand the steps involved (grossly simplified) when a complex pipeline is encountered by the shell. I have simplified my original problem to this example:
$ echo x >(echo y) >file
y
$ cat file
x /dev/fd/63
$ echo x >(echo y) | cat >file
$ cat file
x /dev/fd/63
y
Redirection-only
When the shell encounters echo x >(echo y) >file, it first forks once to execute the complex command (this can be avoided for some cases, like builtins), and then the forked shell:
creates a pipe (for process substitution)
forks for second echo
fork: connects its stdin to pipe[0] (the read end)
fork: exec's echo y; the exec'ed echo inherits:
stdin connected to pipe[0]
unchanged stdout
opens file
connects its stdout to file
exec's echo x /proc/<pid>/fd/<pipe id>; the exec'ed echo inherits:
stdin unchanged
stdout connected to file
Here, the second echo inherits the stdout of the forked shell, before that forked shell redirects its stdout to file. I see no absolute necessity for this order of actions in this context, but I assume it makes more sense this way.
Pipe-Redirect
When the shell encounters echo x >(echo y) | cat >file, it detects a pipeline and starts processing it (without forking):
parent: creates a pipe (corresponding to the only actual | in the full command)
parent: forks for left side of pipe
fork1: connects its stdout to pipe[1] (the write end)
fork1: creates a pipe_subst (for process substitution)
fork1: forks for second echo
nested-fork: connects its stdin to pipe_subst[0]
nested-fork: exec's echo y; the exec'ed echo inherits:
stdin connected to pipe_subst[0] from the inner fork
stdout connected to pipe[1] from the outer fork
fork1: exec's echo x /proc/<pid>/fd/<pipe_subst id>; the exec'ed echo inherits:
stdin unchanged
stdout connected to pipe[1]
parent: forks for right side of pipe (this fork, again, can sometimes be avoided)
fork2: connects its stdin to pipe[0]
fork2: opens file
fork2: connects its stdout to file
fork2: exec's cat; the exec'ed cat inherits:
stdin connected to pipe[0]
stdout connected to file
Here, the pipe takes precedence, i.e. redirection of stdin/stdout due to the pipe is performed before any other actions take place in executing the pipeline elements. Thus both echo commands inherit a stdout that is connected to the pipe to cat.
All of this is really a design-consequence of >file redirection being handled after process substitution. If >file redirection were handled before that (like pipe redirection is), then >file would also have been inherited by the substituted processes.
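A Linux-specific way to watch this from the inside is to have the substituted process report where its own stdout points (a sketch using readlink on /proc; the exact device name, fd number and pipe inode will differ, and the order of lines in file can vary because the substituted process runs asynchronously):
$ echo x >(readlink /proc/self/fd/1) > file
/dev/pts/0
$ cat file
x /dev/fd/63
$ echo x >(readlink /proc/self/fd/1) | cat > file
$ cat file
x /dev/fd/63
pipe:[123456]
With the plain > file redirection the substituted readlink still sees the terminal as its stdout; with the pipe to cat it sees the pipe, which is exactly the inheritance described above.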
I know variations of this question have been asked and answered several times before, but I'm either misunderstanding the solutions, or am trying to do something eccentric. My instinct is that it shouldn't require tee but maybe I'm completely wrong...
Given a command like this:
echo "hello"
I want to send it to STDERR so that it can be logged/seen on the console, and so that it can be sent to another command. For example, if I run:
echo "hello" SOLUTION>&2 > myfile.txt
(SOLUTION> being whatever the answer to my problem is)
I want:
hello to be shown in the console like any other STDERR message
The file myfile.txt to contain hello
There's no need to redirect it to stderr. Just use tee to send it to the file while also sending to stdout, which will go to the terminal.
echo "hello" | tee myfile.txt
If you want to pipe the output to another command without writing it to a file, then you could use
echo "hello" | tee /dev/stderr | other_command
You could also write a shell function that does the equivalent of tee /dev/stderr:
$ tee_to_stderr() {
    while read -r line; do
        printf "%s\n" "$line"
        printf "%s\n" "$line" >&2
    done
}
$ echo "hello" | tee_to_stderr | wc
hello
1 1 6
This doesn't work well with binary output, but since you intend to use this to display text on the terminal that shouldn't be a concern.
tee copies stdin to the files on its command line, and also to stdout.
echo hello | tee myfile.txt >&2
This will save hello in myfile.txt and also print it to stderr.
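If you want all three at once - the line in myfile.txt, on stderr for the console, and piped on to another command - tee accepts several files, so a sketch (assuming your system provides /dev/stderr; other_command is a placeholder) would be:
echo "hello" | tee myfile.txt /dev/stderr | other_command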
I want to write to only stderr using /dev/tty. If I write directly to /dev/tty (with tee), it seems like that will print out on stdout. Is that correct? How can I specify that I want to print to stderr?
Currently the line in bash looks like
echo "foo" >&2 | tee /dev/tty | logger -it "my_script"
If we split your command apart, with the effect of each part noted after the #:
echo "foo" >&2 # echo "foo" and redirect to fd 2 (/dev/sdterr)
| #pipe stdout to
tee /dev/tty #both send stdout to file /dev/tty, which is terminal file that can output both stdout and stderr depending on what you pass to it (so you probably want /dev/stdout/ or /dev/stderr directly instead) and pass it along to the next pipe
| #pipe stdout to
logger -it "my_script"
So it depends on what you want to do (in the above, foo goes to stderr and nothing gets piped to tee).
If you want to print foo to stderr and also pass it on stdout to your script, you can just do
echo "foo" | tee /dev/stderr | yourscript
Then tee will print foo to stderr, and foo will also be piped on stdout to yourscript.
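Applied to the pipeline in the question, a sketch (assuming /dev/stderr is available) that shows foo on the terminal via stderr while still feeding logger would be:
echo "foo" | tee /dev/stderr | logger -it "my_script"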
I have a bash script that spawns processes on two different machines over ssh and then cats the output of one into a text file. How can I have the output ALSO displayed in the terminal as it's running?
Look at the tee utility (man tee).
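A minimal sketch of how that could look here (the host and command are placeholders for your ssh invocation):
ssh user@host1 'some_command' | tee output.txt
tee writes the stream to output.txt while also passing it to its stdout, so you see it on the terminal as it arrives.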
The tee command is great when you want to save a stream to a file and continue processing it. However, if you want to send stdout to two separate programs, you can use a while read loop and echo the output to stdout and stderr and then stream stdout to one program and stderr to another.
echo input |
while read foo; do
echo "$foo"
echo "$foo" >&2
done 2> >( command1 1>&2 ) | command2
Here is a demo where the string "input" is prepended with a number to show where the outputs are going, and then sent as input to two perl programs that simply prepend the stream name.
echo input |
while read foo; do
echo "1: $foo"
echo "2: $foo" >&2
done 2> >( perl -wpe 's//STDERR: /;' 1>&2) | perl -wpe 's//STDOUT: /;'
output is
STDERR: 2: input
STDOUT: 1: input
Caveat - the while/read/echo approach may not preserve whitespace, backslashes, or binary data, and long lines will cause problems. As with many things, bash may not be the best solution. Here is a perl solution for anything but really huge files:
echo input |
perl -wne 'print STDERR; print;' 2> >( command1 >&2) | command2
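Reusing the two labelled perl filters from the demo above, the one-liner behaves the same way:
echo input |
perl -wne 'print STDERR; print;' 2> >( perl -wpe 's//STDERR: /;' 1>&2 ) | perl -wpe 's//STDOUT: /;'
which prints
STDERR: input
STDOUT: input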