pipe bash output to two different streams

I have a bash script that spawns processes on two different machines over ssh and then cats the output of one into a text file. How can I have the output ALSO displayed in the terminal as it's running?

Look at the tee utility (man tee).
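For the scenario in the question, that could look like the following (host, command, and file names are illustrative):
ssh user@host1 'some_command' | tee output.txt
tee writes each line to output.txt and also passes it through to the terminal as it arrives.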

The tee command is great when you want to save a stream to a file and continue processing it. However, if you want to send stdout to two separate programs, you can use a while read loop that echoes each line to both stdout and stderr, then stream stdout to one program and stderr to the other.
echo input |
while read foo; do
    echo "$foo"
    echo "$foo" >&2
done 2> >( command1 1>&2 ) | command2
Here is a demo where the string "input" is prepended with a number to show where the outputs are going, and then sent as input to two perl programs that simply prepend the stream name.
echo input |
while read foo; do
    echo "1: $foo"
    echo "2: $foo" >&2
done 2> >( perl -wpe 's//STDERR: /;' 1>&2 ) | perl -wpe 's//STDOUT: /;'
The output is:
STDERR: 2: input
STDOUT: 1: input
Caveat: the while/read/echo approach may not preserve line endings or binary data, and long lines will cause problems. As with many things, bash may not be the best solution. Here is a perl solution for anything but really huge files:
echo input |
perl -wne 'print STDERR; print;' 2> >( command1 >&2) | command2
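For example, reusing the two perl labellers from the demo above in place of command1 and command2:
echo input |
perl -wne 'print STDERR; print;' 2> >( perl -wpe 's//STDERR: /;' 1>&2 ) | perl -wpe 's//STDOUT: /;'
This prints STDERR: input and STDOUT: input, just like the while/read version.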

Related

How could I pipe both stderr and stdout to the same file, coloring stderr red but not stdout?

Bash supports color escape sequences, e.g. \033[31m switches to red and \033[0m switches back to uncolored.
I would like to make a small bash-wrapper that reliably puts out stderr in red, i.e. it should put \033[31m before and \033[0m after everything that comes from stderr.
I'm not sure that this is even possible, because when two parallel processes (or even a single process) write to both stdout and stderr, there would have to be a way to distinguish the two streams on a character-by-character basis.
Colorizing text is simple enough: read each line and echo it with the appropriate escape sequences at the beginning and end. But colorizing standard error gets tricky, because a pipe only carries standard output; standard error never enters it.
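For reference, the simple single-stream colorizer mentioned above might look like this (a sketch; the function name is made up):
colorize_red() {
    while IFS= read -r line; do
        printf '\033[31m%s\033[0m\n' "$line"
    done
}
The hard part, covered next, is getting only stderr to flow through such a filter.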
Here’s one approach that works by swapping standard error and standard output, then filtering standard output.
Here is our test command:
#!/bin/bash
echo hi
echo 'Error!' 1>&2
And the wrapper script:
#!/bin/bash
( # swap stderr and stdout
    exec 3>&1  # save stdout on fd 3
    exec 1>&2  # point stdout at stderr
    exec 2>&3- # move the saved stdout on fd 3 over to fd 2, closing fd 3
    "${@}"
) | while read -r line; do
    echo -e "\033[31m${line}\033[0m"
done
Then:
$ ./wrapper ./test-command
hi
Error! # <- shows up red
Unfortunately, all output from the wrapper command comes out of stderr, not stdout, so you can’t pipe the output into any further scripts. You can probably get around this by creating a temporary fifo… but hopefully this little wrapper script is enough to meet your needs.
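A minimal sketch of that fifo workaround, assuming the same ./test-command as above: stderr goes through the fifo to a colorizing loop that writes back to stderr, while stdout stays on stdout and remains pipeable.
fifo=$(mktemp -u) && mkfifo "$fifo"
while IFS= read -r line; do
    echo -e "\033[31m${line}\033[0m" >&2
done < "$fifo" &
./test-command 2> "$fifo"  # stdout is untouched here
wait
rm "$fifo"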
Based on andrewdotn's wrapper
Changes:
Puts the stderr output back on stderr
Avoids having echo -e process escape sequences in the line content
Wrapper:
#!/bin/bash
"${#}" 2> >(
while read line; do
echo -ne "\033[31m" 1>&2
echo -n "${line}" 1>&2
echo -e "\033[0m" 1>&2
done
)
Issues:
The output lines end up grouped rather than interleaved in their original order, because stdout and the colorized stderr travel through separate streams that are flushed independently
Test script:
#!/bin/bash
echo Hi
echo "\033[32mStuff"
echo message
echo error 1>&2
echo message
echo error 1>&2
echo message
echo error 1>&2
Output:
Hi
\033[32mStuff
message
message
message
error # <- shows up red
error # <- shows up red
error # <- shows up red

Prefix for command output

From this question I learned how to add a prefix to each line of output from a command:
command | sed "s/^/[prefix] /"
But this only adds the prefix for each line from stdout.
I successfully used the following to add the prefix also to stderr output.
command 2>&1 | sed "s/^/[prefix] /"
But this sends the result to stdout only.
How can I prefix every line of output from command while keeping the lines on their original streams (preserving both stdout and stderr)?
As a combination of iBug's answer and this and especially this answer, I came up with a one-liner that uses temporary file descriptors:
command 1> >(sed "s/^/[prefix]/") 2> >(sed "s/^/[prefix]/" >&2)
Or as a function:
function prefix_cmd {
    local PREF="${1//\//\\/}" # escape any / in the prefix as \/ for sed
    shift
    local CMD=("$@")
    "${CMD[@]}" 1> >(sed "s/^/${PREF}/") 2> >(sed "s/^/${PREF}/" 1>&2)
}
prefix_cmd "prefix" command
prefix_cmd "prefix" command
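A quick way to check that each stream keeps both its prefix and its identity (the bash -c probe and the prefix are illustrative): discarding stdout should leave only the prefixed stderr line.
prefix_cmd "[p] " bash -c 'echo out; echo err >&2' > /dev/null
# prints only:  [p] err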
You can only pipe stdout using the shell pipe syntax. You need two pipes if you want to process stdout and stderr separately. A named pipe may work here.
Here's a sample script that demonstrates the solution
#!/bin/bash
PREF="$1"
shift
NPOUT=pipe.out
NPERR=pipe.err
mkfifo $NPOUT $NPERR
# Make two background sed processes
sed "s/^/$PREF/" <$NPOUT &
sed "s/^/$PREF/" <$NPERR >&2 &
# Run the program
"$#" >$NPOUT 2>$NPERR
rm $NPOUT $NPERR
Usage:
./foo.sh "[prefix] " command -options
It passes its stdin to command and sends command's stdout and stderr to its own stdout and stderr, separately prefixed.
Note that I didn't suppress sed's stderr, which may interfere with the output. You can do so like this:
sed "s/^/$PREF/" <$NPOUT 2>/dev/null &
^^^^^^^^^^^

Why does `>` redirect not capture substituted processes' stdout?

In an answer to a question about piping and redirection, robert mentions that piping also captures the stdout of substituted processes in the pipeline, whilst redirection doesn't. Why is this so? What exactly is going on, that results in this behavior:
bash-4.1$ echo -e '1\n2' | tee >(head -n1) >redirect
1
bash-4.1$ cat redirect
1
2
bash-4.1$ echo -e '1\n2' | tee >(head -n1) | cat >pipe
bash-4.1$ cat pipe
1
2
1
I would've thought that both forms would produce the same result -- the latter one.
Reading an answer to a different question, it seemed plausible that reordering the redirect in the command might produce the desired result, but no matter the order, the result is always the same:
bash-4.1$ echo -e '1\n2' | tee >redirect >(head -n1)
1
bash-4.1$ cat redirect
1
2
bash-4.1$ echo -e '1\n2' | >redirect tee >(head -n1)
1
bash-4.1$ cat redirect
1
2
Why does the stdout redirect only affect tee, while the pipe captures the substituted process head as well? Simply "by design"?
Just a thought related to the above question: I thought that redirecting to a file and piping the output would never make sense, but it does make sense with process substitution:
bash-4.1$ echo -e '1\n2\n3' | tee >(head -n1) >(tail -n1) >tee_out | cat >subst_out
bash-4.1$ cat tee_out
1
2
3
bash-4.1$ cat subst_out
1
3
The shell that runs head is spawned by the same shell that runs tee, which means tee and head both inherit the same file descriptor for standard output, a descriptor that is connected to the pipe to cat. That means both tee and head have their output piped to cat, resulting in the behavior you see.
In
echo -e '1\n2' | tee >(head -n1) > redirect
only tee's stdout after the | is redirected to the file; head still writes to the tty. To redirect both tee's and head's stdout you can write
echo -e '1\n2' | { tee >(head -n1); } > redirect
or
{ echo -e '1\n2' | tee >(head -n1); } > redirect
In
echo -e '1\n2' | tee >(head -n1) | cat > pipe
the stdout of tee >(head -n1) as a whole is piped to cat, which is logically the same as echo -e '1\n2' | { tee >(head -n1); } > redirect.
TL;DR: When executing part of a pipeline, the shell performs pipe-redirection of stdin/stdout first and >/< redirection last. Process substitution happens in between those two, so pipe-redirection of stdin/stdout is inherited, whilst >/< redirection is not. It's a design decision.
To be fair, I accepted chepner's answer because he was first and he was correct. However, I decided to add my own answer to document my process of understanding this issue by reading bash's sources, as chepner's answer doesn't explain why the >/< redirection isn't inherited.
It is helpful to understand the steps involved (grossly simplified), when a complex pipeline is encountered by the shell. I have simplified my original problem to this example:
$ echo x >(echo y) >file
y
$ cat file
x /dev/fd/63
$ echo x >(echo y) | cat >file
$ cat file
x /dev/fd/63
y
Redirection-only
When the shell encounters echo x >(echo y) >file, it first forks once to execute the complex command (this can be avoided for some cases, like builtins), and then the forked shell:
creates a pipe (for process substitution)
forks for the second echo
    fork: connects its stdin to pipe[1]
    fork: exec's echo y; the exec'ed echo inherits:
        stdin connected to pipe[1]
        unchanged stdout
opens file
connects its stdout to file
exec's echo x /proc/<pid>/fd/<pipe id>; the exec'ed echo inherits:
    stdin unchanged
    stdout connected to file
Here, the second echo inherits the stdout of the forked shell, before that forked shell redirects its stdout to file. I see no absolute necessity for this order of actions in this context, but I assume it makes more sense this way.
Pipe-Redirect
When the shell encounters echo x >(echo y) | cat >file, it detects a pipeline and starts processing it (without forking):
parent: creates a pipe (corresponding to the only actual | in the full command)
parent: forks for left side of pipe
    fork1: connects its stdout to pipe[0]
    fork1: creates a pipe_subst (for process substitution)
    fork1: forks for second echo
        nested-fork: connects its stdin to pipe_subst[1]
        nested-fork: exec's echo y; the exec'ed echo inherits:
            stdin connected to pipe_subst[1] from the inner fork
            stdout connected to pipe[0] from the outer fork
    fork1: exec's echo x /proc/<pid>/fd/<pipe_subst id>; the exec'ed echo inherits:
        stdin unchanged
        stdout connected to pipe[0]
parent: forks for right side of pipe (this fork, again, can sometimes be avoided)
    fork2: connects its stdin to pipe[1]
    fork2: opens file
    fork2: connects its stdout to file
    fork2: exec's cat; the exec'ed cat inherits:
        stdin connected to pipe[1]
        stdout connected to file
Here, the pipe takes precedence, i.e. redirection of stdin/stdout due to the pipe is performed before any other actions take place in executing the pipeline elements. Thus both echo commands inherit the stdout connected to cat.
All of this is really a design-consequence of >file redirection being handled after process substitution. If >file redirection were handled before that (like pipe redirection is), then >file would also have been inherited by the substituted processes.
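As a sanity check, forcing the >file redirection onto a command group makes the substituted process inherit it too, consistent with the workaround shown above (the order of the two lines in the file may vary):
$ { echo x >(echo y); } >file
$ cat file
x /dev/fd/63
y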

Bash: How to direct output to both stderr and to stdout, to pipe into another command?

I know variations of this question have been asked and answered several times before, but I'm either misunderstanding the solutions, or am trying to do something eccentric. My instinct is that it shouldn't require tee but maybe I'm completely wrong...
Given a command like this:
echo "hello"
I want to send it to STDERR so that it can be logged/seen on the console, and so that it can be sent to another command. For example, if I run:
echo "hello" SOLUTION>&2 > myfile.txt
(SOLUTION> being whatever the answer to my problem is)
I want:
hello to be shown in the console like any other STDERR message
The file myfile.txt to contain hello
There's no need to redirect it to stderr. Just use tee to send it to the file while also sending to stdout, which will go to the terminal.
echo "hello" | tee myfile.txt
If you want to pipe the output to another command without writing it to a file, then you could use
echo "hello" | tee /dev/stderr | other_command
You could also write a shell function that does the equivalent of tee /dev/stderr:
$ tee_to_stderr() {
    while read -r line; do
        printf "%s\n" "$line"
        printf "%s\n" "$line" >&2
    done
}
$ echo "hello" | tee_to_stderr | wc
hello
1 1 6
This doesn't work well with binary output, but since you intend to use this to display text on the terminal that shouldn't be a concern.
tee copies stdin to the files on its command line, and also to stdout.
echo hello | tee myfile.txt >&2
This will save hello in myfile.txt and also print it to stderr.

forward stdin to stdout

I'm looking for a way to "forward" stdin to stdout in a pipe, while in that step something is written to stderr. The example should clarify this:
echo "before.." | >&2 echo "some logging..."; [[forward stdin>stdout]] | cat
This should put "before.." to stdout, meanwhile "some logging..." to stderr.
How to do that? Or is there maybe another quite different approach to this?
Here's a solution based on your comments:
cat ~/.bashrc | tee >( cat -n >&2 ) | sort
cat ~/.bashrc represents the start of your pipeline, producing some data.
tee duplicates its input, writing to both stdout and any files listed as arguments.
>( ... ) is a bash construct that runs ... as a pipe subcommand but replaces itself by a filename (something that tee can open and write to).
cat -n represents modifying the input (adding line numbers).
>&2 redirects stdout to stderr.
sort represents the end of your pipeline (normal processing of the unchanged input).
Putting it all together, bash will
run cat ~/.bashrc, putting the contents of ~/.bashrc on stdout
... which is piped to the stdin of tee
run cat -n with stdout redirected to stderr and stdin redirected to a new pipe
run tee /dev/fd/63 (where /dev/fd/63 represents the other end of the cat -n pipe)
this is where it all comes together: tee reads its input and writes it to both its stdout and to the other pipe that goes to cat -n (and from there to stderr)
finally tee's stdout goes into sort
Redirections follow the simple command they refer to, thus
echo "before" >&1
echo "some logging..." >&2
should do the trick, if I understand what you're trying to do.
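Applied to the original pipeline, a minimal sketch of a middle stage that first logs to stderr and then forwards stdin to stdout unchanged (cat does the forwarding):
echo "before.." | { echo "some logging..." >&2; cat; } | cat
Here "before.." arrives on stdout at the end of the pipeline, while "some logging..." goes to stderr.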
