piping stderr and stdout separately - bash

I'd like to do different things to the stdout and stderr of a particular command. Something like
cmd |1 stdout_1 | stdout_2 |2 stderr_1 | stderr_2
where stdout_x is a command specifically for stdout and stderr_x is specifically for stderr. It's okay if stderr from every command gets piped into my stderr commands, but it's even better if the stderr could be strictly from cmd. I've been searching for some syntax that may support this, but I can't seem to find anything.

You can make use of a different file descriptor:
{ cmd 2>&3 | stdout_1; } 3>&1 1>&2 | stderr_1
Example:
{ { echo 'out'; echo >&2 'error'; } 2>&3 | awk '{print "stdout: " $0}'; } 3>&1 1>&2 |
awk '{print "stderr: " $0}'
stderr: error
stdout: out
Or else use process substitution:
cmd 2> >(stderr_1) > >(stdout_1)
Example:
{ echo 'out'; echo >&2 'error'; } 2> >(awk '{print "stderr: " $0}') \
> >(awk '{print "stdout: " $0}')
stderr: error
stdout: out
Either way, this lets you pipe stdout and stderr separately from your cmd.

You can use process substitution and redirection to achieve this:
cmd 2> >(stderr_1 | stderr_2) | stdout_1 | stdout_2
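A rough sketch of what that can look like, reusing the toy echo command from the first answer (note the added >&2 inside the substitution; it sends the processed stderr back to stderr, since without it the substitution's output could end up in the stdout pipeline):
{ echo 'out'; echo >&2 'error'; } \
    2> >(awk '{print "err_1: " $0}' | awk '{print "err_2: " $0}' >&2) \
    | awk '{print "out_1: " $0}' | awk '{print "out_2: " $0}'
out_2: out_1: out
err_2: err_1: error
(The relative order of the two lines is not guaranteed.)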

The most straightforward solution would be something like this:
(cmd | gets_stdout) 2>&1 | gets_stderr
The main drawback being that if gets_stdout itself has any output on stdout, that will also go to gets_stderr. If that is a problem, you should use one of anubhava's or Kevin's answers.
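A quick illustration of that drawback, with echo and sed standing in for the real commands:
( { echo out; echo err >&2; } | sed 's/^/stdout handler: /' ) 2>&1 | sed 's/^/stderr handler: /'
stderr handler: err
stderr handler: stdout handler: out
The stdout handler's output also passes through the stderr handler (the two lines may appear in either order).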

Late answer: as with all the previous answers, stderr ends up on stdout at the end (@anubhava's answer was nearly complete).
To pipe stderr and stdout independently (or identically) and keep them on stderr and stdout respectively, we can use file descriptors.
Solution:
{ { cmd | stdout_pipe ; } 2>&1 1>&3 | stderr_pipe; } 3>&1 1>&2
Explanation:
piping in the shell always happens on stdout (file descriptor 1)
we therefore apply the pipe for stdout directly (leaving stderr intact)
then we park stdout in a temporary file descriptor and move stderr to stdout, allowing us to pipe this in the next step
at the end we point our temporary file descriptor back at the original stdout (3>&1) and send the "current" stdout (the piped stderr) back to stderr (1>&2); the order matters here, since fd 3 must be set before fd 1 is changed
Example:
using
cmd = { echo 'out'; echo >&2 'error'; }
stdout_pipe = awk '{print "stdout: " $0}'
stderr_pipe = awk '{print "stderr: " $0}'
{ { { echo 'out'; echo >&2 'error'; } \
    | awk '{print "stdout: " $0}'; } 2>&1 1>&3 \
    | awk '{print "stderr: " $0}'; } 3>&1 1>&2
Note: this example has an extra pair of { } to "combine" both echo commands into a single one
You can see the difference when you redirect the output of a script that uses this, or when you use it in a terminal that colors stderr.
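For instance, wrapping the example in a small (hypothetical) demo function makes the stream separation easy to check:
demo() {
    { { { echo 'out'; echo >&2 'error'; } \
        | awk '{print "stdout: " $0}'; } 2>&1 1>&3 \
        | awk '{print "stderr: " $0}'; } 3>&1 1>&2
}
demo 2>/dev/null    # only the processed stdout survives
stdout: out
demo 1>/dev/null    # only the processed stderr survives
stderr: error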

Related

Bash script - Modify output of command and print into file

I'm trying to get the text output of a specified command, modify it somehow (e.g. add a prefix), and print it into a file (.txt or .log):
LOG_FILE=...
LOG_ERROR_FILE=..
command_name >> ${LOG_FILE} 2>> ${LOG_ERROR_FILE}
I would like to do it in one line: modify what the command returns and print it into the files.
The same goes for the error output and the regular output.
I'm a beginner at bash scripting, so please be understanding.
Create a function to execute commands and capture stderr and stdout in variables.
function execCommand(){
    local command="$@"
    {
        IFS=$'\n' read -r -d '' STDERR;
        IFS=$'\n' read -r -d '' STDOUT;
    } < <((printf '\0%s\0' "$($command)" 1>&2) 2>&1)
}
function testCommand(){
    grep foo bar
    echo "return code $?"
}
execCommand testCommand
echo err: $STDERR
echo out: $STDOUT
execCommand "touch /etc/foo"
echo err: $STDERR
echo out: $STDOUT
execCommand "date"
echo err: $STDERR
echo out: $STDOUT
output
err: grep: bar: No such file or directory
out: return code 2
err: touch: cannot touch '/etc/foo': Permission denied
out:
err:
out: Mon Jan 31 16:29:51 CET 2022
Now you can modify $STDERR & $STDOUT
execCommand testCommand && { echo "$STDERR" > err.log; echo "$STDOUT" > out.log; }
Explanation: Look at the answer from madmurphy
A pipe (|) and/or a redirect (>) is the answer, it seems.
So, as a bogus example to show what I mean: to get all interfaces that the command ip a spits out, you could pipe that to the processing commands and do output redirection into a file.
ip a | awk -F': *' '/^[0-9]/ { print $2 }' > my_file.txt
If you wish to send it to separate processing, you could redirect into a sub-shell:
$ command -V cd curl bogus > >(awk '{print $NF}' > stdout.txt) 2> >(sed 's/.*\s\(\w\+\):/\1/' > stderr.txt)
$ cat stdout.txt
builtin
(/usr/bin/curl)
$ cat stderr.txt
bogus not found
But it might be better for readability to process in a separate step:
$ command -V cd curl bogus >stdout.txt 2>stderr.txt
$ sed -i 's/.*\s//' stdout.txt
$ sed -i 's/.*\s\(\w\+\):/\1/' stderr.txt
$ cat stdout.txt
builtin
(/usr/bin/curl)
$ cat stderr.txt
bogus not found
There are a myriad of ways to do what you ask, and I guess the situation will have to decide what to use, but here's a start.
To modify the output and write it to a file, while modifying the error stream differently and writing it to a different file, you just need to manipulate the file descriptors appropriately, e.g.:
#!/bin/sh
# A command that writes trivial data to both stdout and stderr
cmd() {
    echo 'Hello stdout!'
    echo 'Hello stderr!' >&2
}
# Filter both streams and redirect to different files
{ cmd 2>&1 1>&3 | sed 's/stderr/cruel world/' > "$LOG_ERROR_FILE"; } 3>&1 |
sed 's/stdout/world/' > "$LOG_FILE"
The technique is to redirect the error stream to stdout so it can flow into the pipe (2>&1), and then redirect the output stream to an ancillary file descriptor, which is being redirected into a different pipe.
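If the script above were saved as, say, filter.sh (a hypothetical name) and the two variables were supplied, the result would look like this:
$ LOG_FILE=out.log LOG_ERROR_FILE=err.log sh filter.sh
$ cat out.log
Hello world!
$ cat err.log
Hello cruel world!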
You can clean it up a bit by moving the file redirections into an earlier exec call, e.g.:
#!/bin/sh
cmd() {
    echo 'Hello stdout!'
    echo 'Hello stderr!' >&2
}
exec > "$LOG_FILE"
exec 2> "$LOG_ERROR_FILE"
# Filter both streams and redirect to different files
{ cmd 2>&1 1>&3 | sed 's/stderr/cruel world/' >&2; } 3>&1 | sed 's/stdout/world/'

bash: print some information to stdout and pipe other output from inside a loop

How to print output from a loop which is piped to some other command:
for f in "${!myList[@]}"; do
    echo $f > /dev/stdout # echoed to stdout, how to?
    unzip -qqc $f # piped to awk script
done | awk -f script.awk
You can use /dev/stderr or the second file descriptor:
echo something >&2 | grep nothing
echo something >/dev/stderr | grep nothing
You can use another file descriptor that will be connected to stdout:
# for a single command group
{ echo something >&3 | grep nothing; } 3>&1
# or for everywhere
exec 3>&1
echo something >&3 | grep nothing
# same as above with named file descriptor
exec {LOG}>&1
echo 123 >&$LOG | grep nothing
You can also redirect the output to the current controlling terminal /dev/tty (if there is one):
echo something >/dev/tty | grep nothing
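Applied back to the loop from the question, the fd 3 variant might look like this (a sketch assuming myList and script.awk from the question):
exec 3>&1                     # fd 3 points at the original stdout
for f in "${!myList[@]}"; do
    echo "$f" >&3             # goes to the original stdout
    unzip -qqc "$f"           # goes into the pipe to the awk script
done | awk -f script.awk
exec 3>&-                     # close fd 3 again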

How to pipe stdout and stderr to separate commands in a bash script?

I have a bash script executing a long run command. I want to prefix each line printed by the command to stdout with $stdprefix and each line printed to stderr with $errprefix.
I don't want to store output to variables or even worse to files, because I'd have to wait until the command finishes execution to see the output.
You can use:
# your prefixes
stdprefix="stdout: "
errprefix="stderr: "
# sample command to produce output and error
cmd() { echo 'output'; echo >&2 'error'; }
Now to redirect stdout and stderr independently:
{ cmd 2>&3 | awk -v p="$stdprefix" '{print p $0}'; } 3>&1 1>&2 |
awk -v p="$errprefix" '{print p $0}'
stderr: error
stdout: output
Just replace cmd with your long running command.
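For example, with a (hypothetical) long-running stand-in you can check that the prefixed lines appear as they are produced rather than after the command finishes:
long_cmd() { for i in 1 2 3; do echo "tick $i"; echo "warn $i" >&2; sleep 1; done; }
{ long_cmd 2>&3 | awk -v p="$stdprefix" '{print p $0}'; } 3>&1 1>&2 |
    awk -v p="$errprefix" '{print p $0}'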

tee stdout and stderr to separate files while retaining them on their respective streams

I'm trying to write a script that essentially acts as a passthrough log of all the output created by a (non-interactive) command, without affecting the output of the command to other processes. That is to say, stdout and stderr should appear the same as if they had not run through my command.
To do this, I'm trying to redirect stdout and stderr separately to two different tees, each for a different file, and then recombine them so that they still appear on stdout and stderr, respectively. I have seen a lot of other questions about teeing and redirecting and have tried some of the answers gleaned from those, but none of them seem to manage both splitting the stream into separate tees and then recombining the streams correctly.
My attempts are successfully splitting the output into the right files, but the streams are not correctly retained for actual stdout/stderr output. I see this in a more complicated setting, so I created simplified commands where I echoed data to stdout or stderr as my "command" as shown below.
Here are a couple of things that I have tried:
{ command | tee ~/tee.txt; } 2>&1 | { tee ~/tee2.txt 1>&2; }
Running my simple test I see:
$ { { { echo "test" 1>&2; } | tee ~/tee.txt; } 2>&1 | { tee ~/tee2.txt 1>&2; } } > /dev/null
test
$ { { { echo "test" 1>&2; } | tee ~/tee.txt; } 2>&1 | { tee ~/tee2.txt 1>&2; } } 2> /dev/null
$
Ok, this is as I expect. I am echoing to stderr, so I expect to see nothing when I redirect the final stderr to /dev/null and my original echo when I only redirect stdout.
$ { { { echo "test"; } | tee ~/tee.txt; } 2>&1 | { tee ~/tee2.txt 1>&2; } } > /dev/null
test
$ { { { echo "test"; } | tee ~/tee.txt; } 2>&1 | { tee ~/tee2.txt 1>&2; } } 2> /dev/null
$
This is backwards! My command sends only data to stdout, so I would expect to see nothing when I redirect the final stdout to null. But the reverse is true.
Here is the second command I tried, it is a bit more complicated:
{ command 2>&3 | tee ~/tee.txt; } 3>&1 1>&2 | { tee /home/michael/tee2.txt 1>&2; }
Unfortunately, I see the same identical behavior as before.
I can't really see what I am doing wrong, but it appears that stdout is getting clobbered somehow. In the case of the first command, I suspect that this is because I am combining stdout and stderr (2>&1) before I pipe it to the second tee, but if this were the case I would expect to see both stdout and stderr in the tee2.txt file, which I don't - I only see stderr! In the case of the second command, my impression from reading the answer I adapted for this command is that descriptors are getting swapped around so as to avoid this problem, but obviously something is still going wrong.
Edit: I had another thought, that maybe the second command is failing because I am redirecting 1>&2 and that is killing stdout from the first tee. So I tried to redirecting it with 1>&4 and then redirecting that back to stdout at the end:
{ command 2>&3 | tee ~/tee.txt; } 3>&1 1>&4 | { tee /home/michael/tee2.txt 1>&2 4>&1; }
But now I get:
-bash: 4: Bad file descriptor
I also tried redirecting descriptor 2 back to descriptor 1 in the final tee:
{ command 2>&3 | tee ~/tee.txt; } 3>&1 1>&2 | { tee /home/michael/tee2.txt 1>&2 2>&1; }
and:
{ command 2>&3 | tee ~/tee.txt; } 3>&1 1>&2 | { tee /home/michael/tee2.txt 1>&2; } 2>&1
A process-substitution-based solution is simple, although not as simple as you might think. My first attempt seemed like it should work:
{ echo stdout; echo stderr >&2; } > >( tee ~/stdout.txt ) \
2> >( tee ~/stderr.txt )
However, it doesn't quite work as intended in bash because the second tee inherits its standard output from the original command (and hence it goes to the first tee) rather than from the calling shell. It's not clear if this should be considered a bug in bash.
It can be fixed by separating the output redirections into two separate commands:
{ { echo stdout; echo stderr >&2; } > >(tee stdout.txt ); } \
2> >(tee stderr.txt )
Update: the second tee should actually be tee stderr.txt >&2 so that what was read from standard error is printed back onto standard error.
Now, the redirection of standard error occurs in a command which does not have its standard output redirected, so it works in the intended fashion. The outer compound command has its standard error redirected to the outer tee, with its standard output left on the terminal. The inner compound command inherits its standard error from the outer (and so it also goes to the outer tee), while its standard output is redirected to the inner tee.
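Putting the update together, the corrected version would read (a restatement of the fix described above):
{ { echo stdout; echo stderr >&2; } > >(tee stdout.txt); } \
    2> >(tee stderr.txt >&2)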

pipe stdout and stderr to two different processes in shell script?

I have a pipeline doing just
command1 | command2
So, stdout of command1 goes to command2, while stderr of command1 goes to the terminal (or wherever the stdout of the shell is).
How can I pipe stderr of command1 to a third process (command3) while stdout is still going to command2 ?
Use another file descriptor
{ command1 2>&3 | command2; } 3>&1 1>&2 | command3
You can use up to 7 other file descriptors: from 3 to 9.
If you want more explanation, please ask, I can explain ;-)
Test
{ { echo a; echo >&2 b; } 2>&3 | sed >&2 's/$/1/'; } 3>&1 1>&2 | sed 's/$/2/'
output:
b2
a1
Example
Produce two log files:
1. stderr only
2. stderr and stdout
{ { { command 2>&1 1>&3; } | tee err-only.log; } 3>&1; } > err-and-stdout.log
If command is echo "stdout"; echo "stderr" >&2 then we can test it like that:
$ { { { echo out>&3;echo err>&1;}| tee err-only.log;} 3>&1;} > err-and-stdout.log
$ head err-only.log err-and-stdout.log
==> err-only.log <==
err
==> err-and-stdout.log <==
out
err
The accepted answer results in the reversing of stdout and stderr. Here's a method that preserves them (since Googling for that purpose brings up this post):
{ command 2>&1 1>&3 3>&- | stderr_command; } 3>&1 1>&2 | stdout_command
Notice:
3>&- is required to prevent fd 3 from being inherited by command. (As this can lead to unexpected results depending on what command does inside.)
Parts explained:
Outer part first:
3>&1 -- fd 3 for { ... } is set to what fd 1 was (i.e. stdout)
1>&2 -- fd 1 for { ... } is set to what fd 2 was (i.e. stderr)
| stdout_command -- fd 1 (was stdout) is piped through stdout_command
Inner part inherits file descriptors from the outer part:
2>&1 -- fd 2 for command is set to what fd 1 was (i.e. stderr as per outer part)
1>&3 -- fd 1 for command is set to what fd 3 was (i.e. stdout as per outer part)
3>&- -- fd 3 for command is set to nothing (i.e. closed)
| stderr_command -- fd 1 (was stderr) is piped through stderr_command
Example:
foo() {
    echo a
    echo b >&2
    echo c
    echo d >&2
}
{ foo 2>&1 1>&3 3>&- | sed -u 's/^/err: /'; } 3>&1 1>&2 | sed -u 's/^/out: /'
Output:
out: a
err: b
err: d
out: c
(Order of a -> c and b -> d will always be indeterminate because there's no form of synchronization between stderr_command and stdout_command.)
Using process substitution:
command1 > >(command2) 2> >(command3)
See http://tldp.org/LDP/abs/html/process-sub.html for more info.
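A small sketch (with a brace group standing in for command1, and each substitution writing to its own illustrative file so the result is easy to inspect):
{ echo out; echo err >&2; } > >(sed 's/^/stdout: /' > out.txt) 2> >(sed 's/^/stderr: /' > err.txt)
sleep 1   # the substitutions run asynchronously; give them a moment to finish
cat out.txt err.txt
stdout: out
stderr: err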
Simply redirect stderr to stdout
{ command1 | command2; } 2>&1 | command3
Caution: command3 will also read command2's stdout (if any).
To avoid that, you can discard command2's stdout:
{ command1 | command2 >/dev/null; } 2>&1 | command3
However, to keep command2's stdout (e.g. in the terminal),
please refer to my other, more complex answer.
Test
{ { echo -e "a\nb\nc" >&2; echo "----"; } | sed 's/$/1/'; } 2>&1 | sed 's/$/2/'
output:
a2
b2
c2
----12
Pipe stdout as usual, but use Bash process substitution for the stderr redirection:
some_command 2> >(command of stderr) | command of stdout
Note: the script needs a #!/bin/bash header, since process substitution is a bash feature and is not available in plain sh.
Zsh Version
I like the answer posted by @antak, but it doesn't work correctly in zsh due to multios. Here is a small tweak to use it in zsh:
{ unsetopt multios; command 2>&1 1>&3 3>&- | stderr_command; } 3>&1 1>&2 | stdout_command
To use, replace command with the command you want to run, and replace stderr_command and stdout_command with your desired pipelines. For example, the command ls / /foo will produce both stdout output and stderr output, so we can use it as a test case. To save the stdout to a file called stdout and the stderr to a file called stderr, you can do this:
{ unsetopt multios; ls / /foo 2>&1 1>&3 3>&- | cat >stderr; } 3>&1 1>&2 | cat >stdout
See @antak's original answer for full explanation.
The same effect can be accomplished fairly easily with a fifo. I'm not aware of a direct piping syntax for doing it (though it would be nifty to see one). This is how you might do it with a fifo.
First, something that prints to both stdout and stderr, outerr.sh:
#!/bin/bash
echo "This goes to stdout"
echo "This goes to stderr" >&2
Then we can do something like this:
$ mkfifo err
$ wc -c err &
[1] 2546
$ ./outerr.sh 2>err | wc -c
20
20 err
[1]+ Done wc -c err
That way you set up the listener for stderr output first and it blocks until it has a writer, which happens in the next command, using the syntax 2>err. You can see that each wc -c got 20 characters of input.
Don't forget to clean up the fifo after you're done if you don't want it to hang around (i.e. rm). If the other command wants input on stdin and not a file arg, you can use input redirection like wc -c < err too.
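Putting the pieces together with two prefixing commands instead of wc (a sketch assuming outerr.sh from above):
mkfifo err
sed 's/^/stderr: /' < err &
./outerr.sh 2> err | sed 's/^/stdout: /'
wait        # wait for the background reader to finish
rm err      # clean up the fifo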
It's been a long time but...
@oHo's answer has the disadvantage of redirecting command2's output to stderr, while @antak's answer may reverse the order of the outputs.
The solution below should fix these problems by correctly redirecting the outputs and errors of command2 and command3 to stdout and stderr respectively, as expected, and preserving order.
{ { command1 2>&3 | command2; } 3>&1 1>&4 | command3; } 4>&1
Of course, it also satisfies the OP's need to redirect output and errors from command1 to, respectively, command2 and command3.
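As a quick check, reusing the test pattern from the earlier answers (echo as command1, sed as command2 and command3):
{ { { echo a; echo >&2 b; } 2>&3 | sed 's/$/1/'; } 3>&1 1>&4 | sed 's/$/2/'; } 4>&1
a1
b2
(The relative order of the two lines is not guaranteed.)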
