Copy stderr to stdout without using tee - bash

I know there are many similar questions, but none of their scenarios satisfies my requirement.
I have a cron job that backs up MySQL databases. Currently, I redirect stderr to Slack and stdout to syslog like this:
mysql-backup.sh 1> >(logger -it DB_BACKUP) 2> >(push-to-slack.sh)
This way, we are instantly notified about any errors during the backup process, and stdout is kept in syslog; but the stderr lines are missing from syslog.
In short, I need stdout+stderr in syslog (with date, PID, etc.) and stderr piped (or redirected) to push-to-slack.sh.
Solutions without temporary files are expected.

This sends stderr to push-to-slack.sh while sending both stderr and stdout to logger:
{ mysql-backup.sh 2>&1 1>&3 | tee >(push-to-slack.sh); } 3>&1 | logger -it DB_BACKUP
Reproducible Example
Let's create a function that produces both stdout and stderr:
$ fn() { echo out; echo err>&2; }
Now, let's run the analog of our command above:
$ { fn 2>&1 1>&3 | tee err_only; } 3>&1 | cat >both
$ cat err_only
err
$ cat both
out
err
We can see that err_only captured only the stderr while both captured both stdout and stderr.
(Note to nitpickers: yes, I know that the cat above is "useless", but I am keeping the command parallel to the one the OP needs.)
Without using tee
If you really, seriously can't use tee, then we can do something similar using the shell:
{ fn 2>&1 1>&3 | (while read -r line; do echo "$line" >&3; echo "$line"; done >err_only); } 3>&1 | cat >both
Or, using awk:
{ fn 2>&1 1>&3 | awk '{print>"err_only"} 1'; } 3>&1 | cat >both

Related

Bash script - Modify output of command and print into file

I'm trying to get the text output of a specified command, modify it somehow (e.g. add a prefix before each line), and print it into a file (.txt or .log):
LOG_FILE=...
LOG_ERROR_FILE=..
command_name >> ${LOG_FILE} 2>> ${LOG_ERROR_FILE}
I would like to do it in one line: modify what the command returns and print it into the files.
The same treatment should apply to both the error output and the regular output.
I'm a beginner in bash scripting, so please be understanding.
Create a function to execute commands and capture stderr and stdout into variables.
function execCommand(){
    local command="$*"   # all arguments joined as a single string
    {
        IFS=$'\n' read -r -d '' STDERR;
        IFS=$'\n' read -r -d '' STDOUT;
    } < <((printf '\0%s\0' "$($command)" 1>&2) 2>&1)
}
function testCommand(){
    grep foo bar
    echo "return code $?"
}
execCommand testCommand
echo err: $STDERR
echo out: $STDOUT
execCommand "touch /etc/foo"
echo err: $STDERR
echo out: $STDOUT
execCommand "date"
echo err: $STDERR
echo out: $STDOUT
output
err: grep: bar: No such file or directory
out: return code 2
err: touch: cannot touch '/etc/foo': Permission denied
out:
err:
out: Mon Jan 31 16:29:51 CET 2022
Now you can modify $STDERR & $STDOUT
execCommand testCommand && { echo "$STDERR" > err.log; echo "$STDOUT" > out.log; }
Explanation: Look at the answer from madmurphy
A pipe | and/or a redirect > is the answer, it seems.
So, as a bogus example to show what I mean: to get all interfaces that the command ip a spits out, you could pipe that to the processing commands and do output redirection into a file.
ip a | awk -F': *' '/^[0-9]/ { print $2 }' > my_file.txt
If you wish to send it to separate processing, you could redirect into a sub-shell:
$ command -V cd curl bogus > >(awk '{print $NF}' > stdout.txt) 2> >(sed 's/.*\s\(\w\+\):/\1/' > stderr.txt)
$ cat stdout.txt
builtin
(/usr/bin/curl)
$ cat stderr.txt
bogus not found
But it might be better for readability to process in a separate step:
$ command -V cd curl bogus >stdout.txt 2>stderr.txt
$ sed -i 's/.*\s//' stdout.txt
$ sed -i 's/.*\s\(\w\+\):/\1/' stderr.txt
$ cat stdout.txt
builtin
(/usr/bin/curl)
$ cat stderr.txt
bogus not found
There are a myriad of ways to do what you ask and I guess situation will have to decide what to use, but here's a start.
To modify the output and write it to a file, while modifying the error stream differently and writing to a different file, you just need to manipulate the file descriptors appropriately. eg:
#!/bin/sh
# A command that writes trivial data to both stdout and stderr
cmd() {
echo 'Hello stdout!'
echo 'Hello stderr!' >&2
}
# Filter both streams and redirect to different files
{ cmd 2>&1 1>&3 | sed 's/stderr/cruel world/' > "$LOG_ERROR_FILE"; } 3>&1 |
sed 's/stdout/world/' > "$LOG_FILE"
The technique is to redirect the error stream to stdout so it can flow into the pipe (2>&1), and then redirect the output stream to an ancillary file descriptor, which is being redirected into a different pipe.
You can clean it up a bit by moving the file redirections into an earlier exec call. eg:
#!/bin/sh
cmd() {
echo 'Hello stdout!'
echo 'Hello stderr!' >&2
}
exec > "$LOG_FILE"
exec 2> "$LOG_ERROR_FILE"
# Filter both streams and redirect to different files
{ cmd 2>&1 1>&3 | sed 's/stderr/cruel world/' >&2; } 3>&1 | sed 's/stdout/world/'

How to prepend stdout and stderr output with timestamp when redirecting into log files?

In Linux I'm starting a program called $cmd in an init script (SysVInit). I'm already redirecting stdout and stderr of $cmd into two different logfiles called $stdout_log and $stderr_log. Now I also want to add a timestamp in front of every line printed into the logfiles.
I tried to write a function called log_pipe as follows:
log_pipe() {
    while read -r line; do
        echo "[$(date '+%Y-%m-%d %H:%M:%S')] $line"
    done
}
then pipe the output of my script into this function and after that redirect them to the logfiles as follows:
$cmd | log_pipe >> "$stdout_log" 2>> "$stderr_log" &
What I get is an empty $stdout_log (stdout), which should be okay because $cmd normally doesn't print anything, and a $stderr_log file with only timestamps but without the error texts.
Where is my faulty reasoning?
PS: Because the problem exists within an init script I only want to use basic shell commands and no extra packages.
In any POSIX shell, try:
{ cmd | log_pipe >>stdout.log; } 2>&1 | log_pipe >>stderr.log
Also, if you have GNU awk (sometimes called gawk), then log_pipe can be made simpler and faster:
log_pipe() { awk '{print strftime("[%Y-%m-%d %H:%M:%S]"),$0}'; }
Example
As an example, let's create the command cmd:
cmd() { echo "This is out"; echo "This is err">&2; }
Now, let's run our command and look at the output files:
$ { cmd | log_pipe >>stdout.log; } 2>&1 | log_pipe >>stderr.log
$ cat stdout.log
[2019-07-04 23:42:20] This is out
$ cat stderr.log
[2019-07-04 23:42:20] This is err
The problem
cmd | log_pipe >> "$stdout_log" 2>> "$stderr_log"
The above redirects stdout from cmd to log_pipe. The stdout of log_pipe is redirected to $stdout_log and the stderr of log_pipe is redirected to $stderr_log. The problem is that the stderr of cmd is never redirected. It goes straight to the terminal.
As an example, consider this cmd:
cmd() { echo "This is out"; echo "This is err">&2; }
Now, let's run the command:
$ cmd | log_pipe >>stdout.log 2>>stderr.log
This is err
We can see that This is err is not sent to the file stderr.log. Instead, it appears on the terminal. It is never seen by log_pipe. stderr.log only captures error messages from log_pipe.
In Bash, you can also redirect to a subshell using process substitution:
logger.sh
#!/bin/bash
while read -r line; do
echo "[$(date +%Y-%m-%d\ %H:%M:%S)] $line"
done
redirection
cmd > >(logger.sh > stdout.log) 2> >(logger.sh > stderr.log)
This works, but my command has to run in the background because it is within an init script; therefore I have to do:
({ cmd | log_pipe >>stdout.log; } 2>&1 | log_pipe >>stderr.log) &
echo $! > "$pid_file"
right?
But I think in this case the pid in the $pid_file is not the pid of $cmd...
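Right. One way around that (a sketch of my own, not from the thread) is to background $cmd itself inside the group and record $! there, rather than backgrounding the whole pipeline:

```shell
#!/bin/bash
# Sketch: capture the PID of cmd itself, not of the surrounding pipeline.
# pid_file, log_pipe and cmd are stand-ins for the ones in the question.
pid_file=$(mktemp)
out_log=$(mktemp)
log_pipe() { while read -r line; do echo "[ts] $line"; done; }
cmd() { echo "cmd pid: $BASHPID"; }   # prints its own PID so we can verify

{ cmd & echo "$!" > "$pid_file"; wait; } | log_pipe > "$out_log"

cat "$out_log"    # "[ts] cmd pid: <pid>", matching the pid recorded in pid_file
```

The wait keeps the left side of the pipe alive until cmd finishes, and $! right after `cmd &` really is cmd's PID.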

piping stderr and stdout separately

I'd like to do different things to the stdout and stderr of a particular command. Something like
cmd |1 stdout_1 | stdout_2 |2 stderr_1 | stderr_2
where stdout_x is a command specifically for stdout and stderr_x is specifically for stderr. It's okay if stderr from every command gets piped into my stderr commands, but it's even better if the stderr could be strictly from cmd. I've been searching for some syntax that may support this, but I can't seem to find anything.
You can make use of a different file descriptor:
{ cmd 2>&3 | stdout_1; } 3>&1 1>&2 | stderr_1
Example:
{ { echo 'out'; echo >&2 'error'; } 2>&3 | awk '{print "stdout: " $0}'; } 3>&1 1>&2 |
awk '{print "stderr: " $0}'
stderr: error
stdout: out
Or else use process substitution:
cmd 2> >(stderr_1) > >(stdout_1)
Example:
{ echo 'out'; echo >&2 'error'; } 2> >(awk '{print "stderr: " $0}') \
> >(awk '{print "stdout: " $0}')
stderr: error
stdout: out
to pipe stdout and stderr separately from your cmd.
You can use process substitution and redirection to achieve this:
cmd 2> >(stderr_1 | stderr_2) | stdout_1 | stdout_2
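For example, with sed stand-ins of my own for the four commands (the sleep is only there because the substituted process finishes asynchronously):

```shell
#!/bin/bash
# Sketch: sed commands stand in for stderr_1 | stderr_2 and stdout_1 | stdout_2.
fn() { echo out; echo err >&2; }
err_log=$(mktemp)
out_log=$(mktemp)

fn 2> >(sed 's/^/e1: /' | sed 's/$/ [e2]/' > "$err_log") |
    sed 's/^/o1: /' | sed 's/$/ [o2]/' > "$out_log"

sleep 1   # the >(...) process runs asynchronously; give it time to finish
cat "$out_log"   # o1: out [o2]
cat "$err_log"   # e1: err [e2]
```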
The most straightforward solution would be something like this:
(cmd | gets_stdout) 2>&1 | gets_stderr
The main drawback being that if gets_stdout itself has any output on stdout, that will also go to gets_stderr. If that is a problem, you should use one of anubhava's or Kevin's answers.
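The drawback is easy to demonstrate with sed stand-ins for gets_stdout and gets_stderr:

```shell
#!/bin/bash
# Everything gets_stdout writes is merged into the second pipe by 2>&1,
# so both lines come out wearing the "stderr" stage's prefix.
fn() { echo out; echo err >&2; }

result=$( (fn | sed 's/^/stdout: /') 2>&1 | sed 's/^/stderr: /' )
printf '%s\n' "$result"
```

The output contains both "stderr: stdout: out" and "stderr: err" (in indeterminate order), showing that the stdout stage's output was swept into the stderr stage.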
Late answer: as with all the previous answers, stderr ends up in stdout at the end (@anubhava's answer was nearly complete).
To pipe stderr and stdout independently (or identical) and keep them in stdout and stderr, we can use file descriptors.
Solution:
{ { cmd | stdout_pipe ; } 2>&1 1>&3 | stderr_pipe; } 1>&2 3>&1
Explanation:
piping in the shell always happens on stdout (file descriptor 1)
we therefore apply the pipe for stdout directly (leaving stderr intact)
then we park stdout in a temporary file descriptor and move stderr to stdout, allowing us to pipe this in the next step
at the end we get the "current" stdout (piped stderr) back to stderr and stdout back from our temporary file descriptor
Example:
using
cmd = { echo 'out'; echo >&2 'error'; }
stdout_pipe = awk '{print "stdout: " $0}'
stderr_pipe = awk '{print "stderr: " $0}'
{ { { echo 'out'; echo >&2 'error'; } \
| awk '{print "stdout: " $0}'; } 2>&1 1>&3 \
| awk '{print "stderr: " $0}'; } 1>&2 3>&1
Note: this example has an extra pair of { } to "combine" both echo commands into a single one
You see the difference when redirecting the output of a script using this or when using this with a terminal that colors stderr.

tee stdout and stderr to separate files while retaining them on their respective streams

I'm trying to write a script that essentially acts as a pass-through log of all the output created by a (non-interactive) command, without affecting the output of the command to other processes. That is to say, stdout and stderr should appear the same as if they had not run through my command.
To do this, I'm trying to redirect stdout and stderr separately to two different tees, each for a different file, and then recombine them so that they still appear on stdout and stderr, respectively. I have seen a lot of other questions about teeing and redirecting and have tried some of the answers gleaned from those, but none of them seem to work combining both splitting the stream to separate tees and then recombining them correctly.
My attempts are successfully splitting the output into the right files, but the streams are not correctly retained for actual stdout/stderr output. I see this in a more complicated setting, so I created simplified commands where I echoed data to stdout or stderr as my "command" as shown below.
Here are a couple of things that I have tried:
{ command | tee ~/tee.txt; } 2>&1 | { tee ~/tee2.txt 1>&2; }
Running my simple test I see:
$ { { { echo "test" 1>&2; } | tee ~/tee.txt; } 2>&1 | { tee ~/tee2.txt 1>&2; } } > /dev/null
test
$ { { { echo "test" 1>&2; } | tee ~/tee.txt; } 2>&1 | { tee ~/tee2.txt 1>&2; } } 2> /dev/null
$
Ok, this is as I expect. I am echoing to stderr, so I expect to see nothing when I redirect the final stderr to /dev/null and my original echo when I only redirect stdout.
$ { { { echo "test"; } | tee ~/tee.txt; } 2>&1 | { tee ~/tee2.txt 1>&2; } } > /dev/null
test
$ { { { echo "test"; } | tee ~/tee.txt; } 2>&1 | { tee ~/tee2.txt 1>&2; } } 2> /dev/null
$
This is backwards! My command sends only data to stdout, so I would expect to see nothing when I redirect the final stdout to null. But the reverse is true.
Here is the second command I tried, it is a bit more complicated:
{ command 2>&3 | tee ~/tee.txt; } 3>&1 1>&2 | { tee /home/michael/tee2.txt 1>&2; }
Unfortunately, I see the same identical behavior as before.
I can't really see what I am doing wrong, but it appears that stdout is getting clobbered somehow. In the case of the first command, I suspect that this is because I am combining stdout and stderr (2>&1) before I pipe it to the second tee, but if this were the case I would expect to see both stdout and stderr in the tee2.txt file, which I don't - I only see stderr! In the case of the second command, my impression from reading the answer I adapted for this command is that descriptors are getting swapped around so as to avoid this problem, but obviously something is still going wrong.
Edit: I had another thought, that maybe the second command is failing because I am redirecting 1>&2 and that is killing stdout from the first tee. So I tried to redirecting it with 1>&4 and then redirecting that back to stdout at the end:
{ command 2>&3 | tee ~/tee.txt; } 3>&1 1>&4 | { tee /home/michael/tee2.txt 1>&2 4>&1; }
But now I get:
-bash: 4: Bad file descriptor
I also tried redirecting descriptor 2 back to descriptor 1 in the final tee:
{ command 2>&3 | tee ~/tee.txt; } 3>&1 1>&2 | { tee /home/michael/tee2.txt 1>&2 2>&1; }
and:
{ command 2>&3 | tee ~/tee.txt; } 3>&1 1>&2 | { tee /home/michael/tee2.txt 1>&2; } 2>&1
A process-substitution-based solution is simple, although not as simple as you might think. My first attempt seemed like it should work:
{ echo stdout; echo stderr >&2; } > >( tee ~/stdout.txt ) \
2> >( tee ~/stderr.txt )
However, it doesn't quite work as intended in bash because the second tee inherits its standard output from the original command (and hence it goes to the first tee) rather than from the calling shell. It's not clear if this should be considered a bug in bash.
It can be fixed by separating the output redirections into two separate commands:
{ { echo stdout; echo stderr >&2; } > >(tee stdout.txt ); } \
2> >(tee stderr.txt )
Update: the second tee should actually be tee stderr.txt >&2 so that what was read from standard error is printed back onto standard error.
Now, the redirection of standard error occurs in a command which does not have its standard output redirected, so it works in the intended fashion. The outer compound command has its standard error redirected to the outer tee, with its standard output left on the terminal. The inner compound command inherits its standard error from the outer one (so it also goes to the outer tee), while its standard output is redirected to the inner tee.
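Putting the update into the command, the full corrected version looks like this (a sketch; the sleep only compensates for the asynchronous process substitutions):

```shell
#!/bin/bash
# Corrected version: each stream is tee'd to a file and stays on its
# original stream.
out_copy=$(mktemp)
err_copy=$(mktemp)

{ { echo stdout; echo stderr >&2; } > >(tee "$out_copy"); } \
    2> >(tee "$err_copy" >&2)

sleep 1   # wait for the >(tee ...) processes to flush
cat "$out_copy"   # stdout
cat "$err_copy"   # stderr
```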

pipe stdout and stderr to two different processes in shell script?

I've a pipline doing just
command1 | command2
So, stdout of command1 goes to command2 , while stderr of command1 go to the terminal (or wherever stdout of the shell is).
How can I pipe stderr of command1 to a third process (command3) while stdout is still going to command2 ?
Use another file descriptor
{ command1 2>&3 | command2; } 3>&1 1>&2 | command3
You can use up to 7 other file descriptors: from 3 to 9.
If you want more explanation, please ask, I can explain ;-)
Test
{ { echo a; echo >&2 b; } 2>&3 | sed >&2 's/$/1/'; } 3>&1 1>&2 | sed 's/$/2/'
output:
b2
a1
Example
Produce two log files:
1. stderr only
2. stderr and stdout
{ { { command 2>&1 1>&3; } | tee err-only.log; } 3>&1; } > err-and-stdout.log
If command is echo "stdout"; echo "stderr" >&2 then we can test it like that:
$ { { { echo out>&3;echo err>&1;}| tee err-only.log;} 3>&1;} > err-and-stdout.log
$ head err-only.log err-and-stdout.log
==> err-only.log <==
err
==> err-and-stdout.log <==
out
err
The accepted answer results in the reversing of stdout and stderr. Here's a method that preserves them (since Googling on that purpose brings up this post):
{ command 2>&1 1>&3 3>&- | stderr_command; } 3>&1 1>&2 | stdout_command
Notice:
3>&- is required to prevent fd 3 from being inherited by command. (As this can lead to unexpected results depending on what command does inside.)
Parts explained:
Outer part first:
3>&1 -- fd 3 for { ... } is set to what fd 1 was (i.e. stdout)
1>&2 -- fd 1 for { ... } is set to what fd 2 was (i.e. stderr)
| stdout_command -- fd 1 (was stdout) is piped through stdout_command
Inner part inherits file descriptors from the outer part:
2>&1 -- fd 2 for command is set to what fd 1 was (i.e. stderr as per outer part)
1>&3 -- fd 1 for command is set to what fd 3 was (i.e. stdout as per outer part)
3>&- -- fd 3 for command is set to nothing (i.e. closed)
| stderr_command -- fd 1 (was stderr) is piped through stderr_command
Example:
foo() {
echo a
echo b >&2
echo c
echo d >&2
}
{ foo 2>&1 1>&3 3>&- | sed -u 's/^/err: /'; } 3>&1 1>&2 | sed -u 's/^/out: /'
Output:
out: a
err: b
err: d
out: c
(Order of a -> c and b -> d will always be indeterminate because there's no form of synchronization between stderr_command and stdout_command.)
Using process substitution:
command1 > >(command2) 2> >(command3)
See http://tldp.org/LDP/abs/html/process-sub.html for more info.
Simply redirect stderr to stdout
{ command1 | command2; } 2>&1 | command3
Caution: command3 will also read command2's stdout (if any).
To avoid that, you can discard command2's stdout:
{ command1 | command2 >/dev/null; } 2>&1 | command3
However, to keep command2's stdout (e.g. in the terminal), please refer to my other, more complex answer.
Test
{ { echo -e "a\nb\nc" >&2; echo "----"; } | sed 's/$/1/'; } 2>&1 | sed 's/$/2/'
output:
a2
b2
c2
----12
Pipe stdout as usual, but use Bash process substitution for the stderr redirection:
some_command 2> >(command of stderr) | command of stdout
(Process substitution is a bashism, so the script needs a #!/bin/bash header.)
Zsh Version
I like the answer posted by @antak, but it doesn't work correctly in zsh due to multios. Here is a small tweak to use it in zsh:
{ unsetopt multios; command 2>&1 1>&3 3>&- | stderr_command; } 3>&1 1>&2 | stdout_command
To use, replace command with the command you want to run, and replace stderr_command and stdout_command with your desired pipelines. For example, the command ls / /foo will produce both stdout output and stderr output, so we can use it as a test case. To save the stdout to a file called stdout and the stderr to a file called stderr, you can do this:
{ unsetopt multios; ls / /foo 2>&1 1>&3 3>&- | cat >stderr; } 3>&1 1>&2 | cat >stdout
See @antak's original answer for the full explanation.
The same effect can be accomplished fairly easily with a fifo. I'm not aware of a direct piping syntax for doing it (though it would be nifty to see one). This is how you might do it with a fifo.
First, something that prints to both stdout and stderr, outerr.sh:
#!/bin/bash
echo "This goes to stdout"
echo "This goes to stderr" >&2
Then we can do something like this:
$ mkfifo err
$ wc -c err &
[1] 2546
$ ./outerr.sh 2>err | wc -c
20
20 err
[1]+ Done wc -c err
That way you set up the listener for stderr output first and it blocks until it has a writer, which happens in the next command, using the syntax 2>err. You can see that each wc -c got 20 characters of input.
Don't forget to clean up the fifo after you're done if you don't want it to hang around (i.e. rm). If the other command wants input on stdin and not a file arg, you can use input redirection like wc -c < err too.
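A cleaned-up sketch of the fifo approach that removes the fifo automatically with a trap (the mktemp -u path is just a convenient free name; sed commands stand in for the real processing):

```shell
#!/bin/bash
# Fifo variant with automatic cleanup.
fifo=$(mktemp -u)            # an unused temp path for the fifo
mkfifo "$fifo"
trap 'rm -f "$fifo"' EXIT    # remove the fifo when the script exits

outerr() { echo "This goes to stdout"; echo "This goes to stderr" >&2; }

err_log=$(mktemp)
sed 's/^/E: /' < "$fifo" > "$err_log" &   # reader blocks until a writer opens the fifo
outerr 2>"$fifo" | sed 's/^/O: /'
wait                                      # wait for the background reader

cat "$err_log"   # E: This goes to stderr
```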
It's been a long time but...
@oHo's answer has the disadvantage of redirecting command2's output to stderr, while @antak's answer may reverse the order of the outputs.
The solution below fixes these problems by correctly sending the output and errors of command2 and command3 to stdout and stderr respectively, as expected, while preserving order.
{ { command1 2>&3 | command2; } 3>&1 1>&4 | command3; } 4>&1
Of course, it also satisfies the OP's need to redirect output and errors from command1 to, respectively, command2 and command3.
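A quick check of the wiring, with sed stand-ins of my own for command2 and command3 (the relative order of the two lines is still not guaranteed):

```shell
#!/bin/bash
# Both processed streams should land on the script's stdout,
# leaving stderr untouched.
fn() { echo out; echo err >&2; }

result=$({ { fn 2>&3 | sed 's/^/2: /'; } 3>&1 1>&4 | sed 's/^/3: /'; } 4>&1)
printf '%s\n' "$result"
```

Here fd 3 carries fn's stderr into the second sed, fd 4 parks the original stdout, and 4>&1 at the end brings the first sed's output back onto stdout.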
