Bash: redirecting a substituted process that redirects back to itself

Consider
$ zzz > >(echo fine) 2> >(echo error >&2)
fine
error
I was expecting this to keep printing 'error' to the terminal, because this is what I think is happening here:
First set up all the redirections:
redirect stdout to >(echo fine) process
redirect stderr to >(echo error >&2) process
After setting up all the redirections, execute the zzz command:
since zzz is an invalid command, its error message goes to the >(echo error >&2) process
echo error >&2 writes 'error' back to stderr
but stderr is redirected to >(echo error >&2), so is there recursion happening here?
At the very least, I didn't expect it to output 'fine': zzz is an invalid command, so it writes nothing to stdout, and >(echo error >&2) shouldn't write anything to stdout either.
My understanding of this is incomplete/wrong.
Could you explain 1) why recursion doesn't happen and 2) why fine is printed?

Figured it out.
Let's start with
$ > >(echo fine) 2> >(echo error)
fine
Here the effect is the same as echo error | echo fine.
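You can confirm the right-hand side of this analogy directly; echo ignores its stdin, so whatever the left-hand command writes into the pipe is simply discarded:
$ echo error | echo fine
fine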
Next
$ > >(echo fine) 2> >(echo error >&2)
fine
error
Here the effect is the same as echo fine; echo error >&2. Because they are disjoint (neither depends on the other), they are not piped. There is no recursion because, at the moment bash forks the second substitution, stderr still points at the terminal, so echo error >&2 just prints to the terminal.
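You can check where the second substitution's descriptors actually point with a quick /proc sketch (Linux-only; the pipe number and pts path will differ on your machine, and the prompt may return before the output appears). Its stdout is already the first substitution's pipe, but its stderr is still the terminal:
$ > >(cat) 2> >(readlink /proc/self/fd/1 /proc/self/fd/2)
pipe:[123456]
/dev/pts/0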
If we try the other way around
$ 2> >(echo error) > >(echo fine >&2)
error
This is the same as { echo fine >&2; } 3>&1 1>&2 2>&3 | echo error.
If they are disjoint
$ 2> >(echo error) > >(echo fine)
error
fine
This is the same as echo error; echo fine.
What if we include a command to be run?
$ whoami > >(cat) 2> >(echo error)
error
logan
This is the same as { echo error & whoami; } | cat.
Another example
$ whoami > >(sed 's/^/processed: /') 2> >(echo error)
processed: error
processed: logan
You could think of this as { echo error & whoami; } | sed 's/^/processed: /'.
What if they are disjoint?
$ whoami > >(sed 's/^/processed: /') 2> >(echo error >&2)
error
processed: logan
This is the same as echo error >&2; whoami | sed 's/^/processed: /'.
Let's try with an invalid command.
$ BOB 2> >(cat) > >(echo hi >&2)
hi
BOB: command not found
This is the same as { echo hi >&2 & BOB; } 3>&1 1>&2 2>&3 | cat.
What if we want to introduce a custom file descriptor?
$ whoami > >(sed 's/^/processed: /') 3>&1 2> >(echo error >&3)
processed: error
processed: logan
This is the same as { echo error & whoami; } | sed 's/^/processed: /'. But be careful: the order of redirections matters here, as the sketch below shows!
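If you instead put 3>&1 after the substitution that uses fd 3, the substitution is forked before fd 3 exists, so the >&3 inside it fails (a sketch; the exact error text and output order depend on your bash version):
$ whoami > >(sed 's/^/processed: /') 2> >(echo error >&3) 3>&1
bash: 3: Bad file descriptor
processed: logan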
What if we mix valid and invalid commands in a substituted process?
$ > >(echo fine) 2> >(echo error >&2) 2> >(ls;whoami;echo bob >&2)
fine
error
Here stderr was first set up as >(echo error >&2), which prints 'error' to the terminal. The last substitution inherits both earlier redirections: the output of the ls;whoami portion goes to its stdout, i.e. into >(echo fine), which discards its input and prints 'fine', while echo bob >&2 feeds >(echo error >&2), which likewise discards its input and prints 'error'.
Also consider
$ > >(echo fine) 2> >(ls;whoami;BOB)
fine
BOB: command not found
Here the difference is that stderr still points to the terminal. So the BOB: command not found error message is just sent to the terminal unaltered.
Also consider
$ BOB > >(echo fine) 2> >(echo error >&2) 2> >(ls;whoami;echo bob >&2)
fine
error
This has the same effect as { ls & whoami & } | echo fine; { echo bob >&2 & BOB; } 3>&1 1>&2 2>&3 | echo error >&2.

Related

Bash script - Modify output of command and print into file

I'm trying to get the text output of a specified command, modify it somehow (e.g. add a prefix), and print it into a file (.txt or .log):
LOG_FILE=...
LOG_ERROR_FILE=..
command_name >> ${LOG_FILE} 2>> ${LOG_ERROR_FILE}
I would like to do it in one line: modify what the command returns and print it into the files.
The same goes for error output and regular output.
I'm a beginner in bash scripts, so please be understanding.
Create a function to execute commands and capture stderr and stdout to variables.
function execCommand(){
    local command="$@"
    # The command substitution captures the command's stdout; printf then
    # writes it NUL-delimited onto stderr (1>&2), while the command's own
    # stderr flows through as it runs. The outer 2>&1 funnels everything
    # into the process substitution, so the first read collects stderr
    # (up to the first NUL) and the second read collects stdout.
    {
        IFS=$'\n' read -r -d '' STDERR;
        IFS=$'\n' read -r -d '' STDOUT;
    } < <((printf '\0%s\0' "$($command)" 1>&2) 2>&1)
}
function testCommand(){
    grep foo bar
    echo "return code $?"
}
execCommand testCommand
echo err: $STDERR
echo out: $STDOUT
execCommand "touch /etc/foo"
echo err: $STDERR
echo out: $STDOUT
execCommand "date"
echo err: $STDERR
echo out: $STDOUT
Output:
err: grep: bar: No such file or directory
out: return code 2
err: touch: cannot touch '/etc/foo': Permission denied
out:
err:
out: Mon Jan 31 16:29:51 CET 2022
Now you can modify $STDERR & $STDOUT
execCommand testCommand && { echo "$STDERR" > err.log; echo "$STDOUT" > out.log; }
Explanation: Look at the answer from madmurphy
Pipes (|) and/or redirects (>) are the answer, it seems.
So, as a bogus example to show what I mean: to get all interfaces that the command ip a spits out, you could pipe that to the processing commands and do output redirection into a file.
ip a | awk -F': *' '/^[0-9]/ { print $2 }' > my_file.txt
If you wish to send stdout and stderr to separate processing, you could redirect into process substitutions:
$ command -V cd curl bogus > >(awk '{print $NF}' > stdout.txt) 2> >(sed 's/.*\s\(\w\+\):/\1/' > stderr.txt)
$ cat stdout.txt
builtin
(/usr/bin/curl)
$ cat stderr.txt
bogus not found
But it might be better for readability to process in a separate step:
$ command -V cd curl bogus >stdout.txt 2>stderr.txt
$ sed -i 's/.*\s//' stdout.txt
$ sed -i 's/.*\s\(\w\+\):/\1/' stderr.txt
$ cat stdout.txt
builtin
(/usr/bin/curl)
$ cat stderr.txt
bogus not found
There are a myriad of ways to do what you ask, and I guess the situation will have to decide what to use, but here's a start.
To modify the output and write it to a file, while modifying the error stream differently and writing it to a different file, you just need to manipulate the file descriptors appropriately, e.g.:
#!/bin/sh

# A command that writes trivial data to both stdout and stderr
cmd() {
    echo 'Hello stdout!'
    echo 'Hello stderr!' >&2
}

# Filter both streams and redirect to different files
{ cmd 2>&1 1>&3 | sed 's/stderr/cruel world/' > "$LOG_ERROR_FILE"; } 3>&1 |
    sed 's/stdout/world/' > "$LOG_FILE"
The technique is to redirect the error stream to stdout so it can flow into the pipe (2>&1), and then redirect the output stream to an ancillary file descriptor (1>&3), which is being redirected into a different pipe.
You can clean it up a bit by moving the file redirections into an earlier exec call, e.g.:
#!/bin/sh

cmd() {
    echo 'Hello stdout!'
    echo 'Hello stderr!' >&2
}

exec > "$LOG_FILE"
exec 2> "$LOG_ERROR_FILE"

# Filter both streams; stdout and stderr already point at the log files
{ cmd 2>&1 1>&3 | sed 's/stderr/cruel world/' >&2; } 3>&1 | sed 's/stdout/world/'
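As a quick check, the exec version behaves like this (a sketch assuming the script above is saved as filter.sh, a name chosen purely for illustration, with the log paths supplied via the environment):
$ LOG_FILE=out.log LOG_ERROR_FILE=err.log sh filter.sh
$ cat out.log
Hello world!
$ cat err.log
Hello cruel world!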

Copy stderr to stdout without using tee

I know there are many similar questions. But, none of the scenario satisfy my requirement.
I have a cron which backup MySQL databases. Currently, I redirect stderr to Slack and stdout to syslog like this:
mysql-backup.sh 1> >(logger -it DB_BACKUP) 2> >(push-to-slack.sh)
This way, we are instantly notified about any errors during the backup process. And stdout is kept in syslog, but the stderr lines are missing from the syslog.
In short, I need stdout+stderr in syslog (with date, PID, etc.) and stderr piped (or redirected) to push-to-slack.sh.
Solutions that avoid temporary files are expected.
This sends stderr to push-to-slack.sh while sending both stderr and stdout to logger:
{ mysql-backup.sh 2>&1 1>&3 | tee >(push-to-slack.sh); } 3>&1 | logger -it DB_BACKUP
Reproducible Example
Let's create a function that produces both stdout and stderr:
$ fn() { echo out; echo err>&2; }
Now, let's run the analog of our command above:
$ { fn 2>&1 1>&3 | tee err_only; } 3>&1 | cat >both
$ cat err_only
err
$ cat both
out
err
We can see that err_only captured only the stderr while both captured both stdout and stderr.
(Note to nitpickers: yes, I know the cat above is "useless", but I am keeping the command parallel to the one the OP needs.)
Without using tee
If you really, seriously can't use tee, then we can do something similar using the shell itself:
{ fn 2>&1 1>&3 | (while read -r line; do echo "$line" >&3; echo "$line"; done >err_only); } 3>&1 | cat >both
Or, using awk:
{ fn 2>&1 1>&3 | awk '{print>"err"} 1'; } 3>&1 | cat >both
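Verifying the awk variant the same way (the error file is named err here rather than err_only, and as before two writers share the outer pipe, so the ordering in both is not guaranteed):
$ cat err
err
$ cat both
out
err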

Pipe stdout and stderr through ssh

Consider the following example:
{ echo 1 | tee /dev/stderr 2> >(sed -e 's|1|err|' >&2) 1> >(sed -e 's|1|out|') ; }
which prints
out
err
Note that out is printed on stdout and err on stderr.
Question: How to do this remotely via ssh?
More precisely, how to run
ssh host 'echo 1 | tee /dev/stderr SOME_MAGIC_HERE'
such that out/err again pop up on stdout/stderr (for appropriate bash magic SOME_MAGIC_HERE).
Clearly, the following works:
ssh host 'echo 1 | tee /dev/stderr' 2> >(sed -e 's|1|err|' >&2) 1> >(sed -e 's|1|out|')
But that executes sed locally, and I'd rather want to do that remotely on host.
after the update:
ssh host 'echo 1 | tee >(cat - | sed -e "s|1|err|" >&2) | sed -e "s|1|out|"'
out
err
The idea is to use <pipes> | for processing /dev/stdout and use process substitution in combination with tee to create the /dev/stderr part.
Now it works as expected:
$ ssh host 'echo 1 | tee >(cat - | sed -e "s|1|err|" >&2) | sed -e "s|1|out|"' > /dev/null
err
$ ssh host 'echo 1 | tee >(cat - | sed -e "s|1|err|" >&2) | sed -e "s|1|out|"' 2> /dev/null
out
original answer:
The following command works after changing your <single quotes> into <double quotes>:
ssh host 'echo 1 | tee /dev/stderr 2> >(sed -e "s|1|err|") 1> >(sed -e "s|1|out|")'
but this has everything in /dev/stdout. Example:
$ ssh host 'echo 1 | tee /dev/stderr 2> >(sed -e "s|1|err|") 1> >(sed -e "s|1|out|")' > /dev/null
$ ssh host 'echo 1 | tee /dev/stderr 2> >(sed -e "s|1|err|") 1> >(sed -e "s|1|out|")' 2> /dev/null
out
err
and this is exactly what your original command does on the host system:
{ echo 1 | tee /dev/stderr 2> >(sed -e "s|1|err|") 1> >(sed -e "s|1|out|") ; } >/dev/null
{ echo 1 | tee /dev/stderr 2> >(sed -e "s|1|err|") 1> >(sed -e "s|1|out|") ; } 2>/dev/null
out
err
The ssh program normally handles the passing of /dev/stdout, /dev/stderr and /dev/stdin correctly:
$ ssh host "echo 1; echo 2 > /dev/stderr" > /dev/null
2
$ ssh host "echo 1; echo 2 > /dev/stderr" 2> /dev/null
1

tee stdout and stderr to separate files while retaining them on their respective streams

I'm trying to write a script that essentially acts as a pass-through log of all the output created by a (non-interactive) command, without affecting the output of the command to other processes. That is to say, stdout and stderr should appear the same as if they had not run through my command.
To do this, I'm trying to redirect stdout and stderr separately to two different tees, each writing to a different file, and then recombine them so that they still appear on stdout and stderr, respectively. I have seen a lot of other questions about teeing and redirecting and have tried some of the answers gleaned from those, but none of them seem to manage both splitting the streams to separate tees and then recombining them correctly.
My attempts are successfully splitting the output into the right files, but the streams are not correctly retained for actual stdout/stderr output. I see this in a more complicated setting, so I created simplified commands where I echoed data to stdout or stderr as my "command" as shown below.
Here are a couple of things that I have tried:
{ command | tee ~/tee.txt; } 2>&1 | { tee ~/tee2.txt 1>&2; }
Running my simple test I see:
$ { { { echo "test" 1>&2; } | tee ~/tee.txt; } 2>&1 | { tee ~/tee2.txt 1>&2; } } > /dev/null
test
$ { { { echo "test" 1>&2; } | tee ~/tee.txt; } 2>&1 | { tee ~/tee2.txt 1>&2; } } 2> /dev/null
$
Ok, this is as I expect. I am echoing to stderr, so I expect to see nothing when I redirect the final stderr to /dev/null and my original echo when I only redirect stdout.
$ { { { echo "test"; } | tee ~/tee.txt; } 2>&1 | { tee ~/tee2.txt 1>&2; } } > /dev/null
test
$ { { { echo "test"; } | tee ~/tee.txt; } 2>&1 | { tee ~/tee2.txt 1>&2; } } 2> /dev/null
$
This is backwards! My command sends only data to stdout, so I would expect to see nothing when I redirect the final stdout to null. But the reverse is true.
Here is the second command I tried, it is a bit more complicated:
{ command 2>&3 | tee ~/tee.txt; } 3>&1 1>&2 | { tee /home/michael/tee2.txt 1>&2; }
Unfortunately, I see the same identical behavior as before.
I can't really see what I am doing wrong, but it appears that stdout is getting clobbered somehow. In the case of the first command, I suspect that this is because I am combining stdout and stderr (2>&1) before I pipe it to the second tee, but if this were the case I would expect to see both stdout and stderr in the tee2.txt file, which I don't - I only see stderr! In the case of the second command, my impression from reading the answer I adapted for this command is that descriptors are getting swapped around so as to avoid this problem, but obviously something is still going wrong.
Edit: I had another thought, that maybe the second command is failing because I am redirecting 1>&2 and that is killing stdout from the first tee. So I tried to redirecting it with 1>&4 and then redirecting that back to stdout at the end:
{ command 2>&3 | tee ~/tee.txt; } 3>&1 1>&4 | { tee /home/michael/tee2.txt 1>&2 4>&1; }
But now I get:
-bash: 4: Bad file descriptor
I also tried redirecting descriptor 2 back to descriptor 1 in the final tee:
{ command 2>&3 | tee ~/tee.txt; } 3>&1 1>&2 | { tee /home/michael/tee2.txt 1>&2 2>&1; }
and:
{ command 2>&3 | tee ~/tee.txt; } 3>&1 1>&2 | { tee /home/michael/tee2.txt 1>&2; } 2>&1
A process-substitution-based solution is simple, although not as simple as you might think. My first attempt seemed like it should work:
{ echo stdout; echo stderr >&2; } > >( tee ~/stdout.txt ) \
2> >( tee ~/stderr.txt )
However, it doesn't quite work as intended in bash because the second tee inherits its standard output from the original command (and hence it goes to the first tee) rather than from the calling shell. It's not clear if this should be considered a bug in bash.
It can be fixed by separating the output redirections into two separate commands:
{ { echo stdout; echo stderr >&2; } > >(tee stdout.txt); } \
    2> >(tee stderr.txt >&2)
Note that the second tee is tee stderr.txt >&2, so that what it reads from standard error is printed back onto standard error.
Now, the redirection of standard error occurs in a command which does not have its standard output redirected, so it works in the intended fashion. The outer compound command has its standard error redirected to the outer tee, with its standard output left on the terminal. The inner compound command inherits its standard error from the outer one (so it also goes to the outer tee), while its standard output is redirected to the inner tee.
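With the >&2 fix applied, a quick sanity check looks like this (the process substitutions run asynchronously, so the two screen lines can interleave with the prompt):
$ { { echo stdout; echo stderr >&2; } > >(tee stdout.txt); } 2> >(tee stderr.txt >&2)
stdout
stderr
$ cat stdout.txt
stdout
$ cat stderr.txt
stderr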

Bash - redirect stdout to log and screen with stderr only to log

I would like to do the following:
Redirect a copy of stdout to logfile and keep stdout on the screen.
Redirect stderr to the same logfile and not display on the screen.
Code without stdout to screen:
#!/bin/bash
exec 1> >(sed -u 's/^/INF: /' >> common.log)
exec 2> >(sed -u 's/^/ERR: /' >> common.log)
echo "some txt"
echo "an error" >&2
echo "some more txt"
echo "one more error" >&2
Log:
INF: some txt
INF: some more txt
ERR: an error
ERR: one more error
The first issue is buffering, which I tried to negate with sed's -u (unbuffered) flag.
Code with stdout to screen:
#!/bin/bash
exec 1> >(sed -u 's/^/INF: /' | tee -a common.log)
exec 2> >(sed -u 's/^/ERR: /' >> common.log)
echo "some txt"
echo "an error" >&2
echo "some more txt"
echo "one more error" >&2
This results in the screen hanging (I had to Ctrl-C) and the log still being buffered. Suggestions?
Does this work for you?
command 2> >(sed -u 's/^/ERR: /' >> common.log) | sed -u 's/^/INF: /' | tee -a common.log
Where command is your command.
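For example, with the echo commands from the question standing in for command (the INF and ERR lines can interleave in common.log, since two processes append to it concurrently):
$ { echo "some txt"; echo "an error" >&2; } 2> >(sed -u 's/^/ERR: /' >> common.log) | sed -u 's/^/INF: /' | tee -a common.log
INF: some txt
$ cat common.log
ERR: an error
INF: some txt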
