Bash - redirect stdout to log and screen with stderr only to log

I would like to do the following:
Redirect a copy of stdout to logfile and keep stdout on the screen.
Redirect stderr to the same logfile and not display on the screen.
Code without stdout to screen:
#!/bin/bash
exec 1> >(sed -u 's/^/INF: /' >> common.log)
exec 2> >(sed -u 's/^/ERR: /' >> common.log)
echo "some txt"
echo "an error" >&2
echo "some more txt"
echo "one more error" >&2
Log:
INF: some txt
INF: some more txt
ERR: an error
ERR: one more error
The first issue is buffering, which I tried to work around with sed's -u (unbuffered) option.
Code with stdout to screen:
#!/bin/bash
exec 1> >(sed -u 's/^/INF: /' | tee -a common.log)
exec 2> >(sed -u 's/^/ERR: /' >> common.log)
echo "some txt"
echo "an error" >&2
echo "some more txt"
echo "one more error" >&2
This results in the screen hanging (I had to Ctrl-C) and the log is still buffered. Suggestions?

Does this work for you?
command 2> >(sed -u 's/^/ERR: /' >> common.log) | sed -u 's/^/INF: /' | tee -a common.log
Where command is your command.
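Applied to the script from the question, that pattern might look like the following sketch. The body goes into a function (the name main is my own) so the whole script can sit on one pipeline, with tee at the top level rather than inside an exec'd process substitution, which is what was hanging. If buffering remains a problem with other filters, stdbuf -oL from coreutils is another common workaround:
#!/bin/bash
main() {
    echo "some txt"
    echo "an error" >&2
    echo "some more txt"
    echo "one more error" >&2
}
# stderr is tagged ERR: and appended to the log only; stdout is tagged INF:,
# shown on screen, and appended to the same log via tee.
main 2> >(sed -u 's/^/ERR: /' >> common.log) | sed -u 's/^/INF: /' | tee -a common.log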

Related

How to use nohup with curly braces?

I'm trying to run the following command (ref.) using nohup; it basically separates stdout and stderr into two sed processes.
{ foo 2>&1 1>&3 3>&- | sed -u 's/^/err: /'; } 3>&1 1>&2 | sed -u 's/^/out: /'
The foo script is like below.
#!/bin/bash
while true; do
    echo a
    echo b >&2
    sleep 1
done
This is the test result.
$ nohup { foo 2>&1 1>&3 3>&- | sed -u 's/^/err: /'; } 3>&1 1>&2 | sed -u 's/^/out: /' >/dev/null 2>&1 &
-bash: syntax error near unexpected token `}'
That's syntactically impossible: nohup takes a command, not shell syntax such as a brace group. But you can wrap your {} in an sh -c command:
nohup sh -c 'foo 2>&1 1>&3 3>&- | sed -u "s/^/err: /"'
Note that I changed the single quotes around the sed expression to double quotes.
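The command above wraps only the first half of the pipeline; if you need the whole command from the question, the same sh -c wrapping extends to it (a sketch, assuming foo is an executable on your PATH):
nohup sh -c '{ foo 2>&1 1>&3 3>&- | sed -u "s/^/err: /"; } 3>&1 1>&2 | sed -u "s/^/out: /"' &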

Bash redirecting a substituted process that redirects back to itself

Consider
$ zzz > >(echo fine) 2> >(echo error >&2)
fine
error
I was expecting this to keep printing 'error' to terminal because this is what I think is happening here:
First set up all the redirections:
redirect stdout to >(echo fine) process
redirect stderr to >(echo error >&2) process
After setting up all the redirections, execute the zzz command:
since zzz is an invalid command, its error message is sent to the >(echo error >&2) process
echo error >&2 is redirected to stderr
but stderr is redirected to >(echo error >&2) so there is recursion happening here?
At least I didn't expect it to output fine, because zzz is an invalid command, so nothing is written to stdout, and >(echo error >&2) shouldn't write to stdout either.
My understanding of this is incomplete/wrong.
Could you explain 1) why recursion doesn't happen and 2) why fine is printed?
Figured it out.
Let's start with
$ > >(echo fine) 2> >(echo error)
fine
Here the effect is the same as echo error | echo fine.
Next
$ > >(echo fine) 2> >(echo error >&2)
fine
error
Here the effect is the same as echo fine; echo error >&2. Because they are disjointed (none of them depend on the other), they are not piped. There is no recursion because echo error >&2 just prints to terminal.
If we try the other way around
$ 2> >(echo error) > >(echo fine >&2)
error
This is the same as { echo fine >&2; } 3>&1 1>&2 2>&3 | echo error.
If they are disjointed
$ 2> >(echo error) > >(echo fine)
error
fine
This is the same as echo error; echo fine.
What if we include a command to be run?
$ whoami > >(cat) 2> >(echo error)
error
logan
This is the same as { echo error & whoami; } | cat.
Another example
$ whoami > >(sed 's/^/processed: /') 2> >(echo error)
processed: error
processed: logan
You could think of this as { echo error & whoami; } | sed 's/^/processed: /'
What if they are disjointed?
$ whoami > >(sed 's/^/processed: /') 2> >(echo error >&2)
error
processed: logan
This is the same as echo error >&2; whoami | sed 's/^/processed: /'.
Let's try with an invalid command.
$ BOB 2> >(cat) > >(echo hi >&2)
hi
BOB: command not found
This is the same as { echo hi >&2 & BOB; } 3>&1 1>&2 2>&3 | cat.
What if we want to introduce a custom file descriptor?
$ whoami > >(sed 's/^/processed: /') 3>&1 2> >(echo error >&3)
processed: error
processed: logan
This is the same as { echo error & whoami; } | sed 's/^/processed: /'. Be careful here: the order of redirections matters!
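To see why the order matters, swap the last two redirections (a sketch): the 2> process substitution is then spawned before fd 3 has been opened, so the >&3 inside it fails with a bad file descriptor error instead of reaching sed.
$ whoami > >(sed 's/^/processed: /') 2> >(echo error >&3) 3>&1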
What if we mix valid and invalid commands in a substituted process?
$ > >(echo fine) 2> >(echo error >&2) 2> >(ls;whoami;echo bob >&2)
fine
error
Here we have preset stderr to print 'error' to the terminal. So the output of the ls;whoami portion feeds >(echo fine), which ignores its input and prints 'fine', while the echo bob >&2 portion feeds the preset stderr process, which prints 'error'.
Also consider
$ > >(echo fine) 2> >(ls;whoami;BOB)
fine
BOB: command not found
Here the difference is that stderr still points to the terminal. So the BOB: command not found error message is just sent to the terminal unaltered.
Also consider
$ BOB > >(echo fine) 2> >(echo error >&2) 2> >(ls;whoami;echo bob >&2)
fine
error
This has the same effect as { ls & whoami & } | echo fine; { echo bob >&2 & BOB; } 3>&1 1>&2 2>&3 | echo error >&2.
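One way to check this wiring empirically is to list the file descriptors of a process started under the same kind of redirections (a sketch; Linux-specific, since it reads /proc):
$ ls -l /proc/self/fd > >(sed 's/^/out: /') 2> >(sed 's/^/err: /' >&2)
In the tagged listing, fd 1 and fd 2 both show up as pipes, i.e. the connections to the two substituted processes.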

Bash script - Modify output of command and print into file

I'm trying to get the text output of a specified command, modify it somehow (e.g. add a prefix to the output), and print it into a file (.txt or .log).
LOG_FILE=...
LOG_ERROR_FILE=..
command_name >> ${LOG_FILE} 2>> ${LOG_ERROR_FILE}
I would like to do it in one line: modify what the command returns and print it into the files.
The same goes for the error output and the regular output.
I'm a beginner at bash scripts, so please be understanding.
Create a function to execute commands and capture stderr and stdout into variables.
function execCommand(){
    local command="$@"
    {
        IFS=$'\n' read -r -d '' STDERR;
        IFS=$'\n' read -r -d '' STDOUT;
    } < <((printf '\0%s\0' "$($command)" 1>&2) 2>&1)
}
function testCommand(){
    grep foo bar
    echo "return code $?"
}
execCommand testCommand
echo err: $STDERR
echo out: $STDOUT
execCommand "touch /etc/foo"
echo err: $STDERR
echo out: $STDOUT
execCommand "date"
echo err: $STDERR
echo out: $STDOUT
Output:
err: grep: bar: No such file or directory
out: return code 2
err: touch: cannot touch '/etc/foo': Permission denied
out:
err:
out: Mon Jan 31 16:29:51 CET 2022
Now you can modify $STDERR & $STDOUT
execCommand testCommand && { echo "$STDERR" > err.log; echo "$STDOUT" > out.log; }
Explanation: the subshell prints the command's captured stdout NUL-delimited onto stderr (printf '\0%s\0' ... 1>&2), while the command's own stderr is merged into the same stream (2>&1), so the two read -r -d '' calls split it back apart into $STDERR and $STDOUT. See also the answer from madmurphy.
A pipe | and/or a redirect > is the answer, it seems.
So, as a bogus example to show what I mean: to get all the interfaces that the command ip a spits out, you could pipe its output to the processing commands and redirect the result into a file.
ip a | awk -F': *' '/^[0-9]/ { print $2 }' > my_file.txt
If you wish to send it to separate processing, you could redirect into a sub-shell:
$ command -V cd curl bogus > >(awk '{print $NF}' > stdout.txt) 2> >(sed 's/.*\s\(\w\+\):/\1/' > stderr.txt)
$ cat stdout.txt
builtin
(/usr/bin/curl)
$ cat stderr.txt
bogus not found
But it might be better for readability to process in a separate step:
$ command -V cd curl bogus >stdout.txt 2>stderr.txt
$ sed -i 's/.*\s//' stdout.txt
$ sed -i 's/.*\s\(\w\+\):/\1/' stderr.txt
$ cat stdout.txt
builtin
(/usr/bin/curl)
$ cat stderr.txt
bogus not found
There are a myriad of ways to do what you ask, and I guess the situation will have to decide what to use, but here's a start.
To modify the output and write it to a file, while modifying the error stream differently and writing it to a different file, you just need to manipulate the file descriptors appropriately, e.g.:
#!/bin/sh
# A command that writes trivial data to both stdout and stderr
cmd() {
    echo 'Hello stdout!'
    echo 'Hello stderr!' >&2
}
# Filter both streams and redirect to different files
{ cmd 2>&1 1>&3 | sed 's/stderr/cruel world/' > "$LOG_ERROR_FILE"; } 3>&1 |
sed 's/stdout/world/' > "$LOG_FILE"
The technique is to redirect the error stream to stdout so it can flow into the pipe (2>&1), and then redirect the output stream to an ancillary file descriptor (fd 3), which is redirected into a different pipe.
You can clean it up a bit by moving the file redirections into an earlier exec call. eg:
#!/bin/sh
cmd() {
    echo 'Hello stdout!'
    echo 'Hello stderr!' >&2
}
exec > "$LOG_FILE"
exec 2> "$LOG_ERROR_FILE"
# Filter both streams and redirect to different files
{ cmd 2>&1 1>&3 | sed 's/stderr/cruel world/' >&2; } 3>&1 | sed 's/stdout/world/'
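For example, running the second script with hypothetical names (filter.sh and the log file names are my own) should leave one filtered greeting in each file:
$ LOG_FILE=out.log LOG_ERROR_FILE=err.log ./filter.sh
$ cat out.log
Hello world!
$ cat err.log
Hello cruel world!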

Pipe stdout and stderr through ssh

Consider the following example:
{ echo 1 | tee /dev/stderr 2> >(sed -e 's|1|err|' >&2) 1> >(sed -e 's|1|out|') ; }
which prints
out
err
Note that out is printed on stdout and err on stderr.
Question: How to do this remotely via ssh?
More precisely, how do I run
ssh host 'echo 1 | tee /dev/stderr SOME_MAGIC_HERE'
such that out/err again pop up on stdout/stderr (for some appropriate bash magic SOME_MAGIC_HERE)?
Clearly, the following works:
ssh host 'echo 1 | tee /dev/stderr' 2> >(sed -e 's|1|err|' >&2) 1> >(sed -e 's|1|out|')
But that executes sed locally, and I'd rather want to do that remotely on host.
after the update:
ssh host 'echo 1 | tee >(cat - | sed -e "s|1|err|" >&2) | sed -e "s|1|out|"'
out
err
The idea is to use <pipes> | for processing /dev/stdout and use process substitution in combination with tee to create the /dev/stderr part.
Now it works as expected:
$ ssh host 'echo 1 | tee >(cat - | sed -e "s|1|err|" >&2) | sed -e "s|1|out|"' > /dev/null
err
$ ssh host 'echo 1 | tee >(cat - | sed -e "s|1|err|" >&2) | sed -e "s|1|out|"' 2> /dev/null
out
original answer:
The following command executes after changing your <single quotes> into <double quotes>:
ssh host 'echo 1 | tee /dev/stderr 2> >(sed -e "s|1|err|") 1> >(sed -e "s|1|out|")'
but this has everything in /dev/stdout. Example:
$ ssh host 'echo 1 | tee /dev/stderr 2> >(sed -e "s|1|err|") 1> >(sed -e "s|1|out|")' > /dev/null
$ ssh host 'echo 1 | tee /dev/stderr 2> >(sed -e "s|1|err|") 1> >(sed -e "s|1|out|")' 2> /dev/null
out
err
and this is exactly what your original command does on the host system:
{ echo 1 | tee /dev/stderr 2> >(sed -e "s|1|err|") 1> >(sed -e "s|1|out|") ; } >/dev/null
{ echo 1 | tee /dev/stderr 2> >(sed -e "s|1|err|") 1> >(sed -e "s|1|out|") ; } 2>/dev/null
out
err
The ssh program normally handles the passing of /dev/stdout, /dev/stderr and /dev/stdin correctly:
$ ssh host "echo 1; echo 2 > /dev/stderr" > /dev/null
2
$ ssh host "echo 1; echo 2 > /dev/stderr" 2> /dev/null
1
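Since ssh keeps the two streams separate end to end, the tagging pattern from the first question above also works remotely on a command's real stdout and stderr (a sketch; remote_cmd is a placeholder, and the remote login shell is assumed to be bash so that process substitution is available):
ssh host 'remote_cmd 2> >(sed -u "s/^/err: /" >&2) | sed -u "s/^/out: /"'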

tee and pipelines inside a bash script

I need to redirect stdout and stderr in bash, each to a separate file.
Well, I came up with this command:
((/usr/bin/java -jar /opt/SEOC2/seoc2.jar 2>&1 1>&3 | tee --append /opt/SEOC2/log/err.log) 3>&1 1>&2 | tee --append /opt/SEOC2/log/app.log) >> /opt/SEOC2/log/combined.log 2>&1 &
which works fine when run from the command line.
Trying to put the very same command into a bash script
...
12 cmd="(($run -jar $cmd 2>&1 1>&3 | tee --append $err) 3>&1 1>&2 | tee --append $log) >> $combined 2>&1"
...
30 echo -e "Starting servis..."
31 $cmd &
32 pid=`ps -eo pid,args | grep seoc2.jar | grep -v grep | cut -c1-6`
33 if [ ! -z $pid ]; then
...
leads to an error like this:
root#operator:/opt/SEOC2# seoc2 start
Starting servis...
/usr/local/bin/seoc2: line 31: ((/usr/bin/java: dir or file doesn't exist
I tried to wrap this command in $( ), ` ` etc., but with no effect at all :(
Any suggestion or advice would be very appreciated; I've been playing around with this for hours already :/
Thanks a lot
Rene
If you store the whole command line in a variable, you have to use eval to execute it:
cmd="(($run -jar $cmd 2>&1 1>&3 | tee --append $err) 3>&1 1>&2 | tee --append $log) >> $combined 2>&1"
...
eval $cmd &
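eval works, but it is easy to get the quoting wrong; an alternative is to keep the pipeline in a function instead of a string (a sketch using the same variables, with $jar standing in for the jar path, since the original reused $cmd for it):
start_service() {
    ( ( "$run" -jar "$jar" 2>&1 1>&3 | tee --append "$err" ) 3>&1 1>&2 | tee --append "$log" ) >> "$combined" 2>&1
}
start_service &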
