Capture stderr into a pipe from command substitution - bash

I have a program that returns answers on stdout and errors on stderr.
Unfortunately the program ends by emitting some text on stderr even if successful.
I would like to store the program output in a variable using command substitution, as in:
ans=$(prog) 2>&1 | grep -v success
This doesn't work. I tried putting 2>&1 inside the parentheses, but as I suspected, $ans then
gets the success text.
Any ideas?

I'm not sure what you're trying to get, but this is probably the command you want:
ans=$(prog 2>&1 | grep -v success)
If you want to filter 'success' only from the standard error stream, you could use something like this:
ans=$({ ./foo 3>&2 2>&1 >&3- | grep -v success; } 2>&1)
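To unpack the descriptor juggling (my reading of it, with ./foo standing in for your program): inside the braces, fd 1 is the capture pipe of $(...) and, because of the trailing 2>&1, fd 2 is too. Then:
./foo 3>&2 2>&1 >&3-   # 3>&2: save the capture pipe on fd 3
                       # 2>&1: send foo's stderr into the pipe to grep
                       # >&3-: send foo's stdout straight to the capture, close fd 3
So only stderr passes through grep -v success; both foo's stdout and the surviving stderr lines end up in $ans.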
And just in case, as noted in BashFAQ/002:
What you cannot do is capture stdout in one variable, and stderr in another, using only FD redirections. You must use a temporary file (or a named pipe) to achieve that one.
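For completeness, a minimal sketch of that temporary-file approach (prog stands in for your program; the variable names are mine):
errfile=$(mktemp)           # temporary file to hold stderr
out=$(prog 2>"$errfile")    # stdout is captured, stderr lands in the temp file
err=$(<"$errfile")          # read stderr back into a second variable
rm -f "$errfile"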

echo to file and terminal

In bash, calling foo would display any output from that command on stdout.
Calling foo > output would redirect any output from that command to the file specified (in this case 'output').
Is there a way to redirect output to a file and have it display on stdout?
The command you want is named tee:
foo | tee output.file
For example, if you only care about stdout:
ls -a | tee output.file
If you want to include stderr, do:
program [arguments...] 2>&1 | tee outfile
2>&1 redirects channel 2 (stderr/standard error) into channel 1 (stdout/standard output), so that both are written to stdout. Both streams are then also written to the given output file by the tee command.
Furthermore, if you want to append to the log file, use tee -a as:
program [arguments...] 2>&1 | tee -a outfile
$ program [arguments...] 2>&1 | tee outfile
2>&1 merges the stderr stream into the stdout stream.
tee outfile takes the stream it gets and writes it to the screen and to the file "outfile".
This is probably what most people are looking for. The likely situation is some program or script is working hard for a long time and producing a lot of output. The user wants to check it periodically for progress, but also wants the output written to a file.
The problem (especially when mixing stdout and stderr streams) is that there is reliance on the streams being flushed by the program. If, for example, all the writes to stdout are not flushed, but all the writes to stderr are flushed, then they'll end up out of chronological order in the output file and on the screen.
It's also bad if the program only outputs 1 or 2 lines every few minutes to report progress. In such a case, if the output was not flushed by the program, the user wouldn't even see any output on the screen for hours, because none of it would get pushed through the pipe for hours.
Update: The program unbuffer, part of the expect package, will solve the buffering problem. This will cause stdout and stderr to write to the screen and file immediately and keep them in sync when being combined and redirected to tee. E.g.:
$ unbuffer program [arguments...] 2>&1 | tee outfile
Another way that works for me is:
<command> |& tee <outputFile>
as shown in the GNU Bash manual.
Example:
ls |& tee files.txt
If ‘|&’ is used, command1’s standard error, in addition to its standard output, is connected to command2’s standard input through the pipe; it is shorthand for 2>&1 |. This implicit redirection of the standard error to the standard output is performed after any redirections specified by the command.
For more information, refer to the Redirections section of the manual.
You can primarily use Zoredache's solution, but if you don't want to overwrite the output file, invoke tee with the -a option, as follows:
ls -lR / | tee -a output.file
Something to add: the unbuffer package has support issues with some packages under Fedora and Red Hat releases.
Setting those troubles aside, the following worked for me:
bash myscript.sh 2>&1 | tee output.log
Thank you ScDF & matthew, your inputs saved me a lot of time.
Using tail -f output should work.
In my case I had a Java process that produced output logs. The simplest solution to display the output logs and redirect them into a file (named logfile here) was:
my_java_process_run_script.sh |& tee logfile
The result was the Java process running with its output logs displayed and written to the file named logfile.
You can do that for your entire script by putting something like this at the beginning of it:
#!/usr/bin/env bash
test x$1 = x$'\x00' && shift || { set -o pipefail ; ( exec 2>&1 ; $0 $'\x00' "$@" ) | tee mylogfile ; exit $? ; }
# do whatever you want
This redirects both stderr and stdout to the file called mylogfile and lets everything go to stdout at the same time.
It uses some dirty tricks:
use exec without a command to set up redirections,
use tee to duplicate outputs,
restart the script with the wanted redirections,
use a special first parameter (a single NUL character, written with the $'string' special bash notation) to mark that the script has been restarted (no legitimate parameter of your original script should ever equal it),
try to preserve the original exit status when restarting the script, using the pipefail option.
Ugly but useful for me in certain situations.
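For comparison, a less contortionist sketch of the same whole-script logging, assuming bash with process substitution available (mylogfile as above). Note that tee runs asynchronously here, so the last lines may be flushed slightly after the script exits:
#!/usr/bin/env bash
exec > >(tee mylogfile) 2>&1   # duplicate all further stdout+stderr to mylogfile and the screen
# do whatever you want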
Bonus answer since this use-case brought me here:
In the case where you need to do this as some other user
echo "some output" | sudo -u some_user tee /some/path/some_file
Note that the echo will happen as you, and the file write will happen as "some_user". What will NOT work is running the echo as "some_user" and redirecting the output with >> "some_file", because the file redirection would happen as you.
Hint: tee also supports append with the -a flag, if you need to replace a line in a file as another user you could execute sed as the desired user.
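For example (the path, pattern, and user are placeholders, as above; this uses GNU sed's -i for in-place editing):
sudo -u some_user sed -i 's/old line/new line/' /some/path/some_file   # sed runs, and rewrites the file, as some_user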
<command> |& tee filename # creates a file "filename" with the command's output (stdout and stderr) as its content; if the file already exists, its previous content is overwritten
<command> |& tee -a filename # appends the command's output to the file while still printing it to standard output (the screen)
I want to print something using "echo" on screen and append that echoed data to a file:
echo "hi there, Have to print this on screen and append to a file"
tee is perfect for this, but this will also do the job:
ls -lr / > output && cat output
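For the record, the tee version of that use case (print to the screen and append to the file in one go) would be:
echo "hi there, Have to print this on screen and append to a file" | tee -a output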

Why would tee fail to write to stdout?

I typically use tee to receive piped output data, echo it to standard output, and forward it to the actual intended recipient of the piped data. But sometimes this fails, and I cannot exactly understand why.
I'll try to demonstrate with a series of examples:
$ echo testing with this string | tee
testing with this string
So, just echoing some data to tee without arguments is replicated/printed on the terminal/stdout. Note that this must be tee printing the output, as the output from echo is now piped/redirected, and therefore no longer present on stdout (the same thing that happens here:
$ echo aa | echo bb
bb
... i.e. echo aa's output got redirected to the next command, which, being echo bb, does not care about the input and just prints its own output.)
$ echo testing with this string | tee | python3 -c 'a=1'
$
Now here, piping data into tee without arguments - and then piping from tee to a program that does not produce any output on terminal/stdout - prints nothing. I would have expected tee here to duplicate to stdout, and then forward to the next command in the pipeline, but apparently that does not happen.
$ echo testing with this string | tee /dev/stdout
testing with this string
testing with this string
Right, so if we pipe to tee with the command-line argument /dev/stdout, we get the printout twice - and as concluded earlier, it must be tee that produces both printed lines. That means that when used without an argument, tee basically does not open any file for duplicating, and simply forwards what it receives on its input to its output; but as it is the last command in the pipeline, its output is stdout in that case, so we get a single printout.
Here we get double printout, because
tee duplicated its input stream to /dev/stdout due to the argument (which ends up as the first printout); and then
forwarded the same input to its output, which, being stdout here (as tee is again last in the pipeline), results in the second printout.
This also would explain why the previous ...| tee | python3 -c 'a=1' did not print anything: tee without arguments did not open any file for duplication, and merely forwarded its input to the next command in the pipeline - and as the next one does not print any output either, no output is generated whatsoever.
Well, if the above understanding is correct, then this:
$ echo testing with this string | tee /dev/stdout | python3 -c 'a=1'
$
... should print at least one line (from tee copying to /dev/stdout; the "forwarded" part will end up being "gulped" by the final command as it prints nothing), but it does not.
So, why does this happen - where am I going wrong in my understanding of what tee does?
And how can I use tee, to print to stdout, also when its output is forwarded to a command that doesn't print anything to stdout on its own?
You aren't misunderstanding tee, you're misunderstanding what stdout is. In a pipe, like echo testing | tee | python3 -c 'a=1', the tee command's stdout is not the terminal, it's the pipe going to the python command (and the echo command's stdout is the pipe going to tee).
So tee /dev/stdout sends two copies of its input (on stdin) to the exact same place: its stdout, whether that's the terminal, or a pipe, or whatever.
If you want to send a copy of the input to tee someplace other than down the pipe, you need to send it somewhere other than stdout. Where that is depends on where you actually want to send it (i.e. why you want to copy it). If you specifically want to send it to the terminal, you could do this:
echo testing | tee /dev/tty | python3 -c 'a=1'
...while if you want to send it to the outer context's stdout (which might or might not be a terminal), you can duplicate the outer context's stdout to a different file descriptor (#3 is handy for this), and then have tee write a copy to that:
{ echo testing | tee /dev/fd/3 | python3 -c 'a=1'; } 3>&1
Yet another option is to redirect it to stderr (aka FD #2, which is also the terminal by default, but redirectable separately from stdout) with tee /dev/fd/2.
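A quick illustration of that last variant:
echo testing | tee /dev/fd/2 | python3 -c 'a=1'   # the copy written to stderr reaches the terminal; the pipe copy is swallowed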
Note that the various /dev entries I'm using here are supported by most unixish OSes, but they aren't universal. Check to see what your specific OS provides.
I think I got it, but am not sure if it is correct: I saw this: 19.8. Forgetting That Pipelines Make Subshells - bash Cookbook [Book].
So, if pipelines make subshells, then
echo testing with this string | tee /dev/stdout | python3 -c 'a=1'
... is conceptually equal to:
echo testing with this string | (tee /dev/stdout | (python3 -c 'a=1'))
Note that the second pipe | redirects stdout of the subshell tee runs in, and as /dev/stdout is just an interface to stdout, it is redirected too, so we get nothing printed.
So, while stdout (and /dev/stdout) is local to the (sub)shell, /dev/tty always refers to the controlling terminal - and therefore the following:
$ echo testing with this string | tee /dev/tty | python3 -c 'a=1'
testing with this string
... in fact prints a line, as expected.
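A quick way to see those subshells from bash itself ($BASHPID, available in bash 4+, reports the PID of the current (sub)shell, unlike $$, which doesn't change):
echo $BASHPID         # PID of the current shell
echo $BASHPID | cat   # prints a different PID: the left side of the pipe runs in a subshell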

Grep stderr in bash script for any output

In my bash script I use grep in different logs like this:
LOGS1=$(grep -E -i 'err|warn' /opt/backup/exports.log /opt/backup/imports.log && grep "tar:" /opt/backup/h2_backups.log /opt/backup/st_backups.log)
if [ -n "$LOGS1" ] ]; then
COLOUR="yellow"
MESSAGE="Logs contain warnings. Backups may be incomplete. Invetigate these warnings:\n$LOGS"
Instead of checking whether each log exists (there are many more logs than this), I want to check stderr while the script runs to see if I get any output. If one of the logs does not exist, grep will produce an error like this: grep: /opt/backup/st_backups.log: No such file or directory
I've tried to read stderr with commands like command 2> >(grep "file" >&2), but that does not seem to work.
I know I can pipe the output to a file, but I'd rather just handle stderr when there is any output, instead of reading the file. Or is there any reason why piping to a file is better?
Send the standard error (file descriptor 2) to standard output (file descriptor 1) and assign it to the variable Q:
$ Q=$(grep text file 2>&1)
$ echo $Q
grep: file: No such file or directory
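If you want to capture only stderr (closer to what the question asks for), you can point stderr at the capture before sending stdout away; a minimal sketch:
Q=$(grep text file 2>&1 >/dev/null)   # Q holds stderr only; stdout is discarded
[ -n "$Q" ] && echo "grep reported: $Q"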
This is the default behaviour: stderr is normally connected to your terminal (and unbuffered), so you see errors even while you pipe stdout somewhere. If you want to merge stderr into stdout, this is the syntax:
command >file 2>&1
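One thing worth spelling out: redirections are processed left to right, so the order matters:
command >file 2>&1   # stdout to file, then stderr to where stdout now points: both end up in file
command 2>&1 >file   # stderr to where stdout currently points (the terminal), then stdout to file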

Piping both stdout and stderr in bash?

It seems that newer versions of bash have the &> operator, which (if I understand correctly), redirects both stdout and stderr to a file (&>> appends to the file instead, as Adrian clarified).
What's the simplest way to achieve the same thing, but instead piping to another command?
For example, in this line:
cmd-doesnt-respect-difference-between-stdout-and-stderr | grep -i SomeError
I'd like the grep to match on content both in stdout and stderr (effectively, have them combined into one stream).
Note: this question is asking about piping, not redirecting - so it is not a duplicate of the question it's currently marked as a duplicate of.
(Note that &>>file appends to a file while &> would redirect and overwrite a previously existing file.)
To combine stdout and stderr you redirect the latter to the former using 2>&1. In the examples below, the opposite redirection, 1>&2, sends stdout (file descriptor 1) to stderr (file descriptor 2), which is how the second echo writes its message to stderr, e.g.:
$ { echo "stdout"; echo "stderr" 1>&2; } | grep -v std
stderr
$
stdout goes to stdout, stderr goes to stderr. grep only sees stdout, hence stderr prints to the terminal.
On the other hand:
$ { echo "stdout"; echo "stderr" 1>&2; } 2>&1 | grep -v std
$
After writing to both stdout and stderr, 2>&1 redirects stderr back to stdout and grep sees both strings on stdin, thus filters out both.
You can read more about redirection here.
Regarding your example (POSIX):
cmd-doesnt-respect-difference-between-stdout-and-stderr 2>&1 | grep -i SomeError
or, using >=bash-4:
cmd-doesnt-respect-difference-between-stdout-and-stderr |& grep -i SomeError
Bash has a shorthand for 2>&1 |, namely |&, which pipes both stdout and stderr (see the manual):
cmd-doesnt-respect-difference-between-stdout-and-stderr |& grep -i SomeError
This was introduced in Bash 4.0, see the release notes.

How to use stdout and stderr io-redirection to get sane error/warning messages output from a program?

I have a program that outputs to stdout and stderr but doesn't use them in the correct way. Some errors go to stdout, some go to stderr, non-error stuff goes to stderr, and it prints way too much info on stdout. To fix this I want to make a pipeline to do:
Save all output of $cmd (from both stderr and stdout) to a file $logfile (don't print it to screen).
Filter out all warning and error messages on stderr and stdout (from warning|error to empty line) and colorize only "error" words (redirect output to stderr).
Save output of step 2 to a file $logfile:r.stderr.
Exit with the correct exit code from the command.
So far I have this:
#!/bin/zsh
# using zsh 4.2.0
setopt no_multios
# Don't error out if sed or grep don't find a match:
alias -g grep_err_warn="(sed -n '/error\|warning/I,/^$/p' || true)"
alias -g color_err="(grep --color -i -C 1000 error 1>&2 || true)"
alias -g filter='tee $logfile | grep_err_warn | tee $logfile:r.stderr | color_err'
# use {} around command to avoid possible race conditions:
{ eval $cmd } 2>&1 | filter
exit $pipestatus[1]
I've tried many things but can't get it to work. I've read "From Bash to Z Shell", many posts, etc. My problems currently are:
Only stdout goes into the filter
Note: $cmd is a shell script that calls a binary with a /usr/bin/time -p prefix. This seems to cause issues with pipelines, and is why I'm wrapping the command in {…} so that all the output goes into the pipe.
I don't have zsh available.
I did notice that your {..}'d statement is not correct.
You always need a semicolon before the closing `}'.
When I added that in bash, I could prove to my satisfaction that stderr was being redirected to stdout.
Try
{ eval $cmd ; } 2>&1 | filter
# ----------^
Also, you wrote
Save all output of $cmd (from stderr and stdout) to a file $logfile
I don't see any mention of $logfile in your code.
You should be able to get all output into logfile (while losing the specificity of the stderr stream) with
yourCommand 2>&1 | tee ${logFile} | ....
I hope this helps.
P.S. as you appear to be a new user, if you get an answer that helps you please remember to mark it as accepted, and/or give it a + (or -) as a useful answer.
Don't use aliases in scripts, use functions (global aliases are especially asking for trouble). Not that you actually need functions here. You also don't need || true (unless you're running under set -e, in which case you should turn it off here). Other than that, your script looks ok; what is it choking on?
{ eval $cmd } 2>&1 |
tee $logfile |
sed -n '/error\|warning/I,/^$/p' |
tee $logfile:r.stderr |
grep --color -i -C 1000 error 1>&2
exit $pipestatus[1]
I'm also not sure what you meant by the sed expression; I don't quite understand your requirement 2.
The original post was mostly correct, except for an optimization by Gilles (leaving set -e off, so that the || true's are not needed).
#!/bin/zsh
# using zsh 4.2.0
setopt no_multios
#setopt no_errexit # set -e # don't turn this on
{ eval $cmd } 2>&1 |
tee $logfile |
sed -n '/error\|warning/I,/^$/p' |
tee $logfile:r.stderr |
grep --color -i -C 1000 error 1>&2
exit $pipestatus[1]
The part that confused me was that mixing stdout and stderr led to them being interleaved, and the sed -n '/error\|warning/I,/^$/p' (which prints from an error or warning to the next empty line) was printing out a lot more than expected, which made it seem like the command wasn't working.
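For anyone adapting this to bash: the equivalent of zsh's $pipestatus is the PIPESTATUS array (zero-based, and it must be read immediately after the pipeline). A simplified sketch with just one filter stage standing in for the full filter above:
{ eval "$cmd" ; } 2>&1 | tee "$logfile" | grep -i error
exit "${PIPESTATUS[0]}"   # exit status of the eval'd command, not of grep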
