Pipe output and capture exit status in Bash

I want to execute a long running command in Bash, and both capture its exit status, and tee its output.
So I do this:
command | tee out.txt
ST=$?
The problem is that the variable ST captures the exit status of tee and not of command. How can I solve this?
Note that command is long running and redirecting the output to a file to view it later is not a good solution for me.

There is an internal Bash variable called $PIPESTATUS; it’s an array that holds the exit status of each command in your last foreground pipeline of commands.
<command> | tee out.txt ; test ${PIPESTATUS[0]} -eq 0
Or another alternative which also works with other shells (like zsh) would be to enable pipefail:
set -o pipefail
...
The first option does not work in zsh, which uses a slightly different syntax (a lowercase $pipestatus array).
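Applied to the asker's snippet, the two approaches look roughly like this (a sketch; command stands in for the real long-running program):
command | tee out.txt
ST=${PIPESTATUS[0]}   # exit status of command, not tee
or, with pipefail, $? itself reports the failure:
set -o pipefail
command | tee out.txt
ST=$?   # non-zero if command failed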

Dumb solution: Connecting them through a named pipe (mkfifo). Then the command can be run second.
mkfifo pipe
tee out.txt < pipe &
command > pipe
echo $?
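The same idea with explicit cleanup, as a sketch (the wait lets the backgrounded tee drain after it sees EOF on the fifo):
mkfifo pipe
tee out.txt < pipe &
command > pipe
ST=$?     # exit status of command
wait      # reap the backgrounded tee
rm pipe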

Using Bash's set -o pipefail is helpful:
pipefail: the return value of a pipeline is the status of
the last command to exit with a non-zero status,
or zero if no command exited with a non-zero status
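For example:
$ set -o pipefail
$ false | true; echo $?
1
$ true | true; echo $?
0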

There's an array that gives you the exit status of each command in a pipe.
$ cat x| sed 's///'
cat: x: No such file or directory
$ echo $?
0
$ cat x| sed 's///'
cat: x: No such file or directory
$ echo ${PIPESTATUS[*]}
1 0
$ touch x
$ cat x| sed 's'
sed: 1: "s": substitute pattern can not be delimited by newline or backslash
$ echo ${PIPESTATUS[*]}
0 1

This solution works without using bash-specific features or temporary files. Bonus: in the end, the exit status is actually an exit status and not some string in a file.
Situation:
someprog | filter
you want the exit status from someprog and the output from filter.
Here is my solution:
((((someprog; echo $? >&3) | filter >&4) 3>&1) | (read xs; exit $xs)) 4>&1
echo $?
See my answer for the same question on unix.stackexchange.com for a detailed explanation and an alternative without subshells and some caveats.
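For the asker's tee case, the same pattern reads (a sketch, substituting tee out.txt for filter):
((((command; echo $? >&3) | tee out.txt >&4) 3>&1) | (read xs; exit $xs)) 4>&1
ST=$?   # exit status of command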

By combining PIPESTATUS[0] and the result of executing the exit command in a subshell, you can directly access the return value of your initial command:
command | tee ; ( exit ${PIPESTATUS[0]} )
Here's an example:
# the "false" shell built-in command returns 1
false | tee ; ( exit ${PIPESTATUS[0]} )
echo "return value: $?"
will give you:
return value: 1

So I wanted to contribute an answer like lesmana's, but I think mine is perhaps a little simpler and a slightly more advantageous pure-Bourne-shell solution:
# You want to pipe command1 through command2:
exec 4>&1
exitstatus=`{ { command1; printf $? 1>&3; } | command2 1>&4; } 3>&1`
# $exitstatus now has command1's exit status.
I think this is best explained from the inside out - command1 will execute and print its regular output on stdout (file descriptor 1), then once it's done, printf will execute and print command1's exit code on its stdout, but that stdout is redirected to file descriptor 3.
While command1 is running, its stdout is being piped to command2 (printf's output never makes it to command2 because we send it to file descriptor 3 instead of 1, which is what the pipe reads). Then we redirect command2's output to file descriptor 4, so that it also stays out of file descriptor 1 - because we want file descriptor 1 free for a little bit later, when we bring the printf output on file descriptor 3 back down into file descriptor 1 - because that's what the command substitution (the backticks) will capture, and that's what will get placed into the variable.
The final bit of magic is that first exec 4>&1 we did as a separate command - it opens file descriptor 4 as a copy of the external shell's stdout. Command substitution will capture whatever is written on standard out from the perspective of the commands inside it - but since command2's output is going to file descriptor 4 as far as the command substitution is concerned, the command substitution doesn't capture it - however once it gets "out" of the command substitution it is effectively still going to the script's overall file descriptor 1.
(The exec 4>&1 has to be a separate command because many common shells don't like it when you try to write to a file descriptor inside a command substitution, that is opened in the "external" command that is using the substitution. So this is the simplest portable way to do it.)
You can look at it in a less technical and more playful way, as if the outputs of the commands are leapfrogging each other: command1 pipes to command2, then the printf's output jumps over command2 so that command2 doesn't catch it, and then command2's output jumps over and out of the command substitution just as printf's output lands just in time to get captured by the substitution, so that it ends up in the variable, and command2's output goes on its merry way to the standard output, just as in a normal pipe.
Also, as I understand it, $? will still contain the return code of the second command in the pipe, because variable assignments, command substitutions, and compound commands are all effectively transparent to the return code of the command inside them, so the return status of command2 should get propagated out - this, and not having to define an additional function, is why I think this might be a somewhat better solution than the one proposed by lesmana.
Per the caveats lesmana mentions, it's possible that command1 will at some point end up using file descriptors 3 or 4, so to be more robust, you would do:
exec 4>&1
exitstatus=`{ { command1 3>&-; printf $? 1>&3; } 4>&- | command2 1>&4; } 3>&1`
exec 4>&-
Note that I use compound commands in my example, but subshells (using ( ) instead of { }) will also work, though perhaps less efficiently.
Commands inherit file descriptors from the process that launches them, so the entire second line will inherit file descriptor four, and the compound command followed by 3>&1 will inherit file descriptor three. So the 4>&- makes sure that the inner compound command will not inherit file descriptor four, and the 3>&- makes sure command1 will not inherit file descriptor three, so command1 gets a 'cleaner', more standard environment. You could also move the inner 4>&- next to the 3>&-, but I figure why not just limit its scope as much as possible.
I'm not sure how often things use file descriptor three and four directly - I think most of the time programs use syscalls that return not-used-at-the-moment file descriptors, but sometimes code writes to file descriptor 3 directly, I guess (I could imagine a program checking a file descriptor to see if it's open, and using it if it is, or behaving differently accordingly if it's not). So the latter is probably best to keep in mind and use for general-purpose cases.
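For the asker's tee case, the robust form would read (a sketch, with command1 = command and command2 = tee out.txt):
exec 4>&1
exitstatus=`{ { command 3>&-; printf $? 1>&3; } 4>&- | tee out.txt 1>&4; } 3>&1`
exec 4>&-
echo "command exited with status $exitstatus"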

(command | tee out.txt; exit ${PIPESTATUS[0]})
Unlike cODAR's answer, this returns the original exit code of the first command, not just 0 for success and 127 for failure. But as Chaoran pointed out, you can just use ${PIPESTATUS[0]}. It is important, however, that everything is put into parentheses.
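For example:
$ (false | tee out.txt; exit ${PIPESTATUS[0]})
$ echo $?
1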

In Ubuntu and Debian, you can apt-get install moreutils. This contains a utility called mispipe that returns the exit status of the first command in the pipe.
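A sketch of its use (mispipe takes the two halves of the pipe as strings):
mispipe "command" "tee out.txt"
ST=$?   # exit status of command, not tee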

Outside of bash, you can do:
bash -o pipefail -c "command1 | tee output"
This is useful, for example, in Ninja build files, where the shell is expected to be /bin/sh.

The simplest way to do this in plain bash is to use process substitution instead of a pipeline. There are several differences, but they probably don't matter very much for your use case:
When running a pipeline, bash waits until all processes complete.
Sending Ctrl-C to bash makes it kill all the processes of a pipeline, not just the main one.
The pipefail option and the PIPESTATUS variable are irrelevant to process substitution.
Possibly more
With process substitution, bash just starts the process and forgets about it; it's not even visible in jobs.
Mentioned differences aside, consumer < <(producer) and producer | consumer are essentially equivalent.
If you want to flip which one is the "main" process, you just flip the commands and the direction of the substitution to producer > >(consumer). In your case:
command > >(tee out.txt)
Example:
$ { echo "hello world"; false; } > >(tee out.txt)
hello world
$ echo $?
1
$ cat out.txt
hello world
$ echo "hello world" > >(tee out.txt)
hello world
$ echo $?
0
$ cat out.txt
hello world
As I said, there are differences from the pipe expression. The process may never stop running, unless it is sensitive to the pipe closing. In particular, it may keep writing things to your stdout, which may be confusing.
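If you need to know that tee has finished (for example, before reading out.txt), bash 4.4 and later can wait for the substituted process; a sketch:
command > >(tee out.txt)
ST=$?     # command's status
wait $!   # $! holds the PID of the last process substitution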

PIPESTATUS[@] must be copied to an array immediately after the pipe command returns.
Any reads of PIPESTATUS[@] will erase the contents.
Copy it to another array if you plan on checking the status of all pipe commands.
"$?" is the same value as the last element of "${PIPESTATUS[@]}",
and reading it seems to destroy "${PIPESTATUS[@]}", but I haven't absolutely verified this.
declare -a PSA
cmd1 | cmd2 | cmd3
PSA=( "${PIPESTATUS[@]}" )
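For example, to report each stage from the copy (a sketch):
false | true | false
PSA=( "${PIPESTATUS[@]}" )
for i in "${!PSA[@]}"; do echo "stage $i exited with ${PSA[$i]}"; done
# stage 0 exited with 1
# stage 1 exited with 0
# stage 2 exited with 1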
This will not work if the pipe is in a sub-shell. For a solution to that problem,
see bash pipestatus in backticked command?

Based on brian-s-wilson's answer; this bash helper function:
pipestatus() {
    local S=("${PIPESTATUS[@]}")
    if test -n "$*"
    then test "$*" = "${S[*]}"
    else ! [[ "${S[@]}" =~ [^0\ ] ]]
    fi
}
used thus:
1: get_bad_things must succeed and should produce no output, but we want to see any output it does produce
get_bad_things | grep '^'
pipestatus 0 1 || return
2: the whole pipeline must succeed
thing | something -q | thingy
pipestatus || return
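A quick sketch of both forms:
false | true
pipestatus 1 0 && echo "pipeline behaved as expected"
true | true
pipestatus && echo "all stages succeeded"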

Pure shell solution:
% rm -f error.flag; echo hello world \
| (cat || echo "First command failed: $?" >> error.flag) \
| (cat || echo "Second command failed: $?" >> error.flag) \
| (cat || echo "Third command failed: $?" >> error.flag) \
; test -s error.flag && (echo Some command failed: ; cat error.flag)
hello world
And now with the second cat replaced by false:
% rm -f error.flag; echo hello world \
| (cat || echo "First command failed: $?" >> error.flag) \
| (false || echo "Second command failed: $?" >> error.flag) \
| (cat || echo "Third command failed: $?" >> error.flag) \
; test -s error.flag && (echo Some command failed: ; cat error.flag)
Some command failed:
Second command failed: 1
First command failed: 141
Please note the first cat fails as well, because its stdout gets closed on it (SIGPIPE, hence status 141 = 128 + 13). The order of the failed commands in the log is correct in this example, but don't rely on it.
This method allows for capturing stdout and stderr for the individual commands so you can then dump that as well into a log file if an error occurs, or just delete it if no error (like the output of dd).
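If the caller needs a real exit status rather than a log, one possible epilogue (a sketch):
test -s error.flag && { cat error.flag >&2; rm -f error.flag; exit 1; }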

It may sometimes be simpler and clearer to use an external command, rather than digging into the details of bash. pipeline, from the minimal process scripting language execline, exits with the return code of the second command*, just like a sh pipeline does, but unlike sh, it allows reversing the direction of the pipe, so that we can capture the return code of the producer process (the below is all on the sh command line, but with execline installed):
$ # using the full execline grammar with the execlineb parser:
$ execlineb -c 'pipeline { echo "hello world" } tee out.txt'
hello world
$ cat out.txt
hello world
$ # for these simple examples, one can forego the parser and just use "" as a separator
$ # traditional order
$ pipeline echo "hello world" "" tee out.txt
hello world
$ # "write" order (second command writes rather than reads)
$ pipeline -w tee out.txt "" echo "hello world"
hello world
$ # pipeline execs into the second command, so that's the RC we get
$ pipeline -w tee out.txt "" false; echo $?
1
$ pipeline -w tee out.txt "" true; echo $?
0
$ # output and exit status
$ pipeline -w tee out.txt "" sh -c "echo 'hello world'; exit 42"; echo "RC: $?"
hello world
RC: 42
$ cat out.txt
hello world
Using pipeline differs from native bash pipelines in the same ways as the bash process substitution used in answer #43972501.
* Actually pipeline doesn't exit at all unless there is an error. It executes into the second command, so it's the second command that does the returning.

Why not use stderr? Like so:
(
# Our long-running process that exits abnormally
( for i in {1..100} ; do echo ploop ; sleep 0.5 ; done ; exit 5 )
echo $? 1>&2 # We pass the exit status of our long-running process to stderr (fd 2).
) | tee ploop.out
So ploop.out receives the stdout. stderr receives the exit status of the long running process. This has the benefit of being completely POSIX-compatible.
(Well, with the exception of the range expression in the example long-running process, but that's not really relevant.)
Here's what this looks like:
...
ploop
ploop
ploop
ploop
ploop
ploop
ploop
ploop
ploop
ploop
5
Note that the return code 5 does not get output to the file ploop.out.

Capture stdout to variable and get the exit statuses of foreground pipe

I want to execute a command (say ls) and sed its output, then save the stdout to a variable, like this,
OUT=$(ls | sed -n -e 's/regexp/replacement/p')
After this, if I try to access the $PIPESTATUS array, I get only 0 (which is the same as $?). So, how can I get both $PIPESTATUS and the entire piped command's stdout?
Note:
If I only execute those piped commands and don't capture the stdout (like ls | sed -n -e 's/regexp/replacement/p'), I get the expected exit statuses in $PIPESTATUS (like 0 0)
If I only execute a single command (without piping multiple commands) using command substitution and capture the stdout (like OUT=$(ls)), I get the expected single exit status in $PIPESTATUS (which is the same as $?)
P.S. I know, I could run the command 2 times (first to capture the stdout, second to access $PIPESTATUS without using Command Substitution), but is there a way to get both in single execution?
You can:
Use a temporary file to pass PIPESTATUS.
tmp=$(mktemp)
out=$(pipeline; echo "${PIPESTATUS[@]}" > "$tmp")
PIPESTATUS=($(<"$tmp")) # Note: PIPESTATUS is overwritten each command...
rm "$tmp"
Use a temporary file to pass out.
tmp=$(mktemp)
pipeline > "$tmp"
out=$(<"$tmp")
rm "$tmp"
Interleave output with pipestatus. For example, reserve the part from the last newline character to the end for PIPESTATUS. To preserve the original return status, I think some temporary variables are needed:
out=$(pipeline; tmp=("${PIPESTATUS[@]}") ret=$?; echo $'\n' "${tmp[@]}"; exit "$ret")
pipestatus=(${out##*$'\n'})
out="${out%$'\n'*}"
out="${out%%$'\n'}" # remove trailing newlines like command substitution does
tested with:
out=$(false | true | false | echo 123; echo $'\n' "${PIPESTATUS[@]}");
pipestatus=(${out##*$'\n'});
out="${out%$'\n'*}"; out="${out%%$'\n'}";
echo out="$out" PIPESTATUS="${pipestatus[@]}"
# out=123 PIPESTATUS=1 0 1 0
Notes:
Upper-case variable names should by convention be reserved for exported variables.

Stop a bash command as soon as a (specific) error is written to stderr [duplicate]

I need to start a process, let's say foo. I would like to see the stdout/stderr as normal, but grep the stderr for string bar. Once bar is found in the
stderr foo should be killed.
Is this possible?
I initially wrote a way to do this that involved stream swizzling, but it wasn't very good. Some of the comments relate to that version. Check the history if you're curious.
Here's a way to do this:
(PIDFILE=$(mktemp /tmp/foo.XXXXXX) && trap "rm $PIDFILE" 0 \
&& { foo \
2> >(tee >(grep -q bar && kill $(cat $PIDFILE)) >&2) \
& PID=$! && echo $PID >$PIDFILE ; wait $PID || true; })
Good old-fashioned nightmare fuel. What's happening here?
The outermost parentheses put the whole thing in a subshell; this constrains the scope of variables, for purposes of hygiene
We create a temporary file, using a syntax which works with both GNU and BSD mktemp, and call it PIDFILE
We set up a catch-all exit trap (which runs when the outermost subshell exits) to remove the file named by PIDFILE, again for hygiene
We run foo; this is done in a compound statement so that & binds to foo and not to the whole preceding pipeline
We redirect foo's standard error into a process substitution which waits for bar to appear and then kills foo (of which more later)
We capture foo's PID into a variable, write it to the file named by PIDFILE, then wait for it, so that the whole command waits for foo to exit before itself exiting; the || true discards the error exit status of foo when that happens.
The code inside the process substitution works as follows:
First, tee the input (foo's standard error), redirecting tee's standard output to standard error, so that foo's standard error does indeed appear on standard error
Send the copy of the input that would normally go to a file to another process substitution instead (a process substitution within a process substitution)
Within the deeper process substitution, firstly run grep -q on the input, which looks for the specified pattern, and exits as soon as it finds it (or when it reaches the end of the stream), without printing anything, after which (if it found the string and exited successfully) the shell goes on to ...
kill the process whose PID is captured in the file named by PIDFILE, namely foo
Tom Anderson’s answer is quite good, but the kill $(cat $PIDFILE) will only happen on my system if foo terminated on its own, or through Ctrl-C. The following solution works for me
while read g
do
    if [[ $g =~ bar ]]
    then
        kill $!
    fi
done < <(
    exec foo 2> >(tee /dev/tty)
)
Use Expect to Monitor Standard Error
Expect is designed for taking actions based on output from a process. The simplest solution is to simply let Expect start the process, then exit when it sees the expected output. For example:
expect -c 'set msg {Saw "foo" on stderr. Exiting process.}
spawn /bin/bash -c "echo foo >&2; sleep 10"
expect "foo" { puts $msg; exit }'
If the spawned process ends normally (e.g. before "foo" is seen), then the Expect script will exit, too.
Just as an alternative to the other answer, one way would be to use bash's coproc facility:
{ coproc FOO { foo; } 2>&1 1>&3; } 3>&1
CHILD=$!
while read line <&${FOO[0]}; do
    if echo "$line" | grep -q bar; then
        kill $CHILD
    else
        echo "$line"
    fi
done
That's clearly bash-specific, though.
I actually managed to figure out a way to do this without PID files or co-routines and in a way that should work in all POSIX-compatible shells (I've tried bash and dash). At least on systems that support /dev/fd/, but that should be pretty much all of them.
It is a bit convoluted, though, so I'm not sure if it is to your liking.
( # A
( # B
( /tmp/foo 2>&1 1>&3 & echo $! >&4 ) | # C
( tee /dev/fd/2 | ( grep -q bar && echo fin >&4 ) ) # D and E
) 4>&1 | ( # F
read CHILD
read STATUS
if [ "$STATUS" = fin ]; then
kill $CHILD
fi
)
) 3>&1
To explain the numerous subshells used herein:
The body of A runs with the normal stdout duplicated to fd 3. It runs the subshells B and F with the stdout of B piped to the stdin of F.
The body of B runs with the pipe from A duplicated on fd 4.
C runs your actual foo command, with its stderr connected to a pipe from C to D and its stdout duplicated from fd 3; that is, restored to the global stdout. It then writes the PID of foo to fd 4; that is, to the pipe that subshell F has on its stdin.
D runs a tee command receiving, from the pipe, whatever foo prints on its stderr. It copies that output to both /dev/fd/2 (in order to have it displayed on the global stderr) and to a pipe connected to subshell E.
E greps for bar and then, when found, writes fin on fd 4, that is, to the pipe that F has on its stdin. Note the &&, making sure that no fin is written if grep encounters EOF without having found bar.
F, then, reads the PID from C and the fin terminator from E. If the fin terminator was properly output, it kills foo.
EDIT: Fixed the missing tee to copy foo's stderr to the real stderr.

piping output through sed but retain exit status [duplicate]

This question already has answers here:
Pipe output and capture exit status in Bash
I pipe the output of a long-running build process through sed for syntax-highlighting, implemented as a wrapper around "mvn".
Further, I have a "monitor" script that notifies me on the desktop when the build is finished. The monitor script checks the exit state of its argument and reports "Success" or "Failure".
Because the maven output is piped through sed, the exit status is always "ok", even when the build fails.
How can I pipe the correct exit status through sed as well?
Are there alternatives?
Maybe the PIPESTATUS variable can help.
If you are using Bash, there's an option to use the set -o pipefail option, but since it's bash dependent, it's not portable, and won't work from a crontab, unless you wrap the whole thing in a bash env (bad solution).
http://bclary.com/blog/2006/07/20/pipefail-testing-pipeline-exit-codes/
This is a well known pain in the rear. If you are using bash (and probably many other modern sh variants), you can access the PIPESTATUS array to get the return value of a program earlier in the pipe. (Typically, the return value of the pipe is the return value of the last program in the pipe.) If you are using a shell that doesn't have PIPESTATUS (or if you want portability), you can do something like this:
#!/bin/sh
# run 'echo foo | false | sed s/f/t/', recording the status
# of false in RV
eval $( { { echo foo | false; printf RV=$? >&4; } |
          sed s/f/t/ >&3; } 4>&1; ) 3>&1
echo RV=$RV
# run 'echo foo | cat | sed s/f/t/', recording the status
# of cat in RV
eval $( { { echo foo | cat; printf RV=$? >&4; } |
          sed s/f/t/ >&3; } 4>&1; ) 3>&1
echo RV=$RV
In each case, RV will contain the return value of the false and the cat, respectively.
Bastil, because the pipe doesn't care about the exit status, you can only know whether sed exits sanely or not. I would enhance the sed script (or perhaps consider using a 3-liner Perl script) to exit with a failure status if the expected text isn't found, something like in pseudocode:
read($stdin)
if blank
    exit(1) // output was blank, or on $stderr
else
    regular expression substitution here
end
// natural exit success here
You could do it as a Perl one-liner, and the same can be done in a sed script (but not in a sed one-liner, as far as I know).
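A sketch of the Perl variant (the highlight regex is hypothetical; the filter exits non-zero if no line matched):
mvn blah 2>&1 | perl -pe '$found ||= s/ERROR/\e[31mERROR\e[0m/; END { exit($found ? 0 : 1) }'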
Perhaps you could use a named pipe? Here's an example:
FIFODIR=`mktemp -d`
FIFO=$FIFODIR/fifo
mkfifo $FIFO
cat $FIFO & # An arbitrary pipeline
if false > $FIFO
then
echo "Build succeeded"
else
echo "Build failed" # This line WILL execute
fi
rm -r $FIFODIR
A week later I got a solution:
Originally I wanted to do
monitor "mvn blah | sed -e SomeHighlightRegEx"
where monitor would react to the exit status of sed (instead of mvn).
It's easier to do
monitor "mvn blah" | sed -e SomeHiglightRegEx
Note that this pipes the output of monitor through sed, while the monitor script reacts to the status of mvn.
Thanks anyway for the other ideas.
