Is it possible to store or capture stdout and stderr in different variables, without using a temp file? Right now I do this to get stdout in out and stderr in err when running some_command, but I'd
like to avoid the temp file.
error_file=$(mktemp)
out=$(some_command 2>$error_file)
err=$(< $error_file)
rm $error_file
Ok, it got a bit ugly, but here is a solution:
unset t_std t_err
eval "$( (echo std; echo err >&2) \
2> >(readarray -t t_err; typeset -p t_err) \
> >(readarray -t t_std; typeset -p t_std) )"
where (echo std; echo err >&2) needs to be replaced by the actual command. The output of stdout is saved into the array $t_std line by line, omitting the newlines (the -t), and stderr into $t_err.
If you don't like arrays you can do
unset t_std t_err
eval "$( (echo std; echo err >&2 ) \
2> >(t_err=$(cat); typeset -p t_err) \
> >(t_std=$(cat); typeset -p t_std) )"
which pretty much mimics the behavior of var=$(cmd), except for the value of $?, which takes us to the last modification:
unset t_std t_err t_ret
eval "$( (echo std; echo err >&2; exit 2 ) \
2> >(t_err=$(cat); typeset -p t_err) \
> >(t_std=$(cat); typeset -p t_std); t_ret=$?; typeset -p t_ret )"
Here $? is preserved in $t_ret.
Tested on Debian wheezy using GNU bash, Version 4.2.37(1)-release (i486-pc-linux-gnu).
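To make the calling pattern concrete, here is a minimal sketch of the last variant in action; my_cmd is a hypothetical stand-in for the real command:
my_cmd() { echo 'to stdout'; echo 'to stderr' >&2; return 2; }   # hypothetical test command
unset t_std t_err t_ret
eval "$( (my_cmd) \
2> >(t_err=$(cat); typeset -p t_err) \
> >(t_std=$(cat); typeset -p t_std); t_ret=$?; typeset -p t_ret )"
printf 'exit=%s stdout=%s stderr=%s\n' "$t_ret" "$t_std" "$t_err"   # exit=2 ...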
I think before saying “you can't” do something, people should at least give it a try with their own hands…
Simple and clean solution, without using eval or anything exotic
1. A minimal version
{
IFS=$'\n' read -r -d '' CAPTURED_STDERR;
IFS=$'\n' read -r -d '' CAPTURED_STDOUT;
} < <((printf '\0%s\0' "$(some_command)" 1>&2) 2>&1)
Requires: printf, read
2. A simple test
A dummy script for producing stdout and stderr: useless.sh
#!/bin/bash
#
# useless.sh
#
echo "This is stderr" 1>&2
echo "This is stdout"
The actual script that will capture stdout and stderr: capture.sh
#!/bin/bash
#
# capture.sh
#
{
IFS=$'\n' read -r -d '' CAPTURED_STDERR;
IFS=$'\n' read -r -d '' CAPTURED_STDOUT;
} < <((printf '\0%s\0' "$(./useless.sh)" 1>&2) 2>&1)
echo 'Here is the captured stdout:'
echo "${CAPTURED_STDOUT}"
echo
echo 'And here is the captured stderr:'
echo "${CAPTURED_STDERR}"
echo
Output of capture.sh
Here is the captured stdout:
This is stdout
And here is the captured stderr:
This is stderr
3. How it works
The command
(printf '\0%s\0' "$(some_command)" 1>&2) 2>&1
sends the standard output of some_command to printf '\0%s\0', thus creating the string \0${stdout}\n\0 (where \0 is a NUL byte and \n is a new line character); the string \0${stdout}\n\0 is then redirected to the standard error, where the standard error of some_command was already present, thus composing the string ${stderr}\n\0${stdout}\n\0, which is then redirected back to the standard output.
Afterwards, the command
IFS=$'\n' read -r -d '' CAPTURED_STDERR;
starts reading the string ${stderr}\n\0${stdout}\n\0 up until the first NUL byte and saves the content into ${CAPTURED_STDERR}. Then the command
IFS=$'\n' read -r -d '' CAPTURED_STDOUT;
keeps reading the same string up to the next NUL byte and saves the content into ${CAPTURED_STDOUT}.
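To see the two reads at work in isolation, here is a tiny self-contained demo of the same mechanics (the literal strings stand in for the real streams):
printf 'fake stderr\n\0fake stdout\n\0' | {
    IFS=$'\n' read -r -d '' e
    IFS=$'\n' read -r -d '' o
    printf 'stderr=[%s] stdout=[%s]\n' "$e" "$o"   # stderr=[fake stderr] stdout=[fake stdout]
}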
4. Making it unbreakable
The solution above relies on a NUL byte for the delimiter between stderr and stdout, therefore it will not work if for any reason stderr contains other NUL bytes.
Although that will rarely happen, it is possible to make the script completely unbreakable by stripping all possible NUL bytes from stdout and stderr before passing both outputs to read (sanitization) – NUL bytes would anyway get lost, as it is not possible to store them into shell variables:
{
IFS=$'\n' read -r -d '' CAPTURED_STDOUT;
IFS=$'\n' read -r -d '' CAPTURED_STDERR;
} < <((printf '\0%s\0' "$((some_command | tr -d '\0') 3>&1- 1>&2- 2>&3- | tr -d '\0')" 1>&2) 2>&1)
Requires: printf, read, tr
5. Preserving the exit status – the blueprint (without sanitization)
After thinking a bit about the ultimate approach, I have come out with a solution that uses printf to cache both stdout and the exit code as two different arguments, so that they never interfere.
The first thing I did was outlining a way to communicate the exit status to the third argument of printf, and this was something very easy to do in its simplest form (i.e. without sanitization).
{
IFS=$'\n' read -r -d '' CAPTURED_STDERR;
IFS=$'\n' read -r -d '' CAPTURED_STDOUT;
(IFS=$'\n' read -r -d '' _ERRNO_; exit ${_ERRNO_});
} < <((printf '\0%s\0%d\0' "$(some_command)" "${?}" 1>&2) 2>&1)
Requires: exit, printf, read
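A quick way to verify the blueprint: replace some_command with a hypothetical dummy that writes to both streams and exits non-zero, then inspect the three captured values:
some_command() { echo 'out line'; echo 'err line' >&2; return 3; }   # hypothetical dummy
{
    IFS=$'\n' read -r -d '' CAPTURED_STDERR;
    IFS=$'\n' read -r -d '' CAPTURED_STDOUT;
    (IFS=$'\n' read -r -d '' _ERRNO_; exit ${_ERRNO_});
} < <((printf '\0%s\0%d\0' "$(some_command)" "${?}" 1>&2) 2>&1)
echo "exit=${?} out=${CAPTURED_STDOUT} err=${CAPTURED_STDERR}"   # exit=3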
6. Preserving the exit status with sanitization – unbreakable (rewritten)
Things get very messy though when we try to introduce sanitization. Launching tr for sanitizing the streams does in fact overwrite our previous exit status, so apparently the only solution is to redirect the latter to a separate descriptor before it gets lost, keep it there until tr does its job twice, and then redirect it back to its place.
After some quite acrobatic redirections between file descriptors, this is what I came out with.
The code below is a rewriting of the example that I have removed. It also sanitizes possible NUL bytes in the streams, so that read can always work properly.
{
IFS=$'\n' read -r -d '' CAPTURED_STDOUT;
IFS=$'\n' read -r -d '' CAPTURED_STDERR;
(IFS=$'\n' read -r -d '' _ERRNO_; exit ${_ERRNO_});
} < <((printf '\0%s\0%d\0' "$(((({ some_command; echo "${?}" 1>&3-; } | tr -d '\0' 1>&4-) 4>&2- 2>&1- | tr -d '\0' 1>&4-) 3>&1- | exit "$(cat)") 4>&1-)" "${?}" 1>&2) 2>&1)
Requires: exit, printf, read, tr
This solution is really robust. The exit code is always kept separated in a different descriptor until it reaches printf directly as a separate argument.
7. The ultimate solution – a general purpose function with exit status
We can also transform the code above to a general purpose function.
# SYNTAX:
# catch STDOUT_VARIABLE STDERR_VARIABLE COMMAND [ARG1[ ARG2[ ...[ ARGN]]]]
catch() {
{
IFS=$'\n' read -r -d '' "${1}";
IFS=$'\n' read -r -d '' "${2}";
(IFS=$'\n' read -r -d '' _ERRNO_; return ${_ERRNO_});
} < <((printf '\0%s\0%d\0' "$(((({ shift 2; "${@}"; echo "${?}" 1>&3-; } | tr -d '\0' 1>&4-) 4>&2- 2>&1- | tr -d '\0' 1>&4-) 3>&1- | exit "$(cat)") 4>&1-)" "${?}" 1>&2) 2>&1)
}
Requires: cat, exit, printf, read, shift, tr
ChangeLog: 2022-06-17 // Replaced ${3} with shift 2; ${@} after Pavel Tankov's comment (Bash-only). 2023-01-18 // Replaced ${@} with "${@}" after cbugk's comment.
With the catch function we can launch the following snippet,
catch MY_STDOUT MY_STDERR './useless.sh'
echo "The \`./useless.sh\` program exited with code ${?}"
echo
echo 'Here is the captured stdout:'
echo "${MY_STDOUT}"
echo
echo 'And here is the captured stderr:'
echo "${MY_STDERR}"
echo
and get the following result:
The `./useless.sh` program exited with code 0
Here is the captured stdout:
This is stdout 1
This is stdout 2
And here is the captured stderr:
This is stderr 1
This is stderr 2
8. What happens in the last examples
Here follows a fast schematization:
some_command is launched: we then have some_command's stdout on the descriptor 1, some_command's stderr on the descriptor 2 and some_command's exit code redirected to the descriptor 3
stdout is piped to tr (sanitization)
stderr is swapped with stdout (using temporarily the descriptor 4) and piped to tr (sanitization)
the exit code (descriptor 3) is swapped with stderr (now descriptor 1) and piped to exit $(cat)
stderr (now descriptor 3) is redirected to the descriptor 1, and expanded as the second argument of printf
the exit code of exit $(cat) is captured by the third argument of printf
the output of printf is redirected to the descriptor 2, where stdout was already present
the concatenation of stdout and the output of printf is piped to read
9. The POSIX-compliant version #1 (breakable)
Process substitutions (the < <() syntax) are not POSIX-standard (although they de facto are). In a shell that does not support the < <() syntax the only way to reach the same result is via the <<EOF … EOF syntax. Unfortunately this does not allow us to use NUL bytes as delimiters, because these get automatically stripped out before reaching read. We must use a different delimiter. The natural choice falls onto the CTRL+Z character (ASCII character no. 26). Here is a breakable version (outputs must never contain the CTRL+Z character, or otherwise they will get mixed).
_CTRL_Z_=$'\cZ'
{
IFS=$'\n'"${_CTRL_Z_}" read -r -d "${_CTRL_Z_}" CAPTURED_STDERR;
IFS=$'\n'"${_CTRL_Z_}" read -r -d "${_CTRL_Z_}" CAPTURED_STDOUT;
(IFS=$'\n'"${_CTRL_Z_}" read -r -d "${_CTRL_Z_}" _ERRNO_; exit ${_ERRNO_});
} <<EOF
$((printf "${_CTRL_Z_}%s${_CTRL_Z_}%d${_CTRL_Z_}" "$(some_command)" "${?}" 1>&2) 2>&1)
EOF
Requires: exit, printf, read
Note: As shift is Bash-only, in this POSIX-compliant version command + arguments must appear under the same quotes.
10. The POSIX-compliant version #2 (unbreakable, but not as good as the non-POSIX one)
And here is its unbreakable version, directly in function form (if either stdout or stderr contain CTRL+Z characters, the stream will be truncated, but will never be exchanged with another descriptor).
_CTRL_Z_=$'\cZ'
# SYNTAX:
# catch_posix STDOUT_VARIABLE STDERR_VARIABLE COMMAND
catch_posix() {
{
IFS=$'\n'"${_CTRL_Z_}" read -r -d "${_CTRL_Z_}" "${1}";
IFS=$'\n'"${_CTRL_Z_}" read -r -d "${_CTRL_Z_}" "${2}";
(IFS=$'\n'"${_CTRL_Z_}" read -r -d "${_CTRL_Z_}" _ERRNO_; return ${_ERRNO_});
} <<EOF
$((printf "${_CTRL_Z_}%s${_CTRL_Z_}%d${_CTRL_Z_}" "$(((({ ${3}; echo "${?}" 1>&3-; } | cut -z -d"${_CTRL_Z_}" -f1 | tr -d '\0' 1>&4-) 4>&2- 2>&1- | cut -z -d"${_CTRL_Z_}" -f1 | tr -d '\0' 1>&4-) 3>&1- | exit "$(cat)") 4>&1-)" "${?}" 1>&2) 2>&1)
EOF
}
Requires: cat, cut, exit, printf, read, tr
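Hypothetical usage, mirroring the Bash example from § 7 (remember that here the command and its arguments must share the same quotes):
catch_posix MY_STDOUT MY_STDERR './useless.sh'
printf 'exit=%d\nstdout=%s\nstderr=%s\n' "$?" "$MY_STDOUT" "$MY_STDERR"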
Answer's history
Here is a previous version of catch() before Pavel Tankov's comment (this version requires the additional arguments to be quoted together with the command):
# SYNTAX:
# catch STDOUT_VARIABLE STDERR_VARIABLE COMMAND [ARG1[ ARG2[ ...[ ARGN]]]]
catch() {
{
IFS=$'\n' read -r -d '' "${1}";
IFS=$'\n' read -r -d '' "${2}";
(IFS=$'\n' read -r -d '' _ERRNO_; return ${_ERRNO_});
} < <((printf '\0%s\0%d\0' "$(((({ shift 2; ${@}; echo "${?}" 1>&3-; } | tr -d '\0' 1>&4-) 4>&2- 2>&1- | tr -d '\0' 1>&4-) 3>&1- | exit "$(cat)") 4>&1-)" "${?}" 1>&2) 2>&1)
}
Requires: cat, exit, printf, read, tr
Furthermore, I replaced an old example for propagating the exit status to the current shell, because, as Andy had pointed out in the comments, it was not as “unbreakable” as it was supposed to be (since it did not use printf to buffer one of the streams). For the record I paste the problematic code here:
Preserving the exit status (still unbreakable)
The following variant propagates also the exit status of some_command to the current shell:
{
IFS= read -r -d '' CAPTURED_STDOUT;
IFS= read -r -d '' CAPTURED_STDERR;
(IFS= read -r -d '' CAPTURED_EXIT; exit "${CAPTURED_EXIT}");
} < <((({ { some_command ; echo "${?}" 1>&3; } | tr -d '\0'; printf '\0'; } 2>&1- 1>&4- | tr -d '\0' 1>&4-) 3>&1- | xargs printf '\0%s\0' 1>&4-) 4>&1-)
Requires: printf, read, tr, xargs
Later, Andy submitted the following “suggested edit” for capturing the exit code:
Simple and clean solution saving the exit value
We can add to the end of stderr, a third piece of information, another NUL plus the exit status of the command. It will be outputted after stderr but before stdout
{
IFS= read -r -d '' CAPTURED_STDERR;
IFS= read -r -d '' CAPTURED_EXIT;
IFS= read -r -d '' CAPTURED_STDOUT;
} < <((printf '\0%s\n\0' "$(some_command; printf '\0%d' "${?}" 1>&2)" 1>&2) 2>&1)
His solution seemed to work, but had the minor problem that the exit status needed to be placed as the last fragment of the string, so that we are able to launch exit "${CAPTURED_EXIT}" within round brackets and not pollute the global scope, as I had tried to do in the removed example. The other problem was that, as the output of his innermost printf got immediately appended to the stderr of some_command, we could no more sanitize possible NUL bytes in stderr, because among these now there was also our NUL delimiter.
Trying to find the right solution to this problem was what led me to write § 5. Preserving the exit status – the blueprint (without sanitization), and the following sections.
This is for catching stdout and stderr in different variables. If you only want to catch stderr, leaving stdout as-is, there is a better and shorter solution.
To sum everything up for the benefit of the reader, here is an
Easy Reusable bash Solution
This version does use subshells and runs without tempfiles. (For a tempfile version which runs without subshells, see my other answer.)
: catch STDOUT STDERR cmd args..
catch()
{
eval "$({
__2="$(
{ __1="$("${@:3}")"; } 2>&1;
ret=$?;
printf '%q=%q\n' "$1" "$__1" >&2;
exit $ret
)";
ret="$?";
printf '%s=%q\n' "$2" "$__2" >&2;
printf '( exit %q )' "$ret" >&2;
} 2>&1 )";
}
Example use:
dummy()
{
echo "$3" >&2
echo "$2" >&1
return "$1"
}
catch stdout stderr dummy 3 $'\ndiffcult\n data \n\n\n' $'\nother\n difficult \n data \n\n'
printf 'ret=%q\n' "$?"
printf 'stdout=%q\n' "$stdout"
printf 'stderr=%q\n' "$stderr"
this prints
ret=3
stdout=$'\ndiffcult\n data '
stderr=$'\nother\n difficult \n data '
So it can be used without deeper thinking about it. Just put catch VAR1 VAR2 in front of any command args.. and you are done.
Some if cmd args..; then will become if catch VAR1 VAR2 cmd args..; then. Really nothing complex.
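For instance, reusing the dummy function above, a sketch of the conditional form looks like this:
if catch stdout stderr dummy 0 'all good' 'just a warning'; then
    echo "success: $stdout (stderr was: $stderr)"
else
    echo "failure ($?): $stderr"
fi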
Addendum: Use in "strict mode"
catch works for me identically in strict mode. The only caveat is that the example above returns error code 3, which, in strict mode, calls the ERR trap. Hence if you run some command under set -e which is expected to return arbitrary error codes (not only 0), you need to catch the return code into some variable like && ret=$? || ret=$? as shown below:
dummy()
{
echo "$3" >&2
echo "$2" >&1
return "$1"
}
catch stdout stderr dummy 3 $'\ndifficult\n data \n\n\n' $'\nother\n difficult \n data \n\n' && ret=$? || ret=$?
printf 'ret=%q\n' "$ret"
printf 'stdout=%q\n' "$stdout"
printf 'stderr=%q\n' "$stderr"
Discussion
Q: How does it work?
It just wraps ideas from the other answers here into a function, such that it can easily be reused.
catch() basically uses eval to set the two variables. This is similar to https://stackoverflow.com/a/18086548
Consider a call of catch out err dummy 1 2a 3b:
let's skip the eval "$({ and the __2="$( for now. I will come to this later.
{ __1="$("${@:3}")"; } 2>&1; executes dummy 1 2a 3b and stores its stdout into __1 for later use. So __1 becomes 2a. It also redirects stderr of dummy to stdout, such that the outer capture can gather it
ret=$?; catches the exit code, which is 1
printf '%q=%q\n' "$1" "$__1" >&2; then outputs out=2a to stderr. stderr is used here, as the current stdout already has taken over the role of stderr of the dummy command.
exit $ret then forwards the exit code (1) to the next stage.
Now to the outer __2="$( ... )":
This catches stdout of the above, which is the stderr of the dummy call, into variable __2. (We could re-use __1 here, but I used __2 to make it less confusing.). So __2 becomes 3b
ret="$?"; catches the (returned) return code 1 (from dummy) again
printf '%s=%q\n' "$2" "$__2" >&2; then outputs err=3b to stderr. stderr is used again, as it already was used to output the other variable out=2a.
printf '( exit %q )' "$ret" >&2; then outputs the code to set the proper return value. I did not find a better way, as assigning it to a variable needs a variable name, which then cannot be used as first or second argument to catch.
Please note that, as an optimization, we could have written those 2 printf as a single one: printf '%s=%q\n( exit %q )' "$2" "$__2" "$ret" >&2.
So what do we have so far?
We have following written to stderr:
out=2a
err=3b
( exit 1 )
where out is from $1, 2a is from stdout of dummy, err is from $2, 3b is from stderr of dummy, and the 1 is from the return code from dummy.
Please note that %q in the format of printf takes care of quoting, such that the shell sees proper (single) arguments when it comes to eval. 2a and 3b are simple enough that they are copied literally.
Now to the outer eval "$({ ... } 2>&1 )";:
This executes all of the above, which outputs the 2 variable assignments and the exit, captures it (therefore the 2>&1) and parses it into the current shell using eval.
This way the 2 variables get set and the return code as well.
Q: It uses eval which is evil. So is it safe?
As long as printf %q has no bugs, it should be safe. But you always have to be very careful, just think about ShellShock.
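To see why %q is load-bearing here, compare a raw value with its %q-quoted form; the quoted form is a single shell-safe token that eval re-parses verbatim (a quick demo, not part of the original recipe):
v=$'two\nlines; rm -rf $HOME'       # hostile-looking test value
printf '%q\n' "$v"                  # prints: $'two\nlines; rm -rf $HOME'
eval "w=$(printf '%q' "$v")"        # round-trips safely through eval
[ "$v" = "$w" ] && echo identical   # prints: identical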
Q: Bugs?
No obvious bugs are known, except following:
Catching big output needs big memory and CPU, as everything goes into variables and needs to be back-parsed by the shell. So use it wisely.
As usual $(echo $'\n\n\n\n') swallows all trailing linefeeds, not only the last one. This is a POSIX requirement. If you need to get the LFs unharmed, just add some trailing character to the output and remove it afterwards, as in the following recipe (look at the trailing x, which allows reading a softlink pointing to a file which ends on a $'\n'):
target="$(readlink -e "$file")x"
target="${target%x}"
Shell variables cannot carry the NUL byte ($'\0'). Such bytes are simply ignored if they happen to occur in stdout or stderr.
The given command runs in a sub-subshell. So it has no access to $PPID, nor can it alter shell variables. You can catch a shell function, even builtins, but those will not be able to alter shell variables (as everything running within $( .. ) cannot do this). So if you need to run a function in the current shell and catch its stderr/stdout, you need to do this the usual way with tempfiles. (There are ways to do this so that interrupting the shell normally does not leave debris behind, but this is complex and deserves its own answer.)
Q: Bash version?
I think you need Bash 4 and above (due to printf %q)
Q: This still looks so awkward.
Right. Another answer here shows how it can be done in ksh much more cleanly. However I am not used to ksh, so I leave it to others to create a similar easy to reuse recipe for ksh.
Q: Why not use ksh then?
Because this is a bash solution
Q: The script can be improved
Of course you can squeeze out some bytes and create smaller or more incomprehensible solution. Just go for it ;)
Q: There is a typo. : catch STDOUT STDERR cmd args.. shall read # catch STDOUT STDERR cmd args..
Actually this is intended. : shows up in bash -x while comments are silently swallowed. So you can see where the parser is if you happen to have a typo in the function definition. It's an old debugging trick. But beware a bit, you can easily create some neat side effects within the arguments of :.
Edit: Added a couple more ; to make it more easy to create a single-liner out of catch(). And added section how it works.
Technically, named pipes aren't temporary files and nobody here mentions them. They store nothing in the filesystem and you can delete them as soon as you connect them (so you won't ever see them):
#!/bin/bash -e
foo () {
echo stdout1
echo stderr1 >&2
sleep 1
echo stdout2
echo stderr2 >&2
}
rm -f stdout stderr
mkfifo stdout stderr
foo >stdout 2>stderr & # blocks until reader is connected
exec {fdout}<stdout {fderr}<stderr # unblocks `foo &`
rm stdout stderr # filesystem objects are no longer needed
stdout=$(cat <&$fdout)
stderr=$(cat <&$fderr)
echo $stdout
echo $stderr
exec {fdout}<&- {fderr}<&- # free file descriptors, optional
You can have multiple background processes this way and asynchronously collect their stdouts and stderrs at a convenient time, etc.
If you need this for one process only, you may just as well use hardcoded fd numbers like 3 and 4, instead of the {fdout}/{fderr} syntax (which finds a free fd for you).
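A sketch of that hardcoded-fd variant, reusing foo from above (it assumes fds 3 and 4 are free):
rm -f stdout stderr
mkfifo stdout stderr
foo >stdout 2>stderr &     # blocks until a reader is connected
exec 3<stdout 4<stderr     # unblocks `foo &`
rm stdout stderr           # filesystem objects are no longer needed
stdout=$(cat <&3)
stderr=$(cat <&4)
exec 3<&- 4<&-             # free the file descriptors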
This command sets both stdout (stdval) and stderr (errval) values in the present running shell:
eval "$( execcommand 2> >(setval errval) > >(setval stdval); )"
provided this function has been defined:
function setval { printf -v "$1" "%s" "$(cat)"; declare -p "$1"; }
Change execcommand to the captured command, be it "ls", "cp", "df", etc.
All this is based on the idea that we could convert all captured values to a text line with the help of the function setval, then setval is used to capture each value in this structure:
execcommand 2> CaptureErr > CaptureOut
Convert each capture value to a setval call:
execcommand 2> >(setval errval) > >(setval stdval)
Wrap everything inside an execute call and echo it:
echo "$( execcommand 2> >(setval errval) > >(setval stdval) )"
You will get the declare calls that each setval creates:
declare -- stdval="I'm std"
declare -- errval="I'm err"
To execute that code (and get the vars set) use eval:
eval "$( execcommand 2> >(setval errval) > >(setval stdval) )"
and finally echo the set vars:
echo "std out is : |$stdval| std err is : |$errval|
It is also possible to include the return (exit) value.
A complete bash script example looks like this:
#!/bin/bash --
# The only function to declare:
function setval { printf -v "$1" "%s" "$(cat)"; declare -p "$1"; }
# a dummy function with some example values:
function dummy { echo "I'm std"; echo "I'm err" >&2; return 34; }
# Running a command to capture all values
# change execcommand to dummy or any other command to test.
eval "$( dummy 2> >(setval errval) > >(setval stdval); <<<"$?" setval retval; )"
echo "std out is : |$stdval| std err is : |$errval| return val is : |$retval|"
Jonathan has the answer. For reference, this is the ksh93 trick. (requires a non-ancient version).
function out {
echo stdout
echo stderr >&2
}
x=${ { y=$(out); } 2>&1; }
typeset -p x y # Show the values
produces
x=stderr
y=stdout
The ${ cmds;} syntax is just a command substitution that doesn't create a subshell. The commands are executed in the current shell environment. The space at the beginning is important ({ is a reserved word).
Stderr of the inner command group is redirected to stdout (so that it applies to the inner substitution). Next, the stdout of out is assigned to y, and the redirected stderr is captured by x, without the usual loss of y to a command substitution's subshell.
It isn't possible in other shells, because all constructs which capture output require putting the producer into a subshell, which in this case, would include the assignment.
update: Now also supported by mksh.
Here is an indented version of @madmurphy's very neat one-liner, showing how the solution is structured:
catch() {
{
IFS=$'\n' read -r -d '' "$out_var";
IFS=$'\n' read -r -d '' "$err_var";
(IFS=$'\n' read -r -d '' _ERRNO_; return ${_ERRNO_});
}\
< <(
(printf '\0%s\0%d\0' \
"$(
(
(
(
{ ${3}; echo "${?}" 1>&3-; } | tr -d '\0' 1>&4-
) 4>&2- 2>&1- | tr -d '\0' 1>&4-
) 3>&1- | exit "$(cat)"
) 4>&1-
)" "${?}" 1>&2
) 2>&1
)
}
Did not like the eval, so here is a solution that uses some redirection tricks to capture program output to a variable and then parses that variable to extract the different components. The -w flag sets the chunk size and influences the ordering of stdout/stderr messages in the intermediate format; 1 gives potentially high resolution at the cost of overhead.
#######
# runs "$@" and outputs both stdout and stderr on stdout, each in a prefixed format allowing them to be separately extracted into variables later.
# limitations: Bash does not allow NUL to be returned from subshells, limiting the usefulness of applying this function to commands with NUL in the output.
# example:
# var=$(keepBoth ls . notHere)
# echo ls had the exit code "$(extractOne r "$var")"
# echo ls had the stdErr of "$(extractOne e "$var")"
# echo ls had the stdOut of "$(extractOne o "$var")"
keepBoth() {
(
prefix(){
( set -o pipefail
base64 -w 1 - | (
while read c
do echo -E "$1" "$c"
done
)
)
}
( (
"$#" | prefix o >&3
echo ${PIPESTATUS[0]} | prefix r >&3
) 2>&1 | prefix e >&1
) 3>&1
)
}
extractOne() { # extract
echo "$2" | grep "^$1" | cut --delimiter=' ' --fields=2 | base64 --decode -
}
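Here is the usage from the comments in executable form (assuming keepBoth and extractOne are defined as above):
var=$(keepBoth ls . notHere)
echo "ls had the exit code $(extractOne r "$var")"
echo "ls had the stdErr of $(extractOne e "$var")"
echo "ls had the stdOut of $(extractOne o "$var")"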
For the benefit of the reader here is a solution using tempfiles.
The question was not to use tempfiles. However this might be due to the unwanted pollution of /tmp/ with tempfile in case the shell dies. In case of kill -9 some trap 'rm "$tmpfile1" "$tmpfile2"' 0 does not fire.
If you are in a situation where you can use tempfile, but want to never leave debris behind, here is a recipe.
Again it is called catch() (as my other answer) and has the same calling syntax:
catch stdout stderr command args..
# Wrappers to avoid polluting the current shell's environment with variables
: catch_read returncode FD variable
catch_read()
{
eval "$3=\"\`cat <&$2\`\"";
# You can use read instead to skip some fork()s.
# However read stops at the first NUL byte,
# also does no \n removal and needs bash 3 or above:
#IFS='' read -ru$2 -d '' "$3";
return $1;
}
: catch_1 tempfile variable command args..
catch_1()
{
{
rm -f "$1";
"${#:3}" 66<&-;
catch_read $? 66 "$2";
} 2>&1 >"$1" 66<"$1";
}
: catch stdout stderr command args..
catch()
{
catch_1 "`tempfile`" "${2:-stderr}" catch_1 "`tempfile`" "${1:-stdout}" "${#:3}";
}
What it does:
It creates two tempfiles for stdout and stderr. However it nearly immediately removes these, such that they are only around for a very short time.
catch_1() catches stdout (FD 1) into a variable and moves stderr to stdout, such that the next ("left") catch_1 can catch that.
Processing in catch is done from right to left, so the left catch_1 is executed last and catches stderr.
The worst that can happen is that some temporary files show up in /tmp/, but they are always empty in that case. (They are removed before they get filled.) Usually this should not be a problem, as under Linux tmpfs supports roughly 128K files per GB of main memory.
The given command can access and alter all local shell variables as well. So you can call a shell function which has side effects!
This only forks twice for the tempfile call.
Bugs:
Missing good error handling in case tempfile fails.
This does the usual \n removal of the shell. See comment in catch_read().
You cannot use file descriptor 66 to pipe data to your command. If you need that, use another descriptor for the redirection, like 42 (note that very old shells only offer FDs up to 9).
This cannot handle NUL bytes ($'\0') in stdout and stderr. (NUL is just ignored. For the read variant everything behind a NUL is ignored.)
FYI:
Unix allows us to access deleted files, as long as you keep some reference to them around (such as an open filehandle). This way we can open and then remove them.
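To round this off, a minimal smoke test, assuming the Debian tempfile utility is available:
catch out err ls . /nonexistent
printf 'rc=%s\nstdout=%q\nstderr=%q\n' "$?" "$out" "$err"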
In the bash realm, @madmurphy's "7. The ultimate solution – a general purpose function with exit status" is the way to go, and I've been using it massively everywhere. Based on my experience I'm contributing minor updates making it really "ultimate" also in the following two scenarios:
complex command lines now get their args correctly quoted, without the need to quote the original command, which is now naturally typed as plain tokens. (The replacement is this: ..."$(((({ "${@:3}" ; echo...)
our trusted friend "debug" options. xtrace and verbose work by injecting text into stderr... You can imagine for how long I was baffled by the erratic behaviour of scripts that seemed to work perfectly well just before the catch... The problem was actually subtler and required taking care of the xtrace and verbose options, as mentioned here https://unix.stackexchange.com/a/21944
One of my use case scenarios, where you'll get why the entire quoting mechanism was a problem is the following. Also, to detect the length of a video and do something else in case of error, I needed some debug before figuring out how this fast ffprobe command fails on the given video:
catch end err ffprobe -i "${filename}" -show_entries format=duration -v warning -of csv='p=0'
This, in my experience so far, is the ultimate ultimate ;-) one, and I hope it may serve you as well. Credits to @madmurphy and all other contributors.
catch() {
if [ "$#" -lt 3 ]; then
echo USAGE: catch STDOUT_VAR STDERR_VAR COMMAND [CMD_ARGS...]
echo 'stdout-> ${STDOUT_VAR}' 'stderr-> ${STDERR_VAR}' 'exit-> ${?}'
echo -e "\n** NOTICE: FD redirects are used to make the magic happen."
echo " Shell's xtrace (set -x) and verbose (set -v) work by redirecting to stderr, which screws the magic up."
echo " xtrace (set -x) and verbose (set -v) modes are suspended during the execution of this function."
return 1
fi
# check "verbose" option, turn if off if enabled, and save restore status USE_V
[[ ${-/v} != $- ]] && set +v && USE_V="-v" || USE_V="+v"
# check "xtrace" option, turn if off if enabled, and save restore status USE_X
[[ ${-/x} != $- ]] && set +x && USE_X="-x" || USE_X="+x"
{
IFS=$'\n' read -r -d '' "${1}";
IFS=$'\n' read -r -d '' "${2}";
# restore the "xtrace" and "verbose" options before returning
(IFS=$'\n' read -r -d '' _ERRNO_; set $USE_X; set $USE_V; return "${_ERRNO_}");
} < <((printf '\0%s\0%d\0' "$(((({ "${@:3}" ; echo "${?}" 1>&3-; } | tr -d '\0' 1>&4-) 4>&2- 2>&1- | tr -d '\0' 1>&4-) 3>&1- | exit "$(cat)") 4>&1-)" "${?}" 1>&2) 2>&1)
}
Succinctly, I believe the answer is 'No'. The capturing $( ... ) only captures standard output to the variable; there isn't a way to get the standard error captured into a separate variable. So, what you have is about as neat as it gets.
What about... =D
GET_STDERR=""
GET_STDOUT=""
get_stderr_stdout() {
GET_STDERR=""
GET_STDOUT=""
unset t_std t_err
eval "$( (eval $1) 2> >(t_err=$(cat); typeset -p t_err) > >(t_std=$(cat); typeset -p t_std) )"
GET_STDERR=$t_err
GET_STDOUT=$t_std
}
get_stderr_stdout "command"
echo "$GET_STDERR"
echo "$GET_STDOUT"
One workaround, which is hacky but perhaps more intuitive than some of the suggestions on this page, is to tag the output streams, merge them, and split afterwards based on the tags. For example, we might tag stdout with a "STDOUT" prefix:
function someCmd {
echo "I am stdout"
echo "I am stderr" 1>&2
}
ALL=$({ someCmd | sed -e 's/^/STDOUT/g'; } 2>&1)
OUT=$(echo "$ALL" | grep "^STDOUT" | sed -e 's/^STDOUT//g')
ERR=$(echo "$ALL" | grep -v "^STDOUT")
If you know that stdout and/or stderr are of a restricted form, you can come up with a tag which does not conflict with their allowed content.
WARNING: NOT (yet?) WORKING!
The following seems a possible lead to get it working without creating any temp files, and on POSIX sh only; it requires base64, however, and due to the encoding/decoding it may not be that efficient and may use "larger" memory.
Even in the simple case it would already fail when the last stderr line has no newline. This can be fixed, at least in some cases, by replacing exe with { exe ; echo >&2 ; }, i.e. adding a newline.
The main problem is however that everything seems racy. Try using an exe like:
exe()
{
cat /usr/share/hunspell/de_DE.dic
cat /usr/share/hunspell/en_GB.dic >&2
}
and you'll see that e.g. parts of the base64-encoded line are at the top of the file, parts at the end, and the non-decoded stderr stuff in the middle.
Well, even if the idea below cannot be made working (which I assume), it may serve as an anti-example for people who may falsely believe it could be made working like this.
Idea (or anti-example):
#!/bin/sh
exe()
{
echo out1
echo err1 >&2
echo out2
echo out3
echo err2 >&2
echo out4
echo err3 >&2
echo -n err4 >&2
}
r="$( { exe | base64 -w 0 ; } 2>&1 )"
echo RAW
printf '%s' "$r"
echo RAW
o="$( printf '%s' "$r" | tail -n 1 | base64 -d )"
e="$( printf '%s' "$r" | head -n -1 )"
unset r
echo
echo OUT
printf '%s' "$o"
echo OUT
echo
echo ERR
printf '%s' "$e"
echo ERR
gives (with the stderr-newline fix):
$ ./ggg
RAW
err1
err2
err3
err4
b3V0MQpvdXQyCm91dDMKb3V0NAo=RAW
OUT
out1
out2
out3
out4OUT
ERR
err1
err2
err3
err4ERR
(At least on Debian's dash and bash)
Here is a variant of @madmurphy's solution that should work for arbitrarily large stdout/stderr streams, preserve the exit value, and handle NULs in the stream (by converting them to newlines):
function buffer_plus_null()
{
local buf
IFS= read -r -d '' buf || :
echo -n "${buf}"
printf '\0'
}
{
IFS= read -r -d '' CAPTURED_STDOUT;
IFS= read -r -d '' CAPTURED_STDERR;
(IFS= read -r -d '' CAPTURED_EXIT; exit "${CAPTURED_EXIT}");
} < <((({ { some_command ; echo "${?}" 1>&3; } | tr '\0' '\n' | buffer_plus_null; } 2>&1 1>&4 | tr '\0' '\n' | buffer_plus_null 1>&4 ) 3>&1 | xargs printf '%s\0' 1>&4) 4>&1 )
Cons:
The read commands are the most expensive part of the operation. For example: find /proc on a computer running 500 processes takes 20 seconds (while the command alone takes only 0.5 seconds). It takes 10 seconds to read the first stream, and 10 seconds more to read the second, doubling the total time.
Explanation of buffer
The original solution used an argument to printf to buffer the stream; however, with the need to have the exit code come last, one solution is to buffer both stdout and stderr. I tried xargs -0 printf but then quickly started hitting "max argument length" limits. So I decided a solution was to write a quick buffer function:
Use read to store the stream in a variable
This read will terminate when the stream ends, or a null is received. Since we already removed the nulls, it ends when the stream is closed, and returns non-zero. Since this is expected behavior we add || : meaning "or true" so that the line always evaluates to true (0)
Now that I know the stream has ended, it's safe to start echoing it back out.
echo -n "${buf}" is a builtin command and thus not limited to the argument length limit
Lastly, add a null separator to the end.
This prefixes error messages (similar to the answer of @Warbo), and by that we are able to distinguish between stdout and stderr:
out=$(some_command 2> >(sed -e 's/^/stderr/g'))
err=$(echo "$out" | grep -oP "(?<=^stderr).*")
out=$(echo "$out" | grep -v '^stderr')
The (?<=string) part is called a positive lookbehind which excludes the string from the result.
How I use it
# cat ./script.sh
#!/bin/bash
# check script arguments
args=$(getopt -u -l "foo,bar" "fb" "$@" 2> >(sed -e 's/^/stderr/g') )
[[ $? -ne 0 ]] && echo -n "Error: " && echo "$args" | grep -oP "(?<=^stderr).*" && exit 1
mapfile -t args < <(xargs -n1 <<< "$args")
#
# ./script.sh --foo --bar --baz
# Error: getopt: unrecognized option '--baz'
Notes:
As you can see I don't need to filter for stdout, as the condition already caught the error and stopped the script. So if the script does not stop, $args does not contain any prefixed content.
An alternative to sed -e 's/^/stderr/g' is xargs -d '\n' -I {} echo "stderr{}".
Variant to prefix stdout AND stderr
# smbclient localhost 1> >(sed -e 's/^/std/g') 2> >(sed -e 's/^/err/g')
std
stdlocalhost: Not enough '\' characters in service
stderrUsage: smbclient [-?EgBVNkPeC] [-?|--help] [--usage]
stderr [-R|--name-resolve=NAME-RESOLVE-ORDER] [-M|--message=HOST]
stderr [-I|--ip-address=IP] [-E|--stderr] [-L|--list=HOST]
stderr [-m|--max-protocol=LEVEL] [-T|--tar=<c|x>IXFqgbNan]
stderr [-D|--directory=DIR] [-c|--command=STRING] [-b|--send-buffer=BYTES]
stderr [-t|--timeout=SECONDS] [-p|--port=PORT] [-g|--grepable]
stderr [-B|--browse] [-d|--debuglevel=DEBUGLEVEL]
stderr [-s|--configfile=CONFIGFILE] [-l|--log-basename=LOGFILEBASE]
stderr [-V|--version] [--option=name=value]
stderr [-O|--socket-options=SOCKETOPTIONS] [-n|--netbiosname=NETBIOSNAME]
stderr [-W|--workgroup=WORKGROUP] [-i|--scope=SCOPE] [-U|--user=USERNAME]
stderr [-N|--no-pass] [-k|--kerberos] [-A|--authentication-file=FILE]
stderr [-S|--signing=on|off|required] [-P|--machine-pass] [-e|--encrypt]
stderr [-C|--use-ccache] [--pw-nt-hash] service <password>
This is an addendum to Jacques Gaudin's addendum to madmurphy's answer.
Unlike the source, this uses eval to execute multi-line commands (multi-argument is OK as well, thanks to "${@}").
Another caveat is that this function will return 0 in any case, and write the exit code to a third variable instead. IMO this is more apt for catch.
#!/bin/bash
# Overwrites existing values of provided variables in any case.
# SYNTAX:
# catch STDOUT_VAR_NAME STDERR_VAR_NAME EXIT_CODE_VAR_NAME COMMAND1 [COMMAND2 [...]]
function catch() {
{
IFS=$'\n' read -r -d '' "${1}";
IFS=$'\n' read -r -d '' "${2}";
IFS=$'\n' read -r -d '' "${3}";
return 0;
}\
< <(
(printf '\0%s\0%d\0' \
"$(
(
(
(
{
shift 3;
eval "${@}";
echo "${?}" 1>&3-;
} | tr -d '\0' 1>&4-
) 4>&2- 2>&1- | tr -d '\0' 1>&4-
) 3>&1- | exit "$(cat)"
) 4>&1-
)" "${?}" 1>&2
) 2>&1
)
}
# Simulation of here-doc
MULTILINE_SCRIPT_1='cat << EOF
foo
bar
with newlines
EOF
'
# Simulation of multiple streams
# Notice the lack of semi-colons, otherwise below code
# could become a one-liner and still work
MULTILINE_SCRIPT_2='echo stdout stream
echo error stream 1>&2
'
catch out err code "${MULTILINE_SCRIPT_1}" \
'printf "wait there is more\n" 1>&2'
printf "1)\n\tSTDOUT: ${out}\n\tSTDERR: ${err}\n\tCODE: ${code}\n"
echo ''
catch out err code "${MULTILINE_SCRIPT_2}" echo this multi-argument \
form works too '1>&2' \; \(exit 5\)
printf "2)\n\tSTDOUT: ${out}\n\tSTDERR: ${err}\n\tCODE: ${code}\n"
Output:
1)
STDOUT: foo
bar
with newlines
STDERR: wait there is more
CODE: 0
2)
STDOUT: stdout stream
STDERR: error stream
this multi-argument form works too
CODE: 5
If the command 1) has no stateful side effects and 2) is computationally cheap, the easiest solution is to just run it twice. I've mainly used this for code that runs during the boot sequence, when you don't yet know if the disk is going to be working. In my case it was a tiny some_command, so there was no performance hit for running it twice, and the command had no side effects.
The main benefit is that this is clean and easy to read. The solutions here are quite clever, but I would hate to be the one that has to maintain a script containing the more complicated solutions. I'd recommend the simple run-it-twice approach if your scenario works with that, as it's much cleaner and easier to maintain.
Example:
output=$(getopt -o '' -l test: -- "$@")
errout=$(getopt -o '' -l test: -- "$@" 2>&1 >/dev/null)
if [[ -n "$errout" ]]; then
echo "Option Error: $errout"
fi
Again, this is only ok to do because getopt has no side effects. I know it's performance-safe because my parent code calls this less than 100 times during the entire program, and the user will never notice 100 getopt calls vs 200 getopt calls.
Here's a simpler variation that isn't quite what the OP wanted, but is unlike any of the other options. You can get whatever you want by rearranging the file descriptors.
Test command:
%> cat xx.sh
#!/bin/bash
echo stdout
>&2 echo stderr
which by itself does:
%> ./xx.sh
stdout
stderr
Now, print stdout, capture stderr to a variable, & log stdout to a file
%> export err=$(./xx.sh 3>&1 1>&2 2>&3 >"out")
stdout
%> cat out
stdout
%> echo $err
stderr
Or log stdout & capture stderr to a variable:
export err=$(./xx.sh 3>&1 1>out 2>&3 )
%> cat out
stdout
%> echo $err
stderr
You get the idea.
Realtime output and write to file:
#!/usr/bin/env bash
# File where store the output
log_file=/tmp/out.log
# Empty file
echo > ${log_file}
outToLog() {
# File where write (first parameter)
local f="$1"
# Start file output watcher in background
tail -f "${f}" &
# Capture background process PID
local pid=$!
# Write "stdin" to file
cat /dev/stdin >> "${f}"
# Kill background task
kill -9 ${pid}
}
(
# Long execution script example
echo a
sleep 1
echo b >&2
sleep 1
echo c >&2
sleep 1
echo d
) 2>&1 | outToLog "${log_file}"
# File result
echo '==========='
cat "${log_file}"
I've posted my solution to this problem here. It does use process substitution and requires Bash > v4 but also captures stdout, stderr and return code into variables you name in the current scope:
https://gist.github.com/pmarreck/5eacc6482bc19b55b7c2f48b4f1db4e8
The whole point of this exercise was so that I could assert on these things in a test suite. The fact that I just spent all afternoon figuring out this simple-sounding thing... I hope one of these solutions helps others!
Related
Let's say I have a script like the following:
useless.sh
echo "This Is Error" 1>&2
echo "This Is Output"
And I have another shell script:
alsoUseless.sh
./useless.sh | sed 's/Output/Useless/'
I want to capture "This Is Error", or any other stderr from useless.sh, into a variable.
Let's call it ERROR.
Notice that I am using stdout for something. I want to continue using stdout, so redirecting stderr into stdout is not helpful, in this case.
So, basically, I want to do
./useless.sh 2> $ERROR | ...
but that obviously doesn't work.
I also know that I could do
./useless.sh 2> /tmp/Error
ERROR=`cat /tmp/Error`
but that's ugly and unnecessary.
Unfortunately, if no answers turn up here that's what I'm going to have to do.
I'm hoping there's another way.
Anyone have any better ideas?
It would be neater to capture the error file thus:
ERROR=$(</tmp/Error)
The shell recognizes this and doesn't have to run 'cat' to get the data.
The bigger question is hard. I don't think there's an easy way to do it. You'd have to build the entire pipeline into the sub-shell, eventually sending its final standard output to a file, so that you can redirect the errors to standard output.
ERROR=$( { ./useless.sh | sed s/Output/Useless/ > outfile; } 2>&1 )
Note that the semi-colon is needed (in classic shells - Bourne, Korn - for sure; probably in Bash too). The '{}' does I/O redirection over the enclosed commands. As written, it would capture errors from sed too.
WARNING: Formally untested code - use at own risk.
Redirect stderr to stdout, stdout to /dev/null, and then use backticks or $() to capture the redirected stderr:
ERROR=$(./useless.sh 2>&1 >/dev/null)
alsoUseless.sh
This will allow you to pipe the output of your useless.sh script through a command such as sed and save the stderr in a variable named error. The result of the pipe is sent to stdout for display or to be piped into another command.
It sets up a couple of extra file descriptors to manage the redirections needed in order to do this.
#!/bin/bash
exec 3>&1 4>&2 #set up extra file descriptors
error=$( { ./useless.sh | sed 's/Output/Useless/' 2>&4 1>&3; } 2>&1 )
echo "The message is \"${error}.\""
exec 3>&- 4>&- # release the extra file descriptors
There are a lot of duplicates for this question, many of which have a slightly simpler usage scenario where you don't want to capture stderr and stdout and the exit code all at the same time.
if result=$(useless.sh 2>&1); then
stdout=$result
else
rc=$?
stderr=$result
fi
works for the common scenario where you expect either proper output in the case of success, or a diagnostic message on stderr in the case of failure.
Note that the shell's control statements already examine $? under the hood; so anything which looks like
cmd
if [ $? -eq 0 ]; then ...
is just a clumsy, unidiomatic way of saying
if cmd; then ...
For the benefit of the reader, this recipe here
can be re-used as oneliner to catch stderr into a variable
still gives access to the return code of the command
Sacrifices a temporary file descriptor 3 (which can be changed by you of course)
And does not expose this temporary file descriptor to the inner command
If you want to catch stderr of some command into var you can do
{ var="$( { command; } 2>&1 1>&3 3>&- )"; } 3>&1;
Afterwards you have it all:
echo "command gives $? and stderr '$var'";
If command is simple (not something like a | b) you can leave the inner {} away:
{ var="$(command 2>&1 1>&3 3>&-)"; } 3>&1;
Wrapped into an easy reusable bash function (needs Bash 4.3 or above for local -n):
: catch-stderr var cmd [args..]
catch-stderr() { local -n v="$1"; shift && { v="$("$@" 2>&1 1>&3 3>&-)"; } 3>&1; }
Explained:
local -n v="$1" makes v an alias for the variable named by "$1" (the variable given to catch-stderr)
3>&1 uses file descriptor 3 to save where stdout points
{ command; } (or "$@") then executes the command within the output capturing $(..)
Please note that the exact order is important here (doing it the wrong way shuffles the file descriptors wrongly):
2>&1 redirects stderr to the output capturing $(..)
1>&3 redirects stdout away from the output capturing $(..) back to the "outer" stdout which was saved in file descriptor 3. Note that stderr still refers to where FD 1 pointed before: To the output capturing $(..)
3>&- then closes the file descriptor 3 as it is no more needed, such that command does not suddenly have some unknown open file descriptor showing up. Note that the outer shell still has FD 3 open, but command will not see it.
The latter is important, because some programs like lvm complain about unexpected file descriptors. And lvm complains to stderr - just what we are going to capture!
You can catch any other file descriptor with this recipe, if you adapt accordingly. Except file descriptor 1 of course (here the redirection logic would be wrong, but for file descriptor 1 you can just use var=$(command) as usual).
Note that this sacrifices file descriptor 3. If you happen to need that file descriptor, feel free to change the number. But be aware that some shells (from the 1980s) might understand 99>&1 as argument 9 followed by 9>&1 (this is no problem for bash).
Also note that it is not particularly easy to make this FD 3 configurable through a variable. This makes things very unreadable:
: catch-var-from-fd-by-fd variable fd-to-catch fd-to-sacrifice command [args..]
catch-var-from-fd-by-fd()
{
local -n v="$1";
local fd1="$2" fd2="$3";
shift 3 || return;
eval exec "$fd2>&1";
v="$(eval '"$#"' "$fd1>&1" "1>&$fd2" "$fd2>&-")";
eval exec "$fd2>&-";
}
Security note: The first 3 arguments to catch-var-from-fd-by-fd must not be taken from a 3rd party. Always give them explicitly in a "static" fashion.
So no-no-no catch-var-from-fd-by-fd $var $fda $fdb $command, never do this!
If you happen to pass in a variable variable name, at least do it as follows:
local -n var="$var"; catch-var-from-fd-by-fd var 3 5 $command
This still will not protect you against every exploit, but at least helps to detect and avoid common scripting errors.
Notes:
catch-var-from-fd-by-fd var 2 3 cmd.. is the same as catch-stderr var cmd..
shift 3 || return is just some way to prevent ugly errors in case you forget to give the correct number of arguments. Perhaps terminating the shell would be another way (but this makes it hard to test from the command line).
The routine was written so that it is easier to understand. One can rewrite the function such that it does not need exec, but then it gets really ugly.
This routine can be rewritten for non-bash as well such that there is no need for local -n. However then you cannot use local variables and it gets extremely ugly!
Also note that the evals are used in a safe fashion. Usually eval is considered dangerous. However in this case it is no more evil than using "$@" (to execute arbitrary commands). However please be sure to use the exact and correct quoting as shown here (else it becomes very, very dangerous).
# command receives its input from stdin.
# command sends its output to stdout.
exec 3>&1
stderr="$(command </dev/stdin 2>&1 1>&3)"
exitcode="${?}"
echo "STDERR: $stderr"
exit ${exitcode}
POSIX
STDERR can be captured with some redirection magic:
$ { error=$( { { ls -ld /XXXX /bin | tr o Z ; } 1>&3 ; } 2>&1); } 3>&1
lrwxrwxrwx 1 rZZt rZZt 7 Aug 22 15:44 /bin -> usr/bin/
$ echo $error
ls: cannot access '/XXXX': No such file or directory
Note that piping of STDOUT of the command (here ls) is done inside the innermost { }. If you're executing a simple command (eg, not a pipe), you could remove these inner braces.
You can't pipe outside the command as piping makes a subshell in bash and zsh, and the assignment to the variable in the subshell wouldn't be available to the current shell.
bash
In bash, it would be better not to assume that file descriptor 3 is unused:
{ error=$( { { ls -ld /XXXX /bin | tr o Z ; } 1>&$tmp ; } 2>&1); } {tmp}>&1;
exec {tmp}>&- # With this syntax the FD stays open
Note that this doesn't work in zsh.
Thanks to this answer for the general idea.
A simple solution
{ ERROR=$(./useless.sh 2>&1 1>&$out); } {out}>&1
echo "-"
echo $ERROR
Will produce:
This Is Output
-
This Is Error
Iterating a bit on Tom Hale's answer, I've found it possible to wrap the redirection yoga into a function for easier reuse. For example:
#!/bin/sh
capture () {
{ captured=$( { { "$@" ; } 1>&3 ; } 2>&1); } 3>&1
}
# Example usage; capturing dialog's output without resorting to temp files
# was what motivated me to search for this particular SO question
capture dialog --menu "Pick one!" 0 0 0 \
"FOO" "Foo" \
"BAR" "Bar" \
"BAZ" "Baz"
choice=$captured
clear; echo $choice
It's almost certainly possible to simplify this further. Haven't tested especially-thoroughly, but it does appear to work with both bash and ksh.
EDIT: an alternative version of the capture function which stores the captured STDERR output into a user-specified variable (instead of relying on a global $captured), taking inspiration from Léa Gris's answer while preserving the ksh (and zsh) compatibility of the above implementation:
capture () {
if [ "$#" -lt 2 ]; then
echo "Usage: capture varname command [arg ...]"
return 1
fi
typeset var captured; captured="$1"; shift
{ read $captured <<<$( { { "$@" ; } 1>&3 ; } 2>&1); } 3>&1
}
And usage:
capture choice dialog --menu "Pick one!" 0 0 0 \
"FOO" "Foo" \
"BAR" "Bar" \
"BAZ" "Baz"
clear; echo $choice
Here's how I did it :
#
# $1 - name of the (global) variable where the contents of stderr will be stored
# $2 - command to be executed
#
captureStderr()
{
local tmpFile=$(mktemp)
$2 2> $tmpFile
eval "$1=$(< $tmpFile)"
rm $tmpFile
}
Usage example :
captureStderr err "./useless.sh"
echo -$err-
It does use a temporary file. But at least the ugly stuff is wrapped in a function.
This is an interesting problem to which I hoped there was an elegant solution. Sadly, I end up with a solution similar to Mr. Leffler, but I'll add that you can call useless from inside a Bash function for improved readability:
#!/bin/bash
function useless {
/tmp/useless.sh | sed 's/Output/Useless/'
}
ERROR=$(useless)
echo $ERROR
All other kinds of output redirection must be backed by a temporary file.
I think you want to capture stderr, stdout and the exit code. If that is your intention, you can use this code:
## Capture error when 'some_command() is executed
some_command_with_err() {
echo 'this is the stdout'
echo 'this is the stderr' >&2
exit 1
}
run_command() {
{
IFS=$'\n' read -r -d '' stderr;
IFS=$'\n' read -r -d '' stdout;
IFS=$'\n' read -r -d '' stdexit;
} < <((printf '\0%s\0%d\0' "$(some_command_with_err)" "${?}" 1>&2) 2>&1)
stdexit=${stdexit:-0};
}
echo 'Run command:'
if ! run_command; then
## Show the values
typeset -p stdout stderr stdexit
else
typeset -p stdout stderr stdexit
fi
This script captures the stderr, stdout as well as the exit code.
But Teo, how does it work?
First, we capture the stdout as well as the exitcode using printf '\0%s\0%d\0'. They are separated by the \0 aka 'null byte'.
After that, we redirect the printf to stderr by doing: 1>&2 and then we redirect all back to stdout using 2>&1. Therefore, the stdout will look like:
"<stderr>\0<stdout>\0<exitcode>\0"
Enclosing the printf command in <( ... ) performs process substitution. Process substitution allows a process's input or output to be referred to using a filename. This means <( ... ) will pipe the stdout of (printf '\0%s\0%d\0' "$(some_command_with_err)" "${?}" 1>&2) 2>&1 into the stdin of the command group using the first <.
Then, we can capture the piped stdout from the stdin of the command group with read. This command reads a line from the file descriptor stdin and splits it into fields. Only the characters found in $IFS are recognized as word delimiters. $IFS, or Internal Field Separator, is a variable that determines how Bash recognizes fields, or word boundaries, when it interprets character strings. $IFS defaults to whitespace (space, tab, and newline), but may be changed, for example, to parse a comma-separated data file. Note that $* uses the first character held in $IFS.
## Shows whitespace as a single space, ^I(horizontal tab), and newline, and display "$" at end-of-line.
echo "$IFS" | cat -vte
# Output:
# ^I$
# $
## Reads commands from string and assign any arguments to pos params
bash -c 'set w x y z; IFS=":-;"; echo "$*"'
# Output:
# w:x:y:z
for l in $(printf %b 'a b\nc'); do echo "$l"; done
# Output:
# a
# b
# c
IFS=$'\n'; for l in $(printf %b 'a b\nc'); do echo "$l"; done
# Output:
# a b
# c
That is why we defined IFS=$'\n' (newline) as delimiter.
Our script uses read -r -d '', where read -r does not allow backslashes to escape any characters, and -d '' makes read continue until the first NUL byte rather than stopping at a newline.
Finally, replace some_command_with_err with your script file and you can capture and handle the stderr, stdout as well as the exit code as you wish.
This post helped me come up with a similar solution for my own purposes:
MESSAGE=`{ echo $ERROR_MESSAGE | format_logs.py --level=ERROR; } 2>&1`
Then as long as our MESSAGE is not an empty string, we pass it on to other stuff. This will let us know if our format_logs.py failed with some kind of python exception.
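A sketch of that non-empty check (the handling branch is assumed, not part of the original):
if [ -n "$MESSAGE" ]; then
    echo "format_logs.py failed: $MESSAGE" >&2   # hypothetical handling
    exit 1
fi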
In zsh:
{ . ./useless.sh > /dev/tty } 2>&1 | read ERROR
$ echo $ERROR
( your message )
Capture AND Print stderr
ERROR=$( ./useless.sh 3>&1 1>&2 2>&3 | tee /dev/fd/2 )
Breakdown
You can use $() to capture stdout, but you want to capture stderr instead. So you swap stdout and stderr. Using fd 3 as the temporary storage in the standard swap algorithm.
If you want to capture AND print use tee to make a duplicate. In this case the output of tee will be captured by $() rather than go to the console, but stderr(of tee) will still go to the console so we use that as the second output for tee via the special file /dev/fd/2 since tee expects a file path rather than a fd number.
NOTE: That is an awful lot of redirections in a single line and the order matters. $() is grabbing the stdout of tee at the end of the pipeline and the pipeline itself routes stdout of ./useless.sh to the stdin of tee AFTER we swapped stdout and stderr for ./useless.sh.
Using stdout of ./useless.sh
The OP said he still wanted to use (not just print) stdout, like ./useless.sh | sed 's/Output/Useless/'.
No problem just do it BEFORE swapping stdout and stderr. I recommend moving it into a function or file (also-useless.sh) and calling that in place of ./useless.sh in the line above.
However, if you want to CAPTURE stdout AND stderr, then I think you have to fall back on temporary files because $() will only do one at a time and it makes a subshell from which you cannot return variables.
Improving on YellowApple's answer:
This is a Bash function to capture stderr into any variable
stderr_capture_example.sh:
#!/usr/bin/env bash
# Capture stderr from a command to a variable while maintaining stdout
# @Args:
# $1: The variable name to store the stderr output
# $2: Vararg command and arguments
# @Return:
# The command's Return-Code or 2 if missing arguments
function capture_stderr {
[ $# -lt 2 ] && return 2
local stderr="$1"
shift
{
printf -v "$stderr" '%s' "$({ "$@" 1>&3; } 2>&1)"
} 3>&1
}
# Testing with a call to erroring ls
LANG=C capture_stderr my_stderr ls "$0" ''
printf '\nmy_stderr contains:\n%s' "$my_stderr"
Testing:
bash stderr_capture_example.sh
Output:
stderr_capture_example.sh
my_stderr contains:
ls: cannot access '': No such file or directory
This function can be used to capture the returned choice of a dialog command.
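For example (hypothetical, mirroring the dialog example in the answer this improves on; dialog writes the choice to stderr):
capture_stderr choice dialog --menu "Pick one!" 0 0 0 \
    "FOO" "Foo" \
    "BAR" "Bar" \
    "BAZ" "Baz"
clear; echo "$choice"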
If you want to bypass the use of a temporary file you may be able to use process substitution. I haven't quite gotten it to work yet. This was my first attempt:
$ ./useless.sh 2> >( ERROR=$(<) )
-bash: command substitution: line 42: syntax error near unexpected token `)'
-bash: command substitution: line 42: `<)'
Then I tried
$ ./useless.sh 2> >( ERROR=$( cat <() ) )
This Is Output
$ echo $ERROR # $ERROR is empty
However
$ ./useless.sh 2> >( cat <() > asdf.txt )
This Is Output
$ cat asdf.txt
This Is Error
So the process substitution is doing generally the right thing... unfortunately, whenever I wrap STDIN inside >( ) with something in $() in an attempt to capture that to a variable, I lose the contents of $(). I think that this is because $() launches a sub process which no longer has access to the file descriptor in /dev/fd which is owned by the parent process.
Process substitution has bought me the ability to work with a data stream which is no longer in STDERR, unfortunately I don't seem to be able to manipulate it the way that I want.
$ b=$( ( a=$( (echo stdout;echo stderr >&2) ) ) 2>&1 )
$ echo "a=>$a b=>$b"
a=>stdout b=>stderr
For error proofing your commands:
execute [INVOKING-FUNCTION] [COMMAND]
execute () {
function="${1}"
command="${2}"
error=$(eval "${command}" 2>&1 >"/dev/null")
if [ ${?} -ne 0 ]; then
echo "${function}: ${error}"
exit 1
fi
}
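A hypothetical invocation (function name and paths are placeholders):
execute "backup_config" "cp /etc/passwd /readonly/target/"
echo "reached only if the copy succeeded"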
Inspired by Lean manufacturing:
Make errors impossible by design
Make steps the smallest
Finish items one by one
Make it obvious to anyone
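A hedged usage sketch (the step name and file paths are made up for illustration):
backup_db () {
    # On failure this prints "backup_db: <stderr of cp>" and exits 1
    execute "backup_db" "cp -- important.db /backups/"
}
backup_db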
I'll use the find command
find / -maxdepth 2 -iname 'tmp' -type d
as a non-superuser for the demo. It should complain 'Permission denied' when accessing the / dir.
#!/bin/bash
echo "terminal:"
{ err="$(find / -maxdepth 2 -iname 'tmp' -type d 2>&1 1>&3 3>&- | tee /dev/stderr)"; } 3>&1 | tee /dev/fd/4 2>&1; out=$(cat /dev/fd/4)
echo "stdout:" && echo "$out"
echo "stderr:" && echo "$err"
that gives output:
terminal:
find: ‘/root’: Permission denied
/tmp
/var/tmp
find: ‘/lost+found’: Permission denied
stdout:
/tmp
/var/tmp
stderr:
find: ‘/root’: Permission denied
find: ‘/lost+found’: Permission denied
The terminal output also contains the /dev/stderr content, the same way as if you were running that find command without any script. $out has the /dev/stdout content and $err has the /dev/stderr content.
use:
#!/bin/bash
echo "terminal:"
{ err="$(find / -maxdepth 2 -iname 'tmp' -type d 2>&1 1>&3 3>&-)"; } 3>&1 | tee /dev/fd/4; out=$(cat /dev/fd/4)
echo "stdout:" && echo "$out"
echo "stderr:" && echo "$err"
if you don't want to see /dev/stderr in the terminal output.
terminal:
/tmp
/var/tmp
stdout:
/tmp
/var/tmp
stderr:
find: ‘/root’: Permission denied
find: ‘/lost+found’: Permission denied
Let's say I have a script like the following:
useless.sh
echo "This Is Error" 1>&2
echo "This Is Output"
And I have another shell script:
alsoUseless.sh
./useless.sh | sed 's/Output/Useless/'
I want to capture "This Is Error", or any other stderr from useless.sh, into a variable.
Let's call it ERROR.
Notice that I am using stdout for something. I want to continue using stdout, so redirecting stderr into stdout is not helpful, in this case.
So, basically, I want to do
./useless.sh 2> $ERROR | ...
but that obviously doesn't work.
I also know that I could do
./useless.sh 2> /tmp/Error
ERROR=`cat /tmp/Error`
but that's ugly and unnecessary.
Unfortunately, if no answers turn up here that's what I'm going to have to do.
I'm hoping there's another way.
Anyone have any better ideas?
It would be neater to capture the error file thus:
ERROR=$(</tmp/Error)
The shell recognizes this and doesn't have to run 'cat' to get the data.
The bigger question is hard. I don't think there's an easy way to do it. You'd have to build the entire pipeline into the sub-shell, eventually sending its final standard output to a file, so that you can redirect the errors to standard output.
ERROR=$( { ./useless.sh | sed s/Output/Useless/ > outfile; } 2>&1 )
Note that the semi-colon is needed (in classic shells - Bourne, Korn - for sure; probably in Bash too). The '{}' does I/O redirection over the enclosed commands. As written, it would capture errors from sed too.
WARNING: Formally untested code - use at own risk.
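If you also need the transformed stdout afterwards, a sketch under the same untested caveat is to read the file back (outfile as in the command above):
ERROR=$( { ./useless.sh | sed s/Output/Useless/ > outfile; } 2>&1 )
OUTPUT=$(<outfile)   # the pipeline's final stdout, read back from the file
rm -f outfile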
Redirect stderr to stdout, stdout to /dev/null, and then use backticks or $() to capture the redirected stderr:
ERROR=$(./useless.sh 2>&1 >/dev/null)
alsoUseless.sh
This will allow you to pipe the output of your useless.sh script through a command such as sed and save the stderr in a variable named error. The result of the pipe is sent to stdout for display or to be piped into another command.
It sets up a couple of extra file descriptors to manage the redirections needed in order to do this.
#!/bin/bash
exec 3>&1 4>&2 #set up extra file descriptors
error=$( { ./useless.sh | sed 's/Output/Useless/' 2>&4 1>&3; } 2>&1 )
echo "The message is \"${error}.\""
exec 3>&- 4>&- # release the extra file descriptors
There are a lot of duplicates for this question, many of which have a slightly simpler usage scenario where you don't want to capture stderr and stdout and the exit code all at the same time.
if result=$(useless.sh 2>&1); then
stdout=$result
else
rc=$?
stderr=$result
fi
works for the common scenario where you expect either proper output in the case of success, or a diagnostic message on stderr in the case of failure.
Note that the shell's control statements already examine $? under the hood; so anything which looks like
cmd
if [ $? -eq 0 ]; then ...
is just a clumsy, unidiomatic way of saying
if cmd; then ...
For the benefit of the reader, this recipe here
can be re-used as a one-liner to catch stderr into a variable
still gives access to the return code of the command
Sacrifices a temporary file descriptor 3 (which can be changed by you of course)
And does not expose this temporary file descriptor to the inner command
If you want to catch stderr of some command into var you can do
{ var="$( { command; } 2>&1 1>&3 3>&- )"; } 3>&1;
Afterwards you have it all:
echo "command gives $? and stderr '$var'";
If command is simple (not something like a | b) you can leave the inner {} away:
{ var="$(command 2>&1 1>&3 3>&-)"; } 3>&1;
Wrapped into an easy reusable bash-function (probably needs version 3 and above for local -n):
: catch-stderr var cmd [args..]
catch-stderr() { local -n v="$1"; shift && { v="$("$@" 2>&1 1>&3 3>&-)"; } 3>&1; }
Explained:
local -n aliases "$1" (which is the variable for catch-stderr)
3>&1 uses file descriptor 3 to save where stdout currently points
{ command; } (or "$@") then executes the command within the output capturing $(..)
Please note that the exact order is important here (doing it the wrong way shuffles the file descriptors wrongly):
2>&1 redirects stderr to the output capturing $(..)
1>&3 redirects stdout away from the output capturing $(..) back to the "outer" stdout which was saved in file descriptor 3. Note that stderr still refers to where FD 1 pointed before: To the output capturing $(..)
3>&- then closes the file descriptor 3 as it is no longer needed, such that command does not suddenly have some unknown open file descriptor showing up. Note that the outer shell still has FD 3 open, but command will not see it.
The latter is important, because some programs like lvm complain about unexpected file descriptors. And lvm complains to stderr - just what we are going to capture!
You can catch any other file descriptor with this recipe, if you adapt accordingly. Except file descriptor 1 of course (here the redirection logic would be wrong, but for file descriptor 1 you can just use var=$(command) as usual).
Note that this sacrifices file descriptor 3. If you happen to need that file descriptor, feel free to change the number. But be aware, that some shells (from the 1980s) might understand 99>&1 as argument 9 followed by 9>&1 (this is no problem for bash).
Also note that it is not particularly easy to make this FD 3 configurable through a variable. This makes things very unreadable:
: catch-var-from-fd-by-fd variable fd-to-catch fd-to-sacrifice command [args..]
catch-var-from-fd-by-fd()
{
local -n v="$1";
local fd1="$2" fd2="$3";
shift 3 || return;
eval exec "$fd2>&1";
v="$(eval '"$#"' "$fd1>&1" "1>&$fd2" "$fd2>&-")";
eval exec "$fd2>&-";
}
Security note: The first 3 arguments to catch-var-from-fd-by-fd must not be taken from a 3rd party. Always give them explicitly in a "static" fashion.
So no-no-no catch-var-from-fd-by-fd $var $fda $fdb $command, never do this!
If you happen to pass in a variable variable name, at least do it as follows:
local -n var="$var"; catch-var-from-fd-by-fd var 3 5 $command
This still will not protect you against every exploit, but at least helps to detect and avoid common scripting errors.
Notes:
catch-var-from-fd-by-fd var 2 3 cmd.. is the same as catch-stderr var cmd..
shift 3 || return is just a way to prevent ugly errors in case you forget to give the correct number of arguments. Perhaps terminating the shell would be another way (but this makes it hard to test from the command line).
The routine was written so that it is easier to understand. One can rewrite the function such that it does not need exec, but then it gets really ugly.
This routine can be rewritten for non-bash shells as well, such that there is no need for local -n. However, then you cannot use local variables and it gets extremely ugly!
Also note that the evals are used in a safe fashion. Usually eval is considered dangerous. However in this case it is no more evil than using "$@" (to execute arbitrary commands). However please be sure to use the exact and correct quoting as shown here (else it becomes very, very dangerous).
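For instance, a minimal smoke test of the recipe (assuming ls reports a missing path on stderr, as GNU ls does):
catch-stderr err ls /nonexistent
echo "ls returned $? and printed on stderr: '$err'"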
# command receives its input from stdin.
# command sends its output to stdout.
exec 3>&1
stderr="$(command </dev/stdin 2>&1 1>&3)"
exitcode="${?}"
echo "STDERR: $stderr"
exit ${exitcode}
POSIX
STDERR can be captured with some redirection magic:
$ { error=$( { { ls -ld /XXXX /bin | tr o Z ; } 1>&3 ; } 2>&1); } 3>&1
lrwxrwxrwx 1 rZZt rZZt 7 Aug 22 15:44 /bin -> usr/bin/
$ echo $error
ls: cannot access '/XXXX': No such file or directory
Note that piping of STDOUT of the command (here ls) is done inside the innermost { }. If you're executing a simple command (e.g., not a pipe), you could remove these inner braces.
You can't pipe outside the command as piping makes a subshell in bash and zsh, and the assignment to the variable in the subshell wouldn't be available to the current shell.
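A quick demonstration of that pitfall in bash (default behaviour, i.e. without shopt -s lastpipe):
printf 'hello\n' | read var   # read runs in a pipeline subshell
echo "var='${var}'"           # prints var='' - the assignment was lost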
bash
In bash, it would be better not to assume that file descriptor 3 is unused:
{ error=$( { { ls -ld /XXXX /bin | tr o Z ; } 1>&$tmp ; } 2>&1); } {tmp}>&1;
exec {tmp}>&- # With this syntax the FD stays open
Note that this doesn't work in zsh.
Thanks to this answer for the general idea.
A simple solution
{ ERROR=$(./useless.sh 2>&1 1>&$out); } {out}>&1
echo "-"
echo $ERROR
Will produce:
This Is Output
-
This Is Error
Iterating a bit on Tom Hale's answer, I've found it possible to wrap the redirection yoga into a function for easier reuse. For example:
#!/bin/sh
capture () {
{ captured=$( { { "$@" ; } 1>&3 ; } 2>&1); } 3>&1
}
# Example usage; capturing dialog's output without resorting to temp files
# was what motivated me to search for this particular SO question
capture dialog --menu "Pick one!" 0 0 0 \
"FOO" "Foo" \
"BAR" "Bar" \
"BAZ" "Baz"
choice=$captured
clear; echo $choice
It's almost certainly possible to simplify this further. I haven't tested especially thoroughly, but it does appear to work with both bash and ksh.
EDIT: an alternative version of the capture function which stores the captured STDERR output into a user-specified variable (instead of relying on a global $captured), taking inspiration from Léa Gris's answer while preserving the ksh (and zsh) compatibility of the above implementation:
capture () {
if [ "$#" -lt 2 ]; then
echo "Usage: capture varname command [arg ...]"
return 1
fi
typeset var captured; captured="$1"; shift
{ read $captured <<<$( { { "$@" ; } 1>&3 ; } 2>&1); } 3>&1
}
And usage:
capture choice dialog --menu "Pick one!" 0 0 0 \
"FOO" "Foo" \
"BAR" "Bar" \
"BAZ" "Baz"
clear; echo $choice
Here's how I did it :
#
# $1 - name of the (global) variable where the contents of stderr will be stored
# $2 - command to be executed
#
captureStderr()
{
local tmpFile=$(mktemp)
$2 2> $tmpFile
eval "$1=$(< $tmpFile)"
rm $tmpFile
}
Usage example :
captureStderr err "./useless.sh"
echo -$err-
It does use a temporary file. But at least the ugly stuff is wrapped in a function.
This is an interesting problem to which I hoped there was an elegant solution. Sadly, I end up with a solution similar to Mr. Leffler, but I'll add that you can call useless from inside a Bash function for improved readability:
#!/bin/bash
function useless {
/tmp/useless.sh | sed 's/Output/Useless/'
}
ERROR=$(useless)
echo $ERROR
All other kinds of output redirection must be backed by a temporary file.
I think you want to capture stderr, stdout and exitcode if that is your intention you can use this code:
## Capture error when 'some_command' is executed
some_command_with_err() {
echo 'this is the stdout'
echo 'this is the stderr' >&2
exit 1
}
run_command() {
{
IFS=$'\n' read -r -d '' stderr;
IFS=$'\n' read -r -d '' stdout;
IFS=$'\n' read -r -d '' stdexit;
} < <((printf '\0%s\0%d\0' "$(some_command_with_err)" "${?}" 1>&2) 2>&1)
stdexit=${stdexit:-0};
}
echo 'Run command:'
if ! run_command; then
## Show the values
typeset -p stdout stderr stdexit
else
typeset -p stdout stderr stdexit
fi
This script captures stderr and stdout, as well as the exit code.
But Teo, how does it work?
First, we capture the stdout as well as the exitcode using printf '\0%s\0%d\0'. They are separated by the \0 aka 'null byte'.
After that, we redirect the printf to stderr by doing: 1>&2 and then we redirect all back to stdout using 2>&1. Therefore, the stdout will look like:
"<stderr>\0<stdout>\0<exitcode>\0"
Enclosing the printf command in <( ... ) performs process substitution. Process substitution allows a process's input or output to be referred to using a filename. This means <( ... ) will pipe the stdout of (printf '\0%s\0%d\0' "$(some_command_with_err)" "${?}" 1>&2) 2>&1 into the stdin of the command group via the first <.
Then, we can capture the piped stdout from the stdin of the command group with read. This command reads a line from the file descriptor stdin and splits it into fields. Only the characters found in $IFS are recognized as word delimiters. $IFS or Internal Field Separator is a variable that determines how Bash recognizes fields, or word boundaries, when it interprets character strings. $IFS defaults to whitespace (space, tab, and newline), but may be changed, for example, to parse a comma-separated data file. Note that $* uses the first character held in $IFS.
## Shows whitespace as a single space, ^I(horizontal tab), and newline, and display "$" at end-of-line.
echo "$IFS" | cat -vte
# Output:
# ^I$
# $
## Reads commands from string and assign any arguments to pos params
bash -c 'set w x y z; IFS=":-;"; echo "$*"'
# Output:
# w:x:y:z
for l in $(printf %b 'a b\nc'); do echo "$l"; done
# Output:
# a
# b
# c
IFS=$'\n'; for l in $(printf %b 'a b\nc'); do echo "$l"; done
# Output:
# a b
# c
That is why we defined IFS=$'\n' (newline) as delimiter.
Our script uses read -r -d '', where read -r does not allow backslashes to escape any characters, and -d '' makes read continue until a NUL byte is encountered, rather than stopping at a newline.
Finally, replace some_command_with_err with your script file and you can capture and handle the stderr, stdout, as well as the exit code as you wish.
This post helped me come up with a similar solution for my own purposes:
MESSAGE=`{ echo $ERROR_MESSAGE | format_logs.py --level=ERROR; } 2>&1`
Then as long as our MESSAGE is not an empty string, we pass it on to other stuff. This will let us know if our format_logs.py failed with some kind of python exception.
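For instance (handle_failure is a hypothetical downstream handler, not part of the original snippet):
if [ -n "$MESSAGE" ]; then
    handle_failure "$MESSAGE"   # hypothetical: react to the captured stderr
fi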
In zsh:
{ . ./useless.sh > /dev/tty } 2>&1 | read ERROR
$ echo $ERROR
( your message )
Simple and clean solution, without using eval or anything exotic
1. A minimal version
{
IFS=$'\n' read -r -d '' CAPTURED_STDERR;
IFS=$'\n' read -r -d '' CAPTURED_STDOUT;
} < <((printf '\0%s\0' "$(some_command)" 1>&2) 2>&1)
Requires: printf, read
2. A simple test
A dummy script for producing stdout and stderr: useless.sh
#!/bin/bash
#
# useless.sh
#
echo "This is stderr" 1>&2
echo "This is stdout"
The actual script that will capture stdout and stderr: capture.sh
#!/bin/bash
#
# capture.sh
#
{
IFS=$'\n' read -r -d '' CAPTURED_STDERR;
IFS=$'\n' read -r -d '' CAPTURED_STDOUT;
} < <((printf '\0%s\0' "$(./useless.sh)" 1>&2) 2>&1)
echo 'Here is the captured stdout:'
echo "${CAPTURED_STDOUT}"
echo
echo 'And here is the captured stderr:'
echo "${CAPTURED_STDERR}"
echo
Output of capture.sh
Here is the captured stdout:
This is stdout
And here is the captured stderr:
This is stderr
3. How it works
The command
(printf '\0%s\0' "$(some_command)" 1>&2) 2>&1
sends the standard output of some_command to printf '\0%s\0', thus creating the string \0${stdout}\n\0 (where \0 is a NUL byte and \n is a new line character); the string \0${stdout}\n\0 is then redirected to the standard error, where the standard error of some_command was already present, thus composing the string ${stderr}\n\0${stdout}\n\0, which is then redirected back to the standard output.
Afterwards, the command
IFS=$'\n' read -r -d '' CAPTURED_STDERR;
starts reading the string ${stderr}\n\0${stdout}\n\0 up until the first NUL byte and saves the content into ${CAPTURED_STDERR}. Then the command
IFS=$'\n' read -r -d '' CAPTURED_STDOUT;
keeps reading the same string up to the next NUL byte and saves the content into ${CAPTURED_STDOUT}.
4. Making it unbreakable
The solution above relies on a NUL byte for the delimiter between stderr and stdout, therefore it will not work if for any reason stderr contains other NUL bytes.
Although that will rarely happen, it is possible to make the script completely unbreakable by stripping all possible NUL bytes from stdout and stderr before passing both outputs to read (sanitization) – NUL bytes would anyway get lost, as it is not possible to store them into shell variables:
{
IFS=$'\n' read -r -d '' CAPTURED_STDOUT;
IFS=$'\n' read -r -d '' CAPTURED_STDERR;
} < <((printf '\0%s\0' "$((some_command | tr -d '\0') 3>&1- 1>&2- 2>&3- | tr -d '\0')" 1>&2) 2>&1)
Requires: printf, read, tr
5. Preserving the exit status – the blueprint (without sanitization)
After thinking a bit about the ultimate approach, I have come out with a solution that uses printf to cache both stdout and the exit code as two different arguments, so that they never interfere.
The first thing I did was outlining a way to communicate the exit status to the third argument of printf, and this was something very easy to do in its simplest form (i.e. without sanitization).
{
IFS=$'\n' read -r -d '' CAPTURED_STDERR;
IFS=$'\n' read -r -d '' CAPTURED_STDOUT;
(IFS=$'\n' read -r -d '' _ERRNO_; exit ${_ERRNO_});
} < <((printf '\0%s\0%d\0' "$(some_command)" "${?}" 1>&2) 2>&1)
Requires: exit, printf, read
6. Preserving the exit status with sanitization – unbreakable (rewritten)
Things get very messy though when we try to introduce sanitization. Launching tr for sanitizing the streams does in fact overwrite our previous exit status, so apparently the only solution is to redirect the latter to a separate descriptor before it gets lost, keep it there until tr does its job twice, and then redirect it back to its place.
After some quite acrobatic redirections between file descriptors, this is what I came out with.
The code below is a rewriting of the example that I have removed. It also sanitizes possible NUL bytes in the streams, so that read can always work properly.
{
IFS=$'\n' read -r -d '' CAPTURED_STDOUT;
IFS=$'\n' read -r -d '' CAPTURED_STDERR;
(IFS=$'\n' read -r -d '' _ERRNO_; exit ${_ERRNO_});
} < <((printf '\0%s\0%d\0' "$(((({ some_command; echo "${?}" 1>&3-; } | tr -d '\0' 1>&4-) 4>&2- 2>&1- | tr -d '\0' 1>&4-) 3>&1- | exit "$(cat)") 4>&1-)" "${?}" 1>&2) 2>&1)
Requires: exit, printf, read, tr
This solution is really robust. The exit code is always kept separated in a different descriptor until it reaches printf directly as a separate argument.
7. The ultimate solution – a general purpose function with exit status
We can also transform the code above to a general purpose function.
# SYNTAX:
# catch STDOUT_VARIABLE STDERR_VARIABLE COMMAND [ARG1[ ARG2[ ...[ ARGN]]]]
catch() {
{
IFS=$'\n' read -r -d '' "${1}";
IFS=$'\n' read -r -d '' "${2}";
(IFS=$'\n' read -r -d '' _ERRNO_; return ${_ERRNO_});
} < <((printf '\0%s\0%d\0' "$(((({ shift 2; "${@}"; echo "${?}" 1>&3-; } | tr -d '\0' 1>&4-) 4>&2- 2>&1- | tr -d '\0' 1>&4-) 3>&1- | exit "$(cat)") 4>&1-)" "${?}" 1>&2) 2>&1)
}
Requires: cat, exit, printf, read, shift, tr
ChangeLog: 2022-06-17 // Replaced ${3} with shift 2; ${@} after Pavel Tankov's comment (Bash-only). 2023-01-18 // Replaced ${@} with "${@}" after cbugk's comment.
With the catch function we can launch the following snippet,
catch MY_STDOUT MY_STDERR './useless.sh'
echo "The \`./useless.sh\` program exited with code ${?}"
echo
echo 'Here is the captured stdout:'
echo "${MY_STDOUT}"
echo
echo 'And here is the captured stderr:'
echo "${MY_STDERR}"
echo
and get the following result:
The `./useless.sh` program exited with code 0
Here is the captured stdout:
This is stdout 1
This is stdout 2
And here is the captured stderr:
This is stderr 1
This is stderr 2
8. What happens in the last examples
Here follows a fast schematization:
some_command is launched: we then have some_command's stdout on the descriptor 1, some_command's stderr on the descriptor 2 and some_command's exit code redirected to the descriptor 3
stdout is piped to tr (sanitization)
stderr is swapped with stdout (using temporarily the descriptor 4) and piped to tr (sanitization)
the exit code (descriptor 3) is swapped with stderr (now descriptor 1) and piped to exit $(cat)
stderr (now descriptor 3) is redirected to the descriptor 1, and expanded as the second argument of printf
the exit code of exit $(cat) is captured by the third argument of printf
the output of printf is redirected to the descriptor 2, where stdout was already present
the concatenation of stdout and the output of printf is piped to read
9. The POSIX-compliant version #1 (breakable)
Process substitutions (the < <() syntax) are not POSIX-standard (although they de facto are). In a shell that does not support the < <() syntax the only way to reach the same result is via the <<EOF … EOF syntax. Unfortunately this does not allow us to use NUL bytes as delimiters, because these get automatically stripped out before reaching read. We must use a different delimiter. The natural choice falls onto the CTRL+Z character (ASCII character no. 26). Here is a breakable version (outputs must never contain the CTRL+Z character, or otherwise they will get mixed).
_CTRL_Z_=$'\cZ'
{
IFS=$'\n'"${_CTRL_Z_}" read -r -d "${_CTRL_Z_}" CAPTURED_STDERR;
IFS=$'\n'"${_CTRL_Z_}" read -r -d "${_CTRL_Z_}" CAPTURED_STDOUT;
(IFS=$'\n'"${_CTRL_Z_}" read -r -d "${_CTRL_Z_}" _ERRNO_; exit ${_ERRNO_});
} <<EOF
$((printf "${_CTRL_Z_}%s${_CTRL_Z_}%d${_CTRL_Z_}" "$(some_command)" "${?}" 1>&2) 2>&1)
EOF
Requires: exit, printf, read
Note: As shift is Bash-only, in this POSIX-compliant version command + arguments must appear under the same quotes.
10. The POSIX-compliant version #2 (unbreakable, but not as good as the non-POSIX one)
And here is its unbreakable version, directly in function form (if either stdout or stderr contain CTRL+Z characters, the stream will be truncated, but will never be exchanged with another descriptor).
_CTRL_Z_=$'\cZ'
# SYNTAX:
# catch_posix STDOUT_VARIABLE STDERR_VARIABLE COMMAND
catch_posix() {
{
IFS=$'\n'"${_CTRL_Z_}" read -r -d "${_CTRL_Z_}" "${1}";
IFS=$'\n'"${_CTRL_Z_}" read -r -d "${_CTRL_Z_}" "${2}";
(IFS=$'\n'"${_CTRL_Z_}" read -r -d "${_CTRL_Z_}" _ERRNO_; return ${_ERRNO_});
} <<EOF
$((printf "${_CTRL_Z_}%s${_CTRL_Z_}%d${_CTRL_Z_}" "$(((({ ${3}; echo "${?}" 1>&3-; } | cut -z -d"${_CTRL_Z_}" -f1 | tr -d '\0' 1>&4-) 4>&2- 2>&1- | cut -z -d"${_CTRL_Z_}" -f1 | tr -d '\0' 1>&4-) 3>&1- | exit "$(cat)") 4>&1-)" "${?}" 1>&2) 2>&1)
EOF
}
Requires: cat, cut, exit, printf, read, tr
Answer's history
Here is a previous version of catch() before Pavel Tankov's comment (this version requires the additional arguments to be quoted together with the command):
# SYNTAX:
# catch STDOUT_VARIABLE STDERR_VARIABLE COMMAND [ARG1[ ARG2[ ...[ ARGN]]]]
catch() {
{
IFS=$'\n' read -r -d '' "${1}";
IFS=$'\n' read -r -d '' "${2}";
(IFS=$'\n' read -r -d '' _ERRNO_; return ${_ERRNO_});
} < <((printf '\0%s\0%d\0' "$(((({ shift 2; ${@}; echo "${?}" 1>&3-; } | tr -d '\0' 1>&4-) 4>&2- 2>&1- | tr -d '\0' 1>&4-) 3>&1- | exit "$(cat)") 4>&1-)" "${?}" 1>&2) 2>&1)
}
Requires: cat, exit, printf, read, tr
Furthermore, I replaced an old example for propagating the exit status to the current shell, because, as Andy had pointed out in the comments, it was not as “unbreakable” as it was supposed to be (since it did not use printf to buffer one of the streams). For the record I paste the problematic code here:
Preserving the exit status (still unbreakable)
The following variant propagates also the exit status of some_command to the current shell:
{
IFS= read -r -d '' CAPTURED_STDOUT;
IFS= read -r -d '' CAPTURED_STDERR;
(IFS= read -r -d '' CAPTURED_EXIT; exit "${CAPTURED_EXIT}");
} < <((({ { some_command ; echo "${?}" 1>&3; } | tr -d '\0'; printf '\0'; } 2>&1- 1>&4- | tr -d '\0' 1>&4-) 3>&1- | xargs printf '\0%s\0' 1>&4-) 4>&1-)
Requires: printf, read, tr, xargs
Later, Andy submitted the following “suggested edit” for capturing the exit code:
Simple and clean solution saving the exit value
We can add to the end of stderr, a third piece of information, another NUL plus the exit status of the command. It will be outputted after stderr but before stdout
{
IFS= read -r -d '' CAPTURED_STDERR;
IFS= read -r -d '' CAPTURED_EXIT;
IFS= read -r -d '' CAPTURED_STDOUT;
} < <((printf '\0%s\n\0' "$(some_command; printf '\0%d' "${?}" 1>&2)" 1>&2) 2>&1)
His solution seemed to work, but had the minor problem that the exit status needed to be placed as the last fragment of the string, so that we are able to launch exit "${CAPTURED_EXIT}" within round brackets and not pollute the global scope, as I had tried to do in the removed example. The other problem was that, as the output of his innermost printf got immediately appended to the stderr of some_command, we could no longer sanitize possible NUL bytes in stderr, because among these now there was also our NUL delimiter.
Trying to find the right solution to this problem was what led me to write § 5. Preserving the exit status – the blueprint (without sanitization), and the following sections.
This is for catching stdout and stderr in different variables. If you only want to catch stderr, leaving stdout as-is, there is a better and shorter solution.
To sum everything up for the benefit of the reader, here is an
Easy Reusable bash Solution
This version does use subshells and runs without tempfiles. (For a tempfile version which runs without subshells, see my other answer.)
: catch STDOUT STDERR cmd args..
catch()
{
eval "$({
__2="$(
{ __1="$("${#:3}")"; } 2>&1;
ret=$?;
printf '%q=%q\n' "$1" "$__1" >&2;
exit $ret
)";
ret="$?";
printf '%s=%q\n' "$2" "$__2" >&2;
printf '( exit %q )' "$ret" >&2;
} 2>&1 )";
}
Example use:
dummy()
{
echo "$3" >&2
echo "$2" >&1
return "$1"
}
catch stdout stderr dummy 3 $'\ndiffcult\n data \n\n\n' $'\nother\n difficult \n data \n\n'
printf 'ret=%q\n' "$?"
printf 'stdout=%q\n' "$stdout"
printf 'stderr=%q\n' "$stderr"
this prints
ret=3
stdout=$'\ndiffcult\n data '
stderr=$'\nother\n difficult \n data '
So it can be used without deeper thinking about it. Just put catch VAR1 VAR2 in front of any command args.. and you are done.
Some if cmd args..; then will become if catch VAR1 VAR2 cmd args..; then. Really nothing complex.
Addendum: Use in "strict mode"
catch works for me identically in strict mode. The only caveat is that the example above returns error code 3, which, in strict mode, calls the ERR trap. Hence if you run some command under set -e which is expected to return arbitrary error codes (not only 0), you need to catch the return code into some variable like && ret=$? || ret=$?, as shown below:
dummy()
{
echo "$3" >&2
echo "$2" >&1
return "$1"
}
catch stdout stderr dummy 3 $'\ndifficult\n data \n\n\n' $'\nother\n difficult \n data \n\n' && ret=$? || ret=$?
printf 'ret=%q\n' "$ret"
printf 'stdout=%q\n' "$stdout"
printf 'stderr=%q\n' "$stderr"
Discussion
Q: How does it work?
It just wraps ideas from the other answers here into a function, such that it can easily be reused.
catch() basically uses eval to set the two variables. This is similar to https://stackoverflow.com/a/18086548
Consider a call of catch out err dummy 1 2a 3b:
let's skip the eval "$({ and the __2="$( for now. I will come to this later.
__1="$("$("${#:3}")"; } 2>&1; executes dummy 1 2a 3b and stores its stdout into __1 for later use. So __1 becomes 2a. It also redirects stderr of dummy to stdout, such that the outer catch can gather stdout
ret=$?; catches the exit code, which is 1
printf '%q=%q\n' "$1" "$__1" >&2; then outputs out=2a to stderr. stderr is used here, as the current stdout already has taken over the role of stderr of the dummy command.
exit $ret then forwards the exit code (1) to the next stage.
Now to the outer __2="$( ... )":
This catches the stdout of the above, which is the stderr of the dummy call, into variable __2. (We could re-use __1 here, but I used __2 to make it less confusing.) So __2 becomes 3b
ret="$?"; catches the (returned) return code 1 (from dummy) again
printf '%s=%q\n' "$2" "$__2" >&2; then outputs err=3b to stderr. stderr is used again, as it was already used to output the other variable out=2a.
printf '( exit %q )' "$ret" >&2; then outputs the code to set the proper return value. I did not find a better way, as assigning it to a variable needs a variable name, which then cannot be used as first or second argument to catch.
Please note that, as an optimization, we could have written those two printfs as a single one: printf '%s=%q\n( exit %q )' "$2" "$__2" "$ret".
So what do we have so far?
We have following written to stderr:
out=2a
err=3b
( exit 1 )
where out is from $1, 2a is from stdout of dummy, err is from $2, 3b is from stderr of dummy, and the 1 is from the return code from dummy.
Please note that %q in the format of printf takes care for quoting, such that the shell sees proper (single) arguments when it comes to eval. 2a and 3b are so simple, that they are copied literally.
Now to the outer eval "$({ ... } 2>&1 )";:
This executes all of the above, which outputs the two variable assignments and the exit command, captures it (therefore the 2>&1) and parses it into the current shell using eval.
This way the 2 variables get set and the return code as well.
Q: It uses eval which is evil. So is it safe?
As long as printf %q has no bugs, it should be safe. But you always have to be very careful, just think about ShellShock.
Q: Bugs?
No obvious bugs are known, except following:
Catching big output needs big memory and CPU, as everything goes into variables and needs to be back-parsed by the shell. So use it wisely.
As usual, $(echo $'\n\n\n\n') swallows all trailing linefeeds, not only the last one. This is a POSIX requirement. If you need to get the LFs unharmed, just add some trailing character to the output and remove it afterwards, as in the following recipe (look at the trailing x, which allows reading a softlink pointing to a file whose name ends in $'\n'):
target="$(readlink -e "$file")x"
target="${target%x}"
Shell variables cannot carry the NUL byte ($'\0'). Such bytes are simply ignored if they happen to occur in stdout or stderr.
The given command runs in a sub-subshell. So it has no access to $PPID, nor can it alter shell variables. You can catch a shell function, even builtins, but those will not be able to alter shell variables (as everything running within $( .. ) cannot do this). So if you need to run a function in the current shell and catch its stderr/stdout, you need to do this the usual way with tempfiles. (There are ways to do this such that interrupting the shell normally does not leave debris behind, but this is complex and deserves its own answer.)
Q: Bash version?
I think you need Bash 4 and above (due to printf %q)
Q: This still looks so awkward.
Right. Another answer here shows how it can be done in ksh much more cleanly. However, I am not used to ksh, so I leave it to others to create a similarly easy-to-reuse recipe for ksh.
Q: Why not use ksh then?
Because this is a bash solution
Q: The script can be improved
Of course you can squeeze out some bytes and create smaller or more incomprehensible solution. Just go for it ;)
Q: There is a typo. : catch STDOUT STDERR cmd args.. shall read # catch STDOUT STDERR cmd args..
Actually this is intended. : shows up in bash -x while comments are silently swallowed. So you can see where the parser is if you happen to have a typo in the function definition. It's an old debugging trick. But beware a bit, you can easily create some neat side effects within the arguments of :.
Edit: Added a couple more ; to make it easier to create a one-liner out of catch(), and added a section on how it works.
Technically, named pipes aren't temporary files and nobody here mentions them. They store nothing in the filesystem and you can delete them as soon as you connect them (so you won't ever see them):
#!/bin/bash -e
foo () {
echo stdout1
echo stderr1 >&2
sleep 1
echo stdout2
echo stderr2 >&2
}
rm -f stdout stderr
mkfifo stdout stderr
foo >stdout 2>stderr & # blocks until reader is connected
exec {fdout}<stdout {fderr}<stderr # unblocks `foo &`
rm stdout stderr # filesystem objects are no longer needed
stdout=$(cat <&$fdout)
stderr=$(cat <&$fderr)
echo $stdout
echo $stderr
exec {fdout}<&- {fderr}<&- # free file descriptors, optional
You can have multiple background processes this way and asynchronously collect their stdouts and stderrs at a convenient time, etc.
If you need this for one process only, you may just as well use hardcoded fd numbers like 3 and 4, instead of the {fdout}/{fderr} syntax (which finds a free fd for you).
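A sketch of that hardcoded variant, reusing the foo function from above (fds 3 and 4 are assumed to be free):
rm -f stdout stderr
mkfifo stdout stderr
foo >stdout 2>stderr &   # blocks until the readers below are connected
exec 3<stdout 4<stderr   # unblocks `foo &`
rm stdout stderr         # the fifos can go away immediately
stdout=$(cat <&3)
stderr=$(cat <&4)
exec 3<&- 4<&-           # free file descriptors, optional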
This command sets both stdout (stdval) and stderr (errval) values in the present running shell:
eval "$( execcommand 2> >(setval errval) > >(setval stdval); )"
provided this function has been defined:
function setval { printf -v "$1" "%s" "$(cat)"; declare -p "$1"; }
Change execcommand to the captured command, be it "ls", "cp", "df", etc.
All this is based on the idea that we could convert all captured values to a text line with the help of the function setval, then setval is used to capture each value in this structure:
execcommand 2> CaptureErr > CaptureOut
Convert each capture value to a setval call:
execcommand 2> >(setval errval) > >(setval stdval)
Wrap everything inside an execute call and echo it:
echo "$( execcommand 2> >(setval errval) > >(setval stdval) )"
You will get the declare calls that each setval creates:
declare -- stdval="I'm std"
declare -- errval="I'm err"
To execute that code (and get the vars set) use eval:
eval "$( execcommand 2> >(setval errval) > >(setval stdval) )"
and finally echo the set vars:
echo "std out is : |$stdval| std err is : |$errval|
It is also possible to include the return (exit) value.
A complete bash script example looks like this:
#!/bin/bash --
# The only function to declare:
function setval { printf -v "$1" "%s" "$(cat)"; declare -p "$1"; }
# a dummy function with some example values:
function dummy { echo "I'm std"; echo "I'm err" >&2; return 34; }
# Running a command to capture all values
# change execcommand to dummy or any other command to test.
eval "$( dummy 2> >(setval errval) > >(setval stdval); <<<"$?" setval retval; )"
echo "std out is : |$stdval| std err is : |$errval| return val is : |$retval|"
Jonathan has the answer. For reference, this is the ksh93 trick. (requires a non-ancient version).
function out {
echo stdout
echo stderr >&2
}
x=${ { y=$(out); } 2>&1; }
typeset -p x y # Show the values
produces
x=stderr
y=stdout
The ${ cmds;} syntax is just a command substitution that doesn't create a subshell. The commands are executed in the current shell environment. The space at the beginning is important ({ is a reserved word).
Stderr of the inner command group is redirected to stdout (so that it applies to the inner substitution). Next, the stdout of out is assigned to y, and the redirected stderr is captured by x, without the usual loss of y to a command substitution's subshell.
It isn't possible in other shells, because all constructs which capture output require putting the producer into a subshell, which in this case, would include the assignment.
update: Now also supported by mksh.
This is a diagram showing how @madmurphy's very neat solution works.
And an indented version of the one-liner:
catch() {
{
IFS=$'\n' read -r -d '' "$out_var";
IFS=$'\n' read -r -d '' "$err_var";
(IFS=$'\n' read -r -d '' _ERRNO_; return ${_ERRNO_});
}\
< <(
(printf '\0%s\0%d\0' \
"$(
(
(
(
{ ${3}; echo "${?}" 1>&3-; } | tr -d '\0' 1>&4-
) 4>&2- 2>&1- | tr -d '\0' 1>&4-
) 3>&1- | exit "$(cat)"
) 4>&1-
)" "${?}" 1>&2
) 2>&1
)
}
I did not like the eval, so here is a solution that uses some redirection tricks to capture program output to a variable and then parses that variable to extract the different components. The -w flag sets the chunk size and influences the ordering of std-out/err messages in the intermediate format. 1 gives potentially high resolution at the cost of overhead.
#######
# runs "$#" and outputs both stdout and stderr on stdin, both in a prefixed format allowing both std in and out to be separately stored in variables later.
# limitations: Bash does not allow null to be returned from subshells, limiting the usefullness of applying this function to commands with null in the output.
# example:
# var=$(keepBoth ls . notHere)
# echo ls had the exit code "$(extractOne r "$var")"
# echo ls had the stdErr of "$(extractOne e "$var")"
# echo ls had the stdOut of "$(extractOne o "$var")"
keepBoth() {
(
prefix(){
( set -o pipefail
base64 -w 1 - | (
while read c
do echo -E "$1" "$c"
done
)
)
}
( (
"$#" | prefix o >&3
echo ${PIPESTATUS[0]} | prefix r >&3
) 2>&1 | prefix e >&1
) 3>&1
)
}
extractOne() { # extract
echo "$2" | grep "^$1" | cut --delimiter=' ' --fields=2 | base64 --decode -
}
For the benefit of the reader here is a solution using tempfiles.
The question was not to use tempfiles. However, this might be due to the unwanted pollution of /tmp/ with tempfiles in case the shell dies: in case of kill -9, a trap 'rm "$tmpfile1" "$tmpfile2"' 0 does not fire.
If you are in a situation where you can use tempfile, but want to never leave debris behind, here is a recipe.
Again it is called catch() (as my other answer) and has the same calling syntax:
catch stdout stderr command args..
# Wrappers to avoid polluting the current shell's environment with variables
: catch_read returncode FD variable
catch_read()
{
eval "$3=\"\`cat <&$2\`\"";
# You can use read instead to skip some fork()s.
# However read stops at the first NUL byte,
# also does no \n removal and needs bash 3 or above:
#IFS='' read -ru$2 -d '' "$3";
return $1;
}
: catch_1 tempfile variable command args..
catch_1()
{
{
rm -f "$1";
"${#:3}" 66<&-;
catch_read $? 66 "$2";
} 2>&1 >"$1" 66<"$1";
}
: catch stdout stderr command args..
catch()
{
catch_1 "`tempfile`" "${2:-stderr}" catch_1 "`tempfile`" "${1:-stdout}" "${#:3}";
}
What it does:
It creates two tempfiles for stdout and stderr. However it nearly immediately removes these, such that they are only around for a very short time.
catch_1() catches stdout (FD 1) into a variable and moves stderr to stdout, such that the next ("left") catch_1 can catch that.
Processing in catch is done from right to left, so the left catch_1 is executed last and catches stderr.
The worst that can happen is that some temporary files show up in /tmp/, but they are always empty in that case (they are removed before they get filled). Usually this should not be a problem, as under Linux tmpfs supports roughly 128K files per GB of main memory.
The given command can access and alter all local shell variables as well. So you can call a shell function which has side effects!
This only forks twice for the tempfile call.
Bugs:
Missing good error handling in case tempfile fails.
This does the usual \n removal of the shell. See comment in catch_read().
You cannot use file descriptor 66 to pipe data to your command. If you need that, use another descriptor for the redirection, like 42 (note that very old shells only offer FDs up to 9).
This cannot handle NUL bytes ($'\0') in stdout and stderr. (NUL is just ignored. For the read variant everything behind a NUL is ignored.)
FYI:
Unix allows us to access deleted files, as long as you keep some reference to them around (such as an open filehandle). This way we can open and then remove them.
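A tiny stand-alone demonstration of that trick (independent of the catch machinery above; mktemp would work the same way as tempfile):
tmp="`tempfile`"
exec 5>"$tmp" 6<"$tmp"   # keep a write and a read reference
rm "$tmp"                # the name is gone, the data stays reachable
echo hello >&5
cat <&6                  # prints: hello
exec 5>&- 6<&-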
In the bash realm, @madmurphy's "7. The ultimate solution – a general purpose function with exit status" is the way to go, and I've been using it massively everywhere. Based on my experience, I'm contributing minor updates making it really "ultimate" also in the following two scenarios:
complex command lines, so that args are correctly quoted and there is no need to quote the original commands, which are now naturally typed as plain tokens ( the replacement is this..."$(((({ "${@:3}" ; echo...)
our trusted friends, the "debug" options. xtrace and verbose work by injecting text into stderr... You can imagine how long I was baffled by the erratic behaviour of scripts that seemed to work perfectly well just before the catch... The problem was actually quite subtle and required taking care of the xtrace and verbose options, as mentioned here: https://unix.stackexchange.com/a/21944
One of my use-case scenarios, which shows why the whole quoting mechanism was a problem, is the following: to detect the length of a video and do something else in case of error, I needed some debugging before figuring out how this fast ffprobe command fails on a given video:
catch end err ffprobe -i "${filename}" -show_entries format=duration -v warning -of csv='p=0'
This, in my experience so far, is the ultimate ultimate ;-) one, and I hope it may serve you as well. Credits to @madmurphy and all the other contributors.
catch() {
if [ "$#" -lt 3 ]; then
echo USAGE: catch STDOUT_VAR STDERR_VAR COMMAND [CMD_ARGS...]
echo 'stdout-> ${STDOUT_VAR}' 'stderr-> ${STDERR_VAR}' 'exit-> ${?}'
echo -e "\n** NOTICE: FD redirects are used to make the magic happen."
echo " Shell's xtrace (set -x) and verbose (set -v) work by redirecting to stderr, which screws the magic up."
echo " xtrace (set -x) and verbose (set -v) modes are suspended during the execution of this function."
return 1
fi
# check "verbose" option, turn if off if enabled, and save restore status USE_V
[[ ${-/v} != $- ]] && set +v && USE_V="-v" || USE_V="+v"
# check "xtrace" option, turn if off if enabled, and save restore status USE_X
[[ ${-/x} != $- ]] && set +x && USE_X="-x" || USE_X="+x"
{
IFS=$'\n' read -r -d '' "${1}";
IFS=$'\n' read -r -d '' "${2}";
# restore the "xtrace" and "verbose" options before returning
(IFS=$'\n' read -r -d '' _ERRNO_; set $USE_X; set $USE_V; return "${_ERRNO_}");
} < <((printf '\0%s\0%d\0' "$(((({ "${@:3}" ; echo "${?}" 1>&3-; } | tr -d '\0' 1>&4-) 4>&2- 2>&1- | tr -d '\0' 1>&4-) 3>&1- | exit "$(cat)") 4>&1-)" "${?}" 1>&2) 2>&1)
}
Succinctly, I believe the answer is 'No'. The capturing $( ... ) only captures standard output to the variable; there isn't a way to get the standard error captured into a separate variable. So, what you have is about as neat as it gets.
What about... =D
GET_STDERR=""
GET_STDOUT=""
get_stderr_stdout() {
GET_STDERR=""
GET_STDOUT=""
unset t_std t_err
eval "$( (eval $1) 2> >(t_err=$(cat); typeset -p t_err) > >(t_std=$(cat); typeset -p t_std) )"
GET_STDERR=$t_err
GET_STDOUT=$t_std
}
get_stderr_stdout "command"
echo "$GET_STDERR"
echo "$GET_STDOUT"
One workaround, which is hacky but perhaps more intuitive than some of the suggestions on this page, is to tag the output streams, merge them, and split afterwards based on the tags. For example, we might tag stdout with a "STDOUT" prefix:
function someCmd {
echo "I am stdout"
echo "I am stderr" 1>&2
}
ALL=$({ someCmd | sed -e 's/^/STDOUT/g'; } 2>&1)
OUT=$(echo "$ALL" | grep "^STDOUT" | sed -e 's/^STDOUT//g')
ERR=$(echo "$ALL" | grep -v "^STDOUT")
If you know that stdout and/or stderr are of a restricted form, you can come up with a tag which does not conflict with their allowed content.
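With the someCmd above, the two variables end up holding the separated streams:
echo "$OUT"   # I am stdout
echo "$ERR"   # I am stderr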
WARNING: NOT (yet?) WORKING!
The following seems a possible lead to getting it working without creating any temp files, and on POSIX sh only; it requires base64, however, and due to the encoding/decoding it may not be that efficient, and it uses "larger" memory as well.
Even in the simple case, it would already fail when the last stderr line has no newline. This can be fixed at least in some cases by replacing exe with { exe ; echo >&2 ; }, i.e. adding a newline.
The main problem, however, is that everything seems racy. Try using an exe like:
exe()
{
cat /usr/share/hunspell/de_DE.dic
cat /usr/share/hunspell/en_GB.dic >&2
}
and you'll see that, e.g., parts of the base64-encoded line are at the top of the output, parts at the end, and the non-decoded stderr stuff in the middle.
Well, even if the idea below cannot be made working (which I assume), it may serve as an anti-example for people who may falsely believe it could be made working like this.
Idea (or anti-example):
#!/bin/sh
exe()
{
echo out1
echo err1 >&2
echo out2
echo out3
echo err2 >&2
echo out4
echo err3 >&2
echo -n err4 >&2
}
r="$( { exe | base64 -w 0 ; } 2>&1 )"
echo RAW
printf '%s' "$r"
echo RAW
o="$( printf '%s' "$r" | tail -n 1 | base64 -d )"
e="$( printf '%s' "$r" | head -n -1 )"
unset r
echo
echo OUT
printf '%s' "$o"
echo OUT
echo
echo ERR
printf '%s' "$e"
echo ERR
gives (with the stderr-newline fix):
$ ./ggg
RAW
err1
err2
err3
err4
b3V0MQpvdXQyCm91dDMKb3V0NAo=RAW
OUT
out1
out2
out3
out4OUT
ERR
err1
err2
err3
err4ERR
(At least on Debian's dash and bash)
Here is a variant of @madmurphy's solution that should work for arbitrarily large stdout/stderr streams, maintain the exit return value, and handle NULs in the stream (by converting them to newlines):
function buffer_plus_null()
{
local buf
IFS= read -r -d '' buf || :
echo -n "${buf}"
printf '\0'
}
{
IFS= read -r -d '' CAPTURED_STDOUT;
IFS= read -r -d '' CAPTURED_STDERR;
(IFS= read -r -d '' CAPTURED_EXIT; exit "${CAPTURED_EXIT}");
} < <((({ { some_command ; echo "${?}" 1>&3; } | tr '\0' '\n' | buffer_plus_null; } 2>&1 1>&4 | tr '\0' '\n' | buffer_plus_null 1>&4 ) 3>&1 | xargs printf '%s\0' 1>&4) 4>&1 )
Cons:
The read commands are the most expensive part of the operation. For example: find /proc on a computer running 500 processes takes 20 seconds (while the command alone takes only 0.5 seconds). It takes 10 seconds for the first read, and 10 seconds more for the second, doubling the total time.
Explanation of buffer
The original solution used an argument to printf to buffer the stream; however, with the need to have the exit code come last, one solution is to buffer both stdout and stderr. I tried xargs -0 printf, but then you quickly start hitting "max argument length" limits. So I decided the solution was to write a quick buffer function:
Use read to store the stream in a variable
This read will terminate when the stream ends, or a null is received. Since we already removed the nulls, it ends when the stream is closed, and returns non-zero. Since this is expected behavior we add || : meaning "or true" so that the line always evaluates to true (0)
Now that I know the stream has ended, it's safe to start echoing it back out.
echo -n "${buf}" is a builtin command and thus not limited to the argument length limit
Lastly, add a null separator to the end.
This prefixes error messages (similar to the answer of @Warbo), and by that we are able to distinguish between stdout and stderr:
out=$(some_command 2> >(sed -e 's/^/stderr/g'))
err=$(echo "$out" | grep -oP "(?<=^stderr).*")
out=$(echo "$out" | grep -v '^stderr')
The (?<=string) part is called a positive lookbehind which excludes the string from the result.
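A quick illustration of the lookbehind (GNU grep's -P, i.e. PCRE support, is assumed):
echo 'stderrfile not found' | grep -oP '(?<=^stderr).*'
# prints: file not found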
How I use it
# cat ./script.sh
#!/bin/bash
# check script arguments
args=$(getopt -u -l "foo,bar" "fb" "$@" 2> >(sed -e 's/^/stderr/g') )
[[ $? -ne 0 ]] && echo -n "Error: " && echo "$args" | grep -oP "(?<=^stderr).*" && exit 1
mapfile -t args < <(xargs -n1 <<< "$args")
#
# ./script.sh --foo --bar --baz
# Error: getopt: unrecognized option '--baz'
Notes:
As you can see, I don't need to filter for stdout, as the condition already caught the error and stopped the script. So if the script does not stop, $args does not contain any prefixed content.
An alternative to sed -e 's/^/stderr/g' is xargs -d '\n' -I {} echo "stderr{}".
Variant to prefix stdout AND stderr
# smbclient localhost 1> >(sed -e 's/^/std/g') 2> >(sed -e 's/^/err/g')
std
stdlocalhost: Not enough '\' characters in service
stderrUsage: smbclient [-?EgBVNkPeC] [-?|--help] [--usage]
stderr [-R|--name-resolve=NAME-RESOLVE-ORDER] [-M|--message=HOST]
stderr [-I|--ip-address=IP] [-E|--stderr] [-L|--list=HOST]
stderr [-m|--max-protocol=LEVEL] [-T|--tar=<c|x>IXFqgbNan]
stderr [-D|--directory=DIR] [-c|--command=STRING] [-b|--send-buffer=BYTES]
stderr [-t|--timeout=SECONDS] [-p|--port=PORT] [-g|--grepable]
stderr [-B|--browse] [-d|--debuglevel=DEBUGLEVEL]
stderr [-s|--configfile=CONFIGFILE] [-l|--log-basename=LOGFILEBASE]
stderr [-V|--version] [--option=name=value]
stderr [-O|--socket-options=SOCKETOPTIONS] [-n|--netbiosname=NETBIOSNAME]
stderr [-W|--workgroup=WORKGROUP] [-i|--scope=SCOPE] [-U|--user=USERNAME]
stderr [-N|--no-pass] [-k|--kerberos] [-A|--authentication-file=FILE]
stderr [-S|--signing=on|off|required] [-P|--machine-pass] [-e|--encrypt]
stderr [-C|--use-ccache] [--pw-nt-hash] service <password>
This is an addendum to Jacques Gaudin's addendum to madmurphy's answer.
Unlike the source, this uses eval to execute multi-line commands (multi-argument is OK as well, thanks to "${@}").
Another caveat is that this function will return 0 in any case, and write the exit code to a third variable instead. IMO this is more apt for catch.
#!/bin/bash
# Overwrites existing values of provided variables in any case.
# SYNTAX:
# catch STDOUT_VAR_NAME STDERR_VAR_NAME EXIT_CODE_VAR_NAME COMMAND1 [COMMAND2 [...]]
function catch() {
{
IFS=$'\n' read -r -d '' "${1}";
IFS=$'\n' read -r -d '' "${2}";
IFS=$'\n' read -r -d '' "${3}";
return 0;
}\
< <(
(printf '\0%s\0%d\0' \
"$(
(
(
(
{
shift 3;
eval "${#}";
echo "${?}" 1>&3-;
} | tr -d '\0' 1>&4-
) 4>&2- 2>&1- | tr -d '\0' 1>&4-
) 3>&1- | exit "$(cat)"
) 4>&1-
)" "${?}" 1>&2
) 2>&1
)
}
# Simulation of here-doc
MULTILINE_SCRIPT_1='cat << EOF
foo
bar
with newlines
EOF
'
# Simulation of multiple streams
# Notice the lack of semi-colons, otherwise below code
# could become a one-liner and still work
MULTILINE_SCRIPT_2='echo stdout stream
echo error stream 1>&2
'
catch out err code "${MULTILINE_SCRIPT_1}" \
'printf "wait there is more\n" 1>&2'
printf "1)\n\tSTDOUT: ${out}\n\tSTDERR: ${err}\n\tCODE: ${code}\n"
echo ''
catch out err code "${MULTILINE_SCRIPT_2}" echo this multi-argument \
form works too '1>&2' \; \(exit 5\)
printf "2)\n\tSTDOUT: ${out}\n\tSTDERR: ${err}\n\tCODE: ${code}\n"
Output:
1)
STDOUT: foo
bar
with newlines
STDERR: wait there is more
CODE: 0
2)
STDOUT: stdout stream
STDERR: error stream
this multi-argument form works too
CODE: 5
If the command 1) has no stateful side effects and 2) is computationally cheap, the easiest solution is to just run it twice. I've mainly used this for code that runs during the boot sequence, when you don't yet know if the disk is going to be working. In my case it was a tiny some_command, so there was no performance hit for running twice, and the command had no side effects.
The main benefit is that this is clean and easy to read. The solutions here are quite clever, but I would hate to be the one that has to maintain a script containing the more complicated solutions. I'd recommend the simple run-it-twice approach if your scenario works with that, as it's much cleaner and easier to maintain.
Example:
output=$(getopt -o '' -l test: -- "$@")
errout=$(getopt -o '' -l test: -- "$@" 2>&1 >/dev/null)
if [[ -n "$errout" ]]; then
echo "Option Error: $errout"
fi
Again, this is only ok to do because getopt has no side effects. I know it's performance-safe because my parent code calls this less than 100 times during the entire program, and the user will never notice 100 getopt calls vs 200 getopt calls.
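If you also want the exit status, the same idea extends naturally (a sketch; getopt returns non-zero on bad options):
output=$(getopt -o '' -l test: -- "$@")
rc=$?
errout=$(getopt -o '' -l test: -- "$@" 2>&1 >/dev/null)
if [[ $rc -ne 0 ]]; then
    echo "Option Error ($rc): $errout"
fi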
Here's a simpler variation that isn't quite what the OP wanted, but is unlike any of the other options. You can get whatever you want by rearranging the file descriptors.
Test command:
%> cat xx.sh
#!/bin/bash
echo stdout
>&2 echo stderr
which by itself does:
%> ./xx.sh
stdout
stderr
Now, print stdout, capture stderr to a variable, & log stdout to a file
%> export err=$(./xx.sh 3>&1 1>&2 2>&3 >"out")
stdout
%> cat out
stdout
%> echo $err
stderr
Or log stdout & capture stderr to a variable:
export err=$(./xx.sh 3>&1 1>out 2>&3 )
%> cat out
stdout
%> echo $err
stderr
You get the idea.
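One more sketch in the same spirit: capture stdout to a variable while stderr stays on the terminal and also lands in a log file (err.log is illustrative; the tee inside the process substitution runs asynchronously, so its lines may appear after the prompt):
out=$(./xx.sh 2> >(tee err.log >&2))
echo "$out"   # -> stdout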
Realtime output and write to file:
#!/usr/bin/env bash
# File where the output is stored
log_file=/tmp/out.log
# Truncate/create the log file
: > "${log_file}"
outToLog() {
# File to write to (first parameter)
local f="$1"
# Start file output watcher in background
tail -f "${f}" &
# Capture background process PID
local pid=$!
# Write "stdin" to file
cat /dev/stdin >> "${f}"
# Kill background task
kill -9 ${pid}
}
(
# Long execution script example
echo a
sleep 1
echo b >&2
sleep 1
echo c >&2
sleep 1
echo d
) 2>&1 | outToLog "${log_file}"
# File result
echo '==========='
cat "${log_file}"
I've posted my solution to this problem here. It does use process substitution and requires Bash > v4 but also captures stdout, stderr and return code into variables you name in the current scope:
https://gist.github.com/pmarreck/5eacc6482bc19b55b7c2f48b4f1db4e8
The whole point of this exercise was so that I could assert on these things in a test suite. The fact that I just spent all afternoon figuring out this simple-sounding thing... I hope one of these solutions helps others!
Let's say I have a script like the following:
useless.sh
echo "This Is Error" 1>&2
echo "This Is Output"
And I have another shell script:
alsoUseless.sh
./useless.sh | sed 's/Output/Useless/'
I want to capture "This Is Error", or any other stderr from useless.sh, into a variable.
Let's call it ERROR.
Notice that I am using stdout for something. I want to continue using stdout, so redirecting stderr into stdout is not helpful, in this case.
So, basically, I want to do
./useless.sh 2> $ERROR | ...
but that obviously doesn't work.
I also know that I could do
./useless.sh 2> /tmp/Error
ERROR=`cat /tmp/Error`
but that's ugly and unnecessary.
Unfortunately, if no answers turn up here that's what I'm going to have to do.
I'm hoping there's another way.
Anyone have any better ideas?
It would be neater to capture the error file thus:
ERROR=$(</tmp/Error)
The shell recognizes this and doesn't have to run 'cat' to get the data.
The bigger question is hard. I don't think there's an easy way to do it. You'd have to build the entire pipeline into the sub-shell, eventually sending its final standard output to a file, so that you can redirect the errors to standard output.
ERROR=$( { ./useless.sh | sed s/Output/Useless/ > outfile; } 2>&1 )
Note that the semi-colon is needed (in classic shells - Bourne, Korn - for sure; probably in Bash too). The '{}' does I/O redirection over the enclosed commands. As written, it would capture errors from sed too.
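A usage sketch of that line (outfile and the messages are illustrative):
ERROR=$( { ./useless.sh | sed s/Output/Useless/ > outfile; } 2>&1 )
echo "$ERROR"    # -> This Is Error
cat outfile      # -> This Is Useless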
WARNING: Formally untested code - use at own risk.
Redirect stderr to stdout, stdout to /dev/null, and then use backticks or $() to capture the redirected stderr:
ERROR=$(./useless.sh 2>&1 >/dev/null)
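The order of the two redirections is what makes this work; written the other way around, stderr would follow stdout into /dev/null and nothing would be captured:
ERROR=$(./useless.sh >/dev/null 2>&1)   # wrong order: ERROR stays empty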
alsoUseless.sh
This will allow you to pipe the output of your useless.sh script through a command such as sed and save the stderr in a variable named error. The result of the pipe is sent to stdout for display or to be piped into another command.
It sets up a couple of extra file descriptors to manage the redirections needed in order to do this.
#!/bin/bash
exec 3>&1 4>&2 #set up extra file descriptors
error=$( { ./useless.sh | sed 's/Output/Useless/' 2>&4 1>&3; } 2>&1 )
echo "The message is \"${error}.\""
exec 3>&- 4>&- # release the extra file descriptors
There are a lot of duplicates for this question, many of which have a slightly simpler usage scenario where you don't want to capture stderr and stdout and the exit code all at the same time.
if result=$(useless.sh 2>&1); then
stdout=$result
else
rc=$?
stderr=$result
fi
works for the common scenario where you expect either proper output in the case of success, or a diagnostic message on stderr in the case of failure.
Note that the shell's control statements already examine $? under the hood; so anything which looks like
cmd
if [ $? -eq 0 ]; then ...
is just a clumsy, unidiomatic way of saying
if cmd; then ...
For the benefit of the reader, this recipe here
can be re-used as a one-liner to catch stderr into a variable
still gives access to the return code of the command
Sacrifices a temporary file descriptor 3 (which can be changed by you of course)
And does not expose this temporary file descriptor to the inner command
If you want to catch stderr of some command into var you can do
{ var="$( { command; } 2>&1 1>&3 3>&- )"; } 3>&1;
Afterwards you have it all:
echo "command gives $? and stderr '$var'";
If command is simple (not something like a | b) you can leave the inner {} away:
{ var="$(command 2>&1 1>&3 3>&-)"; } 3>&1;
Wrapped into an easy reusable bash-function (needs Bash 4.3 or above for local -n):
: catch-stderr var cmd [args..]
catch-stderr() { local -n v="$1"; shift && { v="$("$@" 2>&1 1>&3 3>&-)"; } 3>&1; }
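A usage sketch (the failing ls is illustrative; stdout, had there been any, would have passed through untouched):
catch-stderr err ls /nonexistent
echo "rc=$? stderr='$err'"   # -> rc=2 stderr='ls: cannot access ...'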
Explained:
local -n v="$1" makes v an alias for the variable whose name is given in "$1" (the first argument of catch-stderr)
3>&1 uses file descriptor 3 to save where stdout currently points
{ command; } (or "$@") then executes the command within the output capturing $(..)
Please note that the exact order is important here (doing it the wrong way shuffles the file descriptors wrongly):
2>&1 redirects stderr to the output capturing $(..)
1>&3 redirects stdout away from the output capturing $(..) back to the "outer" stdout which was saved in file descriptor 3. Note that stderr still refers to where FD 1 pointed before: To the output capturing $(..)
3>&- then closes the file descriptor 3, as it is no longer needed, such that command does not suddenly have some unknown open file descriptor showing up. Note that the outer shell still has FD 3 open, but command will not see it.
The latter is important, because some programs like lvm complain about unexpected file descriptors. And lvm complains to stderr - just what we are going to capture!
You can catch any other file descriptor with this recipe, if you adapt accordingly. Except file descriptor 1 of course (here the redirection logic would be wrong, but for file descriptor 1 you can just use var=$(command) as usual).
Note that this sacrifices file descriptor 3. If you happen to need that file descriptor, feel free to change the number. But be aware, that some shells (from the 1980s) might understand 99>&1 as argument 9 followed by 9>&1 (this is no problem for bash).
Also note that it is not particularly easy to make this FD 3 configurable through a variable. This makes things very unreadable:
: catch-var-from-fd-by-fd variable fd-to-catch fd-to-sacrifice command [args..]
catch-var-from-fd-by-fd()
{
local -n v="$1";
local fd1="$2" fd2="$3";
shift 3 || return;
eval exec "$fd2>&1";
v="$(eval '"$#"' "$fd1>&1" "1>&$fd2" "$fd2>&-")";
eval exec "$fd2>&-";
}
Security note: The first 3 arguments to catch-var-from-fd-by-fd must not be taken from a 3rd party. Always give them explicitly in a "static" fashion.
So no-no-no catch-var-from-fd-by-fd $var $fda $fdb $command, never do this!
If you happen to pass in a variable variable name, at least do it as follows:
local -n var="$var"; catch-var-from-fd-by-fd var 3 5 $command
This still will not protect you against every exploit, but at least helps to detect and avoid common scripting errors.
Notes:
catch-var-from-fd-by-fd var 2 3 cmd.. is the same as catch-stderr var cmd..
shift 3 || return is just some way to prevent ugly errors in case you forget to give the correct number of arguments. Perhaps terminating the shell would be another way (but this makes it hard to test from the command line).
The routine was written so that it is easier to understand. One can rewrite the function such that it does not need exec, but then it gets really ugly.
This routine can be rewritten for non-bash as well such that there is no need for local -n. However then you cannot use local variables and it gets extremely ugly!
Also note that the evals are used in a safe fashion. Usually eval is considered dangerous. However in this case it is no more evil than using "$@" (to execute arbitrary commands). However please be sure to use the exact and correct quoting as shown here (else it becomes very, very dangerous).
# command receives its input from stdin.
# command sends its output to stdout.
exec 3>&1
stderr="$(command </dev/stdin 2>&1 1>&3)"
exitcode="${?}"
echo "STDERR: $stderr"
exit ${exitcode}
POSIX
STDERR can be captured with some redirection magic:
$ { error=$( { { ls -ld /XXXX /bin | tr o Z ; } 1>&3 ; } 2>&1); } 3>&1
lrwxrwxrwx 1 rZZt rZZt 7 Aug 22 15:44 /bin -> usr/bin/
$ echo $error
ls: cannot access '/XXXX': No such file or directory
Note that piping of STDOUT of the command (here ls) is done inside the innermost { }. If you're executing a simple command (eg, not a pipe), you could remove these inner braces.
You can't pipe outside the command as piping makes a subshell in bash and zsh, and the assignment to the variable in the subshell wouldn't be available to the current shell.
bash
In bash, it would be better not to assume that file descriptor 3 is unused:
{ error=$( { { ls -ld /XXXX /bin | tr o Z ; } 1>&$tmp ; } 2>&1); } {tmp}>&1;
exec {tmp}>&- # With this syntax the FD stays open
Note that this doesn't work in zsh.
Thanks to this answer for the general idea.
A simple solution
{ ERROR=$(./useless.sh 2>&1 1>&$out); } {out}>&1
echo "-"
echo $ERROR
Will produce:
This Is Output
-
This Is Error
Iterating a bit on Tom Hale's answer, I've found it possible to wrap the redirection yoga into a function for easier reuse. For example:
#!/bin/sh
capture () {
{ captured=$( { { "$@" ; } 1>&3 ; } 2>&1); } 3>&1
}
# Example usage; capturing dialog's output without resorting to temp files
# was what motivated me to search for this particular SO question
capture dialog --menu "Pick one!" 0 0 0 \
"FOO" "Foo" \
"BAR" "Bar" \
"BAZ" "Baz"
choice=$captured
clear; echo $choice
It's almost certainly possible to simplify this further. Haven't tested especially-thoroughly, but it does appear to work with both bash and ksh.
EDIT: an alternative version of the capture function which stores the captured STDERR output into a user-specified variable (instead of relying on a global $captured), taking inspiration from Léa Gris's answer while preserving the ksh (and zsh) compatibility of the above implementation:
capture () {
if [ "$#" -lt 2 ]; then
echo "Usage: capture varname command [arg ...]"
return 1
fi
typeset var captured; captured="$1"; shift
{ read $captured <<<$( { { "$@" ; } 1>&3 ; } 2>&1); } 3>&1
}
And usage:
capture choice dialog --menu "Pick one!" 0 0 0 \
"FOO" "Foo" \
"BAR" "Bar" \
"BAZ" "Baz"
clear; echo $choice
Here's how I did it :
#
# $1 - name of the (global) variable where the contents of stderr will be stored
# $2 - command to be executed
#
captureStderr()
{
local tmpFile=$(mktemp)
$2 2> "$tmpFile"
eval "$1=\$(< \"\$tmpFile\")"
rm "$tmpFile"
}
Usage example :
captureStderr err "./useless.sh"
echo -$err-
It does use a temporary file. But at least the ugly stuff is wrapped in a function.
This is an interesting problem to which I hoped there was an elegant solution. Sadly, I end up with a solution similar to Mr. Leffler's, but I'll add that you can call useless from inside a Bash function for improved readability:
#!/bin/bash
function useless {
/tmp/useless.sh | sed 's/Output/Useless/'
}
ERROR=$(useless)
echo $ERROR
All other kinds of output redirection must be backed by a temporary file.
I think you want to capture stderr, stdout and the exit code. If that is your intention, you can use this code:
## Capture error when 'some_command() is executed
some_command_with_err() {
echo 'this is the stdout'
echo 'this is the stderr' >&2
exit 1
}
run_command() {
{
IFS=$'\n' read -r -d '' stderr;
IFS=$'\n' read -r -d '' stdout;
IFS=$'\n' read -r -d '' stdexit;
} < <((printf '\0%s\0%d\0' "$(some_command_with_err)" "${?}" 1>&2) 2>&1)
stdexit=${stdexit:-0};
}
echo 'Run command:'
if ! run_command; then
## Show the values
typeset -p stdout stderr stdexit
else
typeset -p stdout stderr stdexit
fi
This script captures the stderr, stdout as well as the exit code.
But Teo, how does it work?
First, we capture the stdout as well as the exit code using printf '\0%s\0%d\0'. They are separated by \0, aka a 'null byte'.
After that, we redirect the printf to stderr by doing: 1>&2 and then we redirect all back to stdout using 2>&1. Therefore, the stdout will look like:
"<stderr>\0<stdout>\0<exitcode>\0"
Enclosing the printf command in <( ... ) performs process substitution. Process substitution allows a process's input or output to be referred to using a filename. This means <( ... ) will pipe the stdout of (printf '\0%s\0%d\0' "$(some_command_with_err)" "${?}" 1>&2) 2>&1 into the stdin of the command group using the first <.
Then, we can capture the piped stdout from the stdin of the command group with read. This command reads a line from the file descriptor stdin and splits it into fields. Only the characters found in $IFS are recognized as word delimiters. $IFS, or Internal Field Separator, is a variable that determines how Bash recognizes fields, or word boundaries, when it interprets character strings. $IFS defaults to whitespace (space, tab, and newline), but may be changed, for example, to parse a comma-separated data file. Note that $* uses the first character held in $IFS.
## Shows whitespace as a single space, ^I (horizontal tab), and newline, and displays "$" at end-of-line.
echo "$IFS" | cat -vte
# Output:
# ^I$
# $
## Reads commands from string and assigns any arguments to pos params
bash -c 'set w x y z; IFS=":-;"; echo "$*"'
# Output:
# w:x:y:z
for l in $(printf %b 'a b\nc'); do echo "$l"; done
# Output:
# a
# b
# c
IFS=$'\n'; for l in $(printf %b 'a b\nc'); do echo "$l"; done
# Output:
# a b
# c
That is why we defined IFS=$'\n' (newline) as delimiter.
Our script uses read -r -d '', where read -r does not allow backslashes to escape any characters, and -d '' makes read continue until the first NUL byte, rather than stopping at a newline.
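A quick sketch of that read behavior in isolation:
printf 'first\0second\0' | {
IFS=$'\n' read -r -d '' a
IFS=$'\n' read -r -d '' b
echo "$a / $b"   # -> first / second
}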
Finally, replace some_command_with_err with your script file and you can capture and handle the stderr, stdout as well as the exitcode as your will.
This post helped me come up with a similar solution for my own purposes:
MESSAGE=`{ echo $ERROR_MESSAGE | format_logs.py --level=ERROR; } 2>&1`
Then as long as our MESSAGE is not an empty string, we pass it on to other stuff. This will let us know if our format_logs.py failed with some kind of python exception.
In zsh:
{ . ./useless.sh > /dev/tty } 2>&1 | read ERROR
$ echo $ERROR
( your message )
Capture AND Print stderr
ERROR=$( ./useless.sh 3>&1 1>&2 2>&3 | tee /dev/fd/2 )
Breakdown
You can use $() to capture stdout, but you want to capture stderr instead. So you swap stdout and stderr. Using fd 3 as the temporary storage in the standard swap algorithm.
If you want to capture AND print use tee to make a duplicate. In this case the output of tee will be captured by $() rather than go to the console, but stderr(of tee) will still go to the console so we use that as the second output for tee via the special file /dev/fd/2 since tee expects a file path rather than a fd number.
NOTE: That is an awful lot of redirections in a single line and the order matters. $() is grabbing the stdout of tee at the end of the pipeline and the pipeline itself routes stdout of ./useless.sh to the stdin of tee AFTER we swapped stdout and stderr for ./useless.sh.
Using stdout of ./useless.sh
The OP said he still wanted to use (not just print) stdout, like ./useless.sh | sed 's/Output/Useless/'.
No problem just do it BEFORE swapping stdout and stderr. I recommend moving it into a function or file (also-useless.sh) and calling that in place of ./useless.sh in the line above.
However, if you want to CAPTURE stdout AND stderr, then I think you have to fall back on temporary files because $() will only do one at a time and it makes a subshell from which you cannot return variables.
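A sketch of that suggestion: wrap the pipeline in a function, then apply the same swap-and-tee line to it (also_useless is an illustrative name):
also_useless() { ./useless.sh | sed 's/Output/Useless/'; }
ERROR=$( also_useless 3>&1 1>&2 2>&3 | tee /dev/fd/2 )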
Improving on YellowApple's answer:
This is a Bash function to capture stderr into any variable
stderr_capture_example.sh:
#!/usr/bin/env bash
# Capture stderr from a command to a variable while maintaining stdout
# #Args:
# $1: The variable name to store the stderr output
# $2: Vararg command and arguments
# #Return:
# The Command's Return-Code or 2 if missing arguments
function capture_stderr {
[ $# -lt 2 ] && return 2
local stderr="$1"
shift
{
printf -v "$stderr" '%s' "$({ "$#" 1>&3; } 2>&1)"
} 3>&1
}
# Testing with a call to erroring ls
LANG=C capture_stderr my_stderr ls "$0" ''
printf '\nmy_stderr contains:\n%s' "$my_stderr"
Testing:
bash stderr_capture_example.sh
Output:
stderr_capture_example.sh
my_stderr contains:
ls: cannot access '': No such file or directory
This function can be used to capture the returned choice of a dialog command.
If you want to bypass the use of a temporary file you may be able to use process substitution. I haven't quite gotten it to work yet. This was my first attempt:
$ ./useless.sh 2> >( ERROR=$(<) )
-bash: command substitution: line 42: syntax error near unexpected token `)'
-bash: command substitution: line 42: `<)'
Then I tried
$ ./useless.sh 2> >( ERROR=$( cat <() ) )
This Is Output
$ echo $ERROR # $ERROR is empty
However
$ ./useless.sh 2> >( cat <() > asdf.txt )
This Is Output
$ cat asdf.txt
This Is Error
So the process substitution is doing generally the right thing... unfortunately, whenever I wrap STDIN inside >( ) with something in $() in an attempt to capture that to a variable, I lose the contents of $(). I think that this is because $() launches a sub process which no longer has access to the file descriptor in /dev/fd which is owned by the parent process.
Process substitution has bought me the ability to work with a data stream which is no longer in STDERR, unfortunately I don't seem to be able to manipulate it the way that I want.
$ b=$( ( a=$( (echo stdout;echo stderr >&2) ) ) 2>&1 )
$ echo "a=>$a b=>$b"
a=>stdout b=>stderr
For error proofing your commands:
execute [INVOKING-FUNCTION] [COMMAND]
execute () {
function="${1}"
command="${2}"
error=$(eval "${command}" 2>&1 >"/dev/null")
if [ ${?} -ne 0 ]; then
echo "${function}: ${error}"
exit 1
fi
}
Inspired by Lean manufacturing:
Make errors impossible by design
Make steps the smallest
Finish items one by one
Make it obvious to anyone
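A usage sketch (the function label and the failing cp are illustrative):
execute "backup" "cp /nonexistent /tmp"
# -> backup: cp: cannot stat '/nonexistent': No such file or directory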
I'll use find command
find / -maxdepth 2 -iname 'tmp' -type d
as non superuser for the demo. It should complain 'Permission denied' when accessing the / dir.
#!/bin/bash
echo "terminal:"
{ err="$(find / -maxdepth 2 -iname 'tmp' -type d 2>&1 1>&3 3>&- | tee /dev/stderr)"; } 3>&1 | tee /dev/fd/4 2>&1; out=$(cat /dev/fd/4)
echo "stdout:" && echo "$out"
echo "stderr:" && echo "$err"
that gives output:
terminal:
find: ‘/root’: Permission denied
/tmp
/var/tmp
find: ‘/lost+found’: Permission denied
stdout:
/tmp
/var/tmp
stderr:
find: ‘/root’: Permission denied
find: ‘/lost+found’: Permission denied
The terminal output has also /dev/stderr content the same way as if you were running that find command without any script. $out has /dev/stdout and $err has /dev/stderr content.
use:
#!/bin/bash
echo "terminal:"
{ err="$(find / -maxdepth 2 -iname 'tmp' -type d 2>&1 1>&3 3>&-)"; } 3>&1 | tee /dev/fd/4; out=$(cat /dev/fd/4)
echo "stdout:" && echo "$out"
echo "stderr:" && echo "$err"
if you don't want to see /dev/stderr in the terminal output.
terminal:
/tmp
/var/tmp
stdout:
/tmp
/var/tmp
stderr:
find: ‘/root’: Permission denied
find: ‘/lost+found’: Permission denied