Simulate tee behaviour without subshell usage so that variable scope is not affected - tee

I need to capture the output of a command group in Bash to STDOUT and a log file. Consider this code with command grouping and its output:
#!/usr/bin/bash
main(){
declare -i mycode=1
echo "Declared mycode:${mycode}"
{
#command group
echo "mycode:${mycode}"
mycode=2
echo "mycode:${mycode}"
} 2>&1
echo "mycode:${mycode}"
}
main
the output is:
Declared mycode:1
mycode:1
mycode:2
mycode:2
I need to capture the command group output to a log file and STDOUT so I add tee as follows:
#!/usr/bin/bash
main(){
declare -i mycode=1
echo "Declared mycode:${mycode}"
{
#command group
echo "mycode:${mycode}"
mycode=2
echo "mycode:${mycode}"
} 2>&1 | tee ~/log.log
echo "mycode:${mycode}"
}
main
but now the output is as follows:
Declared mycode:1
mycode:1
mycode:2
mycode:1
So the value of the mycode variable does not get set to 2 in the outer scope when tee is used, because the left side of the pipe runs in a subshell. For various reasons I need mycode defined in the global scope, so I need to avoid subshells.
How can I achieve the behaviour of tee without a subshell, whereby I can stream output to STDOUT and a log file?
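One approach, sketched below on the assumption that Bash process substitution is available, is to redirect the command group to > >(tee ...) instead of piping into tee. Only tee is forked into a separate process; the group itself runs in the current shell, so the assignment to mycode survives:
#!/usr/bin/bash
main(){
declare -i mycode=1
echo "Declared mycode:${mycode}"
{
#command group
echo "mycode:${mycode}"
mycode=2
echo "mycode:${mycode}"
} > >(tee ~/log.log) 2>&1
echo "mycode:${mycode}"
}
main
The trade-off is that tee now runs asynchronously, so its output can interleave oddly with later lines; in recent Bash versions a wait "$!" immediately after the group synchronizes with tee.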

Related

piping main method to tee will make trap method not see global variables

I have a trap method which can neither access global variables nor receive variables via $*. It looks like this:
#!/usr/bin/env bash
set -euo pipefail
IFS=$'\n\t'
declare -a arr
finish_exit() {
echo "* = $*"
echo "arr = ${arr[*]}"
}
trap 'finish_exit "${arr[@]}"' EXIT
main() {
arr+=("hello")
arr+=("world")
}
main | tee -a /dev/null
The script prints ''.
If I remove the | tee -a ... snippet, the script prints 'hello\nworld' twice as expected.
Now, how can I pipe the output to a logfile WITHOUT losing all context?
One solution would be to route everything from the script call, like so:
./myscript.sh >> /dev/null, but I think there should be a way to do this inside the script so I can log EVERY call, not just those run by cron.
Another solution I investigated was:
main() {
...
} >> /dev/null
But this will result in no output on the interactive shell.
Bonus karma for an explanation of why this subshell will "erase" global variables before the trap function is called.
will make trap method not see global variables
The subshell does see global variables; it just does not execute the trap.
why this subshell will "erase" global variables
From https://www.gnu.org/savannah-checkouts/gnu/bash/manual/bash.html :
Command substitution, commands grouped with parentheses, and asynchronous commands are invoked in a subshell environment that is a duplicate of the shell environment, except that traps caught by the shell are reset to the values that the shell inherited from its parent at invocation.
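A minimal demonstration of that paragraph (my own sketch, not part of the original answer):
#!/usr/bin/env bash
trap 'echo "EXIT trap fired"' EXIT
( exit 0 ) # subshell: the inherited EXIT trap was reset, so nothing prints here
echo "after subshell"
# "EXIT trap fired" prints exactly once, when the parent script itself exits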
how can I pipe the output to a logfile WITHOUT losing all context?
#!/bin/bash
exec 1> >(tee -a logfile)
trap 'echo world' EXIT
echo hello
or
{
trap 'echo world' EXIT
echo hello
} | tee -a logfile
And research: https://serverfault.com/questions/103501/how-can-i-fully-log-all-bash-scripts-actions and similar.
The following script:
#!/usr/bin/env bash
set -euo pipefail
{
set -euo pipefail
IFS=$'\n\t'
declare -a arr
finish_exit() {
echo "* = $*"
echo "arr = ${arr[*]}"
}
trap 'finish_exit "${arr[@]}"' EXIT
main() {
arr+=("hello")
arr+=("world")
}
main
} | tee -a /dev/null
outputs for me:
* = hello
world
arr = hello
world
I added that set -o pipefail before the pipe, to preserve the exit status.
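The effect of pipefail on such a tee pipeline is easy to check in isolation (a throwaway demo, not part of the answer above):
set -o pipefail
false | tee -a /dev/null
echo $? # prints 1; without pipefail the pipeline's status would be tee's 0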

Stdout & Stderr redirection using variables

I am trying to create two variables in my script: one to redirect error & output to a file and display only the errors on the screen, and another variable to display only the output on the screen. When I put it as a variable it's not working; the variable is coming up empty. Any help?
#!/bin/bash
timestamp="`date +%Y%m%d%H%M%S`"
displayonlyerror="2>&1 >> /tmp/postinstall_output_$timestamp.log | tee -a /tmp/postinstall_output_$timestamp.log"
displayoutput="2>&1 |tee -a /tmp/postinstall_output_$timestamp.log"
echo "No screen session found" $displayoutput
ech "No screen session found" $displayerror
It won't work like that; the shell parses redirections before performing expansions on other tokens. Instead, define displayonlyerror and displayoutput as functions like these (file is the log file name; change it as needed):
displayonlyerror() {
"$#" 2>&1 >>file | tee -a file
}
displayoutput() {
"$#" 2>&1 | tee -a file
}
and use them like:
displayoutput cmd args
displayonlyerror cmd args
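Putting the two functions together with the timestamped log path from the question, a full sketch might look like this (the demo commands at the end are placeholders):
#!/bin/bash
timestamp=$(date +%Y%m%d%H%M%S)
logfile="/tmp/postinstall_output_${timestamp}.log"
# log stdout and stderr, but show only stderr on the screen
displayonlyerror() {
"$@" 2>&1 >>"$logfile" | tee -a "$logfile"
}
# log stdout and stderr, and show both on the screen
displayoutput() {
"$@" 2>&1 | tee -a "$logfile"
}
displayoutput echo "No screen session found"
displayonlyerror ls /nonexistent/path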

Send stderr and stdout to a function as variable and check type

I am redirecting the output of a script to a function I've written by using 2>&1 | MyFunction
This works great. However, I'm having a difficult time figuring out how to determine, inside MyFunction, whether the line being passed came from STDOUT or STDERR.
Ex:
function MyFunction {
while read -r IN; do
echo "LOG: $IN" >> "$LOG_FILE"
done
}
Is there some way to perform a check on the passed line to tell whether it came from STDOUT or STDERR, so I can apply conditional rules based on that?
The pipe will convey standard output (STDOUT) data.
The 2>&1 redirects the standard error ("2") to the file descriptor "1", which is the standard output.
This means the following line :
myCommand 2>&1 | MyFunction
can be read as "Run myCommand, redirect the standard error to the standard output, and run MyFunction on the output (which is standard error + standard output)"
Once you've written 2>&1, there is no way to know which line was initially sent to STDOUT or STDERR.
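A two-line experiment makes the loss of information visible (my own demo): after 2>&1, both lines travel down the same pipe and nl cannot tell them apart:
{ echo "to stdout"; echo "to stderr" >&2; } 2>&1 | nl
# 1 to stdout
# 2 to stderr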
You can use this:
# call the function like:
# cmd | log err
# cmd | log info
log() {
type="${1}"
while read -r line ; do
if [ "${type}" = "err" ] ; then
echo "EE ${line}"
else
echo "II ${line}"
fi
done
}
# Check this: https://stackoverflow.com/a/9113604/171318
# Use another file descriptor to separate between stdout and stderr
{ stdbuf -eL -oL cmd 2>&3 | log info ; } 3>&1 1>&2 | log err
# PS: ^ stdbuf makes sure that lines in output are in order
Not directly, no. If you don't need to be compatible with shells other than Bash, you can use a bit of trickery in Bash to add a prefix to every line; for example:
bash$ { perl -le 'print "oops"; die' 2> >(sed 's/^/stderr: /') > >(sed 's/^/stdout: /'); } 2>&1 | nl
1 stdout: oops
2 stderr: Died at -e line 1.
As you can see, the output from the two sed processes is recombined into standard output, which is then piped as standard input to the line numbering program nl. The braces group the commands so that the final redirection 2>&1 doesn't directly affect the Perl process.

Shell script calling an awk program - Assign the output to a variable [duplicate]

Let's say I have a script like the following:
useless.sh
echo "This Is Error" 1>&2
echo "This Is Output"
And I have another shell script:
alsoUseless.sh
./useless.sh | sed 's/Output/Useless/'
I want to capture "This Is Error", or any other stderr from useless.sh, into a variable.
Let's call it ERROR.
Notice that I am using stdout for something. I want to continue using stdout, so redirecting stderr into stdout is not helpful, in this case.
So, basically, I want to do
./useless.sh 2> $ERROR | ...
but that obviously doesn't work.
I also know that I could do
./useless.sh 2> /tmp/Error
ERROR=`cat /tmp/Error`
but that's ugly and unnecessary.
Unfortunately, if no answers turn up here that's what I'm going to have to do.
I'm hoping there's another way.
Anyone have any better ideas?
It would be neater to capture the error file thus:
ERROR=$(</tmp/Error)
The shell recognizes this and doesn't have to run 'cat' to get the data.
The bigger question is hard. I don't think there's an easy way to do it. You'd have to build the entire pipeline into the sub-shell, eventually sending its final standard output to a file, so that you can redirect the errors to standard output.
ERROR=$( { ./useless.sh | sed s/Output/Useless/ > outfile; } 2>&1 )
Note that the semi-colon is needed (in classic shells - Bourne, Korn - for sure; probably in Bash too). The '{}' does I/O redirection over the enclosed commands. As written, it would capture errors from sed too.
WARNING: Formally untested code - use at own risk.
Redirect stderr to stdout, stdout to /dev/null, and then use backticks or $() to capture the redirected stderr:
ERROR=$(./useless.sh 2>&1 >/dev/null)
alsoUseless.sh
This will allow you to pipe the output of your useless.sh script through a command such as sed and save the stderr in a variable named error. The result of the pipe is sent to stdout for display or to be piped into another command.
It sets up a couple of extra file descriptors to manage the redirections needed in order to do this.
#!/bin/bash
exec 3>&1 4>&2 #set up extra file descriptors
error=$( { ./useless.sh | sed 's/Output/Useless/' 2>&4 1>&3; } 2>&1 )
echo "The message is \"${error}.\""
exec 3>&- 4>&- # release the extra file descriptors
There are a lot of duplicates for this question, many of which have a slightly simpler usage scenario where you don't want to capture stderr and stdout and the exit code all at the same time.
if result=$(useless.sh 2>&1); then
stdout=$result
else
rc=$?
stderr=$result
fi
works for the common scenario where you expect either proper output in the case of success, or a diagnostic message on stderr in the case of failure.
Note that the shell's control statements already examine $? under the hood; so anything which looks like
cmd
if [ $? -eq 0 ]; then ...
is just a clumsy, unidiomatic way of saying
if cmd; then ...
For the benefit of the reader, this recipe here:
- can be re-used as a one-liner to catch stderr into a variable
- still gives access to the return code of the command
- sacrifices a temporary file descriptor 3 (which you can change, of course)
- does not expose this temporary file descriptor to the inner command
If you want to catch stderr of some command into var you can do
{ var="$( { command; } 2>&1 1>&3 3>&- )"; } 3>&1;
Afterwards you have it all:
echo "command gives $? and stderr '$var'";
If command is simple (not something like a | b) you can leave the inner {} away:
{ var="$(command 2>&1 1>&3 3>&-)"; } 3>&1;
Wrapped into an easy reusable bash-function (needs Bash 4.3 or above for local -n):
: catch-stderr var cmd [args..]
catch-stderr() { local -n v="$1"; shift && { v="$("$@" 2>&1 1>&3 3>&-)"; } 3>&1; }
Explained:
local -n aliases "$1" (which is the variable for catch-stderr)
3>&1 uses file descriptor 3 to save where stdout points
{ command; } (or "$@") then executes the command within the output capturing $(..)
Please note that the exact order is important here (doing it the wrong way shuffles the file descriptors wrongly):
2>&1 redirects stderr to the output capturing $(..)
1>&3 redirects stdout away from the output capturing $(..) back to the "outer" stdout which was saved in file descriptor 3. Note that stderr still refers to where FD 1 pointed before: To the output capturing $(..)
3>&- then closes file descriptor 3, as it is no longer needed, such that command does not suddenly have some unknown open file descriptor showing up. Note that the outer shell still has FD 3 open, but command will not see it.
The latter is important, because some programs like lvm complain about unexpected file descriptors. And lvm complains to stderr - just what we are going to capture!
You can catch any other file descriptor with this recipe, if you adapt accordingly. Except file descriptor 1 of course (here the redirection logic would be wrong, but for file descriptor 1 you can just use var=$(command) as usual).
Note that this sacrifices file descriptor 3. If you happen to need that file descriptor, feel free to change the number. But be aware, that some shells (from the 1980s) might understand 99>&1 as argument 9 followed by 9>&1 (this is no problem for bash).
Also note that it is not particularly easy to make this FD 3 configurable through a variable. It makes things very unreadable:
: catch-var-from-fd-by-fd variable fd-to-catch fd-to-sacrifice command [args..]
catch-var-from-fd-by-fd()
{
local -n v="$1";
local fd1="$2" fd2="$3";
shift 3 || return;
eval exec "$fd2>&1";
v="$(eval '"$#"' "$fd1>&1" "1>&$fd2" "$fd2>&-")";
eval exec "$fd2>&-";
}
Security note: The first 3 arguments to catch-var-from-fd-by-fd must not be taken from a 3rd party. Always give them explicitly in a "static" fashion.
So no-no-no catch-var-from-fd-by-fd $var $fda $fdb $command, never do this!
If you happen to pass in a variable variable name, at least do it as follows:
local -n var="$var"; catch-var-from-fd-by-fd var 3 5 $command
This still will not protect you against every exploit, but at least helps to detect and avoid common scripting errors.
Notes:
catch-var-from-fd-by-fd var 2 3 cmd.. is the same as catch-stderr var cmd..
shift 3 || return is just a way to prevent ugly errors in case you forget to give the correct number of arguments. Perhaps terminating the shell would be another way (but this makes it hard to test from the command line).
The routine was written such that it is easier to understand. One can rewrite the function so that it does not need exec, but then it gets really ugly.
This routine can be rewritten for non-bash shells as well, such that there is no need for local -n. However, then you cannot use local variables and it gets extremely ugly!
Also note that the evals are used in a safe fashion. Usually eval is considered dangerous. However, in this case it is no more evil than using "$@" (to execute arbitrary commands). However, please be sure to use the exact and correct quoting as shown here (else it becomes very, very dangerous).
# command receives its input from stdin.
# command sends its output to stdout.
exec 3>&1
stderr="$(command </dev/stdin 2>&1 1>&3)"
exitcode="${?}"
echo "STDERR: $stderr"
exit ${exitcode}
POSIX
STDERR can be captured with some redirection magic:
$ { error=$( { { ls -ld /XXXX /bin | tr o Z ; } 1>&3 ; } 2>&1); } 3>&1
lrwxrwxrwx 1 rZZt rZZt 7 Aug 22 15:44 /bin -> usr/bin/
$ echo $error
ls: cannot access '/XXXX': No such file or directory
Note that piping of STDOUT of the command (here ls) is done inside the innermost { }. If you're executing a simple command (eg, not a pipe), you could remove these inner braces.
You can't pipe outside the command as piping makes a subshell in bash and zsh, and the assignment to the variable in the subshell wouldn't be available to the current shell.
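That pitfall is easy to reproduce (a throwaway demo, assuming Bash with the default lastpipe option off):
x=initial
echo changed | read x # read runs in a subshell; its x disappears with it
echo "$x" # prints: initial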
bash
In bash, it would be better not to assume that file descriptor 3 is unused:
{ error=$( { { ls -ld /XXXX /bin | tr o Z ; } 1>&$tmp ; } 2>&1); } {tmp}>&1;
exec {tmp}>&- # With this syntax the FD stays open
Note that this doesn't work in zsh.
Thanks to this answer for the general idea.
A simple solution
{ ERROR=$(./useless.sh 2>&1 1>&$out); } {out}>&1
echo "-"
echo $ERROR
Will produce:
This Is Output
-
This Is Error
Iterating a bit on Tom Hale's answer, I've found it possible to wrap the redirection yoga into a function for easier reuse. For example:
#!/bin/sh
capture () {
{ captured=$( { { "$@" ; } 1>&3 ; } 2>&1); } 3>&1
}
# Example usage; capturing dialog's output without resorting to temp files
# was what motivated me to search for this particular SO question
capture dialog --menu "Pick one!" 0 0 0 \
"FOO" "Foo" \
"BAR" "Bar" \
"BAZ" "Baz"
choice=$captured
clear; echo $choice
It's almost certainly possible to simplify this further. I haven't tested it especially thoroughly, but it does appear to work with both bash and ksh.
EDIT: an alternative version of the capture function which stores the captured STDERR output into a user-specified variable (instead of relying on a global $captured), taking inspiration from Léa Gris's answer while preserving the ksh (and zsh) compatibility of the above implementation:
capture () {
if [ "$#" -lt 2 ]; then
echo "Usage: capture varname command [arg ...]"
return 1
fi
typeset var captured; captured="$1"; shift
{ read $captured <<<$( { { "$@" ; } 1>&3 ; } 2>&1); } 3>&1
}
And usage:
capture choice dialog --menu "Pick one!" 0 0 0 \
"FOO" "Foo" \
"BAR" "Bar" \
"BAZ" "Baz"
clear; echo $choice
Here's how I did it :
#
# $1 - name of the (global) variable where the contents of stderr will be stored
# $2 - command to be executed
#
captureStderr()
{
local tmpFile=$(mktemp)
$2 2> "$tmpFile"
eval "$1=\$(< \"\$tmpFile\")"
rm $tmpFile
}
Usage example :
captureStderr err "./useless.sh"
echo -$err-
It does use a temporary file. But at least the ugly stuff is wrapped in a function.
This is an interesting problem to which I hoped there was an elegant solution. Sadly, I end up with a solution similar to Mr. Leffler, but I'll add that you can call useless from inside a Bash function for improved readability:
#!/bin/bash
function useless {
/tmp/useless.sh | sed 's/Output/Useless/'
}
ERROR=$(useless)
echo $ERROR
All other kinds of output redirection must be backed by a temporary file.
I think you want to capture stderr, stdout and the exit code. If that is your intention, you can use this code:
## Capture error when 'some_command() is executed
some_command_with_err() {
echo 'this is the stdout'
echo 'this is the stderr' >&2
exit 1
}
run_command() {
{
IFS=$'\n' read -r -d '' stderr;
IFS=$'\n' read -r -d '' stdout;
IFS=$'\n' read -r -d '' stdexit;
} < <((printf '\0%s\0%d\0' "$(some_command_with_err)" "${?}" 1>&2) 2>&1)
stdexit=${stdexit:-0};
}
echo 'Run command:'
if ! run_command; then
## Show the values
typeset -p stdout stderr stdexit
else
typeset -p stdout stderr stdexit
fi
This script captures the stderr and stdout as well as the exit code.
But Teo, how does it work?
First, we capture the stdout as well as the exitcode using printf '\0%s\0%d\0'. They are separated by the \0 aka 'null byte'.
After that, we redirect the printf to stderr by doing: 1>&2 and then we redirect all back to stdout using 2>&1. Therefore, the stdout will look like:
"<stderr>\0<stdout>\0<exitcode>\0"
Enclosing the printf command in <( ... ) performs process substitution. Process substitution allows a process's input or output to be referred to using a filename. This means <( ... ) will pipe the stdout of (printf '\0%s\0%d\0' "$(some_command_with_err)" "${?}" 1>&2) 2>&1 into the stdin of the command group via the first <.
Then, we can capture the piped stdout from the stdin of the command group with read. This command reads a line from the file descriptor stdin and splits it into fields. Only the characters found in $IFS are recognized as word delimiters. $IFS, or the Internal Field Separator, is a variable that determines how Bash recognizes fields, or word boundaries, when it interprets character strings. $IFS defaults to whitespace (space, tab, and newline), but may be changed, for example, to parse a comma-separated data file. Note that $* uses the first character held in $IFS.
## Shows whitespace as a single space, ^I(horizontal tab), and newline, and display "$" at end-of-line.
echo "$IFS" | cat -vte
# Output:
# ^I$
# $
## Reads commands from string and assign any arguments to pos params
bash -c 'set w x y z; IFS=":-;"; echo "$*"'
# Output:
# w:x:y:z
for l in $(printf %b 'a b\nc'); do echo "$l"; done
# Output:
# a
# b
# c
IFS=$'\n'; for l in $(printf %b 'a b\nc'); do echo "$l"; done
# Output:
# a b
# c
That is why we defined IFS=$'\n' (newline) as delimiter.
Our script uses read -r -d '', where read -r does not allow backslashes to escape any characters, and -d '' means each read continues until it hits a NUL byte, rather than a newline.
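A toy check of that -d '' behaviour (my own demo): each read consumes input up to the next NUL byte:
printf 'first\0second\0' | {
IFS= read -r -d '' a
IFS= read -r -d '' b
echo "a=$a b=$b" # prints: a=first b=second
}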
Finally, replace some_command_with_err with your script file and you can capture and handle the stderr, stdout as well as the exitcode as your will.
This post helped me come up with a similar solution for my own purposes:
MESSAGE=`{ echo $ERROR_MESSAGE | format_logs.py --level=ERROR; } 2>&1`
Then as long as our MESSAGE is not an empty string, we pass it on to other stuff. This will let us know if our format_logs.py failed with some kind of python exception.
In zsh:
{ . ./useless.sh > /dev/tty } 2>&1 | read ERROR
$ echo $ERROR
( your message )
Capture AND Print stderr
ERROR=$( ./useless.sh 3>&1 1>&2 2>&3 | tee /dev/fd/2 )
Breakdown
You can use $() to capture stdout, but you want to capture stderr instead. So you swap stdout and stderr. Using fd 3 as the temporary storage in the standard swap algorithm.
If you want to capture AND print use tee to make a duplicate. In this case the output of tee will be captured by $() rather than go to the console, but stderr(of tee) will still go to the console so we use that as the second output for tee via the special file /dev/fd/2 since tee expects a file path rather than a fd number.
NOTE: That is an awful lot of redirections in a single line and the order matters. $() is grabbing the stdout of tee at the end of the pipeline, and the pipeline itself routes the stdout of ./useless.sh to the stdin of tee AFTER we swapped stdout and stderr for ./useless.sh.
Using stdout of ./useless.sh
The OP said he still wanted to use (not just print) stdout, like ./useless.sh | sed 's/Output/Useless/'.
No problem: just do it BEFORE swapping stdout and stderr. I recommend moving it into a function or file (also-useless.sh) and calling that in place of ./useless.sh in the line above, as sketched below.
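A sketch of that suggestion (also_useless is a hypothetical wrapper name, not from the question):
#!/bin/bash
# hypothetical wrapper: do the stdout processing BEFORE the fd swap
also_useless() {
./useless.sh | sed 's/Output/Useless/'
}
# swap stdout/stderr of the wrapper, then capture AND print its stderr
ERROR=$( also_useless 3>&1 1>&2 2>&3 | tee /dev/fd/2 )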
However, if you want to CAPTURE stdout AND stderr, then I think you have to fall back on temporary files because $() will only do one at a time and it makes a subshell from which you cannot return variables.
Improving on YellowApple's answer:
This is a Bash function to capture stderr into any variable
stderr_capture_example.sh:
#!/usr/bin/env bash
# Capture stderr from a command to a variable while maintaining stdout
# #Args:
# $1: The variable name to store the stderr output
# $2: Vararg command and arguments
# #Return:
# The command's return code, or 2 if arguments are missing
function capture_stderr {
[ $# -lt 2 ] && return 2
local stderr="$1"
shift
{
printf -v "$stderr" '%s' "$({ "$#" 1>&3; } 2>&1)"
} 3>&1
}
# Testing with a call to erroring ls
LANG=C capture_stderr my_stderr ls "$0" ''
printf '\nmy_stderr contains:\n%s' "$my_stderr"
Testing:
bash stderr_capture_example.sh
Output:
stderr_capture_example.sh
my_stderr contains:
ls: cannot access '': No such file or directory
This function can be used to capture the returned choice of a dialog command.
If you want to bypass the use of a temporary file you may be able to use process substitution. I haven't quite gotten it to work yet. This was my first attempt:
$ ./useless.sh 2> >( ERROR=$(<) )
-bash: command substitution: line 42: syntax error near unexpected token `)'
-bash: command substitution: line 42: `<)'
Then I tried
$ ./useless.sh 2> >( ERROR=$( cat <() ) )
This Is Output
$ echo $ERROR # $ERROR is empty
However
$ ./useless.sh 2> >( cat <() > asdf.txt )
This Is Output
$ cat asdf.txt
This Is Error
So the process substitution is doing generally the right thing... unfortunately, whenever I wrap STDIN inside >( ) with something in $() in an attempt to capture that to a variable, I lose the contents of $(). I think that this is because $() launches a sub process which no longer has access to the file descriptor in /dev/fd which is owned by the parent process.
Process substitution has bought me the ability to work with a data stream which is no longer in STDERR, unfortunately I don't seem to be able to manipulate it the way that I want.
$ b=$( ( a=$( (echo stdout;echo stderr >&2) ) ) 2>&1 )
$ echo "a=>$a b=>$b"
a=>stdout b=>stderr
For error proofing your commands:
execute [INVOKING-FUNCTION] [COMMAND]
execute () {
function="${1}"
command="${2}"
error=$(eval "${command}" 2>&1 >"/dev/null")
if [ ${?} -ne 0 ]; then
echo "${function}: ${error}"
exit 1
fi
}
Inspired by Lean manufacturing:
- Make errors impossible by design
- Make the steps as small as possible
- Finish items one by one
- Make it obvious to anyone
I'll use the find command
find / -maxdepth 2 -iname 'tmp' -type d
as a non-superuser for the demo. It should complain 'Permission denied' when accessing the / dir.
#!/bin/bash
echo "terminal:"
{ err="$(find / -maxdepth 2 -iname 'tmp' -type d 2>&1 1>&3 3>&- | tee /dev/stderr)"; } 3>&1 | tee /dev/fd/4 2>&1; out=$(cat /dev/fd/4)
echo "stdout:" && echo "$out"
echo "stderr:" && echo "$err"
that gives output:
terminal:
find: ‘/root’: Permission denied
/tmp
/var/tmp
find: ‘/lost+found’: Permission denied
stdout:
/tmp
/var/tmp
stderr:
find: ‘/root’: Permission denied
find: ‘/lost+found’: Permission denied
The terminal output also contains the /dev/stderr content, just as if you were running that find command without any script. $out has the /dev/stdout content and $err has the /dev/stderr content.
use:
#!/bin/bash
echo "terminal:"
{ err="$(find / -maxdepth 2 -iname 'tmp' -type d 2>&1 1>&3 3>&-)"; } 3>&1 | tee /dev/fd/4; out=$(cat /dev/fd/4)
echo "stdout:" && echo "$out"
echo "stderr:" && echo "$err"
if you don't want to see /dev/stderr in the terminal output.
terminal:
/tmp
/var/tmp
stdout:
/tmp
/var/tmp
stderr:
find: ‘/root’: Permission denied
find: ‘/lost+found’: Permission denied

Bash - getting return code, stdout and stderr from piped invocations

I made a simple logger which has a method logMETHOD. Its job is to:
- Put stderr and stdout into a variable log (and later into my global _LOG variable)
- Print stderr of the invoked method on stderr and stdout on stdout, so I can see it in the console.
- Return the return code of the invoked function.
Its invocation looks like this:
logMETHOD myMethod arg1 arg2 arg3
I figured out how to put standard and error output to both log variable and a console but I cannot get the right return code.
My code so far:
function logMETHOD {
exec 5>&1
local log
log="$($1 ${#:2} 2>&1 | tee /dev/fd/5)"
local retVal=$?
_LOG+=$log$'\n'
return $retVal
}
Unfortunately, the return code I get comes (probably) from assigning the value (or maybe from tee).
BONUS QUESTION:
Is there a possibility to achieve my goals without 2>&1, which merges stdout and stderr on the console as well?
I tested a solution with PIPESTATUS, but the code is still 0.
function main {
logMETHOD alwaysError
}
function logMETHOD {
exec 5>&1
local log
local retVal
log="$( "$#" 2>&1 | tee /dev/fd/5 )"
retVal=${PIPESTATUS[0]}
echo "RETVAL: $retVal"
echo "LOG: $log"
_LOG+=$log$'\n'
return $retVal
}
function alwaysError {
return 1
}
main "$@"
PIPESTATUS would be a good solution, but here it already holds the return value of the log=... assignment. If you want the return value of the "$@" you have to write it like this:
log="$( "$@" 2>&1 | tee /dev/fd/5; echo ${PIPESTATUS[0]} >/tmp/retval )"
retVal=$(</tmp/retval)
Assigning it to a variable would not work, because its scope would not extend to the calling shell, so you have to resort to using a tempfile.
As for stderr, $() can only extract stdout, therefore you have to use a tempfile for that too, if you want to handle it separately.
log="$( "$#" 2>/tmp/stderr | tee /dev/fd/5; echo ${PIPESTATUS[0]}>/tmp/retval )"
stderr_log=$(</tmp/stderr)
retVal=$(</tmp/retval)
If you just want to spare the redirection:
log="$( "$#" |& tee /dev/fd/5; echo ${PIPESTATUS[0]}>/tmp/retval )"
From man bash:
If |& is used, command's standard error, in addition to its standard output, is connected to command2's standard input through the pipe; it is shorthand for 2>&1 |. This implicit redirection of the standard error to the standard output is performed after any redirections specified by the command.
The simplest fix is to use PIPESTATUS to retrieve the exit status of the command.
# There's no need to split $1 off of $@ to run the command.
log=$( "$@" 2>&1 | tee /dev/fd/5 )
local retVal=${PIPESTATUS[0]}
