bash: How to prepend a string to stderr lines, combine stdout and stderr in the exact order produced, and store the result in one variable?

What I have done so far is:
#!/bin/bash
exec 2> >(sed 's/^/ERROR= /')
var=$(
sleep 1 ;
hostname ;
ifconfig | wc -l ;
ls /sfsd;
ls hasdh;
mkdir /tmp/asdasasd/asdasd/asdasd;
ls /tmp ;
)
echo "$var"
This does prepend ERROR= at the start of each error line, but it displays all the errors first and then the stdout (not in the order in which the commands were executed).
If we skip storing the output in a variable and execute the commands directly, the output comes in the desired order.
Any expert opinion would be appreciated.

The primary problem with your script is that the command substitution $(...) only captures the subshell's standard output; the subshell's standard error still just flows through to the parent shell's standard error. As it happens, you've redirected the parent shell's standard error in a way that ends up populating the parent shell's standard output; but that completely circumvents the $(...), which is only capturing the subshell's standard output.
Do you see what I mean?
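A quick illustration of that point:
var=$(echo out; echo err >&2)   # "err" bypasses the capture and goes to the terminal's stderr
echo "$var"                     # prints only: out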
So, you can fix that by redirecting the subshell's standard error in a way that ends up populating its standard output, which is what gets captured:
var=$(
exec 2> >(sed 's/^/ERROR= /')
sleep 1
hostname
ifconfig | wc -l
ls /sfsd
ls hasdh
mkdir /tmp/asdasasd/asdasd/asdasd
ls /tmp
)
echo "$var"
Even so, this does not guarantee proper ordering of lines. The problem is that sed is running in parallel with everything else in the subshell, so while it's just received an error-line and is busy planning to write to standard output, one of the later commands in the subshell can be plowing ahead and already writing more things to standard output!
You can improve that by launching sed separately for each command, so that the shell will wait for sed to complete before proceeding to the next command:
var=$(
sleep 1 2> >(sed 's/^/ERROR= /')
hostname 2> >(sed 's/^/ERROR= /')
{ ifconfig | wc -l ; } 2> >(sed 's/^/ERROR= /')
ls /sfsd 2> >(sed 's/^/ERROR= /')
ls hasdh 2> >(sed 's/^/ERROR= /')
mkdir /tmp/asdasasd/asdasd/asdasd 2> >(sed 's/^/ERROR= /')
ls /tmp 2> >(sed 's/^/ERROR= /')
)
echo "$var"
Even so, sed will be running concurrently with each command, so if any of those commands is a complicated command that writes both to standard output and to standard error, then the order that that command's output is captured in may not match the order in which the command actually wrote it. But this should probably be good enough for your purposes.
You can improve the readability a bit by creating a wrapper function for the simple-command (non-pipeline) case:
var=$(
function fix-stderr () {
"$#" 2> >(sed 's/^/ERROR= /')
}
fix-stderr sleep 1
fix-stderr hostname
fix-stderr eval 'ifconfig | wc -l' # using eval to get a simple command
fix-stderr ls /sfsd
fix-stderr ls hasdh
fix-stderr mkdir /tmp/asdasasd/asdasd/asdasd
fix-stderr ls /tmp
)
echo "$var"

The sed command runs asynchronously from the rest of the shell; its output goes to standard error as soon as it processes its input from the commands in the command substitution. The standard output of those commands, however, is captured in $var and not displayed until the echo command runs.
Even if you weren't capturing the output, there is a chance the standard error and standard output of those commands wouldn't appear as you expect, because the sed command that ultimately produces the error messages might not be scheduled by the OS when you expect it to be, delaying the appearance of the error messages.
When you run a command in the usual way from the terminal, that command's standard error and standard output point to the same file: the terminal itself. As such, writes to the file maintain the order in which they occur in the program. As soon as you pipe one or the other to another process, you lose all control over how the two are spliced back together, if ever. In your case, you are redirecting standard error to sed, which writes modified lines back to standard output. But you have no control over when the OS schedules sed to run and when your shell runs, so you can't control the order in which lines are written.
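You can see the nondeterminism with a one-liner like this (the position of the ERROR= line can vary from run to run):
{ echo out1; echo err1 >&2; echo out2; } 2> >(sed 's/^/ERROR= /')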
It helps to redirect standard error separately for each command:
tag_error () { sed 's/^/ERROR= /'; }
hostname 2> >(tag_error)
{ ifconfig | wc -l ; } 2> >(tag_error)
# etc
but this still doesn't guarantee that writes from within the same program are ordered as if they were all writing to the same file.
(ruakh has covered how to combine this with capturing standard output, so I won't bother adding it now. See his answer.)
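If determinism matters more than live interleaving, a different approach (just a sketch, not from either answer) is to capture each command's stderr synchronously and print it, tagged, only after the command finishes. You lose interleaving within a single command, but the ordering is fully reproducible:
tag_error () { sed 's/^/ERROR= /'; }
# stdout passes straight through via fd 3; stderr is captured,
# then printed (tagged) once the command has completed
run_tagged () {
local err
err=$( { "$@" 1>&3; } 2>&1 )
if [[ -n $err ]]; then tag_error <<<"$err"; fi
} 3>&1
run_tagged hostname
run_tagged ls /sfsd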

One possible solution would be putting the commands in an array then execute them within a loop:
declare -a cmds=('sleep 1' 'hostname' 'eval ifconfig | wc -l' 'ls /sfsd' 'ls /tmp' 'ls hasdh')
for i in "${cmds[#]}"; do
$i 2> >(sed -E 's/^/ERROR=/')
done
When an error occurs, it should print in the same order in which it occurred during execution. Using a command such as sh script.sh within the array will also reveal any stdout or stderr from the resulting external script. For piped commands, an eval will likely be needed as well.
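A variant (a sketch) that evals every element, so pipelines in the array need no special casing:
declare -a cmds=('sleep 1' 'hostname' 'ifconfig | wc -l' 'ls /sfsd' 'ls /tmp' 'ls hasdh')
for cmd in "${cmds[@]}"; do
eval "$cmd" 2> >(sed 's/^/ERROR= /')
done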


What is the difference between using process substitution vs. a pipe?

I came across an example for the using tee utility in the tee info page:
wget -O - http://example.com/dvd.iso | tee >(sha1sum > dvd.sha1) > dvd.iso
I looked up the >(...) syntax and found something called "process substitution". From what I understand, it makes a process look like a file that another process could write/append its output to. (Please correct me if I'm wrong on that point.)
How is this different from a pipe (|)? I see a pipe being used in the above example; is it just a precedence issue, or is there some other difference?
There's no benefit here, as the line could equally well have been written like this:
wget -O - http://example.com/dvd.iso | tee dvd.iso | sha1sum > dvd.sha1
The differences start to appear when you need to pipe to/from multiple programs, because these can't be expressed purely with |. Feel free to try:
# Calculate 2+ checksums while also writing the file
wget -O - http://example.com/dvd.iso | tee >(sha1sum > dvd.sha1) >(md5sum > dvd.md5) > dvd.iso
# Accept input from two 'sort' processes at the same time
comm -12 <(sort file1) <(sort file2)
They're also useful in certain cases where you for any reason can't or don't want to use pipelines:
# Start logging all error messages to a file as well as the screen
# Pipes don't work because bash doesn't support it in this context
exec 2> >(tee log.txt)
ls doesntexist
# Sum a column of numbers
# Pipes don't work because they create a subshell
sum=0
while IFS= read -r num; do (( sum+=num )); done < <(curl http://example.com/list.txt)
echo "$sum"
# apt-get something with a generated config file
# Pipes don't work because we want stdin available for user input
apt-get install -c <(sed -e "s/%USER%/$USER/g" template.conf) mysql-server
Another major difference is the propagation of return values / exit codes (I'll use simpler commands to illustrate):
Pipe:
$ ls -l /notthere | tee listing.txt
ls: cannot access '/notthere': No such file or directory
$ echo $?
0
-> exit code of tee is propagated
Process substitution:
$ ls -l /notthere > >(tee listing.txt)
ls: cannot access '/notthere': No such file or directory
$ echo $?
2
-> exit code of ls is propagated
There are of course several methods to work around this (e.g. set -o pipefail, variable PIPESTATUS), but I think it's worth mentioning since this is the default behavior.
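For example, with set -o pipefail the pipeline returns the last nonzero exit code among its commands, so the pipe version then behaves like the process substitution one:
$ set -o pipefail
$ ls -l /notthere | tee listing.txt
ls: cannot access '/notthere': No such file or directory
$ echo $?
2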
Another rather subtle, yet potentially annoying, difference lies in subprocess termination (best illustrated using commands that produce lots of output):
Pipe:
#!/usr/bin/env bash
tar --create --file /tmp/etc-backup.tar --verbose --directory /etc . 2>&1 | tee /tmp/etc-backup.log
retval=${PIPESTATUS[0]}
(( ${retval} == 0 )) && echo -e "\n*** SUCCESS ***\n" || echo -e "\n*** FAILURE (EXIT CODE: ${retval}) ***\n"
-> after the line containing the pipe construct, all commands of the pipe have already terminated (otherwise PIPESTATUS could not contain their respective exit codes)
Process substitution:
#!/usr/bin/env bash
tar --create --file /tmp/etc-backup.tar --verbose --directory /etc . &> >(tee /tmp/etc-backup.log)
retval=$?
(( ${retval} == 0 )) && echo -e "\n*** SUCCESS ***\n" || echo -e "\n*** FAILURE (EXIT CODE: ${retval}) ***\n"
-> after the line containing the process substitution, the command within >(...), i.e. tee in this example, may still be running, potentially causing desynchronized console output (SUCCESS / FAILURE message gets mixed in with still flowing tar output) [*]
[*] Can be reproduced on the framebuffer console, but does not seem to affect GUI terminals like KDE's Konsole (likely due to different buffering strategies).
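One way to avoid that desynchronization is to explicitly wait for the process substitution; this sketch assumes bash >= 4.4, where the PID of the last process substitution is available in $! and wait accepts it:
#!/usr/bin/env bash
tar --create --file /tmp/etc-backup.tar --verbose --directory /etc . &> >(tee /tmp/etc-backup.log)
retval=$?
wait $! # wait for the tee inside >(...) before printing the verdict
(( retval == 0 )) && echo -e "\n*** SUCCESS ***\n" || echo -e "\n*** FAILURE (EXIT CODE: ${retval}) ***\n"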

Copy *unbuffered* stdout to file from within bash script itself

I want to copy stdout to a log file from within a bash script, meaning I don't want to call the script with output piped to tee, I want the script itself to handle it. I've successfully used this answer to accomplish this, using the following code:
#!/bin/bash
exec > >(sed "s/^/[${1}] /" | tee -a myscript.log)
exec 2>&1
# <rest of script>
echo "hello"
sleep 10
echo "world"
This works, but has the downside of output being buffered until the script is completed, as is also discussed in the linked answer. In the above example, both "hello" and "world" will show up in the log only after the 10 seconds have passed.
I am aware of the stdbuf command, and if running the script with
stdbuf -oL ./myscript.sh
then stdout is indeed continuously printed both to the file and the terminal.
However, I'd like this to be handled from within the script as well. Is there any way to combine these two solutions? I'd rather not resort to a wrapper script that simply calls the original script enclosed with "stdbuf -oL".
You can use a workaround and make the script execute itself with stdbuf, if a special argument is present:
#!/bin/bash
if [[ "$1" != __BUFFERED__ ]]; then
prog="$0"
stdbuf -oL "$prog" __BUFFERED__ "$#"
else
shift #discard __BUFFERED__
exec > >(sed "s/^/[${1}] /" | tee -a myscript.log)
exec 2>&1
# <rest of script>
echo "hello"
sleep 1
echo "world"
fi
This will mostly work:
if you run the script with ./test, it shows unbuffered [] hello\n[] world.
if you run the script with ./test 123 456, it shows [123] hello\n[123] world like you want.
it won't work, however, if you run it with bash test - $0 is set to test which is not your script. Fixing this is not in the scope of this question though.
The delay in your first solution is caused by sed, not by tee. Try this instead:
#!/bin/bash
exec 6>&1 2>&1>&>(tee -a myscript.log)
To "undo" the tee effect:
exec 1>&6 2>&6 6>&-
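If you want to keep the [${1}] prefix from the original script, another option (a sketch, assuming GNU sed, whose -u flag disables output buffering) is:
#!/bin/bash
exec > >(sed -u "s/^/[${1}] /" | tee -a myscript.log)
exec 2>&1
# <rest of script>
echo "hello"
sleep 10
echo "world"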

Hook all command output within bash

Just for fun I want to pipe all output text in a terminal to espeak. For example, after this is set up I should be able to type echo hi and hear "hi" spoken, or ls and hear my directory contents listed.
The only promising method to capture output I've found so far is from here: http://www.linuxjournal.com/content/bash-redirections-using-exec
This is what I have so far:
npipe=/tmp/$$.tmp
mknod $npipe p
tee /dev/tty <$npipe | espeak &
espeakpid=$!
exec 1>&-
exec 1>$npipe
trap "rm -f $npipe; kill $espeakpid" EXIT
It works (also printing a bunch of "Done" jobs), but creating the named pipe, removing with trap and printing the output with tee all just seems a bit messy. Is there a simpler way?
This is one way:
exec > >(exec tee >(exec xargs -n 1 -d '\n' espeak -- &>/dev/null))
If you want to be able to restore back to original output stream:
exec 3>&1 ## Store original stdout to fd 3.
exec 4> >(exec tee >(exec xargs -n 1 -d '\n' espeak -- &>/dev/null)) ## Open "espeak" as fd 4.
exec >&4 ## Redirect stdout to "espeak".
exec >&3 ## Redirect back to normal.
I use xargs -n 1 because espeak doesn't do anything until EOF is reached so we summon an instance of it per line. This can be customized of course, but there's your answer. And of course a while read loop can also be an option for this.
I also use exec in the process substitutions to make sure we get rid of unnecessary subshells.
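The while read variant mentioned above would look something like this (a sketch):
exec > >(exec tee >(while IFS= read -r line; do espeak -- "$line" &>/dev/null; done))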
Seems like it's way easier than that - I just tested this and it works:
$ echo "these are not the droids you are looking for" | espeak --stdin
The --stdin flag is the key. From espeak's man page:
--stdin
Read text input from stdin instead of a file
And if what you want to hear is a very long output, I guess you can use xargs if you run into Argument list too long errors...

Copy stderr and stdout to a file as well as the screen in ksh

I'm looking for a solution (similar to the bash code below) to copy both stdout and stderr to a file in addition to the screen within ksh on Solaris.
The following code works great in the bash shell:
#!/usr/bin/bash
# Clear the logfile
>logfile.txt
# Redirect all script output to a logfile as well as their normal locations
exec > >(tee -a logfile.txt)
exec 2> >(tee -a logfile.txt >&2)
date
ls -l /non-existent/path
For some reason this is throwing a syntax error on Solaris. I assume it's because I can't do process substitution, and I've seen some posts suggesting the use of mkfifo, but I've yet to come up with a working solution.
Does anyone know of a way that all output can be redirected to a file in addition to the default locations?
Which version of ksh are you using? The >() is not supported in ksh88, but is supported in ksh93 - the bash code should work unchanged (aside from the #! line) on ksh93.
If you are stuck with ksh88 (poor thing!) then you can emulate the bash/ksh93 behaviour using a named pipe:
#!/bin/ksh
# Clear the logfile
>logfile.txt
pipe1="/tmp/mypipe1.$$"
pipe2="/tmp/mypipe2.$$"
trap 'rm "$pipe1" "$pipe2"' EXIT
mkfifo "$pipe1"
mkfifo "$pipe2"
tee -a logfile.txt < "$pipe1" &
tee -a logfile.txt >&2 < "$pipe2" &
# Redirect all script output to a logfile as well as their normal locations
exec >"$pipe1"
exec 2>"$pipe2"
date
ls -l /non-existent/path
The above uses a second pipe so that stderr is handled separately, which also makes it possible to redirect stderr to a different file if desired.
How about this:
(some commands ...) 2>&1 | tee logfile.txt
Add -a to the tee command line for subsequent invocations to append rather than overwrite.
In ksh, the following works very well for me
LOG=log_file.$(date +%Y%m%d%H%M%S).txt
{
ls
date
... whatever command
} 2>&1 | tee -a $LOG

How do I get both STDOUT and STDERR to go to the terminal and a log file?

I have a script which will be run interactively by non-technical users. The script writes status updates to STDOUT so that the user can be sure that the script is running OK.
I want both STDOUT and STDERR redirected to the terminal (so that the user can see that the script is working as well as see if there was a problem). I also want both streams redirected to a log file.
I've seen a bunch of solutions on the net. Some don't work and others are horribly complicated. I've developed a workable solution (which I'll enter as an answer), but it's kludgy.
The perfect solution would be a single line of code that could be incorporated into the beginning of any script that sends both streams to both the terminal and a log file.
EDIT: Redirecting STDERR to STDOUT and piping the result to tee works, but it depends on the users remembering to redirect and pipe the output. I want the logging to be fool-proof and automatic (which is why I'd like to be able to embed the solution into the script itself.)
Use "tee" to redirect to a file and the screen. Depending on the shell you use, you first have to redirect stderr to stdout using
./a.out 2>&1 | tee output
or
./a.out |& tee output
There is also the script command (a standalone utility, not a shell built-in, although it is often mentioned alongside csh) that will capture everything that goes to the screen to a file. You start it by typing script, then do whatever it is you want to capture, then hit Control-D to close the typescript file. It works from sh/bash/ksh as well.
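On Linux, the util-linux version of script can also run a single command non-interactively; for example (a sketch, assuming util-linux script):
script -c './myscript.sh' myscript.log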
Also, since you have indicated that these are your own sh scripts that you can modify, you can do the redirection internally by surrounding the whole script with braces or brackets, like
#!/bin/sh
{
... whatever you had in your script before
} 2>&1 | tee output.file
Approaching half a decade later...
I believe this is the "perfect solution" sought by the OP.
Here's a one liner you can add to the top of your Bash script:
exec > >(tee -a "$HOME/logfile") 2>&1
Here's a small script demonstrating its use:
#!/usr/bin/env bash
exec > >(tee -a "$HOME/logfile") 2>&1
# Test redirection of STDOUT
echo test_stdout
# Test redirection of STDERR
ls test_stderr___this_file_does_not_exist
(Note: This only works with Bash. It will not work with /bin/sh.)
Adapted from here; the original did not, from what I can tell, catch STDERR in the logfile. Fixed with a note from here.
The Pattern
the_cmd 1> >(tee stdout.txt ) 2> >(tee stderr.txt >&2 )
This redirects both stdout and stderr separately, and it sends separate copies of stdout and stderr to the caller (which might be your terminal).
In zsh, it will not proceed to the next statement until the tees have finished.
In bash, you may find that the final few lines of output appear after whatever statement comes next.
In either case, the right bits go to the right places.
Explanation
Here's a script (stored in ./example):
#! /usr/bin/env bash
the_cmd()
{
echo out;
1>&2 echo err;
}
the_cmd 1> >(tee stdout.txt ) 2> >(tee stderr.txt >&2 )
Here's a session:
$ foo=$(./example)
err
$ echo $foo
out
$ cat stdout.txt
out
$ cat stderr.txt
err
Here's how it works:
Both tee processes are started, and their stdins are assigned to file descriptors. Because they're enclosed in process substitutions, the paths to those file descriptors are substituted into the calling command, so now it looks something like this:
the_cmd 1> /proc/self/fd/13 2> /proc/self/fd/14
the_cmd runs, writing stdout to the first file descriptor, and stderr to the second one.
In the bash case, once the_cmd finishes, the following statement happens immediately (if your terminal is the caller, then you will see your prompt appear).
In the zsh case, once the_cmd finishes, the shell waits for both of the tee processes to finish before moving on. More on this here.
The first tee process, which is reading from the_cmd's stdout, writes a copy of that stdout back to the caller, because that's what tee does. Its outputs are not redirected, so they make it back to the caller unchanged.
The second tee process has its stdout redirected to the caller's stderr (which is good, because its stdin is reading from the_cmd's stderr). So when it writes to its stdout, those bits go to the caller's stderr.
This keeps stderr separate from stdout both in the files and in the command's output.
If the first tee writes any errors, they'll show up in both the stderr file and in the command's stderr; if the second tee writes any errors, they'll show up only in the terminal's stderr.
To redirect stderr to stdout, append this to your command: 2>&1
To output to the terminal and also log into a file, you should use tee
Both together would look like this:
mycommand 2>&1 | tee mylogfile.log
EDIT: For embedding into your script you would do the same. So your script
#!/bin/sh
whatever1
whatever2
...
whatever3
would end up as
#!/bin/sh
( whatever1
whatever2
...
whatever3 ) 2>&1 | tee mylogfile.log
EDIT:
I see I got derailed and ended up answering a different question from the one asked. The answer to the real question is at the bottom of Paul Tomblin's answer. (If you want to enhance that solution to redirect stdout and stderr separately for some reason, you could use the technique I describe here.)
I've been wanting an answer that preserves the distinction between stdout and stderr.
Unfortunately all of the answers given so far that preserve that distinction
are race-prone: they risk programs seeing incomplete input, as I pointed out in comments.
I think I finally found an answer that preserves the distinction,
is not race prone, and isn't terribly fiddly either.
First building block: to swap stdout and stderr:
my_command 3>&1 1>&2 2>&3-
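Reading the redirections left to right:
# 3>&1  make fd 3 a copy of the current stdout
# 1>&2  point stdout at the current stderr
# 2>&3- point stderr at fd 3 (the old stdout), then close fd 3
my_command 3>&1 1>&2 2>&3-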
Second building block: if we wanted to filter (e.g. tee) only stderr,
we could accomplish that by swapping stdout and stderr, filtering, and then swapping back:
{ my_command 3>&1 1>&2 2>&3- | stderr_filter;} 3>&1 1>&2 2>&3-
Now the rest is easy: we can add a stdout filter, either at the beginning:
{ { my_command | stdout_filter;} 3>&1 1>&2 2>&3- | stderr_filter;} 3>&1 1>&2 2>&3-
or at the end:
{ my_command 3>&1 1>&2 2>&3- | stderr_filter;} 3>&1 1>&2 2>&3- | stdout_filter
To convince myself that both of the above commands work, I used the following:
alias my_command='{ echo "to stdout"; echo "to stderr" >&2;}'
alias stdout_filter='{ sleep 1; sed -u "s/^/teed stdout: /" | tee stdout.txt;}'
alias stderr_filter='{ sleep 2; sed -u "s/^/teed stderr: /" | tee stderr.txt;}'
Output is:
...(1 second pause)...
teed stdout: to stdout
...(another 1 second pause)...
teed stderr: to stderr
and my prompt comes back immediately after the "teed stderr: to stderr", as expected.
Footnote about zsh:
The above solution works in bash (and maybe some other shells, I'm not sure), but it doesn't work in zsh. There are two reasons it fails in zsh:
the syntax 2>&3- isn't understood by zsh; that has to be rewritten
as 2>&3 3>&-
in zsh (unlike other shells), if you redirect a file descriptor
that's already open, in some cases (I don't completely understand how it decides) it does a built-in tee-like behavior instead. To avoid this, you have to close each fd prior to
redirecting it.
So, for example, my second solution has to be rewritten for zsh as {my_command 3>&1 1>&- 1>&2 2>&- 2>&3 3>&- | stderr_filter;} 3>&1 1>&- 1>&2 2>&- 2>&3 3>&- | stdout_filter (which works in bash too, but is awfully verbose).
On the other hand, you can take advantage of zsh's mysterious built-in implicit teeing to get a much shorter solution for zsh, which doesn't run tee at all:
my_command >&1 >stdout.txt 2>&2 2>stderr.txt
(I wouldn't have guessed from the docs I found that the >&1 and 2>&2 are the thing that trigger zsh's implicit teeing; I found that out by trial-and-error.)
Use the tee program and dup stderr to stdout.
program 2>&1 | tee logfile
Use the script command in your script (man 1 script)
Create a wrapper shell script (2 lines) that runs script(1) and then calls exit.
Part 1: wrap.sh
#!/bin/sh
script -c './realscript.sh'
exit
Part 2: realscript.sh
#!/bin/sh
echo 'Output'
Result:
~: sh wrap.sh
Script started, file is typescript
Output
Script done, file is typescript
~: cat typescript
Script started on fr. 12. des. 2008 kl. 18.07 +0100
Output
Script done on fr. 12. des. 2008 kl. 18.07 +0100
~:
I created a script called "RunScript.sh". The contents of this script is:
${APP_HOME}/${1}.sh ${2} ${3} ${4} ${5} ${6} 2>&1 | tee -a ${APP_HOME}/${1}.log
I call it like this:
./RunScript.sh ScriptToRun Param1 Param2 Param3 ...
This works, but it requires the application's scripts to be run via an external script. It's a bit kludgy.
A year later, here's an old bash script for logging anything. For example,
teelog make ... logs to a generated log name (and see the trick for logging nested makes too.)
#!/bin/bash
me=teelog
Version="2008-10-9 oct denis-bz"
Help() {
cat <<!
$me anycommand args ...
logs the output of "anycommand ..." as well as displaying it on the screen,
by running
anycommand args ... 2>&1 | tee `day`-command-args.log
That is, stdout and stderr go to both the screen, and to a log file.
(The Unix "tee" command is named after "T" pipe fittings, 1 in -> 2 out;
see http://en.wikipedia.org/wiki/Tee_(command) ).
The default log file name is made up from "command" and all the "args":
$me cmd -opt dir/file logs to `day`-cmd--opt-file.log .
To log to xx.log instead, either export log=xx.log or
$me log=xx.log cmd ...
If "logdir" is set, logs are put in that directory, which must exist.
An old xx.log is moved to /tmp/\$USER-xx.log .
The log file has a header like
# from: command args ...
# run: date pwd etc.
to show what was run; see "From" in this file.
Called as "Log" (ln -s $me Log), Log anycommand ... logs to a file:
command args ... > `day`-command-args.log
and tees stderr to both the log file and the terminal -- bash only.
Some commands that prompt for input from the console, such as a password,
don't prompt if they "| tee"; you can only type ahead, carefully.
To log all "make" s, including nested ones like
cd dir1; \$(MAKE)
cd dir2; \$(MAKE)
...
export MAKE="$me make"
!
# See also: output logging in screen(1).
exit 1
}
#-------------------------------------------------------------------------------
# bzutil.sh denisbz may2008 --
day() { # 30mar, 3mar
/bin/date +%e%h | tr '[A-Z]' '[a-z]' | tr -d ' '
}
edate() { # 19 May 2008 15:56
echo `/bin/date "+%e %h %Y %H:%M"`
}
From() { # header # from: $* # run: date pwd ...
case `uname` in Darwin )
mac=" mac `sw_vers -productVersion`"
esac
cut -c -200 <<!
${comment-#} from: $@
${comment-#} run: `edate` in $PWD `uname -n` $mac `arch`
!
# mac $PWD is pwd -L not -P real
}
# log name: day-args*.log, change this if you like --
logfilename() {
log=`day`
[[ $1 == "sudo" ]] && shift
for arg
do
log="$log-${arg##*/}" # basename
(( ${#log} >= 100 )) && break # max len 100
done
# no blanks etc in logfilename please, tr them to "-"
echo $logdir/` echo "$log".log | tr -C '.:+=[:alnum:]_\n' - `
}
#-------------------------------------------------------------------------------
case "$1" in
-v* | --v* )
echo "$0 version: $Version"
exit 1 ;;
"" | -* )
Help
esac
# scan log= etc --
while [[ $1 == [a-zA-Z_]*=* ]]; do
export "$1"
shift
done
: ${logdir=.}
[[ -w $logdir ]] || {
echo >&2 "error: $me: can't write in logdir $logdir"
exit 1
}
: ${log=` logfilename "$#" `}
[[ -f $log ]] &&
/bin/mv "$log" "/tmp/$USER-${log##*/}"
case ${0##*/} in # basename
log | Log ) # both to log, stderr to caller's stderr too --
{
From "$#"
"$#"
} > $log 2> >(tee /dev/stderr) # bash only
# see http://wooledge.org:8000/BashFAQ 47, stderr to a pipe
;;
* )
#-------------------------------------------------------------------------------
{
From "$#" # header: from ... date pwd etc.
"$#" 2>&1 # run the cmd with stderr and stdout both to the log
} | tee $log
# mac tee buffers stdout ?
esac
This question does not seem to have been gracefully solved yet.
Every time I search for "how to output to stdout and stderr at the same time", Google directs me to this post.
Today, I finally found a simple and effective way to solve almost all needs of this kind.
The essential idea is the tee command, which can print to multiple outputs at the same time, and the Linux-specific /proc/self/fd/{1,2,...}, which represent stdout, stderr, and so on.
print stdin to both stdout and stderr
tee /proc/self/fd/2
print stdin to both stdout and stderr, and file
tee /proc/self/fd/2 file
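For example (assuming Linux's /proc):
echo "hello" | tee /proc/self/fd/2 hello.log # "hello" lands on stdout, on stderr, and in hello.log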
Hope this is helpful.
Here is a solution that works in bash using redirection, combining the solutions of kvantour, MatrixManAtYrService, and Jason Sydes:
#!/bin/bash
exec 1> >(tee x.log) 2> >(tee x.err >&2)
echo "test for log"
echo "test for err" 1>&2
Save the script above as x.sh. After running ./x.sh, x.log will contain only the stdout, while x.err will contain only the stderr.
