Bash process substitution and syncing

(Possibly related to Do some programs not accept process substitution for input files?)
In some Bash unit test scripts I'm using the following trick to log and display stdout and stderr of a command:
command > >(tee "${stdoutF}") 2> >(tee "${stderrF}" >&2)
This process produces some output to stdout, so the $stdoutF file gets some data. Then I run another command which does not output any data:
diff -r "$source" "$target" > >(tee "${stdoutF}") 2> >(tee "${stderrF}" >&2)
However, it looks like this process does not always finish before the test for emptiness is run (using shunit-ng):
assertNull 'Unexpected output to stdout' "$(<"$stdoutF")"
In 100 test runs, this failed 25 times.
Should it be sufficient to call sync before testing the file for emptiness:
sync
assertNull 'Unexpected output to stdout' "$(<"$stdoutF")"
... and/or should it work by forcing the sequence of the commands:
diff -r "$source" "$target" \
> >(tee "${stdoutF}"; assertNull 'Unexpected output to stdout' "$(<"$stdoutF")") \
2> >(tee "${stderrF}" >&2)
... and/or is it possible to tee it somehow to assertNull directly instead of a file?
Update: sync is not the answer - See Gilles' response below.
Update 2: Discussion taken further to Save stdout, stderr and stdout+stderr synchronously. Thanks for the answers!

In bash, a process substitution command foo > >(bar) finishes as soon as foo finishes. (This is not discussed in the documentation.) You can check this with
: > >(sleep 1; echo a)
This command returns immediately, then prints a asynchronously one second later.
In your case, the tee command takes a little bit of time to finish after command completes. Adding sync gave tee enough time to complete, but it doesn't remove the race condition any more than adding a sleep would; it just makes the race less likely to manifest.
More generally, sync does not have any internally observable effect: it only makes a difference if you want to access the device where your filesystems are stored under a different operating system instance. In clearer terms: if your system loses power, only data written before the last sync is guaranteed to be available after you reboot.
As for removing the race condition, here are a few possible approaches:
Explicitly synchronize all substituted processes.
mkfifo sync.pipe
command > >(tee -- "$stdoutF"; echo >sync.pipe) \
        2> >(tee -- "$stderrF"; echo >sync.pipe)
read line < sync.pipe; read line < sync.pipe
Use a different temporary file name for each command instead of reusing $stdoutF and $stderrF, and enforce that the temporary file is always newly created.
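For example (a sketch; mktemp creates a guaranteed-fresh empty file), each test run could allocate its own files, so stale output from an earlier command can never be mistaken for current output:
stdoutF=$(mktemp) || exit 1
stderrF=$(mktemp) || exit 1
diff -r "$source" "$target" > >(tee -- "$stdoutF") 2> >(tee -- "$stderrF" >&2)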
Give up on process substitution and use pipes instead.
{ { command | tee -- "$stdoutF" 1>&3; } 2>&1 \
| tee -- "$stderrF" 1>&2; } 3>&1
If you need the command's return status, bash puts it in ${PIPESTATUS[0]}.
{ { command | tee -- "$stdoutF" 1>&3; exit ${PIPESTATUS[0]}; } 2>&1 \
| tee -- "$stderrF" 1>&2; } 3>&1
if [ ${PIPESTATUS[0]} -ne 0 ]; then echo command failed; fi
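Applied to the question's diff test (a sketch reusing its variable names), this removes the race entirely, because bash waits for every member of a foreground pipeline before executing the next statement:
{ { diff -r "$source" "$target" | tee -- "$stdoutF" 1>&3; } 2>&1 \
  | tee -- "$stderrF" 1>&2; } 3>&1
assertNull 'Unexpected output to stdout' "$(<"$stdoutF")"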

I sometimes put a guard:
: > >(sleep 1; echo a; touch guard) \
  && while true; do
       [ -f "guard" ] && { rm guard; break; }
       sleep 0.2
     done

To answer your last question: insert a sleep 5 or whatnot in place of sync.

Related

Bash use of zenity with console redirection

In an effort to create more manageable scripts that each write their own output to a single location (via 'exec > file'), is there a better solution than the one below for combining stdout redirection with zenity (which in this use relies on piped stdout)?
parent.sh:
#!/bin/bash
exec >> /var/log/parent.out
( true; sh child.sh ) | zenity --progress --pulsate --auto-close --text='Executing child.sh'
[[ "$?" != "0" ]] && exit 1
...
child.sh:
#!/bin/bash
exec >> /var/log/child.out
echo 'Now doing child.sh things..'
...
When doing something like:
sh child.sh | zenity --progress --pulsate --auto-close --text='Executing child.sh'
zenity never receives stdout from child.sh since it is being redirected from within child.sh. Even though it seems to be a bit of a hack, is using a subshell containing a 'true' + execution of child.sh acceptable? Or is there a better way to manage stdout?
I get that 'tee' is acceptable to use in this scenario, though I would rather not have to write out child.sh's logfile location each time I want to execute child.sh.
Your redirection exec > stdout.txt will lead to an error:
$ exec > stdout.txt
$ echo hello
$ cat stdout.txt
cat: stdout.txt: input file is output file
You need an intermediary file descriptor.
$ exec 3> stdout.txt
$ echo hello >&3
$ cat stdout.txt
hello

What is the difference between using process substitution vs. a pipe?

I came across an example of using the tee utility in the tee info page:
wget -O - http://example.com/dvd.iso | tee >(sha1sum > dvd.sha1) > dvd.iso
I looked up the >(...) syntax and found something called "process substitution". From what I understand, it makes a process look like a file that another process could write/append its output to. (Please correct me if I'm wrong on that point.)
How is this different from a pipe (|)? I see a pipe being used in the above example. Is it just a precedence issue, or is there some other difference?
There's no benefit here, as the line could equally well have been written like this:
wget -O - http://example.com/dvd.iso | tee dvd.iso | sha1sum > dvd.sha1
The differences start to appear when you need to pipe to/from multiple programs, because these can't be expressed purely with |. Feel free to try:
# Calculate 2+ checksums while also writing the file
wget -O - http://example.com/dvd.iso | tee >(sha1sum > dvd.sha1) >(md5sum > dvd.md5) > dvd.iso
# Accept input from two 'sort' processes at the same time
comm -12 <(sort file1) <(sort file2)
They're also useful in certain cases where you for any reason can't or don't want to use pipelines:
# Start logging all error messages to a file as well as the terminal
# Pipes don't work because bash doesn't support it in this context
exec 2> >(tee log.txt)
ls doesntexist
# Sum a column of numbers
# Pipes don't work because they create a subshell
sum=0
while IFS= read -r num; do (( sum+=num )); done < <(curl http://example.com/list.txt)
echo "$sum"
# apt-get something with a generated config file
# Pipes don't work because we want stdin available for user input
apt-get install -c <(sed -e "s/%USER%/$USER/g" template.conf) mysql-server
Another major difference is the propagation of return values / exit codes (I'll use simpler commands to illustrate):
Pipe:
$ ls -l /notthere | tee listing.txt
ls: cannot access '/notthere': No such file or directory
$ echo $?
0
-> exit code of tee is propagated
Process substitution:
$ ls -l /notthere > >(tee listing.txt)
ls: cannot access '/notthere': No such file or directory
$ echo $?
2
-> exit code of ls is propagated
There are of course several methods to work around this (e.g. set -o pipefail, variable PIPESTATUS), but I think it's worth mentioning since this is the default behavior.
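For reference, a quick sketch of both work-arounds:
# pipefail: the pipeline returns the rightmost non-zero exit code
set -o pipefail
ls -l /notthere | tee listing.txt
echo $?                  # now prints 2 instead of 0
set +o pipefail
# PIPESTATUS: per-command exit codes of the last pipeline
ls -l /notthere | tee listing.txt
echo "${PIPESTATUS[0]}"  # 2, the exit code of ls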
Another rather subtle but potentially annoying difference lies in subprocess termination (best illustrated using commands that produce lots of output):
Pipe:
#!/usr/bin/env bash
tar --create --file /tmp/etc-backup.tar --verbose --directory /etc . 2>&1 | tee /tmp/etc-backup.log
retval=${PIPESTATUS[0]}
(( ${retval} == 0 )) && echo -e "\n*** SUCCESS ***\n" || echo -e "\n*** FAILURE (EXIT CODE: ${retval}) ***\n"
-> after the line containing the pipe construct, all commands of the pipe have already terminated (otherwise PIPESTATUS could not contain their respective exit codes)
Process substitution:
#!/usr/bin/env bash
tar --create --file /tmp/etc-backup.tar --verbose --directory /etc . &> >(tee /tmp/etc-backup.log)
retval=$?
(( ${retval} == 0 )) && echo -e "\n*** SUCCESS ***\n" || echo -e "\n*** FAILURE (EXIT CODE: ${retval}) ***\n"
-> after the line containing the process substitution, the command within >(...), i.e. tee in this example, may still be running, potentially causing desynchronized console output (SUCCESS / FAILURE message gets mixed in with still flowing tar output) [*]
[*] Can be reproduced on the framebuffer console, but does not seem to affect GUI terminals like KDE's Konsole (likely due to different buffering strategies).
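If that lingering tee is a problem, note that since bash 4.4 the PID of the last process substitution is available in $!, so the script can explicitly wait for it (a sketch):
tar --create --file /tmp/etc-backup.tar --verbose --directory /etc . &> >(tee /tmp/etc-backup.log)
retval=$?
wait $!   # bash >= 4.4: block until the substituted tee process has exited
(( retval == 0 )) && echo -e "\n*** SUCCESS ***\n" || echo -e "\n*** FAILURE (EXIT CODE: ${retval}) ***\n"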

How can I conditionally copy output to a file without repeating echo/printf statements? [duplicate]

I know how to redirect stdout to a file:
exec > foo.log
echo test
this will put the 'test' into the foo.log file.
Now I want to redirect the output into the log file AND keep it on stdout
i.e. it can be done trivially from outside the script:
script | tee foo.log
but I want to declare it within the script itself
I tried
exec | tee foo.log
but it didn't work.
#!/usr/bin/env bash
# Redirect stdout ( > ) into a named pipe ( >() ) running "tee"
exec > >(tee -i logfile.txt)
# Without this, only stdout would be captured - i.e. your
# log file would not contain any error messages.
# SEE (and upvote) the answer by Adam Spiers, which keeps STDERR
# as a separate stream - I did not want to steal from him by simply
# adding his answer to mine.
exec 2>&1
echo "foo"
echo "bar" >&2
Note that this is bash, not sh. If you invoke the script with sh myscript.sh, you will get an error along the lines of syntax error near unexpected token '>'.
If you are working with signal traps, you might want to use the tee -i option to avoid disruption of the output if a signal occurs. (Thanks to JamesThomasMoon1979 for the comment.)
Tools that change their output depending on whether they write to a pipe or a terminal (ls using colors and columnized output, for example) will detect the above construct as meaning that they output to a pipe.
There are options to enforce the colorizing / columnizing (e.g. ls -C --color=always). Note that this will result in the color codes being written to the logfile as well, making it less readable.
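If you want color on the terminal but a clean log, one option (a sketch, assuming GNU sed) is to strip the escape sequences from the file afterwards:
ls -C --color=always | tee listing.log
sed -i 's/\x1b\[[0-9;]*m//g' listing.log   # remove ANSI color codes from the saved copy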
The accepted answer does not preserve STDERR as a separate file descriptor. That means
./script.sh >/dev/null
will not output bar to the terminal, only to the logfile, and
./script.sh 2>/dev/null
will output both foo and bar to the terminal. Clearly that's not
the behaviour a normal user is likely to expect. This can be
fixed by using two separate tee processes both appending to the same
log file:
#!/bin/bash
# See (and upvote) the comment by JamesThomasMoon1979
# explaining the use of the -i option to tee.
exec > >(tee -ia foo.log)
exec 2> >(tee -ia foo.log >&2)
echo "foo"
echo "bar" >&2
(Note that the above does not initially truncate the log file - if you want that behaviour you should add
>foo.log
to the top of the script.)
The POSIX.1-2008 specification of tee(1) requires that output is unbuffered, i.e. not even line-buffered, so in this case it is possible that STDOUT and STDERR could end up on the same line of foo.log; however that could also happen on the terminal, so the log file will be a faithful reflection of what could be seen on the terminal, if not an exact mirror of it. If you want the STDOUT lines cleanly separated from the STDERR lines, consider using two log files, possibly with date stamp prefixes on each line to allow chronological reassembly later on.
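For example, a sketch of the two-log-file variant with per-line date stamps (ts is from moreutils, mentioned in another answer below):
exec > >(ts '%F %T' | tee -a stdout.log)
exec 2> >(ts '%F %T' | tee -a stderr.log >&2)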
Solution for busybox, macOS bash, and non-bash shells
The accepted answer is certainly the best choice for bash. I'm working in a Busybox environment without access to bash, and it does not understand the exec > >(tee log.txt) syntax. It also does not do exec >$PIPE properly, trying to create an ordinary file with the same name as the named pipe, which fails and hangs.
Hopefully this will be useful to someone else who doesn't have bash.
Also, for anyone using a named pipe, it is safe to rm $PIPE, because that unlinks the pipe from the VFS, but the processes that use it still maintain a reference count on it until they are finished.
Note the use of $* is not necessarily safe.
#!/bin/sh
if [ "$SELF_LOGGING" != "1" ]
then
# The parent process will enter this branch and set up logging
# Create a named piped for logging the child's output
PIPE=tmp.fifo
mkfifo $PIPE
# Launch the child process with stdout redirected to the named pipe
SELF_LOGGING=1 sh $0 $* >$PIPE &
# Save PID of child process
PID=$!
# Launch tee in a separate process
tee logfile <$PIPE &
# Unlink $PIPE because the parent process no longer needs it
rm $PIPE
# Wait for child process, which is running the rest of this script
wait $PID
# Return the error code from the child process
exit $?
fi
# The rest of the script goes here
Inside your script file, put all of the commands within parentheses, like this:
(
    echo start
    ls -l
    echo end
) | tee foo.log
Easy way to make a bash script log to syslog. The script output is available both through /var/log/syslog and through stderr. syslog will add useful metadata, including timestamps.
Add this line at the top:
exec &> >(logger -t myscript -s)
Alternatively, send the log to a separate file:
exec &> >(ts |tee -a /tmp/myscript.output >&2 )
This requires moreutils (for the ts command, which adds timestamps).
Using the accepted answer, my script kept returning early (right after 'exec > >(tee ...)'), leaving the rest of the script running in the background. As I couldn't get that solution to work my way, I found another workaround:
# Logging setup
logfile=mylogfile
mkfifo ${logfile}.pipe
tee < ${logfile}.pipe $logfile &
exec &> ${logfile}.pipe
rm ${logfile}.pipe
# Rest of my script
This makes the script's output go from its process, through the pipe, into the backgrounded 'tee' process, which logs everything to disk and to the original stdout of the script.
Note that 'exec &>' redirects both stdout and stderr; we could redirect them separately if we liked, or change to 'exec >' if we just want stdout.
Even though the pipe is removed from the file system at the beginning of the script, it will continue to function until the processes finish. We just can't reference it by file name after the rm line.
Bash 4 has a coproc command which establishes a named pipe to a command and allows you to communicate through it.
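A sketch of what that could look like (the LOGGER name is illustrative; requires bash >= 4):
exec 3>&1                                   # keep a handle on the original stdout
coproc LOGGER { tee -a logfile.txt >&3; }   # tee echoes to the real stdout via fd 3
exec 1>&${LOGGER[1]} 2>&1                   # route the script's output into the coproc
echo "this line reaches both the terminal and logfile.txt"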
Can't say I'm comfortable with any of the solutions based on exec. I prefer to use tee directly, so I make the script call itself with tee when requested:
# my script:
check_tee_output()
{
    # Copy (append) stdout and stderr to the log file if TEE is unset or true.
    if [[ -z $TEE || "$TEE" == true ]]; then
        echo '-------------------------------------------' >> log.txt
        echo '***' $(date) $0 "$@" >> log.txt
        TEE=false $0 "$@" 2>&1 | tee --append log.txt
        exit ${PIPESTATUS[0]}   # propagate the script's status, not tee's
    fi
}
check_tee_output "$@"
rest of my script
This allows you to do this:
your_script.sh args # tee
TEE=true your_script.sh args # tee
TEE=false your_script.sh args # don't tee
export TEE=false
your_script.sh args # tee
You can customize this, e.g. make TEE=false the default instead, make TEE hold the log file name instead, etc. I guess this solution is similar to jbarlow's, but simpler; maybe mine has limitations that I have not come across yet.
Neither of these is a perfect solution, but here are a couple of things you could try:
exec >foo.log
tail -f foo.log &
# rest of your script
or
PIPE=tmp.fifo
mkfifo $PIPE
tee foo.log <$PIPE &   # start the reader first; opening the fifo for writing blocks until then
exec >$PIPE
# rest of your script
rm $PIPE
The second one would leave a pipe file sitting around if something goes wrong with your script, which may or may not be a problem (i.e. maybe you could rm it in the parent shell afterwards).
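One way to guard against that (a sketch) is to register a trap so the fifo is removed even on abnormal exit:
PIPE=tmp.fifo
mkfifo "$PIPE"
trap 'rm -f "$PIPE"' EXIT   # clean up the fifo no matter how the script ends
tee foo.log < "$PIPE" &
exec > "$PIPE"
# rest of your script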

How do I get both STDOUT and STDERR to go to the terminal and a log file?

I have a script which will be run interactively by non-technical users. The script writes status updates to STDOUT so that the user can be sure that the script is running OK.
I want both STDOUT and STDERR redirected to the terminal (so that the user can see that the script is working as well as see if there was a problem). I also want both streams redirected to a log file.
I've seen a bunch of solutions on the net. Some don't work and others are horribly complicated. I've developed a workable solution (which I'll enter as an answer), but it's kludgy.
The perfect solution would be a single line of code that could be incorporated into the beginning of any script that sends both streams to both the terminal and a log file.
EDIT: Redirecting STDERR to STDOUT and piping the result to tee works, but it depends on the users remembering to redirect and pipe the output. I want the logging to be fool-proof and automatic (which is why I'd like to be able to embed the solution into the script itself.)
Use "tee" to redirect to a file and the screen. Depending on the shell you use, you first have to redirect stderr to stdout using
./a.out 2>&1 | tee output
or
./a.out |& tee output
There is also a command called "script" that will capture everything that goes to the screen to a file. You start it by typing "script", then do whatever it is you want to capture, then hit Ctrl-D to close the typescript file. It works regardless of whether your shell is csh, sh, bash or ksh.
Also, since you have indicated that these are your own sh scripts that you can modify, you can do the redirection internally by surrounding the whole script with braces or brackets, like
#!/bin/sh
{
... whatever you had in your script before
} 2>&1 | tee output.file
Approaching half a decade later...
I believe this is the "perfect solution" sought by the OP.
Here's a one liner you can add to the top of your Bash script:
exec > >(tee -a "$HOME/logfile") 2>&1
Here's a small script demonstrating its use:
#!/usr/bin/env bash
exec > >(tee -a "$HOME/logfile") 2>&1
# Test redirection of STDOUT
echo test_stdout
# Test redirection of STDERR
ls test_stderr___this_file_does_not_exist
(Note: This only works with Bash. It will not work with /bin/sh.)
Adapted from here; the original did not, from what I can tell, catch STDERR in the logfile. Fixed with a note from here.
The Pattern
the_cmd 1> >(tee stdout.txt ) 2> >(tee stderr.txt >&2 )
This redirects both stdout and stderr separately, and it sends separate copies of stdout and stderr to the caller (which might be your terminal).
In zsh, it will not proceed to the next statement until the tees have finished.
In bash, you may find that the final few lines of output appear after whatever statement comes next.
In either case, the right bits go to the right places.
Explanation
Here's a script (stored in ./example):
#! /usr/bin/env bash
the_cmd()
{
echo out;
1>&2 echo err;
}
the_cmd 1> >(tee stdout.txt ) 2> >(tee stderr.txt >&2 )
Here's a session:
$ foo=$(./example)
err
$ echo $foo
out
$ cat stdout.txt
out
$ cat stderr.txt
err
Here's how it works:
Both tee processes are started, and their stdins are assigned to file descriptors. Because they're enclosed in process substitutions, the paths to those file descriptors are substituted into the calling command, so now it looks something like this:
the_cmd 1> /proc/self/fd/13 2> /proc/self/fd/14
the_cmd runs, writing stdout to the first file descriptor, and stderr to the second one.
In the bash case, once the_cmd finishes, the following statement happens immediately (if your terminal is the caller, then you will see your prompt appear).
In the zsh case, once the_cmd finishes, the shell waits for both of the tee processes to finish before moving on. More on this here.
The first tee process, which is reading from the_cmd's stdout, writes a copy of that stdout back to the caller because that's what tee does. Its outputs are not redirected, so they make it back to the caller unchanged.
The second tee process has its stdout redirected to the caller's stderr (which is good, because its stdin is reading from the_cmd's stderr). So when it writes to its stdout, those bits go to the caller's stderr.
This keeps stderr separate from stdout both in the files and in the command's output.
If the first tee writes any errors, they'll show up in both the stderr file and in the command's stderr; if the second tee writes any errors, they'll show up only in the terminal's stderr.
To redirect stderr to stdout, append this to your command: 2>&1
For outputting to the terminal and logging to a file, you should use tee.
Both together would look like this:
mycommand 2>&1 | tee mylogfile.log
EDIT: For embedding into your script you would do the same. So your script
#!/bin/sh
whatever1
whatever2
...
whatever3
would end up as
#!/bin/sh
( whatever1
whatever2
...
whatever3 ) 2>&1 | tee mylogfile.log
EDIT:
I see I got derailed and ended up answering a different question from the one asked. The answer to the real question is at the bottom of Paul Tomblin's answer. (If you want to enhance that solution to redirect stdout and stderr separately for some reason, you could use the technique I describe here.)
I've been wanting an answer that preserves the distinction between stdout and stderr.
Unfortunately all of the answers given so far that preserve that distinction
are race-prone: they risk programs seeing incomplete input, as I pointed out in comments.
I think I finally found an answer that preserves the distinction,
is not race prone, and isn't terribly fiddly either.
First building block: to swap stdout and stderr:
my_command 3>&1 1>&2 2>&3-
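A quick way to convince yourself the swap works (a sketch):
{ echo out; echo err >&2; } 3>&1 1>&2 2>&3- | sed 's/^/piped: /'
# "piped: err" arrives via the pipe on stdout; "out" now goes to stderr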
Second building block: if we wanted to filter (e.g. tee) only stderr,
we could accomplish that by swapping stdout&stderr, filtering, and then swapping back:
{ my_command 3>&1 1>&2 2>&3- | stderr_filter;} 3>&1 1>&2 2>&3-
Now the rest is easy: we can add a stdout filter, either at the beginning:
{ { my_command | stdout_filter;} 3>&1 1>&2 2>&3- | stderr_filter;} 3>&1 1>&2 2>&3-
or at the end:
{ my_command 3>&1 1>&2 2>&3- | stderr_filter;} 3>&1 1>&2 2>&3- | stdout_filter
To convince myself that both of the above commands work, I used the following:
alias my_command='{ echo "to stdout"; echo "to stderr" >&2;}'
alias stdout_filter='{ sleep 1; sed -u "s/^/teed stdout: /" | tee stdout.txt;}'
alias stderr_filter='{ sleep 2; sed -u "s/^/teed stderr: /" | tee stderr.txt;}'
Output is:
...(1 second pause)...
teed stdout: to stdout
...(another 1 second pause)...
teed stderr: to stderr
and my prompt comes back immediately after the "teed stderr: to stderr", as expected.
Footnote about zsh:
The above solution works in bash (and maybe some other shells, I'm not sure), but it doesn't work in zsh. There are two reasons it fails in zsh:
the syntax 2>&3- isn't understood by zsh; that has to be rewritten
as 2>&3 3>&-
in zsh (unlike other shells), if you redirect a file descriptor
that's already open, in some cases (I don't completely understand how it decides) it does a built-in tee-like behavior instead. To avoid this, you have to close each fd prior to
redirecting it.
So, for example, my second solution has to be rewritten for zsh as {my_command 3>&1 1>&- 1>&2 2>&- 2>&3 3>&- | stderr_filter;} 3>&1 1>&- 1>&2 2>&- 2>&3 3>&- | stdout_filter (which works in bash too, but is awfully verbose).
On the other hand, you can take advantage of zsh's mysterious built-in implicit teeing to get a much shorter solution for zsh, which doesn't run tee at all:
my_command >&1 >stdout.txt 2>&2 2>stderr.txt
(I wouldn't have guessed from the docs I found that the >&1 and 2>&2 are the thing that trigger zsh's implicit teeing; I found that out by trial-and-error.)
Use the tee program and dup stderr to stdout.
program 2>&1 | tee logfile
Use the script command in your script (man 1 script)
Create a wrapper shell script (2 lines) that runs your real script under script(1) and then exits.
Part 1: wrap.sh
#!/bin/sh
script -c './realscript.sh'
exit
Part 2: realscript.sh
#!/bin/sh
echo 'Output'
Result:
~: sh wrap.sh
Script started, file is typescript
Output
Script done, file is typescript
~: cat typescript
Script started on fr. 12. des. 2008 kl. 18.07 +0100
Output
Script done on fr. 12. des. 2008 kl. 18.07 +0100
~:
I created a script called "RunScript.sh". The contents of this script is:
${APP_HOME}/${1}.sh ${2} ${3} ${4} ${5} ${6} 2>&1 | tee -a ${APP_HOME}/${1}.log
I call it like this:
./RunScript.sh ScriptToRun Param1 Param2 Param3 ...
This works, but it requires the application's scripts to be run via an external script. It's a bit kludgy.
A year later, here's an old bash script for logging anything. For example,
teelog make ... logs to a generated log name (and see the trick for logging nested makes too.)
#!/bin/bash
me=teelog
Version="2008-10-9 oct denis-bz"
Help() {
cat <<!
$me anycommand args ...
logs the output of "anycommand ..." as well as displaying it on the screen,
by running
anycommand args ... 2>&1 | tee `day`-command-args.log
That is, stdout and stderr go to both the screen, and to a log file.
(The Unix "tee" command is named after "T" pipe fittings, 1 in -> 2 out;
see http://en.wikipedia.org/wiki/Tee_(command) ).
The default log file name is made up from "command" and all the "args":
$me cmd -opt dir/file logs to `day`-cmd--opt-file.log .
To log to xx.log instead, either export log=xx.log or
$me log=xx.log cmd ...
If "logdir" is set, logs are put in that directory, which must exist.
An old xx.log is moved to /tmp/\$USER-xx.log .
The log file has a header like
# from: command args ...
# run: date pwd etc.
to show what was run; see "From" in this file.
Called as "Log" (ln -s $me Log), Log anycommand ... logs to a file:
command args ... > `day`-command-args.log
and tees stderr to both the log file and the terminal -- bash only.
Some commands that prompt for input from the console, such as a password,
don't prompt if they "| tee"; you can only type ahead, carefully.
To log all "make" s, including nested ones like
cd dir1; \$(MAKE)
cd dir2; \$(MAKE)
...
export MAKE="$me make"
!
# See also: output logging in screen(1).
exit 1
}
#-------------------------------------------------------------------------------
# bzutil.sh denisbz may2008 --
day() { # 30mar, 3mar
/bin/date +%e%h | tr '[A-Z]' '[a-z]' | tr -d ' '
}
edate() { # 19 May 2008 15:56
echo `/bin/date "+%e %h %Y %H:%M"`
}
From() { # header # from: $* # run: date pwd ...
case `uname` in Darwin )
mac=" mac `sw_vers -productVersion`"
esac
cut -c -200 <<!
${comment-#} from: $@
${comment-#} run: `edate` in $PWD `uname -n` $mac `arch`
!
# mac $PWD is pwd -L not -P real
}
# log name: day-args*.log, change this if you like --
logfilename() {
log=`day`
[[ $1 == "sudo" ]] && shift
for arg
do
log="$log-${arg##*/}" # basename
(( ${#log} >= 100 )) && break # max len 100
done
# no blanks etc in logfilename please, tr them to "-"
echo $logdir/` echo "$log".log | tr -C '.:+=[:alnum:]_\n' - `
}
#-------------------------------------------------------------------------------
case "$1" in
-v* | --v* )
echo "$0 version: $Version"
exit 1 ;;
"" | -* )
Help
esac
# scan log= etc --
while [[ $1 == [a-zA-Z_]*=* ]]; do
export "$1"
shift
done
: ${logdir=.}
[[ -w $logdir ]] || {
echo >&2 "error: $me: can't write in logdir $logdir"
exit 1
}
: ${log=` logfilename "$@" `}
[[ -f $log ]] &&
/bin/mv "$log" "/tmp/$USER-${log##*/}"
case ${0##*/} in # basename
log | Log ) # both to log, stderr to caller's stderr too --
{
From "$#"
"$#"
} > $log 2> >(tee /dev/stderr) # bash only
# see http://wooledge.org:8000/BashFAQ 47, stderr to a pipe
;;
* )
#-------------------------------------------------------------------------------
{
From "$#" # header: from ... date pwd etc.
"$#" 2>&1 # run the cmd with stderr and stdout both to the log
} | tee $log
# mac tee buffers stdout ?
esac
This question does not seem to have been gracefully solved yet. Every time I search "how to output to stdout and stderr at the same time", Google directs me to this post.
Today, I finally found a simple and effective way to solve almost all needs of this kind.
The essential idea is the tee command, which can print to multiple outputs at the same time, and the Linux-specific /proc/self/fd/{1,2,...}, which represent stdout, stderr, ...
print stdin to both stdout and stderr
tee /proc/self/fd/2
print stdin to both stdout and stderr, and file
tee /proc/self/fd/2 file
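For example (a sketch; /proc/self/fd is Linux-specific, and /dev/stderr is a common equivalent):
echo hello | tee /proc/self/fd/2 copy.txt
# "hello" shows up on both stdout and stderr, and is also saved in copy.txt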
Hope this is helpful.
Here is a solution that works for bash by redirection, combining the solutions of kvantour, MatrixManAtYrService, and Jason Sydes:
#!/bin/bash
exec 1> >(tee x.log) 2> >(tee x.err >&2)
echo "test for log"
echo "test for err" 1>&2
Save the script above as x.sh. After running ./x.sh, x.log includes only the stdout, while x.err includes only the stderr.

Append text to stderr redirects in bash

Right now I'm using exec to redirect stderr to an error log with
exec 2>> ${errorLog}
The only downside is that I have to start each run with a timestamp since exec just pushes the text straight into the log file. Is there a way to redirect stderr but allow me to append text to it, such as a time stamp?
This is very interesting. I've asked a guy who knows bash quite well, and he told me to do it this way:
foo() { while IFS='' read -r line; do echo "$(date) $line" >> file.txt; done; };
First, that creates a function that reads one line of raw input from stdin; the assignment to IFS keeps it from stripping blanks. Having read one line, it outputs it with the appropriate data prepended. Then you have to tell bash to redirect stderr into that function:
exec 2> >(foo)
Everything you write to stderr will now go through the foo function. Note that when you do this in an interactive shell, you won't see the prompt anymore, because it's printed to stderr, and the read in foo is line-buffered :)
You could simply use:
exec 1> >( sed "s/^/$(date '+[%F %T]'): /" | tee -a "$LOGFILE" ) 2>&1
This will not completely solve your problem of the prompt not being shown (it will show after a short time, but not in real time, since the pipe buffers some data...), but it will display the output 1:1 on stdout as well as in the file. Note also that the $(date ...) inside the sed expression is expanded only once, when the exec line runs, so every line carries the run's start timestamp rather than a per-line time.
The only problem I could not solve is doing this from a function, since that opens a subshell where the exec is useless for the main program...
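If per-line timestamps are wanted instead, a variant (a sketch) stamps each line as it arrives rather than baking one date into the sed expression:
exec 1> >(while IFS= read -r line; do
    printf '%s %s\n' "$(date '+[%F %T]')" "$line"
done | tee -a "$LOGFILE") 2>&1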
This example redirects stdout and stderr without losing the original stdout and stderr. Errors in the stdout handler are also logged to the stderr handler. The file descriptors are saved in variables and closed in the child processes; Bash takes care that no collisions occur.
#! /bin/bash
stamp ()
{
local LINE
while IFS='' read -r LINE; do
echo "$(date '+%Y-%m-%d %H:%M:%S,%N %z') $$ $LINE"
done
}
exec {STDOUT}>&1
exec {STDERR}>&2
exec 2> >(exec {STDOUT}>&-; exec {STDERR}>&-; exec &>> stderr.log; stamp)
exec > >(exec {STDOUT}>&-; exec {STDERR}>&-; exec >> stdout.log; stamp)
for n in $(seq 3); do
echo loop $n >&$STDOUT
echo o$n
echo e$n >&2
done
This requires a current Bash version, but thanks to Shellshock one can rely on that nowadays.
cat q23123 2> tmp_file; cat tmp_file | sed -e "s/^/$(date '+[%F %T]'): /g" >> output.log; rm -f tmp_file
