How to log a shell script's output from within the script - shell

I have a ksh93 script (but the question is not ksh related).
Currently, I run my script with something like:
./script 2>&1 | tee logfile
I'm wondering what I should add to my script to get exactly the same result (screen output and a logfile containing both stdout and stderr).
Of course, I want to avoid adding '| tee logfile' to each echo/print I do.
Of course, one way to do that would be to wrap my script in another one that simply runs './script 2>&1 | tee logfile', but I was wondering whether this could be done inside the script itself.

If the current contents of your script are:
command1 arg1
command2 arg2
You can wrap that lot (inside the script) like this:
{
command1 arg1
command2 arg2
} 2>&1 | tee logfile
The { to } code is now a single unit of I/O redirection within the script; the redirection at the end applies to all the enclosed commands. The braces themselves do not create a subshell, so with a plain redirection (say, } > logfile 2>&1) any variables set in the commands remain available to the script afterwards. Be aware, though, that once the group is piped into tee, most shells run it in a subshell anyway, so variable assignments made inside will not survive the pipe.
The { and } are slightly peculiar syntactically; in particular, } must be preceded by a semicolon or newline.
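As a self-contained sketch of the pattern (the logfile name is arbitrary):

```shell
#!/bin/sh
# One redirection at the end of the braces covers every command inside.
{
    echo "step 1"
    echo "step 2 went wrong" >&2    # stderr is captured as well
} 2>&1 | tee logfile
```

Both lines appear on screen and end up in logfile.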

One common technique that is somewhat fragile is:
#!/bin/sh
test -z "$NOEXEC" && { NOEXEC=1 exec "$0" "$@" 2>&1 | tee logfile; exit; }
...
This discards the script's own exit status and exits with whatever tee returns. That may or may not be a problem, and in some cases it is even the desired behavior.
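If the lost exit status matters and you can rely on bash, a hedged variant is to drop the exec and forward the status via bash's PIPESTATUS array. A sketch (the temp file, logfile name, and exit code 3 are purely illustrative):

```shell
#!/bin/sh
# Relaunch ourselves through tee, but keep the script's own exit status
# by reading bash's PIPESTATUS instead of using exec.
script=$(mktemp)
cat > "$script" <<'EOF'
#!/bin/bash
if [ -z "$NOEXEC" ]; then
    NOEXEC=1 "$0" "$@" 2>&1 | tee logfile
    exit "${PIPESTATUS[0]}"   # the relaunched script's status, not tee's
fi
echo "doing real work"
exit 3
EOF
chmod +x "$script"
rc=0
"$script" || rc=$?
echo "script exited with $rc"
rm -f "$script" logfile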

Just place your code in a function and call it.
your_func(){
#your code
}
your_func "$@" 2>&1 | tee logfile

Related

Execute command if another is successful and log stdout and stderr to file in bash

I have two commands, let's call them command1 and command2.
What I need to do is:
have command 2 execute only if command 1 is successful
have stdout and stderr of both commands redirected to a file, let's call it log.txt
How would I write it in bash in a way that's simple to understand later?
I guess that you want
( command1 && command2 ) >& /tmp/log.txt
the >& is a bashism (it also exists in zsh) that redirects both stdout and stderr
the parentheses create a subshell by grouping the commands.
(to put the whole thing in the background, add a & at the end of the line, after the final .txt)
Wrap {}'s around a block of code to redirect all of its I/O. You can stack up all the code you want handled collectively, or keep it short and sweet.
Use && to execute a following command only if the previous command succeeded.
( Use || to execute a following command only if the previous command failed. )
This lets you set up if/then/else structures without much additional syntax.
{ cmd1 && cmd2; } >log.txt 2>&1
But if you want it to be really easy to understand later, maybe even to someone less familiar with bash, you can collect all the output from an actual if/then/elif/else structure too -
if cmd1 # if cmd1 succeeds
then cmd2 # then run cmd2
fi >log.txt 2>&1 # 2>&1 sends stderr where stdout went
Don't underestimate the value of keeping it simple...or of adding comments.

Trick an application into thinking it's a pipe, not interactive

I'm looking for an opposite of this:
Trick an application into thinking its stdin is interactive, not a pipe
I'd like to get the output of a command on stdout, but make it think it's writing into a pipe.
The usual solution is | cat, but I have the additional requirement that this be cross-platform (i.e. sh, not bash) and return a valid exit code if the command fails. Normally I would use pipefail, but this isn't available everywhere.
I've tried various incantations of stty but haven't been successful.
You could always use a named pipe:
mkfifo tmp.pipe
# Reader runs in background
cat tmp.pipe &
# Producer in foreground
your_command > tmp.pipe
command_rtn=$?
rm tmp.pipe
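A slightly more defensive sketch of the same idea, with a wait so the background reader finishes before the pipe is removed (the echo stands in for your_command):

```shell
#!/bin/sh
# Named pipe with explicit cleanup: wait for the background reader
# before deleting the pipe so no output is lost.
dir=$(mktemp -d)
mkfifo "$dir/pipe"

cat "$dir/pipe" &            # reader: the command sees a pipe, not a tty
reader=$!

echo "hello from a pipe" > "$dir/pipe"   # stands in for your_command
command_rtn=$?

wait "$reader"               # let the reader drain the pipe
rm -r "$dir"
echo "command returned $command_rtn"
```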
Alternately, if you don't need the output in realtime and the output is reasonably small:
output=$(your_command)
command_rtn=$?
echo "${output}"
Or you can write the exit status to a file doing something terrible like:
{ your_command; echo $? > rtn_file; } | cat
command_rtn=$(cat rtn_file)
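A hedged variant of that last trick that uses mktemp and a trap for cleanup (false stands in for a failing command):

```shell
#!/bin/sh
# Same status-file trick, with mktemp and a trap so the status file is
# cleaned up even if the script is interrupted.
rtn_file=$(mktemp)
trap 'rm -f "$rtn_file"' EXIT

{ false; echo $? > "$rtn_file"; } | cat   # false stands in for your_command
command_rtn=$(cat "$rtn_file")
echo "command returned $command_rtn"
```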

Filter stderr AND get initial return code

Within a shell script I must run a command for which I need to determine the return code, but it turns out the output of the command goes to stderr AND also includes the user's password (a parameter to the command, unfortunately; bad, I know).
I would at least like to filter the password from being displayed back.
cmd ${OPTIONS}
RETURNCODE=$?
gives me the return code of the command
cmd ${OPTIONS} 3>&1 1>&2 2>&3 | sed "s:${PASSWD}:******:"
RETURNCODE=$?
Successfully filters the PASSWD, but the return code is always 0 - that of the sed, not of the initial command.
Any tricks ?
There are several techniques. In bash, you can check the array PIPESTATUS. For a portable solution, you can do things like:
RETURNCODE=$({ { cmd $OPTIONS 3>&1 1>&2 2>&3; echo $? >&4; } | sed ... >&2; } 4>&1 )
This has the nice side-effect of retaining the behavior of cmd, and the output of sed goes to stderr in the same way that the output of cmd does. (Whether or not that is actually desirable is a different question!)
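To see the construct in action, here is a sketch with a stand-in command (fake_cmd, PASSWD=hunter2, and the exit code 2 are illustrative, not from the original question):

```shell
#!/bin/sh
# Swap stdout/stderr for the command so its stderr flows through sed,
# and smuggle the command's exit status out on fd 4.
PASSWD=hunter2
fake_cmd() {
    echo "authenticating with $PASSWD" >&2
    return 2
}

RETURNCODE=$(
  {
    { fake_cmd 3>&1 1>&2 2>&3; echo $? >&4; } |
      sed "s:${PASSWD}:******:" >&2
  } 4>&1
)
echo "fake_cmd returned $RETURNCODE"
```

The masked "authenticating with ******" line goes to stderr, just as the raw message would have.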

How to use stdout and stderr io-redirection to get sane error/warning messages output from a program?

I have a program that outputs to stdout and stderr but doesn't use them in the correct way: some errors go to stdout, some go to stderr, non-error output goes to stderr, and it prints way too much info on stdout. To fix this I want to make a pipeline to:
Save all output of $cmd (from both stderr and stdout) to a file $logfile (don't print it to screen).
Filter out all warning and error messages on stderr and stdout (from warning|error to empty line) and colorize only "error" words (redirect output to stderr).
Save output of step 2 to a file $logfile:r.stderr.
Exit with the correct exit code from the command.
So far I have this:
#!/bin/zsh
# using zsh 4.2.0
setopt no_multios
# Don't error out if sed or grep don't find a match:
alias -g grep_err_warn="(sed -n '/error\|warning/I,/^$/p' || true)"
alias -g color_err="(grep --color -i -C 1000 error 1>&2 || true)"
alias -g filter='tee $logfile | grep_err_warn | tee $logfile:r.stderr | color_err'
# use {} around command to avoid possible race conditions:
{ eval $cmd } 2>&1 | filter
exit $pipestatus[1]
I've tried many things but can't get it to work. I've read "From Bash to Z Shell", many posts, etc. My problems currently are:
Only stdin goes into the filter
Note: the $cmd is a shell script that calls a binary with a /usr/bin/time -p prefix. This seems to cause issues with pipelines, which is why I'm wrapping the command in {…} so that all the output goes into the pipe.
I don't have zsh available.
I did notice that your {..}'d statement is not correct.
In bash, you need a semicolon (or a newline) before the closing `}'.
When I added that in bash, I could prove to my satisfaction that stderr was being redirected to stdout.
Try
{ eval $cmd ; } 2>&1 | filter
# ----------^
Also, you wrote
Save all output of $cmd (from stderr
and stdout) to a file $logfile
I don't see any mention of $logfile in your code.
You should be able to get all output into the logfile (while losing the specificity of the stderr stream) with
yourCommand 2>&1 | tee ${logFile} | ....
I hope this helps.
Don't use aliases in scripts, use functions (global aliases are especially asking for trouble). Not that you actually need functions here. You also don't need || true (unless you're running under set -e, in which case you should turn it off here). Other than that, your script looks OK; what is it choking on?
{ eval $cmd } 2>&1 |
tee $logfile |
sed -n '/error\|warning/I,/^$/p' |
tee $logfile:r.stderr |
grep --color -i -C 1000 error 1>&2
exit $pipestatus[1]
I'm also not sure what you meant by the sed expression; I don't quite understand your requirement 2.
The original post was mostly correct, except for an optimization suggested by Gilles (turning off set -e so the || true's are not needed).
#!/bin/zsh
# using zsh 4.2.0
setopt no_multios
#setopt no_errexit # set -e # don't turn this on
{ eval $cmd } 2>&1 |
tee $logfile |
sed -n '/error\|warning/I,/^$/p' |
tee $logfile:r.stderr |
grep --color -i -C 1000 error 1>&2
exit $pipestatus[1]
The part that confused me was that the mixing of stdout and stderr led to them being interleaved, and the sed -n '/error\|warning/I,/^$/p' (which prints from an error or warning to the next empty line) was printing out a lot more than expected, which made it seem like the command wasn't working.

piping output through sed but retain exit status [duplicate]

This question already has answers here:
Pipe output and capture exit status in Bash
(16 answers)
Closed 9 years ago.
I pipe the output of a long-running build process through sed for syntax-highlighting, implemented as a wrapper around "mvn".
Furthermore, I have a "monitor" script that notifies me on the desktop when the build is finished. The monitor script checks the exit status of its argument and reports "Success" or "Failure".
By piping the maven output through sed, the exit status is always "ok", even when the build fails.
How can I pipe the correct exit status through sed as well?
Are there alternatives ?
Maybe the PIPESTATUS variable can help.
If you are using Bash, there's the set -o pipefail option, but since it's bash-dependent it's not portable and won't work from a crontab, unless you wrap the whole thing in a bash invocation (a bad solution).
http://bclary.com/blog/2006/07/20/pipefail-testing-pipeline-exit-codes/
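A minimal illustration of the difference pipefail makes (bash):

```shell
#!/bin/bash
# Without pipefail the pipeline reports sed's status; with it, false's.
false | sed 's/a/b/'
no_pipefail=$?                     # 0: sed (the last command) succeeded

set -o pipefail
pipefail_rc=0
false | sed 's/a/b/' || pipefail_rc=$?
echo "without pipefail: $no_pipefail, with pipefail: $pipefail_rc"
```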
This is a well known pain in the rear. If you are using bash (and probably many other modern sh variants), you can access the PIPESTATUS array to get the return value of a program earlier in the pipe. (Typically, the return value of the pipe is the return value of the last program in the pipe.) If you are using a shell that doesn't have PIPESTATUS (or if you want portability), you can do something like this:
#!/bin/sh
# keep a copy of stdout on fd 3 so the redirections below have
# somewhere to send sed's output
exec 3>&1
# run 'echo foo | false | sed s/f/t/', recording the status
# of false in RV
eval $( { { echo foo | false; printf RV=$? >&4; } |
sed s/f/t/ >&3; } 4>&1; )
echo RV=$RV
# run 'echo foo | cat | sed s/f/t/', recording the status
# of cat in RV
eval $( { { echo foo | cat; printf RV=$? >&4; } |
sed s/f/t/ >&3; } 4>&1; )
echo RV=$RV
In each case, RV will contain the return value of the false and the cat, respectively.
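For comparison, the bash PIPESTATUS array mentioned above can be used like this (copy it immediately, since the very next command overwrites it):

```shell
#!/bin/bash
# PIPESTATUS holds one exit status per pipeline element, left to right.
false | echo foo | sed 's/f/t/'
rv=("${PIPESTATUS[@]}")   # copy at once; any later command resets it
echo "false=${rv[0]} echo=${rv[1]} sed=${rv[2]}"
```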
Bastil, because the pipe doesn't care about the exit status, you can only know whether sed exits sanely or not. I would enhance the sed script (or perhaps consider using a 3-liner Perl script) to exit with a failure status if the expected text isn't found, something like in pseudocode:
read($stdin)
if blank
exit(1) // output was blank, or on $stderr
else
regular expression substitution here
end
// natural exit success here
You could do it as a Perl one-liner, and the same can be done in a sed script (but not in a sed one-liner, as far as I know)
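As a sketch of that pseudocode in awk rather than Perl (the f-to-t substitution is just an example; the point is exiting nonzero when there was no input to substitute):

```shell
#!/bin/sh
# Do the substitution, but exit nonzero if there was no input at all
# (mirroring the pseudocode: blank input is a failure).
filter() { awk '{ gsub(/f/, "t"); print; seen = 1 } END { exit !seen }'; }

printf 'foo\n' | filter
with_input=$?                 # 0: a line was seen and substituted

no_input=0
printf '' | filter || no_input=$?
echo "with input: $with_input, empty input: $no_input"
```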
Perhaps you could use a named pipe? Here's an example:
FIFODIR=`mktemp -d`
FIFO=$FIFODIR/fifo
mkfifo $FIFO
cat $FIFO & # An arbitrary pipeline
if false > $FIFO
then
echo "Build succeeded"
else
echo "Build failed" # This line WILL execute
fi
rm -r $FIFODIR
A week later I got a solution:
Originally I wanted to do
monitor "mvn blah | sed -e SomeHighlightRegEx"
where monitor would react to the exit status of sed (instead of mvn).
It's easier to do
monitor "mvn blah" | sed -e SomeHighlightRegEx
Note that this pipes the output of monitor through sed, while the monitor script reacts to the status of mvn.
Thanks anyway for the other ideas.
