bash direct output to log file in addition to stdout

I am trying to log the changes made by my script to a log file. At a very high level, my script has functions and I want to be able to log that information to a file. I used "tee -a" but that messed up the functionality in a number of ways.
Is there a simple way to achieve this task?
Update: corrected a typo below
function1(){ ... }
function2(){ ... }
#main
function1 | tee -a /tmp/logfile
function2 | tee -a /tmp/logfile

(edited to reflect question edits)
You can incorporate the tee into the function definition:
function() { { ...<original function definition goes here>; } | tee -a output; }
so you don't need to invoke tee each time you call the function. Obviously, if the function modifies file descriptors, you will need to do a little more work.

Also, keep in mind that this changes the buffering. If commands called from within function1 have a tty for their stdout, they will probably line-buffer their output, but if their stdout is a pipe (which it is if you are piping to tee), the output will be block-buffered. This may be the root cause of the differences you are seeing.

Also, this only captures the output of one file descriptor. Perhaps you have commands writing to stderr. You will need to provide more details about the way the pipe to tee changes the behavior of the script.
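For example, a minimal sketch of that approach, reusing the names from the question (the echo lines are placeholders for the real function bodies, and the 2>&1 is an assumption added so that stderr gets logged as well):
function1() { { echo "doing step 1"; } 2>&1 | tee -a /tmp/logfile; }
function2() { { echo "doing step 2"; } 2>&1 | tee -a /tmp/logfile; }
#main
function1
function2
The same buffering caveat applies: everything inside the braces now writes to a pipe rather than a tty.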

Related

bash hangs when exec > > is called and an additional bash script is executed with output to stdin [duplicate]

I have a shell script which writes all output to a logfile
and the terminal; this part works fine, but if I execute the script,
a new shell prompt only appears if I press Enter. Why is that and how do I fix it?
#!/bin/bash
exec > >(tee logfile)
echo "output"
First, when I'm testing this, there always is a new shell prompt; it's just that sometimes the string "output" comes after it, so the prompt isn't last. Did you happen to overlook it? If so, there seems to be a race where the shell prints the prompt before the tee in the background completes.
Unfortunately, that cannot be fixed by waiting in the shell for tee; see this question on unix.stackexchange. Fragile workarounds aside, the easiest way to solve this that I see is to put your whole script inside a list:
{
your-code-here
} | tee logfile
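Applied to the two-line script from the question, that looks like this (a sketch):
#!/bin/bash
{
echo "output"
# rest of the script goes here
} | tee logfile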
If I run the following script (suppressing the newline from the echo), I see the prompt, but not "output". The string is still written to the file.
#!/bin/bash
exec > >(tee logfile)
echo -n "output"
What I suspect is this: you have three different file descriptors trying to write to the same file (that is, the terminal): standard output of the shell, standard error of the shell, and the standard output of tee. The shell writes synchronously: first the echo to standard output, then the prompt to standard error, so the terminal is able to sequence them correctly. However, the third file descriptor is written to asynchronously by tee, so there is a race condition. I don't quite understand how my modification affects the race, but it appears to upset some balance, allowing the prompt to be written at a different time and appear on the screen. (I expect output buffering to play a part in this).
You might also try running your script after running the script command, which will log everything written to the terminal; if you wade through all the control characters in the file, you may notice the prompt in the file just prior to the output written by tee. In support of my race condition theory, I'll note that after running the script a few times, it was no longer displaying "abnormal" behavior; my shell prompt was displayed as expected after the string "output", so there is definitely some non-deterministic element to this situation.
@chepner's answer provides great background information.
Here's a workaround - works on Ubuntu 12.04 (Linux 3.2.0) and on OS X 10.9.1:
#!/bin/bash
exec > >(tee logfile)
echo "output"
# WORKAROUND - place LAST in your script.
# Execute an executable (as opposed to a builtin) that outputs *something*
# to make the prompt reappear normally.
# In this case we use the printf *executable* to output an *empty string*.
# Use of `$ec` is to ensure that the script's actual exit code is passed through.
ec=$?; $(which printf) ''; exit $ec
Alternatives:
@user2719058's answer shows a simple alternative: wrapping the entire script body in a group command ({ ... }) and piping it to tee logfile.
An external solution, as @chepner has already hinted at, is to use the script utility to create a "transcript" of your script's output in addition to displaying it:
script -qc yourScript /dev/null > logfile # Linux syntax
This, however, will also capture stderr output; if you wanted to avoid that, use:
script -qc 'yourScript 2>/dev/null' /dev/null > logfile
Note, however, that this will suppress stderr output altogether.
As others have noted, it's not that there's no prompt printed -- it's that the last of the output written by tee can come after the prompt, making the prompt no longer visible.
If you have bash 4.4 or newer, you can wait for your tee process to exit, like so:
#!/usr/bin/env bash
case $BASH_VERSION in ''|[0-3].*|4.[0-3]) echo "ERROR: Bash 4.4+ needed" >&2; exit 1;; esac
exec {orig_stdout}>&1 {orig_stderr}>&2 # make a backup of the original stdout and stderr
exec > >(tee -a "_install_log"); tee_pid=$! # track PID of tee after starting it
cleanup() { # define a function we'll call during shutdown
retval=$?
exec >&$orig_stdout # Copy your original stdout back to FD 1, overwriting the pipe to tee
exec 2>&$orig_stderr # If something overwrites stderr to also go through tee, fix that too
wait "$tee_pid" # Now, wait until tee exits
exit "$retval" # and complete exit with our original exit status
}
trap cleanup EXIT # configure the function above to be called during cleanup
echo "Writing something to stdout here"

Shell-Script Logging

I want to implement a shell script that runs the command xyz and stores its output in a variable, but at the same time forwards the command's output to the shell script's stdout.
This is because I want to launch this script via launchd, let it automatically log the script's output, but also let the script push the individual command's output to the web. The script should not simply buffer the command's output and print it after it has run, but do so in real time.
Is something like this possible, and if so, how do you implement it?
Thanks
thel30n
You are looking for the command:
VAR=$(echo 'test' | tee /dev/tty)
test
echo $VAR
test
I believe there is no way to save the log in a shell variable and avoid buffering the command's output at the same time. An alternative is to save the log messages to a file using tee(1). For example:
LOGFILE=/path/to/logfile
run_and_log() {
"$@" | tee -a "$LOGFILE"
}
run_and_log xyz
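If you do want both at once, one possibility (a sketch, not something tested under launchd) is to combine the two: the command substitution fills the variable only after xyz exits, while the tee copies reach the terminal and the log file as the output is produced:
LOGFILE=/path/to/logfile
OUTPUT=$(xyz | tee /dev/tty | tee -a "$LOGFILE")
Note that /dev/tty only works when the script is attached to a terminal, which may not be the case under launchd.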

Bash: How to redirect stdin, stdout and stderr to a log file for an install log

I currently have a bunch of installer scripts which log stderr/stdout to a log file; that works well, but I need to also redirect stdin (the user responses) to the same log file. The install scripts sometimes call functions in a shared library (an include), which may also read user input. I thought about adding a custom read function, but this would require altering the shared library, so I wondered if there's a way to do this from the calling script.
At the moment the scripts are similar to this:
#!/usr/bin/bash
. ./libInstall
INSTALL_LOG="./install.log"
( (
echo "INFO: Installing..."
# Run some arbitrary commands...
# Read some input...
read ANSWER1
read ANSWER2
# Call function in libInstall which will prompt the user...
funcWhichAsksAQuestion ANSWER3
echo "INFO: Installation Complete"
) 2>&1 ) | tee -a "${INSTALL_LOG}"
If I change "( (" to reflect the line below I can tee off stdin to the log file:
cat - 2> /dev/null | tee -a ${INSTALL_LOG} | ( (
This works but requires 2 carriage returns once the script ends, presumably because the pipe is broken.
It's almost there, but I'd like it to work without having to press Enter twice at the end to get back to the shell prompt.
These scripts have to be fairly portable to work on RHEL >=5, AIX >=5.1, Solaris >=9 with the lowest bash version being v2.05 I believe.
Any ideas how I can achieve this?
Thanks
Why not just add 'echo "\n\n"' after your "installation complete" line? Granted, you'll have two extra lines in your log file, but those seem relatively harmless.
I believe you have to return twice because of how tee is implemented. It "uses" one return by itself, and the other two come from the 'read' calls (well, one read, one funcWhichAsksAQuestion).

Can I only show stdout/stderr in case of a trapped error in bash?

How I wish it to work:
When no errors are trapped by bash (if nothing returns a non-zero exit code, unless overridden by || true), be silent. Hide stdout and stderr.
When an error is trapped by bash, be verbose. Write stdout and stderr.
In my script below, only stdout and stderr are missing from the error report.
#!/bin/bash
exec 5>&1 >/dev/null
exec 6>&2 2>/dev/null
error_handler() {
local return_code="$?"
local last_err="$BASH_COMMAND"
local stdout= # How to read FD 5?
local stderr= # How to read FD 6?
exec 1>&5
exec 2>&6
echo "ERROR!
scriptname: $0
BASH_COMMAND: $last_err
\$?: $return_code
stdout: $stdout
stderr: $stderr
" 1>&2
exit 1
}
trap "error_handler" ERR
echo "Some message..."
# Some command fails, i.e. return a non-zero exit code.
mkdir
I could probably redirect stdout/stderr to a temporary file and use cat to show it in case an error was trapped. It would be a bit better if that temporary file weren't required. Any ideas?
Credit:
This question was inspired by the question "How to undo exec > /dev/null in bash?" and the answer by Charles Duffy.
Let's look at the I/O redirection carefully:
exec 5>&1 >/dev/null
exec 6>&2 2>/dev/null
We see that file descriptor 5 is a duplicate of the original standard output, but that standard output is going to /dev/null. Similarly, 6 is a duplicate of standard error, but standard error is going to /dev/null.
Now let's consider what happens when you run:
ls -l /dev/null /dev/not-actually/there
The ls command writes the output for /dev/null to /dev/null because that's where its standard output is directed. Similarly, it writes the error for the non-existent file /dev/not-actually/there to /dev/null because that's where its standard error is directed.
Thus, both the standard output and standard error of the command are irrevocably lost.
Given the expressed requirements, there isn't going to be a simple solution. Your best bet is probably to redirect both standard output and standard error to the same file (but be aware that the interleaving of error and normal output may be different because the output is a file). Alternatively, you can direct standard output and standard error to two separate files and show them when necessary.
Note that you will need to consider emptying the output file(s) after each command (letting the trap report the contents before the file(s) is/are emptied) so that you don't report the standard output or standard error of commands 1-9 when command 10 fails.
Doing this neatly and handling pipelines correctly, etc, is not trivial. I'm not sure whether to suggest a function that's passed the command and arguments (tricky for pipelines) or some other technique.
I've used the 'capture everything in one file' technique in cron-run scripts that mail the output when appropriate. It isn't wholly satisfactory, but it is a lot better than not having the error messages at all.
You can consider playing with expect and/or pseudo-ttys, but doing a good job will be really hard.
Your file descriptors 5 and 6 are write-only. There's no way for the shell to read its own output; bidirectional pipes are deadlocks waiting to happen even when it's not the same process on both ends.
I would go with the temp file idea.
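A minimal sketch of that temp-file variant, adapted from the script in the question (the mktemp call and the cleanup trap are assumptions):
#!/bin/bash
tmplog=$(mktemp)
exec 5>&1 6>&2 >"$tmplog" 2>&1
error_handler() {
local return_code="$?"
local last_err="$BASH_COMMAND"
exec 1>&5
exec 2>&6
echo "ERROR!
scriptname: $0
BASH_COMMAND: $last_err
\$?: $return_code
captured stdout/stderr:" 1>&2
cat "$tmplog" 1>&2
rm -f "$tmplog"
exit 1
}
trap "error_handler" ERR
trap 'rm -f "$tmplog"' EXIT
echo "Some message..."
# Some command fails, i.e. returns a non-zero exit code.
mkdir
This dumps everything captured so far; truncating the temporary file after each successful command (as discussed above) would be needed to report only the failing command's output.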
The actual paths to I/O files are hidden from the shells, and other applications; you need a program that knows how to dig for the details. lsof may come to your rescue, if your system supports it. Try adding the following in your error routine:
local name0="$(basename "$0")";
lsof -p$$ -d5,6 2>/dev/null |
egrep "^${name0:0:5}[^ ]* +[^ ]+ +[^ ]+ +[56][a-zA-Z]* "
This will require some tweaking to get it to be robust (short program names, program names with spaces in them, ...) and more friendly (say, printing "stdout" for FD 5 and "stderr" for FD 6, and replacing the program name in column 1). But when you tweak, beware the huge variations in output formats you may encounter, not just between systems but between different file types on the same system. I leave this as an exercise for the student.

Is there a way in a shell script to figure out where its output is redirected?

We have scripts of the following nature (in cron):
someScript.sh > /tmp/cronlog/somescript.$(date +%Y%m%d).log 2>&1
Now, is there a way by which, within someScript.sh, I can figure out what file the output has gone into?
The script sends an email with a summary; within that email, I would also like to mention that details can be found in such-and-such output file.
I am aware of the construct if [ -t 1 ] to detect whether stdout is a terminal, but how do I get the output file name?
Note that I want this to be generic so that some one can change the output file in cron and the script does not need to be modified.
The simplest thing I could think of is this:
readlink -f /proc/$$/fd/1
$$ is the PID of the script (inside the script). On most unix systems, /proc/[pid] is the pseudo-directory containing info for process [pid].
/proc/[pid]/fd is a directory containing a list of symlinks for the open file-descriptors of the process. fd/0 is input, fd/1 is the output of the script, etc.
readlink then gives you the target file or tty if you don't redirect the output.
Of course, if you want to display it, you have to display it somewhere other than standard output, or it will be redirected! To debug, try standard error (FD 2).
Various invocations give these results on my box (script.sh just calls readlink -f /proc/$$/fd/1 >&2):
# ./script.sh
/dev/pts/0
# ./script.sh > /var/tmp/foo
/var/tmp/foo
# ./script.sh | more
/proc/12132/fd/pipe:[916212]
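Used inside the script, it could look something like this (a sketch; the /proc trick is Linux-specific as noted, and the email step is only indicated by a comment):
#!/bin/bash
# Resolve where stdout currently points: the cron log file, a tty, or a pipe.
logdest=$(readlink -f /proc/$$/fd/1)
# ... do the actual work, writing output as usual ...
# ... then mention "details could be found in ${logdest}" in the summary email.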
Rather than trying to find a hack (and a platform-dependent one at that), it's better to take a slightly different approach here.
Set your cron job like this:
someScript.sh /tmp/cronlog/somescript.$(date +%Y%m%d).log
i.e. without any > or 2>&1 (stdout/stderr stream redirections), and just pass an argument with the desired logfile name.
Now inside someScript.sh redirect streams to your log file like this:
LOGFILE=$1
exec &>${LOGFILE}
And finally you can then message your clients that:
"output details could be found in ${LOGFILE}"
