I am trying to implement something which my logic says can't be done, but I need your help to understand why it can't be.
Short Version of Question
Is it possible to log stdout+stderr of a script in csh without using file redirection (>& or tee)?
Detailed Explanation of Question
I have a requirement with a csh script (script1) where I am not allowed to use file redirection (I will give the reason in a while).
So that means I can't use something like
echo just checking >& logfile
Hence I can't use this (or tee) to create my logfile.
I also have another script (script2), which is a top-level script.
I can either run script1 in standalone mode or through script2.
In either case I need to create a log (stdout+stderr) of script1 in logfile.
There are two possible (but incomplete) options for that.
The first is to write this line in script2:
./script1 >& logfile
But then I can't log script1 in logfile when script1 is run in standalone mode.
Another option is to use file redirections in script1 like this:
echo test starting >> logfile
echo test over
In this case there are two disadvantages:
1) "test over" prints before "test starting", i.e. the order in which the command logs appear is not certain.
2) It's tedious to put >>& after every statement if I intend to cover the whole script.
Now, is there any other way I can get what I need? That is, can I run script1 without file redirection and still log its stdout+stderr in logfile?
You mention csh, so this may not help you. On the other hand, it may motivate you to stop using csh for scripts, a task for which it is notoriously inappropriate. In sh, you can simply do:
#!/bin/sh
exec > logfile 2>&1
echo foo
This writes foo (and the output and errors of all subsequent commands) to the logfile.
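For instance, a minimal sketch of how that plays out (the commands below are just illustrative):
#!/bin/sh
exec > logfile 2>&1    # from here on, stdout and stderr of every command go to logfile
echo "test starting"   # captured in logfile
ls /nonexistent        # the error message is captured as well
echo "test over"       # order is preserved, unlike per-command redirection
This covers the whole script with a single line, which addresses both disadvantages mentioned in the question.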
Related
I have a simple bash script that launches an executable in the background and redirects stdout + stderr to a log file:
#!/usr/bin/bash
myexec >& logfile &
It works. However, output from myexec isn't the only thing that gets redirected: any messages that bash emits while attempting to invoke myexec are also going to logfile. To wit, if bash doesn't find myexec, I don't get to see the myexec: No such file or directory error because it went straight to logfile instead of to the terminal. This behavior annoys me because I end up not knowing whether the script succeeded in starting up myexec.
It occurs to me that the script could just test for the existence of myexec before trying to invoke it, but I'm wondering whether there isn't a way to do the redirection itself in such a way that only myexec's output, and not the shell's, gets redirected.
It's not possible to separate the outputs in the way the OP describes. As Charles Duffy explains in his comment, the system call that opens (or fails to open) the executable myexec takes place after Bash has forked a new process, at which point all of the I/O redirection has already been set up. There is, however, a workaround that suffices for the purpose stated in the OP, namely, "knowing whether the script succeeded in starting up myexec":
myexec > logfile 2>&1 && echo "ok" >&2 || echo "nope." >&2
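The question also suggests testing for the executable before invoking it; as a hedged sketch (the pre-flight check is an assumption, not part of the original script), that keeps the failure message on the terminal:
#!/usr/bin/bash
if ! command -v myexec >/dev/null 2>&1; then
    echo "myexec: not found" >&2    # this stays on the terminal, not in logfile
    exit 127
fi
myexec >& logfile &                 # only myexec's own output is redirected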
I have a shell script which writes all output to a logfile and to the terminal. This part works fine, but when I execute the script, a new shell prompt only appears if I press Enter. Why is that, and how do I fix it?
#!/bin/bash
exec > >(tee logfile)
echo "output"
First, when I'm testing this, there always is a new shell prompt; it's just that sometimes the string "output" comes after it, so the prompt isn't last. Did you happen to overlook it? If so, there seems to be a race where the shell prints the prompt before the tee in the background completes.
Unfortunately, that cannot be fixed by waiting in the shell for tee; see this question on unix.stackexchange. Fragile workarounds aside, the easiest way to solve this that I see is to put your whole script inside a list:
{
your-code-here
} | tee logfile
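Applied to the script from the question, that looks roughly like this (a minimal sketch):
#!/bin/bash
{
    echo "output"
    # any further commands go here; their stdout is logged and displayed
} | tee logfile
Because the script itself now waits for the pipeline to finish, the prompt only appears after tee has written its last line.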
If I run the following script (suppressing the newline from the echo), I see the prompt, but not "output". The string is still written to the file.
#!/bin/bash
exec > >(tee logfile)
echo -n "output"
What I suspect is this: you have three different file descriptors trying to write to the same file (that is, the terminal): standard output of the shell, standard error of the shell, and the standard output of tee. The shell writes synchronously: first the echo to standard output, then the prompt to standard error, so the terminal is able to sequence them correctly. However, the third file descriptor is written to asynchronously by tee, so there is a race condition. I don't quite understand how my modification affects the race, but it appears to upset some balance, allowing the prompt to be written at a different time and appear on the screen. (I expect output buffering to play a part in this).
You might also try running your script after running the script command, which will log everything written to the terminal; if you wade through all the control characters in the file, you may notice the prompt in the file just prior to the output written by tee. In support of my race condition theory, I'll note that after running the script a few times, it was no longer displaying "abnormal" behavior; my shell prompt was displayed as expected after the string "output", so there is definitely some non-deterministic element to this situation.
@chepner's answer provides great background information.
Here's a workaround - works on Ubuntu 12.04 (Linux 3.2.0) and on OS X 10.9.1:
#!/bin/bash
exec > >(tee logfile)
echo "output"
# WORKAROUND - place LAST in your script.
# Execute an executable (as opposed to a builtin) that outputs *something*
# to make the prompt reappear normally.
# In this case we use the printf *executable* to output an *empty string*.
# Use of `$ec` is to ensure that the script's actual exit code is passed through.
ec=$?; $(which printf) ''; exit $ec
Alternatives:
@user2719058's answer shows a simple alternative: wrapping the entire script body in a group command ({ ... }) and piping it to tee logfile.
An external solution, as @chepner has already hinted at, is to use the script utility to create a "transcript" of your script's output in addition to displaying it:
script -qc yourScript /dev/null > logfile # Linux syntax
This, however, will also capture stderr output; if you wanted to avoid that, use:
script -qc 'yourScript 2>/dev/null' /dev/null > logfile
Note, however, that this will suppress stderr output altogether.
As others have noted, it's not that there's no prompt printed -- it's that the last of the output written by tee can come after the prompt, making the prompt no longer visible.
If you have bash 4.4 or newer, you can wait for your tee process to exit, like so:
#!/usr/bin/env bash
case $BASH_VERSION in ''|[0-3].*|4.[0-3]|4.[0-3].*) echo "ERROR: Bash 4.4+ needed" >&2; exit 1;; esac
exec {orig_stdout}>&1 {orig_stderr}>&2 # make a backup of original stdout and stderr
exec > >(tee -a "_install_log"); tee_pid=$! # track PID of tee after starting it
cleanup() { # define a function we'll call during shutdown
retval=$?
exec >&$orig_stdout # Copy your original stdout back to FD 1, overwriting the pipe to tee
exec 2>&$orig_stderr # If something overwrites stderr to also go through tee, fix that too
wait "$tee_pid" # Now, wait until tee exits
exit "$retval" # and complete exit with our original exit status
}
trap cleanup EXIT # configure the function above to be called during cleanup
echo "Writing something to stdout here"
I'd like to write a .sh script that runs several scripts in the same directory one by one, without running them concurrently (e.g. while the first one is still executing, the second one doesn't start executing).
Could you tell me the command that could be written in front of a script's name to achieve this?
I've tried source but it gives the following message for every listed script
./outer_script.sh: source: not found
source is a non-standard extension introduced by bash. POSIX specifies that you must use the . command. Other than the name, they are identical.
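For illustration (helper.sh is a made-up name), the portable spelling is:
. ./helper.sh        # POSIX: runs helper.sh in the current shell, so its variables persist here
source ./helper.sh   # bash synonym; unavailable in plain /bin/sh, hence the "source: not found" error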
However, you probably don't want to source, because that is only supposed to be used when you need the script to be able to change the state of the script calling it. It is like a #include or import statement in other languages.
You would usually want to just run the script directly as a command, i.e. do not prefix it with source nor with any other command.
As a quick example of not using source:
for script in scripts/*; do
"$script"
done
If the above does not work, ensure that you've set the executable bit (chmod a+x) on the necessary scripts.
That is the normal behavior of a bash script, i.e. if you have three scripts:
script1.sh:
echo "starting"
./script2.sh
./script3.sh
echo "done"
script2.sh:
while [ 1 ]; do
echo "script2"
sleep 2
done
and script3.sh:
echo "script3"
The output is:
starting
script2
script2
script2
...
and script3.sh will never be executed, unless you modify script1.sh to be:
echo "starting"
./script2.sh &
./script3.sh &
echo "done"
in which case the output will be something like:
starting
done
script2
script3
script2
script2
...
So in this case I assume your second level scripts contain something that starts new processes.
Have you included the line #!/bin/bash in your outer_script? Some OSes don't use bash by default, and source is a bash command. Otherwise, just call the scripts using ./path/to/script.sh inside the outer_script.
I know this has been asked many times, but I can't find a suitable answer for my case.
I cron'd a backup script using rsync and would like to see all output, errors or not, from all the script commands. I must write the command inside the script itself, and I do not want to see the output in my shell.
I have been trying with no success. Below is part of the script.
#!/bin/bash
.....
BKLOG=/mnt/backup_error_$now.txt
# Log everything to log file
# something like
exec 2>&1 | tee $BKLOG
# OR
exec &> $BKLOG
I have been adding, at the beginning of the script, all kinds of exec | tee $BKLOG variants, adding &> or 2>&1 at various parts of the command line, but all failed. I either get an empty or an incomplete log file. I need to see in the log file what rsync has done, and the error if the script failed before syncing.
Thank you for help. My shell is zsh, so any solution in zsh is welcomed.
To redirect all stdout/stderr to a file, place these lines at the top of your script:
BKLOG=/mnt/backup_error_$now.txt
exec &> "$BKLOG"
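If the script may also run under a shell without &> (it is a bash/zsh extension), the equivalent long form is below; the $now variable and the rsync line are illustrative assumptions:
#!/bin/bash
now=$(date +%Y%m%d)                # assumed; the real script defines $now elsewhere
BKLOG=/mnt/backup_error_$now.txt
exec > "$BKLOG" 2>&1               # same effect as exec &> "$BKLOG", in POSIX-compatible form
rsync -av /source/ /destination/   # hypothetical rsync call; its stdout and stderr land in $BKLOG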
We have scripts of the following nature (in cron):
someScript.sh > /tmp/cronlog/somescript.$(date +%Y%m%d).log 2>&1
Now, is there a way by which, within someScript.sh, I can figure out what file the output has gone into?
The script sends an email with a summary. Within the email, I would also like to mention that details can be found in such-and-such output file.
I am aware of the construct if [ -t 1 ] to detect whether stdout is a terminal, but how do I get the output file name?
Note that I want this to be generic so that some one can change the output file in cron and the script does not need to be modified.
The simplest thing I could think of is this:
readlink -f /proc/$$/fd/1
$$ is the PID of the script (inside the script). On most unix systems, /proc/[pid] is the pseudo-directory containing info for process [pid].
/proc/[pid]/fd is a directory containing a list of symlinks for the open file-descriptors of the process. fd/0 is input, fd/1 is the output of the script, etc.
readlink then gives you the target file or tty if you don't redirect the output.
Of course, if you want to display it, you have to display it somewhere other than standard output, or it will be redirected! To debug, try standard error (2).
Various invocations give these results on my box (script.sh just calls readlink -f /proc/$$/fd/1 >&2):
# ./script.sh
/dev/pts/0
# ./script.sh > /var/tmp/foo
/var/tmp/foo
# ./script.sh | more
/proc/12132/fd/pipe:[916212]
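Inside someScript.sh this can be captured into a variable for later use in the summary email (OUTFILE is a made-up name):
OUTFILE=$(readlink -f /proc/$$/fd/1)   # e.g. /tmp/cronlog/somescript.20240101.log when run from the cron entry above
# ... do the actual work, then include "$OUTFILE" in the body of the summary email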
Rather than trying to find a hack (and a platform-dependent one at that), it's better to take a slightly different approach here.
Set your cron job like this:
someScript.sh /tmp/cronlog/somescript.$(date +%Y%m%d).log
i.e. without any > or 2>&1 (stdout/stderr stream redirections); just pass an argument with the desired logfile name.
Now inside someScript.sh redirect streams to your log file like this:
LOGFILE=$1
exec &> "${LOGFILE}"
And finally you can then message your clients that:
"output details could be found in ${LOGFILE}"