Redirect an executable's stdout/stderr but not invocation-time errors from the shell - bash

I have a simple bash script that launches an executable in the background and redirects stdout + stderr to a log file:
#!/usr/bin/bash
myexec >& logfile &
It works. However, output from myexec isn't the only thing that gets redirected: any messages that bash emits while attempting to invoke myexec are also going to logfile. To wit, if bash doesn't find myexec, I don't get to see the myexec: No such file or directory error because it went straight to logfile instead of to the terminal. This behavior annoys me because I end up not knowing whether the script succeeded in starting up myexec.
It occurs to me that the script could just test for the existence of myexec before trying to invoke it, but I'm wondering whether there isn't a way to do the redirection itself in such a way that only myexec's output, and not the shell's, gets redirected.

It's not possible to separate the outputs in the way the OP describes. As Charles Duffy explains in his comment, the system call that opens (or fails to open) the executable myexec takes place after Bash has forked a new process, at which point all of the I/O redirection has already been set up. There is, however, a workaround that suffices for the purpose stated in the OP, namely, "knowing whether the script succeeded in starting up myexec":
myexec > logfile 2>&1 && echo "ok" >&2 || echo "nope." >&2
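If you also want the failure to be loud before any redirection happens, the existence test the OP mentions works well; here is a minimal sketch (myexec and logfile are the names from the question):
#!/usr/bin/bash
# Verify that myexec can be found before setting up any redirection.
if ! command -v myexec >/dev/null 2>&1; then
    echo "error: myexec not found or not executable" >&2   # this reaches the terminal
    exit 127
fi
myexec >& logfile &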

Related

bash hangs when exec > > is called and an additional bash script is executed with output to stdin [duplicate]

I have a shell script which writes all output to a logfile
and the terminal. This part works fine, but if I execute the script,
a new shell prompt only appears if I press Enter. Why is that, and how do I fix it?
#!/bin/bash
exec > >(tee logfile)
echo "output"
First, when I'm testing this, there always is a new shell prompt, it's just that sometimes the string output comes after it, so the prompt isn't last. Did you happen to overlook it? If so, there seems to be a race where the shell prints the prompt before the tee in the background completes.
Unfortunately, that cannot be fixed by waiting in the shell for tee; see this question on unix.stackexchange. Fragile workarounds aside, the easiest way to solve this that I see is to put your whole script inside a list:
{
your-code-here
} | tee logfile
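For completeness, a minimal sketch of the original script rewritten that way (the 2>&1 is an addition so stderr is logged too; drop it to match the original behaviour exactly):
#!/bin/bash
{
    echo "output"
    # ...the rest of your script goes here...
} 2>&1 | tee logfile
Because tee now runs in the foreground, the shell waits for it to finish, so the prompt can no longer race ahead of the last line of output.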
If I run the following script (suppressing the newline from the echo), I see the prompt, but not "output". The string is still written to the file.
#!/bin/bash
exec > >(tee logfile)
echo -n "output"
What I suspect is this: you have three different file descriptors trying to write to the same file (that is, the terminal): standard output of the shell, standard error of the shell, and the standard output of tee. The shell writes synchronously: first the echo to standard output, then the prompt to standard error, so the terminal is able to sequence them correctly. However, the third file descriptor is written to asynchronously by tee, so there is a race condition. I don't quite understand how my modification affects the race, but it appears to upset some balance, allowing the prompt to be written at a different time and appear on the screen. (I expect output buffering to play a part in this).
You might also try running your script after running the script command, which will log everything written to the terminal; if you wade through all the control characters in the file, you may notice the prompt in the file just prior to the output written by tee. In support of my race condition theory, I'll note that after running the script a few times, it was no longer displaying "abnormal" behavior; my shell prompt was displayed as expected after the string "output", so there is definitely some non-deterministic element to this situation.
@chepner's answer provides great background information.
Here's a workaround - works on Ubuntu 12.04 (Linux 3.2.0) and on OS X 10.9.1:
#!/bin/bash
exec > >(tee logfile)
echo "output"
# WORKAROUND - place LAST in your script.
# Execute an executable (as opposed to a builtin) that outputs *something*
# to make the prompt reappear normally.
# In this case we use the printf *executable* to output an *empty string*.
# Use of `$ec` is to ensure that the script's actual exit code is passed through.
ec=$?; $(which printf) ''; exit $ec
Alternatives:
@user2719058's answer shows a simple alternative: wrapping the entire script body in a group command ({ ... }) and piping it to tee logfile.
An external solution, as @chepner has already hinted at, is to use the script utility to create a "transcript" of your script's output in addition to displaying it:
script -qc yourScript /dev/null > logfile # Linux syntax
This, however, will also capture stderr output; if you wanted to avoid that, use:
script -qc 'yourScript 2>/dev/null' /dev/null > logfile
Note, however, that this will suppress stderr output altogether.
As others have noted, it's not that there's no prompt printed -- it's that the last of the output written by tee can come after the prompt, making the prompt no longer visible.
If you have bash 4.4 or newer, you can wait for your tee process to exit, like so:
#!/usr/bin/env bash
case $BASH_VERSION in ''|[0-3].*|4.[0-3]) echo "ERROR: Bash 4.4+ needed" >&2; exit 1;; esac
exec {orig_stdout}>&1 {orig_stderr}>&2 # make a backup of the original stdout and stderr
exec > >(tee -a "_install_log"); tee_pid=$! # track PID of tee after starting it
cleanup() { # define a function we'll call during shutdown
  retval=$?
  exec >&$orig_stdout # Copy your original stdout back to FD 1, overwriting the pipe to tee
  exec 2>&$orig_stderr # If something overwrites stderr to also go through tee, fix that too
  wait "$tee_pid" # Now, wait until tee exits
  exit "$retval" # and complete exit with our original exit status
}
trap cleanup EXIT # configure the function above to be called during cleanup
echo "Writing something to stdout here"

Want to redirect the output of the nohup command [duplicate]

I have a problem with the nohup command.
When I run my job, I have a lot of data. The output nohup.out becomes too large and my process slows down. How can I run this command without getting nohup.out?
The nohup command only writes to nohup.out if the output would otherwise go to the terminal. If you have redirected the output of the command somewhere else - including /dev/null - that's where it goes instead.
nohup command >/dev/null 2>&1 # doesn't create nohup.out
Note that the >/dev/null 2>&1 sequence can be abbreviated to just >&/dev/null in most (but not all) shells.
If you're using nohup, that probably means you want to run the command in the background by putting another & on the end of the whole thing:
nohup command >/dev/null 2>&1 & # runs in background, still doesn't create nohup.out
On Linux, running a job with nohup automatically closes its input as well. On other systems, notably BSD and macOS, that is not the case, so when running in the background, you might want to close input manually. While closing input has no effect on the creation or not of nohup.out, it avoids another problem: if a background process tries to read anything from standard input, it will pause, waiting for you to bring it back to the foreground and type something. So the extra-safe version looks like this:
nohup command </dev/null >/dev/null 2>&1 & # completely detached from terminal
Note, however, that this does not prevent the command from accessing the terminal directly, nor does it remove it from your shell's process group. If you want to do the latter, and you are running bash, ksh, or zsh, you can do so by running disown with no argument as the next command. That will mean the background process is no longer associated with a shell "job" and will not have any signals forwarded to it from the shell. (A disowned process gets no signals forwarded to it automatically by its parent shell - but without nohup, it will still receive a HUP signal sent via other means, such as a manual kill command. A nohup'ed process ignores any and all HUP signals, no matter how they are sent.)
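Putting the pieces of this answer together (a sketch; command is a placeholder):
nohup command </dev/null >/dev/null 2>&1 &   # fully detached: no nohup.out, no terminal I/O
disown                                       # also drop it from the shell's job table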
Explanation:
In Unixy systems, every source of input or target of output has a number associated with it called a "file descriptor", or "fd" for short. Every running program ("process") has its own set of these, and when a new process starts up it has three of them already open: "standard input", which is fd 0, is open for the process to read from, while "standard output" (fd 1) and "standard error" (fd 2) are open for it to write to. If you just run a command in a terminal window, then by default, anything you type goes to its standard input, while both its standard output and standard error get sent to that window.
But you can ask the shell to change where any or all of those file descriptors point before launching the command; that's what the redirection (<, <<, >, >>) and pipe (|) operators do.
The pipe is the simplest of these... command1 | command2 arranges for the standard output of command1 to feed directly into the standard input of command2. This is a very handy arrangement that has led to a particular design pattern in UNIX tools (and explains the existence of standard error, which allows a program to send messages to the user even though its output is going into the next program in the pipeline). But you can only pipe standard output to standard input; you can't send any other file descriptors to a pipe without some juggling.
The redirection operators are friendlier in that they let you specify which file descriptor to redirect. So 0<infile reads standard input from the file named infile, while 2>>logfile appends standard error to the end of the file named logfile. If you don't specify a number, then input redirection defaults to fd 0 (< is the same as 0<), while output redirection defaults to fd 1 (> is the same as 1>).
Also, you can combine file descriptors together: 2>&1 means "send standard error wherever standard output is going". That means that you get a single stream of output that includes both standard out and standard error intermixed with no way to separate them anymore, but it also means that you can include standard error in a pipe.
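For instance, to filter a command's error messages together with its normal output (a sketch; some_command is a placeholder):
some_command 2>&1 | grep -i error   # stderr is merged into stdout, so grep sees both streams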
So the sequence >/dev/null 2>&1 means "send standard output to /dev/null" (which is a special device that just throws away whatever you write to it) "and then send standard error to wherever standard output is going" (which we just made sure was /dev/null). Basically, "throw away whatever this command writes to either file descriptor".
When nohup detects that neither its standard error nor output is attached to a terminal, it doesn't bother to create nohup.out, but assumes that the output is already redirected where the user wants it to go.
The /dev/null device works for input, too; if you run a command with </dev/null, then any attempt by that command to read from standard input will instantly encounter end-of-file. Note that the merge syntax won't have the same effect here; it only works to point a file descriptor to another one that's open in the same direction (input or output). The shell will let you do >/dev/null <&1, but that winds up creating a process with an input file descriptor open on an output stream, so instead of just hitting end-of-file, any read attempt will trigger a fatal "invalid file descriptor" error.
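A quick way to see the input side of this in action (a sketch):
read -r line </dev/null        # hits end-of-file immediately, reading nothing
echo "read exit status: $?"    # non-zero, because EOF arrived before any input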
nohup some_command > /dev/null 2>&1&
That's all you need to do!
Have you tried redirecting all three I/O streams:
nohup ./yourprogram > foo.out 2> foo.err < /dev/null &
You might want to use the detach program. You use it like nohup but it doesn't produce an output log unless you tell it to. Here is the man page:
NAME
detach - run a command after detaching from the terminal
SYNOPSIS
detach [options] [--] command [args]
Forks a new process, detaches it from the terminal, and executes
command with the specified arguments.
OPTIONS
detach recognizes a couple of options, which are discussed below. The
special option -- is used to signal that the rest of the arguments are
the command and args to be passed to it.
-e file
Connect file to the standard error of the command.
-f Run in the foreground (do not fork).
-i file
Connect file to the standard input of the command.
-o file
Connect file to the standard output of the command.
-p file
Write the pid of the detached process to file.
EXAMPLE
detach xterm
Start an xterm that will not be closed when the current shell exits.
AUTHOR
detach was written by Robbert Haarman. See http://inglorion.net/ for
contact information.
Note I have no affiliation with the author of the program. I'm only a satisfied user of the program.
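For example, based on the options listed in the man page above (a sketch; long_running_job and the file names are placeholders):
# Run a command detached from the terminal, logging stdout and stderr
# separately and recording the PID for later signalling:
detach -o job.out -e job.err -p job.pid -- long_running_job --some-arg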
The following command will let you run something in the background without creating nohup.out:
nohup command | tee &
In this way, you will be able to get console output while running a script on a remote server:
sudo bash -c "nohup /opt/viptel/viptel_bin/log.sh $* &> /dev/null" &
Redirecting the output of sudo causes sudo to re-ask for the password; thus an awkward mechanism is needed to do this variant.
If you have a bash shell on your Mac/Linux machine in front of you, try out the steps below to understand redirection in practice:
Create a two-line script called zz.sh:
#!/bin/bash
echo "Hello. This is a proper command"
junk_errorcommand
The echo command's output goes to the STDOUT file stream (file descriptor 1).
The error command's output goes to the STDERR file stream (file descriptor 2).
Currently, simply executing the script sends both STDOUT and STDERR to the screen.
./zz.sh
Now start with the standard redirection:
./zz.sh > zfile.txt
In the above, the "echo" output (STDOUT) goes into zfile.txt, whereas the "error" output (STDERR) is displayed on the screen.
The above is the same as:
./zz.sh 1> zfile.txt
Now you can try the opposite, and redirect the "error" STDERR into the file, while the STDOUT from the "echo" command goes to the screen:
./zz.sh 2> zfile.txt
Combining the above two, you get:
./zz.sh 1> zfile.txt 2>&1
Explanation:
FIRST, send STDOUT 1 to zfile.txt
THEN, send STDERR 2 to STDOUT 1 itself (using the &1 reference).
Therefore, both 1 and 2 go into the same file (zfile.txt).
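As a quick check of the combined form (a sketch, using the zz.sh from above):
./zz.sh 1> zfile.txt 2>&1   # nothing is printed to the screen
cat zfile.txt               # shows both the echo line and the "command not found" error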
Finally, you can wrap the whole thing in nohup ... & to run it in the background:
nohup ./zz.sh 1> zfile.txt 2>&1 &
You can run the command below:
nohup <your command> > <outputfile> 2>&1 &
e.g.
I have a nohup command inside a script:
./Runjob.sh > sparkConcuurent.out 2>&1

Unix redirection issue: </dev/null 1>&- 2>&- &

Unix redirection:
Recently I faced an issue where one of our scripts was using the command below to run itself in the background. The issue was that the script executed twice when it was started.
For example:
In the script I put an echo "Hello" to print to the log file. When the script executed, I saw in the log file that it was printed twice at the same time. Can anyone tell me what caused the script to execute twice?
nohup <runScript> </dev/null 1>&- 2>&- &
The original version of your question was slightly confusing. The subject line asks about (with command and argument inferred):
somecmd arg1 </dev/null 1>&- 2>&- &
The body of the question appeared to ask about:
nohup &- 2>&- &
which could reasonably be inferred to mean:
nohup somecmd arg1 &- 2>&- &
The edited version of your question is also confusing — though the change was just to indent the code fragment. The notation <runscript> is ill-chosen when you are asking about I/O redirections. I'm guessing that what you wrote as <runscript> is equivalent to me writing somecmd, rather than redirecting standard input from runscript plus an ill-formed output redirection. However, the revised
code does at least match the subject line:
nohup runScript </dev/null 1>&- 2>&- &
So, I'll ignore the &- notation (a previous version of this answer did not).
Notation </dev/null 1>&- 2>&- &
The first command line redirects standard input from /dev/null, closes both standard output and standard error, and executes the command in the background. Redirecting from /dev/null is good; closing standard output and standard error is not so good — programs are entitled to have those three file descriptors open, and that can be done by redirecting to /dev/null too:
somecmd arg1 </dev/null >/dev/null 2>&1 &
or:
somecmd arg1 </dev/null >/dev/null 2>/dev/null &
There is not much difference between these two.
Double running
There is nothing in any of the code that would account for the script being run twice, or the output appearing in a log file twice. Since you have not shown the script that was run, we cannot deduce any cause from that. On the whole, the charge would be 'operator error' — you managed to run the command twice. If you want us to look into that, you'll have to provide a reproducible script that:
Shows the script to be run.
Empties the log file.
Runs the script once with your chosen notation.
Shows that the log file contains two entries.
Without such a reproducible script, there's nothing anyone can do to help you.
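A minimal harness along those lines might look like this (purely a sketch; runScript and the "Hello" message are the OP's, everything else is hypothetical):
cat runScript                                # 1. show the script being run
: > logfile                                  # 2. empty the log file it writes to
nohup ./runScript </dev/null 1>&- 2>&- &     # 3. run it once, exactly as in the question
wait                                         #    let it finish
grep -c "Hello" logfile                      # 4. count how many times the message was logged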

Can I only show stdout/stderr in case of a trapped error in bash?

How I wish it to work:
When no errors are trapped by bash (i.e. nothing returns a non-zero exit code [unless overridden by || true]), be silent. Hide stdout and stderr.
When an error is trapped by bash, be verbose. Write stdout and stderr.
In my script below, only the captured stdout and stderr are missing.
#!/bin/bash
exec 5>&1 >/dev/null
exec 6>&2 2>/dev/null
error_handler() {
    local return_code="$?"
    local last_err="$BASH_COMMAND"
    local stdout= # How to read FD 5?
    local stderr= # How to read FD 6?
    exec 1>&5
    exec 2>&6
    echo "ERROR!
scriptname: $0
BASH_COMMAND: $last_err
\$?: $return_code
stdout: $stdout
stderr: $stderr
" 1>&2
    exit 1
}
trap "error_handler" ERR
echo "Some message..."
# Some command fails, i.e. return a non-zero exit code.
mkdir
I could probably redirect stdout/stderr to a temporary file and use cat to show it in case an error was trapped. It would be a bit better if that temporary file weren't required. Any ideas?
Credit:
This question was inspired by question How to undo exec > /dev/null in bash? and answer by Charles Duffy
Let's look at the I/O redirection carefully:
exec 5>&1 >/dev/null
exec 6>&2 2>/dev/null
We see that file descriptor 5 is a duplicate of the original standard output, but that standard output is going to /dev/null. Similarly, 6 is a duplicate of standard error, but standard error is going to /dev/null.
Now let's consider what happens when you run:
ls -l /dev/null /dev/not-actually/there
The ls command writes the output for /dev/null to /dev/null because that's where its standard output is directed. Similarly, it writes the error for the non-existent file /dev/not-actually/there to /dev/null because that's where its standard error is directed.
Thus, both the standard output and standard error of the command are irrevocably lost.
Given the expressed requirements, there isn't going to be a simple solution. Your best bet is probably to redirect both standard output and standard error to the same file (but be aware that the interleaving of error and normal output may be different because the output is a file). Alternatively, you can direct standard output and standard error to two separate files and show them when necessary.
Note that you will need to consider emptying the output file(s) after each command (letting the trap report the contents before the file(s) is/are emptied) so that you don't report the standard output or standard error of commands 1-9 when command 10 fails.
Doing this neatly and handling pipelines correctly, etc, is not trivial. I'm not sure whether to suggest a function that's passed the command and arguments (tricky for pipelines) or some other technique.
I've used the 'capture everything in one file' technique in cron-run scripts that mail the output when appropriate. It isn't wholly satisfactory, but it is a lot better than not having the error messages at all.
You can consider playing with expect and/or pseudo-ttys, but doing a good job will be really hard.
Your file descriptors 5 and 6 are write-only. There's no way for the shell to read its own output; bidirectional pipes are deadlocks waiting to happen even when it's not the same process on both ends.
I would go with the temp file idea.
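A minimal sketch of that temp-file approach (the saved descriptor numbers follow the question; mktemp and the failing mkdir are illustrative):
#!/bin/bash
log=$(mktemp) || exit 1

error_handler() {
    local return_code=$?
    local last_err=$BASH_COMMAND
    exec 1>&5 2>&6                 # restore the original stdout/stderr
    {
        echo "ERROR!"
        echo "scriptname:   $0"
        echo "BASH_COMMAND: $last_err"
        echo "\$?:           $return_code"
        echo "captured output:"
        cat "$log"                 # everything the script wrote before failing
    } >&2
    rm -f "$log"
    exit 1
}

exec 5>&1 6>&2                     # keep copies of the original stdout and stderr
exec >"$log" 2>&1                  # be silent: everything now goes to the temp file
trap error_handler ERR

echo "Some message..."
mkdir                              # fails, so the trap fires and replays the captured output

# Reached only if nothing failed: restore the descriptors and discard the log.
trap - ERR
exec 1>&5 2>&6
rm -f "$log"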
The actual paths to I/O files are hidden from the shells, and other applications; you need a program that knows how to dig for the details. lsof may come to your rescue, if your system supports it. Try adding the following in your error routine:
local name0="$(basename "$0")";
lsof -p$$ -d5,6 2>/dev/null |
egrep "^${name0:0:5}[^ ]* +[^ ]+ +[^ ]+ +[56][a-zA-Z]* "
This will require some tweaking to get it to be robust (short program names, program names with spaces in them, ...) and more friendly ("stdout" for 5 and "stderr" for 6, say, replacing the program name in column 1). But when you tweak, beware of the huge variations in output formats you may encounter, not just between systems but between different file types on the same system. I leave this as an exercise for the student.

Getting stdout+stderr in a log file

I am trying to implement something which my logic says can't be done. But I need your help to understand why it can't be.
Short Version of Question
Is it possible to log the stdout+stderr of a script in csh without using file redirection (>& or tee)?
Detailed Explanation of Question
I have a requirement with a csh script (script1) where I am not allowed to use file redirection. (I will give the reason in a while.)
So that means I can't use something like
echo just checking >& logfile
hence I can't use this or tee to create my logfile.
I also have a another script (script2) which is a top level script.
I can either run script1 in standalone mode or through script2.
In either case I need to create a log (stdout+stderr) of script1 in logfile.
There are two possible (but incomplete) options for that:
Write this line in script2:
./script1 >& logfile
But then I can't log script1 in logfile when script1 is run in standalone mode.
Another option is to use file redirections in script1 like this:
echo test starting >> logfile
echo test over
In this case there are two disadvantages:
1) "test over" prints before "test starting", i.e. the order in which the command logs appear is not certain.
2) It's tedious to put >>& after every statement if I intend to cover the whole script.
Now, is there any other way I can get what I need? That is, can I run script1 without file redirection and still log its stdout+stderr in logfile?
You mention csh, so this may not help you. On the other hand, it may motivate you to stop using csh for scripts, a task for which it is notoriously inappropriate. In sh, you can simply do:
#!/bin/sh
exec > logfile 2>&1
echo foo
to write foo (and the output and errors of all subsequent commands) to logfile.
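To tie this back to the original setup: put the exec line at the top of script1 itself, and it will log its own stdout+stderr whether it is run standalone or from script2, which then just runs ./script1 with no redirection of its own. A sketch:
#!/bin/sh
# script1: logs its own stdout and stderr, however it is invoked
exec > logfile 2>&1

echo "test starting"
# ...the real work of script1 goes here...
echo "test over"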

Resources