My intent was to have all the output of my bash script displayed on the console and logged to a file.
Here is my script that works as expected.
#!/bin/bash
LOG_FILE="test_log.log"
touch $LOG_FILE
# output to console and to logfile
exec > >(tee $LOG_FILE) 2>&1
echo "Starting command ls"
ls -al
echo "End of script"
However I do not understand why it works that way.
I expected exec >>(tee $LOG_FILE) 2>&1 to work, but it fails, whereas exec >>$LOG_FILE 2>&1 does work.
I could not find the reason for the construction exec > >(command) in the Bash manual or in the Advanced Bash-Scripting Guide. Can you explain the logic behind it?
The >(tee $LOG_FILE) is an example of process substitution; you may wish to search for that term in the Advanced Bash-Scripting Guide and the Bash manual.
Using the syntax <(program) for capturing output and >(program) for feeding input, we can pass data just one record at a time. It is more powerful than command substitution (backticks, or $( )) because it substitutes a filename, not text. Therefore, anywhere a file is normally specified we can substitute a program's standard output or input (although process substitution on input is not all that common).
This is particularly useful where a program does not use standard streams for what you want.
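For example, here is a minimal sketch of both directions (the file names here are placeholders, not from the original question):

# <(...) lets diff read the output of two commands as if they were files;
# each substitution expands to a path such as /dev/fd/63.
diff <(sort fileA.txt) <(sort fileB.txt)
# >(...) feeds a process's input: everything written to the substituted
# path becomes the stdin of the gzip process.
ls -al > >(gzip > listing.gz)

Note the space in the second command: > >(...) is a redirection to a process substitution, while >>(...) would be a syntax error, as discussed next.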
Note that in your example you are missing a space: exec >>(tee $LOG_FILE) 2>&1 is wrong (you will get a syntax error). Rather,
exec > >(tee $LOG_FILE) 2>&1
is correct, that space is critical.
So the exec > part changes file descriptor 1 (the default), also known as stdout or standard output, to refer to "whatever comes next"; in this case that is the process substitution, although normally it would be a filename.
2>&1 redirects file descriptor 2 (stderr or standard error) to refer to the same place as file descriptor 1 (stdout). Important: if you omit the &, you end up with a file called 1 rather than a successful redirection.
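To see that pitfall concretely, here is a throwaway demo (run it in a scratch directory):

echo oops 2>1    # no &: creates a file literally named "1" and redirects stderr into it
echo oops 2>&1   # with &: duplicates stderr onto stdout, as intended
ls -l 1          # the stray file named "1" now sits in the current directory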
Once you have called the exec line from your script, you have changed the current process's standard output, so output from the commands that follow goes to that tee process instead of to the regular stdout.
Related
I'd like to know if there's a way in bash to have the current command statement printed to both stdout and a log file, in addition to the command's output. For example:
runme.sh:
# do some setup, e.g. create script.log
echo "Fully logged command"
Would write the following to stdout and to script.log:
+ echo 'Fully logged command'
Fully logged command
For example, if I use these lines early in the script:
set -x
exec > >(tee -ai script.log)
This shows the trace output from set -x on the terminal, but not in the log file.
I have done a bit of testing, and it appears that set -x prints its messages to stderr. This means that you need to redirect stderr to stdout and pipe stdout to tee.
So if you are doing this (note that the stdout redirection must come first, because redirections are processed from left to right):
set -x
exec > >(tee -ai output.log) 2>&1
... you are neatly getting everything that Bash executes in your log file as well, together with any output produced by any commands that you are executing.
But beware: any formatting that may be applied by your programs is lost.
As a side note, as has been explained in some answers here, pipes are created before redirections take effect. So when you redirect stderr to a stdout that is going into a pipe, stderr winds up in that pipe as well.
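To illustrate that last point, compare a plain pipeline (a throwaway example; the path is deliberately nonexistent so that ls produces an error):

ls /nonexistent 2>&1 | tee -a output.log   # the pipe to tee exists before 2>&1 is
                                           # processed, so the error message lands in
                                           # the log as well as on the terminal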
I'm developing a Bash script which invokes another Bash script that prints a line to stdout. That output is captured by the first script and used later. It works, but it has the downside that any other output printed by the second script will cause this part to behave unexpectedly, because there will be extra content.
main.sh
#!/bin/bash
# Invoke worker.sh and capture its standard output to stats
stats=$(./worker.sh --generate-stats)
echo "stats=$stats"
worker.sh
#!/bin/bash
[[ $1 == "--generate-stats" ]] && echo "cpu=90 mem=50 disk=15"
In this over-simplified example it's not a problem to use this construct, but as worker.sh grows in size and complexity it's hard to remember that no other command may print to stdout, and if someone else works on worker.sh without realizing this, the captured output can easily get fouled. So what is considered good practice for generating output in one script and consuming it in another?
I'm wondering if a fifo would be appropriate, or another file descriptor, or just a plain file. Or if exec should be used in this case, something like what is shown here https://www.tldp.org/LDP/abs/html/x17974.html:
#!/bin/bash
exec 6>&1 # Link file descriptor #6 with stdout.
# Saves stdout.
exec >&2 # stdout now goes to stderr
echo "Didn't know I shouldn't print to stdout"
exec 1>&6 6>&- # Restore stdout and close file descriptor #6.
[[ $1 == "--generate-stats" ]] && echo "cpu=90 mem=50 disk=15"
But I wouldn't want to use that if it's not considered good practice.
Many command-line utilities have quiet and verbose modes; it's generally considered good practice to send the most verbose output (debugging, tracing, etc.) to standard error anyway. However, it's common to have normal output formatted for human legibility (e.g. with table headings and column separators) and quiet-mode output be just the bare data for programmatic use. (For one example, compare docker images with docker images -q.) So that would be my recommendation: have worker.sh take a flag indicating whether its output is being consumed programmatically, and write it such that all of its output is sent via a function that checks that flag and filters appropriately.
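A minimal sketch of that recommendation (the --quiet flag and the chatter helper are my own inventions, not part of the original scripts):

#!/bin/bash
QUIET=0
[[ $1 == "--quiet" ]] && { QUIET=1; shift; }

chatter() {
    # Human-oriented messages go to stderr, and only in verbose mode.
    (( QUIET )) || echo "$@" >&2
}

chatter "Gathering statistics..."
[[ $1 == "--generate-stats" ]] && echo "cpu=90 mem=50 disk=15"

The caller would then use stats=$(./worker.sh --quiet --generate-stats), and stray chatter can never pollute the captured value.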
Maybe a different approach would be for the second script to test whether its stdout is being used programmatically:
gash.sh:
#!/bin/bash
data=$(./another.sh)
echo "Received $data"
another.sh:
#!/bin/bash
# for -t see man isatty(3). 1 is file descriptor 1 - stdout
if [ -t 1 ]; then
echo "stdout is a terminal"
else
echo "stdout is not a terminal"
fi
Gives (where $ is a generic shell prompt):
$ bash gash.sh
Received stdout is not a terminal
$ bash another.sh
stdout is a terminal
You could then set a flag to change script behaviour (ls(1) does a similar thing). However, you should be prepared for this:
$ bash another.sh|more
stdout is not a terminal
$ bash another.sh > out.txt
$ cat out.txt
stdout is not a terminal
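You could, for instance, write a hypothetical version of another.sh that follows the ls(1) pattern:

#!/bin/bash
if [ -t 1 ]; then
    # stdout is a terminal: print a human-friendly report
    printf 'CPU: %s%%  Memory: %s%%  Disk: %s%%\n' 90 50 15
else
    # stdout is captured or piped: print bare key=value pairs
    echo "cpu=90 mem=50 disk=15"
fi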
I would like to route a file descriptor to multiple places at the same time. For instance, I would like every command in my script to print stdout to /dev/pts/9 and ./myscript.stdout at the same time.
I'm looking to achieve similar results as piping every command in a script (or a section of a script) into tee, perhaps with file descriptors. I also want the ability to restore default output behavior later in the script.
This code doesn't work, but it's an attempt at expressing my intent. To restore stdout as FD 1 later, I copy it into FD 4.
exec 3>(tee /dev/pts/9 ./myscript.stdout)
exec 4>&1
exec 1>&3
Restore normal output behavior, deleting FDs 3 and 4.
exec 1>&4
exec 4>&-
exec 3>&-
I would like every command in my script to print stdout to /dev/pts/9
and ./myscript.stdout at the same time.
exec 1> >(tee ./myscript.stdout >/dev/pts/9)
The above combines redirection and process substitution. With redirection alone, one can send stdout to a file. For example:
exec 1> filename
However, with bash, filenames can often be replaced with commands. This is called process substitution and it looks like >(some command) or <(some command) depending on whether one wants to write-to or read-from the process. In our case, we want to write to a tee command. Thus:
exec 1> >(some command)
Or, more specifically:
exec 1> >(tee ./myscript.stdout >/dev/pts/9)
Note that we have to maintain the space between the redirection (1>) and the process substitution (>(tee ./myscript.stdout >/dev/pts/9)). Without the space, it would look like we were trying to append to a file whose name starts with a parenthesis, and this would generate a bash error.
For more information on this see the sections entitled "REDIRECTION" and "Process Substitution" in man bash.
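The question also asked about restoring the default output behaviour later. A sketch of that, following the question's own save-to-FD-4 idea (and assuming /dev/pts/9 really is your terminal):

exec 4>&1                                      # save the current stdout on FD 4
exec 1> >(tee ./myscript.stdout >/dev/pts/9)   # stdout now goes to the file and the terminal
echo "this line goes to both places"
exec 1>&4 4>&-                                 # restore stdout and close FD 4
echo "back to normal output"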
#!/bin/bash
random=$$ # $$ is the shell's PID, used here to give the log files unique names
out=out.$random
err=err.$random
dev=$(who -m | awk '{print $2}') # find the current pseudo-terminal (e.g. pts/9)
: >$out # create the log files or empty their contents
: >$err # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
exec 1> >(tee ./$out >/dev/$dev) # I don't know how this works but it does
exec 2> >(tee ./$err >/dev/$dev) # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
echo # writing directly to the pts in /dev doesn't look right until sending a blank line
##########################################
echo 'hello'
for i in $(seq 0 1 10); do
echo $i
done
bad_command
Thanks @John1024
The script above is for anybody else wishing to test this out.
Could somebody please explain the exec lines to me in detail?
For instance, why is there a blank space after the arrow in exec 1> ?
#!/bin/bash
logfile=$$.log
exec > >(tee "$logfile") 2>&1
echo "test"
The $$ expands to the shell's PID, which simply gives the log file a unique name; that part is optional.
It's really annoying to type this whenever I don't want to see a program's output. I'd love to know if there is a shorter way to write:
$ program >/dev/null 2>&1
Generic shell is the best, but other shells would be interesting to know about too, especially bash or dash.
>& /dev/null
You can write a function for this:
function nullify() {
    "$@" >/dev/null 2>&1
}
To use this function:
nullify program arg1 arg2 ...
Of course, you can name the function whatever you want. It can be a single character for example.
By the way, you can use exec to redirect stdout and stderr to /dev/null temporarily. I don't know if this is helpful in your case, but I thought of sharing it.
# Save stdout, stderr to file descriptors 6, 7 respectively.
exec 6>&1 7>&2
# Redirect stdout, stderr to /dev/null
exec 1>/dev/null 2>/dev/null
# Run program.
program arg1 arg2 ...
# Restore stdout, stderr.
exec 1>&6 2>&7
In bash, zsh, and dash:
$ program >&- 2>&-
It may also appear to work in other shells. Note, however, that this solution closes the file descriptors rather than redirecting them to /dev/null: a program that then writes to a closed descriptor gets an EBADF (bad file descriptor) error, which could cause it to abort.
Most shells support aliases, and zsh additionally supports global aliases, which are substituted anywhere in the command line. For instance, in my .zshrc I have things like:
alias -g no='2> /dev/null > /dev/null'
Then I just type
program no
If /dev/null is too much to type, you could (as root) do something like:
ln -s /dev/null /n
Then you could just do:
program >/n 2>&1
But of course, scripts you write in this way won't be portable to other systems without setting up that symlink first.
It's also worth noting that oftentimes redirecting output is not really necessary. Many Unix and Linux programs accept a "silent" or "quiet" flag, usually -q or -s, that suppresses output so that only the exit status signals success or failure.
For example
grep foo bar.txt >/dev/null 2>&1
if [ $? -eq 0 ]; then
do_something
fi
Can be rewritten as
grep -q foo bar.txt
if [ $? -eq 0 ]; then
do_something
fi
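Since if tests a command's exit status directly, the rewritten version can be made even terser, with identical behaviour:

if grep -q foo bar.txt; then
    do_something
fi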
Edit: the >(:) or |: based solutions might cause an error, because : doesn't read stdin. Though it might not be as bad as closing the file descriptor, as proposed in Zaz's answer.
For bash and bash-compliant shells (zsh...):
$ program &>/dev/null
OR
$ program &> >(:) # may actually cause an error or abort the program
For all shells:
$ program 2>&1 >/dev/null
OR
$ program 2>&1|: # may actually cause an error or abort the program
$ program 2>&1 > >(:) does not work in dash, because dash does not support process substitution.
Explanations:
2>&1 redirects stderr (file descriptor 2) to stdout (file descriptor 1).
| is the regular piping of stdout to the stdin of another command.
: is a shell builtin which does nothing (it is equivalent to true).
&> redirects both stdout and stderr outputs to a file.
>(your-command) is process substitution. It is replaced with a path to a special file, for instance /proc/self/fd/6; whatever is written to that path becomes the standard input of your-command.
Note: a process trying to write to a closed file descriptor will get an EBADF (bad file descriptor) error, which is more likely to cause an abort than writing to | true; the latter would cause an EPIPE (broken pipe) error instead (see Charles Duffy's comment).
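A quick way to observe the two failure modes side by side (the exact error wording varies between shells):

echo hi >&-    # closed FD: the shell reports a "Bad file descriptor" write error (EBADF)
yes | :        # broken pipe: yes is killed by SIGPIPE once : exits immediately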
Ayman Hourieh's solution works well for one-off invocations of overly chatty programs. But if there's only a small set of commonly called programs for which you want to suppress output, consider silencing them by adding the following to your .bashrc file (or the equivalent, if you use another shell):
CHATTY_PROGRAMS=(okular firefox libreoffice kwrite)
for PROGRAM in "${CHATTY_PROGRAMS[@]}"
do
printf -v eval_str '%q() { command %q "$@" &>/dev/null; }' "$PROGRAM" "$PROGRAM"
eval "$eval_str"
done
This way you can continue to invoke programs using their usual names, but their stdout and stderr output will disappear into the bit bucket.
Note also that certain programs allow you to configure how much logging/debugging output they spew. For KDE applications, you can run kdebugdialog and selectively or globally disable debugging output.
It seems to me that the most portable solution, and the best answer, would be a macro on your terminal (PC).
That way, no matter what server you log in to, it will always be there.
If you happen to run Windows, you can get the desired outcome with AutoHotkey (it's open source) in two tiny lines of code, which can translate any string of keys into any other string of keys, in situ.
You type "ugly.sh >>NULL" and it will rewrite it as "ugly.sh >/dev/null 2>&1" or whatnot.
Solutions for other platforms are somewhat more difficult. AppleScript can paste in keyboard presses, but can't be triggered that easily.
There seem to be two bash idioms for redirecting STDOUT and STDERR to a file:
fooscript &> foo
... and ...
fooscript > foo 2>&1
What's the difference? It seems to me that the first one is just a shortcut for the second one, but my coworker contends that the second one will produce no output even if there's an error with the initial redirect, whereas the first one will spit redirect errors to STDOUT.
EDIT: Okay... it seems like people are not understanding what I am asking, so I will try to clarify:
Can anyone give me an example where the two specific lines written above will yield different behavior?
From the bash manual:
There are two formats for redirecting standard output and standard error:
&>word
and
>&word
Of the two forms, the first is preferred. This is semantically equivalent to
>word 2>&1
The phrase "semantically equivalent" should settle the issue with your coworker.
The situation where the two lines have different behavior is when your script is not running in bash but some simpler shell in the sh family, e.g. dash (which I believe is used as /bin/sh in some Linux distros because it is more lightweight than bash). In that case,
fooscript &> foo
is interpreted as two commands: the first one runs fooscript in the background, and the second one truncates the file foo. The command
fooscript > foo 2>&1
runs fooscript in the foreground and redirects its output and standard error to the file foo. In bash I think the lines will always do the same thing.
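This is easy to demonstrate (an illustrative transcript; foo is just a scratch file):

$ dash -c 'echo hello &> foo'
hello
$ cat foo
$

In dash, echo hello ran in the background and printed to the terminal, while > foo merely truncated the file.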
The main reason to use 2>&1, in my experience, is when you want to append all output to a file rather than overwrite the file. With the &> syntax you can't append (bash 4 and later do add &>> for appending, but it is not portable). So with 2>&1 you can write something like program >> alloutput.log 2>&1 and get stdout and stderr output appended to the log file.
&>foo is less typing than >foo 2>&1, and less error-prone (you can't get it in the wrong order), but achieves the same result.
2>&1 is confusing, because you need to put it after the 1> redirect, unless stdout is being redirected to a | pipe, in which case it goes before...
$ some-command 2>&1 >foo # does the unexpected
$ some-command >foo 2>&1 # does the same as
$ some-command &>foo # this and
$ some-command >&foo # compatible with other shells, but trouble if the filename is numeric
$ some-command 2>&1 | less # but the redirect goes *before* the pipe here...
&> foo # redirects all output, both stdout and stderr, to foo.
2>&1 # redirects stderr to wherever stdout currently points.
2>&1 depends on the order in which it is specified on the command line. Where &> sends both stdout and stderr to wherever, 2>&1 sends stderr to where stdout is currently going at that point in the command line. Thus:
command > file 2>&1
is different than:
command 2>&1 > file
where the former redirects both stdout and stderr to file, while the latter redirects stderr to where stdout was going before it was redirected to the file (in this case, probably the terminal). This is useful if you wanted to do something like:
command 2>&1 > file | less
where you want to use less to page through the output of stderr and store the output of stdout in a file.
2>&1 might be useful for cases where you aren't redirecting stdout to somewhere else, but rather you just want stderr to be sent to the same place (such as the console) as stdout (perhaps if the command you are running is already sending stderr somewhere else other than the console).
I know it's a pretty old post, but I'm sharing something that could answer the question
"...will yield different behavior?"
Scenario:
When you are trying to use "tee" and want to preserve the exit code using process substitution in bash...
someScript.sh 2>&1 > >( tee "toALog") # fails: stderr is not captured in the log, because 2>&1 is applied before stdout is redirected
whereas:
someScript.sh >& >( tee "toALog") # works fine!
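The underlying reason, for anybody puzzled by this: with a plain pipe such as someScript.sh | tee toALog, $? is the exit status of tee, not of the script, whereas a redirection to a process substitution leaves the script's own status in $?. A quick sketch:

false | tee out.log; echo $?      # prints 0: $? belongs to tee
false > >(tee out.log); echo $?   # prints 1: false's own exit status is preserved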