I'd like to know if there's a way in bash to have the current command statement printed to both stdout and a log file, in addition to the command's output. For example:
runme.sh:
# do some setup, e.g. create script.log
echo "Fully logged command"
Would write the following to stdout and to script.log:
+ echo 'Fully logged command'
Fully logged command
For example, if I use these lines early in the script:
set -x
exec > >(tee -ai script.log)
This shows the trace output from set -x on the terminal but not in the log file.
I have done a bit of testing, and it appears that set -x prints its messages to stderr. This means that you need to redirect stderr to stdout and pipe stdout to tee.
So if you are doing this:
set -x
exec > >(tee -ai output.log) 2>&1
... you are neatly getting everything that Bash executes in your log file as well, together with any output produced by any commands that you are executing.
But beware: any formatting that may be applied by your programs is lost (many programs disable colors and similar formatting when they detect their output is not a terminal).
As a side note, as has been explained in some answers here, any pipes are created before any redirections take effect. So when you are redirecting stderr to a piped stdout, that is also going to wind up in said pipe.
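Putting that together, a minimal sketch (using the script.log name from the question; tee's -a flag appends and -i ignores interrupts):

#!/bin/bash
# Redirect stdout to tee (terminal + log), then make stderr a copy of it,
# so the set -x trace (which is written to stderr) lands in the log too.
exec > >(tee -ai script.log) 2>&1
set -x

echo "Fully logged command"

Running this prints the following on the terminal and appends it to script.log:

+ echo 'Fully logged command'
Fully logged command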
My intent was to have all the output of my bash script displayed on the console and logged to a file.
Here is my script that works as expected.
#!/bin/bash
LOG_FILE="test_log.log"
touch $LOG_FILE
# output to console and to logfile
exec > >(tee $LOG_FILE) 2>&1
echo "Starting command ls"
ls -al
echo "End of script"
However, I do not understand why it works that way.
I expected exec >>(tee $LOG_FILE) 2>&1 to work, but it fails, although exec >>$LOG_FILE 2>&1 does work.
I could not find an explanation of the construction exec > >(command) in the bash manual, nor in the Advanced Bash-Scripting Guide. Can you explain the logic behind it?
The >(tee $LOG_FILE) is an example of process substitution; you might wish to search for that term in the Advanced Bash-Scripting Guide and the Bash manual.
Using the syntax <(program) for capturing output and >(program) for feeding input, we can pass data one record at a time. It is more powerful than command substitution (backticks, or $( )) because it substitutes a filename, not text. Therefore, anywhere a file is normally specified we can substitute a program's standard output or input (although process substitution on input is not all that common).
This is particularly useful where a program does not use standard streams for what you want.
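For instance, diff expects filenames rather than streams, so process substitution lets you compare the output of two commands directly (the file names here are placeholders):

diff <(sort file1.txt) <(sort file2.txt)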
Note that in your example you are missing a space: exec >>(tee $LOG_FILE) 2>&1 is wrong (you will get a syntax error). Rather,
exec > >(tee $LOG_FILE) 2>&1
is correct, that space is critical.
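Without the space, bash parses >> as the append operator and then trips over the parenthesis, so you get something like:

$ exec >>(tee test.log) 2>&1
bash: syntax error near unexpected token `('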
So, the exec > part changes file descriptor 1 (the default), also known as stdout or standard output, to refer to "whatever comes next"; in this case it is the process substitution, although normally it would be a filename.
2>&1 redirects file descriptor 2 (stderr or standard error) to refer to the same place as file descriptor 1 (stdout or standard output). Important: if you omit the & you end up with a file called 1 rather than a successful redirection.
Once you have called the exec line above, then you have changed the current process's standard output, so output from the commands which follow go to that tee process instead of to regular stdout.
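On Linux you can confirm this by inspecting the shell's file descriptors after the exec line (a sketch; the /proc interface is Linux-specific):

ls -l /proc/$$/fd/1   # now a symlink to a pipe feeding the tee process, not to the terminal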
Is it possible, within a bash script, to make all output go to a log file, except the output I specifically print with echo? But if there are errors in the output, they should show in the terminal (and in the log file too, of course).
Here is what you can do by using an additional file descriptor:
#!/bin/bash
# open fd=3 redirecting to 1 (stdout)
exec 3>&1
# redirect stdout/stderr to a file but show stderr on terminal
exec >file.log 2> >(tee >(cat >&3))
# function echo to show echo output on terminal
echo() {
# call actual echo command and redirect output to fd=3
command echo "$@" >&3
}
# script starts here
echo "show me"
printf "=====================\n"
printf "%s\n" "hide me"
ls foo-foo
date
tty
echo "end of run"
# close fd=3
exec 3>&-
After you run the script, it will display the following on the terminal:
show me
ls: cannot access 'foo-foo': No such file or directory
end of run
If you do cat file.log then it shows:
=====================
hide me
ls: cannot access 'foo-foo': No such file or directory
Fri Dec 2 14:20:47 EST 2016
/dev/ttys002
On the terminal we only get the output of the echo command plus all the errors.
In the log file we get the errors and the remaining output from the script.
UNIX programs are given two output file descriptors, stdout and stderr, both of which go to the terminal by default.
Well behaved programs send their "standard" output to stdout, and errors to stderr. So for example echo writes to stdout. grep writes matching lines to stdout, but if something goes wrong, for example a file can't be read, the error goes to stderr.
You can redirect these with > (for stdout) and 2> (for stderr). So:
myscript >log 2>errors
Writes output to log and errors to errors.
So part of your requirement can be met simply with:
command >log
... errors will continue to go to the terminal, via stderr.
Your extra requirement is "except the output I specifically output with echo".
It might be enough for you that your echos go to stderr:
echo "Processing next part" >&2
The >&2 redirects stdout from this command to stderr. This is the standard way of outputting errors (and sometimes informational output) in shell scripts.
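Putting the two pieces together, a minimal sketch (myscript and log are placeholder names):

#!/bin/bash
echo "Processing next part" >&2   # informational: goes to stderr, stays on the terminal
echo "result data"                # normal output: goes wherever stdout points

Running ./myscript >log prints "Processing next part" on the terminal and puts "result data" into log.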
If you need more than this, you might want to do something more complicated with more file descriptors. Try: https://unix.stackexchange.com/questions/18899/when-would-you-use-an-additional-file-descriptor
Well behaved UNIX programs tend to avoid doing complicated things with extra file descriptors. The convention is to restrict yourself to stdout and stderr, with any further outputs being specified as filenames in the command line parameters.
Suppose that a script of mine is invoked like this:
(script.sh 1>&2) 2>err
Is there a way to redirect the output of one of the commands run by the script to standard output? I tried to do 2>&1 for that command, but that did not help. This answer suggests a solution for the Windows command shell and redirects to a file instead of the standard output.
For a simple example, suppose that the script is:
#!/bin/sh
# ... many commands whose output will go to `stderr`
echo aaa # command whose output needs to go to `stdout`; tried 2>&1
# ... many commands whose output will go to `stderr`
How do I cause the output of that echo to go to stdout (a sign of that would be that it would appear on the screen) when the script is invoked as shown above?
Send it to stderr in the script:
echo this goes to stderr
echo so does this
echo this will end up in stdout >&2
echo more stderr
Run as follows (the sequence 3>&2 2>&1 1>&3 swaps stdout and stderr, using fd 3 to hold a copy of the original stderr):
{ ./script.sh 3>&2 2>&1 1>&3 ; } 2>err
err contains:
this goes to stderr
so does this
more stderr
Output to stdout:
this will end up in stdout
I know that in Linux, to redirect output from the screen to a file, I can use either > or tee. However, I'm not sure why part of the output is still shown on the screen and not written to the file.
Is there a way to redirect all output to a file?
That part is written to stderr; use 2> to redirect it. For example:
foo > stdout.txt 2> stderr.txt
or if you want in same file:
foo > allout.txt 2>&1
Note: this works in (ba)sh, check your shell for proper syntax
All POSIX operating systems have 3 streams: stdin, stdout, and stderr. stdin is the input; through a pipe it can be fed by another program's stdout or stderr. stdout is the primary output, which is redirected with >, >>, or |. stderr is the error output, which is handled separately so that errors do not get passed to a command or written to a file that they might break; normally it is sent to a log of some kind, or dumped directly, even when stdout is redirected. To redirect both to the same place, use:
command &> /some/file
EDIT: thanks to Zack for pointing out that the above solution is not portable. Use instead:
command > file 2>&1
If you want to silence the errors, do:
command 2> /dev/null
To get the output on the console AND in a file, file.txt for example:
make 2>&1 | tee file.txt
Note: & (in 2>&1) specifies that 1 is not a file name but a file descriptor.
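In other words, the two commands below do very different things:

make 2>1    # redirects stderr to a file literally named "1"
make 2>&1   # redirects stderr to file descriptor 1 (stdout)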
Use this: your_command > log_file_name 2>&1
Detailed description of the redirection operators in Unix/Linux:
The > operator redirects the output, usually to a file, but it can be to a device. You can also use >> to append.
If you don't specify a file descriptor number, the standard output stream is assumed, but you can also redirect errors:
> file redirects stdout to file
1> file redirects stdout to file
2> file redirects stderr to file
&> file redirects stdout and stderr to file
/dev/null is the null device it takes any input you want and throws it away. It can be used to suppress any output.
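A few combinations, using make as a stand-in for any command that writes to both streams:

make > build.log          # stdout to build.log, errors still on the terminal
make 2> build.err         # errors to build.err, normal output on the terminal
make > build.log 2>&1     # both streams into build.log
make &> build.log         # same thing, bash shorthand
make 2> /dev/null         # discard the errors entirely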
Credits to osexp2003 and j.a. …
Instead of putting:
&>> your_file.log
behind a line in:
crontab -e
I use:
#!/bin/bash
exec &>> your_file.log
…
at the beginning of a Bash script.
Advantage: You have the log definitions within your script. Good for Git etc.
You can use the exec command to redirect all stdout/stderr output of any commands that run later.
Sample script:
exec 2> your_file2 > your_file1
your other commands.....
It might be the standard error. You can redirect it:
... > out.txt 2>&1
Command:
foo >> output.txt 2>&1
appends to the output.txt file, without replacing the content.
Use >> to append:
command >> file
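For example:

echo one > file    # creates (or truncates) file
echo two >> file   # appends; file now contains both lines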
There seem to be two bash idioms for redirecting STDOUT and STDERR to a file:
fooscript &> foo
... and ...
fooscript > foo 2>&1
What's the difference? It seems to me that the first one is just a shortcut for the second one, but my coworker contends that the second one will produce no output even if there's an error with the initial redirect, whereas the first one will spit redirect errors to STDOUT.
EDIT: Okay... it seems like people are not understanding what I am asking, so I will try to clarify:
Can anyone give me an example where the two specific lines written above will yield different behavior?
From the bash manual:
There are two formats for redirecting standard output and standard error:
&>word
and
>&word
Of the two forms, the first is preferred. This is semantically equivalent to
>word 2>&1
The phrase "semantically equivalent" should settle the issue with your coworker.
The situation where the two lines have different behavior is when your script is not running in bash but some simpler shell in the sh family, e.g. dash (which I believe is used as /bin/sh in some Linux distros because it is more lightweight than bash). In that case,
fooscript &> foo
is interpreted as two commands: the first one runs fooscript in the background, and the second one truncates the file foo. The command
fooscript > foo 2>&1
runs fooscript in the foreground and redirects its output and standard error to the file foo. In bash I think the lines will always do the same thing.
The main reason to use 2>&1, in my experience, is when you want to append all output to a file rather than overwrite the file. With the &> syntax you can't append (although bash 4 later added &>> for that). So with 2>&1, you can write something like program >> alloutput.log 2>&1 and get stdout and stderr output appended to the log file.
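For example (the log name is a placeholder):

program >> alloutput.log 2>&1   # appends both streams; works in plain sh too
program &>> alloutput.log       # bash 4+ shorthand for the same thing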
&>foo is less typing than >foo 2>&1, and less error-prone (you can't get it in the wrong order), but achieves the same result.
2>&1 is confusing, because you need to put it after the 1> redirect, unless stdout is being redirected to a | pipe, in which case it goes before...
$ some-command 2>&1 >foo # does the unexpected
$ some-command >foo 2>&1 # does the same as
$ some-command &>foo # this and
$ some-command >&foo # compatible with other shells, but trouble if the filename is numeric
$ some-command 2>&1 | less # but the redirect goes *before* the pipe here...
&> foo # Will take all and redirect all output to foo.
2>&1 # will redirect stderr to stdout.
2>&1 depends on the order in which it is specified on the command line. Where &> sends both stdout and stderr to wherever, 2>&1 sends stderr to where stdout is currently going at that point in the command line. Thus:
command > file 2>&1
is different than:
command 2>&1 > file
where the former is redirecting both stdout and stderr to file, the latter redirects stderr to where stdout is going before it is redirected to the file (in this case, probably the terminal.) This is useful if you wanted to do something like:
command 2>&1 > file | less
Where you want to use less to page through the output of stderr and store the output of stdout to a file.
2>&1 might be useful for cases where you aren't redirecting stdout to somewhere else, but rather you just want stderr to be sent to the same place (such as the console) as stdout (perhaps if the command you are running is already sending stderr somewhere else other than the console).
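For example, a pipe only carries stdout, so merging stderr into it first lets grep search both streams:

some-command 2>&1 | grep -i error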
I know it's a pretty old posting, but here is something that could answer the question:
"...will yield different behavior?"
Scenario:
When you are trying to use "tee" and want to preserve the exit code using process substitution in bash...
someScript.sh 2>&1 >( tee "toALog") # fails: with no redirection operator, >( tee ... ) is just passed to the script as an extra filename argument, so nothing lands in the log
whereas:
someScript.sh >& >( tee "toALog") # works fine! >& sends both stdout and stderr into the process substitution
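A sketch of the exit-status point mentioned in the scenario (someScript.sh as in the answer above): with a plain pipe, $? becomes tee's exit status, while the redirection form leaves the script's own status in $?:

someScript.sh 2>&1 | tee "toALog"    # $? is now tee's exit status
someScript.sh >& >( tee "toALog" )   # $? is still someScript.sh's exit status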