This question already has answers here:
How to redirect and append both standard output and standard error to a file with Bash
(8 answers)
Closed 1 year ago.
I want to redirect both standard output and standard error of a process to a single file. How do I do that in Bash?
Take a look here. It should be:
yourcommand &> filename
It redirects both standard output and standard error to file filename.
do_something 2>&1 | tee -a some_file
This redirects standard error to standard output; tee then appends everything to some_file while also printing it to standard output.
You can redirect stderr to stdout and the stdout into a file:
some_command >file.log 2>&1
See Chapter 20. I/O Redirection
This format is preferred over the more popular &> format, which only works in Bash. In the Bourne shell it could be interpreted as running the command in the background. The format is also more readable: file descriptor 2 (standard error) is redirected to file descriptor 1 (standard output).
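For instance, a quick sketch contrasting the two forms (backup.sh is just a hypothetical script name, not from the answers above):
./backup.sh >backup.log 2>&1     # portable: truncate the log, both streams into it
./backup.sh >>backup.log 2>&1    # portable: append instead of truncate
./backup.sh &>backup.log         # Bash-only shorthand for the first line
./backup.sh &>>backup.log        # Bash-only appending shorthand (requires Bash 4+)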
# Close standard output file descriptor
exec 1<&-
# Close standard error file descriptor
exec 2<&-
# Open standard output as $LOG_FILE file for read and write.
exec 1<>$LOG_FILE
# Redirect standard error to standard output
exec 2>&1
echo "This line will appear in $LOG_FILE, not 'on screen'"
Now, a simple echo will write to $LOG_FILE, and it is useful for daemonizing.
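A minimal sketch of that daemonizing pattern; the LOG_FILE path and the loop body are made up for illustration:
#!/bin/bash
LOG_FILE=/var/log/mydaemon.log   # hypothetical path
exec 1<&-                        # close stdout
exec 2<&-                        # close stderr
exec 1<>"$LOG_FILE"              # reopen fd 1 on the log file (read/write)
exec 2>&1                        # send stderr to the same place
while true; do
    echo "heartbeat: $(date)"    # lands in $LOG_FILE, not on a terminal
    sleep 60
done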
To the author of the original post,
It depends on what you need to achieve. If you just need to redirect the input/output of a command you call from your script, the answers are already given. Mine is about redirecting within the current script, which affects all commands/built-ins (including forks) after the code snippet below.
Another cool solution is redirecting to both standard error and standard output while also logging to a log file at once, which involves splitting "a stream" into two. This functionality is provided by the 'tee' command, which can write/append to several file descriptors (files, sockets, pipes, etc.) at once: tee FILE1 FILE2 ... >(cmd1) >(cmd2) ...
exec 3>&1 4>&2 1> >(tee >(logger -i -t 'my_script_tag') >&3) 2> >(tee >(logger -i -t 'my_script_tag') >&4)
trap 'cleanup' INT QUIT TERM EXIT
get_pids_of_ppid() {
local ppid="$1"
RETVAL=''
local pids=`ps x -o pid,ppid | awk "\$2 == \"$ppid\" { print \$1 }"`
RETVAL="$pids"
}
# Needed to kill processes running in background
cleanup() {
local current_pid element
local pids=( "$$" )
running_pids=("${pids[#]}")
while :; do
current_pid="${running_pids[0]}"
[ -z "$current_pid" ] && break
running_pids=("${running_pids[#]:1}")
get_pids_of_ppid $current_pid
local new_pids="$RETVAL"
[ -z "$new_pids" ] && continue
for element in $new_pids; do
running_pids+=("$element")
pids=("$element" "${pids[#]}")
done
done
kill ${pids[@]} 2>/dev/null
}
So, from the beginning. Let's assume we have a terminal connected to /dev/stdout (file descriptor #1) and /dev/stderr (file descriptor #2). In practice, it could be a pipe, socket or whatever.
Create file descriptors (FDs) #3 and #4 and point them to the same "location" as #1 and #2 respectively. Changing file descriptor #1 doesn't affect file descriptor #3 from now on. Now, file descriptors #3 and #4 point to standard output and standard error respectively. These will be used as the real terminal standard output and standard error.
1> >(...) redirects standard output to command in parentheses
Parentheses (sub-shell) executes 'tee', reading from exec's standard output (pipe) and redirects to the 'logger' command via another pipe to the sub-shell in parentheses. At the same time it copies the same input to file descriptor #3 (the terminal)
The second part, very similar, does the same trick for standard error and file descriptors #2 and #4.
The result of running a script having the above line and additionally this one:
echo "Will end up in standard output (terminal) and /var/log/messages"
...is as follows:
$ ./my_script
Will end up in standard output (terminal) and /var/log/messages
$ tail -n1 /var/log/messages
Sep 23 15:54:03 wks056 my_script_tag[11644]: Will end up in standard output (terminal) and /var/log/messages
If you want to see a clearer picture, add these two lines to the script:
ls -l /proc/self/fd/
ps xf
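Since the exec line above saved the original standard output and standard error on file descriptors #3 and #4, the redirection can also be undone later in the script. This is not part of the original answer, just a small sketch of what the saved descriptors make possible:
# Put stdout and stderr back where they were, then close the saved copies
exec 1>&3 2>&4 3>&- 4>&-
echo "this goes straight to the terminal again, bypassing the logger"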
bash your_script.sh 1>file.log 2>&1
1>file.log instructs the shell to send standard output to the file file.log, and 2>&1 tells it to redirect standard error (file descriptor 2) to standard output (file descriptor 1).
Note: The order matters, as liw.fi pointed out; 2>&1 1>file.log doesn't work.
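A quick illustration of the difference, using a subshell that writes to both streams (similar to the examples in later answers):
$ (echo out; echo err >&2) >file.log 2>&1    # both lines end up in file.log
$ (echo out; echo err >&2) 2>&1 1>file.log   # "err" stays on the terminal; only "out" reaches file.log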
Curiously, this works:
yourcommand &> filename
But this gives a syntax error:
yourcommand &>> filename
syntax error near unexpected token `>'
You have to use:
yourcommand 1>> filename 2>&1
Short answer: Command >filename 2>&1 or Command &>filename
Explanation:
Consider the following code which prints the word "stdout" to stdout and the word "stderror" to stderr.
$ (echo "stdout"; echo "stderror" >&2)
stdout
stderror
Note that the '&' operator tells bash that 2 is a file descriptor (which points to the stderr) and not a file name. If we left out the '&', this command would print stdout to stdout, and create a file named "2" and write stderror there.
By experimenting with the code above, you can see for yourself exactly how redirection operators work. For instance, by changing which of the two descriptors 1 and 2 is redirected to /dev/null, the following two lines of code delete everything from stdout and everything from stderr respectively (printing what remains).
$ (echo "stdout"; echo "stderror" >&2) 1>/dev/null
stderror
$ (echo "stdout"; echo "stderror" >&2) 2>/dev/null
stdout
Now we can explain why the following code produces no output:
(echo "stdout"; echo "stderror" >&2) >/dev/null 2>&1
To truly understand this, I highly recommend you read this webpage on file descriptor tables. Assuming you have done that reading, we can proceed. Note that Bash processes redirections from left to right; thus Bash sees >/dev/null first (which is the same as 1>/dev/null), and sets file descriptor 1 to point to /dev/null instead of stdout. Having done this, Bash then moves rightwards and sees 2>&1. This sets file descriptor 2 to point to the same file as file descriptor 1 (and not to file descriptor 1 itself; see this resource on pointers for more info). Since file descriptor 1 points to /dev/null, and file descriptor 2 points to the same file as file descriptor 1, file descriptor 2 now also points to /dev/null. Thus both file descriptors point to /dev/null, and this is why no output is rendered.
To test if you really understand the concept, try to guess the output when we switch the redirection order:
(echo "stdout"; echo "stderror" >&2) 2>&1 >/dev/null
stderror
The reasoning here is that, evaluating from left to right, Bash sees 2>&1 and sets file descriptor 2 to point to the same place as file descriptor 1, i.e. stdout. It then sets file descriptor 1 (remember that >/dev/null = 1>/dev/null) to point to /dev/null, thus deleting everything which would usually be sent to standard out. Thus all we are left with is what was not sent to stdout in the subshell (the code in the parentheses) - i.e. "stderror".
The interesting thing to note there is that even though 1 is just a pointer to the stdout, redirecting pointer 2 to 1 via 2>&1 does NOT form a chain of pointers 2 -> 1 -> stdout. If it did, as a result of redirecting 1 to /dev/null, the code 2>&1 >/dev/null would give the pointer chain 2 -> 1 -> /dev/null, and thus the code would generate nothing, in contrast to what we saw above.
Finally, I'd note that there is a simpler way to do this:
From section 3.6.4 here, we see that we can use the operator &> to redirect both stdout and stderr. Thus, to redirect both the stderr and stdout output of any command to /dev/null (which deletes the output), we simply type
$ command &> /dev/null
or in case of my example:
$ (echo "stdout"; echo "stderror" >&2) &>/dev/null
Key takeaways:
File descriptors behave like pointers (although file descriptors are not the same as file pointers)
Redirecting a file descriptor "a" to a file descriptor "b" which points to file "f", causes file descriptor "a" to point to the same place as file descriptor b - file "f". It DOES NOT form a chain of pointers a -> b -> f
Because of the above, order matters, 2>&1 >/dev/null is != >/dev/null 2>&1. One generates output and the other does not!
Finally have a look at these great resources:
Bash Documentation on Redirection, An Explanation of File Descriptor Tables, Introduction to Pointers
LOG_FACILITY="local7.notice"
LOG_TOPIC="my-prog-name"
LOG_TOPIC_OUT="$LOG_TOPIC-out[$$]"
LOG_TOPIC_ERR="$LOG_TOPIC-err[$$]"
exec 3>&1 > >(tee -a /dev/fd/3 | logger -p "$LOG_FACILITY" -t "$LOG_TOPIC_OUT" )
exec 2> >(logger -p "$LOG_FACILITY" -t "$LOG_TOPIC_ERR" )
It is related: Writing standard output and standard error to syslog.
It almost works, but not from xinetd ;(
For the situation when "piping" is necessary, you can use |&.
For example:
echo -ne "15\n100\n" | sort -c |& tee >sort_result.txt
or
TIMEFORMAT=%R;for i in `seq 1 20` ; do time kubectl get pods | grep node >>js.log ; done |& sort -h
These Bash-specific examples pipe standard error on to the next command together with standard output (capturing the error output of sort -c in the first case, and feeding the timings that time writes to standard error into sort -h in the second).
I wanted a solution to have the output from stdout plus stderr written into a log file and stderr still on console. So I needed to duplicate the stderr output via tee.
This is the solution I found:
command 3>&1 1>&2 2>&3 1>>logfile | tee -a logfile
First swap stderr and stdout
then append the stdout to the log file
pipe stderr to tee and append it also to the log file
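For example, a hedged sketch using a dummy command in place of a real one:
(echo normal; echo oops >&2) 3>&1 1>&2 2>&3 1>>logfile | tee -a logfile
# The terminal shows only "oops" (the stderr line, echoed back by tee).
# logfile ends up containing both lines (ordering may vary).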
Adding to what Fernando Fabreti did, I changed the functions slightly and removed the &- closing and it worked for me.
function saveStandardOutputs {
if [ "$OUTPUTS_REDIRECTED" == "false" ]; then
exec 3>&1
exec 4>&2
trap restoreStandardOutputs EXIT
else
echo "[ERROR]: ${FUNCNAME[0]}: Cannot save standard outputs because they have been redirected before"
exit 1;
fi
}
# Parameters: $1 => logfile to write to
function redirectOutputsToLogfile {
if [ "$OUTPUTS_REDIRECTED" == "false" ]; then
LOGFILE=$1
if [ -z "$LOGFILE" ]; then
echo "[ERROR]: ${FUNCNAME[0]}: logfile empty [$LOGFILE]"
fi
if [ ! -f $LOGFILE ]; then
touch $LOGFILE
fi
if [ ! -f $LOGFILE ]; then
echo "[ERROR]: ${FUNCNAME[0]}: creating logfile [$LOGFILE]"
exit 1
fi
saveStandardOutputs
exec 1>>${LOGFILE}
exec 2>&1
OUTPUTS_REDIRECTED="true"
else
echo "[ERROR]: ${FUNCNAME[0]}: Cannot redirect standard outputs because they have been redirected before"
exit 1;
fi
}
function restoreStandardOutputs {
if [ "$OUTPUTS_REDIRECTED" == "true" ]; then
exec 1>&3 #restore stdout
exec 2>&4 #restore stderr
OUTPUTS_REDIRECTED="false"
fi
}
LOGFILE_NAME="tmp/one.log"
OUTPUTS_REDIRECTED="false"
echo "this goes to standard output"
redirectOutputsToLogfile $LOGFILE_NAME
echo "this goes to logfile"
echo "${LOGFILE_NAME}"
restoreStandardOutputs
echo "After restore this goes to standard output"
The "easiest" way (Bash 4 only):
ls * 2>&- 1>&-
In situations where you consider using things like exec 2>&1, I find it easier to read the code, if possible, by rewriting it using Bash functions like this:
function myfunc(){
[...]
}
myfunc &>mylog.log
The following functions can be used to automate the process of toggling outputs between stdout/stderr and a logfile.
#!/bin/bash
#set -x
# global vars
OUTPUTS_REDIRECTED="false"
LOGFILE=/dev/stdout
# "private" function used by redirect_outputs_to_logfile()
function save_standard_outputs {
if [ "$OUTPUTS_REDIRECTED" == "true" ]; then
echo "[ERROR]: ${FUNCNAME[0]}: Cannot save standard outputs because they have been redirected before"
exit 1;
fi
exec 3>&1
exec 4>&2
trap restore_standard_outputs EXIT
}
# Params: $1 => logfile to write to
function redirect_outputs_to_logfile {
if [ "$OUTPUTS_REDIRECTED" == "true" ]; then
echo "[ERROR]: ${FUNCNAME[0]}: Cannot redirect standard outputs because they have been redirected before"
exit 1;
fi
LOGFILE=$1
if [ -z "$LOGFILE" ]; then
echo "[ERROR]: ${FUNCNAME[0]}: logfile empty [$LOGFILE]"
fi
if [ ! -f $LOGFILE ]; then
touch $LOGFILE
fi
if [ ! -f $LOGFILE ]; then
echo "[ERROR]: ${FUNCNAME[0]}: creating logfile [$LOGFILE]"
exit 1
fi
save_standard_outputs
exec 1>>${LOGFILE%.log}.log
exec 2>&1
OUTPUTS_REDIRECTED="true"
}
# "private" function used by save_standard_outputs()
function restore_standard_outputs {
if [ "$OUTPUTS_REDIRECTED" == "false" ]; then
echo "[ERROR]: ${FUNCNAME[0]}: Cannot restore standard outputs because they have NOT been redirected"
exit 1;
fi
exec 1>&- #closes FD 1 (logfile)
exec 2>&- #closes FD 2 (logfile)
exec 2>&4 #restore stderr
exec 1>&3 #restore stdout
OUTPUTS_REDIRECTED="false"
}
Example of usage inside script:
echo "this goes to stdout"
redirect_outputs_to_logfile /tmp/one.log
echo "this goes to logfile"
restore_standard_outputs
echo "this goes to stdout"
For tcsh, I have to use the following command:
command >& file
If using command &> file, it will give an "Invalid null command" error.
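If you also need to append rather than overwrite in tcsh, the analogous operator should be >>&; a quick sketch:
command >>& file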
I'm facing a problem that I'm not able to explain. I have to re-implement a shell, and there is a behavior that puzzles me.
echo test >&2 2>&1
This command writes to stderr (because of the first redirection), so why doesn't the second redirection change the output destination? Why doesn't the output end up on stdout?
I've read that redirections are processed before executing the command, from left to right, so why doesn't the second redirection cancel the first one?
Thanks in advance.
Edit: I'm running my script in bash.
x>&y means "redirect file descriptor x to whatever fd y is currently pointing to".
Redirections are processed strictly left to right.
So, >&2 2>&1 points fd 1 to something like /dev/stderr, and then points fd 2 to /dev/stderr also.
If you want to swap stderr and stdout, you need a 3rd file descriptor:
(echo "test to stdout"; echo "test to stderr" >&2) 3>&1 1>&2 2>&3 3>&-
# ................................................ A .. B .. C .. D
# A. fd 3 = /dev/stdout
# B. fd 1 = /dev/stderr
# C. fd 2 = /dev/stdout
# D. fd 3 is closed
Let's put that in a function for easier testing
fdtest() { (echo "test to stdout"; echo "test to stderr" >&2) 3>&1 1>&2 2>&3 3>&-; }
Run it
$ fdtest
test to stdout
test to stderr
Throw away standard error (we expect to see the "stderr" message on standard out).
$ fdtest 2>/dev/null
test to stderr
Throw away standard out (we expect to see the "stdout" message on standard err).
$ fdtest 1>/dev/null
test to stdout
cmd >&2 2>&1 is very different from cmd 2>&1 >&2, since the redirections occur in a different order. Suppose the command is invoked from a process which has fd 1 attached to a file named 'output' and fd 2 attached to a file named 'error'. Then cmd >&2 2>&1 will redirect the stdout of cmd to the file named 'error' and then redirect the stderr of cmd to whatever fd 1 is now connected to, namely the file named 'error'. But cmd 2>&1 >&2 will first redirect fd 2 to 'output' and then redirect fd 1 to the same place. In other words, cmd >&2 2>&1 writes everything to stderr, and cmd 2>&1 >&2 writes everything to stdout.
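A small sketch to verify this, using a throwaway cmd function in place of a real command (names are made up for the example):
cmd() { echo from-stdout; echo from-stderr >&2; }
{ cmd >&2 2>&1; } 1>output 2>error   # both lines land in the file "error"
{ cmd 2>&1 >&2; } 1>output 2>error   # both lines land in the file "output"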
I'm writing a shell script within AWS DataPipeline that connects to a relational database; the exit code is 0 even for SQL errors, so I need to capture and redirect stderr.
I'm redirecting stderr to another file to check its contents, but I still want stderr to be populated. The reason is that DataPipeline captures all the stderr and stdout and puts them into one log. It's not capturing the stderr from the failed SQL command because of the redirection. Is there a way to still have stderr populated? What I currently have is:
/bin/snowsql -f /home/scripts/dev/dev.sql 2> /home/scripts/dev/stderr.txt
This would be a lot shorter in bash:
#!/bin/bash
# ^^^^- NOT /bin/sh
...yourcommand... 2> >(tee stderr.txt >&2)
In a POSIX-compliant manner, it looks more like:
#!/bin/sh
rm -f -- "stderr_and_logfile.$$"
mkfifo -- "stderr_and_logfile.$$" || exit
tee stderr.txt <"stderr_and_logfile.$$" >&2 &
/bin/snowsql -f /home/scripts/dev/dev.sql 2>"stderr_and_logfile.$$"; retval=$?
rm -f -- "stderr_and_logfile.$$"
exit "$retval" # so "rm" doesn't cause the exit status of snowsql to be discarded
This question already has an answer here:
Send stderr/stdout messages to function and trap exit signal
(1 answer)
Closed 7 years ago.
RESOLUTION:
See Send stderr/stdout messages to function and trap exit signal
Edit
I see that I have not been precise enough in my original post, sorry about that! So now I will include a code example of my bash script with refined questions:
My four questions:
How can I send stdout to a file and stderr to the log() function in the mysqldump command in STEP 2?
How can I send stdout and stderr to the log() function in the gzip command in STEP 2 and the rsync command in STEP 3?
How do I read stdout and stderr in the log() function in questions 1 and 2 above?
How can I still trap all errors in the onexit() function?
In general: I want stdout and stderr to go to the log() function so the messages can be written to a specific log file in a specific format, but in some cases, like for the mysqldump command, stdout needs to go to a file. Also, when an error occurs and is sent to stderr, I want to end up in the onexit() function after the log statement has finished.
#!/bin/bash -Eu
# -E: ERR trap is inherited by shell functions.
# -u: Treat unset variables as an error when substituting.
# Example script for handling bash errors. Exit on error. Trap exit.
# This script is supposed to run in a subshell.
# See also: http://fvue.nl/wiki/Bash:_Error_handling
# Trap non-normal exit signals: 2/INT, 3/QUIT, 15/TERM,
trap onexit 1 2 3 15 ERR
# ****** VARIABLES STARTS HERE ***********
BACKUP_ROOT_DIR=/var/app/backup
BACKUP_DIR_DATE=${BACKUP_ROOT_DIR}/`date "+%Y-%m-%d"`
EMAIL_FROM="foo@bar.com"
EMAIL_RECIPIENTS="bar@foo.com"
...
# ****** VARIABLES STARTS HERE ***********
# ****** FUNCTIONS STARTS HERE ***********
# Function that checks if all folders exists and create new ones as required
function checkFolders()
{
if [ ! -d "${BACKUP_ROOT_DIR}" ] ; then
log "ERROR" "Backup directory doesn't exist"
exit
else
log "INFO" "All folders exists"
fi
if [ ! -d "${BACKUP_DIR_DATE}" ] ; then
mkdir ${BACKUP_DIR_DATE} -v
log "INFO" "Created new backup directory"
else
log "WARN" "Backup directory already exists"
fi
}
# Function executed when exiting the script, either because of an error or successfully run
function onexit() {
local exit_status=${1:-$?}
# Send email notification with the status
echo "Backup finished at `date` with status ${exit_status_text} | mail -s "${exit_status_text} - backup" -S from="${EMAIL_FROM}" ${EMAIL_RECIPIENTS}"
log "INFO" "Email notification sent with execution status ${exit_status_text}"
# Print script duration to the console
ELAPSED_TIME=$((${SECONDS} - ${START_TIME}))
log "INFO" "Backup finished" "startDate=\"${START_DATE}\", endDate=\"`date`\", duration=\"$((${ELAPSED_TIME}/60)) min $((${ELAPSED_TIME}%60)) sec\""
exit ${exit_status}
}
# Logs to custom log file according to preferred log format for Splunk
# Input:
# 1. severity (INFO,WARN,DEBUG,ERROR)
# 2. the message
# 3. additional fields
#
function log() {
local print_msg="`date +"%FT%T.%N%Z"` severity=\"${1}\",message=\"${2}\",transactionID=\"${TRANS_ID}\",source=\"${SCRIPT_NAME}\",environment=\"${ENV}\",application=\"${APP}\""
# check if additional fields set in the 3. parameter
if [ $# -eq 3 ] ; then
print_msg="${print_msg}, ${3}"
fi
echo ${print_msg} >> ${LOG_FILE}
}
# ****** FUNCTIONS ENDS HERE ***********
# ****** SCRIPT STARTS HERE ***********
log "INFO" "Backup of ${APP} in ${ENV} starting"
# STEP 1 - validate
log "INFO" "1/3 Checking folder paths"
checkFolders
# STEP 2 - mysql dump
log "INFO" "2/3 Dumping ${APP} database"
mysqldump --single-transaction ${DB_NAME} > ${BACKUP_DIR_DATE}/${SQL_BACKUP_FILE}
gzip -f ${BACKUP_DIR_DATE}/${SQL_BACKUP_FILE}
log "INFO" "Mysql dump finished."
# STEP 3 - transfer
# Files are only transferred if all commands has been running successfully. Transfer is done with use of rsync
log "INFO" "3/3 Transferring backup file"
rsync -r -av ${BACKUP_ROOT_DIR}/ ${BACKUP_TRANSFER_USER}@${BACKUP_TRANSFER_DEST}
# ****** SCRIPT ENDS HERE ***********
onexit
Thanks!
Try this:
mylogger() { printf "Log: %s\n" "$(</dev/stdin)"; }
mysqldump ... 2>&1 >dumpfilename.sql | mylogger
Both Cyrus's answer and Oleg Vaskevich's answer offer viable solutions for redirecting stderr to a shell function.
What they both imply is that it makes sense for your function to accept stdin input rather than expecting input as an argument.
To explain the idiom they use:
mysqldump ... 2>&1 > stdout-file | log-func-that-receives-stderr-via-stdin
2>&1 ... redirects stderr to the original stdout
from that point on, any stderr output is sent to stdout
> stdout-file then redirects the original stdout to the file stdout-file
from that point on, any stdout output is redirected to the file.
Since > stdout-file comes after 2>&1, the net result is:
stdout output is redirected to file stdout-file
stderr output is sent to [the original] stdout
Thus, log-func-that-receives-stderr-via-stdin only receives the previous command's stderr output through the pipe, via its stdin.
Similarly, your original approach - command 2> >(logFunction) - works in principle, but requires that your log() function read from stdin rather than expect arguments:
The following illustrates the principle:
ls / nosuchfile 2> >(sed s'/^/Log: /') > stdout-file
ls / nosuchfile produces both stdout and stderr output.
stdout output is redirected to file stdout-file.
2> >(...) uses an [output] process substitution to redirect stderr output to the command enclosed in >(...) - that command receives input via its stdin.
sed s'/^/Log: /' reads from its stdin and prepends string Log: to each input line.
Thus, your log() function should be rewritten to process stdin:
either: by implicitly passing the input to another stdin-processing utility such as sed or awk (as above).
or: by using a while read ... loop to process each input line in a shell loop:
log() {
# `read` reads from stdin by default
while IFS= read -r line; do
printf 'STDERR line: %s\n' "$line"
done
}
mysqldump ... 2> >(log) > stdout-file
Let's suppose that your log function looks like this (it just echos the first argument):
log() { echo "$1"; }
To save the stdout of mysqldump to some file and call your log() function for every line in stderr, do this:
mysqldump 2>&1 >/your/sql_dump_file.dat | while IFS= read -r line; do log "$line"; done
If you wanted to use xargs, you could do it this way. However, you'd be starting a new shell every time.
export -f log
mysqldump 2>&1 >/your/sql_dump_file.dat | xargs -L1 bash -i -c 'log $#' _
I have a script which will be run interactively by non-technical users. The script writes status updates to STDOUT so that the user can be sure that the script is running OK.
I want both STDOUT and STDERR redirected to the terminal (so that the user can see that the script is working as well as see if there was a problem). I also want both streams redirected to a log file.
I've seen a bunch of solutions on the net. Some don't work and others are horribly complicated. I've developed a workable solution (which I'll enter as an answer), but it's kludgy.
The perfect solution would be a single line of code that could be incorporated into the beginning of any script that sends both streams to both the terminal and a log file.
EDIT: Redirecting STDERR to STDOUT and piping the result to tee works, but it depends on the users remembering to redirect and pipe the output. I want the logging to be fool-proof and automatic (which is why I'd like to be able to embed the solution into the script itself.)
Use "tee" to redirect to a file and the screen. Depending on the shell you use, you first have to redirect stderr to stdout using
./a.out 2>&1 | tee output
or
./a.out |& tee output
In csh, there is a command called "script" that will capture everything that goes to the screen to a file. You start it by typing "script", then doing whatever it is you want to capture, then hit control-D to close the script file. I don't know of an equivalent for sh/bash/ksh.
Also, since you have indicated that these are your own sh scripts that you can modify, you can do the redirection internally by surrounding the whole script with braces or brackets, like
#!/bin/sh
{
... whatever you had in your script before
} 2>&1 | tee output.file
Approaching half a decade later...
I believe this is the "perfect solution" sought by the OP.
Here's a one liner you can add to the top of your Bash script:
exec > >(tee -a $HOME/logfile) 2>&1
Here's a small script demonstrating its use:
#!/usr/bin/env bash
exec > >(tee -a $HOME/logfile) 2>&1
# Test redirection of STDOUT
echo test_stdout
# Test redirection of STDERR
ls test_stderr___this_file_does_not_exist
(Note: This only works with Bash. It will not work with /bin/sh.)
Adapted from here; the original did not, from what I can tell, catch STDERR in the logfile. Fixed with a note from here.
The Pattern
the_cmd 1> >(tee stdout.txt ) 2> >(tee stderr.txt >&2 )
This redirects both stdout and stderr separately, and it sends separate copies of stdout and stderr to the caller (which might be your terminal).
In zsh, it will not proceed to the next statement until the tees have finished.
In bash, you may find that the final few lines of output appear after whatever statement comes next.
In either case, the right bits go to the right places.
Explanation
Here's a script (stored in ./example):
#! /usr/bin/env bash
the_cmd()
{
echo out;
1>&2 echo err;
}
the_cmd 1> >(tee stdout.txt ) 2> >(tee stderr.txt >&2 )
Here's a session:
$ foo=$(./example)
err
$ echo $foo
out
$ cat stdout.txt
out
$ cat stderr.txt
err
Here's how it works:
Both tee processes are started, their stdins are assigned to file descriptors. Because they're enclosed in process substitutions, the paths to those file descriptors are substituted in the calling command, so now it looks something like this:
the_cmd 1> /proc/self/fd/13 2> /proc/self/fd/14
the_cmd runs, writing stdout to the first file descriptor, and stderr to the second one.
In the bash case, once the_cmd finishes, the following statement happens immediately (if your terminal is the caller, then you will see your prompt appear).
In the zsh case, once the_cmd finishes, the shell waits for both of the tee processes to finish before moving on. More on this here.
The first tee process, which is reading from the_cmd's stdout, writes a copy of that stdout back to the caller because that's what tee does. Its outputs are not redirected, so they make it back to the caller unchanged.
The second tee process has its stdout redirected to the caller's stderr (which is good, because its stdin is reading from the_cmd's stderr). So when it writes to its stdout, those bits go to the caller's stderr.
This keeps stderr separate from stdout both in the files and in the command's output.
If the first tee writes any errors, they'll show up in both the stderr file and in the command's stderr; if the second tee writes any errors, they'll only show up in the terminal's stderr.
To redirect stderr to stdout, append this to your command: 2>&1
For outputting to the terminal and logging into a file, you should use tee.
Both together would look like this:
mycommand 2>&1 | tee mylogfile.log
EDIT: For embedding into your script you would do the same. So your script
#!/bin/sh
whatever1
whatever2
...
whatever3
would end up as
#!/bin/sh
( whatever1
whatever2
...
whatever3 ) 2>&1 | tee mylogfile.log
EDIT:
I see I got derailed and ended up answering a different question from the one asked. The answer to the real question is at the bottom of Paul Tomblin's answer. (If you want to enhance that solution to redirect stdout and stderr separately for some reason, you could use the technique I describe here.)
I've been wanting an answer that preserves the distinction between stdout and stderr.
Unfortunately all of the answers given so far that preserve that distinction
are race-prone: they risk programs seeing incomplete input, as I pointed out in comments.
I think I finally found an answer that preserves the distinction,
is not race prone, and isn't terribly fiddly either.
First building block: to swap stdout and stderr:
my_command 3>&1 1>&2 2>&3-
Second building block: if we wanted to filter (e.g. tee) only stderr,
we could accomplish that by swapping stdout&stderr, filtering, and then swapping back:
{ my_command 3>&1 1>&2 2>&3- | stderr_filter;} 3>&1 1>&2 2>&3-
Now the rest is easy: we can add a stdout filter, either at the beginning:
{ { my_command | stdout_filter;} 3>&1 1>&2 2>&3- | stderr_filter;} 3>&1 1>&2 2>&3-
or at the end:
{ my_command 3>&1 1>&2 2>&3- | stderr_filter;} 3>&1 1>&2 2>&3- | stdout_filter
To convince myself that both of the above commands work, I used the following:
alias my_command='{ echo "to stdout"; echo "to stderr" >&2;}'
alias stdout_filter='{ sleep 1; sed -u "s/^/teed stdout: /" | tee stdout.txt;}'
alias stderr_filter='{ sleep 2; sed -u "s/^/teed stderr: /" | tee stderr.txt;}'
Output is:
...(1 second pause)...
teed stdout: to stdout
...(another 1 second pause)...
teed stderr: to stderr
and my prompt comes back immediately after the "teed stderr: to stderr", as expected.
Footnote about zsh:
The above solution works in bash (and maybe some other shells, I'm not sure), but it doesn't work in zsh. There are two reasons it fails in zsh:
the syntax 2>&3- isn't understood by zsh; that has to be rewritten
as 2>&3 3>&-
in zsh (unlike other shells), if you redirect a file descriptor
that's already open, in some cases (I don't completely understand how it decides) it does a built-in tee-like behavior instead. To avoid this, you have to close each fd prior to
redirecting it.
So, for example, my second solution has to be rewritten for zsh as {my_command 3>&1 1>&- 1>&2 2>&- 2>&3 3>&- | stderr_filter;} 3>&1 1>&- 1>&2 2>&- 2>&3 3>&- | stdout_filter (which works in bash too, but is awfully verbose).
On the other hand, you can take advantage of zsh's mysterious built-in implicit teeing to get a much shorter solution for zsh, which doesn't run tee at all:
my_command >&1 >stdout.txt 2>&2 2>stderr.txt
(I wouldn't have guessed from the docs I found that the >&1 and 2>&2 are the thing that trigger zsh's implicit teeing; I found that out by trial-and-error.)
Use the tee program and dup stderr to stdout.
program 2>&1 | tee logfile
Use the script command in your script (man 1 script)
Create a wrapper shellscript (2 lines) that sets up script() and then calls exit.
Part 1: wrap.sh
#!/bin/sh
script -c './realscript.sh'
exit
Part 2: realscript.sh
#!/bin/sh
echo 'Output'
Result:
~: sh wrap.sh
Script started, file is typescript
Output
Script done, file is typescript
~: cat typescript
Script started on fr. 12. des. 2008 kl. 18.07 +0100
Output
Script done on fr. 12. des. 2008 kl. 18.07 +0100
~:
I created a script called "RunScript.sh". The contents of this script is:
${APP_HOME}/${1}.sh ${2} ${3} ${4} ${5} ${6} 2>&1 | tee -a ${APP_HOME}/${1}.log
I call it like this:
./RunScript.sh ScriptToRun Param1 Param2 Param3 ...
This works, but it requires the application's scripts to be run via an external script. It's a bit kludgy.
A year later, here's an old bash script for logging anything. For example,
teelog make ... logs to a generated log name (and see the trick for logging nested makes too.)
#!/bin/bash
me=teelog
Version="2008-10-9 oct denis-bz"
Help() {
cat <<!
$me anycommand args ...
logs the output of "anycommand ..." as well as displaying it on the screen,
by running
anycommand args ... 2>&1 | tee `day`-command-args.log
That is, stdout and stderr go to both the screen, and to a log file.
(The Unix "tee" command is named after "T" pipe fittings, 1 in -> 2 out;
see http://en.wikipedia.org/wiki/Tee_(command) ).
The default log file name is made up from "command" and all the "args":
$me cmd -opt dir/file logs to `day`-cmd--opt-file.log .
To log to xx.log instead, either export log=xx.log or
$me log=xx.log cmd ...
If "logdir" is set, logs are put in that directory, which must exist.
An old xx.log is moved to /tmp/\$USER-xx.log .
The log file has a header like
# from: command args ...
# run: date pwd etc.
to show what was run; see "From" in this file.
Called as "Log" (ln -s $me Log), Log anycommand ... logs to a file:
command args ... > `day`-command-args.log
and tees stderr to both the log file and the terminal -- bash only.
Some commands that prompt for input from the console, such as a password,
don't prompt if they "| tee"; you can only type ahead, carefully.
To log all "make" s, including nested ones like
cd dir1; \$(MAKE)
cd dir2; \$(MAKE)
...
export MAKE="$me make"
!
# See also: output logging in screen(1).
exit 1
}
#-------------------------------------------------------------------------------
# bzutil.sh denisbz may2008 --
day() { # 30mar, 3mar
/bin/date +%e%h | tr '[A-Z]' '[a-z]' | tr -d ' '
}
edate() { # 19 May 2008 15:56
echo `/bin/date "+%e %h %Y %H:%M"`
}
From() { # header # from: $* # run: date pwd ...
case `uname` in Darwin )
mac=" mac `sw_vers -productVersion`"
esac
cut -c -200 <<!
${comment-#} from: $@
${comment-#} run: `edate` in $PWD `uname -n` $mac `arch`
!
# mac $PWD is pwd -L not -P real
}
# log name: day-args*.log, change this if you like --
logfilename() {
log=`day`
[[ $1 == "sudo" ]] && shift
for arg
do
log="$log-${arg##*/}" # basename
(( ${#log} >= 100 )) && break # max len 100
done
# no blanks etc in logfilename please, tr them to "-"
echo $logdir/` echo "$log".log | tr -C '.:+=[:alnum:]_\n' - `
}
#-------------------------------------------------------------------------------
case "$1" in
-v* | --v* )
echo "$0 version: $Version"
exit 1 ;;
"" | -* )
Help
esac
# scan log= etc --
while [[ $1 == [a-zA-Z_]*=* ]]; do
export "$1"
shift
done
: ${logdir=.}
[[ -w $logdir ]] || {
echo >&2 "error: $me: can't write in logdir $logdir"
exit 1
}
: ${log=` logfilename "$@" `}
[[ -f $log ]] &&
/bin/mv "$log" "/tmp/$USER-${log##*/}"
case ${0##*/} in # basename
log | Log ) # both to log, stderr to caller's stderr too --
{
From "$#"
"$#"
} > $log 2> >(tee /dev/stderr) # bash only
# see http://wooledge.org:8000/BashFAQ 47, stderr to a pipe
;;
* )
#-------------------------------------------------------------------------------
{
From "$#" # header: from ... date pwd etc.
"$#" 2>&1 # run the cmd with stderr and stdout both to the log
} | tee $log
# mac tee buffers stdout ?
esac
This question does not seem to have been gracefully solved yet.
Every time I search "how to output to stdout and stderr at the same time", Google directs me to this post.
Today, I finally found a simple and effective way to solve almost all of these kinds of needs.
The essential idea is the tee command, which can print to multiple outputs at the same time, combined with the Linux-specific /proc/self/fd/{1,2,...} paths that represent stdout, stderr, and so on.
print stdin to both stdout and stderr
tee /proc/self/fd/2
print stdin to both stdout and stderr, and file
tee /proc/self/fd/2 file
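For example, combining it with an ordinary command (Linux-specific, since it relies on /proc; out.txt is just an example name):
echo "hello" | tee /proc/self/fd/2 out.txt
# "hello" is printed on stdout, copied to stderr, and also saved in out.txt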
Hope this is helpful.
Here is a solution that works in Bash using redirection, combining the solutions of kvantour, MatrixManAtYrService, and Jason Sydes:
#!/bin/bash
exec 1> >(tee x.log) 2> >(tee x.err >&2)
echo "test for log"
echo "test for err" 1>&2
Save the script above as x.sh. After running ./x.sh, x.log includes only the stdout, while x.err includes only the stderr.
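A quick way to check the result (assuming the file names above; terminal ordering may vary because the tees run asynchronously):
./x.sh          # both lines appear on the terminal
cat x.log       # contains only: test for log
cat x.err       # contains only: test for err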