Redirect stdout and stderr after redirection command - bash

In a bash script I am enabling IP forwarding using the following command:
echo 1 > /proc/sys/net/ipv4/ip_forward
However I also want to include my own error messages so I have to redirect the stdout and stderr to /dev/null, which looks like the following:
echo 1 > /proc/sys/net/ipv4/ip_forward >/dev/null 2>&1
This works for commands that do not have the redirection symbol in it, for example:
route add default gw 10.8.0.1 > /dev/null 2>&1
Is there any way I can make this work for commands that already contain a redirection? Is there a workaround, or a better way to do this altogether?

Your redirections are nonsensical (no offense).
This:
echo 1 > /proc/sys/net/ipv4/ip_forward >/dev/null 2>&1
will:
redirect echo's standard output to /proc/sys/net/ipv4/ip_forward;
redirect echo's standard output to /dev/null;
redirect echo's standard error to where file descriptor 1 points, i.e., to /dev/null.
Hence, globally, redirection 2 cancels redirection 1: nothing is going to go to /proc/sys/net/ipv4/ip_forward!
I guess you want to redirect echo's standard output to /proc/sys/net/ipv4/ip_forward and echo's standard error to /dev/null. This is achieved by:
echo 1 > /proc/sys/net/ipv4/ip_forward 2>/dev/null
But read on, I believe that's not the answer you're looking for!
Why do you want to redirect echo's standard error to /dev/null?
echo very rarely writes to standard error. In fact, the only time echo will write to standard error is when there's a write error. There are a couple of ways this could happen:
if the disk is full (that can be simulated with /dev/full):
$ echo hello >/dev/full
bash: echo: write error: No space left on device
$ echo hello >/dev/full 2>/dev/null
$
(no error messages shown with the redirection 2>/dev/null).
if echo's standard out is closed:
$ ( exec >&-; echo hello )
bash: echo: write error: Bad file descriptor
$ ( exec >&-; echo hello 2> /dev/null )
$
(no error messages shown with the redirection 2>/dev/null).
There might be other cases where echo outputs to standard error. But the following are certainly not among them:
Redirecting echo's standard output to a non-existent file descriptor:
$ echo hello >&42
bash: 42: Bad file descriptor
$ echo hello >&42 2>/dev/null
bash: 42: Bad file descriptor
$
The redirection 2>/dev/null doesn't fix anything; you can actually see that the error comes from bash and not from echo (the latter would have bash: echo: as a prefix).
Redirecting echo's standard output to a file without write permission:
$ touch testfile
$ chmod -w testfile
$ echo hello > testfile
bash: testfile: Permission denied
$ echo hello > testfile 2>/dev/null
bash: testfile: Permission denied
$
Same as above, the redirection 2>/dev/null doesn't fix anything.
The previous cases are not fixed by 2>/dev/null, since the error occurs at Bash's level, before the command is even executed: Bash performs redirections from left to right, and when it hits one it can't open for writing, it aborts, never runs the command, and prints the error message to its own standard error.†
Now I guess you're trying to fix the following scenario: when the user doesn't have enough rights to write to /proc/sys/net/ipv4/ip_forward:
$ echo 1 >/proc/sys/net/ipv4/ip_forward
bash: /proc/sys/net/ipv4/ip_forward: Permission denied
$
The error message on standard error can't be redirected with a simple redirection‡:
$ echo 1 >/proc/sys/net/ipv4/ip_forward 2>/dev/null
bash: /proc/sys/net/ipv4/ip_forward: Permission denied
$
A standard way to redirect the error that occurs at the redirection level (i.e., before the command is even executed) is to use groupings:
$ { echo 1 >/proc/sys/net/ipv4/ip_forward; } 2>/dev/null
Now, that explains why the solution you posted as an answer works. Let's go through it; there's something in it that shows you haven't completely understood redirections (and hopefully this post will help you understand a few things). Your code is:
function ip_forward
{
echo 1 > /proc/sys/net/ipv4/ip_forward
}
ip_forward >/dev/null 2>&1
This will run the function ip_forward, and redirect:
its standard output to /dev/null;
and then its standard error to where its standard output points (i.e., to /dev/null).
But the function ip_forward doesn't output anything to standard output! So the redirection >/dev/null is only useful as the target for the 2>&1 part of the redirection. In fact, your code is completely equivalent to:
function ip_forward
{
echo 1 > /proc/sys/net/ipv4/ip_forward
}
ip_forward 2>/dev/null
But then (since you only use a function construct as a way to achieve what you wanted—not because you want a function), it's much better to write your code as either:
echo 1 2>/dev/null >/proc/sys/net/ipv4/ip_forward
or
{ echo 1 > /proc/sys/net/ipv4/ip_forward; } 2>/dev/null
(the latter being preferred).
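And since your stated goal is to print your own error messages, you can combine that grouping with an exit-status check; here's a minimal sketch (the message texts are just examples):
if { echo 1 > /proc/sys/net/ipv4/ip_forward; } 2>/dev/null; then
    echo "IP forwarding enabled"
else
    echo "error: could not enable IP forwarding (are you root?)" >&2
fi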
Sorry for this long post!
†
There's something we should be aware of: the order of redirection. They are performed from left to right, as Bash reads them. How about we first redirect standard error, and then standard output to a non-existent/non-writable stream?
$ echo hello 2>/dev/null >&42
$
That's right, it works.
‡
Well, can, if you understood the previous footnote:
$ echo 1 2>/dev/null >/proc/sys/net/ipv4/ip_forward
$ echo $?
1
$
No error on standard error! That's because of the order of the redirections.

Solved
function ip_forward
{
echo 1 > /proc/sys/net/ipv4/ip_forward
}
ip_forward 2>/dev/null
The echo 1 part is the stdout part, so it does not have to be redirected anymore. I only had to redirect stderr by adding 2>/dev/null.

Related

How to redirect the command ssh -V to a file? [duplicate]

I want to redirect both standard output and standard error of a process to a single file. How do I do that in Bash?
Take a look here. It should be:
yourcommand &> filename
It redirects both standard output and standard error to file filename.
do_something 2>&1 | tee -a some_file
This redirects standard error to standard output and pipes the combined stream to tee, which appends it to some_file and also prints it to standard output.
You can redirect stderr to stdout and the stdout into a file:
some_command >file.log 2>&1
See Chapter 20. I/O Redirection
This format is preferred over the more popular &> format, which only works in Bash. In the Bourne shell, &> could be interpreted as running the command in the background. The format is also more readable: 2 (standard error) is redirected to 1 (standard output).
# Close standard output file descriptor
exec 1<&-
# Close standard error file descriptor
exec 2<&-
# Open standard output as $LOG_FILE file for read and write.
exec 1<>$LOG_FILE
# Redirect standard error to standard output
exec 2>&1
echo "This line will appear in $LOG_FILE, not 'on screen'"
Now, a simple echo will write to $LOG_FILE, and it is useful for daemonizing.
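If you need to undo this redirection later in the same script, save the original descriptors before redirecting; here's a minimal sketch (the descriptor numbers 3 and 4 are arbitrary choices):
exec 3>&1 4>&2             # save original stdout and stderr
exec 1<>"$LOG_FILE" 2>&1   # redirect as above
echo "this goes to $LOG_FILE"
exec 1>&3 2>&4 3>&- 4>&-   # restore, then close the saved copies
echo "this goes to the screen again"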
To the author of the original post,
It depends on what you need to achieve. If you just need to redirect in/out of a command you call from your script, the answers are already given. Mine is about redirecting within the current script, which affects all commands/built-ins (including forks) after the mentioned code snippet.
Another cool solution is to redirect to both standard error and standard output and log to a log file at once, which involves splitting "a stream" into two. This functionality is provided by the 'tee' command, which can write/append to several file descriptors (files, sockets, pipes, etc.) at once: tee FILE1 FILE2 ... >(cmd1) >(cmd2) ...
exec 3>&1 4>&2 1> >(tee >(logger -i -t 'my_script_tag') >&3) 2> >(tee >(logger -i -t 'my_script_tag') >&4)
trap 'cleanup' INT QUIT TERM EXIT
get_pids_of_ppid() {
    local ppid="$1"
    RETVAL=''
    local pids=`ps x -o pid,ppid | awk "\\$2 == \\"$ppid\\" { print \\$1 }"`
    RETVAL="$pids"
}
# Needed to kill processes running in background
cleanup() {
    local current_pid element
    local pids=( "$$" )
    running_pids=("${pids[@]}")
    while :; do
        current_pid="${running_pids[0]}"
        [ -z "$current_pid" ] && break
        running_pids=("${running_pids[@]:1}")
        get_pids_of_ppid $current_pid
        local new_pids="$RETVAL"
        [ -z "$new_pids" ] && continue
        for element in $new_pids; do
            running_pids+=("$element")
            pids=("$element" "${pids[@]}")
        done
    done
    kill ${pids[@]} 2>/dev/null
}
So, from the beginning. Let's assume we have a terminal connected to /dev/stdout (file descriptor #1) and /dev/stderr (file descriptor #2). In practice, it could be a pipe, socket or whatever.
Create file descriptors (FDs) #3 and #4 and point them to the same "location" as #1 and #2 respectively. Changing file descriptor #1 doesn't affect file descriptor #3 from now on. Now file descriptors #3 and #4 point to standard output and standard error respectively. These will be used as the real terminal standard output and standard error.
1> >(...) redirects standard output to command in parentheses
The parentheses (sub-shell) execute 'tee', which reads exec's standard output from the pipe and redirects it to the 'logger' command via another pipe to the sub-shell in parentheses. At the same time, it copies the same input to file descriptor #3 (the terminal).
the second part, very similar, is about doing the same trick for standard error and file descriptors #2 and #4.
The result of running a script having the above line and additionally this one:
echo "Will end up in standard output (terminal) and /var/log/messages"
...is as follows:
$ ./my_script
Will end up in standard output (terminal) and /var/log/messages
$ tail -n1 /var/log/messages
Sep 23 15:54:03 wks056 my_script_tag[11644]: Will end up in standard output (terminal) and /var/log/messages
If you want to see a clearer picture, add these two lines to the script:
ls -l /proc/self/fd/
ps xf
bash your_script.sh 1>file.log 2>&1
1>file.log instructs the shell to send standard output to the file file.log, and 2>&1 tells it to redirect standard error (file descriptor 2) to standard output (file descriptor 1).
Note: The order matters as liw.fi pointed out, 2>&1 1>file.log doesn't work.
Curiously, this works:
yourcommand &> filename
But this gives a syntax error in Bash versions before 4.0 (&>> was added in Bash 4):
yourcommand &>> filename
syntax error near unexpected token `>'
You have to use:
yourcommand 1>> filename 2>&1
Short answer: Command >filename 2>&1 or Command &>filename
Explanation:
Consider the following code, which prints the word "stdout" to stdout and the word "stderror" to stderr.
$ (echo "stdout"; echo "stderror" >&2)
stdout
stderror
Note that the '&' operator tells bash that 2 is a file descriptor (which points to stderr) and not a file name. If we left out the '&', this command would print "stdout" to stdout, create a file named "2", and write "stderror" there.
By experimenting with the code above, you can see for yourself exactly how redirection operators work. For instance, depending on which of the two descriptors, 1 or 2, is redirected to /dev/null, the following two lines of code discard everything from stdout or everything from stderr respectively (printing what remains).
$ (echo "stdout"; echo "stderror" >&2) 1>/dev/null
stderror
$ (echo "stdout"; echo "stderror" >&2) 2>/dev/null
stdout
Now we can explain why the following code produces no output:
(echo "stdout"; echo "stderror" >&2) >/dev/null 2>&1
To truly understand this, I highly recommend you read this webpage on file descriptor tables. Assuming you have done that reading, we can proceed. Note that Bash processes redirections left to right; thus Bash sees >/dev/null first (which is the same as 1>/dev/null), and sets file descriptor 1 to point to /dev/null instead of stdout. Having done this, Bash then moves rightwards and sees 2>&1. This sets file descriptor 2 to point to the same file as file descriptor 1 (and not to file descriptor 1 itself! See this resource on pointers for more info). Since file descriptor 1 points to /dev/null, and file descriptor 2 points to the same file as file descriptor 1, file descriptor 2 now also points to /dev/null. Thus both file descriptors point to /dev/null, and this is why no output is rendered.
To test if you really understand the concept, try to guess the output when we switch the redirection order:
(echo "stdout"; echo "stderror" >&2) 2>&1 >/dev/null
stderror
The reasoning here is that, evaluating from left to right, Bash sees 2>&1 and sets file descriptor 2 to point to the same place as file descriptor 1, i.e. stdout. It then sets file descriptor 1 (remember that >/dev/null = 1>/dev/null) to point to /dev/null, thus discarding everything that would usually be sent to standard out. All we are left with is what was not sent to stdout in the subshell (the code in the parentheses), i.e. "stderror".
The interesting thing to note here is that even though 1 is just a pointer to stdout, redirecting pointer 2 to 1 via 2>&1 does NOT form a chain of pointers 2 -> 1 -> stdout. If it did, as a result of redirecting 1 to /dev/null, the code 2>&1 >/dev/null would give the pointer chain 2 -> 1 -> /dev/null, and the code would generate nothing, in contrast to what we saw above.
Finally, I'd note that there is a simpler way to do this:
From section 3.6.4 here, we see that we can use the operator &> to redirect both stdout and stderr. Thus, to redirect both the stderr and stdout output of any command to /dev/null (which deletes the output), we simply type
$ command &> /dev/null
or in case of my example:
$ (echo "stdout"; echo "stderror" >&2) &>/dev/null
Key takeaways:
File descriptors behave like pointers (although file descriptors are not the same as file pointers)
Redirecting a file descriptor "a" to a file descriptor "b" which points to file "f", causes file descriptor "a" to point to the same place as file descriptor b - file "f". It DOES NOT form a chain of pointers a -> b -> f
Because of the above, order matters: 2>&1 >/dev/null is not the same as >/dev/null 2>&1. One generates output and the other does not!
Finally have a look at these great resources:
Bash Documentation on Redirection, An Explanation of File Descriptor Tables, Introduction to Pointers
LOG_FACILITY="local7.notice"
LOG_TOPIC="my-prog-name"
LOG_TOPIC_OUT="$LOG_TOPIC-out[$$]"
LOG_TOPIC_ERR="$LOG_TOPIC-err[$$]"
exec 3>&1 > >(tee -a /dev/fd/3 | logger -p "$LOG_FACILITY" -t "$LOG_TOPIC_OUT" )
exec 2> >(logger -p "$LOG_FACILITY" -t "$LOG_TOPIC_ERR" )
It is related: Writing standard output and standard error to syslog.
It almost works, but not from xinetd ;(
For the situation when "piping" is necessary, you can use |&.
For example:
echo -ne "15\n100\n" | sort -c |& tee sort_result.txt
or
TIMEFORMAT=%R;for i in `seq 1 20` ; do time kubectl get pods | grep node >>js.log ; done |& sort -h
These Bash-based solutions pipe standard error together with standard output (the standard error of "sort -c" into tee in the first case, and the standard error of the whole loop into "sort -h" in the second).
I wanted a solution to have the output from stdout plus stderr written into a log file and stderr still on console. So I needed to duplicate the stderr output via tee.
This is the solution I found:
command 3>&1 1>&2 2>&3 1>>logfile | tee -a logfile
First swap stderr and stdout,
then append the stdout to the log file,
then pipe stderr to tee and append it to the log file as well (see the sketch below).
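Here's a quick way to see the effect, a sketch substituting a trivial compound command for command:
{ echo out; echo err >&2; } 3>&1 1>&2 2>&3 1>>logfile | tee -a logfile
# "out" lands only in logfile; "err" lands in logfile and on the console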
Adding to what Fernando Fabreti did, I changed the functions slightly and removed the &- closing and it worked for me.
function saveStandardOutputs {
    if [ "$OUTPUTS_REDIRECTED" == "false" ]; then
        exec 3>&1
        exec 4>&2
        trap restoreStandardOutputs EXIT
    else
        echo "[ERROR]: ${FUNCNAME[0]}: Cannot save standard outputs because they have been redirected before"
        exit 1
    fi
}
# Parameters: $1 => logfile to write to
function redirectOutputsToLogfile {
    if [ "$OUTPUTS_REDIRECTED" == "false" ]; then
        LOGFILE=$1
        if [ -z "$LOGFILE" ]; then
            echo "[ERROR]: ${FUNCNAME[0]}: logfile empty [$LOGFILE]"
        fi
        if [ ! -f "$LOGFILE" ]; then
            touch "$LOGFILE"
        fi
        if [ ! -f "$LOGFILE" ]; then
            echo "[ERROR]: ${FUNCNAME[0]}: creating logfile [$LOGFILE]"
            exit 1
        fi
        saveStandardOutputs
        exec 1>>"${LOGFILE}"
        exec 2>&1
        OUTPUTS_REDIRECTED="true"
    else
        echo "[ERROR]: ${FUNCNAME[0]}: Cannot redirect standard outputs because they have been redirected before"
        exit 1
    fi
}
function restoreStandardOutputs {
    if [ "$OUTPUTS_REDIRECTED" == "true" ]; then
        exec 1>&3  # restore stdout
        exec 2>&4  # restore stderr
        OUTPUTS_REDIRECTED="false"
    fi
}
LOGFILE_NAME="/tmp/one.log"
OUTPUTS_REDIRECTED="false"
echo "this goes to standard output"
redirectOutputsToLogfile "$LOGFILE_NAME"
echo "this goes to logfile"
echo "${LOGFILE_NAME}"
restoreStandardOutputs
echo "After restore this goes to standard output"
The "easiest" way (Bash 4 only):
ls * 2>&- 1>&-
In situations where you consider using things like exec 2>&1, I find it easier to read, if possible, to rewrite the code using Bash functions, like this:
function myfunc(){
[...]
}
myfunc &>mylog.log
The following functions can be used to automate the process of toggling outputs between stdout/stderr and a logfile.
#!/bin/bash
#set -x
# global vars
OUTPUTS_REDIRECTED="false"
LOGFILE=/dev/stdout
# "private" function used by redirect_outputs_to_logfile()
function save_standard_outputs {
    if [ "$OUTPUTS_REDIRECTED" == "true" ]; then
        echo "[ERROR]: ${FUNCNAME[0]}: Cannot save standard outputs because they have been redirected before"
        exit 1
    fi
    exec 3>&1
    exec 4>&2
    trap restore_standard_outputs EXIT
}
# Params: $1 => logfile to write to
function redirect_outputs_to_logfile {
    if [ "$OUTPUTS_REDIRECTED" == "true" ]; then
        echo "[ERROR]: ${FUNCNAME[0]}: Cannot redirect standard outputs because they have been redirected before"
        exit 1
    fi
    LOGFILE=$1
    if [ -z "$LOGFILE" ]; then
        echo "[ERROR]: ${FUNCNAME[0]}: logfile empty [$LOGFILE]"
    fi
    if [ ! -f "$LOGFILE" ]; then
        touch "$LOGFILE"
    fi
    if [ ! -f "$LOGFILE" ]; then
        echo "[ERROR]: ${FUNCNAME[0]}: creating logfile [$LOGFILE]"
        exit 1
    fi
    save_standard_outputs
    exec 1>>"${LOGFILE%.log}.log"
    exec 2>&1
    OUTPUTS_REDIRECTED="true"
}
# "private" function used by save_standard_outputs()
function restore_standard_outputs {
    if [ "$OUTPUTS_REDIRECTED" == "false" ]; then
        echo "[ERROR]: ${FUNCNAME[0]}: Cannot restore standard outputs because they have NOT been redirected"
        exit 1
    fi
    exec 1>&-  # closes FD 1 (logfile)
    exec 2>&-  # closes FD 2 (logfile)
    exec 2>&4  # restore stderr
    exec 1>&3  # restore stdout
    OUTPUTS_REDIRECTED="false"
}
Example of usage inside script:
echo "this goes to stdout"
redirect_outputs_to_logfile /tmp/one.log
echo "this goes to logfile"
restore_standard_outputs
echo "this goes to stdout"
For tcsh, I have to use the following command:
command >& file
If using command &> file, it will give an "Invalid null command" error.

How to set up two levels of redirection in bash?

In a bash script, I'd like to:
write every log to a file (loglowlevel.txt),
but make only a few of them (the high-level ones) visible in the terminal.
I have, say, 20 high-level logs and 60 low-level logs. Thus I'd like to keep the low-level logs "redirection-command" free, and attach the redirection stuff to the high-level logs only.
Take 1
I wrote a basic script that redirects stdout and stderr to FD 3. loglowlevel.txt is correctly populated, but I am stuck on specifying the redirection for the high-level logs.
#!/bin/bash -
# create fd 3
exec 3<> loglowlevel.txt
# redirect stdout and stderr to fd 3
exec 1>&3
exec 2>&3
# high-level logs' redirection below is wrong
echo "high-level comment" 3>&1
# low-level logs should remain redirection-free, as below
echo "low-level comment"
ls notafile
# close fd 3
exec 3>&-
Here is what it does:
$ redirect.sh
$ cat loglowlevel.txt
low-level comment
ls: cannot access notafile: No such file or directory
I expected high-level comment to be printed on terminal as well.
Take 2
Second script, different strategy:
#!/bin/bash -
function echolowlevel() {
echo $1 &>loglowlevel.txt
}
function echohighlevel() {
echo $1 |& tee loglowlevel.txt
}
echohighlevel "high-level comment 1"
echolowlevel "low-level comment 1"
echohighlevel "high-level comment 2"
ls notafile
Here is what it does:
$ redirect.sh
high-level comment 1
high-level comment 2
ls: cannot access notafile: No such file or directory
$ cat loglowlevel.txt
high-level comment 2
Two problems here:
error message from ls is printed in terminal, whereas I need it only in loglowlevel.txt.
high-level comment 1 has been eaten in loglowlevel.txt.
Question
I prefer the idea behind Take 1. But how can I make high-level comment be output to stdout while keeping the two exec commands?
#!/bin/sh
FIFO=/tmp/fifo.$$ # or use tmpfile, or some other mechanism to get unique name
trap 'rm -f $FIFO' 0
mkfifo $FIFO
tee -a loglowlevel.txt < $FIFO &
exec >> loglowlevel.txt
exec 3> $FIFO
echo high-level >&3 # Appears on original stdout and in loglowlevel.txt
echo low-level # Appears only in loglowlevel.txt
A shorter version of William Pursell's answer, for shells and operating systems that support process substitution:
exec 3>> >(tee -a loglowlevel.txt)
exec >> loglowlevel.txt
echo high-level >&3 # Appears on original stdout and in loglowlevel.txt
echo low-level # Appears only in loglowlevel.txt
In this example, writing to file descriptor 3 effectively writes to the standard input of a background tee process that appends to the file "loglowlevel.txt".
Support for this feature varies. It's not part of the POSIX standard, but it is provided by at least bash, ksh, and zsh. Each shell will require some amount of operating system support. The bash version, for example, requires the availability of named pipes (the object created by mkfifo in William's solution) or access to open files via /dev/fd.
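If you'd rather not write >&3 on every high-level log line, you could wrap it in a small helper function; a sketch building on the process-substitution version above (the name echohigh is just an example):
exec 3>> >(tee -a loglowlevel.txt)
exec >> loglowlevel.txt
echohigh() { echo "$@" >&3; }
echohigh "high-level comment"   # appears on the terminal and in loglowlevel.txt
echo "low-level comment"        # appears only in loglowlevel.txt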

What is /dev/null 2>&1? [duplicate]

I found this piece of code in /etc/cron.daily/apf
#!/bin/bash
/etc/apf/apf -f >> /dev/null 2>&1
/etc/apf/apf -s >> /dev/null 2>&1
It's flushing and reloading the firewall.
I don't understand the >> /dev/null 2>&1 part.
What is the purpose of having this in the cron? It's overriding my firewall rules.
Can I safely remove this cron job?
>> /dev/null redirects standard output (stdout) to /dev/null, which discards it.
(The >> seems sort of superfluous, since >> means append while > means truncate and write, and either appending to or writing to /dev/null has the same net effect. I usually just use > for that reason.)
2>&1 redirects standard error (2) to standard output (1), which then discards it as well since standard output has already been redirected.
Let's break >> /dev/null 2>&1 statement into parts:
Part 1: >> output redirection
This is used to redirect the program output and append the output at the end of the file. More...
Part 2: /dev/null special file
This is a Pseudo-devices special file.
Command ls -l /dev/null will give you details of this file:
crw-rw-rw-. 1 root root 1, 3 Mar 20 18:37 /dev/null
Did you notice the c in crw? It means this is a pseudo-device file of character-special-file type, which provides serial access.
/dev/null accepts and discards all input; produces no output (always returns an end-of-file indication on a read). Reference: Wikipedia
Part 3: 2>&1 (Merges output from stream 2 with stream 1)
Whenever you execute a program, the operating system opens three files: standard input, standard output, and standard error. Whenever a file is opened, the kernel returns a non-negative integer called a file descriptor; for these three files the descriptors are 0, 1, and 2, respectively.
So 2>&1 simply says redirect standard error to standard output.
& means whatever follows is a file descriptor, not a filename.
In short, by using this command you are telling your program not to shout while executing.
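On Linux you can see those three descriptors for yourself by listing the current shell's open file descriptors (a sketch; /proc is Linux-specific):
$ ls -l /proc/$$/fd/
# 0, 1 and 2 show up as symlinks to your terminal device, e.g. /dev/pts/0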
What is the importance of using 2>&1?
Use it when you don't want to produce any output, not even an error message in the terminal. To explain more clearly, consider the following example:
$ ls -l > /dev/null
For the above command, no output was printed in the terminal, but what if this command produces an error:
$ ls -l file_doesnot_exists > /dev/null
ls: cannot access file_doesnot_exists: No such file or directory
Even though I'm redirecting output to /dev/null, the message is printed in the terminal, because we are not redirecting error output to /dev/null. To redirect the error output as well, add 2>&1:
$ ls -l file_doesnot_exists > /dev/null 2>&1
This is the way to execute a program quietly, and hide all its output.
/dev/null is a special filesystem object that discards everything written into it. Redirecting a stream into it means hiding your program's output.
The 2>&1 part means "redirect the error stream into the output stream", so when you redirect the output stream, error stream gets redirected as well. Even if your program writes to stderr now, that output would be discarded as well.
Let me explain a bit by bit.
0,1,2
0: standard input
1: standard output
2: standard error
>>
>> in command >> /dev/null 2>&1 appends the command output to /dev/null.
command >> /dev/null 2>&1
After command:
command
=> 1 output on the terminal screen
=> 2 output on the terminal screen
After redirect:
command >> /dev/null
=> 1 output to /dev/null
=> 2 output on the terminal screen
After /dev/null 2>&1
command >> /dev/null 2>&1
=> 1 output to /dev/null
=> 2 output is redirected to 1 which is now to /dev/null
/dev/null is a standard file that discards all you write to it, but reports that the write operation succeeded.
1 is standard output and 2 is standard error.
2>&1 redirects standard error to standard output. &1 indicates file descriptor 1 (standard output); if you used just 1, you would redirect standard error to a file named 1. [any command] >>/dev/null 2>&1 redirects all standard error to standard output, and writes all of that to /dev/null.
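A quick demonstration of the difference (missing_file is a hypothetical name that does not exist):
$ ls missing_file 2>1               # without &, "1" is treated as a file name
$ cat 1                             # the error message landed in a file named 1
ls: cannot access missing_file: No such file or directory
$ ls missing_file 2>&1 >/dev/null   # with &, 1 means file descriptor 1
ls: cannot access missing_file: No such file or directory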
I use >> /dev/null 2>&1 for a silent cronjob. A cronjob will do the job, but not send a report to my email.
As far as I know, you shouldn't remove the /dev/null redirection. It's useful, especially when you run cPanel, where it can be used to throw away cronjob reports.
As described by the others, writing to /dev/null eliminates the output of a program. Usually cron sends an email for every output from the process started with a cronjob, so by writing the output to /dev/null you prevent being spammed if you have specified your address in cron.
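A typical crontab entry using this pattern looks like this (a sketch; the schedule and script path are just examples):
0 3 * * * /usr/local/bin/nightly-backup.sh >> /dev/null 2>&1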
Instead of using >/dev/null 2>&1, you could use: wget -O /dev/null -o /dev/null example.com
As another forum explains: "Here -O sends the downloaded file to /dev/null and -o logs to /dev/null instead of stderr. That way redirection is not needed at all."
The other solution is: wget -q --spider mysite.com
https://serverfault.com/questions/619542/piping-wget-output-to-dev-null-in-cron/619546#619546
I normally use the command in connection with log files… the purpose is to catch any errors in order to evaluate/troubleshoot issues when running scripts on multiple servers simultaneously.
sh -vxe cmd > cmd.logfile 2>&1
Edit /etc/conf.apf. Set DEVEL_MODE="0". DEVEL_MODE set to 1 will add a cron job to stop apf after 5 minutes.

How do I get both STDOUT and STDERR to go to the terminal and a log file?

I have a script which will be run interactively by non-technical users. The script writes status updates to STDOUT so that the user can be sure that the script is running OK.
I want both STDOUT and STDERR redirected to the terminal (so that the user can see that the script is working as well as see if there was a problem). I also want both streams redirected to a log file.
I've seen a bunch of solutions on the net. Some don't work and others are horribly complicated. I've developed a workable solution (which I'll enter as an answer), but it's kludgy.
The perfect solution would be a single line of code that could be incorporated into the beginning of any script that sends both streams to both the terminal and a log file.
EDIT: Redirecting STDERR to STDOUT and piping the result to tee works, but it depends on the users remembering to redirect and pipe the output. I want the logging to be fool-proof and automatic (which is why I'd like to be able to embed the solution into the script itself.)
Use "tee" to redirect to a file and the screen. Depending on the shell you use, you first have to redirect stderr to stdout using
./a.out 2>&1 | tee output
or
./a.out |& tee output
There is also a command called "script" (a standalone utility, not a shell built-in) that will capture everything that goes to the screen to a file. You start it by typing "script", then do whatever it is you want to capture, then hit Control-D to close the script file. It works regardless of which shell you use.
Also, since you have indicated that these are your own sh scripts that you can modify, you can do the redirection internally by surrounding the whole script with braces or brackets, like
#!/bin/sh
{
... whatever you had in your script before
} 2>&1 | tee output.file
Approaching half a decade later...
I believe this is the "perfect solution" sought by the OP.
Here's a one liner you can add to the top of your Bash script:
exec > >(tee -a $HOME/logfile) 2>&1
Here's a small script demonstrating its use:
#!/usr/bin/env bash
exec > >(tee -a $HOME/logfile) 2>&1
# Test redirection of STDOUT
echo test_stdout
# Test redirection of STDERR
ls test_stderr___this_file_does_not_exist
(Note: This only works with Bash. It will not work with /bin/sh.)
Adapted from here; the original did not, from what I can tell, catch STDERR in the logfile. Fixed with a note from here.
The Pattern
the_cmd 1> >(tee stdout.txt ) 2> >(tee stderr.txt >&2 )
This redirects both stdout and stderr separately, and it sends separate copies of stdout and stderr to the caller (which might be your terminal).
In zsh, it will not proceed to the next statement until the tees have finished.
In bash, you may find that the final few lines of output appear after whatever statement comes next.
In either case, the right bits go to the right places.
Explanation
Here's a script (stored in ./example):
#! /usr/bin/env bash
the_cmd()
{
echo out;
1>&2 echo err;
}
the_cmd 1> >(tee stdout.txt ) 2> >(tee stderr.txt >&2 )
Here's a session:
$ foo=$(./example)
err
$ echo $foo
out
$ cat stdout.txt
out
$ cat stderr.txt
err
Here's how it works:
Both tee processes are started, and their stdins are assigned to file descriptors. Because they're enclosed in process substitutions, the paths to those file descriptors are substituted in the calling command, so it now looks something like this:
the_cmd 1> /proc/self/fd/13 2> /proc/self/fd/14
the_cmd runs, writing stdout to the first file descriptor, and stderr to the second one.
In the bash case, once the_cmd finishes, the following statement happens immediately (if your terminal is the caller, then you will see your prompt appear).
In the zsh case, once the_cmd finishes, the shell waits for both of the tee processes to finish before moving on. More on this here.
The first tee process, which is reading from the_cmd's stdout, writes a copy of that stdout back to the caller because that's what tee does. Its outputs are not redirected, so they make it back to the caller unchanged.
The second tee process has its stdout redirected to the caller's stderr (which is good, because its stdin is reading from the_cmd's stderr). So when it writes to its stdout, those bits go to the caller's stderr.
This keeps stderr separate from stdout both in the files and in the command's output.
If the first tee writes any errors, they'll show up in both the stderr file and in the command's stderr; if the second tee writes any errors, they'll only show up in the terminal's stderr.
To redirect stderr to stdout, append this to your command: 2>&1
For outputting to the terminal and logging into a file, you should use tee.
Both together would look like this:
mycommand 2>&1 | tee mylogfile.log
EDIT: For embedding into your script you would do the same. So your script
#!/bin/sh
whatever1
whatever2
...
whatever3
would end up as
#!/bin/sh
( whatever1
whatever2
...
whatever3 ) 2>&1 | tee mylogfile.log
EDIT:
I see I got derailed and ended up answering a different question from the one asked. The answer to the real question is at the bottom of Paul Tomblin's answer. (If you want to enhance that solution to redirect stdout and stderr separately for some reason, you could use the technique I describe here.)
I've been wanting an answer that preserves the distinction between stdout and stderr. Unfortunately, all of the answers given so far that preserve that distinction are race-prone: they risk programs seeing incomplete input, as I pointed out in comments. I think I finally found an answer that preserves the distinction, is not race-prone, and isn't terribly fiddly either.
First building block: to swap stdout and stderr:
my_command 3>&1 1>&2 2>&3-
Second building block: if we wanted to filter (e.g. tee) only stderr, we could accomplish that by swapping stdout and stderr, filtering, and then swapping back:
{ my_command 3>&1 1>&2 2>&3- | stderr_filter;} 3>&1 1>&2 2>&3-
Now the rest is easy: we can add a stdout filter, either at the beginning:
{ { my_command | stdout_filter;} 3>&1 1>&2 2>&3- | stderr_filter;} 3>&1 1>&2 2>&3-
or at the end:
{ my_command 3>&1 1>&2 2>&3- | stderr_filter;} 3>&1 1>&2 2>&3- | stdout_filter
To convince myself that both of the above commands work, I used the following:
alias my_command='{ echo "to stdout"; echo "to stderr" >&2;}'
alias stdout_filter='{ sleep 1; sed -u "s/^/teed stdout: /" | tee stdout.txt;}'
alias stderr_filter='{ sleep 2; sed -u "s/^/teed stderr: /" | tee stderr.txt;}'
Output is:
...(1 second pause)...
teed stdout: to stdout
...(another 1 second pause)...
teed stderr: to stderr
and my prompt comes back immediately after the "teed stderr: to stderr", as expected.
Footnote about zsh:
The above solution works in bash (and maybe some other shells, I'm not sure), but it doesn't work in zsh. There are two reasons it fails in zsh:
the syntax 2>&3- isn't understood by zsh; that has to be rewritten as 2>&3 3>&-
in zsh (unlike other shells), if you redirect a file descriptor that's already open, in some cases (I don't completely understand how it decides) it does a built-in tee-like behavior instead. To avoid this, you have to close each fd prior to redirecting it.
So, for example, my second solution has to be rewritten for zsh as {my_command 3>&1 1>&- 1>&2 2>&- 2>&3 3>&- | stderr_filter;} 3>&1 1>&- 1>&2 2>&- 2>&3 3>&- | stdout_filter (which works in bash too, but is awfully verbose).
On the other hand, you can take advantage of zsh's mysterious built-in implicit teeing to get a much shorter solution for zsh, which doesn't run tee at all:
my_command >&1 >stdout.txt 2>&2 2>stderr.txt
(I wouldn't have guessed from the docs I found that the >&1 and 2>&2 are the thing that trigger zsh's implicit teeing; I found that out by trial-and-error.)
Use the tee program and dup stderr to stdout.
program 2>&1 | tee logfile
Use the script command in your script (man 1 script)
Create a wrapper shellscript (2 lines) that runs script -c with your real script and then exits.
Part 1: wrap.sh
#!/bin/sh
script -c './realscript.sh'
exit
Part 2: realscript.sh
#!/bin/sh
echo 'Output'
Result:
~: sh wrap.sh
Script started, file is typescript
Output
Script done, file is typescript
~: cat typescript
Script started on fr. 12. des. 2008 kl. 18.07 +0100
Output
Script done on fr. 12. des. 2008 kl. 18.07 +0100
~:
I created a script called "RunScript.sh". The contents of this script is:
${APP_HOME}/${1}.sh ${2} ${3} ${4} ${5} ${6} 2>&1 | tee -a ${APP_HOME}/${1}.log
I call it like this:
./RunScript.sh ScriptToRun Param1 Param2 Param3 ...
This works, but it requires the application's scripts to be run via an external script. It's a bit kludgy.
A year later, here's an old bash script for logging anything. For example,
teelog make ... logs to a generated log name (and see the trick for logging nested makes too.)
#!/bin/bash
me=teelog
Version="2008-10-9 oct denis-bz"
Help() {
cat <<!
$me anycommand args ...
logs the output of "anycommand ..." as well as displaying it on the screen,
by running
anycommand args ... 2>&1 | tee `day`-command-args.log
That is, stdout and stderr go to both the screen, and to a log file.
(The Unix "tee" command is named after "T" pipe fittings, 1 in -> 2 out;
see http://en.wikipedia.org/wiki/Tee_(command) ).
The default log file name is made up from "command" and all the "args":
$me cmd -opt dir/file logs to `day`-cmd--opt-file.log .
To log to xx.log instead, either export log=xx.log or
$me log=xx.log cmd ...
If "logdir" is set, logs are put in that directory, which must exist.
An old xx.log is moved to /tmp/\$USER-xx.log .
The log file has a header like
# from: command args ...
# run: date pwd etc.
to show what was run; see "From" in this file.
Called as "Log" (ln -s $me Log), Log anycommand ... logs to a file:
command args ... > `day`-command-args.log
and tees stderr to both the log file and the terminal -- bash only.
Some commands that prompt for input from the console, such as a password,
don't prompt if they "| tee"; you can only type ahead, carefully.
To log all "make" s, including nested ones like
cd dir1; \$(MAKE)
cd dir2; \$(MAKE)
...
export MAKE="$me make"
!
# See also: output logging in screen(1).
exit 1
}
#-------------------------------------------------------------------------------
# bzutil.sh denisbz may2008 --
day() { # 30mar, 3mar
/bin/date +%e%h | tr '[A-Z]' '[a-z]' | tr -d ' '
}
edate() { # 19 May 2008 15:56
echo `/bin/date "+%e %h %Y %H:%M"`
}
From() { # header # from: $* # run: date pwd ...
case `uname` in Darwin )
mac=" mac `sw_vers -productVersion`"
esac
cut -c -200 <<!
${comment-#} from: $@
${comment-#} run: `edate` in $PWD `uname -n` $mac `arch`
!
# mac $PWD is pwd -L not -P real
}
# log name: day-args*.log, change this if you like --
logfilename() {
log=`day`
[[ $1 == "sudo" ]] && shift
for arg
do
log="$log-${arg##*/}" # basename
(( ${#log} >= 100 )) && break # max len 100
done
# no blanks etc in logfilename please, tr them to "-"
echo $logdir/` echo "$log".log | tr -C '.:+=[:alnum:]_\n' - `
}
#-------------------------------------------------------------------------------
case "$1" in
-v* | --v* )
echo "$0 version: $Version"
exit 1 ;;
"" | -* )
Help
esac
# scan log= etc --
while [[ $1 == [a-zA-Z_]*=* ]]; do
export "$1"
shift
done
: ${logdir=.}
[[ -w $logdir ]] || {
echo >&2 "error: $me: can't write in logdir $logdir"
exit 1
}
: ${log=` logfilename "$@" `}
[[ -f $log ]] &&
/bin/mv "$log" "/tmp/$USER-${log##*/}"
case ${0##*/} in # basename
log | Log ) # both to log, stderr to caller's stderr too --
{
From "$#"
"$#"
} > $log 2> >(tee /dev/stderr) # bash only
# see http://wooledge.org:8000/BashFAQ 47, stderr to a pipe
;;
* )
#-------------------------------------------------------------------------------
{
From "$#" # header: from ... date pwd etc.
"$#" 2>&1 # run the cmd with stderr and stdout both to the log
} | tee $log
# mac tee buffers stdout ?
esac
This question does not seem to have been gracefully solved yet.
Every time I search "how to output to stdout and stderr at same time", Google directs me to this post.
Today I finally found a simple and effective way to solve almost all of these kinds of needs.
The essential idea is the tee command, which can print to multiple outputs at the same time, and the Linux-specific /proc/self/fd/{1,2,...}, which represent stdout, stderr...
print stdin to both stdout and stderr
tee /proc/self/fd/2
print stdin to both stdout and stderr, and file
tee /proc/self/fd/2 file
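On systems without /proc, /dev/stderr often works as an equivalent (an assumption; availability varies by operating system):
tee /dev/stderr file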
Hope this is helpful.
Here is a solution that works for Bash by redirection, combining the solutions of kvantour, MatrixManAtYrService, and Jason Sydes:
#!/bin/bash
exec 1> >(tee x.log) 2> >(tee x.err >&2)
echo "test for log"
echo "test for err" 1>&2
Save the script above as x.sh. After running ./x.sh, x.log includes only the stdout, while x.err includes only the stderr.

