What is /dev/null 2>&1? [duplicate] - shell

This question already has answers here:
What does " 2>&1 " mean?
I found this piece of code in /etc/cron.daily/apf
#!/bin/bash
/etc/apf/apf -f >> /dev/null 2>&1
/etc/apf/apf -s >> /dev/null 2>&1
It's flushing and reloading the firewall.
I don't understand the >> /dev/null 2>&1 part.
What is the purpose of having this in the cron? It's overriding my firewall rules.
Can I safely remove this cron job?

>> /dev/null redirects standard output (stdout) to /dev/null, which discards it.
(The >> is somewhat superfluous here: >> means append while > means truncate-and-write, but appending to /dev/null and overwriting it have the same net effect. I usually just use > for that reason.)
2>&1 redirects standard error (2) to standard output (1), which then discards it as well since standard output has already been redirected.
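A quick sketch of the point about > versus >> above: both forms discard the data, and /dev/null never grows:

```shell
# Both forms discard stdout; /dev/null never stores anything.
echo "overwrite me" > /dev/null    # truncate-and-write: output vanishes
echo "append to me" >> /dev/null   # append: output vanishes just the same
ls -s /dev/null                    # still reports 0 blocks used
```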

Let's break the >> /dev/null 2>&1 statement into parts:
Part 1: >> (output redirection)
This is used to redirect the program's output, appending it to the end of the file.
Part 2: /dev/null special file
This is a pseudo-device special file.
The command ls -l /dev/null shows its details:
crw-rw-rw-. 1 root root 1, 3 Mar 20 18:37 /dev/null
Notice the leading c in crw? It marks a character-special device file, which provides serial (byte-by-byte) access.
/dev/null accepts and discards all input, and produces no output (a read always returns an end-of-file indication). Reference: Wikipedia
Part 3: 2>&1 (Merges output from stream 2 with stream 1)
Whenever you execute a program, the operating system opens three files for it: standard input, standard output, and standard error. Whenever a file is opened, the kernel returns a non-negative integer called a file descriptor; for these three files the descriptors are 0, 1, and 2, respectively.
So 2>&1 simply says: redirect standard error to standard output.
The & means that what follows is a file descriptor, not a filename.
In short, this tells your program to stay quiet while executing.
What is the importance of using 2>&1?
Suppose you don't want the command to produce any output, even when it fails. To explain more clearly, consider the following example:
$ ls -l > /dev/null
For the above command, no output was printed in the terminal, but what if this command produces an error:
$ ls -l file_doesnot_exists > /dev/null
ls: cannot access file_doesnot_exists: No such file or directory
Even though I'm redirecting stdout to /dev/null, the error message is still printed in the terminal. That is because we are not redirecting the error output to /dev/null; to redirect the error output as well, add 2>&1:
$ ls -l file_doesnot_exists > /dev/null 2>&1

This is the way to execute a program quietly, and hide all its output.
/dev/null is a special filesystem object that discards everything written into it. Redirecting a stream into it means hiding your program's output.
The 2>&1 part means "redirect the error stream into the output stream", so when you redirect the output stream, the error stream gets redirected along with it. Even if your program writes to stderr now, that output is discarded too.

Let me explain it bit by bit.
0,1,2
0: standard input
1: standard output
2: standard error
>>
>> in the command >> /dev/null 2>&1 appends the command's output to /dev/null.
command >> /dev/null 2>&1
After command:
command
=> 1 output on the terminal screen
=> 2 output on the terminal screen
After redirect:
command >> /dev/null
=> 1 output to /dev/null
=> 2 output on the terminal screen
After adding 2>&1:
command >> /dev/null 2>&1
=> 1 output to /dev/null
=> 2 output is redirected to 1 which is now to /dev/null
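The order of the two redirections matters, and the stages above can be checked with a small experiment (sh -c '…' is just a stand-in command that writes one line to each stream):

```shell
# >> /dev/null 2>&1 : stdout goes to /dev/null first, then stderr follows it.
sh -c 'echo out; echo err >&2' >> /dev/null 2>&1    # fully silent

# Reversed, 2>&1 copies stdout's *current* target (the terminal) before
# stdout itself is redirected, so "err" still reaches the screen:
sh -c 'echo out; echo err >&2' 2>&1 >> /dev/null
```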

/dev/null is a standard file that discards all you write to it, but reports that the write operation succeeded.
1 is standard output and 2 is standard error.
2>&1 redirects standard error to standard output. &1 indicates a file descriptor (standard output); if you used just 1, you would redirect standard error to a file named 1. [any command] >>/dev/null 2>&1 redirects standard error to standard output and writes all of it to /dev/null.

I use >> /dev/null 2>&1 for a silent cron job: the job still runs, but cron doesn't send a report to my email.
As far as I know, you shouldn't remove /dev/null. It's useful, especially when you run cPanel, where it can be used to throw away cron-job reports.

As described by the others, writing to /dev/null eliminates the output of a program. Cron usually sends an email for every output from a process started by a cron job, so by writing the output to /dev/null you avoid being spammed if you have specified your address in cron.

Instead of using >/dev/null 2>&1, could you use: wget -O /dev/null -o /dev/null example.com?
From what I can see on another forum: "Here -O sends the downloaded file to /dev/null and -o logs to /dev/null instead of stderr. That way redirection is not needed at all."
The other solution is: wget -q --spider mysite.com
https://serverfault.com/questions/619542/piping-wget-output-to-dev-null-in-cron/619546#619546

I normally use the command in connection with log files; the purpose is to capture any errors so I can evaluate and troubleshoot issues when running scripts on multiple servers simultaneously.
sh -vxe cmd > cmd.logfile 2>&1
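As a sketch of that pattern (an inline script stands in for cmd, and a temporary file for cmd.logfile): sh -x prints each command to stderr as it runs, so the 2>&1 is what lands the trace in the same logfile as the normal output:

```shell
logfile=$(mktemp)                 # stand-in for cmd.logfile
sh -x -c 'echo hello' > "$logfile" 2>&1
cat "$logfile"                    # holds the "+ echo hello" trace and the output
rm -f "$logfile"
```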

Edit /etc/conf.apf. Set DEVEL_MODE="0". DEVEL_MODE set to 1 will add a cron job to stop apf after 5 minutes.


How to output data to file but also suppress output on the terminal?

I'm trying to execute a system command in Python, using > /dev/null 2>&1 to hide all the output. I want to display only very specific output, so I'm also trying to write the data to a file (> grep.txt) so that I can grep the specific data out. However, everything seems to be going to /dev/null: nothing appears in grep.txt when I use /dev/null in the command.
I have tried the following:
#command > grep.txt > /dev/null 2>&1
#command > grep.txt | > /dev/null 2>&1
#command > grep.txt & > /dev/null 2>&1
#command > grep.txt & /dev/null 2>&1
but nothing seems to work. It's either one or the other. I just want to save the results to the grep.txt file but also hide the output on the terminal.
I have even tried just using a variable to store the results of the command whilst using > /dev/null but the variable is always empty! So I can only assume that it's going to /dev/null.
Please help! xD
Sorry for the stupid question!
/dev/null is equivalent to writing to nothing: it has a file interface, but does not record anything. If you want to retrieve the data, write it to a real file instead:
command > grep.txt 2>&1
Now grep.txt has all the data you were looking for, and nothing is printed to the terminal. Read redirects from left to right: stdout goes to the file, then stderr goes to wherever stdout is currently going. By contrast, 2>&1 > grep.txt would read as: stderr goes to where stdout currently points (the terminal), then stdout goes to the file, so you would see the error output while the normal output goes to the file.
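A runnable sketch of that answer, with sh -c '…' standing in for the command and a temporary file in place of grep.txt:

```shell
logfile=$(mktemp)                  # stand-in for grep.txt
sh -c 'echo normal; echo oops >&2' > "$logfile" 2>&1
cat "$logfile"                     # both lines landed in the file
rm -f "$logfile"
```

Run in a terminal, nothing appears until the final cat, which shows both the normal line and the error line.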

Run Shell scripts without having messages [duplicate]

I want to make my Bash scripts more elegant for the end user. How do I hide the output when Bash is executing commands?
For example, when Bash executes
yum install nano
The following will show up to the user who executed the Bash:
Loaded plugins: fastestmirror
base | 3.7 kB 00:00
base/primary_db | 4.4 MB 00:03
extras | 3.4 kB 00:00
extras/primary_db | 18 kB 00:00
updates | 3.4 kB 00:00
updates/primary_db | 3.8 MB 00:02
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package nano.x86_64 0:2.0.9-7.el6 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
nano x86_64 2.0.9-7.el6 base 436 k
Transaction Summary
================================================================================
Install 1 Package(s)
Total download size: 436 k
Installed size: 1.5 M
Downloading Packages:
nano-2.0.9-7.el6.x86_64.rpm | 436 kB 00:00
warning: rpmts_HdrFromFdno: Header V3 RSA/SHA256 Signature, key ID c105b9de: NOKEY
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
Importing GPG key 0xC105B9DE:
Userid : CentOS-6 Key (CentOS 6 Official Signing Key) <centos-6-key@centos.org>
Package: centos-release-6-4.el6.centos.10.x86_64 (@anaconda-CentOS-201303020151.x86_64/6.4)
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : nano-2.0.9-7.el6.x86_64 1/1
Verifying : nano-2.0.9-7.el6.x86_64 1/1
Installed:
nano.x86_64 0:2.0.9-7.el6
Complete!
Now I want to hide this from the user and instead show:
Installing nano ......
How can I accomplish this task? It will definitely help to make the script more user friendly. If an error occurs, it should be shown to the user.
I would also like to know how to show the same message while a set of commands is being executed.
Use this.
{
/your/first/command
/your/second/command
} &> /dev/null
Explanation
To eliminate output from commands, you have two options:
Close the output file descriptor, which keeps it from accepting any more output. That looks like this:
your_command "Is anybody listening?" >&-
Usually, output goes either to file descriptor 1 (stdout) or 2 (stderr). If you close file descriptors this way, you'll have to do so for every numbered descriptor, since &> (below) is special Bash syntax incompatible with >&-:
/your/first/command >&- 2>&-
Be careful to note the order: >&- closes stdout, which is what you want to do; &>- redirects stdout and stderr to a file named - (hyphen), which is not what you want to do. It'll look the same at first, but the latter creates a stray file in your working directory. It's easy to remember: >&2 redirects stdout to descriptor 2 (stderr), >&3 redirects stdout to descriptor 3, and >&- redirects stdout to a dead end (i.e. it closes stdout).
Also beware that some commands may not handle a closed file descriptor particularly well ("write error: Bad file descriptor"), which is why the better solution may be to...
Redirect output to /dev/null, which accepts all output and does nothing with it. It looks like this:
your_command "Hello?" > /dev/null
For output redirection to a file, you can direct both stdout and stderr to the same place very concisely, but only in bash:
/your/first/command &> /dev/null
Finally, to do the same for a number of commands at once, surround the whole thing in curly braces. Bash treats this as a group of commands, aggregating the output file descriptors so you can redirect them all at once. If you're instead familiar with subshells using the ( command1; command2; ) syntax, you'll find the braces behave almost exactly the same way, except that, unless you involve them in a pipe, the braces will not create a subshell and thus will allow you to set variables inside.
{
/your/first/command
/your/second/command
} &> /dev/null
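One detail from the braces-versus-subshell point that is easy to verify: the brace group runs in the current shell, so variables set inside it survive, while a subshell's assignments do not. A minimal sketch, written with the portable > /dev/null 2>&1 form so it also runs under plain sh:

```shell
x=0
{ x=1; echo "hidden"; } > /dev/null 2>&1
echo "$x"    # prints 1: the brace group ran in the current shell

( x=2; echo "hidden" ) > /dev/null 2>&1
echo "$x"    # still prints 1: the subshell's assignment was lost
```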
See the bash manual on redirections for more details, options, and syntax.
You can redirect stdout to /dev/null.
yum install nano > /dev/null
Or you can redirect both stdout and stderr,
yum install nano &> /dev/null.
But if the program has a quiet option, that's even better.
A process normally has two output streams to the screen: stdout (standard output) and stderr (standard error).
Normally, informational messages go to stdout, while errors and alerts go to stderr.
You can turn off stdout for a command by doing
MyCommand >/dev/null
and turn off stderr by doing:
MyCommand 2>/dev/null
If you want both off, you can do:
MyCommand >/dev/null 2>&1
The 2>&1 says send stderr to the same place as stdout.
You can redirect the output to /dev/null. For more info regarding /dev/null read this link.
You can hide the output of a command in the following ways:
echo -n "Installing nano ......"; yum install nano > /dev/null; echo " done.";
Redirect the standard output to /dev/null, but not the standard error. This will show the errors occurring during the installation, for example if yum cannot find a package.
echo -n "Installing nano ......"; yum install nano &> /dev/null; echo " done.";
This version shows nothing in the terminal, since both standard output and standard error are redirected to /dev/null and thus discarded.
>/dev/null 2>&1 will mute both stdout and stderr
yum install nano >/dev/null 2>&1
You should not use bash in this case to get rid of the output. Yum does have an option -q which suppresses the output.
You'll most certainly also want to use -y
echo "Installing nano..."
yum -y -q install nano
To see all the options for yum, use man yum.
You can also do it by assigning the command's output to a variable; this is particularly useful when you don't have /dev/null.
Yes, I came across a situation where I couldn't use /dev/null.
The solution I found was to assign the output to a variable that I never use afterwards:
hide_output=$([[ -d /proc ]] && mountpoint -q /proc && umount -l /proc)
This:
command > /dev/null
Or this: (to suppress errors as well)
command > /dev/null 2>&1
Similar to lots of other answers, but they didn't work for me with 2> placed in front of /dev/null.
.SILENT:
Type " .SILENT: " in the beginning of your script without colons.

How to suppress stdout and stderr in bash

I was trying to make a little script when I realized that the output redirection &> doesn't work inside a script. If I write in the terminal
dpkg -s firefox &> /dev/null
or
dpkg -s firefox 2>&1 /dev/null
I get no output, but if I insert it in a script it will display the output. The strange thing is that if I write inside the script
dpkg -s firefox 1> /dev/null
or
dpkg -s firefox 2> /dev/null
the output of the command is suppressed. How can I suppress both stderr and stdout?
&> is a bash extension: &>filename is equivalent to the POSIX standard >filename 2>&1.
Make sure the script begins with #!/bin/bash, so it's able to use bash extensions. #!/bin/sh runs a different shell (or it may be a link to bash, but when it sees that it's run with this name it switches to a more POSIX-compatible mode).
You almost got it right with 2>&1 >/dev/null, but the order is important. Redirections are executed from left to right, so your version first points stderr at the old stdout (the terminal), and only then redirects stdout to /dev/null. The correct, portable way to redirect both stderr and stdout to /dev/null is:
>/dev/null 2>&1
Make file descriptor 2 (stderr) write to where File descriptor 1 (stdout) is writing
dpkg -s firefox >/dev/null 2>&1
This will suppress both sources.
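To see the shebang point in action, compare the bash-only shorthand with its portable spelling; sh -c '…' stands in for dpkg -s firefox here:

```shell
# bash-only shorthand: both streams to /dev/null (needs #!/bin/bash)
bash -c 'sh -c "echo out; echo err >&2" &> /dev/null'

# POSIX-portable equivalent, safe under #!/bin/sh as well:
sh -c 'echo out; echo err >&2' > /dev/null 2>&1
```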

Redirect stdout and stderr after redirection command

In a bash script I am enabling IP forwarding using the following command:
echo 1 > /proc/sys/net/ipv4/ip_forward
However I also want to include my own error messages so I have to redirect the stdout and stderr to /dev/null, which looks like the following:
echo 1 > /proc/sys/net/ipv4/ip_forward >/dev/null 2>&1
This works for commands that do not have the redirection symbol in it, for example:
route add default gw 10.8.0.1 > /dev/null 2>&1
Is there any way I can make this work for commands that do have a redirection in them? Is there a workaround for this? Any other way I can do this better?
Your redirections are nonsensical (no offense).
This:
echo 1 > /proc/sys/net/ipv4/ip_forward >/dev/null 2>&1
will:
redirect echo's standard output to /proc/sys/net/ipv4/ip_forward;
redirect echo's standard output to /dev/null
redirect echo's standard error to where file descriptor 1 points, i.e., to /dev/null.
Hence, globally, redirection 2 cancels redirection 1: nothing is going to go to /proc/sys/net/ipv4/ip_forward!
I guess you want to redirect echo's standard output to /proc/sys/net/ipv4/ip_forward and echo's standard error to /dev/null. This is achieved by:
echo 1 > /proc/sys/net/ipv4/ip_forward 2>/dev/null
But read on, I believe that's not the answer you're looking for!
Why do you want to redirect echo's standard error to /dev/null?
echo very rarely writes to standard error. In fact, the only time echo will write to standard error is when there's a write error. There are a couple of ways this could happen:
if the disk is full (that can be simulated with /dev/full):
$ echo hello >/dev/full
bash: echo: write error: No space left on device
$ echo hello >/dev/full 2>/dev/null
$
(no error messages shown with the redirection 2>/dev/null).
if echo's standard out is closed:
$ ( exec >&-; echo hello )
bash: echo: write error: Bad file descriptor
$ ( exec >&-; echo hello 2> /dev/null )
$
(no error messages shown with the redirection 2>/dev/null).
There might be other cases where echo outputs to standard error. But the following are certainly not among them:
Redirecting echo's standard output to a non-existent file descriptor:
$ echo hello >&42
bash: 42: Bad file descriptor
$ echo hello >&42 2>/dev/null
bash: 42: Bad file descriptor
$
The redirection 2>/dev/null doesn't fix anything; you can actually see that the error comes from bash and not from echo (the latter would have bash: echo: as a prefix).
Redirecting echo's standard output to a file without write permission:
$ touch testfile
$ chmod -w testfile
$ echo hello > testfile
bash: testfile: Permission denied
$ echo hello > testfile 2>/dev/null
bash: testfile: Permission denied
$
Same as above, the redirection 2>/dev/null doesn't fix anything.
The previous cases are not fixed by 2>/dev/null, since the error occurs at Bash's level, before the command is even executed and the redirections performed: it's at the moment of the redirection that Bash encounters the error. It can't open the stream for writing, and it prints the error message to its own standard error.†
Now I guess you're trying to fix the following scenario: when the user doesn't have enough rights to write to /proc/sys/net/ipv4/ip_forward:
$ echo 1 >/proc/sys/net/ipv4/ip_forward
bash: /proc/sys/net/ipv4/ip_forward: Permission denied
$
The error message on standard error can't be redirected with a simple redirection‡:
$ echo 1 >/proc/sys/net/ipv4/ip_forward 2>/dev/null
bash: /proc/sys/net/ipv4/ip_forward: Permission denied
$
A standard way to redirect the error that occurs at the redirection level (i.e., before the command is even executed) is to use groupings:
$ { echo 1 >/proc/sys/net/ipv4/ip_forward; } 2>/dev/null
Now that explains why the solution you posted as an answer works: let's go through it, and we'll see something that shows you didn't completely understand redirections (hopefully this post will help you understand a few things); your code is:
function ip_forward
{
echo 1 > /proc/sys/net/ipv4/ip_forward
}
ip_forward >/dev/null 2>&1
This will run the function ip_forward, and redirect:
its standard output to /dev/null;
and then its standard error to where its standard output points (i.e., to /dev/null).
But the function ip_forward doesn't output anything to standard output! so the redirection >/dev/null is only useful for the 2>&1 part of the redirection. In fact, your code is completely equivalent to:
function ip_forward
{
echo 1 > /proc/sys/net/ipv4/ip_forward
}
ip_forward 2>/dev/null
But then (since you only use a function construct as a way to achieve what you wanted—not because you want a function), it's much better to write your code as either:
echo 1 2>/dev/null >/proc/sys/net/ipv4/ip_forward
or
{ echo 1 > /proc/sys/net/ipv4/ip_forward; } 2>/dev/null
(the latter being preferred).
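A reproducible sketch of the grouping trick, provoking the redirection failure with a nonexistent directory (/no_such_dir is a hypothetical path) instead of /proc/sys; the || true guards just keep the sketch going under set -e:

```shell
# The shell reports the failed redirection on *its own* stderr; the
# trailing 2>/dev/null comes too late, because redirections are processed
# left to right and the first one fails:
echo 1 > /no_such_dir/ip_forward 2>/dev/null || true    # error still shown

# Wrapping the command in a group applies the stderr redirection *before*
# the inner redirection is attempted, so the failure is silent:
{ echo 1 > /no_such_dir/ip_forward; } 2>/dev/null || true
```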
Sorry for this long post!
†
There's something we should be aware of: the order of redirection. They are performed from left to right, as Bash reads them. How about we first redirect standard error, and then standard output to a non-existent/non-writable stream?
$ echo hello 2>/dev/null >&42
$
That's right, it works.
‡
Well, can, if you understood the previous footnote:
$ echo 1 2>/dev/null >/proc/sys/net/ipv4/ip_forward
$ echo $?
1
$
No error on standard error! that's because of the order of the redirections.
Solved
function ip_forward { echo 1 > /proc/sys/net/ipv4/ip_forward; }
ip_forward 2>/dev/null
echo 1 is the stdout part so this does not have to be redirected anymore. I only had to redirect stderr by adding 2>/dev/null.

Write STDOUT & STDERR to a logfile, also write STDERR to screen

I would like to run several commands, and capture all output to a logfile. I also want to print any errors to the screen (or optionally mail the output to someone).
Here's an example. The following command will run three commands, and will write all output (STDOUT and STDERR) into a single logfile.
{ command1 && command2 && command3 ; } > logfile.log 2>&1
Here is what I want to do with the output of these commands:
STDERR and STDOUT for all commands goes to a logfile, in case I need it later--- I usually won't look in here unless there are problems.
Print STDERR to the screen (or optionally, pipe to /bin/mail), so that any error stands out and doesn't get ignored.
It would be nice if the return codes were still usable, so that I could do some error handling. Maybe I want to send email if there was an error, like this:
{ command1 && command2 && command3 ; } > logfile.log 2>&1 || mailx -s "There was an error" stefanl@example.org
The problem I run into is that STDERR loses context during I/O redirection. A '2>&1' will convert STDERR into STDOUT, and therefore I cannot view errors if I do 2> error.log
Here are a couple juicier examples. Let's pretend that I am running some familiar build commands, but I don't want the entire build to stop just because of one error so I use the '--keep-going' flag.
{ ./configure && make --keep-going && make install ; } > build.log 2>&1
Or, here's a simple (And perhaps sloppy) build and deploy script, which will keep going in the event of an error.
{ ./configure && make --keep-going && make install && rsync -av --keep-going /foo devhost:/foo ; } > build-and-deploy.log 2>&1
I think what I want involves some sort of Bash I/O Redirection, but I can't figure this out.
(./doit >> log) 2>&1 | tee -a log
This will take stdout and append it to the log file.
The stderr will then get converted to stdout, which is piped to tee, which appends it to the log (if you have Bash 4, you can replace 2>&1 | with |&) and also sends it to its own stdout, which will either appear on the tty or can be piped to another command.
I used append mode for both so that, regardless of the order in which the shell redirection and tee open the file, you won't blow away the original. That said, stderr and stdout may be interleaved in an unexpected order.
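A self-checking version of that recipe, with sh -c '…' standing in for ./doit and a temporary file for the log:

```shell
log=$(mktemp)                                   # stand-in for "log"
( sh -c 'echo out; echo err >&2' >> "$log" ) 2>&1 | tee -a "$log"
# tee shows "err" on the terminal; the file now holds both lines:
cat "$log"
rm -f "$log"
```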
If your system has /dev/fd/* nodes you can do it as:
( exec 5>logfile.txt ; { command1 && command2 && command3 ;} 2>&1 >&5 | tee /dev/fd/5 )
This opens file descriptor 5 to your logfile. Executes the commands with standard error directed to standard out, standard out directed to fd 5 and pipes stdout (which now contains only stderr) to tee which duplicates the output to fd 5 which is the log file.
Here is how to run one or more commands, capturing standard output and error to a logfile in the order in which they are generated, while displaying only the standard error on any terminal screen you like. It works in bash on Linux and probably works in most other environments. I will use an example to show how it's done.
Preliminaries:
Open two windows (shells, tmux sessions, whatever)
I will demonstrate with some test files, so create the test files:
touch /tmp/foo /tmp/foo1 /tmp/foo2
in window1:
mkfifo /tmp/fifo
0</tmp/fifo cat - >/tmp/logfile
Then, in window2:
(ls -l /tmp/foo /tmp/nofile /tmp/foo1 /tmp/nofile /tmp/nofile; echo successful test; ls /tmp/nofile1111) 2>&1 1>/tmp/fifo | tee /tmp/fifo 1>/dev/pts/2
Where you replace /dev/pts/2 with whatever tty you want the stderr to display.
The reason for the various successful and unsuccessful commands in the subshell is simply to generate a mingled stream of output and error messages, so that you can verify the correct ordering in the log file. Once you understand how it works, replace the “ls” and “echo” commands with scripts or commands of your choosing.
With this method, the ordering of output and error is preserved, the syntax is simple and clean, and there is only a single reference to the output file. Plus there is flexibility in putting the extra copy of stderr wherever you want.
Try:
command 2>&1 | tee output.txt
Additionally, you can direct stdout and stderr to different places:
command > stdout.txt 2> stderr.txt
or keep stdout in a file while piping stderr to another program (the order matters here: 2>&1 grabs the pipe before stdout is pointed at the file):
command 2>&1 > stdout.txt | program_for_stderr
So some combination of the above should work for you -- e.g. you could save stdout to a file, and send stderr to both a file and another program (with tee).
Add this at the beginning of your script:
#!/bin/bash
set -e
outfile=logfile
exec > >(cat >> "$outfile")
exec 2> >(tee -a "$outfile" >&2)
# write your code here
STDOUT and STDERR will be written to $outfile, only STDERR will be seen on the console
