How to suppress stdout and stderr in bash

I was trying to make a little script when I realized that the output redirection &> doesn't work inside a script. If I write in the terminal
dpkg -s firefox &> /dev/null
or
dpkg -s firefox 2>&1 >/dev/null
I get no output, but if I insert it in a script it will display the output. The strange thing is that if I write inside the script
dpkg -s firefox 1> /dev/null
or
dpkg -s firefox 2> /dev/null
the output of the command is suppressed. How can I suppress both stderr and stdout?

&> is a bash extension: &>filename is equivalent to the POSIX-standard >filename 2>&1.
Make sure the script begins with #!/bin/bash, so that it's able to use bash extensions. #!/bin/sh runs a different shell (or it may be a link to bash, but when bash sees that it's run under this name, it switches to a more POSIX-compatible mode). A plain sh parses cmd &> file as cmd & (run cmd in the background) followed by the empty redirection > file, so nothing is suppressed, which is why your script still displays the output.
You almost got it right with 2>&1 >/dev/null, but the order is important. Redirections are performed from left to right, so your version first makes stderr a copy of the old stdout (the terminal), and only then redirects stdout to /dev/null. The correct, portable way to redirect both stdout and stderr to /dev/null is:
>/dev/null 2>&1
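To see why the order matters, here is a quick sketch (ls /nonexistent is used only to generate an error message):
ls /nonexistent 2>&1 >/dev/null   # wrong order: the error still appears, since stderr copied the terminal first
ls /nonexistent >/dev/null 2>&1   # right order: both streams end up in /dev/null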

Make file descriptor 2 (stderr) write to where file descriptor 1 (stdout) is writing:
dpkg -s firefox >/dev/null 2>&1
This will suppress both sources.

Related

Redirect sudo's PID, stderr and stdout to file

How can a user redirect the PID, as well as the stdout and stderr, of sudo itself (and not of the sub-shell it spawns) to a file?
That is, the sudo password prompt ([sudo] password for user:) and any error that sudo normally sends to the current shell's stdout and stderr should be redirected to an appropriate file /path/foobar, in the spirit of:
{sudo-process} &> >(tee /path/foobar > /dev/null) or
{sudo-process} 2>&1 | tee /path/foobar &>/dev/null
as far as stdout and stderr are concerned. In my use case, the PID is also needed so that an external script with appropriate privileges can kill the sudo process unattended.
By "current shell", I mean the shell in which the sudo process lives.
The farthest I got is a possible askpass program (see option -A in man 8 sudo) and the possibility of using plugins, which may or may not be relevant. I have no experience at all with that. Can you help?
Note: I am NOT looking to redirect output from a shell spawned by sudo to a file. As in:
$ sudo cmd &> /path/foobar or $ sudo sh -c 'cmd &> /path/foobar'
depending on whether /path/foobar is a file with the appropriate write permissions, i.e. accessible to cmd's redirection of stdout and stderr. That is not the issue.
EDIT:
@JohnKugelman suggests running sudo -n, which causes sudo to emit an error message to stderr and die if a password would be needed. That takes care of:
- redirecting stdout, since there is none anymore.
- the need to store sudo's PID, since the process dies on its own.
The main issue remains: how to redirect sudo's stderr to a file?
To redirect sudo's stderr to file:
sudo -n cmd -- &> /path/of/the/file
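A sketch that also covers the PID requirement from the question (cmd and the .pid path are placeholders; $! is the PID of the backgrounded sudo process):
sudo -n cmd >> /path/foobar 2>&1 &
echo $! > /path/foobar.pid   # store sudo's PID so an external script can kill it later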

Copy stderr and stdout to a file as well as the screen in ksh

I'm looking for a solution (similar to the bash code below) to copy both stdout and stderr to a file in addition to the screen within ksh on Solaris.
The following code works great in the bash shell:
#!/usr/bin/bash
# Clear the logfile
>logfile.txt
# Redirect all script output to a logfile as well as their normal locations
exec > >(tee -a logfile.txt)
exec 2> >(tee -a logfile.txt >&2)
date
ls -l /non-existent/path
For some reason this is throwing a syntax error on Solaris. I assume it's because I can't do process substitution, and I've seen some posts suggesting the use of mkfifo, but I've yet to come up with a working solution.
Does anyone know of a way that all output can be redirected to a file in addition to the default locations?
Which version of ksh are you using? The >() is not supported in ksh88, but is supported in ksh93 - the bash code should work unchanged (aside from the #! line) on ksh93.
If you are stuck with ksh88 (poor thing!) then you can emulate the bash/ksh93 behaviour using a named pipe:
#!/bin/ksh
# Clear the logfile
>logfile.txt
pipe1="/tmp/mypipe1.$$"   # named pipes unique to this shell's PID
pipe2="/tmp/mypipe2.$$"
trap 'rm -f "$pipe1" "$pipe2"' EXIT   # clean up the pipes on exit
mkfifo "$pipe1"
mkfifo "$pipe2"
# Background tee processes copy each pipe to the logfile; the second
# one also sends its input back to the real stderr.
tee -a logfile.txt < "$pipe1" &
tee -a logfile.txt >&2 < "$pipe2" &
# Redirect all script output to a logfile as well as their normal locations
exec >"$pipe1"
exec 2>"$pipe2"
date
ls -l /non-existent/path
This is the two-pipe version, which keeps stderr separate so that it could also be redirected to a different file if desired.
How about this:
(some commands ...) 2>&1 | tee logfile.txt
Add -a to the tee command line for subsequent invocations to append rather than overwrite.
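For example (a sketch; the first run truncates the log, later runs append to it):
make 2>&1 | tee logfile.txt            # creates or truncates logfile.txt
make install 2>&1 | tee -a logfile.txt # appends to the same log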
In ksh, the following works very well for me
LOG=log_file.$(date +%Y%m%d%H%M%S).txt
{
ls
date
... whatever command
} 2>&1 | tee -a "$LOG"

What is /dev/null 2>&1? [duplicate]

I found this piece of code in /etc/cron.daily/apf
#!/bin/bash
/etc/apf/apf -f >> /dev/null 2>&1
/etc/apf/apf -s >> /dev/null 2>&1
It's flushing and reloading the firewall.
I don't understand the >> /dev/null 2>&1 part.
What is the purpose of having this in the cron? It's overriding my firewall rules.
Can I safely remove this cron job?
>> /dev/null redirects standard output (stdout) to /dev/null, which discards it.
(The >> seems sort of superfluous, since >> means append while > means truncate and write, and either appending to or writing to /dev/null has the same net effect. I usually just use > for that reason.)
2>&1 redirects standard error (2) to standard output (1), which then discards it as well since standard output has already been redirected.
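A quick check that both forms behave the same here:
echo hello > /dev/null    # truncate-and-write: nothing appears
echo hello >> /dev/null   # append: nothing appears either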
Let's break the >> /dev/null 2>&1 statement into parts:
Part 1: >> (output redirection)
This redirects the program's output, appending it to the end of the target file.
Part 2: /dev/null special file
This is a pseudo-device special file.
Command ls -l /dev/null will give you details of this file:
crw-rw-rw-. 1 root root 1, 3 Mar 20 18:37 /dev/null
Did you observe the c in crw? It means this is a pseudo-device file of character-special type, which provides serial access.
/dev/null accepts and discards all input; produces no output (always returns an end-of-file indication on a read). Reference: Wikipedia
Part 3: 2>&1 (Merges output from stream 2 with stream 1)
Whenever you execute a program, the operating system opens three files for it: standard input, standard output, and standard error. Whenever a file is opened, the operating system (the kernel) returns a non-negative integer called a file descriptor; the file descriptors for these three files are 0, 1, and 2, respectively.
So 2>&1 simply says redirect standard error to standard output.
& means whatever follows is a file descriptor, not a filename.
In short, by using this command you are telling your program not to shout while executing.
What is the importance of using 2>&1?
Use it when you don't want the command to produce any output, even when an error occurs. To explain more clearly, consider the following example:
$ ls -l > /dev/null
For the above command, no output was printed in the terminal, but what if this command produces an error:
$ ls -l file_doesnot_exists > /dev/null
ls: cannot access file_doesnot_exists: No such file or directory
Even though I'm redirecting output to /dev/null, the error message is still printed in the terminal, because we are not redirecting the error output to /dev/null. To redirect the error output as well, add 2>&1:
$ ls -l file_doesnot_exists > /dev/null 2>&1
This is the way to execute a program quietly, and hide all its output.
/dev/null is a special filesystem object that discards everything written into it. Redirecting a stream into it means hiding your program's output.
The 2>&1 part means "redirect the error stream into the output stream", so when you redirect the output stream, error stream gets redirected as well. Even if your program writes to stderr now, that output would be discarded as well.
Let me explain it bit by bit.
0,1,2
0: standard input
1: standard output
2: standard error
>>
>> in command >> /dev/null 2>&1 appends the command output to /dev/null.
command >> /dev/null 2>&1
After command:
command
=> 1 output on the terminal screen
=> 2 output on the terminal screen
After redirect:
command >> /dev/null
=> 1 output to /dev/null
=> 2 output on the terminal screen
After >> /dev/null 2>&1:
command >> /dev/null 2>&1
=> 1 output to /dev/null
=> 2 output is redirected to 1 which is now to /dev/null
/dev/null is a standard file that discards all you write to it, but reports that the write operation succeeded.
1 is standard output and 2 is standard error.
2>&1 redirects standard error to standard output. &1 indicates file descriptor (standard output), otherwise (if you use just 1) you will redirect standard error to a file named 1. [any command] >>/dev/null 2>&1 redirects all standard error to standard output, and writes all of that to /dev/null.
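A sketch showing that pitfall:
ls /nonexistent 2>1    # oops: creates a regular file named "1" containing the error message
ls /nonexistent 2>&1   # correct: duplicates file descriptor 1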
I use >> /dev/null 2>&1 for a silent cronjob: the job runs, but cron does not send a report to my email.
As far as I know, don't remove /dev/null. It's useful, especially when you run cPanel; it can be used to throw away cronjob reports.
As described by the others, writing to /dev/null eliminates the output of a program. Usually cron sends an email for every output from the process started by a cronjob, so by writing the output to /dev/null you prevent being spammed if you have specified your address in cron.
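For example, a crontab sketch (the script path is made up) that runs nightly without ever emailing its output:
# Hypothetical entry: run at 03:00 every night, discard all output
0 3 * * * /usr/local/bin/nightly-job.sh >> /dev/null 2>&1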
Instead of using >/dev/null 2>&1, you could use: wget -O /dev/null -o /dev/null example.com
As another forum puts it: "Here -O sends the downloaded file to /dev/null and -o logs to /dev/null instead of stderr. That way redirection is not needed at all."
The other solution is: wget -q --spider mysite.com
https://serverfault.com/questions/619542/piping-wget-output-to-dev-null-in-cron/619546#619546
I normally use this form in connection with log files; the purpose is to catch any errors so I can evaluate/troubleshoot issues when running scripts on multiple servers simultaneously.
sh -vxe cmd > cmd.logfile 2>&1
(-v prints each input line as it is read, -x traces each expanded command, and -e makes the shell exit on the first error.)
Edit /etc/conf.apf and set DEVEL_MODE="0". With DEVEL_MODE set to 1, apf adds a cron job that stops it after 5 minutes.

Write STDOUT & STDERR to a logfile, also write STDERR to screen

I would like to run several commands, and capture all output to a logfile. I also want to print any errors to the screen (or optionally mail the output to someone).
Here's an example. The following command will run three commands, and will write all output (STDOUT and STDERR) into a single logfile.
{ command1 && command2 && command3 ; } > logfile.log 2>&1
Here is what I want to do with the output of these commands:
STDERR and STDOUT for all commands goes to a logfile, in case I need it later--- I usually won't look in here unless there are problems.
Print STDERR to the screen (or optionally, pipe to /bin/mail), so that any error stands out and doesn't get ignored.
It would be nice if the return codes were still usable, so that I could do some error handling. Maybe I want to send email if there was an error, like this:
{ command1 && command2 && command3 ; } > logfile.log 2>&1 || mailx -s "There was an error" stefanl@example.org
The problem I run into is that STDERR loses its identity during I/O redirection: 2>&1 merges STDERR into STDOUT, so once I've done that I can no longer capture the errors separately with 2> error.log.
Here are a couple juicier examples. Let's pretend that I am running some familiar build commands, but I don't want the entire build to stop just because of one error so I use the '--keep-going' flag.
{ ./configure && make --keep-going && make install ; } > build.log 2>&1
Or, here's a simple (And perhaps sloppy) build and deploy script, which will keep going in the event of an error.
{ ./configure && make --keep-going && make install && rsync -av --keep-going /foo devhost:/foo ; } > build-and-deploy.log 2>&1
I think what I want involves some sort of Bash I/O Redirection, but I can't figure this out.
(./doit >> log) 2>&1 | tee -a log
This will take stdout and append it to the log file.
The stderr is then converted to stdout, which is piped to tee; tee appends it to the log (if you have Bash 4, you can replace 2>&1 | with |&) and also sends it on to stdout, where it will either appear on the tty or can be piped to another command.
I used append mode for both so that, regardless of the order in which the shell redirection and tee open the file, you won't blow away the original. That said, stdout and stderr may be interleaved in an unexpected order.
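With Bash 4 the same pipeline can be written as (a sketch, assuming ./doit exists):
(./doit >> log) |& tee -a log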
If your system has /dev/fd/* nodes you can do it as:
( exec 5>logfile.txt ; { command1 && command2 && command3 ;} 2>&1 >&5 | tee /dev/fd/5 )
This opens file descriptor 5 to your logfile. Executes the commands with standard error directed to standard out, standard out directed to fd 5 and pipes stdout (which now contains only stderr) to tee which duplicates the output to fd 5 which is the log file.
Here is how to run one or more commands, capturing the standard output and error, in the order in which they are generated, to a logfile, and displaying only the standard error on any terminal screen you like. Works in bash on linux. Probably works in most other environments. I will use an example to show how it's done.
Preliminaries:
Open two windows (shells, tmux sessions, whatever)
I will demonstrate with some test files, so create the test files:
touch /tmp/foo /tmp/foo1 /tmp/foo2
in window1:
mkfifo /tmp/fifo
0</tmp/fifo cat - >/tmp/logfile
Then, in window2:
(ls -l /tmp/foo /tmp/nofile /tmp/foo1 /tmp/nofile /tmp/nofile; echo successful test; ls /tmp/nofile1111) 2>&1 1>/tmp/fifo | tee /tmp/fifo 1>/dev/pts/2
Where you replace /dev/pts/2 with whatever tty you want the stderr to display.
The reason for the various successful and unsuccessful commands in the subshell is simply to generate a mingled stream of output and error messages, so that you can verify the correct ordering in the log file. Once you understand how it works, replace the “ls” and “echo” commands with scripts or commands of your choosing.
With this method, the ordering of output and error is preserved, the syntax is simple and clean, and there is only a single reference to the output file. Plus there is flexibility in putting the extra copy of stderr wherever you want.
Try:
command 2>&1 | tee output.txt
Additionally, you can direct stdout and stderr to different places:
command > stdout.txt 2> stderr.txt
command 2>&1 > stdout.txt | program_for_stderr
So some combination of the above should work for you -- e.g. you could save stdout to a file, and stderr to both a file and piping to another program (with tee).
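For instance, one way to get that last combination with bash process substitution (command and program_for_stderr are placeholders): tee writes stderr to a file and pipes a copy onward.
command > stdout.txt 2> >(tee stderr.txt | program_for_stderr)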
add this at the beginning of your script
#!/bin/bash
set -e
outfile=logfile
exec > >(cat >> "$outfile")
exec 2> >(tee -a "$outfile" >&2)
# write your code here
STDOUT and STDERR will be written to $outfile, only STDERR will be seen on the console

How to silence output in a Bash script?

I have a program that outputs to stdout and would like to silence that output in a Bash script while piping to a file.
For example, running the program will output:
% myprogram
% WELCOME TO MY PROGRAM
% Done.
I want the following script to not output anything to the terminal:
#!/bin/bash
myprogram > sample.s
If it outputs to stderr as well you'll want to silence that. You can do that by redirecting file descriptor 2:
# Send stdout to out.log, stderr to err.log
myprogram > out.log 2> err.log
# Send both stdout and stderr to out.log
myprogram &> out.log # New bash syntax
myprogram > out.log 2>&1 # Older sh syntax
# Log output, hide errors.
myprogram > out.log 2> /dev/null
Redirect stderr to stdout
This redirects stderr (file descriptor 2) to file descriptor 1, which is the stdout:
2>&1
Redirect stdout to File
Now when you perform this, you are redirecting the stdout to the file sample.s:
myprogram > sample.s
Redirect stderr and stdout to File
Combining the two commands will result in redirecting both stderr and stdout to sample.s
myprogram > sample.s 2>&1
Redirect stderr and stdout to /dev/null
Redirect to /dev/null if you want to completely silence your application:
myprogram >/dev/null 2>&1
All output:
scriptname &>/dev/null
Portable:
scriptname >/dev/null 2>&1
Portable:
scriptname >/dev/null 2>/dev/null
Note: scriptname &>- is sometimes suggested for newer bash, but it does not do what you might expect: it redirects both streams to a file literally named -, rather than closing them.
If you are still struggling to find an answer, especially if you produced a file for the output and you prefer a clear alternative:
echo "hi" | grep "use this hack to hide the output :)"
If you want STDOUT and STDERR both [everything], then the simplest way is:
#!/bin/bash
myprogram >& sample.s
then run it like ./script, and you will get no output to your terminal. :)
&" means">
The >& means STDERR and STDOUT. The & also works the same way with a pipe: ./script |& sed
That will send everything (stdout and stderr) to sed.
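For example, to tag every line, from either stream, as it passes through:
./script |& sed 's/^/[script] /'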
Try with:
myprogram &>/dev/null
to get no output
Useful variations:
Get only the STDERR in a file, while hiding any STDOUT, even if the program being hidden doesn't exist at all (never hangs):
stty -echo; ./programMightNotExist 2> errors.log; stty echo
(Using ; rather than && ensures that stty echo still runs and restores the terminal even when the program fails.)
Detach completely and silence everything; even killing the parent script won't abort ./prog (behaves just like nohup):
./prog </dev/null >/dev/null 2>&1 &
nohup can be used as well to fully detach, as follow:
nohup ./prog &
A log file nohup.out will be created alongside the script; use tail -f nohup.out to read it.
Note: This answer is related to the question "How to turn off echo while executing a shell script Linux" which was in turn marked as duplicated to this one.
To actually turn off the echo the command is:
stty -echo
(This is useful, for instance, when you want to enter a password and you don't want it to be readable.) Remember to turn echo back on at the end of your script; otherwise the person who runs it won't see what they type from then on. To turn echo back on, run:
stty echo
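A minimal sketch of the classic pattern (the variable name is arbitrary):
stty -echo
printf 'Password: '
read -r password
stty echo
printf '\n'   # the user's Enter key was not echoed, so start a fresh line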
For output only on error:
so [command]
