I have a Bash script that runs a program with parameters. That program outputs some status (doing this, doing that...). There isn't any option for this program to be quiet. How can I prevent the script from displaying anything?
I am looking for something like Windows' "echo off".
The following sends standard output to the null device (bit bucket).
scriptname >/dev/null
And if you also want error messages to be sent there, use one of (the first may not work in all shells):
scriptname &>/dev/null
scriptname >/dev/null 2>&1
scriptname >/dev/null 2>/dev/null
And, if you want to record the messages, but not see them, replace /dev/null with an actual file, such as:
scriptname &>scriptname.out
For completeness, under Windows cmd.exe (where "nul" is the equivalent of "/dev/null"), it is:
scriptname >nul 2>nul
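Back on the Unix side, if you want the script to silence itself (closest in spirit to Windows' "echo off"), you can redirect with exec at the top. A minimal sketch (plain Bash; the noisy program name is just a placeholder):
#!/bin/bash
# From this point on, everything the script writes is discarded.
exec >/dev/null 2>&1
echo "this is never seen"
noisy_program --verbose   # hypothetical program; its status chatter is discarded too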
Something like
script > /dev/null 2>&1
This suppresses both standard output and standard error, redirecting them to /dev/null.
An alternative that may fit in some situations is to assign the result of a command to a variable:
$ DUMMY=$( grep root /etc/passwd 2>&1 )
$ echo $?
0
$ DUMMY=$( grep r00t /etc/passwd 2>&1 )
$ echo $?
1
Since Bash and other POSIX command-line interpreters do not consider variable assignments to be commands, the return code of the command in the substitution is respected.
Note: an assignment with the typeset or declare keyword is considered a command, so in that case the evaluated return code is that of the assignment itself and not that of the command executed in the sub-shell:
$ declare DUMMY=$( grep r00t /etc/passwd 2>&1 )
$ echo $?
0
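Because the assignment is transparent to the exit status, you can keep a command quiet and still branch on its result. A small sketch building on the grep examples above:
# Capture (and thereby hide) all output, but branch on grep's exit status.
if DUMMY=$(grep root /etc/passwd 2>&1); then
    echo "root found"
else
    echo "root not found"
fi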
Try
: $(yourcommand)
: is the shell's null command (short for "do nothing").
$() is command substitution: your command's output becomes the arguments of :, which ignores them.
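Note that this only swallows standard output; standard error still reaches the terminal. Folding it in, and quoting to avoid accidental globbing of the substituted output (a small variation, not part of the original answer):
: "$(yourcommand 2>&1)"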
Like andynormancx' post, use this (if you're working in a Unix environment):
scriptname > /dev/null
Or you can use this (if you're working in a Windows environment):
scriptname > nul
This is another option (in Bash 4+, |& is shorthand for 2>&1 |, so both streams are piped into the do-nothing builtin :):
scriptname |& :
Take a look at this example from The Linux Documentation Project:
3.6 Sample: stderr and stdout 2 file
This redirects all output of a program to a file. It is sometimes suitable for cron entries, if you want a command to run in absolute silence.
rm -f $(find / -name core) &> /dev/null
That said, you can use this simple redirection:
/path/to/command &>/dev/null
In your script you can add the following to the lines that you know are going to produce output:
some_code 2>>/dev/null
Or you can also try
some_code >>/dev/null
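If you want both streams silenced on such a line, combine the two (>> behaves the same as > when the target is /dev/null):
some_code >/dev/null 2>&1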
Related
I tried to compare two files and output a customized string. The following is my script.
#!/bin/bash
./${1} > tmp
if ! diff -q tmp ans.txt &>/dev/null; then
>&2 echo "different"
else
>&2 echo "same"
fi
When I execute script, I get:
sh cmp.sh ans.txt
different
Files tmp and ans.txt differ
The weird part is when I type diff -q tmp ans.txt &>/dev/null. No output will show up.
How can I fix it (I don't want the line "Files tmp and ans.txt differ")? Thanks!
Most probably the version of sh you are using doesn't understand &>, the non-POSIX Bash extension that redirects both stdout and stderr at the same time. In a POSIX shell, the command command &>/dev/null is parsed as { command & }; >/dev/null: the & runs the command in the background, and >/dev/null is a separate, empty command that merely opens /dev/null for writing (valid syntax, but it executes nothing). Because launching a command in the background always succeeds, the if always succeeds.
Prefer not to use &>; use >/dev/null 2>&1 instead. Use diff to pretty-print a comparison of the files. Use cmp to compare files in batch scripts:
if cmp -s tmp ans.txt; then
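Putting it together, a sketch of the whole script rewritten around cmp (same file names as in the question):
#!/bin/bash
./"$1" > tmp
if cmp -s tmp ans.txt; then
    >&2 echo "same"
else
    >&2 echo "different"
fi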
I was debugging a shell script, and the problem was that the following code
if curl doesntexist -i &> /dev/null;
then
echo True
else
echo False
fi
is, when running zsh, not equivalent to:
if curl 6 -i 1> /dev/null 2> /dev/null;
then
echo True
else
echo False
fi
The latter echoes True as expected, but if I redirect with &>, the output is False.
I do not understand this behaviour, for example here it says that
&>name is like 1>name 2>name
Can someone explain why the two snippets do not behave the same when running in zsh? The zsh documentation says that it should also redirect stdout and stderr, which sounds like it should do the same as in bash:
&> Redirects both standard output and standard error (file descriptor 2) in the manner of ‘> word’
I suspect your script does not have a shebang to indicate which shell should be used to execute it, that you have made it executable, and are executing it with something like ./test.sh. If that is the case, adding something like #!/bin/bash or #!/bin/zsh will solve the problem.
Without a shebang, what actually executes your script depends on which shell you are executing it from. bash will execute the script with bash. zsh, however, will execute the script with /bin/sh.
In bash, &> is a redirection operator that redirects both standard output and standard error to the same file.
In /bin/sh, though, it is not a redirection operator. The command curl doesntexist -i &> /dev/null is parsed as two separate commands:
curl doesntexist -i &
> /dev/null
The first runs curl in the background, and immediately returns with a 0 exit status. (The exit status of curl itself is never considered.) The second command is a valid empty command that simply opens > /dev/null for writing, then exits with a 0 exit status.
As a result, no matter what curl might do, the exit status that if cares about is just the last one in the list, that of > /dev/null. Since that is 0, you get the True path.
In bash, where &> is the valid redirection operator, if looks at the exit status of curl as expected.
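If the script has to stay portable to shells where &> is not an operator, the POSIX spelling sidesteps the problem regardless of shebang (the same curl example as above):
if curl doesntexist -i > /dev/null 2>&1; then
    echo True
else
    echo False
fi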
I know how to redirect stdout to a file:
exec > foo.log
echo test
this will put the 'test' into the foo.log file.
Now I want to redirect the output into the log file AND keep it on stdout
i.e. it can be done trivially from outside the script:
script | tee foo.log
but I want to declare it within the script itself
I tried
exec | tee foo.log
but it didn't work.
#!/usr/bin/env bash
# Redirect stdout ( > ) into a process substitution ( >() ) running "tee"
exec > >(tee -i logfile.txt)
# Without this, only stdout would be captured - i.e. your
# log file would not contain any error messages.
# SEE (and upvote) the answer by Adam Spiers, which keeps STDERR
# as a separate stream - I did not want to steal from him by simply
# adding his answer to mine.
exec 2>&1
echo "foo"
echo "bar" >&2
Note that this is bash, not sh. If you invoke the script with sh myscript.sh, you will get an error along the lines of syntax error near unexpected token '>'.
If you are working with signal traps, you might want to use the tee -i option to avoid disruption of the output if a signal occurs. (Thanks to JamesThomasMoon1979 for the comment.)
Tools that change their output depending on whether they write to a pipe or a terminal (ls using colors and columnized output, for example) will detect the above construct as writing to a pipe.
There are options to enforce the colorizing / columnizing (e.g. ls -C --color=always). Note that this will result in the color codes being written to the logfile as well, making it less readable.
The accepted answer does not preserve STDERR as a separate file descriptor. That means
./script.sh >/dev/null
will not output bar to the terminal, only to the logfile, and
./script.sh 2>/dev/null
will output both foo and bar to the terminal. Clearly that's not the behaviour a normal user is likely to expect. This can be fixed by using two separate tee processes, both appending to the same log file:
#!/bin/bash
# See (and upvote) the comment by JamesThomasMoon1979
# explaining the use of the -i option to tee.
exec > >(tee -ia foo.log)
exec 2> >(tee -ia foo.log >&2)
echo "foo"
echo "bar" >&2
(Note that the above does not initially truncate the log file - if you want that behaviour you should add
>foo.log
to the top of the script.)
The POSIX.1-2008 specification of tee(1) requires that output is unbuffered, i.e. not even line-buffered, so in this case it is possible that STDOUT and STDERR could end up on the same line of foo.log; however that could also happen on the terminal, so the log file will be a faithful reflection of what could be seen on the terminal, if not an exact mirror of it. If you want the STDOUT lines cleanly separated from the STDERR lines, consider using two log files, possibly with date stamp prefixes on each line to allow chronological reassembly later on.
Solution for busybox, macOS bash, and non-bash shells
The accepted answer is certainly the best choice for bash. I'm working in a Busybox environment without access to bash, and it does not understand the exec > >(tee log.txt) syntax. It also does not do exec >$PIPE properly, trying to create an ordinary file with the same name as the named pipe, which fails and hangs.
Hopefully this will be useful to someone else who doesn't have bash.
Also, for anyone using a named pipe, it is safe to rm $PIPE, because that unlinks the pipe from the VFS, but the processes that use it still maintain a reference count on it until they are finished.
Note: "$@" is used (quoted) rather than $* so that arguments containing whitespace are passed through intact.
#!/bin/sh
if [ "$SELF_LOGGING" != "1" ]
then
# The parent process will enter this branch and set up logging
# Create a named piped for logging the child's output
PIPE=tmp.fifo
mkfifo $PIPE
# Launch the child process with stdout redirected to the named pipe
SELF_LOGGING=1 sh "$0" "$@" >$PIPE &
# Save PID of child process
PID=$!
# Launch tee in a separate process
tee logfile <$PIPE &
# Unlink $PIPE because the parent process no longer needs it
rm $PIPE
# Wait for child process, which is running the rest of this script
wait $PID
# Return the error code from the child process
exit $?
fi
# The rest of the script goes here
Inside your script file, put all of the commands within parentheses, like this:
(
echo start
ls -l
echo end
) | tee foo.log
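Brace grouping works the same way if you prefer not to add parentheses (note that braces require a newline or semicolon before the closing brace); a minor variation, not from the original answer:
{
echo start
ls -l
echo end
} | tee foo.log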
Easy way to make a bash script log to syslog. The script output is available both through /var/log/syslog and through stderr. syslog will add useful metadata, including timestamps.
Add this line at the top:
exec &> >(logger -t myscript -s)
Alternatively, send the log to a separate file:
exec &> >(ts |tee -a /tmp/myscript.output >&2 )
This requires moreutils (for the ts command, which adds timestamps).
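If you use the syslog variant, on systemd-based distributions the tagged messages can usually also be read back via journald (an assumption about your setup, not part of the original answer):
journalctl -t myscript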
Using the accepted answer, my script kept returning early (right after exec > >(tee ...)), leaving the rest of my script running in the background. As I couldn't get that solution to work my way, I found another solution / workaround to the problem:
# Logging setup
logfile=mylogfile
mkfifo ${logfile}.pipe
tee < ${logfile}.pipe $logfile &
exec &> ${logfile}.pipe
rm ${logfile}.pipe
# Rest of my script
This makes the script's output go from its process, through the named pipe, into the backgrounded tee process, which logs everything to disk and to the script's original stdout.
Note that 'exec &>' redirects both stdout and stderr, we could redirect them separately if we like, or change to 'exec >' if we just want stdout.
Even though the pipe is removed from the file system at the beginning of the script, it will continue to function until the processes finish. We just can't reference it by file name after the rm line.
Bash 4 has a coproc command which establishes a two-way pipe to a command and allows you to communicate through it.
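A minimal sketch of that coproc approach (assuming Bash 4+; TEE and foo.log are placeholder names, not from the original comment):
#!/bin/bash
coproc TEE { tee -a foo.log; }    # tee runs as a coprocess
exec >&"${TEE[1]}" 2>&1           # route stdout and stderr into its stdin
echo "this goes to the terminal and to foo.log"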
Can't say I'm comfortable with any of the solutions based on exec. I prefer to use tee directly, so I make the script call itself with tee when requested:
# my script:
check_tee_output()
{
# copy (append) stdout and stderr to log file if TEE is unset or true
if [[ -z $TEE || "$TEE" == true ]]; then
echo '-------------------------------------------' >> log.txt
echo '***' $(date) "$0" "$@" >> log.txt
TEE=false "$0" "$@" 2>&1 | tee --append log.txt
exit $?
fi
}
check_tee_output "$@"
rest of my script
This allows you to do this:
your_script.sh args # tee
TEE=true your_script.sh args # tee
TEE=false your_script.sh args # don't tee
export TEE=false
your_script.sh args # tee
You can customize this, e.g. make TEE=false the default instead, make TEE hold the log file instead, etc. I guess this solution is similar to jbarlow's, but simpler; maybe mine has limitations that I have not come across yet.
Neither of these is a perfect solution, but here are a couple things you could try:
exec >foo.log
tail -f foo.log &
# rest of your script
or
PIPE=tmp.fifo
mkfifo $PIPE
tee foo.log <$PIPE &
exec >$PIPE
# rest of your script
rm $PIPE
The second one would leave a pipe file sitting around if something goes wrong with your script, which may or may not be a problem (i.e. maybe you could rm it in the parent shell afterwards).
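To guarantee the FIFO is cleaned up even if the script dies early, a trap can own the removal (a small sketch layered on the second variant; tee is started before the exec so that opening the FIFO for writing does not block forever):
PIPE=tmp.fifo
mkfifo $PIPE
trap 'rm -f $PIPE' EXIT   # remove the FIFO however the script exits
tee foo.log <$PIPE &
exec >$PIPE
# rest of your script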
I want to copy stdout to a log file from within a bash script, meaning I don't want to call the script with output piped to tee; I want the script itself to handle it. I've successfully used this answer to accomplish this, using the following code:
#!/bin/bash
exec > >(sed "s/^/[${1}] /" | tee -a myscript.log)
exec 2>&1
# <rest of script>
echo "hello"
sleep 10
echo "world"
This works, but has the downside of output being buffered until the script is completed, as is also discussed in the linked answer. In the above example, both "hello" and "world" will show up in the log only after the 10 seconds have passed.
I am aware of the stdbuf command, and if running the script with
stdbuf -oL ./myscript.sh
then stdout is indeed continuously printed both to the file and the terminal.
However, I'd like this to be handled from within the script as well. Is there any way to combine these two solutions? I'd rather not resort to a wrapper script that simply calls the original script enclosed with "stdbuf -oL".
You can use a workaround and make the script execute itself with stdbuf, if a special argument is present:
#!/bin/bash
if [[ "$1" != __BUFFERED__ ]]; then
prog="$0"
stdbuf -oL "$prog" __BUFFERED__ "$@"
else
shift #discard __BUFFERED__
exec > >(sed "s/^/[${1}] /" | tee -a myscript.log)
exec 2>&1
# <rest of script>
echo "hello"
sleep 1
echo "world"
fi
This will mostly work:
if you run the script with ./test, it shows unbuffered [] hello\n[] world.
if you run the script with ./test 123 456, it shows [123] hello\n[123] world like you want.
it won't work, however, if you run it with bash test - $0 is set to test which is not your script. Fixing this is not in the scope of this question though.
The delay in your first solution is caused by sed, not by tee. Try this instead:
#!/bin/bash
exec 6>&1 2>&1>&>(tee -a myscript.log)
To "undo" the tee effect:
exec 1>&6 2>&6 6>&-
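In case the one-liner is hard to read, a roughly equivalent spelled-out sketch of the same file-descriptor juggling (assuming bash):
exec 6>&1                            # save the original stdout on fd 6
exec > >(tee -a myscript.log) 2>&1   # send stdout and stderr through tee
# ... rest of the script ...
exec 1>&6 2>&6 6>&-                  # restore both streams and close fd 6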
I scheduled a script using the at scheduler in Linux.
The job ran fine, but the echo statements which I had redirected to a file are nowhere to be found.
The at scheduling command is as follows:
at -f /app/data/scripts/func_test.sh >> /app/data/log/log.txt 2>&1 -v 09:50
Can anyone point out what the issue with the above command is?
I cannot see any echo statements from the script in the log.txt file
To include shell syntax like I/O redirection, you'll need to either fold it into your script, or pass the input to at via standard input, like so:
at -v 09:50 <<EOF
sh /app/data/scripts/func_test.sh >> /app/data/log/log.txt 2>&1
EOF
If func_test.sh is already executable, you can omit the sh from the beginning of the command; it's there to ensure that you are passing a valid command line to at.
You can also simply ensure that your script itself redirects all its output to a specific log file. As an example,
#!/bin/bash
echo foo
echo bar
becomes
#!/bin/bash
{
echo foo
echo bar
} >> /app/data/log/log.txt 2>&1
Then you can simply run your script with at using
at -f /app/data/scripts/func_test.sh -v 09:50
with no output redirection, because the script itself already redirects all its output to that file.