I want to create a script that can be run in either verbose or silent mode. My idea was to create a variable containing "&> /dev/null" and clear it when the script is run in verbose mode. The silent mode works fine, but if I want to pass this variable as the last "argument" to a command (see my example below), it does not work.
Here is an example:
I want to zip something (I know zip has a -q option; the question is more theoretical) and if I write this it works as I intended:
$ zip bar.zip foo
adding: foo (stored 0%)
$ zip bar.zip foo &> /dev/null
$
But when I put the &> /dev/null into a variable, I get this:
$ var="&> /dev/null"
$ zip bar.zip foo "$var"
zip warning: name not matched: &> /dev/null
updating: foo (stored 0%)
I have seen other solutions that deliberately redirect parts of a script, but I'm curious whether my idea can work, and if not, why not.
Something like this would be more convenient in my script, because I only need to redirect some of the lines or commands, yet there are too many to wrap each one in an if. To be clear, I'm trying something like this:
if [ "$silent" = yes ]; then
    verbose="&> /dev/null"
else
    verbose=
fi
command1 "$1" "$2"
command2 "$1"
command3 "$1" "$2" "$3" "$verbose"    # the redirection is not applied here
command4
Bash will recognize a file name of /dev/stdout as standard output when it is used in a redirection, whether or not your file system has such an entry.
# If an argument is given to your script, use that value
# Otherwise, use /dev/stdout
out=${1:-/dev/stdout}
zip bar.zip foo > "$out"
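For example, assuming the script above is saved as myscript.sh (a hypothetical name), silent mode is then just a matter of passing /dev/null:
$ ./myscript.sh              # zip's output goes to the terminal via /dev/stdout
$ ./myscript.sh /dev/null    # silent mode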
Your issue is the order of the expansions performed by the shell. Redirections are parsed before parameter expansion, so by the time "$var" is expanded its contents are treated as an ordinary word and handed to zip as a literal argument; they are never re-read as redirection operators.
You need the shell to re-parse the command line after the variable has been expanded. To do this you can use eval, which takes an extra pass, expanding variables and then interpreting the result as shell input:
eval zip bar.zip foo "$var"
However, be wary of using eval: unchecked values can lead to exploits, and its use is generally discouraged.
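You can, however, see exactly what zip receives without eval by turning on bash's trace mode (a small demo):
$ var="&> /dev/null"
$ set -x
$ zip bar.zip foo "$var"
+ zip bar.zip foo '&> /dev/null'
$ set +x
The trace shows the variable's contents being passed as a single literal argument rather than being interpreted as a redirection.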
You could consider setting up your own file descriptor and pointing it at /dev/null, a real file, or even stdout. Each command whose output should go to that destination would then be written as:
zip bar.zip foo >&3
http://www.tldp.org/LDP/abs/html/io-redirection.html#FDREF
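A minimal sketch of that approach, assuming a $verbose flag set elsewhere in the script:
if [ -n "$verbose" ]; then
    exec 3>&1            # fd 3 becomes a copy of stdout
else
    exec 3>/dev/null     # fd 3 discards everything
fi
zip bar.zip foo >&3 2>&3
exec 3>&-                # close fd 3 when finished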
Is there a filename that is assignable to a variable (i.e. not a magic builtin shell token like &1) that will let me redirect to stdout?
What I finally want to do is run something like this in a cron script:
LOG=/tmp/some_file
...
some_command >> $LOG 2>&1
echo "blah" >> $LOG
...
Conveniently, this lets me turn off log noise by redirecting to /dev/null later when I'm sure there is nothing that can fail (or, at least, nothing that I care about!) without rewriting the whole script. Yes, turning off logging isn't precisely best practice -- but once this script works, there is not much that can conceivably go wrong, and trashing the disk with megabytes of log info that nobody wants to read isn't desired.
In case something unexpectedly fails 5 years later, it is still possible to turn on logging again by flipping a switch.
On the other hand, while writing and debugging the script, which involves calling it manually from the shell, it would be extremely nice if it could just dump the output to the console. That way I wouldn't need to tail the logfile manually.
In other words, what I'm looking for is something like /proc/self/fd/0 in bash-talk that I can assign to LOG. As it happens, /proc/self/fd/0 works just fine on my Linux box, but I wonder if there isn't such a thing built into bash already (which would generally be preferable).
Basic solution:
#!/bin/bash
LOG=/dev/null
# uncomment next line for debugging (logging)
# LOG=/tmp/some_file
{
    some_command
    echo "blah"
} > "$LOG" 2>&1
More evolved:
#!/bin/bash
ENABLE_LOG=0 # 1 to log standard & error outputs
LOG=/tmp/some_file
{
    some_command
    echo "blah"
} 2>&1 | if (( ENABLE_LOG ))
then
    cat > "$LOG"
else
    cat > /dev/null
fi
A more elegant solution, based on DevSolar's idea:
#!/bin/bash
# uncomment next line for debugging (logging)
# exec 1> >(tee /tmp/some_file) 2>&1
some_command
echo "blah"
Thanks to the awesome hints by olibre and suvayu, I came up with this (for the record, the version that I'm using now):
# log to file
# exec 1>> /tmp/logfile 2>&1
# be quiet
# exec 1> /dev/null 2>&1
# dump to console
exec 2>&1
Just comment and uncomment lines to pick one of the three, depending on what is desired, and don't worry about anything else, ever again. This sends all subsequent output to a file, to the console, or nowhere at all.
No output duplicated, works universally the same for every command (without explicit redirects), no weird stuff, and as easy as it gets.
If I have understood your requirement clearly, the following should do what you want
exec >> $LOG
exec 2>&1
Stdout and stderr of all subsequent commands will be appended to the file $LOG.
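The two calls can also be combined into one line, which makes the required ordering explicit:
exec >> $LOG 2>&1    # point stdout at $LOG first, then point stderr at wherever stdout now points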
Use /dev/stdout
Here's another SO answer that mentions this solution: Difference between stdout and /dev/stdout
I have a couple of BASH scripts that launch programs I commonly use with common arguments using variables for the command and arguments. The invocation appears at the end like so:
$PROGRAM $ARG1 $ARG2 &
Now I want to redirect stderr and stdout to /dev/null by default. I want to be able to disable that with a switch to the script (-v for verbose). But if I try to assign "2>&1 > /dev/null" to a variable, say REDIRECTION (clearing it if verbose is specified), and try to invoke like so:
$PROGRAM $ARG1 $ARG2 $REDIRECTION &
The redirection directives are passed as arguments to the program. Is there a way to make this work? Or do I have to fall back on my current solution, which is two separate invocation lines, one with and one without the redirection directives, depending on the verbosity?
One option is to use the eval builtin, which processes its arguments like a Bash command, handling things like redirection operators. However, eval is pretty risky, since it will reprocess all of its arguments, even ones that have already been processed properly, and this can cause bizarre or unsafe behavior if not done perfectly.
Since the only problematic difference between the two versions of your command is the presence or absence of redirections, a better option is to use the exec builtin. This command:
exec >/dev/null 2>&1
redirects the STDOUT and STDERR of the remainder of a shell-script to /dev/null. This command:
[[ "$IS_VERBOSE" ]] || exec >/dev/null 2>&1
will run exec >/dev/null 2>&1 unless $IS_VERBOSE is non-blank, in which case it does nothing. (You can replace [[ "$IS_VERBOSE" ]] with whatever sort of conditional expression you're already using to detect verbosity.)
So, what you'd want is something like this:
(
[[ "$IS_VERBOSE" ]] || exec >/dev/null 2>&1
"$PROGRAM" "$ARG1" "$ARG2" &
)
The parentheses ( ... ) are to set up a subshell, so the exec command (if run) only affects up until the ).
(By the way, note that I changed your 2>&1 > /dev/null to > /dev/null 2>&1: the order matters.)
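To see why the order matters, here is a quick sketch using ls on a nonexistent file:
ls nosuchfile > /dev/null 2>&1    # both streams silenced
ls nosuchfile 2>&1 > /dev/null    # the error message still reaches the terminal
In the second line, STDERR is duplicated onto the terminal before STDOUT is pointed at /dev/null, so the error still shows up.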
Rather than trying to encode shell syntax into a parameter, just put the default destination for each redirection in a parameter, and provide a way to override the defaults.
# A little bash-specific, but appropriate defaults should exist for the shell
# of your choice.
STDOUT=/dev/stdout
STDERR=/dev/stderr
if some-test; then
STDOUT=some-other-file
fi
if some-other-test; then
STDERR=different-file
fi
$PROGRAM $ARG1 $ARG2 > "$STDOUT" 2> "$STDERR"
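With this structure, a silent mode just points both parameters at /dev/null (a sketch, assuming the $IS_VERBOSE flag from the earlier answer):
if [ -z "$IS_VERBOSE" ]; then
    STDOUT=/dev/null
    STDERR=/dev/null
fi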
You can build any command line you like and then use eval to execute it:
line="echo hello"
if ....
line="$line > junk.out"
fi
eval $line
(But I would probably go with the two-line approach you suggest, as it is easier to read and understand...)
Is there a better way to save a command line before it is executed?
A number of my /bin/bash scripts construct a very long command line. I generally save the command line to a text file for easier debugging and (sometimes) execution.
My code is littered with this idiom:
echo >saved.txt cd $NEW_PLACE '&&' command.py --flag $FOO $LOTS $OF $OTHER $VARIABLES
cd $NEW_PLACE && command.py --flag $FOO $LOTS $OF $OTHER $VARIABLES
Obviously, updating code in two places is error-prone. Less obvious is that certain parts need to be quoted in the first line but not the next, so I cannot do the update by simple copy-and-paste. If the command includes quotes, it gets even more complicated.
There has got to be a better way! Suggestions?
How about creating a helper function which logs and then executes the command? "$@" will expand to whatever command you pass in.
log() {
    echo "$@" >> /tmp/cmd.log
    "$@"
}
Use it by simply prepending log to any existing command. It won't handle && or || though, so you'll have to log those commands separately.
log cd $NEW_PLACE && log command.py --flag $FOO $LOTS $OF $OTHER $VARIABLES
Are you looking for set -x (or bash -x)? This writes every command to standard error before executing it.
1. Use script and you will get everything archived.
2. Use -x for tracing your script, e.g. run it as bash -x script_name args...
3. Use set -x in your current bash (you will get your commands echoed with globs and variables substituted).
4. Combine 2 and 3 with 1; a small tracing sketch follows.
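For example, tracing can be switched on for just one section of a script (a sketch reusing the command from the question):
set -x                                  # start echoing expanded commands to stderr
cd $NEW_PLACE && command.py --flag $FOO
set +x                                  # stop tracing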
If you just execute the command file immediately after creating it, you will only need to construct the command once, with one level of escapes.
If that would create too many discrete little command files, you could create shell procedures and then run an individual one.
(echo fun123 '()' {
echo echo something important
echo }
) > saved.txt
. saved.txt
fun123
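The same file can be generated more readably with a quoted here document (a sketch of the same idea):
cat > saved.txt <<'EOF'
fun123 () {
    echo something important
}
EOF
. saved.txt
fun123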
It sounds like your goal is to keep a good log of what your script did so that you can debug it when things go bad. I would suggest using the -x parameter in your shebang like so:
#!/bin/sh -x
# the -x above makes the shell print out every command before it is executed.
# you can also use the -e option to make bash exit immediately if any command
# returns a non-zero return code.
Also, see my answer on a previous question about redirecting all of this debug output to a log when --log is passed into your shell script. This will redirect all stdout and stderr. Occasionally, you'll still want to write to the terminal to give the user feedback. You can do this by saving stdout to a new file descriptor and using that with echo (or other programs):
exec 3>&1 # save stdout to fd 3
# perform log redirection as per above linked answer
# now all stdout and stderr will be redirected to the file and console.
# remove the `tee` command if you want it to go just to the file.
# now if you want to write to the original stdout (i.e. terminal)
echo "Hello World" >&3
# "Hello World" will be written to the terminal and not the logs.
I suggest you look into the xargs command. It was made to solve the problem of programmatically building up argument lists and passing them off to executables for batch processing.
http://en.wikipedia.org/wiki/Xargs
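For instance, to build one long command line from a generated list of arguments (a sketch with hypothetical file names, reusing command.py from the question):
printf '%s\n' file1 file2 file3 | xargs command.py --flag
# equivalent to running: command.py --flag file1 file2 file3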
I have a proprietary command-line program that I want to call in a bash script. It has several options in a .conf file that are not available as command-line switches. I can point the program to any .conf file using a switch, -F.
I would rather not manage a .conf file separate from this script. Is there a way to create a temporary document to use as the .conf file?
I tried the following:
echo setting=value|my_prog -F -
But it does not recognize the - as stdin.
You can try /dev/stdin instead of -.
You can also use a here document:
my_prog -F /dev/stdin <<OPTS
opt1 arg1
opt2 arg2
OPTS
Finally, you can let bash allocate a file descriptor for you (if you need stdin for something else, for example):
my_prog -F <(cat <<OPTS
opt1 arg1
opt2 arg2
OPTS
)
When writing this question, I figured it out and thought I would share:
exec 3< <(echo setting=value)
my_prog -F /dev/fd/3
The program reads from file descriptor 3, and I don't need to manage any permissions or worry about deleting a temporary file when I'm done.
You can use process substitution for this:
my_prog -F <(command-that-generates-config)
where command-that-generates-config can be something like echo setting=value or a function.
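For example, with a function as the generator (make_config and its second setting are hypothetical names):
make_config() {
    echo "setting=value"      # the setting from the question
    echo "other=value"        # any further hypothetical settings
}
my_prog -F <(make_config)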
It sounds like you want to do something like this:
#!/bin/bash
MYTMPFILE=/tmp/_myfilesettings.$$
cat <<-! > $MYTMPFILE
somekey=somevalue
someotherkey=somevalue
!
my_prog -F $MYTMPFILE
rm -f $MYTMPFILE
This uses what is known as a "here" document: everything between the cat <<-! line and the closing ! is fed to the command as standard input. The '-' tells the shell to strip leading tabs.
You can use anything as the "To here" marker, e.g., this would work as well:
cat <<-EOF > somewhere
stuff
more stuff
EOF
It's really annoying to type this whenever I don't want to see a program's output. I'd love to know if there is a shorter way to write:
$ program >/dev/null 2>&1
Generic shell is the best, but other shells would be interesting to know about too, especially bash or dash.
>& /dev/null
You can write a function for this:
nullify() {
    "$@" >/dev/null 2>&1
}
To use this function:
nullify program arg1 arg2 ...
Of course, you can name the function whatever you want. It can be a single character for example.
By the way, you can use exec to redirect stdout and stderr to /dev/null temporarily. I don't know if this is helpful in your case, but I thought of sharing it.
# Save stdout, stderr to file descriptors 6, 7 respectively.
exec 6>&1 7>&2
# Redirect stdout, stderr to /dev/null
exec 1>/dev/null 2>/dev/null
# Run program.
program arg1 arg2 ...
# Restore stdout, stderr.
exec 1>&6 2>&7
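If the saved descriptors are no longer needed after restoring, they can be closed as a final cleanup step:
exec 6>&- 7>&-    # close the saved copies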
In bash, zsh, and dash:
$ program >&- 2>&-
It may also appear to work in other shells, since >&- is widely supported.
Note that this solution closes the file descriptors rather than redirecting them to /dev/null: a program that then tries to write will get a "bad file descriptor" error, which could cause it to abort.
Most shells support aliases. For instance, in my .zshrc I have things like:
alias -g no='2> /dev/null > /dev/null'
Then I just type
program no
If /dev/null is too much to type, you could (as root) do something like:
ln -s /dev/null /n
Then you could just do:
program >/n 2>&1
But of course, scripts you write in this way won't be portable to other systems without setting up that symlink first.
It's also worth noting that redirecting output is often not really necessary. Many Unix and Linux programs accept a "quiet" or "silent" flag, usually -q or -s, that suppresses output, so that success or failure is reported only through the exit status.
For example
grep foo bar.txt >/dev/null 2>&1
if [ $? -eq 0 ]; then
do_something
fi
Can be rewritten as
grep -q foo bar.txt
if [ $? -eq 0 ]; then
do_something
fi
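Since grep's exit status is all that is being tested, the command can go straight into the if:
if grep -q foo bar.txt; then
    do_something
fi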
Edit: the solutions based on (:) or |: might cause an error, because : does not read its stdin. Though this might not be as bad as closing the file descriptor, as proposed in Zaz's answer.
For bash and bash-compliant shells (zsh...):
$ program &>/dev/null
OR
$ program &> >(:) # may actually cause an error or abort the program
For all shells:
$ program 2>&1 >/dev/null
OR
$ program 2>&1|: # may actually cause an error or abort the program
$ program 2>&1 > >(:) does not work in dash, because dash does not support process substitution.
Explanations:
2>&1 redirects stderr (file descriptor 2) to stdout (file descriptor 1).
| is the regular piping of stdout to the stdin of another command.
: is a shell builtin which does nothing (it is equivalent to true).
&> redirects both stdout and stderr outputs to a file.
>(your-command) is process substitution. It is replaced with a path to a special file, for instance: /proc/self/fd/6. This file is used as input file for the command your-command.
Note: A process trying to write to a closed file descriptor will get an EBADF (bad file descriptor) error, which is more likely to cause an abort than writing to | true, which would merely cause an EPIPE (broken pipe) error; see Charles Duffy's comment.
Ayman Hourieh's solution works well for one-off invocations of overly chatty programs. But if there's only a small set of commonly called programs for which you want to suppress output, consider silencing them by adding the following to your .bashrc file (or the equivalent, if you use another shell):
CHATTY_PROGRAMS=(okular firefox libreoffice kwrite)
for PROGRAM in "${CHATTY_PROGRAMS[@]}"
do
    printf -v eval_str '%q() { command %q "$@" &>/dev/null; }' "$PROGRAM" "$PROGRAM"
    eval "$eval_str"
done
This way you can continue to invoke programs using their usual names, but their stdout and stderr output will disappear into the bit bucket.
Note also that certain programs allow you to configure how much logging/debugging output they spew. For KDE applications, you can run kdebugdialog and selectively or globally disable debugging output.
Seems to me that the most portable solution, and best answer, would be a macro on your terminal (your PC).
That way, no matter what server you log in to, it will always be there.
If you happen to run Windows, you can get the desired outcome with AHK (AutoHotkey, an open-source tool) in two tiny lines of code that translate any string of keys into any other string of keys, in situ.
You type "ugly.sh >>NULL" and it will rewrite it as "ugly.sh > /dev/null 2>&1" or whatever you like.
Solutions for other platforms are somewhat more difficult. AppleScript can paste in keystrokes, but it can't be triggered as easily.