conditional redirection in bash

I have a bash script that I want to be quiet when run without an attached tty (e.g. from cron).
I was looking for a way to conditionally redirect output to /dev/null in a single line.
This is an example of what I had in mind, but the real script will have many more commands that produce output:
#!/bin/bash
# conditional-redirect.sh
if tty -s; then
REDIRECT=
else
REDIRECT=">& /dev/null"
fi
echo "is this visible?" $REDIRECT
Unfortunately, this does not work:
$ ./conditional-redirect.sh
is this visible?
$ echo "" | ./conditional-redirect.sh
is this visible? >& /dev/null
What I don't want to do is duplicate every command in a with-redirection and a without-redirection variant:
if tty -s; then
echo "is this visible?"
else
echo "is this visible?" >& /dev/null
fi
EDIT:
It would be great if the solution also provided a way to output something in "quiet" mode, e.g. when something is really wrong I might still want to get a notice from cron.

For bash, you can use the line:
exec &>/dev/null
This will direct all stdout and stderr to /dev/null from that point on. It uses the no-argument form of exec.
Normally, something like exec xyzzy would replace the current process with a new program, but the no-argument form simply changes the shell's redirections while keeping the current program running.
So, in your specific case, you could use something like:
if ! tty -s ; then
    exec &>/dev/null
fi
If you want the majority of output to be discarded but still want to output some stuff, you can create a new file handle to do that. Something like:
if ! tty -s ; then
    exec 3>&1 &>/dev/null
else
    exec 3>&1
fi
echo Normal # won't see this (when there is no tty).
echo Failure >&3 # will see this either way.
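Tying this back to the original question, a minimal sketch of conditional-redirect.sh using this technique might look like the following (assumption: the goal is to stay quiet without a tty, while notices written to fd 3 still reach cron's captured output and get mailed):
#!/bin/bash
# conditional-redirect.sh (sketch)
if tty -s; then
    exec 3>&1                 # interactive: fd 3 is just another handle on stdout
else
    exec 3>&1 &>/dev/null     # no tty: keep fd 3 on the original stdout, discard the rest
fi
echo "is this visible?"                  # hidden when run from cron
echo "something is really wrong" >&3     # still reaches cron's mail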

I found another solution, but I feel it is clumsy compared to paxdiablo's answer:
if tty -s; then
REDIRECT=/dev/tty
else
REDIRECT=/dev/null
fi
echo "Normal output" &> $REDIRECT

You can use a function:
function the_code {
echo "is this visible?"
# as many code lines as you want
}
if tty -s; then # or other condition
the_code
else
the_code >& /dev/null
fi

This works well for me. If DUMP_FILE is empty, output goes to stdout; otherwise it goes to the file. It does the job without explicit redirection, using only pipes and existing applications.
function stdout_or_file
{
local DUMP_FILE=${1:-}
if [ -z "${DUMP_FILE}" ]; then
cat
else
sed -n "w ${DUMP_FILE}"
fi
}
function foo()
{
local MSG=$1
echo "info: ${MSG}"
}
foo "bar" | stdout_or_file ${DUMP_FILE}
Of course, you can also squeeze this into one line:
foo "bar" | if [ -z "${DUMP_FILE}" ]; then cat; else sed -n "w ${DUMP_FILE}"; fi
Besides sed -n "w ${DUMP_FILE}", another command that does the same is dd status=none of=${DUMP_FILE}.

The simplest solution is to use eval (a shell builtin), as it will act on the redirection in the expanded variable. It will also act on anything else in the command line, so add extra quoting as required (note the extra single quotes around the echo string below: the '?' would otherwise trigger shell filename expansion).
#!/bin/bash
# conditional-redirect.sh
if tty -s; then
REDIRECT=
else
REDIRECT=">& /dev/null"
fi
eval echo '"is this visible?"' $REDIRECT

Related

Bash: redirect to screen or /dev/null depending on flag

I'm trying to come up with a way to pass a silent flag to a bash script so that all output is directed to /dev/null if the flag is present and to the screen if it is not.
An MWE of my script would be:
#!/bin/bash
# Check if silent flag is on.
if [ $2 = "-s" ]; then
echo "Silent mode."
# Non-working line.
out_var = "to screen"
else
echo $1
# Non-working line.
out_var = "/dev/null"
fi
command1 > out_var
command2 > out_var
echo "End."
I call the script with two variables, the first one is irrelevant and the second one ($2) is the actual silent flag (-s):
./myscript.sh first_variable -s
Obviously the out_var lines don't work, but they give an idea of what I want: a way to direct the output of command1 and command2 to either the screen or to /dev/null depending on -s being present or not.
How could I do this?
You can use the naked exec command to redirect the current program without starting a new one.
Hence, a -s flag could be processed with something like:
if [[ "$1" == "-s" ]] ; then
exec >/dev/null 2>&1
fi
The following complete script shows how to do it:
#!/bin/bash
echo XYZZY
if [[ "$1" == "-s" ]] ; then
exec >/dev/null 2>&1
fi
echo PLUGH
If you run it with -s, you get XYZZY but no PLUGH output (well, technically, you do get PLUGH output but it's sent to the /dev/null bit bucket).
If you run it without -s, you get both lines.
The before and after echo statements show that exec is acting as described, simply changing redirection for the current program rather than attempting to re-execute it.
As an aside, I've assumed you meant "to screen" to be "to the current standard output", which may or may not be the actual terminal device (for example if it's already been redirected to somewhere else). If you do want the actual terminal device, it can still be done (using /dev/tty for example) but that would be an unusual requirement.
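For completeness, that unusual case is a one-liner; /dev/tty refers to the controlling terminal of the process (assuming the script actually has one):
echo "this goes to the terminal even if stdout is redirected" > /dev/tty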
There are lots of things that could be wrong with your script; I won't attempt to guess since you didn't post any actual output or errors.
However, there are a couple of things that can help:
You need to figure out where your output is really going. Standard output and standard error are two different things, and redirecting one doesn't necessarily redirect the other.
In Bash, you can send output to /dev/stdout or /dev/stderr, so you might want to try something like:
# Send standard output to the tty/pty, or wherever stdout is currently going.
cmd > /dev/stdout
# Do the same thing, but with standard error instead.
cmd > /dev/stderr
Redirect standard error to wherever standard output currently points (normally the terminal), then send standard output to /dev/null. Order matters here:
cmd 2>&1 > /dev/null
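A quick illustration of the difference (cmd stands for any command that writes to both streams):
# stderr ends up wherever stdout pointed at the time (the terminal),
# stdout itself is then discarded:
cmd 2>&1 > /dev/null
# both streams are discarded: stdout goes to /dev/null first,
# then stderr is pointed at the same place:
cmd > /dev/null 2>&1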
There may be other problems with your script, too, but for issues with Bash shell redirections the GNU Bash manual is the canonical source of information. Hope it helps!
If you don't want to redirect all output from your script, you can use eval. For example:
$ fd=1
$ eval "echo hi >&$fd" >/dev/null
$ fd=2
$ eval "echo hi >&$fd" >/dev/null
hi
Make sure you use double quotes so that the variable is replaced before eval evaluates it.
In your case, you just needed to change out_var = "to screen" to out_var="/dev/tty" (no spaces around the =), and use it like this: command1 > $out_var (note the $ you were missing).
I implemented it like this
# Set debug flag as desired
DEBUG=1
# DEBUG=0
if [ "$DEBUG" -eq "1" ]; then
OUT='/dev/tty'
else
OUT='/dev/null'
fi
# actual script use commands like this
command > $OUT 2>&1
# or like this if you need
command 2> $OUT
Of course you can also set the debug mode from a cli option, see How do I parse command line arguments in Bash?
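For example, a minimal sketch (assuming a hypothetical -d option):
DEBUG=0
if [ "$1" = "-d" ]; then
    DEBUG=1
fi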
And you can have multiple debug or verbose levels like this
# Set VERBOSE level as desired
# VERBOSE=0
VERBOSE=1
# VERBOSE=2
VERBOSE1='/dev/null'
VERBOSE2='/dev/null'
if [ "$VERBOSE" -gte 1 ]; then
VERBOSE1='/dev/tty'
fi
if [ "$VERBOSE" -gte 2 ]; then
VERBOSE2='/dev/tty'
fi
# actual script use commands like this
command > $VERBOSE1 2>&1
# or like this if you need
command 2> $VERBOSE2

Simple bash script for starting application silently

Here I am again. Today I wrote a little script that is supposed to start an application silently in my debian env.
Easy as
silent "npm search 1234556"
This runs, but the output is not hidden at all.
As you can see, I commented the section where I am having trouble.
This line:
$($cmdLine) &
doesn't hide the application's output, but this one
$($1 >/dev/null 2>/dev/null) &
works perfectly. What am I missing? Many thanks.
#!/bin/sh
# Daniele Brugnara
# October, 2013
# Silently exec a command line passed as argument
errorsRedirect=""
if [ -z "$1" ]; then
echo "Please, don't joke me..."
exit 1
fi
cmdLine="$1 >/dev/null"
# if passed a second parameter, errors will be hidden
if [ -n "$2" ]; then
cmdLine="$cmdLine 2>/dev/null"
fi
# not working
$($cmdLine) &
# works perfectly
#$($1 >/dev/null 2>/dev/null) &
With the use of the evil eval, the following script will work:
#!/bin/sh
# Silently exec a command line passed as argument
errorsRedirect=""
if [ -z "$1" ]; then
echo "Please, don't joke me..."
exit 1
fi
cmdLine="$1 >/dev/null"
# if passed a second parameter, errors will be hidden
if [ -n "$2" ]; then
cmdLine="$cmdLine 2>&1"
fi
eval "$cmdLine &"
Rather than building up a command with redirection tacked on the end, you can incrementally apply it:
#!/bin/sh
if [ -z "$1" ]; then
exit
fi
exec >/dev/null
if [ -n "$2" ]; then
exec 2>&1
fi
exec $1
This first redirects stdout of the shell script to /dev/null. If the second argument is given, it redirects stderr of the shell script too. Then it runs the command which will inherit stdout and stderr from the script.
I removed the ampersand (&) since being silent has nothing to do with running in the background. You can add it back (and remove the exec on the last line) if it is what you want.
I added exec at the end as it is slightly more efficient. Since it is the end of the shell script, there is nothing left to do, so you may as well be done with it, hence exec.
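Assuming the script is saved as silent.sh (name only for illustration), usage would look like this:
# stdout hidden, stderr still visible:
./silent.sh "npm search 1234556"
# any second argument hides stderr as well:
./silent.sh "npm search 1234556" quiet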
& means that you're running the command in the background, whereas
1>/dev/null 2>/dev/null
means that you redirect the output to the bit bucket, which is why you don't see anything.
Furthermore, cmdLine="$1 >/dev/null" is incorrect; you should use ' instead of ":
cmdLine='$1 >/dev/null'
You can build your command line in a variable and run bash with it in the background:
bash -c "$cmdLine" &
Note that it might be useful to store the program's output (stdout/stderr) somewhere instead of throwing it away into /dev/null.
In addition, why do you need errorsRedirect?
You can even add a wait at the end, just to be safe...if you want...
#!/bin/sh
# Daniele Brugnara
# October, 2013
# Silently exec a command line passed as argument
[ ! $1 ] && echo "Please, don't joke me..." && exit 1
cmdLine="$1>/dev/null"
# if passed a second parameter, errors will be hidden
[ $2 ] && cmdLine+=" 2>/dev/null"
# echo the command line, then run it in the background
echo "Running \"$cmdLine\""
bash -c "$cmdLine" &
wait

How to put "> /dev/null 2>&1" into a variable?

This is how I wish it to work:
if [ $debug = 0 ]; then
silent=""
else
silent='> /dev/null 2>&1'
fi
#some command
wget some.url $silent
So in case $debug is set, it becomes
wget some.url > /dev/null 2>&1
Otherwise if $debug is not set to 1, it becomes
wget some.url
Storing "> /dev/null 2>&1" in a variable doesn't work. How can I do that?
This could be a good excuse to use a shell function:
silent() {
"$#" > /dev/null 2>&1
}
Now you can silence programs by running them with:
silent wget some.url
If you want to only silence things conditionally, that's easy enough:
silent() {
if [[ $debug ]] ; then
"$#"
else
"$#" &>/dev/null
fi
}
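Usage of the conditional version is the same; set debug to any non-empty value to keep the output (a small sketch using the names above):
silent wget some.url       # output discarded (debug unset)
debug=1
silent wget some.url       # output shown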
You need the shell to actually interpret the variable contents as part of the command line, not just as a string to be passed as an argument to the executable. Check out the eval shell builtin.
Watch out for security holes!
A slightly different approach, but something like this might work:
if [ $debug = 1 ]; then
exec > /dev/null
fi
#some command
wget some.url
It conditionally replaces stdout with /dev/null.
Use eval:
eval wget some.url $silent
eval causes the arguments to be reinterpreted as a shell command, rather than as arguments to the program being called.
Be careful; don't run eval on unknown or external input, or you will expose yourself to a big security hole.

Bash command substitution stdout+stderr redirect

Good day. I have a series of commands that I wanted to execute via a function so that I could get the exit code and perform console output accordingly. With that being said, I have two issues here:
1) I can't seem to direct stderr to /dev/null.
2) The first echo line is not displayed until the $1 is executed. It's not really noticeable until I run commands that take a while to process, such as searching the hard drive for a file. Additionally, it's obvious that this is the case, because the output looks like:
sh-3.2# ./runScript.sh
sh-3.2# com.apple.auditd: Already loaded
sh-3.2# Attempting... Enable Security Auditing ...Success
In other words, the stderr was displayed before "Attempting... $2"
Here is the function I am trying to use:
#!/bin/bash
function saveChange {
echo -ne "Attempting... $2"
exec $1
if [ "$?" -ne 0 ]; then
echo -ne " ...Failure\n\r"
else
echo -ne " ...Success\n\r"
fi
}
saveChange "$(launchctl load -w /System/Library/LaunchDaemons/com.apple.auditd.plist)" "Enable Security Auditing"
Any help or advice is appreciated.
This is how you redirect stderr to /dev/null:
command 2> /dev/null
e.g.
ls -l 2> /dev/null
Your second issue (the ordering of the echo) is likely caused by the way you invoke the script: the command substitution $(launchctl load -w /System/Library/LaunchDaemons/com.apple.auditd.plist) runs before your function is ever called.
The first echo line is displayed later because it is executed second: $(...) runs the command while the argument is being built. Try the following:
#!/bin/bash
function saveChange {
echo -ne "Attempting... $2"
err=$($1 2>&1)
if [ -z "$err" ]; then
echo -ne " ...Success\n\r"
else
echo -ne " ...Failured\n\r"
exit 1
fi
}
saveChange "launchctl load -w /System/Library/LaunchDaemons/com.apple.auditd.plist" "Enable Security Auditing"
EDIT: I noticed that launchctl does not actually set $? on failure, so the script captures STDERR to detect the error instead.

Shell scripting: die on any error

Suppose a shell script (/bin/sh or /bin/bash) contained several commands. How can I cleanly make the script terminate if any of the commands has a failing exit status? Obviously, one can use if blocks and/or callbacks, but is there a cleaner, more concise way? Using && is not really an option either, because the commands can be long, or the script could have non-trivial things like loops and conditionals.
With standard sh and bash, you can
set -e
It will
$ help set
...
-e Exit immediately if a command exits with a non-zero status.
It also works (from what I could gather) with zsh, and it should work for any Bourne shell descendant.
With csh/tcsh, you have to launch your script with #!/bin/csh -e
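A minimal illustration of the behaviour (false stands in for any failing command):
#!/bin/bash
set -e
echo "before"
false           # non-zero exit status: the script stops here
echo "after"    # never reached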
Maybe you could use:
$ <any_command> || exit 1
You can check $? to see what the most recent exit code is, e.g.:
#!/bin/sh
# A Tidier approach
check_errs()
{
# Function. Parameter 1 is the return code
# Para. 2 is text to display on failure.
if [ "${1}" -ne "0" ]; then
echo "ERROR # ${1} : ${2}"
# as a bonus, make our script exit with the right error code.
exit ${1}
fi
}
### main script starts here ###
grep "^${1}:" /etc/passwd > /dev/null 2>&1
check_errs $? "User ${1} not found in /etc/passwd"
USERNAME=`grep "^${1}:" /etc/passwd|cut -d":" -f1`
check_errs $? "Cut returned an error"
echo "USERNAME: $USERNAME"
check_errs $? "echo returned an error - very strange!"
