Pipeline metacharacter in variable in bash

In my bash script I need to check if logger binary exists. If so, I pipe the application output to it.
Edit: it has to be an actual pipe; the application runs permanently, so its output must be streamed to logger as it is produced.
I tried to put the pipeline stuff to a variable and use it later. Something like:
if [ -e /usr/bin/logger ]; then
OUT=| /usr/bin/logger
fi
application param1 2>&1 $OUT > /dev/null &
but it doesn't work: the output is not piped to the logger. If I put the pipeline directly into the application start line, it works. Unfortunately, the real script becomes too complicated if I write command lines with and without the logger in if/else statements: I already have if/else branches there, and adding new ones would double the number of cases.
Simple test application
TMP=| wc -m
echo aas bdsd vasd $TMP
gives
$ ./test.sh
0
aas bdsd vasd
It seems that command1 and command2 are somehow executed separately.
I managed to solve the problem (in both test and real scripts) using eval and putting the conditional stuff in double quotes.
TMP="| wc -m"
eval echo aas bdsd vasd $TMP
$ ./test.sh
14
It feels like a workaround. What is the right way to do it?

The correct way to do this is to use if/else:
if [ -e /usr/bin/logger ]
then
application param1 2>&1 | /usr/bin/logger > /dev/null &
else
application param1 > /dev/null 2>&1 &
fi
Edit:
In the case of a complex construct, you should use a function:
foo () {
if [ ... ]
then
do_something
else
something_else
fi
while [ ... ]
do
loop_stuff
done
etc.
}
Then your log/no log if stays simple:
if [ -e /usr/bin/logger ]
then
foo 2>&1 | /usr/bin/logger > /dev/null &
else
foo > /dev/null 2>&1 &
fi

Just to throw another option into the mix, you could move the pipe outside the variable:
if [ -e /usr/bin/logger ]; then
logcmd=/usr/bin/logger
else
logcmd=/bin/cat
fi
application param1 2>&1 | $logcmd >/dev/null &
This avoids having duplicate commands for the two cases (or having to wrap everything in functions, per Dennis' suggestion). The disadvantage is that it's inefficient about how it handles the case where logger doesn't exist -- creating a cat process just to feed output to /dev/null is a complete waste. But a cat process isn't that big a resource drain, so if it makes your code cleaner it might be worth the waste.
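The identity-filter behaviour of cat is the whole trick here; a quick sketch, with echo standing in for the real application:

```shell
#!/bin/sh
# cat copies stdin to stdout unchanged, so substituting it for logger
# keeps the pipeline shape identical when logger is missing.
logcmd=cat                      # pretend /usr/bin/logger was not found
echo "application output" | $logcmd
```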

Try
if [ -e /usr/bin/logger ]; then
logger $(application param1 2>&1)
fi
General rule: don't put commands inside variables and then invoke them through the variable. Just run them directly in your script.
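The failure mode is easy to reproduce; a minimal sketch, using wc -m as a stand-in for logger:

```shell
#!/bin/sh
# A pipe character stored in a variable is never treated as an operator:
# after word splitting, "|", "wc", "-m" become literal arguments to echo.
OUT='| wc -m'
echo hello $OUT        # prints: hello | wc -m

# Written directly, the pipe is parsed before variables are expanded:
echo hello | wc -m     # prints the character count of "hello" plus newline
```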

I tried a similar code and specified the last line like this and it worked
application param1 2>&1 ${OUT} > /dev/null

While ghostdog74's answer is correct, there is certainly a way to make this work
if [ -e /usr/bin/logger ]; then
OUT='| /usr/bin/logger'
fi
eval application param1 2>&1 $OUT > /dev/null
But I highly recommend that you think twice before using eval, and then don't use it.

Related

bash script, how to optionally log to log file and/or console

I'm trying to create a bash script that prints results optionally to the console (STDOUT and STDERR), a log file or both.
I've got the getops working to set the options, but I can't seem to figure out the command. So far, I have something like this:
# default logs to both
export LOG_COMMAND=" 2>&1 | tee ${INSTALL_LOG}"
if [ "$ONLY_CONSOLE" == "true" ] ; then
export LOG_COMMAND=""
elif [ "$ONLY_LOG" == "true" ] ; then
export LOG_COMMAND=" | tee ${INSTALL_LOG}"
fi
echo "Starting script" ${LOG_COMMAND}
# ... then do more stuff ...
This script prints this to the console only and nothing to the log file:
Starting script 2>&1 | tee scriptfile.log
But I would like the script to print to the console and to the file scriptfile.log.
Has anyone done this before? It appears that the echo command is processing the ${LOG_COMMAND} as a variable.
Replacing the echo command with this will print to both:
echo "Starting script" 2>&1 | tee -a ${INSTALL_LOG}
However, this means that there is no option to print to console, log file, or both. It is hard coded to print to both.
There is definitely more than one way to do this, but the easiest may be to keep things simple.
After getopts is done and you've figured out what goes where, you use that to call main in the correct manner, and put all the rest of your logic in main (or called from main). For example:
main() {
# do stuff
# call other functions
# do more stuff
}
if [ $ONLY_CONSOLE -eq 0 ]
then
main
elif [ $ONLY_LOG -eq 0 ]
then
main > $INSTALL_LOG 2>&1
else # both
main 2>&1 | tee $INSTALL_LOG
fi
The alternative is an arcane combination of execs and other flow, but for a small number of options, this just makes the most sense to me.

Bash: redirect to screen or /dev/null depending on flag

I'm trying to come up with a way to pass a silent flag to a bash script so that all output is directed to /dev/null if the flag is present, and to the screen if it is not.
An MWE of my script would be:
#!/bin/bash
# Check if silent flag is on.
if [ $2 = "-s" ]; then
echo "Silent mode."
# Non-working line.
out_var = "to screen"
else
echo $1
# Non-working line.
out_var = "/dev/null"
fi
command1 > out_var
command2 > out_var
echo "End."
I call the script with two variables, the first one is irrelevant and the second one ($2) is the actual silent flag (-s):
./myscript.sh first_variable -s
Obviously the out_var lines don't work, but they give an idea of what I want: a way to direct the output of command1 and command2 to either the screen or to /dev/null depending on -s being present or not.
How could I do this?
You can use the naked exec command to redirect the current program without starting a new one.
Hence, a -s flag could be processed with something like:
if [[ "$1" == "-s" ]] ; then
exec >/dev/null 2>&1
fi
The following complete script shows how to do it:
#!/bin/bash
echo XYZZY
if [[ "$1" == "-s" ]] ; then
exec >/dev/null 2>&1
fi
echo PLUGH
If you run it with -s, you get XYZZY but no PLUGH output (well, technically, you do get PLUGH output but it's sent to the /dev/null bit bucket).
If you run it without -s, you get both lines.
The before and after echo statements show that exec is acting as described, simply changing redirection for the current program rather than attempting to re-execute it.
As an aside, I've assumed you meant "to screen" to be "to the current standard output", which may or may not be the actual terminal device (for example if it's already been redirected to somewhere else). If you do want the actual terminal device, it can still be done (using /dev/tty for example) but that would be an unusual requirement.
There are lots of things that could be wrong with your script; I won't attempt to guess since you didn't post any actual output or errors.
However, there are a couple of things that can help:
You need to figure out where your output is really going. Standard output and standard error are two different things, and redirecting one doesn't necessarily redirect the other.
In Bash, you can send output to /dev/stdout or /dev/stderr, so you might want to try something like:
# Send standard output to the tty/pty, or wherever stdout is currently going.
cmd > /dev/stdout
# Do the same thing, but with standard error instead.
cmd > /dev/stderr
Redirect standard error to standard output, and then send standard output to /dev/null. Order matters here.
cmd 2>&1 > /dev/null
There may be other problems with your script, too, but for issues with Bash shell redirections the GNU Bash manual is the canonical source of information. Hope it helps!
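The order-matters point is worth a concrete check; a small sketch, with a brace group standing in for any command that writes to both streams:

```shell
#!/bin/sh
# Order of redirections matters.
# 2>&1 first: stderr is duplicated onto the *current* stdout (the terminal),
# and only then is stdout itself sent to /dev/null -- "err" still appears.
{ echo out; echo err >&2; } 2>&1 > /dev/null

# > /dev/null first: stdout already points at /dev/null when 2>&1 copies it,
# so both streams are discarded and nothing appears.
{ echo out; echo err >&2; } > /dev/null 2>&1
```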
If you don't want to redirect all output from your script, you can use eval. For example:
$ fd=1
$ eval "echo hi >&$fd" >/dev/null
$ fd=2
$ eval "echo hi >&$fd" >/dev/null
hi
Make sure you use double quotes so that the variable is replaced before eval evaluates it.
In your case, you just needed to change out_var = "to screen" to out_var="/dev/tty" (with no spaces around the =), and then use it like this: command1 > $out_var (note the $ you were lacking).
I implemented it like this
# Set debug flag as desired
DEBUG=1
# DEBUG=0
if [ "$DEBUG" -eq "1" ]; then
OUT='/dev/tty'
else
OUT='/dev/null'
fi
# actual script use commands like this
command > $OUT 2>&1
# or like this if you need
command 2> $OUT
Of course you can also set the debug mode from a cli option, see How do I parse command line arguments in Bash?
And you can have multiple debug or verbose levels like this
# Set VERBOSE level as desired
# VERBOSE=0
VERBOSE=1
# VERBOSE=2
VERBOSE1='/dev/null'
VERBOSE2='/dev/null'
if [ "$VERBOSE" -gte 1 ]; then
VERBOSE1='/dev/tty'
fi
if [ "$VERBOSE" -gte 2 ]; then
VERBOSE2='/dev/tty'
fi
# actual script use commands like this
command > $VERBOSE1 2>&1
# or like this if you need
command 2> $VERBOSE2

How to put "> /dev/null 2>&1" into a variable?

How I wish it to work.
if [ $debug = 0 ]; then
silent=""
else
silent='> /dev/null 2>&1'
fi
#some command
wget some.url $silent
So in case $debug is set, it becomes
wget some.url > /dev/null 2>&1
Otherwise if $debug is not set to 1, it becomes
wget some.url
Storing "> /dev/null 2>&1" in a variable doesn't work. How can I do that?
This could be a good excuse to use a shell function:
silent() {
"$@" > /dev/null 2>&1
}
Now you can silence programs by running them with:
silent wget some.url
If you want to only silence things conditionally, that's easy enough:
silent() {
if [[ $debug ]] ; then
"$@"
else
"$@" &>/dev/null
fi
}
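A usage sketch of the conditional wrapper, rewritten with POSIX test for portability (debug is just an illustrative variable name):

```shell
#!/bin/sh
# "$@" reruns the wrapped command with its original arguments intact.
silent() {
    if [ -n "$debug" ]; then
        "$@"                      # debug set: let output through
    else
        "$@" > /dev/null 2>&1     # otherwise discard both streams
    fi
}

debug=""
silent echo "hidden"      # suppressed
debug=1
silent echo "shown"       # printed
```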
You need the shell to actually interpret the variable contents as part of the command line, not just as a string to be passed as an argument to the executable. Check out the eval shell builtin.
Watch out for security holes!
A slightly different approach, but something like this might work:
if [ $debug = 1 ]; then
exec > /dev/null
fi
#some command
wget some.url
It conditionally replaces stdout with /dev/null.
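A self-contained sketch of this approach; the child shell (sh -c) is only there so the redirection doesn't leak into the demo's caller:

```shell
#!/bin/sh
# A bare exec changes redirections for the remainder of the script.
sh -c '
    echo "before exec"       # still visible
    exec > /dev/null         # stdout is replaced from here on
    echo "after exec"        # discarded
'
```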
Use eval:
eval wget some.url $silent
eval causes the arguments to be reintepreted as a shell command, rather than as arguments to the program being called.
Be careful; don't run eval on unknown or external input, or you will expose yourself to a big security hole.
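A sketch of the eval variant end to end, assuming $silent only ever contains the fixed redirection string shown here (never external input):

```shell
#!/bin/sh
# eval re-parses the assembled line, so the redirection stored in
# $silent takes effect instead of being passed as literal arguments.
debug=1
if [ "$debug" = 0 ]; then
    silent=''
else
    silent='> /dev/null 2>&1'
fi
eval echo "working" $silent    # silenced when debug is not 0
```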

Bash command substitution stdout+stderr redirect

Good day. I have a series of commands that I wanted to execute via a function so that I could get the exit code and perform console output accordingly. With that being said, I have two issues here:
1) I can't seem to direct stderr to /dev/null.
2) The first echo line is not displayed until the $1 is executed. It's not really noticeable until I run commands that take a while to process, such as searching the hard drive for a file. Additionally, it's obvious that this is the case, because the output looks like:
sh-3.2# ./runScript.sh
sh-3.2# com.apple.auditd: Already loaded
sh-3.2# Attempting... Enable Security Auditing ...Success
In other words, the stderr was displayed before "Attempting... $2"
Here is the function I am trying to use:
#!/bin/bash
function saveChange {
echo -ne "Attempting... $2"
exec $1
if [ "$?" -ne 0 ]; then
echo -ne " ...Failure\n\r"
else
echo -ne " ...Success\n\r"
fi
}
saveChange "$(launchctl load -w /System/Library/LaunchDaemons/com.apple.auditd.plist)" "Enable Security Auditing"
Any help or advice is appreciated.
this is how you redirect stderr to /dev/null
command 2> /dev/null
e.g.
ls -l 2> /dev/null
Your second part (the ordering of the echo) may be caused by the command substitution you use when invoking the script: $(launchctl load -w /System/Library/LaunchDaemons/com.apple.auditd.plist)
The first echo line is displayed later because it is being executed second: $(...) runs the command before your function is even called. Try the following:
#!/bin/bash
function saveChange {
echo -ne "Attempting... $2"
err=$($1 2>&1)
if [ -z "$err" ]; then
echo -ne " ...Success\n\r"
else
echo -ne " ...Failured\n\r"
exit 1
fi
}
saveChange "launchctl load -w /System/Library/LaunchDaemons/com.apple.auditd.plist" "Enable Security Auditing"
EDIT: Noticed that launchctl does not actually set $? on failure so capturing the STDERR to detect the error instead.
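The capture-stderr pattern generalizes to any command with an unreliable exit status; a sketch using ls on a path that (by assumption) does not exist:

```shell
#!/bin/sh
# Detect failure by capturing stderr rather than trusting $?.
# /nonexistent-demo-path is a hypothetical path assumed not to exist.
err=$(ls /nonexistent-demo-path 2>&1)
if [ -z "$err" ]; then
    echo " ...Success"
else
    echo " ...Failure"
fi
```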

conditional redirection in bash

I have a bash script that I want to be quiet when run without attached tty (like from cron).
I now was looking for a way to conditionally redirect output to /dev/null in a single line.
This is an example of what I had in mind, but the real script will have many more commands that produce output:
#!/bin/bash
# conditional-redirect.sh
if tty -s; then
REDIRECT=
else
REDIRECT=">& /dev/null"
fi
echo "is this visible?" $REDIRECT
Unfortunately, this does not work:
$ ./conditional-redirect.sh
is this visible?
$ echo "" | ./conditional-redirect.sh
is this visible? >& /dev/null
what I don't want to do is duplicate all commands in a with-redirection or with-no-redirection variant:
if tty -s; then
echo "is this visible?"
else
echo "is this visible?" >& /dev/null
fi
EDIT:
It would be great if the solution would provide me a way to output something in "quiet" mode, e.g. when something is really wrong, I might want to get a notice from cron.
For bash, you can use the line:
exec &>/dev/null
This will direct all stdout and stderr to /dev/null from that point on. It uses the non-argument version of exec.
Normally, something like exec xyzzy would replace the program in the current process with a new program but you can use this non-argument version to simply modify redirections while keeping the current program.
So, in your specific case, you could use something like:
tty -s
if [[ $? -eq 1 ]] ; then
exec &>/dev/null
fi
If you want the majority of output to be discarded but still want to output some stuff, you can create a new file handle to do that. Something like:
tty -s
if [[ $? -eq 1 ]] ; then
exec 3>&1 &>/dev/null
else
exec 3>&1
fi
echo Normal # won't see this.
echo Failure >&3 # will see this.
I found another solution, but I feel it is clumsy, compared to paxdiablo's answer:
if tty -s; then
REDIRECT=/dev/tty
else
REDIRECT=/dev/null
fi
echo "Normal output" &> $REDIRECT
You can use a function:
function the_code {
echo "is this visible?"
# as many code lines as you want
}
if tty -s; then # or other condition
the_code
else
the_code >& /dev/null
fi
This works well for me. If DUMP_FILE is empty, output goes to stdout; otherwise it goes to the file. It does the job without explicit redirection, using just pipes and existing tools.
function stdout_or_file
{
local DUMP_FILE=${1:-}
if [ -z "${DUMP_FILE}" ]; then
cat
else
sed -n "w ${DUMP_FILE}"
fi
}
function foo()
{
local MSG=$1
echo "info: ${MSG}"
}
foo "bar" | stdout_or_file ${DUMP_FILE}
Of course, you can squeeze this also in one line
foo "bar" | if [ -z "${DUMP_FILE}" ]; then cat; else sed -n "w ${DUMP_FILE}"; fi
Besides sed -n "w ${DUMP_FILE}", another command that does the same is dd status=none of=${DUMP_FILE}.
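A quick check of stdout_or_file in both modes (mktemp supplies a scratch file just for the demo):

```shell
#!/bin/sh
# Pass-through when no argument is given; otherwise sed's "w" command
# writes every input line to the named file.
stdout_or_file() {
    DUMP_FILE=${1:-}
    if [ -z "$DUMP_FILE" ]; then
        cat
    else
        sed -n "w $DUMP_FILE"
    fi
}

echo "to console" | stdout_or_file      # appears on stdout
tmp=$(mktemp)
echo "to file" | stdout_or_file "$tmp"  # stdout stays empty
cat "$tmp"                              # the line landed in the file
rm -f "$tmp"
```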
The simplest solution is to use eval (a shell builtin), as it will act on the redirection in the expanded variable. It also acts on anything else in the command line, so add extra quoting as required (note the extra single quotes around the echo string below: the '?' would otherwise trigger shell filename expansion).
#!/bin/bash
# conditional-redirect.sh
if tty -s; then
REDIRECT=
else
REDIRECT=">& /dev/null"
fi
eval echo '"is this visible?"' $REDIRECT
