Bash: redirect to screen or /dev/null depending on flag

I'm trying to come up with a way to pass a silent flag to a bash script so that all output is directed to /dev/null if the flag is present and to the screen if it is not.
An MWE of my script would be:
#!/bin/bash
# Check if silent flag is on.
if [ $2 = "-s" ]; then
    echo "Silent mode."
    # Non-working line.
    out_var = "to screen"
else
    echo $1
    # Non-working line.
    out_var = "/dev/null"
fi
command1 > out_var
command2 > out_var
echo "End."
I call the script with two arguments; the first one is irrelevant and the second one ($2) is the actual silent flag (-s):
./myscript.sh first_variable -s
Obviously the out_var lines don't work, but they give an idea of what I want: a way to direct the output of command1 and command2 to either the screen or to /dev/null depending on -s being present or not.
How could I do this?

You can use the naked exec command to redirect the current program without starting a new one.
Hence, a -s flag could be processed with something like:
if [[ "$1" == "-s" ]] ; then
exec >/dev/null 2>&1
fi
The following complete script shows how to do it:
#!/bin/bash
echo XYZZY
if [[ "$1" == "-s" ]] ; then
    exec >/dev/null 2>&1
fi
echo PLUGH
If you run it with -s, you get XYZZY but no PLUGH output (well, technically, you do get PLUGH output but it's sent to the /dev/null bit bucket).
If you run it without -s, you get both lines.
The before and after echo statements show that exec is acting as described, simply changing redirection for the current program rather than attempting to re-execute it.
As an aside, I've assumed you meant "to screen" to be "to the current standard output", which may or may not be the actual terminal device (for example if it's already been redirected to somewhere else). If you do want the actual terminal device, it can still be done (using /dev/tty for example) but that would be an unusual requirement.
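Applied to the MWE from the question, a minimal sketch might look like this (assuming, as in the original call, that the flag arrives in $2):
#!/bin/bash
# Usage: ./myscript.sh first_variable [-s]
if [ "$2" = "-s" ]; then
    echo "Silent mode."
    exec >/dev/null 2>&1  # everything from here on is discarded
else
    echo "$1"
fi
command1
command2
echo "End."  # note: in -s mode this is silenced too, unlike the per-command redirection in the MWE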

There are lots of things that could be wrong with your script; I won't attempt to guess since you didn't post any actual output or errors.
However, there are a couple of things that can help:
You need to figure out where your output is really going. Standard output and standard error are two different things, and redirecting one doesn't necessarily redirect the other.
In Bash, you can send output to /dev/stdout or /dev/stderr, so you might want to try something like:
# Send standard output to the tty/pty, or wherever stdout is currently going.
cmd > /dev/stdout
# Do the same thing, but with standard error instead.
cmd > /dev/stderr
Redirect standard error to standard output, and then send standard output to /dev/null. Order matters here: written this way, stderr is duplicated onto wherever stdout was pointing before the >/dev/null took effect, so the error output still reaches the original destination while regular output is discarded.
cmd 2>&1 > /dev/null
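To see the difference the order makes, here is a quick demonstration using ls on a nonexistent path as a stand-in for a command that writes to stderr:
ls /nonexistent 2>&1 >/dev/null   # error still appears: stderr follows the original stdout
ls /nonexistent >/dev/null 2>&1   # fully silent: stdout is discarded first, then stderr follows it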
There may be other problems with your script, too, but for issues with Bash shell redirections the GNU Bash manual is the canonical source of information. Hope it helps!

If you don't want to redirect all output from your script, you can use eval. For example:
$ fd=1
$ eval "echo hi >&$fd" >/dev/null
$ fd=2
$ eval "echo hi >&$fd" >/dev/null
hi
Make sure you use double quotes so that the variable is replaced before eval evaluates it.
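As a sketch, the same per-command idea could serve the silent flag from the question (redir is my own variable name; command1 and command2 are the placeholders from the MWE). With double quotes, $redir is expanded before eval parses the redirection:
#!/bin/bash
if [ "$1" = "-s" ]; then
    redir='>/dev/null 2>&1'
else
    redir=''
fi
eval "command1 $redir"  # silenced only when -s was given
eval "command2 $redir"
echo "End."             # always visible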

In your case, you just needed to change out_var = "to screen" to out_var="/dev/tty" (note: no spaces around = in a shell assignment), and use it like this: command1 > $out_var (see the $ you were lacking).
I implemented it like this:
# Set debug flag as desired
DEBUG=1
# DEBUG=0

if [ "$DEBUG" -eq "1" ]; then
    OUT='/dev/tty'
else
    OUT='/dev/null'
fi

# In the actual script, use commands like this:
command > $OUT 2>&1
# or like this if you need:
command 2> $OUT
Of course you can also set the debug mode from a command-line option; see How do I parse command line arguments in Bash?
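For instance, a minimal getopts sketch (the -q option name is my own choice, not from the question):
#!/bin/bash
OUT=/dev/stdout              # default: output stays visible
while getopts "q" opt; do
    case $opt in
        q) OUT=/dev/null ;;  # -q: discard output
    esac
done
shift $((OPTIND - 1))
echo "visible unless -q is given" > "$OUT"
Using /dev/stdout rather than /dev/tty as the default keeps the script usable even when no terminal is attached (e.g. under cron).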
And you can have multiple debug or verbose levels, like this:
# Set VERBOSE level as desired
# VERBOSE=0
VERBOSE=1
# VERBOSE=2

VERBOSE1='/dev/null'
VERBOSE2='/dev/null'
if [ "$VERBOSE" -ge 1 ]; then
    VERBOSE1='/dev/tty'
fi
if [ "$VERBOSE" -ge 2 ]; then
    VERBOSE2='/dev/tty'
fi

# In the actual script, use commands like this:
command > $VERBOSE1 2>&1
# or like this if you need:
command 2> $VERBOSE2

Related

I want to place after all: echo "etc" ( >> filename) [duplicate]

Is it possible to redirect all of the output of a Bourne shell script to somewhere, but with shell commands inside the script itself?
Redirecting the output of a single command is easy, but I want something more like this:
#!/bin/sh
if [ ! -t 0 ]; then
    # redirect all of my output to a file here
fi
# rest of script...
Meaning: if the script is run non-interactively (for example, cron), save off the output of everything to a file. If run interactively from a shell, let the output go to stdout as usual.
I want to do this for a script normally run by the FreeBSD periodic utility. It's part of the daily run, which I don't normally care to see every day in email, so I don't have it sent. However, if something inside this one particular script fails, that's important to me and I'd like to be able to capture and email the output of this one part of the daily jobs.
Update: Joshua's answer is spot-on, but I also wanted to save and restore stdout and stderr around the entire script, which is done like this:
# save stdout and stderr to file
# descriptors 3 and 4,
# then redirect them to "foo"
exec 3>&1 4>&2 >foo 2>&1
# ...
# restore stdout and stderr
exec 1>&3 2>&4
Addressing the question as updated.
#...part of script without redirection...
{
    #...part of script with redirection...
} > file1 2>file2 # ...and others as appropriate...
#...residue of script without redirection...
The braces '{ ... }' provide a unit of I/O redirection. The braces must appear where a command could appear - simplistically, at the start of a line or after a semi-colon. (Yes, that can be made more precise; if you want to quibble, let me know.)
You are right that you can preserve the original stdout and stderr with the redirections you showed, but it is usually simpler for the people who have to maintain the script later to understand what's going on if you scope the redirected code as shown above.
The relevant sections of the Bash manual are Grouping Commands and I/O Redirection. The relevant sections of the POSIX shell specification are Compound Commands and I/O Redirection. Bash has some extra notations, but is otherwise similar to the POSIX shell specification.
Typically we would place one of these at or near the top of the script. Scripts that parse their command lines would do the redirection after parsing.
Send stdout to a file
exec > file
with stderr
exec > file
exec 2>&1
append both stdout and stderr to file
exec >> file
exec 2>&1
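The two-line forms can also be combined into a single exec:
exec >> file 2>&1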
As Jonathan Leffler mentioned in his comment:
exec has two separate jobs. The first one is to replace the currently executing shell (script) with a new program. The other is changing the I/O redirections in the current shell. This is distinguished by having no argument to exec.
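A small sketch of the two behaviours side by side (hypothetical paths, for illustration only):
#!/bin/bash
exec > /tmp/out.log  # no command: only the redirection changes; the script continues
echo "this line is logged"
exec sleep 5         # with a command: the shell is replaced by sleep
echo "never reached"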
You can make the whole script a function like this:
main_function() {
    do_things_here
}
then at the end of the script have this:
if [ -z "$TERM" ]; then
    # if not run via terminal, log everything into a log file
    main_function >> /var/log/my_uber_script.log 2>&1
else
    # run via terminal, only output to screen
    main_function
fi
Alternatively, you may log everything into logfile each run and still output it to stdout by simply doing:
# log everything, but also output to stdout
main_function 2>&1 | tee -a /var/log/my_uber_script.log
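One caveat with the pipeline form: afterwards, $? is tee's exit status, not main_function's. In bash you can recover the function's status from PIPESTATUS:
main_function 2>&1 | tee -a /var/log/my_uber_script.log
status=${PIPESTATUS[0]}  # exit status of main_function, not of tee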
For saving the original stdout and stderr you can use:
exec [fd number]>&1
exec [fd number]>&2
For example, the following code will print "walla1" and "walla2" to the log file (a.txt), "walla3" to stdout, "walla4" to stderr.
#!/bin/bash
exec 5>&1
exec 6>&2
exec 1> ~/a.txt 2>&1
echo "walla1"
echo "walla2" >&2
echo "walla3" >&5
echo "walla4" >&6
# One-liner: if stdin is not a terminal, append all output to test.log.
[ -t 0 ] || exec >> test.log
I finally figured out how to do it. I wanted not just to save the output to a file, but also to find out whether the bash script ran successfully!
I've wrapped the bash commands inside a function, then called the function main_function while tee-ing its output to a file. Afterwards, I've captured the exit status using if [ $? -eq 0 ].
#!/bin/bash
# Process substitution below is a bashism, so the shebang must be bash, not sh.
main_function() {
    python command.py
}
main_function > >(tee -a "/var/www/logs/output.txt") 2>&1
if [ $? -eq 0 ]
then
    echo 'Success!'
else
    echo 'Failure!'
fi
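A slightly tighter variant of the same check tests the command directly instead of inspecting $? afterwards:
if main_function > >(tee -a "/var/www/logs/output.txt") 2>&1; then
    echo 'Success!'
else
    echo 'Failure!'
fi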

How to redirect the stream inline? Is it possible not to keep repeating the code? [duplicate]

(Closed as a duplicate; the question and answers are identical to the previous section.)

Simple bash script for starting application silently

Here I am again. Today I wrote a little script that is supposed to start an application silently in my Debian env.
It should be as easy as:
silent "npm search 1234556"
This works, but not as intended.
As you can see, I commented the section where I have some troubles.
This line:
$($cmdLine) &
doesn't hide the application output, but this one
$($1 >/dev/null 2>/dev/null) &
works perfectly. What am I missing? Many thanks.
#!/bin/sh
# Daniele Brugnara
# October, 2013
# Silently exec a command line passed as argument

errorsRedirect=""
if [ -z "$1" ]; then
    echo "Please, don't joke me..."
    exit 1
fi

cmdLine="$1 >/dev/null"

# if passed a second parameter, errors will be hidden
if [ -n "$2" ]; then
    cmdLine="$cmdLine 2>/dev/null"
fi

# not working
$($cmdLine) &

# works perfectly
#$($1 >/dev/null 2>/dev/null) &
With the use of the evil eval, the following script will work:
#!/bin/sh
# Silently exec a command line passed as argument

errorsRedirect=""
if [ -z "$1" ]; then
    echo "Please, don't joke me..."
    exit 1
fi

cmdLine="$1 >/dev/null"

# if passed a second parameter, errors will be hidden
if [ -n "$2" ]; then
    cmdLine="$cmdLine 2>&1"
fi

eval "$cmdLine &"
Rather than building up a command with redirection tacked on the end, you can incrementally apply it:
#!/bin/sh
if [ -z "$1" ]; then
    exit
fi
exec >/dev/null
if [ -n "$2" ]; then
    exec 2>&1
fi
exec $1
This first redirects stdout of the shell script to /dev/null. If the second argument is given, it redirects stderr of the shell script too. Then it runs the command which will inherit stdout and stderr from the script.
I removed the ampersand (&) since being silent has nothing to do with running in the background. You can add it back (and remove the exec on the last line) if it is what you want.
I added exec at the end as it is slightly more efficient. Since it is the end of the shell script, there is nothing left to do, so you may as well be done with it, hence exec.
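A usage sketch, assuming the script above is saved as silent and made executable:
./silent "npm search 1234556"        # stdout discarded, stderr still visible
./silent "npm search 1234556" quiet  # any second argument discards stderr too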
& means that you're running the command in the background (a sort of multitasking), whereas
$1 >/dev/null 2>/dev/null
means that you redirect the output into a sort of garbage bin, and that's why you don't see anything.
Furthermore, note that in cmdLine="$1 >/dev/null" the $1 is expanded when the variable is assigned; with single quotes the expansion is deferred until the string is actually evaluated:
cmdLine='$1 >/dev/null'
You can build your command line in a variable and run a bash with it in the background:
bash -c "$cmdLine" &
Note that it might be useful to store the output (stdout/stderr) of the program somewhere, instead of throwing it into /dev/null.
In addition, why do you need errorsRedirect?
You can even add a wait at the end, just to be safe... if you want...
#!/bin/bash
# Daniele Brugnara
# October, 2013
# Silently exec a command line passed as argument
# (bash shebang: += below is a bashism and fails under plain sh)

[ ! "$1" ] && echo "Please, don't joke me..." && exit 1

cmdLine="$1 >/dev/null"

# if passed a second parameter, errors will be hidden
[ "$2" ] && cmdLine+=" 2>/dev/null"

echo "Running \"$cmdLine\""
bash -c "$cmdLine" &
wait

how to silently disable xtrace in a shell script?

I'm writing a shell script that loops over some values and runs a long command line for each value. I'd like to print out these commands along the way, just like make does when running a makefile. I know I could just echo all commands before running them, but it feels inelegant. So I'm looking at set -x and similar mechanisms instead:
#!/bin/sh
for value in a long list of values
do
    set -x
    touch $value # imagine a complicated invocation here
    set +x
done
My problem is: at each iteration, not only is the interesting line printed out, but also the set +x line. Is it somehow possible to prevent that? If not, what workaround do you recommend?
PS: the MWE above uses sh, but I also have bash and zsh installed in case that helps.
Sandbox it in a subshell:
(set -x; do_thing_you_want_traced)
Of course, changes to variables or the environment made in that subshell will be lost.
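For example, an assignment made inside the traced subshell does not survive it:
(set -x; result=$(hostname))  # traced, but result dies with the subshell
echo "${result:-unset}"       # prints "unset"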
If you REALLY care about this, you could also use a DEBUG trap (using set -T to cause it to be inherited by functions) to implement your own set -x equivalent.
For instance, if using bash:
trap_fn() {
    [[ $DEBUG && $BASH_COMMAND != "unset DEBUG" ]] && \
        printf "[%s:%s] %s\n" "$BASH_SOURCE" "$LINENO" "$BASH_COMMAND"
    return 0 # do not block execution in extdebug mode
}
trap trap_fn DEBUG

DEBUG=1
# ...do something you want traced...
unset DEBUG
That said, emitting BASH_COMMAND (as a DEBUG trap can do) is not fully equivalent of set -x; for instance, it does not show post-expansion values.
You want to try using a single-line xtrace:
function xtrace() {
    # Print the line as if xtrace was turned on, using perl to filter out
    # the extra colon character and the following "set +x" line.
    (
        set -x
        # Colon is a no-op in bash, so nothing will execute.
        : "$@"
        set +x
    ) 2>&1 | perl -ne 's/^[+] :/+/ and print' 1>&2
    # Execute the original line unmolested
    "$@"
}
The original command executes in the same shell under an identity transformation. Just prior to running, you get a non-recursive xtrace of the arguments. This allows you to xtrace the commands you care about without spamming stderr with duplicate copies of every echo command.
# Example
for value in $long_list; do
    computed_value=$(echo "$value" | sed 's/.../...')
    xtrace some_command -x -y -z $value $computed_value ...
done
The following command disables the xtrace option:
$ set +o xtrace
I thought of
set -x >/dev/null 2>&1; echo 1; echo 2; set +x >/dev/null 2>&1
but got
+ echo 1
1
+ echo 2
2
+ 1> /dev/null 2>& 1
I'm surprised by these results... But
set -x ; echo 1; echo 2; set +x
+ echo 1
1
+ echo 2
2
looks to meet your requirement.
I saw similar results when I put each statement on its own line (except the set +x).
IHTH.

conditional redirection in bash

I have a bash script that I want to be quiet when run without attached tty (like from cron).
I now was looking for a way to conditionally redirect output to /dev/null in a single line.
This is an example of what I had in mind, but I will have many more commands that produce output in the script:
#!/bin/bash
# conditional-redirect.sh
if tty -s; then
    REDIRECT=
else
    REDIRECT=">& /dev/null"
fi
echo "is this visible?" $REDIRECT
Unfortunately, this does not work:
$ ./conditional-redirect.sh
is this visible?
$ echo "" | ./conditional-redirect.sh
is this visible? >& /dev/null
What I don't want to do is duplicate all commands in a with-redirection and a without-redirection variant:
if tty -s; then
    echo "is this visible?"
else
    echo "is this visible?" >& /dev/null
fi
EDIT:
It would be great if the solution would provide me a way to output something in "quiet" mode, e.g. when something is really wrong, I might want to get a notice from cron.
For bash, you can use the line:
exec &>/dev/null
This will direct all stdout and stderr to /dev/null from that point on. It uses the non-argument version of exec.
Normally, something like exec xyzzy would replace the program in the current process with a new program but you can use this non-argument version to simply modify redirections while keeping the current program.
So, in your specific case, you could use something like:
tty -s
if [[ $? -eq 1 ]] ; then
    exec &>/dev/null
fi
If you want the majority of output to be discarded but still want to output some stuff, you can create a new file handle to do that. Something like:
tty -s
if [[ $? -eq 1 ]] ; then
    exec 3>&1 &>/dev/null
else
    exec 3>&1
fi
echo Normal # won't see this.
echo Failure >&3 # will see this.
I found another solution, but I feel it is clumsy, compared to paxdiablo's answer:
if tty -s; then
    REDIRECT=/dev/tty
else
    REDIRECT=/dev/null
fi
echo "Normal output" &> $REDIRECT
You can use a function:
function the_code {
    echo "is this visible?"
    # as many code lines as you want
}

if tty -s; then # or other condition
    the_code
else
    the_code >& /dev/null
fi
This works well for me. If DUMP_FILE is empty, things go to stdout; otherwise, to the file. It does the job without explicit redirection, using just pipes and existing tools.
function stdout_or_file
{
    local DUMP_FILE=${1:-}
    if [ -z "${DUMP_FILE}" ]; then
        cat
    else
        sed -n "w ${DUMP_FILE}"
    fi
}

function foo()
{
    local MSG=$1
    echo "info: ${MSG}"
}

foo "bar" | stdout_or_file ${DUMP_FILE}
Of course, you can also squeeze this into one line:
foo "bar" | if [ -z "${DUMP_FILE}" ]; then cat; else sed -n "w ${DUMP_FILE}"; fi
Besides sed -n "w ${DUMP_FILE}", another command that does the same is dd status=none of=${DUMP_FILE}.
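For example, the dd variant drops into the same pipeline (status=none, a GNU coreutils option, suppresses dd's transfer statistics):
foo "bar" | dd status=none of="${DUMP_FILE}"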
The simplest solution is to use eval (a shell builtin), as it will act on the redirection in the expanded variable. It will also act on anything else in the command line, so add extra quoting as required; note the extra single quotes added around the echo string below, due to the ? which would otherwise trigger shell filename expansion.
#!/bin/bash
# conditional-redirect.sh
if tty -s; then
    REDIRECT=
else
    REDIRECT=">& /dev/null"
fi
eval echo '"is this visible?"' $REDIRECT
