bash script: how to optionally log to a log file, the console, or both

I'm trying to create a bash script that prints results optionally to the console (STDOUT and STDERR), to a log file, or to both.
I've got getopts working to set the options, but I can't seem to figure out the command. So far, I have something like this:
# default logs to both
export LOG_COMMAND=" 2>&1 | tee ${INSTALL_LOG}"
if [ "$ONLY_CONSOLE" == "true" ] ; then
export LOG_COMMAND=""
elif [ "$ONLY_LOG" == "true" ] ; then
export LOG_COMMAND=" | tee ${INSTALL_LOG}"
fi
echo "Starting script" ${LOG_COMMAND}
# ... then do more stuff ...
This script prints the following to the console only, and nothing to the log file:
Starting script 2>&1 | tee scriptfile.log
But I would like the script to print to the console and to the file scriptfile.log.
Has anyone done this before? It appears that echo is printing the expanded ${LOG_COMMAND} as ordinary arguments instead of interpreting the redirection and pipe.
Replacing the echo command with the following does print to both:
echo "Starting script" 2>&1 | tee -a ${INSTALL_LOG}
However, this leaves no option to choose between the console, the log file, or both; it is hard-coded to print to both.

There is definitely more than one way to do this, but the easiest may be to keep things simple.
After getopts is done and you've figured out what goes where, use that to call main with the right redirections, and put all the rest of your logic in main (or in functions called from main). For example:
main() {
    # do stuff
    # call other functions
    # do more stuff
}
if [ $ONLY_CONSOLE -eq 0 ]
then
    main
elif [ $ONLY_LOG -eq 0 ]
then
    main > $INSTALL_LOG 2>&1
else # both
    main 2>&1 | tee $INSTALL_LOG
fi
The alternative is an arcane combination of execs and other flow, but for a small number of options, this just makes the most sense to me.
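For reference, the exec-based variant would look roughly like this (a sketch only, reusing the same ONLY_CONSOLE/ONLY_LOG convention as above; note that the process substitution used for the "both" case is a bashism):
if [ $ONLY_CONSOLE -eq 0 ]
then
    :                                  # console only: leave stdout/stderr alone
elif [ $ONLY_LOG -eq 0 ]
then
    exec >"$INSTALL_LOG" 2>&1          # everything goes to the log file
else # both
    exec > >(tee "$INSTALL_LOG") 2>&1  # stdout/stderr duplicated through tee
fi
main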

Writing to a file multiple times with Bash

I am creating a bash script to automate some commands and I am having some trouble writing my error checking to the same file.
#!/bin/bash
touch ErrorLog.txt
bro-cut service < conn.log | sort | uniq -c > ProtocolHierarchy.txt
if [ $? -eq 0 ]; then
    echo -e "OK Protocol Hierarchy Created\n" > ErrorLog.txt
else
    echo -e "FAILED Creating Protocol Hierarchy\n" > ErrorLog.txt
fi
bro-cut id.orig_h < dns.log | sort | uniq -c > AllIPAddresses.txt
if [ $? -eq 0 ]; then
    echo -e "OK Created all IP Addresses\n" > ErrorLog.txt
else
    echo -e "FAILED Creating all IP Addresses\n" > ErrorLog.txt
fi
The goal is to have a file I can open and see whether each command worked or failed. Currently the file looks like this:
-e OK Created all IP Addresses
When I would like it to look like this
OK Protocol Hierarchy Created
OK Created all IP Addresses
I am really new to bash scripting so any tips would be greatly appreciated!
Open it once, and write to that file descriptor multiple times.
# Open (creating or truncating) the output file (only once!)
exec 3>ErrorLog.txt
# Write a line to that already-open file
echo "something" >&3
# Write a second line to that already-open file
echo "something else" >&3
# Optional: close the output file (can also be implicit when the script exits)
exec 3>&-
The other common idiom is to open in append mode using >>, but doing that once per line is considerably less efficient.
# Open ErrorLog.txt, truncating if it exists, write one line, and close it
echo "something" >ErrorLog.txt
# Reopen ErrorLog.txt, write an additional line to the end, and close it again
echo "something else" >>ErrorLog.txt
Putting this practice to work in your script (and making some other best-practice improvements) looks like the following:
#!/bin/bash
# not related to file output, but to making sure we detect errors
# only works correctly if run with bash, not sh!
set -o pipefail ## set exit status based on whole pipeline, not just last command
# picking 3, since FD numbers 0-2 are reserved for stdin/stdout/stderr
exec 3>ErrorLog.txt
if bro-cut service <conn.log | sort | uniq -c >ProtocolHierarchy.txt; then
    echo "OK Protocol Hierarchy Created" >&3
else
    echo "FAILED Creating Protocol Hierarchy" >&3
fi
if bro-cut id.orig_h <dns.log | sort | uniq -c >AllIPAddresses.txt; then
    echo "OK Created all IP Addresses" >&3
else
    echo "FAILED Creating all IP Addresses" >&3
fi
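If the script grows to many such status lines, a tiny helper function keeps the file-descriptor plumbing in one place (a sketch; the name log is mine, not part of the original answer):
# assumes "exec 3>ErrorLog.txt" has already run
log() { echo "$*" >&3; }

if bro-cut service <conn.log | sort | uniq -c >ProtocolHierarchy.txt; then
    log "OK Protocol Hierarchy Created"
else
    log "FAILED Creating Protocol Hierarchy"
fi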

Bash: redirect to screen or /dev/null depending on flag

I'm trying to come up with a way to pass a silent flag to a bash script so that all output is directed to /dev/null if the flag is present and to the screen if it is not.
An MWE of my script would be:
#!/bin/bash
# Check if silent flag is on.
if [ $2 = "-s" ]; then
    echo "Silent mode."
    # Non-working line.
    out_var = "/dev/null"
else
    echo $1
    # Non-working line.
    out_var = "to screen"
fi
command1 > out_var
command2 > out_var
echo "End."
I call the script with two arguments; the first one is irrelevant, and the second one ($2) is the actual silent flag (-s):
./myscript.sh first_variable -s
Obviously the out_var lines don't work, but they give an idea of what I want: a way to direct the output of command1 and command2 either to the screen or to /dev/null, depending on whether -s is present.
How could I do this?
You can use the naked exec command (exec with only redirections, no program to run) to redirect the current script without starting a new one.
Hence, a -s flag could be processed with something like:
if [[ "$1" == "-s" ]] ; then
exec >/dev/null 2>&1
fi
The following complete script shows how to do it:
#!/bin/bash
echo XYZZY
if [[ "$1" == "-s" ]] ; then
exec >/dev/null 2>&1
fi
echo PLUGH
If you run it with -s, you get XYZZY but no PLUGH output (well, technically, you do get PLUGH output but it's sent to the /dev/null bit bucket).
If you run it without -s, you get both lines.
The before and after echo statements show that exec is acting as described, simply changing redirection for the current program rather than attempting to re-execute it.
As an aside, I've assumed you meant "to screen" to be "to the current standard output", which may or may not be the actual terminal device (for example if it's already been redirected to somewhere else). If you do want the actual terminal device, it can still be done (using /dev/tty for example) but that would be an unusual requirement.
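As an illustration of that unusual case, here is a minimal sketch (it assumes the script has a controlling terminal): /dev/tty always refers to that terminal, so a write to it survives any redirection of stdout:
#!/bin/bash
exec >/dev/null 2>&1            # silence everything from here on
echo "discarded"                # goes to /dev/null
echo "still visible" >/dev/tty  # written straight to the controlling terminal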
There are lots of things that could be wrong with your script; I won't attempt to guess since you didn't post any actual output or errors.
However, there are a couple of things that can help:
You need to figure out where your output is really going. Standard output and standard error are two different things, and redirecting one doesn't necessarily redirect the other.
In Bash, you can send output to /dev/stdout or /dev/stderr, so you might want to try something like:
# Send standard output to the tty/pty, or wherever stdout is currently going.
cmd > /dev/stdout
# Do the same thing, but with standard error instead.
cmd > /dev/stderr
Redirect standard error to standard output, and then send standard output to /dev/null. Order matters here.
cmd 2>&1 > /dev/null
There may be other problems with your script, too, but for issues with Bash shell redirections the GNU Bash manual is the canonical source of information. Hope it helps!
If you don't want to redirect all output from your script, you can use eval. For example:
$ fd=1
$ eval "echo hi >&$fd" >/dev/null
$ fd=2
$ eval "echo hi >&$fd" >/dev/null
hi
Make sure you use double quotes so that the variable is replaced before eval evaluates it.
In your case, you just needed to change out_var = "to screen" to out_var="/dev/tty" (note that an assignment cannot have spaces around the =), and use it like this: command1 > $out_var (see the $ you were lacking).
I implemented it like this:
# Set debug flag as desired
DEBUG=1
# DEBUG=0
if [ "$DEBUG" -eq "1" ]; then
OUT='/dev/tty'
else
OUT='/dev/null'
fi
# actual script use commands like this
command > $OUT 2>&1
# or like this if you need
command 2> $OUT
Of course you can also set the debug mode from a CLI option; see How do I parse command line arguments in Bash?
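A minimal getopts sketch for that (the -d flag name is my own choice, not something from the linked question):
DEBUG=0
while getopts "d" opt; do
    case "$opt" in
        d) DEBUG=1 ;;
    esac
done
shift $((OPTIND - 1))   # drop the parsed options from the positional parameters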
And you can have multiple debug or verbose levels, like this:
# Set VERBOSE level as desired
# VERBOSE=0
VERBOSE=1
# VERBOSE=2
VERBOSE1='/dev/null'
VERBOSE2='/dev/null'
if [ "$VERBOSE" -gte 1 ]; then
VERBOSE1='/dev/tty'
fi
if [ "$VERBOSE" -gte 2 ]; then
VERBOSE2='/dev/tty'
fi
# actual script use commands like this
command > $VERBOSE1 2>&1
# or like this if you need
command 2> $VERBOSE2

Simple bash script for starting application silently

Here I am again. Today I wrote a little script that is supposed to start an application silently in my debian env.
Easy as
silent "npm search 1234556"
It works, but not entirely.
As you can see, I commented the section where I am having trouble.
This line:
$($cmdLine) &
doesn't hide application output but this one
$($1 >/dev/null 2>/dev/null) &
works perfectly. What am I missing? Many thanks.
#!/bin/sh
# Daniele Brugnara
# October, 2013
# Silently exec a command line passed as argument
errorsRedirect=""
if [ -z "$1" ]; then
echo "Please, don't joke me..."
exit 1
fi
cmdLine="$1 >/dev/null"
# if passed a second parameter, errors will be hidden
if [ -n "$2" ]; then
cmdLine="$cmdLine 2>/dev/null"
fi
# not working
$($cmdLine) &
# works perfectly
#$($1 >/dev/null 2>/dev/null) &
With the use of the evil eval, the following script will work:
#!/bin/sh
# Silently exec a command line passed as argument
errorsRedirect=""
if [ -z "$1" ]; then
echo "Please, don't joke me..."
exit 1
fi
cmdLine="$1 >/dev/null"
# if passed a second parameter, errors will be hidden
if [ -n "$2" ]; then
cmdLine="$cmdLine 2>&1"
fi
eval "$cmdLine &"
Rather than building up a command string with redirection tacked on the end, you can apply the redirections incrementally:
#!/bin/sh
if [ -z "$1" ]; then
exit
fi
exec >/dev/null
if [ -n "$2" ]; then
exec 2>&1
fi
exec $1
This first redirects the script's stdout to /dev/null. If the second argument is given, it redirects the script's stderr too. Then it runs the command, which will inherit stdout and stderr from the script.
I removed the ampersand (&) since being silent has nothing to do with running in the background. You can add it back (and remove the exec on the last line) if it is what you want.
I added exec at the end as it is slightly more efficient. Since it is the end of the shell script, there is nothing left to do, so you may as well be done with it, hence exec.
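Assuming the script is saved as silent (the name and example command are illustrative), usage would look like this; any non-empty second argument hides stderr as well:
./silent 'ls /nonexistent'         # stdout hidden, errors still printed
./silent 'ls /nonexistent' quiet   # stdout and stderr both hidden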
& means that you're doing sort of multitask whereas
1 >/dev/null 2>/dev/null
means that you redirect the output to a sort of garbage and that's why you don't see anything.
Furthermore, cmdLine="$1 >/dev/null" is incorrect; you should use ' instead of ":
cmdLine='$1 >/dev/null'
You can build your command line in a variable and run bash with it in the background:
bash -c "$cmdLine"&
Note that it might be useful to store the output (stdout/stderr) of the program somewhere instead of throwing it away to /dev/null.
In addition, why do you need errorsRedirect?
You can even add a wait at the end, just to be safe...if you want...
#!/bin/sh
# Daniele Brugnara
# October, 2013
# Silently exec a command line passed as argument
[ ! $1 ] && echo "Please, don't joke me..." && exit 1
cmdLine="$1>/dev/null"
# if passed a second parameter, errors will be hidden
[ $2 ] && cmdLine+=" 2>/dev/null"
echo "Running \"$cmdLine\""
bash -c "$cmdLine" &
wait

Script works until it's modified to redirect stdout & stderr to a log file

Explanation
When the script below is run with no modifications, it outputs (correctly):
two one START
After uncommenting the exec statements in the script below, file descriptor 3 points to standard output, and file descriptor 4 points to standard error:
exec 3>&1 4>&2
And all standard output & standard error gets logged to a file instead of printing to the console:
# Two variances seen in the script below
exec 1>"TEST-${mode:1}.log" 2>&1 # Log Name based on subcommand
exec 1>"TEST-bootstrap.log" 2>&1 # Initial "bootstrap" log file
When the exec statements are in place, the script should create three files (with contents):
TEST-bootstrap.log (successfully created - empty file)
TEST-one.log (successfully created)
one START
TEST-two.log (is not created)
two one START
But instead it seems to stop after the first log file and never creates TEST-two.log. What could be the issue considering it works with the exec statements commented out?
The Script
#!/bin/bash
SELFD=$(cd -P $(dirname ${0}) >/dev/null 2>&1; pwd)  # current script's directory
SELF=$(basename ${0})                                # current script's name
SELFX=${SELFD}/${SELF}                               # the current script's exec path

fnOne() { echo "one ${*}"; }
fnTwo() { echo "two ${*}"; }

subcommand() {
    # exec 3>&1 4>&2
    # exec 1>"TEST-${mode:1}.log" 2>&1
    case "${mode}" in
        -one) fnOne ${*}; ;;
        -two) fnTwo ${*}; ;;
    esac
}

bootstrap() {
    # exec 3>&1 4>&2
    # exec 1>"TEST-bootstrap.log" 2>&1
    echo "START" \
        | while read line; do ${SELFX} -one ${line}; done \
        | while read line; do ${SELFX} -two ${line}; done
    exit 0
}

mode=${1}; shift
case "${mode:0:1}" in
    -) subcommand ${*} ;;
    *) bootstrap ${mode} ${*} ;;
esac
exit 0
This script is strictly an isolated example of the problem I am facing in another, more complex script of mine. I've tried to keep it as concise as possible and will expand on it if needed.
What I'm trying to accomplish (extra reading for those interested)
In the script above, I am using while loops instead of gnu-parallel for simplicity's sake, and simple functions that echo "one" and "two" for ease of debugging and of asking the question here.
In my actual script, the program would have the following functions: list-archives, download, extract, process that would fetch a list of archives from a URL, download them, extract their contents, and process the resulting files respectively. Considering these operations take a varying amount of time, I planned on running them in parallel. My script's bootstrap() function would look something like this:
# program ./myscript
list-archives "${1}" \
    | parallel myscript -download \
    | parallel myscript -extract \
    | parallel myscript -process
And would be called like this:
./myscript "http://www.example.com"
What I'm trying to accomplish is a way to start a program that can call its own functions in parallel and record everything it does. This would be useful for determining when the data was last fetched, debugging any errors, etc.
I'd also like the program to record logs when it's invoked with a subcommand, e.g.
# only list archives
>> ./myscript -list-archives "http://www.example.com"
# download this specific archive
>> ./myscript -download "http://www.example.com/archive.zip"
# ...
I think this is the problem:
echo "START" \
| while read line; do ${SELFX} -one ${line}; done \
| while read line; do ${SELFX} -two ${line}; done
The first while loop reads START from the echo statement. The second while loop processes the output of the first while loop. But since the script is redirecting stdout to a file, nothing is piped to the second loop, so it exits immediately.
I'm not sure how to "fix" this, since it's not clear what you're trying to accomplish. If you want to feed something to the next command in the pipeline, you can echo to FD 3, which you've carefully used to save the original stdout.
I've accepted @Barmar's answer, and the comment left by @Wumpus Q. Wumbley nudged me in the right direction.
The solution was to use tee /dev/fd/3 on the case statement like so:
subcommand() {
    exec 3>&1 4>&2
    exec 1>"TEST-${mode:1}.log" 2>&1
    case "${mode}" in
        -one) fnOne ${*}; ;;
        -two) fnTwo ${*}; ;;
    esac | tee /dev/fd/3
}

Pipeline metacharacter in variable in bash

In my bash script I need to check whether the logger binary exists. If so, I pipe the application output to it.
Edit: it needs to be a pipe; the application runs permanently.
I tried to put the pipeline stuff in a variable and use it later. Something like:
if [ -e /usr/bin/logger ]; then
    OUT=| /usr/bin/logger
fi
application param1 2>&1 $OUT > /dev/null &
but it doesn't work; the output is not piped to logger. If I put the pipeline stuff directly into the application start line, it works. Unfortunately, the real script becomes too complicated if I write command lines both with and without the logger stuff in if/else statements; I already have if/else there, and adding new ones would double the number of cases.
A simple test application:
TMP=| wc -m
echo aas bdsd vasd $TMP
gives
$ ./test.sh
0
aas bdsd vasd
Seems that somehow command1 and command2 are executed separately.
I managed to solve the problem (in both the test and real scripts) by using eval and putting the conditional stuff in double quotes:
TMP="| wc -m"
eval echo aas bdsd vasd $TMP
$ ./test.sh
14
It feels like a workaround. What is the right way to do it?
The correct way to do this is to use if/else:
if [ -e /usr/bin/logger ]
then
    application param1 2>&1 | /usr/bin/logger > /dev/null &
else
    application param1 > /dev/null 2>&1 &
fi
Edit:
In the case of a complex construct, you should use a function:
foo () {
    if [ ... ]
    then
        do_something
    else
        something_else
    fi
    while [ ... ]
    do
        loop_stuff
    done
    etc.
}
Then your log/no log if stays simple:
if [ -e /usr/bin/logger ]
then
    foo 2>&1 | /usr/bin/logger > /dev/null &
else
    foo > /dev/null 2>&1 &
fi
Just to throw another option into the mix, you could move the pipe outside the variable:
if [ -e /usr/bin/logger ]; then
    logcmd=/usr/bin/logger
else
    logcmd=/bin/cat
fi
application param1 2>&1 | $logcmd >/dev/null &
This avoids duplicating the command for the two cases (or wrapping everything in functions, per Dennis's suggestion). The disadvantage is that it handles the no-logger case inefficiently: creating a cat process just to feed output to /dev/null is a complete waste. But a cat process isn't that big a resource drain, so if it makes your code cleaner it might be worth the waste.
Try
if [ -e /usr/bin/logger ]; then
    logger $(application param1 2>&1)
fi
General rule: don't put commands inside variables and then call them through the variable. Just run them directly in your script.
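If a command really must be stored, the usual bash-safe alternative is an array rather than a string. A sketch (this still cannot store a pipe, which is the crux of this question, but it does handle arguments containing spaces):
cmd=(application param1)         # each argument is its own array element
"${cmd[@]}" > /dev/null 2>&1 &   # expands safely, even with spaces in arguments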
I tried similar code and specified the last line like this, and it worked:
application param1 &2>1 ${OUT} > /dev/null
While ghostdog74's answer is correct, there is certainly a way to make this work
if [ -e /usr/bin/logger ]; then
    OUT='| /usr/bin/logger'
fi
eval application param1 2>&1 $OUT > /dev/null
But I highly recommend that you think twice before using eval, and then don't use it.
