Is there a standard Bash tool that acts like echo but outputs to stderr rather than stdout?
I know I can do echo foo 1>&2 but it's kinda ugly and, I suspect, error prone (e.g. more likely to get edited wrong when things change).
You could do this, which facilitates reading:
>&2 echo "error"
>&2 copies file descriptor #2 to file descriptor #1. Therefore, after this redirection is performed, both file descriptors will refer to the same file: the one file descriptor #2 was originally referring to. For more information see the Bash Hackers Illustrated Redirection Tutorial.
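If you want to convince yourself which stream the message lands on, a quick check (a sketch, assuming an interactive shell) is to silence each stream in turn:
{ >&2 echo "error"; } >/dev/null     # message still appears: it went to stderr
{ >&2 echo "error"; } 2>/dev/null    # message disappears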
You could define a function:
echoerr() { echo "$@" 1>&2; }
echoerr hello world
This would be faster than a script and have no dependencies.
Camilo Martin's Bash-specific suggestion uses a "here string" and will print anything you pass to it, including arguments (-n) that echo would normally swallow:
echoerr() { cat <<< "$@" 1>&2; }
Glenn Jackman's solution also avoids the argument swallowing problem:
echoerr() { printf "%s\n" "$*" >&2; }
Since 1 is the standard output, you do not have to explicitly name it in front of an output redirection like >. Instead, you can simply type:
echo This message goes to stderr >&2
Since you seem to be worried that 1>&2 will be difficult for you to reliably type, the elimination of the redundant 1 might be a slight encouragement to you!
Another option
echo foo >>/dev/stderr
No, that's the standard way to do it. It shouldn't cause errors.
If you don't mind logging the message also to syslog, the not_so_ugly way is:
logger -s "$msg"
The -s option means: "Output the message to standard error as well as to the system log."
Another option that I recently stumbled on is this:
{
    echo "First error line"
    echo "Second error line"
    echo "Third error line"
} >&2
This uses only Bash built-ins while making multi-line error output less error prone (since you don't have to remember to add >&2 to every line).
Note: I'm answering the post itself, not the misleading/vague "echo that outputs to stderr" question (already answered by the OP).
Use a function to show the intention and source the implementation you want. E.g.
#!/bin/bash
[ -x error_handling ] && . error_handling
filename="foobar.txt"
config_error $filename "invalid value!"
output_xml_error "No such account"
debug_output "Skipping cache"
log_error "Timeout downloading archive"
notify_admin "Out of disk space!"
fatal "failed to open logger!"
And error_handling being:
ADMIN_EMAIL=root@localhost
config_error() { filename="$1"; shift; echo "Config error in $filename: $*" >&2; }
output_xml_error() { echo "<error>$*</error>" >&2; }
debug_output() { [ "$DEBUG" = "1" ] && echo "DEBUG: $*"; }
log_error() { logger -s "$*"; }
fatal() { which logger >/dev/null && logger -s "FATAL: $*" || echo "FATAL: $*"; exit 100; }
notify_admin() { echo "$*" | mail -s "Error from script" "$ADMIN_EMAIL"; }
Reasons that handle concerns in OP:
nicest syntax possible (meaningful words instead of ugly symbols)
harder to make an error (especially if you reuse the script)
it's not a standard Bash tool, but it can be a standard shell library for you or your company/organization
Other reasons:
clarity - shows intention to other maintainers
speed - functions are faster than shell scripts
reusability - a function can call another function
configurability - no need to edit original script
debugging - easier to find the line responsible for an error (especially if you're dealing with a ton of redirected/filtered output)
robustness - if a function is missing and you can't edit the script, you can fall back to using an external tool with the same name (e.g. log_error can be aliased to logger on Linux)
switching implementations - you can switch to external tools by removing the "x" attribute of the library
output agnostic - you no longer have to care if it goes to STDERR or elsewhere
personalizing - you can configure behavior with environment variables
My suggestion:
echo "my errz" >> /proc/self/fd/2
or
echo "my errz" >> /dev/stderr
echo "my errz" > /proc/self/fd/2 will effectively output to stderr because /proc/self is a link to the current process, and /proc/self/fd holds the process opened file descriptors, and then, 0, 1, and 2 stand for stdin, stdout and stderr respectively.
The /proc/self link doesn't work on MacOS, however, /proc/self/fd/* is available on Termux on Android, but not /dev/stderr. How to detect the OS from a Bash script? can help if you need to make your script more portable by determining which variant to use.
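For instance, a hedged sketch that picks whichever variant exists at runtime (the err_target variable name is illustrative, and plain >&2 remains the universal fallback):
err_target=/dev/stderr
[ -w "$err_target" ] || err_target=/proc/self/fd/2   # fall back on Termux and the like
echo "my errz" >> "$err_target"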
Don't use cat, as some have suggested here: cat is a program, while echo and printf are shell builtins. Launching a program or another script (also mentioned above) means creating a new process, with all its costs. Using builtins and writing functions is quite cheap, because there is no need to create (execute) a new process environment.
The opener asks "is there any standard tool to output (pipe) to stderr"; the short answer is no. Why? Redirecting streams is an elementary concept in Unix-like systems (Linux, ...), and Bash (sh) builds on these concepts.
I agree with the opener that notation like 1>&2 is not very pleasant for modern programmers, but that's Bash. Bash was not intended for writing huge and robust programs; it is intended to help admins get their work done with fewer keypresses ;-)
At least, you can place the redirection anywhere in the line:
$ echo This message >&2 goes to stderr
This message goes to stderr
This is a simple STDERR function, which redirects piped input to STDERR.
#!/bin/bash
# *************************************************************
# This function redirects piped input to STDERR.
#
# @param stream
# @return string
#
function STDERR () {
    cat - 1>&2
}
# remove the directory /bubu
if rm /bubu 2>/dev/null; then
    echo "Bubu is gone."
else
    echo "Has anyone seen Bubu?" | STDERR
fi
# run bubu.sh and redirect your output
tux@earth:~$ ./bubu.sh >/tmp/bubu.log 2>/tmp/bubu.err
read is a shell builtin command that prints its prompt to stderr, and can be used like echo without performing redirection tricks:
read -t 0.1 -p "This will be sent to stderr"
The -t 0.1 is a timeout that disables read's main functionality: storing one line of stdin into a variable. (Note that the prompt is only displayed when input comes from a terminal.)
Combining the solutions suggested by James Roth and Glenn Jackman, add an ANSI color code to display the error message in red:
echoerr() { printf "\e[31;1m%s\e[0m\n" "$*" >&2; }
# if somehow \e is not working on your terminal, use \u001b instead
# echoerr() { printf "\u001b[31;1m%s\u001b[0m\n" "$*" >&2; }
echoerr "This error message should be RED"
Make a script
#!/bin/sh
echo "$@" 1>&2
that would be your tool.
Or make a function if you don't want to have a script in a separate file.
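Assuming you save the script as echoerr somewhere on your PATH and mark it executable (the name is just an example), usage would look like:
chmod +x echoerr
echoerr "this goes to stderr"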
Here is a function that checks the exit status of the last command, shows the error, and terminates the script.
or_exit() {
    local exit_status=$?
    local message=$*

    if [ "$exit_status" -gt 0 ]
    then
        echo "$(date '+%F %T') [$(basename "$0" .sh)] [ERROR] $message" >&2
        exit "$exit_status"
    fi
}
Usage:
gzip "$data_dir"
or_exit "Cannot gzip $data_dir"
rm -rf "$junk"
or_exit Cannot remove $junk folder
The function prints out the script name and the date in order to be useful when the script is called from crontab and logs the errors.
59 23 * * * /my/backup.sh 2>> /my/error.log
Related
I'm developing a BASH script which invokes another BASH script which prints a line to stdout. That output is captured by the first BASH script and used later. It works, but it has the downside that any other output which is printed by the second script will cause this part to behave unexpectedly, because there will be extra content.
main.sh
#!/bin/bash
# Invoke worker.sh and capture its standard output to stats
stats=$(worker.sh --generate-stats)
echo "stats=$stats"
worker.sh
#!/bin/bash
[[ $1 == "--generate-stats" ]] && echo "cpu=90 mem=50 disk=15"
In this over-simplified example, it's not a problem to use this construct, but as worker.sh grows in size and complexity, it's hard to remember that no other command can print to stdout without confounding the behavior, and if someone else works on worker.sh without realizing they can't print to stdout, it can easily get fouled. So what is considered good practice to generate output in one script and use it in the other?
I'm wondering if a fifo would be appropriate, or another file descriptor, or just a plain file. Or if exec should be used in this case, something like what is shown here https://www.tldp.org/LDP/abs/html/x17974.html:
#!/bin/bash
exec 6>&1 # Link file descriptor #6 with stdout.
# Saves stdout.
exec >&2 # stdout now goes to stderr
echo "Didn't know I shouldn't print to stdout"
exec 1>&6 6>&- # Restore stdout and close file descriptor #6.
[[ $1 == "--generate-stats" ]] && echo "cpu=90 mem=50 disk=15"
But I wouldn't want to use that if it's not considered good practice.
Many command-line utilities have quiet and verbose modes; it's generally considered good practice to have the most verbose output (debugging, tracing, etc.) be separated to standard error anyway, but it's common to have normal output be formatted for human legibility (e.g. include table headings and column separators) and quiet mode output be just the bare data for programmatic use. (For one example, see docker images vs docker images -q). So that would be my recommendation - have worker.sh take a flag indicating whether its output is being consumed programmatically, and write it such that its output is all sent via a function that checks that flag and filters appropriately.
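A minimal sketch of that pattern (the -q flag and the emit function are illustrative, not part of the original scripts):
#!/bin/bash
# worker.sh: -q selects bare, machine-readable output
quiet=0
[[ $1 == "-q" ]] && { quiet=1; shift; }

# every line of output goes through this one function
emit() {
    if (( quiet )); then
        printf '%s\n' "$*"            # bare data for programmatic use
    else
        printf 'Stats: %s\n' "$*"     # human-friendly formatting
    fi
}

[[ $1 == "--generate-stats" ]] && emit "cpu=90 mem=50 disk=15"
main.sh would then call stats=$(./worker.sh -q --generate-stats), while interactive users run ./worker.sh --generate-stats and get the friendlier form.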
Maybe a different approach would be for the second script to test whether its stdout is being used programmatically:
gash.sh:
#!/bin/bash
data=$(./another.sh)
echo "Received $data"
another.sh:
#!/bin/bash
# for -t see man isatty(3). 1 is file descriptor 1 - stdout
if [ -t 1 ]; then
    echo "stdout is a terminal"
else
    echo "stdout is not a terminal"
fi
Gives (where $ is a generic keyboard prompt):
$ bash gash.sh
Received stdout is not a terminal
$ bash another.sh
stdout is a terminal
You could then set a flag to change script behaviour (ls(1) does a similar thing). However, you should be prepared for this:
$ bash another.sh|more
stdout is not a terminal
$ bash another.sh > out.txt
$ cat out.txt
stdout is not a terminal
I have a lot of bash commands. Some of them fail for different reasons.
I want to check if some of my errors contain a substring.
Here's an example:
#!/bin/bash
if [[ $(cp nosuchfile /foobar) =~ "No such file" ]]; then
    echo "File does not exist. Please check your files and try again."
else
    echo "No match"
fi
When I run it, the error is printed to screen and I get "No match":
$ ./myscript
cp: cannot stat 'nosuchfile': No such file or directory
No match
Instead, I wanted the error to be captured and match my condition:
$ ./myscript
File does not exist. Please check your files and try again.
How do I correctly match against the error message?
P.S. I've found a solution; what do you think of this?
out=`cp file1 file2 2>&1`
if [[ $out =~ "No such file" ]]; then
    echo "File does not exist. Please check your files and try again."
elif [[ $out =~ "omitting directory" ]]; then
    echo "You have specified a directory instead of a file"
fi
I'd do it like this
# Make sure we always get error messages in the same language
# regardless of what the user has specified.
export LC_ALL=C
case $(cp file1 file2 2>&1) in
    # or use backticks; double quoting the case argument is not necessary
    # but you can do it if you wish
    # (it won't get split or glob-expanded in either case)
    *"No such file"*)
        echo >&2 "File does not exist. Please check your files and try again."
        ;;
    *"omitting directory"*)
        echo >&2 "You have specified a directory instead of a file"
        ;;
esac
This'll work with any POSIX shell too, which might come in handy if you ever decide to
convert your bash scripts to POSIX shell (dash is quite a bit faster than bash).
You need the first 2>&1 redirection because executables normally send information that is not primarily meant for further machine processing to stderr.
You should use the >&2 redirections with the echos because what you're outputting there fits into that category.
PSkocik's answer is the correct one when you need to check for a specific string in an error message. However, if you came looking for ways to detect errors:
I want to check whether or not a command failed
Check the exit code instead of the error messages:
if cp nosuchfile /foobar
then
    echo "The copy was successful."
else
    ret="$?"
    echo "The copy failed with exit code $ret"
fi
I want to differentiate different kinds of failures
Before looking for substrings, check the exit code documentation for your command. For example, man wget lists:
EXIT STATUS
Wget may return one of several error codes if it encounters problems.
0 No problems occurred.
1 Generic error code.
2 Parse error---for instance, when parsing command-line options
3 File I/O error.
(...)
in which case you can check it directly:
wget "$url"
case "$?" in
0) echo "No problem!";;
6) echo "Incorrect password, try again";;
*) echo "Some other error occurred :(" ;;
esac
Not all commands are this disciplined in their exit status, so you may need to check for substrings instead.
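When a command is not disciplined, a sketch that still checks the exit code while capturing only stderr (the 2>&1 >/dev/null ordering keeps stdout out of the capture) might look like:
if ! err=$(cp nosuchfile /foobar 2>&1 >/dev/null); then
    case $err in
        *"No such file"*) echo "File does not exist. Please check your files and try again." >&2 ;;
        *)                echo "cp failed: $err" >&2 ;;
    esac
fi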
Both examples:
out=`cp file1 file2 2>&1`
and
case $(cp file1 file2 2>&1) in
have the same issue: they mix stderr and stdout into one output to be examined. The problem arises when you try a complex command with interactive output, e.g. top or ddrescue, and you need to preserve stdout untouched and examine only the stderr.
To avoid this issue you can try this (works only in Bash 4.2 or newer, and only when job control is inactive, as in non-interactive scripts!):
shopt -s lastpipe
declare errmsg_variable="errmsg_variable UNSET"
command 3>&1 1>&2 2>&3 | read errmsg_variable
if [[ "$errmsg_variable" == *"substring to find"* ]]; then
#commands to execute only when error occurs and specific substring find in stderr
fi
Explanation
This line
command 3>&1 1>&2 2>&3 | read errmsg_variable
redirects stderr into errmsg_variable (using a file-descriptor swap and a pipe) without mixing it with stdout. Normally each pipe spawns its own subprocess, and after a piped command executes, its variable assignments are not visible in the main process, so examining them in the rest of the code won't work. To prevent this you have to change the standard shell behavior by using:
shopt -s lastpipe
which executes the last command of a pipeline in the current shell process, so:
| read errmsg_variable
assigns the content "pumped" into the pipe (in our case the error message) to a variable that resides in the main process. Now you can examine this variable in the rest of the code to look for a specific substring:
if [[ "$errmsg_variable" == *"substring to find"* ]]; then
#commands to execute only when error occurs and specific substring find in stderr
fi
Suppose I have this script:
logfile=$1
echo "This is just a debug message indicating the script is starting to run..."
# Do some work...
echo "Results: x, y and z." >> $logfile
Is it possible to invoke the script from the command-line such that $logfile is actually stdout?
Why? I would like to have a script that prints part of its output to stdout or, optionally, to a file.
"But why not remove the >> $logfile part and just invoke it with ./script >> filename when you want to write to a file?", you may ask.
Well, because I just want to do this "optional redirect" thing for some output messages. In the example above, just the second message should be affected.
Use /dev/stdout if your operating system is Linux or something similarly convention-compliant. Or:
#!/bin/bash
# works on bash even if OS doesn't provide a /dev/stdout
# for non-bash shells, consider using exec 3>&1 explicitly if $1 is empty
exec 3>"${1:-/dev/stdout}"
echo "This is just a debug message indicating the script is starting to run..." >&2
echo "Results: x, y and z." >&3
This is also vastly more efficient than putting >>"$filename" on every line that should log to the file, which reopens the file for output on each command.
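Invocation might then look like this (assuming the script above is saved as script.sh; the filename is illustrative):
./script.sh              # the "Results" line goes to stdout
./script.sh results.log  # the "Results" line goes to results.log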
I use the set -e option at the top of my bash scripts to stop execution on any error. But I can also use -e on the echo command, like the following:
echo -e "Some text".
I have two questions:
How do I correctly handle errors in bash scripts?
What does the -e option mean in the echo command?
The "correct" way to handle bash errors depends on the situation and what you want to accomplish.
In some cases, the if statement approach that barmar describes is the best way to handle a problem.
The vagaries of set -e
set -e will silently stop a script as soon as there is an uncaught error. It will print no message. So, if you want to know why or what line caused the script to fail, you will be frustrated.
Further, as documented on Greg's FAQ, the behavior of set -e varies from one bash version to the next and can be quite surprising.
In sum, set -e has only limited uses.
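A small demonstration of the surprises (a sketch; exact behavior can vary between bash versions):
set -e
false || echo "still running: -e is suppressed in || lists"
if false; then :; fi    # -e is also suppressed in if-tests
echo "reached the end despite two failed commands"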
A die function
In other cases, when a command fails, you want the script to exit immediately with a message. In Perl, the die function provides a handy way to do this. This feature can be emulated in shell with a function:
die () {
    echo "ERROR: $*. Aborting." >&2
    exit 1
}
A call to die can then be easily attached to commands which have to succeed or else the script must be stopped. For example:
cp file1 dir/ || die "Failed to cp file1 to dir."
Here, due to the use of bash's OR control operator, ||, the die command is executed only if the command which precedes it fails.
If you want to handle an error instead of stopping the script when it happens, use if:
if ! some_command
then
    # Do whatever you want here, for instance...
    echo some_command got an error
fi
echo -e is unrelated. This -e option tells the echo command to process escape sequences in its arguments. See man echo for the list of escape sequences.
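For example, -e makes echo interpret backslash escapes such as \n and \t:
echo -e "line one\nline two"    # prints two lines
echo "line one\nline two"       # prints the backslash sequence literally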
One way of handling errors is to use -e in the shebang at the start of your script together with a trap handler for ERR, like this:
#!/bin/bash -e
errHandler () {
    d=$(date '+%D %T :: ')
    echo "$d Error, Exiting..." >&2
    # can do more things like print to a log file etc. or some cleanup
    exit 1
}
trap errHandler ERR
Now this function errHandler will be called only when an error occurs in your script.
Millions of developers write shell scripts to solve various types of tasks. I use shell scripts to simplify deployment, life-cycle management, installation or simply as a glue language.
What I've noticed is that nobody actually cares about shell script style and quality. A lot of teams spend many hours fixing Java, C++, ... style issues, but totally ignore issues in their shell scripts. By the way, usually there is no standard way to implement a shell script within a particular project, so one may find dozens of different, ugly, and buggy scripts spread around the codebase.
To overcome that issue in my projects, I decided to create a shell script template that is universal and good enough. I will provide my templates as-is to make this question a bit more useful. Out of the box these templates provide:
command-line arguments handling
synchronization
some basic help
Arguments handling: getopts (latest version: shell-script-template@github)
#!/bin/bash
# ------------------------------------------------------------------
# [Author] Title
# Description
# ------------------------------------------------------------------
VERSION=0.1.0
SUBJECT=some-unique-id
USAGE="Usage: command -ihv args"
# --- Options processing -------------------------------------------
if [ $# == 0 ] ; then
    echo $USAGE
    exit 1;
fi
while getopts ":i:vh" optname
do
    case "$optname" in
        "v")
            echo "Version $VERSION"
            exit 0;
            ;;
        "i")
            echo "-i argument: $OPTARG"
            ;;
        "h")
            echo $USAGE
            exit 0;
            ;;
        "?")
            echo "Unknown option $OPTARG"
            exit 0;
            ;;
        ":")
            echo "No argument value for option $OPTARG"
            exit 0;
            ;;
        *)
            echo "Unknown error while processing options"
            exit 0;
            ;;
    esac
done
shift $(($OPTIND - 1))
param1=$1
param2=$2
# --- Locks -------------------------------------------------------
LOCK_FILE=/tmp/$SUBJECT.lock
if [ -f "$LOCK_FILE" ]; then
echo "Script is already running"
exit
fi
trap "rm -f $LOCK_FILE" EXIT
touch $LOCK_FILE
# --- Body --------------------------------------------------------
# SCRIPT LOGIC GOES HERE
echo $param1
echo $param2
# -----------------------------------------------------------------
Shell Flags (shFlags) simplifies command-line argument handling a lot, so at some point I decided not to ignore that possibility.
Arguments handling: shflags (latest version: shell-script-template@github)
#!/bin/bash
# ------------------------------------------------------------------
# [Author] Title
# Description
#
# This script uses shFlags -- Advanced command-line flag
# library for Unix shell scripts.
# http://code.google.com/p/shflags/
#
# Dependency:
# http://shflags.googlecode.com/svn/trunk/source/1.0/src/shflags
# ------------------------------------------------------------------
VERSION=0.1.0
SUBJECT=some-unique-id
USAGE="Usage: command -hv args"
# --- Option processing --------------------------------------------
if [ $# == 0 ] ; then
    echo $USAGE
    exit 1;
fi
. ./shflags
DEFINE_string 'aparam' 'adefault' 'First parameter'
DEFINE_string 'bparam' 'bdefault' 'Second parameter'
# parse command line
FLAGS "$#" || exit 1
eval set -- "${FLAGS_ARGV}"
shift $(($OPTIND - 1))
param1=$1
param2=$2
# --- Locks -------------------------------------------------------
LOCK_FILE=/tmp/${SUBJECT}.lock
if [ -f "$LOCK_FILE" ]; then
echo "Script is already running"
exit
fi
trap "rm -f $LOCK_FILE" EXIT
touch $LOCK_FILE
# -- Body ---------------------------------------------------------
# SCRIPT LOGIC GOES HERE
echo "Param A: $FLAGS_aparam"
echo "Param B: $FLAGS_bparam"
echo $param1
echo $param2
# -----------------------------------------------------------------
I do think these templates can be improved to simplify a developer's life even more.
So the question is how to improve them to have the following:
built-in logging
better error handling
better portability
smaller footprint
built-in execution time tracking
This is the header of my shell script template (which can be found here: http://www.uxora.com/unix/shell-script/18-shell-script-template).
It is a man-page lookalike, which is used by usage() to display help as well.
#!/bin/ksh
#================================================================
# HEADER
#================================================================
#% SYNOPSIS
#+ ${SCRIPT_NAME} [-hv] [-o[file]] args ...
#%
#% DESCRIPTION
#% This is a script template
#% to start any good shell script.
#%
#% OPTIONS
#% -o [file], --output=[file] Set log file (default=/dev/null)
#% use DEFAULT keyword to autoname file
#% The default value is /dev/null.
#% -t, --timelog Add timestamp to log ("+%y/%m/%d#%H:%M:%S")
#% -x, --ignorelock Ignore if lock file exists
#% -h, --help Print this help
#% -v, --version Print script information
#%
#% EXAMPLES
#% ${SCRIPT_NAME} -o DEFAULT arg1 arg2
#%
#================================================================
#- IMPLEMENTATION
#- version ${SCRIPT_NAME} (www.uxora.com) 0.0.4
#- author Michel VONGVILAY
#- copyright Copyright (c) http://www.uxora.com
#- license GNU General Public License
#- script_id 12345
#-
#================================================================
# HISTORY
# 2015/03/01 : mvongvilay : Script creation
# 2015/04/01 : mvongvilay : Add long options and improvements
#
#================================================================
# DEBUG OPTION
# set -n # Uncomment to check your syntax, without execution.
# set -x # Uncomment to debug this shell script
#
#================================================================
# END_OF_HEADER
#================================================================
And here is the usage functions to go with:
#== needed variables ==#
SCRIPT_HEADSIZE=$(head -200 ${0} |grep -n "^# END_OF_HEADER" | cut -f1 -d:)
SCRIPT_NAME="$(basename ${0})"
#== usage functions ==#
usage() { printf "Usage: "; head -${SCRIPT_HEADSIZE:-99} ${0} | grep -e "^#+" | sed -e "s/^#+[ ]*//g" -e "s/\${SCRIPT_NAME}/${SCRIPT_NAME}/g" ; }
usagefull() { head -${SCRIPT_HEADSIZE:-99} ${0} | grep -e "^#[%+-]" | sed -e "s/^#[%+-]//g" -e "s/\${SCRIPT_NAME}/${SCRIPT_NAME}/g" ; }
scriptinfo() { head -${SCRIPT_HEADSIZE:-99} ${0} | grep -e "^#-" | sed -e "s/^#-//g" -e "s/\${SCRIPT_NAME}/${SCRIPT_NAME}/g"; }
Here is what you should obtain:
# Display help
$ ./template.sh --help
SYNOPSIS
template.sh [-hv] [-o[file]] args ...
DESCRIPTION
This is a script template
to start any good shell script.
OPTIONS
-o [file], --output=[file] Set log file (default=/dev/null)
use DEFAULT keyword to autoname file
The default value is /dev/null.
-t, --timelog Add timestamp to log ("+%y/%m/%d#%H:%M:%S")
-x, --ignorelock Ignore if lock file exists
-h, --help Print this help
-v, --version Print script information
EXAMPLES
template.sh -o DEFAULT arg1 arg2
IMPLEMENTATION
version template.sh (www.uxora.com) 0.0.4
author Michel VONGVILAY
copyright Copyright (c) http://www.uxora.com
license GNU General Public License
script_id 12345
# Display version info
$ ./template.sh -v
IMPLEMENTATION
version template.sh (www.uxora.com) 0.0.4
author Michel VONGVILAY
copyright Copyright (c) http://www.uxora.com
license GNU General Public License
script_id 12345
You can get the full script template here: http://www.uxora.com/unix/shell-script/18-shell-script-template
I would steer clear of relying on bash as the shell: model your solution on the shell syntax defined by POSIX and use /bin/sh in the shebang. We had a number of surprises recently when Ubuntu changed /bin/sh to dash.
Another pandemic in the shell world is a general misunderstanding of exit status codes. Exiting with an understandable code is what lets other shell scripts programmatically react to specific failures. Unfortunately, there is not a lot of guidance on this beyond the "sysexits.h" header file.
If you are looking for more information about good shell scripting practices, concentrate on Korn shell scripting resources. Ksh programming tends to focus on really programming as opposed to writing haphazard scripts.
Personally, I haven't found much use for shell templates. The unfortunate truth is that most engineers will simply copy and paste your template and continue to write the same sloppy shell code. A better approach is to create a library of shell functions with well-defined semantics and then convince others to use them. This approach will also help with change control. For example, if you find a defect in a template, then every script that was based on it is broken and would require modifications. Using a library makes it possible to fix defects in one place.
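A minimal sketch of the library approach (the file and function names are illustrative):
# lib.sh -- shared functions with well-defined semantics
log() { printf '%s [%s] %s\n' "$(date '+%F %T')" "$(basename "$0")" "$*" >&2; }
die() { log "FATAL: $*"; exit 1; }

# in a consuming script:
. ./lib.sh
cp data.tar /backup/ || die "copy failed"
Fixing a defect in lib.sh then repairs every consumer at once, which is exactly the change-control benefit described above.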
Welcome to the world of shell scripting. Writing shell scripts is a bit of a lost art that seems to be entering a renaissance. There were some good books written on the subject in the late 90's - UNIX Shell Programming by Burns and Arthur comes to mind though the Amazon reviews for the book make it seem awful. IMHO, effective shell code embraces the UNIX philosophy as described by Eric S. Raymond in The Art of Unix Programming.
Here's my bash boilerplate with some sane options explained in the comments
#!/usr/bin/env bash
set -e # Abort script at first error, when a command exits with non-zero status (except in until or while loops, if-tests, list constructs)
set -u # Attempt to use undefined variable outputs error message, and forces an exit
set -x # Similar to verbose mode (-v), but expands commands
set -o pipefail # Causes a pipeline to return the exit status of the last command in the pipe that returned a non-zero return value.
If you're concerned about portability, do not use == in tests. Use = instead. Do not explicitly check if $# is 0. Instead, use ${n?error message} the first time you reference a required argument (e.g. ${3?error message}). This prevents the extremely annoying practice of emitting a usage statement instead of an error message. And most importantly, always put error messages on the right stream and exit with the correct status. For example:
echo "Unknown error while processing options" >&2
exit 1;
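And a minimal sketch of the ${n?} idiom (the messages are illustrative): the first reference to a missing required argument aborts the script, printing the message on stderr and exiting with a non-zero status.
#!/bin/sh
input=${1?input file required}
output=${2?output file required}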
It is often convenient to do something like:
die() { echo "$*"; exit 1; } >&2
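Note that the trailing >&2 applies to the function's entire body, so every echo inside it lands on stderr. A hypothetical call site:
[ -f "$config" ] || die "config file not found"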
I will also share my results. The idea behind all these examples is to encourage overall quality. It is also important to make sure the final result is safe enough.
Logging
It is really crucial to have proper logging available from the very beginning. I'm just trying to think about production usage.
TAG="foo"
LOG_FILE="example.log"
function log() {
    if [ "$HIDE_LOG" ]; then
        echo -e "[$TAG] $@" >> "$LOG_FILE"
    else
        echo "[`date +"%Y/%m/%d:%H:%M:%S %z"`] [$TAG] $@" | tee -a "$LOG_FILE"
    fi
}
log "[I] service start"
log "[D] debug message"
Command test
This is about safety, real-life environments and proper error-handling. Could be optional.
function is_command () {
    log "[I] check if command $1 exists"
    type "$1" &> /dev/null ;
}

CMD=zip
if is_command ${CMD} ; then
    log "[I] '${CMD}' command found"
else
    log "[E] '${CMD}' command not found"
fi
Template processing
Could be only my subjective opinion, but anyway: I have used several different ways to generate configuration files and the like right from the script. Perl, sed, and others do the job, but look a little bit scary.
Recently I noticed a better way:
function process_template() {
    source $1 > $2
    result=$?
    if [ $result -ne 0 ]; then
        log "[E] Error during template processing: '$1' > '$2'"
    fi
    return $result
}
VALUE1="tmpl-value-1"
VALUE2="tmpl-value-2"
VALUE3="tmpl-value-3"
process_template template.tmpl template.result
Template example
echo "Line1: ${VALUE1}
Line2: ${VALUE2}
Line3: ${VALUE3}"
Result example
Line1: tmpl-value-1
Line2: tmpl-value-2
Line3: tmpl-value-3
Nothing helps a shell script more than well-documented behaviour, with usage examples and a list of known bugs. I think no program can be called bulletproof, and bugs may appear at any moment (especially when your script gets used by other people), so the only things I take care of are good coding style and using only what the script really needs. You're heading down the path of aggregation, and that always grows into a large system that comes with a lot of unused modules, is hard to port, and hard to support. And the more a system tries to be portable, the bigger it grows. Seriously, shell scripts do not need to be implemented that way. They must be kept as small as possible to simplify further use.
If the system really needs something big and bulletproof, it’s time to think about C99 or even C++.