Check if command error contains a substring - bash

I have a lot of bash commands. Some of them fail for different reasons.
I want to check if some of my errors contain a substring.
Here's an example:
#!/bin/bash
if [[ $(cp nosuchfile /foobar) =~ "No such file" ]]; then
echo "File does not exist. Please check your files and try again."
else
echo "No match"
fi
When I run it, the error is printed to the screen and I get "No match":
$ ./myscript
cp: cannot stat 'nosuchfile': No such file or directory
No match
Instead, I wanted the error to be captured and match my condition:
$ ./myscript
File does not exist. Please check your files and try again.
How do I correctly match against the error message?
P.S. I've found a solution; what do you think about this?
out=`cp file1 file2 2>&1`
if [[ $out =~ "No such file" ]]; then
echo "File does not exist. Please check your files and try again."
elif [[ $out =~ "omitting directory" ]]; then
echo "You have specified a directory instead of a file"
fi

I'd do it like this:
# Make sure we always get error messages in the same language
# regardless of what the user has specified.
export LC_ALL=C
case $(cp file1 file2 2>&1) in
# Or use backticks; double-quoting the case argument is not necessary,
# but you can do it if you wish
# (it won't get split or glob-expanded in either case).
*"No such file"*)
echo >&2 "File does not exist. Please check your files and try again."
;;
*"omitting directory"*)
echo >&2 "You have specified a directory instead of a file"
;;
esac
This'll work with any POSIX shell too, which might come in handy if you ever decide to
convert your bash scripts to POSIX shell (dash is quite a bit faster than bash).
You need the first 2>&1 redirection because executables normally send information not meant for further machine processing to stderr.
You should use the >&2 redirections with the echos because what you're outputting there falls into that category.
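For instance, a minimal sketch along the same lines that also distinguishes success from failure via the exit status (assuming GNU cp's English message wording, hence the LC_ALL=C):
export LC_ALL=C
if ! out=$(cp file1 file2 2>&1); then
    case $out in
    *"No such file"*)
        echo >&2 "File does not exist. Please check your files and try again." ;;
    *"omitting directory"*)
        echo >&2 "You have specified a directory instead of a file" ;;
    *)
        echo >&2 "cp failed: $out" ;;   # fall-through for any other error message
    esac
fi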

PSkocik's answer is the correct one if you need to check for a specific string in an error message. However, if you came here looking for ways to detect errors:
I want to check whether or not a command failed
Check the exit code instead of the error messages:
if cp nosuchfile /foobar
then
echo "The copy was successful."
else
ret="$?"
echo "The copy failed with exit code $ret"
fi
I want to differentiate different kinds of failures
Before looking for substrings, check the exit code documentation for your command. For example, man wget lists:
EXIT STATUS
Wget may return one of several error codes if it encounters problems.
0 No problems occurred.
1 Generic error code.
2 Parse error---for instance, when parsing command-line options
3 File I/O error.
(...)
in which case you can check it directly:
wget "$url"
case "$?" in
0) echo "No problem!";;
6) echo "Incorrect password, try again";;
*) echo "Some other error occurred :(" ;;
esac
Not all commands are this disciplined in their exit status, so you may need to check for substrings instead.
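One related pitfall: $? holds only the status of the most recent command, so if you need it after running anything else, save it immediately. A small sketch:
wget "$url"
status=$?   # capture right away; the next command overwrites $?
if [ "$status" -ne 0 ]; then
    echo "wget failed with exit code $status" >&2
fi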

Both examples:
out=`cp file1 file2 2>&1`
and
case $(cp file1 file2 2>&1) in
have the same issue: they mix stderr and stdout into a single output that can then be examined. The problem arises when you run a complex command with interactive output, e.g. top or ddrescue, and you need to keep stdout untouched and examine only stderr.
To avoid this issue you can try the following (works only in bash >= 4.2, which introduced lastpipe):
shopt -s lastpipe
declare errmsg_variable="errmsg_variable UNSET"
command 3>&1 1>&2 2>&3 | read errmsg_variable
if [[ "$errmsg_variable" == *"substring to find"* ]]; then
#commands to execute only when error occurs and specific substring find in stderr
fi
Explanation
This line
command 3>&1 1>&2 2>&3 | read errmsg_variable
redirects stderr into errmsg_variable (using a file-descriptor swap and a pipe) without mixing it with stdout. Normally a pipe spawns its own subprocess, and after a piped command finishes, any variable assignments made inside the pipe are not visible in the main process, so examining them later in the script would be ineffective. To prevent this you have to change the standard shell behavior by using:
shopt -s lastpipe
which executes the last stage of a pipeline in the current process (note that lastpipe only takes effect when job control is off, i.e. in non-interactive shells such as scripts), so:
| read errmsg_variable
assigns the content "pumped" into the pipe (in our case the error message) to a variable that resides in the main process. Now you can examine this variable in the rest of the code to find a specific substring:
if [[ "$errmsg_variable" == *"substring to find"* ]]; then
#commands to execute only when error occurs and specific substring find in stderr
fi
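Putting it all together, a self-contained sketch using cp as the failing command (assumes bash >= 4.2 running non-interactively, so lastpipe takes effect):
#!/bin/bash
shopt -s lastpipe
errmsg=""
# cp's stdout (empty here) is rerouted to the script's stderr; only cp's stderr enters the pipe
cp nosuchfile /foobar 3>&1 1>&2 2>&3 | read -r errmsg   # read grabs the first line of stderr
if [[ "$errmsg" == *"No such file"* ]]; then
    echo "File does not exist. Please check your files and try again." >&2
fi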

Related

Cannot figure out how to fix shellcheck complaint that I should not use a glob as a command when starting one script from another

Example script:
#!/bin/bash
printf '1\n1\n1\n1\n' | ./script2*.sh >/dev/null 2>/dev/null
Shellcheck returns the following:
In script1.sh line 3:
printf '1\n1\n1\n1\n' | ./script2*.sh >/dev/null 2>/dev/null
^-- SC2211: This is a glob used as a command name. Was it supposed to be in ${..}, array, or is it missing quoting?
According to https://github.com/koalaman/shellcheck/wiki/SC2211, there should be no exceptions to this rule.
Specifically, it suggests "If you want to specify a command name via glob, e.g. to not hard code version in ./myprogram-*/foo, expand to array or parameters first to allow handling the cases of 0 or 2+ matches."
The reason I'm using the glob in the first place is that I append or change the date on any script that I have just created or changed. Interestingly enough, when I use "bash script2*.sh" instead of "./script2*.sh" the complaint goes away.
Have I fixed the problem, or am I tricking shellcheck into ignoring a problem that should not be ignored? If I am using bad bash syntax, how might I properly execute another script that needs to be referenced via a glob?
The problem with this is that ./script2*.sh may end up running
./script2-20171225.sh ./script2-20180226.sh ./script2-copy.sh
which is a strange and probably unintentional thing to do, especially if the script is confused by such arguments, or if you wanted your most up-to-date file to be used. Your "fix" has the same fundamental problem.
The suggestion you mention would take the form:
shopt -s nullglob   # so zero matches yield an empty array rather than the literal pattern
array=(./script2*.sh)
[ "${#array[@]}" -ne 1 ] && { echo "Expected exactly one match, found ${#array[@]}" >&2; exit 1; }
"${array[0]}"
and guard against this problem.
Since you appear to assume that you'll only ever have exactly one matching file to be invoked without parameters, you can turn this into a function:
runByGlob() {
if (( $# != 1 ))
then
echo "Expected exactly 1 match but found $#: $*" >&2
exit 1
elif command -v "$1" > /dev/null 2>&1
then
"$1"
else
echo "Glob is not a valid command: $*" >&2
exit 1
fi
}
whatever | runByGlob ./script2*.sh
Now if you ever have zero or multiple matching files, it will abort with an error instead of potentially running the wrong file with strange arguments.
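With a hypothetical file layout, the guard behaves roughly like this:
$ ls
script2-20180226.sh
$ printf '1\n' | runByGlob ./script2*.sh    # exactly one match: the script runs
$ printf '1\n' | runByGlob ./nomatch*.sh    # zero matches: the unexpanded glob fails the command -v check
Glob is not a valid command: ./nomatch*.sh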

Unix Shell Script: Display a custom message if no file with the given extension exists in a directory

I have the following Unix shell script, which lists the files in a given directory. We only need to pass the file extension, and the script should list the matching file or files, or display a custom message.
My try:
Script:
#!/bin/sh
FileNameWithPath=`ls home/docs/customers/*.$1 | wc -w`
if [ $FileNameWithPath -gt 0 ]
then
ls home/docs/customers/*.$1
else
echo "Custom Message about failure(File not found)"
fi
Run:
$ ./Test.sh txt
Note: The above script works fine if I give a file extension that exists, but if I give a non-existent extension it throws an error in addition to the custom message. I just want to print the custom message, that's it.
You can do it with a single command:
ls home/docs/customers/*.$1 2> /dev/null || echo "Custom message about failure (File not found)"
The first command (the ls) tries to list the files. If it fails, it prints an error message (suppressed by '2> /dev/null') and returns a non-zero exit code. Since the exit code differs from 0, the second part (the echo) is executed.
If you want to keep your code, you can drop the ls error by redirecting stderr to /dev/null in this way:
FileNameWithPath=`ls home/docs/customers/*.$1 2>/dev/null | wc -w`
This doesn't require the use of ls; you can do it with globbing itself (note that shopt is bash-specific, so this variant needs #!/bin/bash rather than #!/bin/sh):
# turn on glob failure for no matches
shopt -s failglob
# list files or a custom error message
(echo home/docs/customers/*."$1") 2>/dev/null ||
echo "Custom Message about failure"
The error message you get happens in the line where you are assigning to FileNameWithPath. You can suppress it by redirecting stderr to /dev/null, i.e. 2>/dev/null.
It is much better (and POSIX compliant) to use $() instead of the backtick operator, given that you started your script with #!/bin/sh rather than #!/bin/bash. You will then be portable across modern Bourne shells.
Another big win for $() is that it nests easily, whereas you have to escape backticks when you nest them.
As Andrea Carron points out in their answer, you can do the whole thing on one line using the || logical-or operator. This is a very common idiom.
On the off-chance that your MVCE refers to something more complex, I fixed it for you below.
#!/bin/sh
FileNameWithPath=$(ls home/docs/customers/*.$1 2>/dev/null | wc -w)
if [ $FileNameWithPath -gt 0 ]
then
ls home/docs/customers/*.$1
else
echo "Custom Message about failure(File not found)"
fi
Just add a redirection of errors to the null device in the second line of your script:
FileNameWithPath=`ls home/docs/customers/*.$1 2>/dev/null | wc -w`
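Another option that avoids parsing ls output altogether is to expand the glob directly into an array (a sketch; this needs bash rather than plain sh):
#!/bin/bash
files=( home/docs/customers/*."$1" )
# without nullglob, an unmatched pattern stays literal, so testing the first element suffices
if [ -e "${files[0]}" ]; then
    printf '%s\n' "${files[@]}"
else
    echo "Custom Message about failure (File not found)"
fi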

How to handle errors in bash scripts correctly?

I use the set -e option at the top of my bash scripts to stop executing on any error. But I can also use -e on the echo command, like the following:
echo -e "Some text".
I have two questions:
How do I correctly handle errors in bash scripts?
What does the -e option mean in the echo command?
The "correct" way to handle bash errors depends on the situation and what you want to accomplish.
In some cases, the if statement approach that barmar describes is the best way to handle a problem.
The vagaries of set -e
set -e will silently stop a script as soon as there is an uncaught error. It will print no message. So, if you want to know why or what line caused the script to fail, you will be frustrated.
Further, as documented on Greg's FAQ, the behavior of set -e varies from one bash version to the next and can be quite surprising.
In sum, set -e has only limited uses.
A die function
In other cases, when a command fails, you want the script to exit immediately with a message. In Perl, the die function provides a handy way to do this. This feature can be emulated in shell with a function:
die () {
echo "ERROR: $*. Aborting." >&2
exit 1
}
A call to die can then be easily attached to commands which have to succeed or else the script must be stopped. For example:
cp file1 dir/ || die "Failed to cp file1 to dir."
Here, due to the use of bash's OR control operator, ||, the die command is executed only if the command which precedes it fails.
If you want to handle an error instead of stopping the script when it happens, use if:
if ! some_command
then
# Do whatever you want here, for instance...
echo some_command got an error
fi
echo -e is unrelated. This -e option tells the echo command to process escape sequences in its arguments. See man echo for the list of escape sequences.
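For example:
$ echo -e "first line\nsecond line"
first line
second line
$ echo "first line\nsecond line"   # without -e, the backslash sequence stays literal
first line\nsecond line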
One way of handling errors is to use -e in the shebang at the start of your script and set a trap handler for ERR, like this:
#!/bin/bash -e
errHandler () {
d=$(date '+%D %T :: ')
echo "$d Error, Exiting..." >&2
# can do more things like print to a log file etc or some cleanup
exit 1
}
trap errHandler ERR
Now this function errHandler will be called only when an error occurs in your script.
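One caveat worth noting: by default the ERR trap is not inherited by shell functions, command substitutions, or subshells. If you want it to fire for failures inside functions too, also enable errtrace. A minimal sketch:
#!/bin/bash
set -eE   # -e: exit on error; -E (errtrace): functions and subshells inherit the ERR trap
trap 'echo "Error near line $LINENO, exiting..." >&2' ERR
fails_inside() { false; }
fails_inside   # triggers the trap, then the script exits because of -e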

How to redirect output of command to diff

I am trying to write a loop, and this doesn't work:
for t in `ls $TESTS_PATH1/cmd*.in` ; do
diff $t.out <($parser_test `cat $t`)
# Print result
if [[ $? -eq 0 ]] ; then
printf "$t ** TEST PASSED **"
else
printf "$t ** TEST FAILED **"
fi
done
This also doesn't help:
$parser_test `cat $t` | $DIFF $t.out -
Diff shows that the output differs (strangely, I see the expected error line printed to the screen, as if it went to stdout without being caught by diff), but when running with a temporary file, everything works fine:
for t in `ls $TESTS_PATH1/cmd*.in` ; do
# Compare output with template
$parser_test `cat $t` 1> $TMP_FILE 2> $TMP_FILE
diff $TMP_FILE $t.out
# Print result
if [[ $? -eq 0 ]] ; then
printf "$t $CGREEN** TEST PASSED **$CBLACK"
else
printf "$t $CRED** TEST FAILED **$CBLACK"
fi
done
I must avoid using a temporary file. Why doesn't the first loop work, and how do I fix it?
Thank you.
P.S. The *.in files contain erroneous command-line parameters for the program, and the *.out files contain the error messages the program must print for those parameters.
First, to your error, you need to redirect standard error:
diff $t.out <($parser_test `cat $t` 2>&1)
Second, to all the other problems you may not be aware of:
- don't use ls with a for loop (it has numerous problems, such as unexpected behavior for filenames containing spaces); instead, use: for t in $TESTS_PATH1/cmd*.in; do
- to support file names with spaces, quote your variable expansions: "$t" instead of $t
- don't use backquotes; they are deprecated in favor of $(command)
- don't use a subshell to cat one file; instead, just run: $parser_test <$t
- use either [[ $? == 0 ]] (new syntax) or [ $? -eq 0 ] (old syntax)
- if you use printf instead of echo, don't forget that you need to add \n at the end of the line manually
- never use 1> $TMP_FILE 2> $TMP_FILE - this just overwrites stdout with stderr in a non-predictable manner. If you want to combine standard output and standard error, use: 1>$TMP_FILE 2>&1
- by convention, ALL_CAPS names are used for/by environment variables; in-script variable names are recommended to be no_caps
- you don't need to use $? right after executing a command, it's redundant; instead, you can directly run: if command; then ...
After fixing all that, your script would look like this:
for t in $tests_path1/cmd*.in; do
if diff "$t.out" <($parser_test <"$t" 2>&1); then
echo "$t ** TEST PASSED **"
else
echo "$t ** TEST FAILED **"
fi
done
If you don't care for the actual output of diff, you can add >/dev/null right after diff to silence it.
Third, if I understand correctly, your file names are of the form foo.in and foo.out, and not foo.in and foo.in.out (which is what the script above expects). If this is true, you need to change the diff line to the following; ${t/.in} strips the .in suffix, so for t=tests/cmd1.in the comparison runs against tests/cmd1.out:
diff "${t/.in}.out" <($parser_test <"$t" 2>&1)
In your second test you are capturing standard error, but in the first one (and the pipe example) stderr remains uncaptured, and perhaps that's the "diff" (pun intended).
You can probably add a '2>&1' in the proper place to combine the stderr and stdout streams, e.g.:
diff $t.out <($parser_test `cat $t` 2>&1)
Not to mention, you don't say what "doesn't work" means, does that mean it doesn't find a difference, or it exits with an error message? Please clarify in case you need more info.

echo that outputs to stderr

Is there a standard Bash tool that acts like echo but outputs to stderr rather than stdout?
I know I can do echo foo 1>&2 but it's kinda ugly and, I suspect, error prone (e.g. more likely to get edited wrong when things change).
You could do this, which facilitates reading:
>&2 echo "error"
>&2 copies file descriptor #2 to file descriptor #1. Therefore, after this redirection is performed, both file descriptors will refer to the same file: the one file descriptor #2 was originally referring to. For more information see the Bash Hackers Illustrated Redirection Tutorial.
You could define a function:
echoerr() { echo "$#" 1>&2; }
echoerr hello world
This would be faster than a script and have no dependencies.
Camilo Martin's bash-specific suggestion uses a "here string" and will print anything you pass to it, including arguments (-n) that echo would normally swallow:
echoerr() { cat <<< "$@" 1>&2; }
Glenn Jackman's solution also avoids the argument swallowing problem:
echoerr() { printf "%s\n" "$*" >&2; }
Since 1 is the standard output, you do not have to explicitly name it in front of an output redirection like >. Instead, you can simply type:
echo This message goes to stderr >&2
Since you seem to be worried that 1>&2 will be difficult for you to reliably type, the elimination of the redundant 1 might be a slight encouragement to you!
Another option
echo foo >>/dev/stderr
No, that's the standard way to do it. It shouldn't cause errors.
If you don't mind logging the message also to syslog, the not_so_ugly way is:
logger -s "$msg"
The -s option means: "Output the message to standard error as well as to the system log."
Another option that I recently stumbled on is this:
{
echo "First error line"
echo "Second error line"
echo "Third error line"
} >&2
This uses only Bash built-ins while making multi-line error output less error prone (since you don't have to remember to add >&2 to every line).
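The same trick extends to a whole function: a redirection attached to the function body applies every time the function is called. A small sketch:
print_errors() {
    echo "First error line"
    echo "Second error line"
} >&2
print_errors   # both echos land on stderr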
Note: I'm answering the post, not the misleading/vague "echo that outputs to stderr" question (already answered by the OP).
Use a function to show the intention and source the implementation you want. E.g.
#!/bin/bash
[ -x error_handling ] && . error_handling
filename="foobar.txt"
config_error $filename "invalid value!"
output_xml_error "No such account"
debug_output "Skipping cache"
log_error "Timeout downloading archive"
notify_admin "Out of disk space!"
fatal "failed to open logger!"
And error_handling being:
ADMIN_EMAIL=root@localhost
config_error() { filename="$1"; shift; echo "Config error in $filename: $*" >&2; }
output_xml_error() { echo "<error>$*</error>" >&2; }
debug_output() { [ "$DEBUG" = "1" ] && echo "DEBUG: $*"; }
log_error() { logger -s "$*"; }
fatal() { which logger >/dev/null && logger -s "FATAL: $*" || echo "FATAL: $*"; exit 100; }
notify_admin() { echo "$*" | mail -s "Error from script" "$ADMIN_EMAIL"; }
Reasons that handle concerns in OP:
nicest syntax possible (meaningful words instead of ugly symbols)
harder to make an error (especially if you reuse the script)
it's not a standard Bash tool, but it can be a standard shell library for you or your company/organization
Other reasons:
clarity - shows intention to other maintainers
speed - functions are faster than shell scripts
reusability - a function can call another function
configurability - no need to edit original script
debugging - easier to find the line responsible for an error (especially if you're dealing with a ton of redirected/filtered output)
robustness - if a function is missing and you can't edit the script, you can fall back to using external tool with the same name (e.g. log_error can be aliased to logger on Linux)
switching implementations - you can switch to external tools by removing the "x" attribute of the library
output agnostic - you no longer have to care if it goes to STDERR or elsewhere
personalizing - you can configure behavior with environment variables
My suggestion:
echo "my errz" >> /proc/self/fd/2
or
echo "my errz" >> /dev/stderr
echo "my errz" > /proc/self/fd/2 will effectively output to stderr because /proc/self is a link to the current process, and /proc/self/fd holds the process opened file descriptors, and then, 0, 1, and 2 stand for stdin, stdout and stderr respectively.
The /proc/self link doesn't work on MacOS, however, /proc/self/fd/* is available on Termux on Android, but not /dev/stderr. How to detect the OS from a Bash script? can help if you need to make your script more portable by determining which variant to use.
Don't use cat as some have mentioned here. cat is a program, while echo and printf are bash (shell) builtins. Launching a program or another script (also mentioned above) means creating a new process with all its costs. Using builtins and writing functions is quite cheap, because there is no need to create (execute) a new process environment.
The opener asks "is there any standard tool to output (pipe) to stderr"; the short answer is: no... why? ... redirecting pipes is an elementary concept in systems like Unix (Linux...), and bash (sh) builds on these concepts.
I agree with the opener that redirecting with notations like 2>&1 is not very pleasant for modern programmers, but that's bash. Bash was not intended to write huge and robust programs; it is intended to help admins get their work done with fewer keypresses ;-)
And note that you can place the redirection anywhere in the line:
$ echo This message >&2 goes to stderr
This message goes to stderr
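If you want to feel the cost difference yourself, time a loop over the builtin against one that forks an external binary per iteration (assuming /bin/echo exists; no timings quoted since they vary by machine, but the forking loop is dramatically slower):
time for i in {1..1000}; do echo hi; done >/dev/null        # builtin: no extra processes
time for i in {1..1000}; do /bin/echo hi; done >/dev/null   # external: 1000 forks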
This is a simple STDERR function, which redirects its pipe input to STDERR.
#!/bin/bash
# *************************************************************
# This function redirects the pipe input to STDERR.
#
# @param stream
# @return string
#
function STDERR () {
cat - 1>&2
}
# remove the directory /bubu
if rm /bubu 2>/dev/null; then
echo "Bubu is gone."
else
echo "Has anyone seen Bubu?" | STDERR
fi
# run bubu.sh and redirect your output
tux@earth:~$ ./bubu.sh >/tmp/bubu.log 2>/tmp/bubu.err
read is a shell builtin command that prints to stderr, and can be used like echo without performing redirection tricks:
read -t 0.1 -p "This will be sent to stderr"
The -t 0.1 is a timeout that disables read's main functionality, storing one line of stdin into a variable.
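Two caveats with this trick: the prompt is only displayed when stdin is a terminal, and read returns a status greater than 128 when the timeout expires, which would abort a script running under set -e. A guarded sketch:
set -e
read -t 0.1 -p "This will be sent to stderr" || true   # swallow the timeout status so -e doesn't kill the script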
Combining the solutions suggested by James Roth and Glenn Jackman, and adding an ANSI color code to display the error message in red:
echoerr() { printf "\e[31;1m%s\e[0m\n" "$*" >&2; }
# if somehow \e is not working on your terminal, use \u001b instead
# echoerr() { printf "\u001b[31;1m%s\u001b[0m\n" "$*" >&2; }
echoerr "This error message should be RED"
Make a script
#!/bin/sh
echo "$@" 1>&2
that would be your tool.
Or make a function if you don't want to have a script in separate file.
Here is a function for checking the exit status of the last command, showing the error and terminating the script.
or_exit() {
local exit_status=$?
local message=$*
if [ "$exit_status" -gt 0 ]
then
echo "$(date '+%F %T') [$(basename "$0" .sh)] [ERROR] $message" >&2
exit "$exit_status"
fi
}
Usage:
gzip "$data_dir"
or_exit "Cannot gzip $data_dir"
rm -rf "$junk"
or_exit Cannot remove $junk folder
The function prints out the script name and the date in order to be useful when the script is called from crontab and logs the errors.
59 23 * * * /my/backup.sh 2>> /my/error.log
