Case statement not working in bash, conditions not applied

My case statement isn't working. Pressing Enter (an empty string) doesn't make the script exit, and the other cases don't work either. None of the exit 1 commands run when they should; every case fails even when I type input crafted specifically to match it.
I've since found that the case does match, but the exit 1 statement inside it doesn't exit the script. How do I exit the script correctly at that point?
#!/bin/bash
...
get_virtual_host() {
    if [ -t 0 ]; then
        read -p "Create virtualhost (= Folder name,case sensitive)" -r host
    else
        # same as 'read' but for GUI
        host=$(zenity --forms --add-entry=Name --text='Create virtualhost (= Folder name,case sensitive)')
    fi
    case "$host" in
        "") notify_user "Bad input: empty" ; exit 1 ;;
        *"*"*) notify_user "Bad input: wildcard" ; exit 1 ;;
        *[[:space:]]*) notify_user "Bad input: whitespace" ; exit 1 ;;
    esac
    echo "$host"
}
host=$(get_virtual_host)
Added for clarification:
notify_user () {
    echo "$1" >&2
    [ -t 0 ] || if type -p notify-send >/dev/null; then notify-send "$1"; else xmessage -buttons Ok:0 -nearmouse "$1" -timeout 10; fi
}

The function is in fact written correctly. It's how it's called that's the problem.
host=$(get_virtual_host)
When you capture a command's output the command runs in a subshell. Exiting the subshell doesn't directly cause the parent shell to exit; the parent shell needs to check the subshell's exit status.
host=$(get_virtual_host) || exit
This will exit the parent if get_virtual_host fails. A bare exit without an explicit exit code forwards the existing value of $?.
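To see the difference, here is a minimal sketch (the function name f is made up for illustration): the exit inside the command substitution ends only the subshell, and the parent keeps going unless it checks the status.
#!/bin/bash
f() { echo "before"; exit 3; }

out=$(f)                                # f runs in a subshell; exit 3 ends only that subshell
echo "still here, status=$? out=$out"   # prints: still here, status=3 out=before

out=$(f) || exit                        # the bare exit forwards $? (here 3) to the caller
echo "never reached"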

Related

Exit script function in bash

I have the below script, which I will be running via Jenkins:
#!/bin/bash
function getToken
{
    echo "function to get token"
}

function call_init
{
    echo "Creating a config file"
}

function call_list
{
    echo "calling list*"
}

# Starting execution
if [[ -z "$TOKEN" ]]; then
    TOKEN=$(getToken)
    if [ $? -ne 0 ]; then
        exit 1
    fi
fi

echo "Creating a config file and populating it"
call_init
if [ $? -ne 0 ]; then
    exit 1
fi

if [ -n $ACTION ]; then
    case "$ACTION" in
        'list') echo "Action is list"
                call_list
                if [ $? -ne 0 ]; then
                    exit 1
                fi
                ;;
        'update') echo "Section is update"
                ;;
        'delete') echo "Section is delete"
                ;;
        *) echo "This is a default message"
                ;;
    esac
fi
As you can see, there is a lot of repetition of the code below, which helps me fail the Jenkins job by returning exit code 1:
if [ $? -ne 0 ]; then
    exit 1
fi
What would be the most efficient way to handle this? I need it to always exit with code 1.
P.S.: I went through Checking Bash exit status of several commands efficiently, but I was not able to get it to work for the above script.
The best approach is to use explicit error checking.
Your current pattern can be streamlined; the following are all equivalent:
run_command
if [ $? -ne 0 ]; then
    print_error
    exit 1
fi

if ! run_command; then
    print_error
    exit 1
fi

run_command || { print_error; exit 1; }
Or in its simplest form, with no error message:
run_command || exit 1
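Applied to your script, for example (a sketch assuming the getToken, call_init and call_list functions from your question):
#!/bin/bash
if [[ -z "$TOKEN" ]]; then
    TOKEN=$(getToken) || exit 1
fi

echo "Creating a config file and populating it"
call_init || exit 1

if [[ -n "$ACTION" ]]; then
    case "$ACTION" in
        'list')   echo "Action is list"
                  call_list || exit 1 ;;
        'update') echo "Section is update" ;;
        'delete') echo "Section is delete" ;;
        *)        echo "This is a default message" ;;
    esac
fi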
As an alternative, you might want to use set -e.
You might also be interested in set -o pipefail.
These are not the preferred solution, as @William has pointed out, but they can be useful for getting simple scripts to throw errors:
Note that set -e is generally not considered best practice. Its semantics are extremely unexpected in edge cases (e.g., if you invoke set -e in a function), and, more importantly, have changed dramatically across different versions of the shell. It is far better to explicitly invoke exit by running cmd || exit
You can use set -e to cause bash to bail out if a command returns non-zero, and set +e to disable this behaviour.
set: set [-abefhkmnptuvxBCHP] [-o option-name] [--] [arg ...]
Set or unset values of shell options and positional parameters.
Change the value of shell attributes and positional parameters, or
display the names and values of shell variables.
Options:
[...]
-e Exit immediately if a command exits with a non-zero status.
[...]
-o option-name
Set the variable corresponding to option-name:
[...]
pipefail the return value of a pipeline is the status of
the last command to exit with a non-zero status,
or zero if no command exited with a non-zero status
[...]
To make use of this, you must enable the option before it would be required.
For example:
# disable 'exit immediately' (the default)
set +e
echo "running false..."
false
echo "we're still running"
# enable 'exit immediately'
set -e
echo "running false..."
false
echo "this should never get printed"
To make a failing pipeline abort the script, set -o pipefail must be used in conjunction with set -e:
# enable 'exit immediately'
set -e
# disable 'pipefail' (the default)
set +o pipefail
echo "running false | true..."
false | true
echo "we're still running (only the last exit status is considered)"
# enable 'pipefail'
set -o pipefail
echo "running false | true..."
false | true
echo "this should never get printed"

How to execute a bash script line by line? [duplicate]

#Example Script
wget http://file1.com
cd /dir
wget http://file2.com
wget http://file3.com
I want to execute the bash script line by line, test the exit code ($?) of each command, and decide whether to proceed or not.
That basically means I need to add the following snippet below every line in the original script:
if test $? -eq 0
then
    echo "No error"
else
    echo "ERROR"
    exit
fi
and the original script becomes:
#Example Script
wget http://file1.com
if test $? -eq 0
then
    echo "No error"
else
    echo "ERROR"
    exit
fi

cd /dir
if test $? -eq 0
then
    echo "No error"
else
    echo "ERROR"
    exit
fi

wget http://file2.com
if test $? -eq 0
then
    echo "No error"
else
    echo "ERROR"
    exit
fi

wget http://file3.com
if test $? -eq 0
then
    echo "No error"
else
    echo "ERROR"
    exit
fi
But the script becomes bloated.
Is there a better method?
One can use set -e, but it's not without its own pitfalls. Alternatively, one can bail out on errors:
command || exit 1
And your if-statement can be written less verbosely:
if command; then
The above is the same as:
command
if test "$?" -eq 0; then
set -e makes the script fail on non-zero exit status of any command. set +e removes the setting.
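For the example script in the question, that might look like this (a sketch; the URLs are the placeholders from the question, and both forms are shown for illustration):
#!/bin/bash
wget http://file1.com || exit 1

if ! cd /dir; then
    echo "ERROR" >&2
    exit 1
fi

wget http://file2.com || exit 1
wget http://file3.com || exit 1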
There are many ways to do that.
For example, you can use set to automatically stop on a "bad" return code, simply by putting
set -e
at the top of your script. Alternatively, you could write a "check_rc" function; see here for some starting points.
Or, you could start with this:
check_error () {
    if [ "$RET" -eq 0 ]; then
        echo "DONE"
        echo ""
    else
        echo "ERROR"
        exit 1
    fi
}
To be used with:
echo "some example command"
RET=$? ; check_error
As said, there are many ways to do this.
Your best bet is to use set -e to terminate the script as soon as any non-zero return code is observed. Alternatively, you can write an error-handling function and register it with trap so it runs whenever a command fails; this removes the repeated if...else blocks, and you can print any message before exiting.
function errorsRead() {
    echo "Some non-zero return code observed..";
    exit 1;
}

trap errorsRead ERR;

somecommand # command of your need; errorsRead runs automatically if it fails
You can do this contraption:
wget http://file1.com || exit 1
This will terminate the script with exit code 1 if the command returns a non-zero (failed) result.

How to process basic commandline arguments in Bash?

So I started taking a look at scripting using vim today, and I'm just very lost; I was looking for some help in a few areas.
For my first project, I want to process a file as a command-line argument, and if a file isn't included when the user executes this script, a usage message should be displayed, followed by exiting the program.
I have no clue where to even start with that. Will I need an if ... then statement, or what?
Save vim for later and try to learn one thing at a time. A simpler text editor is called nano.
Now, as far as checking for a file as an argument, and showing a usage message otherwise, this is a typical pattern:
PROGNAME="$0"

function show_usage()
{
    echo "Usage: ${PROGNAME} <filename>" >&2
    echo "..." >&2
    exit 1
}

if [[ $# -lt 1 ]]; then
    show_usage
fi

echo "Contents of ${1}:"
cat "$1"
Let's break this down.
PROGNAME="$0"
$0 is the name of the script, as it was called on the command line.
function show_usage()
{
    echo "Usage: ${PROGNAME} <filename>" >&2
    echo "..." >&2
    exit 1
}
This is the function that prints the "usage" message and exits with a failure status code. 0 is success; anything other than 0 is a failure. Note that we redirect our echo to &2: this prints the usage message on standard error rather than standard output.
if [[ $# -lt 1 ]]; then
    show_usage
fi
$# is the number of arguments passed to the script. If that number is less than 1, print the usage message and exit.
echo "Contents of ${1}:"
cat "$1"
$1 is our filename: the first argument of the script. We can do whatever processing we want here, with $1 being the filename. Hope this helps!
I think you're asking how to write a bash script that requires a file as a command-line argument and exits with a usage message if there's a problem with that:
#!/bin/bash

# check if user provided exactly one command-line argument:
if [ $# -ne 1 ]; then
    echo "Usage: `basename "$0"` file"
    exit 1
# now check if the provided argument corresponds to a real file
elif [ ! -f "$1" ]; then
    echo "Error: couldn't find $1."
    exit 1
fi

# do things with the file...
stat "$1"
head "$1"
tail "$1"
grep 'xyz' "$1"

BASH: How to tell if a script uses exit?

Say I have two scripts that just print back the return code from a useless subshell:
script1
(echo; exit 0)
echo $?
script2
(echo)
echo $?
Both give back 0. But is there a way to tell that the first subshell explicitly used the exit command?
After some research I made a breakthrough: you can set up an exit_handler that can tell whether there was an exit call simply by examining the last command.
#! /bin/bash

exit_handler () {
    ret=$?
    if echo "$BASH_COMMAND" | grep -e "^exit " >> /dev/null
    then
        echo "it was an explicit exit"
    else
        echo "it was an implicit exit"
    fi
    exit $ret
}

trap "exit_handler" EXIT

exit 22
This will print
it was an explicit exit
Now, in order to tell the parent, instead of echoing we can write to a file, a named pipe, or whatever (see the sketch after the revised handler below).
As choroba noted, exit without an argument is reported as an implicit exit, which is admittedly wrong, since exit (without an argument) is the same as exit $?. For that reason the regex has to take that into consideration:
#! /bin/bash

exit_handler () {
    ret=$?
    if echo "$BASH_COMMAND" | grep -e "^exit \|^exit$" >> /dev/null
    then
        echo "it was an explicit exit"
    else
        echo "it was an implicit exit"
    fi
    exit $ret
}

trap "exit_handler" EXIT

exit 22
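Here is one way the parent could pick up the verdict through a file instead of the echo, as mentioned above (a sketch; the file names child.sh/parent.sh and the FLAGFILE variable are made up for illustration):
#! /bin/bash
# child.sh -- same handler as above, but recording the verdict in $FLAGFILE
exit_handler () {
    ret=$?
    if echo "$BASH_COMMAND" | grep -e "^exit \|^exit$" >> /dev/null
    then
        echo "explicit" > "$FLAGFILE"
    else
        echo "implicit" > "$FLAGFILE"
    fi
    exit $ret
}
trap "exit_handler" EXIT
exit 22
The parent then runs the child with a flag file of its choosing and reads it back:
#! /bin/bash
# parent.sh -- run the child and read back how it exited
flag=$(mktemp)
FLAGFILE="$flag" ./child.sh
echo "child exit status: $?, exit kind: $(cat "$flag")"
rm -f "$flag"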

In bash, is there an equivalent of die "error msg"

In Perl, you can exit with an error message with die "some msg". Is there an equivalent single command in bash? Right now, I'm achieving this with the commands echo "some msg" && exit 1
You can roll your own easily enough:
die() { echo "$*" 1>&2 ; exit 1; }
...
die "Kaboom"
Here's what I'm using. It's too small to put in a library so I must have typed it hundreds of times ...
warn () {
    echo "$0:" "$@" >&2
}

die () {
    rc=$1
    shift
    warn "$@"
    exit $rc
}
Usage: die 127 "Syntax error"
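For example, a hypothetical usage sketch (the file arguments and backup path are made up):
[ $# -ge 1 ] || die 2 "usage: $0 <file>..."
cp -- "$@" /backup/ || die 1 "copy to /backup/ failed"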
This function is very close to Perl's die (but it also includes the function name):
function die
{
    local message=$1
    [ -z "$message" ] && message="Died"
    echo "$message at ${BASH_SOURCE[1]}:${FUNCNAME[1]} line ${BASH_LINENO[0]}." >&2
    exit 1
}
And the Bash way of dying when a built-in function fails (with the function name):
function die
{
    local message=$1
    [ -z "$message" ] && message="Died"
    echo "${BASH_SOURCE[1]}: line ${BASH_LINENO[0]}: ${FUNCNAME[1]}: $message." >&2
    exit 1
}
So, Bash keeps all the needed info in several variables:
LINENO - the currently executed line number
FUNCNAME - the call stack of functions; the first element (index 0) is the current function, the second (index 1) is the function that called the current function
BASH_LINENO - the call stack of line numbers where the corresponding FUNCNAME was called
BASH_SOURCE - an array of source files where the corresponding FUNCNAME is defined
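A quick sketch to see what those variables hold (the function names are made up for illustration):
#!/bin/bash
inner() {
    echo "LINENO=$LINENO"                 # the line currently being executed
    echo "FUNCNAME=${FUNCNAME[*]}"        # e.g. "inner outer main"
    echo "BASH_LINENO=${BASH_LINENO[*]}"  # the lines from which each frame was called
    echo "BASH_SOURCE=${BASH_SOURCE[*]}"  # the source file of each frame
}
outer() { inner; }
outer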
Yep, that's pretty much how you do it.
You might use a semicolon or newline instead of &&, since you want to exit whether or not echo succeeds (though I'm not sure what would make it fail).
Programming in a shell means using lots of little commands (some built-in commands, some tiny programs) that do one thing well and connecting them with file redirection, exit code logic and other glue.
It may seem weird if you're used to languages where everything is done using functions or methods, but you get used to it.
# echo passed params and print them to a log file
wlog(){
    # if a terminal is attached, echo to it
    test -t 1 && echo "`date +%Y.%m.%d-%H:%M:%S` [$$] $*"
    # if LogFile is set, append to it as well
    test -z "$LogFile" || {
        echo "`date +%Y.%m.%d-%H:%M:%S` [$$] $*" >> "$LogFile"
    } #eof test
}
# eof function wlog

# exit with passed status and message
Exit(){
    ExitStatus=0
    case $1 in
        [0-9]) ExitStatus="$1"; shift 1;;
    esac
    Msg="$*"
    test "$ExitStatus" = "0" || Msg=" ERROR: $Msg : $#"
    wlog " $Msg"
    exit $ExitStatus
}
#eof function Exit
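A hypothetical usage sketch (the log path and commands are made up):
LogFile=/tmp/myscript.log

wlog "starting backup"
cp /etc/hosts /tmp/hosts.bak || Exit 1 "backup copy failed"
wlog "backup finished"
Exit 0 "all done"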
