Shell script not running on Mac OS - bash

I have what amounts to a very simple bash script that executes a deployment. Here is my code:
#!/usr/bin/env bash

function print_help
{
    echo '
Deploy the application.
Usage:
-r reinstall
-h show help
'
}

reinstall=false

while getopts "rh" opt; do
    case ${opt} in
        r)
            echo "clean"
            reinstall=true
            ;;
        h)
            echo "help"
            print_help
            exit 0
            ;;
    esac
done
I am calling the script as follows:
. deploy.sh -h
No matter what I do, neither option (i.e. -r, -h) results in the respective echo and in the case of -h the print_help function isn't called.
What am I doing wrong?

getopts uses a global variable, OPTIND, to keep track of which argument it is currently processing. For each option it parses, it advances OPTIND so it knows which argument to look at next.
If you call getopts again without resetting OPTIND, it continues from where it last stopped. If it has already parsed the first argument, it will try to continue from the second argument, and so on. Because there is no second argument the second (or later) time you source your script (there is only -h), getopts simply finds nothing left to parse: it thinks it has already handled -h.
If you want to re-parse arguments in the current shell, just reset OPTIND=1 first. Or start a fresh shell, which initializes OPTIND to 1 by itself.
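For illustration, here is a minimal sketch (my own, not from the question) of that reset applied to the question's option loop; with OPTIND=1 at the top, sourcing the script repeatedly in the same shell re-parses the options every time:
#!/usr/bin/env bash
# Reset getopts' bookkeeping so a re-sourced script starts parsing
# from the first argument again.
OPTIND=1
reinstall=false
while getopts "rh" opt; do
    case ${opt} in
        r) echo "clean"; reinstall=true ;;
        h) echo "help" ;;
    esac
done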

If there's a space between the dot and your script name, then you're not launching the script but just sourcing it into your current shell session!
If you want to run your script, you should do:
# chmod +x ./deploy.sh
# ./deploy.sh -h
If you source it, the functions and variables inside your script become available in your current shell session, which allows you to do things like this:
# . ./deploy.sh
# print_help

Related

Changing the value of the variable from another shell script

I have a shell script with an on/off switch inside. I programmed Housekeeping.sh to execute a certain line of code if the value is 1 and skip it if it's 0.
The following is the code on my Housekeeping.sh:
ARCHIVE_SWITCH=1
if [[ $ARCHIVE_SWITCH -eq 1 ]]; then
sqlplus ${BPS_SCHEMA}/${DB_PASSWORD}@${ORACLE_SID} @${BATCH_HOME}/sql/switch_archive.sql
fi
Now I want to create another shell script file that I'll execute to automatically change the variable ARCHIVE_SWITCH to 0 every time I execute this script.
Is there any other way that I can change the value of the variable ARCHIVE_SWITCH from another shell script file that I'll execute manually?
I'd use an option to the script:
bash housekeeping.sh # default is off
bash housekeeping.sh -a # archive switch is on
#!/usr/bin/env bash
archive_switch=0
while getopts :a opt; do
    case $opt in
        a) archive_switch=1 ;;
        *) echo "unknown option -$OPTARG" >&2 ;;
    esac
done
shift $((OPTIND-1))

if ((archive_switch)); then
    sqlplus ...
fi

Setting environment variables with background processes running in parallel

I have a file that takes 1 min to source. So within that file I need to source, I created functions and then ran them in parallel using &. The exported variables from the child processes are not available in the current environment. Is there a solution or trick to solve this issue? Thanks.
Sample:
#!/bin/bash

function getCNAME() {
    curl ...... grep
    export CNAME
}

function getBNAME() {
    curl ...... grep
    export BNAME
}

getCNAME &
getBNAME &
Then I have a main file that sources the code above and tries to use the variables BNAME and CNAME, but it is unable to do so. If I remove the &, it does have access to those variables, but then sourcing the file takes a long time.
You can't use export in your subshell and expect the parent shell to have access to the resulting variable. Consider using process substitutions instead:
#!/bin/bash
# note that if you're sourcing this, as you should be, the shebang will be ignored.
# ...hopefully it's just there for your editor's syntax highlighting.
rc=0
orig_pipefail_setting=$(shopt -p pipefail)
shopt -s pipefail # make sure if either curl _or_ grep fails the entire pipeline does too
# start both processes in the background, with their stdout on two different FDs
exec 4< <(curl ... | grep ... && printf '\0')
exec 5< <(curl ... | grep ... && printf '\0')
# read from those FDs into variables in the current shell
IFS= read -r -d '' BNAME <&4 || { (( rc |= $? )); echo "Error reading BNAME" >&2; }
IFS= read -r -d '' CNAME <&5 || { (( rc |= $? )); echo "Error reading CNAME" >&2; }
exec 4<&- 5<&- # close those file descriptors now that we're done with them
export BNAME CNAME # note that you probably don't actually need to export these
eval "$orig_pipefail_setting" # turn pipefail back off, if it wasn't on when we started
return "$rc" # ...return with an exit status reflecting whether we had any errors
That way file descriptors 4 and 5 will each be attached to a shell pipeline running curl and feeding its output to grep; both of them are started in the background before we try to read from either, so they're both running at the same time.
Are you sure the last two lines shouldn't be:
getCNAME
getBNAME
Edit - OP has fixed this, it used to read:
CNAME
BNAME
If you are sourcing a script (. /my/script), it is not a child process, and its variables will be available in the current shell. You don't even need export.
If you are executing a script normally, it is a child process, and you can't set variables in the parent shell.
The only method I'm aware of for transferring data to the parent shell is via a file (a rough sketch follows at the end of this answer).
The variables should be available.
Check for bugs in your script:
Make sure you haven't used local for the variables in the functions.
Do echo "$CNAME" at the bottom of the sourced script, to test the functions are actually working at all.
EDIT
I did a little more investigation. Here is the problem: & puts the command/function in a subshell. That's why the variable is not available. In a sourced script, without &, it would be.
From man bash:
If a command is terminated by the control operator &, the shell
executes the command in the background in a subshell. The shell does
not wait for the command to finish, and the return status is 0.
These are referred to as asynchronous commands.
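For completeness, here is a rough sketch of the file-based workaround mentioned above; the function bodies are placeholders standing in for the real curl | grep pipelines, and the file names are made up for illustration:
#!/bin/bash
tmpdir=$(mktemp -d)

getCNAME() { echo "cname-result" > "$tmpdir/CNAME"; }  # placeholder for: curl ... | grep ...
getBNAME() { echo "bname-result" > "$tmpdir/BNAME"; }  # placeholder for: curl ... | grep ...

getCNAME &          # both jobs run in parallel
getBNAME &
wait                # block until both have finished

CNAME=$(<"$tmpdir/CNAME")   # read the results back in the parent shell
BNAME=$(<"$tmpdir/BNAME")
rm -rf "$tmpdir"

echo "CNAME=$CNAME BNAME=$BNAME"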

getopt erroneously caches arguments

I've created a script in my bash_aliases to make SSH'ing onto servers easier. However, I'm getting some odd behavior that I don't understand. The below script works as you'd expect, except for when it's re-used.
If I use it like this for this first time in a shell, it works exactly as expected:
$>sdev -s myservername
ssh -i ~/.ssh/id_rsa currentuser@myservername.devdomain.com
However, if I run that a second time, without specifying -s|--server, it will use the server name from the last time I ran this, having seemingly cached it:
$>sdev
ssh -i ~/.ssh/id_rsa currentuser@myservername.devdomain.com
It should have exited with an error and output this message: /bin/bash: A server name (-s|--server) is required.
This happens with any of the arguments; that is, if I specify an argument, and then the next time I don't, this method will use the argument from the last time it was supplied.
Obviously, this is not the behavior I want. What's responsible in my script for doing that, and how do I fix it?
#!/bin/bash

sdev() {
    getopt --test > /dev/null
    if [[ $? -ne 4 ]]; then
        echo "'getopt --test' failed in this environment"
        exit 1
    fi

    OPTIONS=u:,k:,p,s:
    LONGOPTIONS=user:,key:,prod,server:

    # -temporarily store output to be able to check for errors
    # -e.g. use "--options" parameter by name to activate quoting/enhanced mode
    # -pass arguments only via -- "$@" to separate them correctly
    PARSED=$(getopt --options=$OPTIONS --longoptions=$LONGOPTIONS --name "$0" -- "$@")
    if [[ $? -ne 0 ]]; then
        # e.g. $? == 1
        # then getopt has complained about wrong arguments to stdout
        exit 2
    fi

    # read getopt's output this way to handle the quoting right:
    eval set -- "$PARSED"

    domain=devdomain
    user="$(whoami)"
    key=id_rsa

    # now enjoy the options in order and nicely split until we see --
    while true; do
        case "$1" in
            -u|--user)
                user="$2"
                shift 2
                ;;
            -k|--key)
                key="$2".pem
                shift 2
                ;;
            -p|--prod)
                domain=proddomain
                shift
                ;;
            -s|--server)
                server="$2"
                shift 2
                ;;
            --)
                shift
                break
                ;;
            *)
                echo "Programming error"
                exit 3
                ;;
        esac
    done

    if [ -z "$server" ]; then
        echo "$0: A server name (-s|--server) is required."
        kill -INT $$
    fi

    echo "ssh -i ~/.ssh/$key.pem $user@$server.$domain.com"
    ssh -i ~/.ssh/$key $user@$server.$domain.com
}
server is a global shell variable, so it's shared between runs of the function (as long as they're run in the same shell). That is, when you run sdev -s myservername, it sets the variable server to "myservername". Later, when you run just sdev, it checks to see if $server is empty, finds it's not, and goes ahead and uses it.
Solution: use local variables! Actually, it'd be best to declare all of the variables you use in the function as local; that way, you don't run the risk of interfering with something else that's trying to use the same variable name. I'd also recommend avoiding all-caps variable names (like OPTIONS, LONGOPTIONS, and PARSED) -- there are a bunch of all-caps variables that have special meanings to the shell and/or other programs, and if you use one of those by mistake it can cause weird problems.
Anyway, here's the simple solution: add this near the beginning of the function:
local server=""

Getopts in sourced Bash function works interactively, but not in test script?

I have a Bash function library and one function is proving problematic for testing. prunner is a function that is meant to provide some of the functionality of GNU Parallel, and avoid the scoping issues of trying to use other Bash functions in Perl. It supports setting a command to run against the list of arguments with -c, and setting the number of background jobs to run concurrently with -t.
In testing it, I have ended up with the following scenario:
prunner -c "gzip -fk" *.out - works as expected in test.bash and interactively.
find . -maxdepth 1 -name "*.out" | prunner -c echo -t 6 - does not work, seemingly ignoring -c echo.
Testing was performed on Ubuntu 16.04 with Bash 4.3 and on Mac OS X with Bash 4.4.
What appears to be happening with the latter in test.bash is that getopts is refusing to process -c, and thus prunner will try to directly execute the argument without the prefix command it was given. The strange part is that I am able to observe it accepting the -t option, so getopts is at least partially working. Bash debugging with set -x has not been able to shed any light on why this is happening for me.
Here is the function in question, lightly modified to use echo instead of log and quit so that it can be used separately from the rest of my library:
prunner () {
    local PQUEUE=()
    while getopts ":c:t:" OPT ; do
        case ${OPT} in
            c) local PCMD="${OPTARG}" ;;
            t) local THREADS="${OPTARG}" ;;
            :) echo "ERROR: Option '-${OPTARG}' requires an argument." ;;
            *) echo "ERROR: Option '-${OPTARG}' is not defined." ;;
        esac
    done
    shift $(($OPTIND-1))
    for ARG in "$@" ; do
        PQUEUE+=("$ARG")
    done
    if [ ! -t 0 ] ; then
        while read -r LINE ; do
            PQUEUE+=("$LINE")
        done
    fi
    local QCOUNT="${#PQUEUE[@]}"
    local INDEX=0
    echo "Starting parallel execution of $QCOUNT jobs with ${THREADS:-8} threads using command prefix '$PCMD'."
    until [ ${#PQUEUE[@]} == 0 ] ; do
        if [ "$(jobs -rp | wc -l)" -lt "${THREADS:-8}" ] ; then
            echo "Starting command in parallel ($(($INDEX+1))/$QCOUNT): ${PCMD} ${PQUEUE[$INDEX]}"
            eval "${PCMD} ${PQUEUE[$INDEX]}" || true &
            unset PQUEUE[$INDEX]
            ((INDEX++)) || true
        fi
    done
    wait
    echo "Parallel execution finished for $QCOUNT jobs."
}
Can anyone please help me to determine why -c options are not working correctly for prunner when lines are piped to stdin?
My guess is that you are executing the two commands in the same shell. In that case, in the second invocation, OPTIND will have the value 3 (which is where it got to on the first invocation) and that is where getopts will start scanning.
If you use getopts to parse arguments in a function (as opposed to a script), declare local OPTIND=1 so that separate invocations don't interfere with each other.
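A minimal sketch of that pattern; the function name prunner_demo and the fixed default of 8 threads are just for illustration:
prunner_demo() {
    # local OPTIND=1 makes every call start option parsing at the first
    # argument, regardless of what a previous getopts run in this shell did.
    local OPTIND=1 OPT PCMD="" THREADS=8
    while getopts ":c:t:" OPT; do
        case ${OPT} in
            c) PCMD="${OPTARG}" ;;
            t) THREADS="${OPTARG}" ;;
        esac
    done
    shift $((OPTIND - 1))
    echo "command prefix: '$PCMD', threads: $THREADS, remaining args: $*"
}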
Perhaps you are already doing this, but make sure to pass the top-level shell parameters to your function. The function will receive the parameters via the call, for example:
xyz () {
    echo "First arg: ${1}"
    echo "Second arg: ${2}"
}
xyz "This is" "very simple"
In your example, you should always call the function with the standard args so that they can be processed in the function via getopts:
prunner "$@"
Note that prunner will not modify the standard args outside of the function.

Automatic exit from Bash shell script on error [duplicate]

This question already has answers here:
Aborting a shell script if any command returns a non-zero value
(10 answers)
Closed 3 years ago.
I've been writing a shell script and would find it useful to be able to halt its execution if any of the commands failed. See below for an example:
#!/bin/bash
cd some_dir
./configure --some-flags
make
make install
So in this case, if the script can't change to the indicated directory, it certainly shouldn't go on to run ./configure afterwards.
Now I'm well aware that I could have an if check for each command (which I think is a hopeless solution), but is there a global setting to make the script exit if one of the commands fails?
Use the set -e builtin:
#!/bin/bash
set -e
# Any subsequent(*) commands which fail will cause the shell script to exit immediately
Alternatively, you can pass -e on the command line:
bash -e my_script.sh
You can also disable this behavior with set +e.
You may also want to employ all or some of the -e, -u, -x and -o pipefail options, like so:
set -euxo pipefail
-e exits on error, -u errors on undefined variables, -x prints commands before execution, and -o (for option) pipefail exits on command pipe failures. Some gotchas and workarounds are documented well here.
(*) Note:
The shell does not exit if the command that fails is part of the
command list immediately following a while or until keyword,
part of the test following the if or elif reserved words, part
of any command executed in a && or || list except the command
following the final && or ||, any command in a pipeline but
the last, or if the command's return value is being inverted with
!
(from man bash)
To exit the script as soon as one of the commands failed, add this at the beginning:
set -e
This causes the script to exit immediately when some command that is not part of some test (like in an if [ ... ] condition or a && construct) exits with a non-zero exit code.
Use it in conjunction with pipefail.
set -e
set -o pipefail
-e (errexit): Abort the script at the first error, when a command exits with non-zero status (except in until or while loops, if-tests, and list constructs)
-o pipefail: Causes a pipeline to return the exit status of the last command in the pipe that returned a non-zero return value.
See the Advanced Bash-Scripting Guide, Chapter 33: Options.
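As a quick illustration of the pipefail difference (a throwaway snippet, not part of the referenced chapter; set -e is deliberately left out so both echo lines run):
#!/usr/bin/env bash
false | true
echo "without pipefail: $?"    # prints 0 - the failure of 'false' is hidden

set -o pipefail
false | true
echo "with pipefail: $?"       # prints 1 - the pipeline now reports the failure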
Here is how to do it:
#!/bin/sh
abort()
{
echo >&2 '
***************
*** ABORTED ***
***************
'
echo "An error occurred. Exiting..." >&2
exit 1
}
trap 'abort' 0
set -e
# Add your script below....
# If an error occurs, the abort() function will be called.
#----------------------------------------------------------
# ===> Your script goes here
# Done!
trap : 0
echo >&2 '
************
*** DONE ***
************
'
An alternative to the accepted answer that fits in the first line:
#!/bin/bash -e
cd some_dir
./configure --some-flags
make
make install
One idiom is:
cd some_dir && ./configure --some-flags && make && make install
I realize that can get long, but for larger scripts you could break it into logical functions.
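For example, a rough sketch of that idea; the stage names are made up, and each stage simply wraps one of the commands from the question:
#!/usr/bin/env bash
prepare()   { cd some_dir; }
configure() { ./configure --some-flags; }
build()     { make; }
install()   { make install; }

# The chain stops at the first stage that fails; the cd persists across stages
# because the functions run in the current shell.
prepare && configure && build && install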
I think that what you are looking for is the trap command:
trap command signal [signal ...]
For more information, see this page.
Another option is to use the set -e command at the top of your script - it will make the script exit if any program / command returns a non true value.
One point missed in the existing answers is how to inherit the error traps. The bash shell provides an option for that via set:
-E
If set, any trap on ERR is inherited by shell functions, command substitutions, and commands executed in a subshell environment. The ERR trap is normally not inherited in such cases.
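A small illustrative sketch (my own, not from the quoted manual): without set -E the ERR trap stays silent about a failure inside a function; with it, the trap fires at the failing line.
#!/usr/bin/env bash
set -E                                      # let functions inherit the ERR trap
trap 'echo "ERR trap: failure on line $LINENO" >&2' ERR

f() {
    false                                   # with set -E the trap fires here
    echo "f continues after the failure"
}
f                                           # without set -E, the failure goes unreported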
Adam Rosenfield's recommendation to use set -e is right in certain cases, but it has its own potential pitfalls. See GreyCat's BashFAQ 105 - Why doesn't set -e (or set -o errexit, or trap ERR) do what I expected?
According to the manual, set -e exits
if a simple command exits with a non-zero status. The shell does not exit if the command that fails is part of the command list immediately following a while or until keyword, part of the test in an if statement, part of an && or || list except the command following the final && or ||, any command in a pipeline but the last, or if the command's return value is being inverted via !.
which means set -e does not work in the following simple cases (detailed explanations can be found on the wiki):
Using the arithmetic operator let or $((..)) (bash 4.1 onwards) to increment a variable value, as in:
#!/usr/bin/env bash
set -e
i=0
let i++ # or ((i++)) on bash 4.1 or later
echo "i is $i"
If the offending command is not the last command in an && or || list. For example, set -e does not trigger an exit in the script below, even though test -d fails:
#!/usr/bin/env bash
set -e
test -d nosuchdir && echo no dir
echo survived
When the failing command is the test of an if statement: the failing test -d only selects the branch, and the if statement itself exits with the status of the last command it actually ran (or 0 if no branch ran), so set -e does not trigger even though the test failed:
#!/usr/bin/env bash
set -e
f() { if test -d nosuchdir; then echo no dir; fi; }
f
echo survived
Failures inside a command substitution are ignored, unless inherit_errexit is set (available from bash 4.4):
#!/usr/bin/env bash
set -e
foo=$(expr 1-1; true)
echo survived
When you use commands that look like assignments but aren't, such as export, declare, typeset or local. Here, a call to f will not cause an exit, because local swallows the exit code of the failing command substitution:
set -e
f() { local var=$(somecommand that fails); }
g() { local var; var=$(somecommand that fails); }
When the offending command is not the last command in a pipeline. For example, the script below still goes through. One option is to enable pipefail, which makes the pipeline return the exit code of the last (rightmost) command that failed:
set -e
somecommand that fails | cat -
echo survived
The ideal recommendation is to not use set -e and to implement your own version of error checking instead. More information on implementing custom error handling can be found in one of my answers to Raise error in a Bash script.
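As an example of what such hand-rolled checking might look like (a sketch; the try helper and its name are my own, not the linked answer's code):
#!/usr/bin/env bash
# Run a command and abort with a message if it fails.
try() {
    "$@" || { echo "ERROR: '$*' failed (exit $?)" >&2; exit 1; }
}

try cd some_dir
try ./configure --some-flags
try make
try make install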
