getopt erroneously caches arguments - bash

I've created a script in my bash_aliases to make SSH'ing onto servers easier. However, I'm getting some odd behavior that I don't understand. The script below works as you'd expect, except when it's re-used.
If I use it like this for this first time in a shell, it works exactly as expected:
$>sdev -s myservername
ssh -i ~/.ssh/id_rsa currentuser@myservername.devdomain.com
However, if I run that a second time, without specifying -s|--server, it will use the server name from the last time I ran this, having seemingly cached it:
$>sdev
ssh -i ~/.ssh/id_rsa currentuser@myservername.devdomain.com
It should have exited with an error and output this message: /bin/bash: A server name (-s|--server) is required.
This happens with any of the arguments; that is, if I specify an argument, and then the next time I don't, this method will use the argument from the last time it was supplied.
Obviously, this is not the behavior I want. What's responsible in my script for doing that, and how do I fix it?
#!/bin/bash
sdev() {
    getopt --test > /dev/null
    if [[ $? -ne 4 ]]; then
        echo "`getopt --test` failed in this environment"
        exit 1
    fi

    OPTIONS=u:,k:,p,s:
    LONGOPTIONS=user:,key:,prod,server:

    # - temporarily store output to be able to check for errors
    # - e.g. use "--options" parameter by name to activate quoting/enhanced mode
    # - pass arguments only via -- "$@" to separate them correctly
    PARSED=$(getopt --options=$OPTIONS --longoptions=$LONGOPTIONS --name "$0" -- "$@")
    if [[ $? -ne 0 ]]; then
        # e.g. $? == 1, i.e. getopt has complained about wrong arguments to stdout
        exit 2
    fi

    # read getopt's output this way to handle the quoting right:
    eval set -- "$PARSED"

    domain=devdomain
    user="$(whoami)"
    key=id_rsa

    # now enjoy the options in order and nicely split until we see --
    while true; do
        case "$1" in
            -u|--user)
                user="$2"
                shift 2
                ;;
            -k|--key)
                key="$2".pem
                shift 2
                ;;
            -p|--prod)
                domain=proddomain
                shift
                ;;
            -s|--server)
                server="$2"
                shift 2
                ;;
            --)
                shift
                break
                ;;
            *)
                echo "Programming error"
                exit 3
                ;;
        esac
    done

    if [ -z "$server" ]; then
        echo "$0: A server name (-s|--server) is required."
        kill -INT $$
    fi

    echo "ssh -i ~/.ssh/$key $user@$server.$domain.com"
    ssh -i ~/.ssh/$key $user@$server.$domain.com
}

server is a global shell variable, so it's shared between runs of the function (as long as they're run in the same shell). That is, when you run sdev -s myservername, it sets the variable server to "myservername". Later, when you run just sdev, it checks to see if $server is empty, finds it's not, and goes ahead and uses it.
Solution: use local variables! Actually, it'd be best to declare all of the variables you use in the function as local; that way, you don't run the risk of interfering with something else that's trying to use the same variable name. I'd also recommend avoiding all-caps variable names (like OPTIONS, LONGOPTIONS, and PARSED) -- there are a bunch of all-caps variables that have special meanings to the shell and/or other programs, and if you use one of those by mistake it can cause weird problems.
Anyway, here's the simple solution: add this near the beginning of the function:
local server=""
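For illustration, here's a sketch of how the top of the function might look with everything made local (lowercase names per the advice above; the ellipsis stands for the rest of your function, unchanged):
sdev() {
    # every variable the function assigns is declared local, so nothing
    # leaks between invocations or collides with the caller's variables
    local options longoptions parsed
    local domain user key server=""
    # ... rest of the function as before ...
}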

Related

Execute local script on remote host by passing remote host parameters on command line together with script arguments

Is anybody aware of a syntax to pass the remote host parameters (user and IP/hostname) together with the script arguments on the local host, and have the script execute on the remote host?
I don't mean like this: $ ssh user@remoteServer "bash -s" -- < /path/script.ssh -a X -b Y
Instead, I want the script to be invoked like this: $ /path/script.ssh user@remoteServer -a X -b Y
But I'm not sure how to achieve this kind of behaviour in the script:
[...] script [...]
connect to user#remoteServer
[...] execute the script code (on the remote host) [...]
end of script
Any suggestion? Do I need to work this from another way instead?
EDIT
I've managed to make the script execute something after it connects via SSH, but I'm a bit at a loss as to why some commands are executed before they are passed to the remote host's terminal; my code looks like this at the moment:
while getopts 'ha:u:d:s:w:c:' OPT; do
    case $OPT in
        a) host=$OPTARG ;;
        u) user=$OPTARG ;;
        d) device=$OPTARG ;;
        s) sensor=$OPTARG ;;
        w) warn_thresh=$OPTARG ;;
        c) crit_thresh=$OPTARG ;;
        h) print_help ;;
        *) printf "Wrong option or value\n"
           print_help ;;
    esac
done
shift $(($OPTIND - 1))

# Check if host is reachable
if (( $# )); then
    ssh ${user}@${host} < $0

    # Check for sensor program or file
    case $device in
        linux)     do things ;;
        raspberry) do things ;;
        amlogic)   do things ;;
    esac

    # Read temperature information
    case $device in
        linux)     do things ;;
        raspberry) do things ;;
        amlogic)   do things ;;
    esac

    # Check for errors
    if (())
    then
        # Temperature above critical threshold
    # Check for warnings
    elif (())
    then
        # Temperature above warning threshold
    fi

    # Produce Nagios output
    printf [......]
fi
The script seemingly runs without issue, but I get no output.
A simplistic example -
if (( $# ))              # if there are arguments
then ssh "$1" < $0       # connect to the first and execute this script there
else whoami              # on the remote, there will be no args...
     uname -n            # if remote needs arguments, change the test condition
     date                # these statements can be as complex as needed
fi
My example script just takes a target system login as its first argument.
Run it with no args and it outputs the data for the current system; give it a login, and it runs there.
If you have password-less logins with authorized keys it's very smooth, otherwise it will prompt you.
Just parse your arguments and behave accordingly. :)
If you need arguments on the remote, use a more complex test to decide which branch to take...
Edit 2
I repeat: If you need arguments on the remote, use a more complex test to decide which branch to take...
while getopts 'ha:u:d:s:w:c:' OPT; do
    case $OPT in
        a) host=$OPTARG ;;
        u) user=$OPTARG ;;
        d) device=$OPTARG ;;
        s) sensor=$OPTARG ;;
        w) warn_thresh=$OPTARG ;;
        c) crit_thresh=$OPTARG ;;
        h) print_help ;;
        *) printf "Wrong option or value\n"
           print_help ;;
    esac
done
shift $(($OPTIND - 1))

# handoff to remote host
if [[ -n "$host" ]]
then scp "$0" "${user}@${host}:/tmp/"
     ssh "${user}@${host}" "/tmp/${0##*/} -d $device -s $sensor -w $warn_thresh -c $crit_thresh"
     exit $?
fi

# if it gets here, we're ON the remote host, so code accordingly

# Check for sensor program or file
case $device in
    linux)     do things ;;
    raspberry) do things ;;
    amlogic)   do things ;;
esac

# Read temperature information
case $device in
    linux)     do things ;;
    raspberry) do things ;;
    amlogic)   do things ;;
esac

# Check for errors
if (())
then
    # Temperature above critical threshold
# Check for warnings
elif (())
then
    # Temperature above warning threshold
fi

# Produce Nagios output
printf [......]
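For illustration, a hypothetical invocation from the monitoring side (the script name, host, and threshold values are made up): the script copies itself to /tmp on the remote host, then re-runs itself there with every option except -a and -u, so the remote copy takes the non-handoff branch.
./check_temp.sh -a 192.168.1.20 -u monitor -d raspberry -s cpu -w 60 -c 75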

Shell script not running on Mac OS

I have what amounts to a very simple bash script that executes a deployment. Here is my code:
#!/usr/bin/env bash

function print_help
{
    echo '
Deploy the application.
Usage:
  -r  reinstall
  -h  show help
'
}

reinstall=false

while getopts "rh" opt; do
    case ${opt} in
        r)
            echo "clean"
            reinstall=true
            ;;
        h)
            echo "help"
            print_help
            exit 0
            ;;
    esac
done
I am calling the script as follows:
. deploy.sh -h
No matter what I do, neither option (i.e. -r, -h) results in the respective echo and in the case of -h the print_help function isn't called.
What am I doing wrong?
getopts uses the global variable OPTIND to keep track of which argument it is currently processing. For each option it parses, it increments/changes OPTIND so it knows which argument comes next.
If you call getopts without resetting OPTIND, it will start from wherever it last ended. If it has already parsed the first argument, it will want to continue parsing from the second argument, and so on. Because there is no second argument (there is only -h), the second (or any later) time you source your script, getopts just fails, because it thinks it has already parsed -h.
If you want to re-parse arguments in the current shell, you just need to reset OPTIND=1. Or start a fresh new shell, which resets OPTIND to 1 by itself.
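A minimal sketch of that fix in deploy.sh (only the OPTIND line is new; the loop is abbreviated from the question):
OPTIND=1    # reset getopts' cursor so each run parses from the first argument
while getopts "rh" opt; do
    case ${opt} in
        r) reinstall=true ;;
        h) print_help ;;
    esac
done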
If there's a space between the dot and your script name, then you're not launching the script but just sourcing it into your current shell session!
If you want to run your script, you should do:
# chmod +x ./deploy.sh
# ./deploy.sh -h
If you source it, the functions and variables inside your script will be available in your current shell session. This allows you to do things like this:
# . ./deploy.sh
# print_help

Can a shell script flag have optional arguments if parsing with getopts?

I have a script that I want to run in three ways:
Without a flag -- ./script.sh
With a flag but no parameter -- ./script.sh -u
With a flag that takes a parameter -- ./script.sh -u username
Is there a way to do this?
After reading some guides (examples here and here) it doesn't seem like this is a possibility, especially if I want to use getopts.
Can I do this with getopts or will I need to parse my options another way? My goal is to continue using getopts if I can.
The non-getopts example in BashFAQ #35 can cover the use case:
user_set=0   # 1 if any -u is given
user=        # set to specific string for -u, if provided

while :; do
    case $1 in
        -u=*) user_set=1; user=${1#*=} ;;
        -u) user_set=1
            if [ -n "$2" ]; then
                user=$2
                shift
            fi ;;
        --) shift; break ;;
        *) break ;;
    esac
    shift
done
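With that loop at the top of script.sh, the three invocation styles from the question map out like this:
./script.sh              # user_set=0, user is empty
./script.sh -u           # user_set=1, user is empty
./script.sh -u username  # user_set=1, user=username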

Getopts in sourced Bash function works interactively, but not in test script?

I have a Bash function library and one function is proving problematic for testing. prunner is a function that is meant to provide some of the functionality of GNU Parallel, and avoid the scoping issues of trying to use other Bash functions in Perl. It supports setting a command to run against the list of arguments with -c, and setting the number of background jobs to run concurrently with -t.
In testing it, I have ended up with the following scenario:
prunner -c "gzip -fk" *.out - works as expected in test.bash and interactively.
find . -maxdepth 1 -name "*.out" | prunner -c echo -t 6 - does not work, seemingly ignoring -c echo.
Testing was performed on Ubuntu 16.04 with Bash 4.3 and on Mac OS X with Bash 4.4.
What appears to be happening with the latter in test.bash is that getopts is refusing to process -c, and thus prunner will try to directly execute the argument without the prefix command it was given. The strange part is that I am able to observe it accepting the -t option, so getopts is at least partially working. Bash debugging with set -x has not been able to shed any light on why this is happening for me.
Here is the function in question, lightly modified to use echo instead of log and quit so that it can be used separately from the rest of my library:
prunner () {
    local PQUEUE=()
    while getopts ":c:t:" OPT ; do
        case ${OPT} in
            c) local PCMD="${OPTARG}" ;;
            t) local THREADS="${OPTARG}" ;;
            :) echo "ERROR: Option '-${OPTARG}' requires an argument." ;;
            *) echo "ERROR: Option '-${OPTARG}' is not defined." ;;
        esac
    done
    shift $(($OPTIND-1))
    for ARG in "$@" ; do
        PQUEUE+=("$ARG")
    done
    if [ ! -t 0 ] ; then
        while read -r LINE ; do
            PQUEUE+=("$LINE")
        done
    fi
    local QCOUNT="${#PQUEUE[@]}"
    local INDEX=0
    echo "Starting parallel execution of $QCOUNT jobs with ${THREADS:-8} threads using command prefix '$PCMD'."
    until [ ${#PQUEUE[@]} == 0 ] ; do
        if [ "$(jobs -rp | wc -l)" -lt "${THREADS:-8}" ] ; then
            echo "Starting command in parallel ($(($INDEX+1))/$QCOUNT): ${PCMD} ${PQUEUE[$INDEX]}"
            eval "${PCMD} ${PQUEUE[$INDEX]}" || true &
            unset PQUEUE[$INDEX]
            ((INDEX++)) || true
        fi
    done
    wait
    echo "Parallel execution finished for $QCOUNT jobs."
}
Can anyone please help me to determine why -c options are not working correctly for prunner when lines are piped to stdin?
My guess is that you are executing the two commands in the same shell. In that case, on the second invocation OPTIND will have the value 3 (which is where it got to on the first invocation), and that is where getopts will start scanning.
If you use getopts to parse arguments to a function (as opposed to a script), declare local OPTIND=1 to avoid invocations from interfering with each other.
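A minimal sketch of that change at the top of the function (the rest of prunner stays exactly as it is):
prunner () {
    local OPTIND=1    # scoped to this call, so repeated runs re-parse from $1
    local PQUEUE=()
    # ... getopts loop and the rest of the body, unchanged ...
}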
Perhaps you are already doing this, but make sure to pass the top-level shell parameters to your function. The function will receive the parameters via the call, for example:
xyz () {
    echo "First arg: ${1}"
    echo "Second arg: ${2}"
}
xyz "This is" "very simple"
In your example, you should always call the function with the shell's own positional parameters so that getopts can process them inside the function:
prunner "$@"
Note that prunner will not modify the positional parameters outside of the function.

Simple bash script for starting application silently

Here I am again. Today I wrote a little script that is supposed to start an application silently in my Debian environment.
Easy as
silent "npm search 1234556"
It sort of works, but not entirely.
As you can see, I've commented the section where I'm having trouble.
This line:
$($cmdLine) &
doesn't hide application output but this one
$($1 >/dev/null 2>/dev/null) &
works perfectly. What am I missing? Many thanks.
#!/bin/sh
# Daniele Brugnara
# October, 2013
# Silently exec a command line passed as argument

errorsRedirect=""

if [ -z "$1" ]; then
    echo "Please, don't joke me..."
    exit 1
fi

cmdLine="$1 >/dev/null"

# if passed a second parameter, errors will be hidden
if [ -n "$2" ]; then
    cmdLine="$cmdLine 2>/dev/null"
fi

# not working
$($cmdLine) &

# works perfectly
#$($1 >/dev/null 2>/dev/null) &
With the use of the evil eval, the following script will work:
#!/bin/sh
# Silently exec a command line passed as argument

errorsRedirect=""

if [ -z "$1" ]; then
    echo "Please, don't joke me..."
    exit 1
fi

cmdLine="$1 >/dev/null"

# if passed a second parameter, errors will be hidden
if [ -n "$2" ]; then
    cmdLine="$cmdLine 2>&1"
fi

eval "$cmdLine &"
Rather than building up a command with redirection tacked on the end, you can incrementally apply it:
#!/bin/sh
if [ -z "$1" ]; then
    exit
fi

exec >/dev/null
if [ -n "$2" ]; then
    exec 2>&1
fi

exec $1
This first redirects stdout of the shell script to /dev/null. If the second argument is given, it redirects stderr of the shell script too. Then it runs the command which will inherit stdout and stderr from the script.
I removed the ampersand (&) since being silent has nothing to do with running in the background. You can add it back (and remove the exec on the last line) if it is what you want.
I added exec at the end as it is slightly more efficient. Since it is the end of the shell script, there is nothing left to do, so you may as well be done with it, hence exec.
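As a quick usage sketch (saving the script above as silent.sh is my assumption, not part of the answer):
./silent.sh 'npm search 1234556'        # stdout discarded
./silent.sh 'npm search 1234556' quiet  # stdout and stderr discarded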
& means that you're running the command in the background (a kind of multitasking), whereas
1>/dev/null 2>/dev/null
means that you redirect stdout and stderr to a sort of garbage bin, which is why you don't see anything.
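A tiny illustration of how the two are independent (safe to paste into any shell):
sleep 2 &                    # backgrounded: the prompt returns immediately, output is untouched
ls /nonexistent 2>/dev/null  # foreground: runs to completion, error message discarded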
Furthermore, cmdLine="$1 >/dev/null" is incorrect; you should use ' instead of ":
cmdLine='$1 >/dev/null'
You can build your command line in a variable and run a bash with it in the background:
bash -c "$cmdLine"&
Note that it might be useful to store the program's output (stdout/stderr) somewhere, instead of throwing it into /dev/null.
In addition, why do you need errorsRedirect?
You can even add a wait at the end, just to be safe, if you want:
#!/bin/sh
# Daniele Brugnara
# October, 2013
# Silently exec a command line passed as argument

[ ! $1 ] && echo "Please, don't joke me..." && exit 1

cmdLine="$1>/dev/null"

# if passed a second parameter, errors will be hidden
[ $2 ] && cmdLine+=" 2>/dev/null"

echo "Running \"$cmdLine\""
bash -c "$cmdLine" &
wait
