Execute local script on remote host by passing remote host parameters on command line together with script arguments - bash

Is anybody aware of a syntax for passing remote host parameters (user and IP/hostname) together with script arguments on the local host, so that the script executes on the remote host?
I don't mean like this: $ ssh user@remoteServer "bash -s" -- < /path/script.ssh -a X -b Y
Instead, I want the script itself to be invoked like this: $ /path/script.ssh user@remoteServer -a X -b Y
But I'm not sure how to achieve this kind of behaviour in the script:
[...] script [...]
connect to user@remoteServer
[...] execute the script code (on the remote host) [...]
end of script
Any suggestion? Do I need to work this from another way instead?
EDIT
I've managed to make the script execute something after it connects via SSH, but I'm a bit puzzled as to why some commands are executed locally before they are passed to the remote host; my code looks like this at the moment:
while getopts 'ha:u:d:s:w:c:' OPT; do
  case $OPT in
    a) host=$OPTARG ;;
    u) user=$OPTARG ;;
    d) device=$OPTARG ;;
    s) sensor=$OPTARG ;;
    w) warn_thresh=$OPTARG ;;
    c) crit_thresh=$OPTARG ;;
    h) print_help ;;
    *) printf "Wrong option or value\n"
       print_help ;;
  esac
done
shift $((OPTIND - 1))
# Check if host is reachable
if (( $# )); then
  ssh "${user}@${host}" < "$0"
  # Check for sensor program or file
  case $device in
    linux) do things ;;
    raspberry) do things ;;
    amlogic) do things ;;
  esac
  # Read temperature information
  case $device in
    linux) do things ;;
    raspberry) do things ;;
    amlogic) do things ;;
  esac
  # Check for errors
  if (())
  then
    # Temperature above critical threshold
  # Check for warnings
  elif (())
  then
    # Temperature above warning threshold
  fi
  # Produce Nagios output
  printf [......]
fi
The script seemingly runs without issue, but I get no output.

A simplistic example -
if (( $# ))             # if there are arguments
then ssh "$1" < "$0"    # connect to the first and execute this script there
else whoami             # on the remote, there will be no args...
     uname -n           # if the remote needs arguments, change the test condition
     date               # these statements can be as complex as needed
fi
My example script just takes a target system login as its first argument.
Run it with no args and it outputs the data for the current system; give it a login and it runs there.
If you have password-less logins with authorized keys, it's very smooth; otherwise it will prompt you.
Just parse your arguments and behave accordingly. :)
If you need arguments on the remote, use a more complex test to decide which branch to take...
Edit 2
I repeat: If you need arguments on the remote, use a more complex test to decide which branch to take...
while getopts 'ha:u:d:s:w:c:' OPT; do
  case $OPT in
    a) host=$OPTARG ;;
    u) user=$OPTARG ;;
    d) device=$OPTARG ;;
    s) sensor=$OPTARG ;;
    w) warn_thresh=$OPTARG ;;
    c) crit_thresh=$OPTARG ;;
    h) print_help ;;
    *) printf "Wrong option or value\n"
       print_help ;;
  esac
done
shift $((OPTIND - 1))
# handoff to remote host
if [[ -n "$host" ]]
then scp "$0" "${user}@${host}:/tmp/"
     ssh "${user}@${host}" "/tmp/${0##*/} -d $device -s $sensor -w $warn_thresh -c $crit_thresh"
     exit $?
fi
# if it gets here, we're ON the remote host, so code accordingly
# Check for sensor program or file
case $device in
  linux) do things ;;
  raspberry) do things ;;
  amlogic) do things ;;
esac
# Read temperature information
case $device in
  linux) do things ;;
  raspberry) do things ;;
  amlogic) do things ;;
esac
# Check for errors
if (())
then
  # Temperature above critical threshold
# Check for warnings
elif (())
then
  # Temperature above warning threshold
fi
# Produce Nagios output
printf [......]
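Under the same assumptions as the handoff above (a reachable user@host, a writable /tmp on the remote), the pattern can be sketched as a function so the "already local" branch can be exercised without a real remote. The name run_or_handoff, the option letters, and the /tmp landing path are illustrative only:

```shell
run_or_handoff() {
  # Parse host/user plus one "real work" option, as in the loops above.
  # OPTIND is made local so repeated calls in one shell parse correctly.
  local OPTIND=1 OPT host= user= device=linux
  while getopts 'a:u:d:' OPT; do
    case $OPT in
      a) host=$OPTARG ;;
      u) user=$OPTARG ;;
      d) device=$OPTARG ;;
      *) echo "usage: run_or_handoff [-a host] [-u user] [-d device]" >&2
         return 1 ;;
    esac
  done
  shift $((OPTIND - 1))

  if [[ -n $host ]]; then
    # Local side: copy this script to the remote, then run the copy
    # there WITHOUT -a/-u, so it falls through to the work branch.
    scp "$0" "${user}@${host}:/tmp/" &&
      ssh "${user}@${host}" "/tmp/${0##*/} -d $device"
    return
  fi

  # No host given: we are already where the work should happen.
  echo "working on $(uname -n) for device $device"
}
```

Calling it with no -a takes the work branch directly, which is exactly what the remote copy does when it is re-invoked without the host option.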

Related

Can a shell script flag have optional arguments if parsing with getopts?

I have a script that I want to run in three ways:
Without a flag -- ./script.sh
With a flag but no parameter -- ./script.sh -u
With a flag that takes a parameter -- ./script.sh -u username
Is there a way to do this?
After reading some guides (examples here and here) it doesn't seem like this is a possibility, especially if I want to use getopts.
Can I do this with getopts or will I need to parse my options another way? My goal is to continue using getopts if I can.
The non-getopts example in BashFAQ #35 can cover the use case:
user_set=0  # 1 if any -u is given
user=       # set to a specific string for -u, if provided
while :; do
  case $1 in
    -u=*) user_set=1; user=${1#*=} ;;
    -u) user_set=1
        if [ -n "$2" ]; then
          user=$2
          shift
        fi ;;
    --) shift; break ;;
    *) break ;;
  esac
  shift
done
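Wrapped in a function (parse_u is a made-up name for illustration), the three call styles from the question behave like this:

```shell
# Same parsing logic as the loop above, echoing the result so each
# invocation style can be checked.
parse_u() {
  user_set=0 user=
  while :; do
    case $1 in
      -u=*) user_set=1; user=${1#*=} ;;
      -u) user_set=1
          # consume the next word as a value only if there is one
          if [ -n "$2" ]; then user=$2; shift; fi ;;
      --) shift; break ;;
      *) break ;;
    esac
    shift
  done
  echo "user_set=$user_set user=$user"
}
```

One caveat: `parse_u -u -x` would swallow `-x` as the value; a stricter version could additionally reject values matching `-*`.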

getopt erroneously caches arguments

I've created a script in my bash_aliases to make SSH'ing onto servers easier. However, I'm getting some odd behavior that I don't understand. The script below works as you'd expect, except when it's re-used.
If I use it like this for this first time in a shell, it works exactly as expected:
$>sdev -s myservername
ssh -i ~/.ssh/id_rsa currentuser@myservername.devdomain.com
However, if I run that a second time, without specifying -s|--server, it will use the server name from the last time I ran this, having seemingly cached it:
$>sdev
ssh -i ~/.ssh/id_rsa currentuser@myservername.devdomain.com
It should have exited with an error and output this message: /bin/bash: A server name (-s|--server) is required.
This happens with any of the arguments; that is, if I specify an argument, and then the next time I don't, this method will use the argument from the last time it was supplied.
Obviously, this is not the behavior I want. What's responsible in my script for doing that, and how do I fix it?
#!/bin/bash
sdev() {
    getopt --test > /dev/null
    if [[ $? -ne 4 ]]; then
        echo "`getopt --test` failed in this environment"
        exit 1
    fi

    OPTIONS=u:,k:,p,s:
    LONGOPTIONS=user:,key:,prod,server:

    # - temporarily store output to be able to check for errors
    # - e.g. use "--options" parameter by name to activate quoting/enhanced mode
    # - pass arguments only via -- "$@" to separate them correctly
    PARSED=$(getopt --options=$OPTIONS --longoptions=$LONGOPTIONS --name "$0" -- "$@")
    if [[ $? -ne 0 ]]; then
        # e.g. $? == 1
        # then getopt has complained about wrong arguments to stdout
        exit 2
    fi

    # read getopt's output this way to handle the quoting right:
    eval set -- "$PARSED"

    domain=devdomain
    user="$(whoami)"
    key=id_rsa

    # now enjoy the options in order and nicely split until we see --
    while true; do
        case "$1" in
            -u|--user)
                user="$2"
                shift 2
                ;;
            -k|--key)
                key="$2".pem
                shift 2
                ;;
            -p|--prod)
                domain=proddomain
                shift
                ;;
            -s|--server)
                server="$2"
                shift 2
                ;;
            --)
                shift
                break
                ;;
            *)
                echo "Programming error"
                exit 3
                ;;
        esac
    done

    if [ -z "$server" ]; then
        echo "$0: A server name (-s|--server) is required."
        kill -INT $$
    fi

    echo "ssh -i ~/.ssh/$key $user@$server.$domain.com"
    ssh -i ~/.ssh/$key $user@$server.$domain.com
}
server is a global shell variable, so it's shared between runs of the function (as long as they're run in the same shell). That is, when you run sdev -s myservername, it sets the variable server to "myservername". Later, when you run just sdev, it checks to see if $server is empty, finds it's not, and goes ahead and uses it.
Solution: use local variables! Actually, it'd be best to declare all of the variables you use in the function as local; that way, you don't run the risk of interfering with something else that's trying to use the same variable name. I'd also recommend avoiding all-caps variable names (like OPTIONS, LONGOPTIONS, and PARSED) -- there are a bunch of all-caps variables that have special meanings to the shell and/or other programs, and if you use one of those by mistake it can cause weird problems.
Anyway, here's the simple solution: add this near the beginning of the function:
local server=""
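The difference is easy to see in isolation. A minimal sketch, with illustrative function and variable names that are not part of the original script:

```shell
# Without `local`, a function assignment writes to the shell-global
# variable and survives the call; with `local`, it dies with the call.
leaky()  { server=$1; }          # assigns the global server
scoped() { local server=$1; }    # assigns a function-local server
```

After `leaky box1`, `$server` is "box1" in the calling shell and will be picked up by the next run; after `scoped box2` (with `server` empty beforehand), `$server` is still empty, which is exactly the behavior the `-z "$server"` check relies on.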

Getopts in sourced Bash function works interactively, but not in test script?

I have a Bash function library and one function is proving problematic for testing. prunner is a function that is meant to provide some of the functionality of GNU Parallel, and avoid the scoping issues of trying to use other Bash functions in Perl. It supports setting a command to run against the list of arguments with -c, and setting the number of background jobs to run concurrently with -t.
In testing it, I have ended up with the following scenario:
prunner -c "gzip -fk" *.out - works as expected in test.bash and interactively.
find . -maxdepth 1 -name "*.out" | prunner -c echo -t 6 - does not work, seemingly ignoring -c echo.
Testing was performed on Ubuntu 16.04 with Bash 4.3 and on Mac OS X with Bash 4.4.
What appears to be happening with the latter in test.bash is that getopts is refusing to process -c, and thus prunner will try to directly execute the argument without the prefix command it was given. The strange part is that I am able to observe it accepting the -t option, so getopts is at least partially working. Bash debugging with set -x has not been able to shed any light on why this is happening for me.
Here is the function in question, lightly modified to use echo instead of log and quit so that it can be used separately from the rest of my library:
prunner () {
  local PQUEUE=()
  while getopts ":c:t:" OPT ; do
    case ${OPT} in
      c) local PCMD="${OPTARG}" ;;
      t) local THREADS="${OPTARG}" ;;
      :) echo "ERROR: Option '-${OPTARG}' requires an argument." ;;
      *) echo "ERROR: Option '-${OPTARG}' is not defined." ;;
    esac
  done
  shift $((OPTIND-1))
  for ARG in "$@" ; do
    PQUEUE+=("$ARG")
  done
  if [ ! -t 0 ] ; then
    while read -r LINE ; do
      PQUEUE+=("$LINE")
    done
  fi
  local QCOUNT="${#PQUEUE[@]}"
  local INDEX=0
  echo "Starting parallel execution of $QCOUNT jobs with ${THREADS:-8} threads using command prefix '$PCMD'."
  until [ ${#PQUEUE[@]} == 0 ] ; do
    if [ "$(jobs -rp | wc -l)" -lt "${THREADS:-8}" ] ; then
      echo "Starting command in parallel ($((INDEX+1))/$QCOUNT): ${PCMD} ${PQUEUE[$INDEX]}"
      eval "${PCMD} ${PQUEUE[$INDEX]}" || true &
      unset PQUEUE[$INDEX]
      ((INDEX++)) || true
    fi
  done
  wait
  echo "Parallel execution finished for $QCOUNT jobs."
}
Can anyone please help me to determine why -c options are not working correctly for prunner when lines are piped to stdin?
My guess is that you are executing the two commands in the same shell. In that case, in the second invocation, OPTIND will have the value 3 (which is where it got to on the first invocation) and that is where getopts will start scanning.
If you use getopts to parse arguments to a function (as opposed to a script), declare local OPTIND=1 to avoid invocations from interfering with each other.
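The effect of that one declaration can be sketched with two copies of the same tiny parser; the names opts_stale and opts_fresh are made up for illustration:

```shell
# Only opts_fresh resets OPTIND, so only it is safe to call repeatedly
# in the same shell session.
opts_stale() {
  # On a second call in the same shell, getopts resumes at the old
  # global OPTIND and silently skips -c.
  while getopts "c:" o; do [ "$o" = c ] && cmd=$OPTARG; done
}
opts_fresh() {
  local OPTIND=1   # every call starts scanning from the first argument
  while getopts "c:" o; do [ "$o" = c ] && cmd=$OPTARG; done
}
```

This matches the symptom in the question: the first invocation parses fine, and later invocations appear to "ignore" options.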
Perhaps you are already doing this, but make sure to pass the top-level shell parameters to your function. The function will receive the parameters via the call, for example:
xyz () {
  echo "First arg: ${1}"
  echo "Second arg: ${2}"
}
xyz "This is" "very simple"
In your example, you should always call the function with the standard args so that they can be processed in the function via getopts:
prunner "$@"
Note that prunner will not modify the standard args outside of the function.

How to use an argument/parameter name as a variable in a bash script

I'm trying to write a script that allows connection to various servers, e.g.
#!/bin/bash
# list of servers
server1=10.10.10.10
server2=20.20.20.20
ssh ${$1}
And I'd like to run it like:
sh connect.sh server1
I can't figure out how to use the parameter's name as a variable. Arrays don't work on my Ubuntu either.
Use shell indirection like this:
x=5
y=x
echo ${!y}
5
For your script, following works:
#!/bin/bash
# list of servers
server1=10.10.10.10
server2=20.20.20.20
arg1="$1"
ssh ${!arg1}
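For completeness, bash 4.3+ also offers namerefs as an alternative to `${!var}`. A minimal sketch with dummy addresses (and a reminder that dereferencing unvalidated user input this way is risky):

```shell
# Two spellings of indirection over the same variable.
server1=10.10.10.10
target=server1

via_bang=${!target}       # ${!var} expands the variable named by $target

declare -n ref=$target    # nameref (bash 4.3+): ref now follows server1
via_ref=$ref
```

Before using either form on `$1`, it's wise to validate it against the known server names (e.g. with a case statement), so an arbitrary argument can't name an unrelated variable.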
Easiest way would be to switch on $1:
case "$1" in
  server1) ssh "$server1" ;;
  server2) ssh "$server2" ;;
  *) ssh "$server1" ;;  # when no parameter is passed, default to server1
esac
Try this:
#!/bin/bash
# list of servers
server1=10.10.10.10
server2=20.20.20.20
if [ "$1" == "server1" ]; then
  ssh $server1
elif [ "$1" == "server2" ]; then
  ssh $server2
fi

Error with Bash script and ssh

I have a bash script where I ssh to a remote host and then create a file depending on the operating system (case statement in bash). When I execute this code on OS X, I expect the value Darwin to be evaluated and the file eg2.txt to be created. However, for some reason the evaluation fails to choose Darwin and it selects * and then creates the file none.txt. Has anyone run into a similar issue? Can someone tell what is wrong?
#!/bin/bash
ssh -l user $1 "cd Desktop;
opname=`uname -s`;
echo \"first\" > first.txt
case \"$opname\" in
"Darwin") echo \"Darwin\" > eg2.txt ;;
"Linux") sed -i \"/$2/d\" choice_list.txt ;;
*) touch none.txt ;;
esac"
P.S. I am running this code primarily on a Mac.
The problem is that your $opname variable is being expanded (into the empty string) by the Bash instance that's running ssh (i.e., on the client-side), rather than being passed over SSH to be handled by the Bash instance on the server-side.
To fix this, you can either use single-quotes instead of double-quotes:
#!/bin/bash
ssh -l user $1 'cd Desktop;
opname=`uname -s`;
echo "first" > first.txt
case "$opname" in
Darwin) echo "Darwin" > eg2.txt ;;
Linux) sed -i "/$2/d" choice_list.txt ;;
*) touch none.txt ;;
esac'
or else you can escape each $ with a backslash:
#!/bin/bash
ssh -l user $1 "cd Desktop;
opname=\`uname -s\`;
echo \"first\" > first.txt
case \"\$opname\" in
"Darwin") echo \"Darwin\" > eg2.txt ;;
"Linux") sed -i \"/\$2/d\" choice_list.txt ;;
*) touch none.txt ;;
esac"
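A third option in the same spirit is to send the remote commands as a quoted here-document, so the local shell expands nothing at all. In this sketch a local `bash -s` stands in for `ssh -l user host`, and run_remote_style is an illustrative name:

```shell
# Everything between <<'EOF' and EOF is passed through verbatim because
# the delimiter is quoted; the receiving shell does all the expansion.
run_remote_style() {
  bash -s <<'EOF'
opname=$(uname -s)
case $opname in
  Darwin) echo darwin-branch ;;
  Linux)  echo linux-branch ;;
  *)      echo other-branch ;;
esac
EOF
}
```

With a real `ssh user@host` in place of `bash -s`, `uname -s` runs on the server, which is exactly what the question needed.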
