How to load array parameters in another shell file dynamically over an ssh connection - bash

I need to call an executable that is installed on an on-prem server over an ssh connection and pass it a dynamic set of parameters.
Based on my requirements, users should be able to add or remove parameters as they wish when working with the executable on the on-prem server.
I wrote a translator to pick up any new parameter added on the console, but now that I want to pass them via ssh, I am facing 2 problems:
What if I have a value that contains a space?
How do I load these values dynamically and use them as arguments in my shell script on the server?
Also note that I am sending some additional parameters that are not related to my executable's arguments, but I need them as well.
params=(
"$MASTER"
"$NAME"
"$QUEUE"
service.enabled=true
)
for var_name in "${!conf__@}";
do
key=${var_name#conf__};
key=${key//_/.};
value=${!var_name};
params+=( --conf "$key=$value" );
done
echo "${params[#]}"
ssh -o StrictHostKeyChecking=no myuser@server_ip "/bin/bash -s" < deploy_script.sh "${params[@]}"
My deploy_script.sh file will look something like this:
#!/bin/bash
set -e
AR_MASTER=${1}
AR_NAME=${2}
AR_QUEUE=${3}
AR_SER_EN=${4}
# How can I get the other dynamic parameters???
main() {
my-executable \
"--master "$AR_MASTER \
"--name "$AR_NAME \
"--queue "$AR_QUEUE \
"--conf service.enabled="$AR_SER_EN \
??? #how to add the additional configuration dynamically?
}
main "$#"
Would you mind helping me figure it out?
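A minimal sketch of one way to do this, assuming bash on both ends: %q-quote every element on the calling side so values containing spaces survive the remote shell's word splitting, pass them after bash -s --, and have the remote script shift past the fixed arguments and forward whatever is left verbatim (myuser, server_ip, and the four fixed arguments are taken from the question above):
# Caller side: printf %q escapes each element, embedded spaces included.
printf -v quoted ' %q' "${params[@]}"
ssh -o StrictHostKeyChecking=no myuser@server_ip \
    "/bin/bash -s --$quoted" < deploy_script.sh
And the matching deploy_script.sh:
#!/bin/bash
set -e
AR_MASTER=$1
AR_NAME=$2
AR_QUEUE=$3
AR_SER_EN=$4   # already a full key=value pair, e.g. service.enabled=true
shift 4        # everything left over is the dynamic --conf list
main() {
my-executable \
    --master "$AR_MASTER" \
    --name "$AR_NAME" \
    --queue "$AR_QUEUE" \
    --conf "$AR_SER_EN" \
    "$@"
}
main "$@"
The -- after -s matters: it tells bash that everything following it belongs to the script read from stdin rather than to bash itself.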

Related

How to set variables using terragrunt before_hook

I need to use some gcloud commands in order to create a Redis instance on GCP as terraform does not support some options that I need.
I'm trying this:
terraform {
# Before apply, run script.
before_hook "create_redis_script" {
commands = ["apply"]
execute = ["REDIS_REGION=${local.module_vars.redis_region}", "REDIS_PROJECT=${local.module_vars.redis_project}", "REDIS_VPC=${local.module_vars.redis_vpc}", "REDIS_PREFIX_LENGHT=${local.module_vars.redis_prefix_lenght}", "REDIS_RESERVED_RANGE_NAME=${local.module_vars.redis_reserved_range_name}", "REDIS_RANGE_DESCRIPTION=${local.module_vars.redis_range_description}", "REDIS_NAME=${local.module_vars.redis_name}", "REDIS_SIZE=${local.module_vars.redis_size}", "REDIS_ZONE=${local.module_vars.redis_zone}", "REDIS_ALT_ZONE=${local.module_vars.redis_alt_zone}", "REDIS_VERSION=${local.module_vars.redis_version}", "bash", "../../../scripts/create-redis-instance.sh"]
}
The script is like this:
echo "[+]Creating IP Allocation Automatically using <$REDIS_VPC-network\/$REDIS_PREFIX_LENGHT>"
gcloud compute addresses create $REDIS_RESERVED_RANGE_NAME \
--global \
--purpose=VPC_PEERING \
--prefix-length=$REDIS_PREFIX_LENGHT \
--description=$REDIS_RANGE_DESCRIPTION \
--network=$REDIS_VPC
The error I get is:
terragrunt apply
5b35d0bf15d0a0d61b303ed32556b85417e2317f
5b35d0bf15d0a0d61b303ed32556b85417e2317f
5b35d0bf15d0a0d61b303ed32556b85417e2317f
ERRO[0002] Hit multiple errors:
Hit multiple errors:
exec: "REDIS_REGION=us-east1": executable file not found in $PATH
ERRO[0002] Unable to determine underlying exit code, so Terragrunt will exit with error code 1
I encountered the same issue and resigned myself to passing the values as parameters instead of environment variables.
It involves modifying the script and is a far less clear declaration, but it works :|
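A minimal sketch of that workaround, assuming the same locals as in the question (only a few of the variables are shown): list the values after the script name in execute and read them positionally inside the script.
terraform {
  before_hook "create_redis_script" {
    commands = ["apply"]
    execute  = ["bash", "../../../scripts/create-redis-instance.sh",
                "${local.module_vars.redis_reserved_range_name}",
                "${local.module_vars.redis_prefix_lenght}",
                "${local.module_vars.redis_range_description}",
                "${local.module_vars.redis_vpc}"]
  }
}
And at the top of create-redis-instance.sh:
REDIS_RESERVED_RANGE_NAME=$1
REDIS_PREFIX_LENGHT=$2
REDIS_RANGE_DESCRIPTION=$3
REDIS_VPC=$4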

Passing a parameter's value to shell function prints only the name of the parameter

I need to pass a parameter to my shell function, which looks like this:
function deploy {
docker create \
--name=$1_temp \
-e test_postgres_database=$2 \
-e test_publicAddress="http://${3}:9696"\
# other irrelevant stuff
I am passing the following parameters:
deploy test_container test_name #1 test_database #2 ip_address #3
So when, I pass those 3 parameters, based on them a new container is created. However, the third parameter is something special. So there is another function, which gets the ip of the container.
function get_container_ip_address {
container_id=($(docker ps --format "{{.ID}} {{.Names}}" | grep $1))
echo $(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ${container_id[0]})
}
So, the execution of the deploy function actually looks like this:
ip_address=$(get_container_ip_address test_container)
deploy test_container test_database ip_address
Let's say the IP address of the container is 1.1.1.1, so the ip_address=1.1.1.1.
However, when I execute the script and create the container, its IP address is:
"http://ip_address:9696" and not "http://1.1.1.1:9696".
I also tried the following:
...
-e test_publicAddress="http://$3:9696"\
...
But I still got the same result. Is there a way I can get the value of the passed parameter? By the way, I am sure it contains the needed IP address: I use it elsewhere (not in a function) and printed it for testing. Thank you in advance!
So run it like this:
ip_address=$(get_container_ip_address test_container)
deploy test_container test_database $ip_address
When you call it without the $, the shell treats it as the literal string ip_address rather than expanding the variable.
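It is also worth quoting the expansion, so an argument that unexpectedly contains whitespace still arrives as a single parameter:
deploy test_container test_database "$ip_address"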

How can we remove a request parameter once (at startup) from a JMeter Test Plan?

I am using jmeter from command line to perform an automated suite of tests on some targets.
It looks like this :
for file in ./*.jmx; do
./jmeter \
-n -t ${file}/Test_perfs_qgis_SHORT.jmx \
-l ${TEST_DIR_PATH}/at_bench.log \
-e \
-o ${TEST_DIR_PATH}/report \
-J TEST_DIR_PATH="${TEST_DIR_PATH}" \
-J COMMON_PARAM="someValue" \
-J ANOTHER_COMMON_PARAM="anotherValue" \
-J SPECIFIC_PARAM="someValue Or emptyIfNotExpected"
done
Most of the targets share the same GET template, or at least tolerate unexpected parameters (which are then simply ignored).
But some targets fail when they receive an extra parameter.
So I added a PreProcessor to remove the parameter when its value is not provided.
if((vars.get("SPECIFIC_PARAM") == null)||(vars.get("SPECIFIC_PARAM")=="")){
sampler.getArguments().removeArgument("MAP");
}
And this works well. But since I have around 50000 calls, this will be triggered... a few times!
Considering this is for testing purposes, I fear it may have an impact on the results (though the impact should be roughly the same for every request).
Anyway, I am trying to find a way to remove it at startup: once, for all requests.
Does anyone have a tip on how to do it?
Considering what you are removing (an argument of the sampler), it cannot be removed elsewhere/globally. Maybe you could instead have 2 templates, one with and one without that parameter, and select between them with an If Controller based on the value of the variable:
If Controller with condition: "${SPECIFIC_PARAM}"==""
Sampler without MAP argument
If Controller with condition: "${SPECIFIC_PARAM}"!=""
Sampler with MAP argument
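One performance note on top of this: the If Controller evaluates its condition as JavaScript by default, which is comparatively slow over 50000 iterations. The JMeter manual recommends the __jexl3 (or __groovy) function instead, so the two conditions above could be written as:
${__jexl3("${SPECIFIC_PARAM}" == "")}
${__jexl3("${SPECIFIC_PARAM}" != "")}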

JMeter distributed testing and command line parameters

I have been using JMeter properties to specify test attributes like test duration, ramp-up period, etc. for the load test. I specify these in a shell script, which looks like this:
JMETER_PATH="/home/<user>/apache-jmeter-2.13/bin/jmeter.sh"
${JMETER_PATH} \
-Jjmeter.save.saveservice.output_format=csv \
-Jjmeter.save.saveservice.response_data.on_error=true \
-Jjmeter.save.saveservice.print_field_names=true \
-JCUSTOMERS_THREADS=1 \
-JGTI_THREADS=1 \
// Some more properties
Everything goes good here.
Now I have added distributed testing and extended the script with the JMeter server information, so the new script looks like this:
JMETER_PATH="/home/<user>/apache-jmeter-2.13/bin/jmeter.sh"
${JMETER_PATH} \
-Jjmeter.save.saveservice.output_format=csv \
-Jjmeter.save.saveservice.response_data.on_error=true \
-Jjmeter.save.saveservice.print_field_names=true \
-Jsample_variables=counter,accessToken \
-JCUSTOMERS_THREADS=1 \
-JGTI_THREADS=1 \
// Some more properties
-n \
-R 127.0.0.1:24001,127.0.0.1:24002,127.0.0.1:24003,127.0.0.1:24004,127.0.0.1:24005,127.0.0.1:24006,127.0.0.1:24007,127.0.0.1:24008,127.0.0.1:24009,12$
-Djava.rmi.server.hostname=127.0.0.1 \
The distributed test runs fine, but it does not take the properties specified in the script above into consideration; instead it uses the default values defined in the JMeter test plan.
Did I mess up any configuration?
Use -G instead of -J for properties to be sent to remote machines as well. -J is local only.
-D[prop_name]=[value] - defines a java system property value.
-J[prop name]=[value] - defines a local JMeter property.
-G[prop name]=[value] - defines a JMeter property to be sent to all remote servers.
-G[propertyfile] - defines a file containing JMeter properties to be sent to all remote servers.
From here
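Applied to the script above, only the properties the test plan actually reads need to move to -G; the saveservice settings shape the local result file and can stay as -J, while sample_variables is needed on both client and servers, so it appears twice. A sketch with the original values:
${JMETER_PATH} \
-Jjmeter.save.saveservice.output_format=csv \
-Jjmeter.save.saveservice.response_data.on_error=true \
-Jjmeter.save.saveservice.print_field_names=true \
-Jsample_variables=counter,accessToken \
-Gsample_variables=counter,accessToken \
-GCUSTOMERS_THREADS=1 \
-GGTI_THREADS=1 \
-n \
-R 127.0.0.1:24001,127.0.0.1:24002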
Replace -J with -G. For more details, see the JMeter documentation on distributed testing, in particular the section on Server Mode (1.4.5).

knife vsphere requests root password - is unattended execution possible?

Is there any way to run knife vsphere for unattended execution? I have a deploy shell script which I am using to help me:
cat deploy-production-20-vm.sh
#!/bin/bash
##############################################
# These are machine dependent variables (need to change)
##############################################
HOST_NAME=$1
IP_ADDRESS="$2/24"
CHEF_BOOTSTRAP_IP_ADDRESS="$2"
RUNLIST=\"$3\"
CHEF_HOST=$HOSTNAME.my.lan
##############################################
# These are pseudo-environment independent variables (could change)
##############################################
DATASTORE="dcesxds04"
##############################################
# These are environment dependent variables (should not change per env)
##############################################
TEMPLATE="\"CentOS\""
NETWORK="\"VM Network\""
CLUSTER="ProdCluster01" #knife-vsphere calls this a resource pool
GATEWAY="10.7.20.1"
DNS="\"10.7.20.11,10.8.20.11,10.6.20.11\""
##############################################
# the magic
##############################################
VM_CLONE_CMD="knife vsphere vm clone $HOST_NAME \
--template $TEMPLATE \
--cips $IP_ADDRESS \
--vsdc MarkleyDC \
--datastore $DATASTORE \
--cvlan $NETWORK \
--resource-pool $CLUSTER \
--cgw $GATEWAY \
--cdnsips $DNS \
--start true \
--bootstrap true \
--fqdn $CHEF_BOOTSTRAP_IP_ADDRESS \
--chost $HOST_NAME \
--cdomain my.lan \
--run-list=$RUNLIST"
echo $VM_CLONE_CMD
eval $VM_CLONE_CMD
Which echoes (a single line, wrapped here for readability):
knife vsphere vm clone dcbsmtest --template "CentOS" --cips 10.7.20.84/24
--vsdc MarkleyDC --datastore dcesxds04 --cvlan "VM Network"
--resource-pool ProdCluster01 --cgw 10.7.20.1
--cdnsips "10.7.20.11,10.8.20.11,10.6.20.11" --start true
--bootstrap true --fqdn 10.7.20.84 --chost dcbsmtest --cdomain my.lan
--run-list="role[my-env-prod-server]"
When it runs it outputs:
Cloning template CentOS Template to new VM dcbsmtest
Finished creating virtual machine dcbsmtest
Powered on virtual machine dcbsmtest
Waiting for sshd...done
Doing old-style registration with the validation key at /home/me/chef-repo/.chef/our-validator.pem...
Delete your validation key in order to use your user credentials instead
Connecting to 10.7.20.84
root@10.7.20.84's password:
If I step away from my desk while it prompts for the password, it sometimes times out, the connection is lost, and chef doesn't bootstrap. I would also like to automate all of this so it can scale elastically with system needs, which won't work with attended execution.
The idea I am going to run with, unless a better solution is provided, is to have a default password in the template and pass it on the command line to knife, then have chef change the password once the build is complete, minimizing the exposure of a hard-coded password in the bash script controlling knife.
Update: I wanted to add that this is working like a charm. Ideally we would have changed the CentOS template we were deploying, but that wasn't possible here, so this is a fine alternative (we changed the root password after deploy anyhow).
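A minimal sketch of that approach, assuming your knife-vsphere release exposes the --ssh-user and --ssh-password bootstrap options (check knife vsphere vm clone --help, as option names have varied between versions; TEMPLATE_ROOT_PW is a hypothetical variable holding the template's default password):
# Appended to the knife options in the script above; Chef rotates the
# password once the node is converged.
--ssh-user root \
--ssh-password $TEMPLATE_ROOT_PW \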
