How can we remove a request parameter once (at startup) from a JMeter Test Plan?

I am using JMeter from the command line to run an automated suite of tests against some targets.
It looks like this:
for file in ./*.jmx; do
./jmeter \
-n -t "${file}" \
-l ${TEST_DIR_PATH}/at_bench.log \
-e \
-o ${TEST_DIR_PATH}/report \
-J TEST_DIR_PATH="${TEST_DIR_PATH}" \
-J COMMON_PARAM="someValue" \
-J ANOTHER_COMMON_PARAM="anotherValue" \
-J SPECIFIC_PARAM="someValue Or emptyIfNotExpected"
done
Most of the targets share the same GET template, or at least tolerate unexpected parameters (which are then simply ignored).
But some targets fail when they receive an extra parameter.
So I added a PreProcessor that removes the parameter when its value is not provided:
// Remove the MAP argument when SPECIFIC_PARAM is unset or empty
String specific = vars.get("SPECIFIC_PARAM")
if (specific == null || specific.isEmpty()) {
    sampler.getArguments().removeArgument("MAP")
}
And this works well. But since I make around 50,000 calls, this will be triggered... quite a few times!
Considering this is for testing purposes, I fear it may have an impact on the results (though the overhead is probably about the same for every request).
Anyway, I am trying to find a way to remove the parameter at startup: once, for all requests.
Does anyone have a tip on how to do it?

Considering what you are removing (an argument of the sampler), it cannot be removed elsewhere/globally. Maybe you could instead have two samplers, one with and one without that parameter, and select the right one with If Controllers based on the value of the variable:
If Controller with condition: "${SPECIFIC_PARAM}" == ""
    Sampler without the MAP argument
If Controller with condition: "${SPECIFIC_PARAM}" != ""
    Sampler with the MAP argument
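One caveat with this approach: by default the If Controller interprets its condition as JavaScript on every iteration, which adds its own overhead at 50,000 calls. The usual JMeter advice is to tick "Interpret Condition as Variable Expression?" and wrap the check in the __jexl3 function, e.g.:
${__jexl3("${SPECIFIC_PARAM}" == "")}
and the negated form for the second controller.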

Related

how to load array parameter in another shell file dynamically over ssh connection

I need to call my executable, which sits on an on-prem server, over an ssh connection and pass it dynamic parameters.
Based on my requirements, users should be able to add or remove parameters as they wish when working with the executable on the on-prem server.
I wrote a translator to identify any new parameter added to the console, but now that I want to pass them via ssh, I am facing two problems:
1. What if a value contains spaces?
2. How do I load these values dynamically and use them as arguments in my shell script on the server?
Also note that I am sending some additional parameters that are not related to my executable's arguments, but I need them as well.
params=(
"$MASTER"
"$NAME"
"$QUEUE"
service.enabled=true
)
for var_name in "${!conf__@}";
do
key=${var_name#conf__};
key=${key//_/.};
value=${!var_name};
params+=( --conf "$key=$value" );
done
echo "${params[#]}"
ssh -o StrictHostKeyChecking=no myuser#server_ip "/bin/bash -s" < deploy_script.sh "${params[#]}"
My deploy_script.sh file will look something like the file below.
#!/bin/bash
set -e
AR_MASTER=${1}
AR_NAME=${2}
AR_QUEUE=${3}
AR_SER_EN=${4}
# How can I get the other dynamic parameters???
main() {
  my-executable \
    --master "$AR_MASTER" \
    --name "$AR_NAME" \
    --queue "$AR_QUEUE" \
    --conf "service.enabled=$AR_SER_EN" \
    ??? # how to add the additional configuration dynamically?
}
main "$#"
Would you mind helping me figure it out?
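One way to tackle both problems is to shell-quote every parameter before the ssh hop, and have the remote script consume the fixed arguments and forward the rest. A minimal sketch, reusing the names from the question (untested against your setup):
# local side: printf '%q' quotes each element so values with spaces survive ssh
quoted=$(printf '%q ' "${params[@]}")
ssh -o StrictHostKeyChecking=no myuser@server_ip "/bin/bash -s -- $quoted" < deploy_script.sh
# remote side (deploy_script.sh): take the fixed arguments, then shift them away
AR_MASTER=$1; AR_NAME=$2; AR_QUEUE=$3; AR_SER_EN=$4
shift 4    # "$@" now holds only the dynamic --conf key=value pairs
my-executable \
  --master "$AR_MASTER" \
  --name "$AR_NAME" \
  --queue "$AR_QUEUE" \
  --conf "service.enabled=$AR_SER_EN" \
  "$@"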

Problem with snakemake submitting jobs with multiple wildcards on SGE

I used snakemake on an LSF cluster before and everything worked just fine. However, I recently migrated to an SGE cluster and I am getting a very strange error when I try to run a job with more than one wildcard.
When I try to submit a job based on this rule
rule download_reads:
    threads: 1
    output: "data/{sp}/raw_reads/{accesion}_1.fastq.gz"
    shell: "scripts/download_reads.sh {wildcards.sp} {wildcards.accesion} data/{wildcards.sp}/raw_reads/{wildcards.accesion}"
I get the following error (snakemake_clust.sh details below):
./snakemake_clust.sh data/Ecol1/raw_reads/SRA123456_1.fastq.gz
Building DAG of jobs...
Using shell: /bin/bash
Provided cluster nodes: 10
Job counts:
    count   jobs
    1       download_reads
    1
[Thu Jul 30 12:08:57 2020]
rule download_reads:
output: data/Ecol1/raw_reads/SRA123456_1.fastq.gz
jobid: 0
wildcards: sp=Ecol1, accesion=SRA123456
scripts/download_reads.sh Ecol1 SRA123456 data/Ecol1/raw_reads/SRA123456
Unable to run job: ERROR! two files are specified for the same host
ERROR! two files are specified for the same host
Exiting.
Error submitting jobscript (exit code 1):
Shutting down, this might take some time.
When I replace the sp wildcard with a constant, it works as expected:
rule download_reads:
    threads: 1
    output: "data/Ecol1/raw_reads/{accesion}_1.fastq.gz"
    shell: "scripts/download_reads.sh Ecol1 {wildcards.accesion} data/Ecol1/raw_reads/{wildcards.accesion}"
I.e. I get
Submitted job 1 with external jobid 'Your job 50731 ("download_reads") has been submitted'.
I wonder why I have this problem; I am sure I used exactly the same rule on the LSF-based cluster before without any problem.
Some details
The snakemake submitting script looks like this
#!/usr/bin/env bash
mkdir -p logs
snakemake $@ -p --jobs 10 --latency-wait 120 --cluster "qsub \
-N {rule} \
-pe smp64 \
{threads} \
-cwd \
-b y \
-o \"logs/{rule}.{wildcards}.out\" \
-e \"logs/{rule}.{wildcards}.err\""
-b y makes the command execute as-is; -cwd changes the working directory on the compute node to the directory from which the job was submitted. The other flags/specifications are clear, I hope.
Also, I am aware of the --drmaa flag, but I think our cluster is not well configured for that; --cluster has so far been the more robust solution.
-- edit 1 --
When I execute exactly the same snakefile locally (on the frontend, without the --cluster flag), the script gets executed as expected. It seems to be a problem with the interaction between snakemake and the scheduler.
-o \"logs/{rule}.{wildcards}.out\" \
-e \"logs/{rule}.{wildcards}.err\""
This is a random guess... Multiple wildcards are concatenated with a space before being substituted into logs/{rule}.{wildcards}.err. So even though you use double quotes, SGE treats the resulting string as two files and throws the error. What if you use single quotes instead? Like:
-o 'logs/{rule}.{wildcards}.out' \
-e 'logs/{rule}.{wildcards}.err'
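Applied to the submitting script from the question, that would look something like this (an untested sketch):
#!/usr/bin/env bash
mkdir -p logs
snakemake $@ -p --jobs 10 --latency-wait 120 --cluster "qsub \
    -N {rule} \
    -pe smp64 {threads} \
    -cwd \
    -b y \
    -o 'logs/{rule}.{wildcards}.out' \
    -e 'logs/{rule}.{wildcards}.err'"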
Alternatively, you could concatenate the wildcards in the rule and use the result on the command line. E.g.:
rule one:
    params:
        wc=lambda wc: '_'.join(wc)
    output: ...
Then use:
-o 'logs/{rule}.{params.wc}.out' \
-e 'logs/{rule}.{params.wc}.err'
(This second solution, if it works, kind of sucks though)

Problem with passing parameters in a shell function

I have the following shell script. There is a deploy function that is used later in the script to deploy containers.
function deploy {
    if [[ $4 -eq QA ]]; then
        echo Building a QA Test container...
        docker create \
            --name=$1_temp \
            -e DATABASE=$3 \
            -e SPECIAL_ENV_VARIABLE \
            -p $2:9696 \
            ... # skipping other project specific settings
    else
        docker create \
            --name=$1_temp \
            -e DATABASE=$3 \
            -p $2:9696 \
            ... # skipping some project specific stuff
    fi
}
During the deployment, I have to run some tests on the application (which is in containers). I use different containers for that; however, I need to pass one additional parameter to the deploy function for my QA test container, because I need a different setting in the docker create. Hence the if statement at the beginning, which checks whether the 4th argument equals 'QA': if it does, it creates a specific container with special env variables; otherwise, with just 3 arguments, it creates a 'normal' one. I was able to run the code with two separate deploy functions, but I want to make my code more readable. Anyway, this is how it should go:
Step 1: Normal tests:
deploy container_test 9696 test_database # 3 parameters
run tests... (this is not relevant to the question)
Step 2: QA testing:
deploy container_qa_test 9696 test_database QA # 4 parameters, so I can create
                                               # a special container
run tests... (again, not relevant to the question)
Step 3: If they are successful, deploy a production-ready container:
deploy production_container 9696 production_database # 3 parameters again
However, here is what happens according to the log:
Step 1: test_container is created, but via the upper if branch, even though there is no 4th parameter equal to QA.
Step 2: this runs normally.
Step 3: the production container is built as a QA container.
It never reaches the else part, even though the condition is not satisfied. Can anyone give me some tips?
Just change [[ $4 -eq QA ]] to:
if [[ "$4" == "QA" ]]; then
-eq is used to compare numbers.
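To spell out why the original condition always took the if branch, here is a minimal demonstration (argument values borrowed from the question):
# Inside [[ ]], -eq forces arithmetic evaluation: "QA" is read as a variable
# name and, being unset, evaluates to 0; an empty or missing $4 also
# evaluates to 0. The test is effectively 0 -eq 0, which is always true.
set -- production_container 9696 production_database   # only 3 arguments
[[ $4 -eq QA ]] && echo "still matches"                # prints "still matches"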

JMeter distributed testing and command line parameters

I have been using JMeter properties to specify test attributes like test duration, ramp-up period, etc. for load tests. I specify these properties in a shell script, which looks like this:
JMETER_PATH="/home/<user>/apache-jmeter-2.13/bin/jmeter.sh"
${JMETER_PATH} \
-Jjmeter.save.saveservice.output_format=csv \
-Jjmeter.save.saveservice.response_data.on_error=true \
-Jjmeter.save.saveservice.print_field_names=true \
-JCUSTOMERS_THREADS=1 \
-JGTI_THREADS=1 \
# Some more properties
Everything works fine here.
Now I added distributed testing and modified the above script with the JMeter server info. The new script looks like this:
JMETER_PATH="/home/<user>/apache-jmeter-2.13/bin/jmeter.sh"
${JMETER_PATH} \
-Jjmeter.save.saveservice.output_format=csv \
-Jjmeter.save.saveservice.response_data.on_error=true \
-Jjmeter.save.saveservice.print_field_names=true \
-Jsample_variables=counter,accessToken \
-JCUSTOMERS_THREADS=1 \
-JGTI_THREADS=1 \
# Some more properties
-n \
-R 127.0.0.1:24001,127.0.0.1:24002,127.0.0.1:24003,127.0.0.1:24004,127.0.0.1:24005,127.0.0.1:24006,127.0.0.1:24007,127.0.0.1:24008,127.0.0.1:24009,12$
-Djava.rmi.server.hostname=127.0.0.1 \
The distributed test runs well, but it does not take the parameters specified in the script above into consideration; instead it takes the default values from the JMeter test plan.
Did I mess up any configuration?
Use -G instead of -J for properties that should be sent to the remote machines as well; -J is local only.
-D[prop_name]=[value] - defines a Java system property value.
-J[prop_name]=[value] - defines a local JMeter property.
-G[prop_name]=[value] - defines a JMeter property to be sent to all remote servers.
-G[propertyfile] - defines a file containing JMeter properties to be sent to all remote servers.
From here
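Applied to the script above, the test-plan properties move to -G while the save-service settings stay local. A sketch (whether sample_variables must also reach the servers depends on your setup, so sending it with -G here is an assumption):
${JMETER_PATH} \
-Jjmeter.save.saveservice.output_format=csv \
-Jjmeter.save.saveservice.response_data.on_error=true \
-Jjmeter.save.saveservice.print_field_names=true \
-Gsample_variables=counter,accessToken \
-GCUSTOMERS_THREADS=1 \
-GGTI_THREADS=1 \
-n \
-R 127.0.0.1:24001,127.0.0.1:24002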
Replace -J with -G. For more details, see the JMeter documentation on remote testing and search for Server Mode (1.4.5).

Jmeter - url from command line

I have been trying to pass the URL for a web page from the command line to do performance testing with JMeter.
I set up User Defined Variables like this:
NumberOfUsers      ${__P(NumberOfUsers,2)}
HowManyTimesToRun  ${__P(HowManyTimesToRun,2)}
RampUpTime         ${__P(RampUpTime,10)}
Host               ${__P(Host)}
I tried using the JMeter command like this:
./jmeter.sh -n -t Performance.jmx -l old88.jtl -JNumberOfUsers=5 -JRampUpTime=10JHowManyTimesToRun=2 -JHost=www.google.com
It seems to take all the values correctly except the host name. Is there a way to pass the URL from the command line? I use this property in HTTP Request Defaults.
I do not see any issues here, except that a space ' ' and a '-' are missing before the 'HowManyTimesToRun' property. Maybe it's a typo!
./jmeter.sh -n -t Performance.jmx -l old88.jtl -JNumberOfUsers=5 -JRampUpTime=10 -JHowManyTimesToRun=2 -JHost=www.google.com
