How to run multiple jmx scripts together in JMeter [closed] - jmeter

Currently we are using Performance Center for load testing; eventually we will move to JMeter.
In Performance Center we ran 200 scripts together.
In the same way, how do we run multiple jmx scripts together in JMeter?

You can use the JMeter Ant Task or the JMeter Maven Plugin to kick off test execution. Both tools are capable of executing tasks in parallel.
If needed, you can merge the execution result files with the MergeResults plugin.
For more options on how a JMeter test can be started, refer to the 5 Ways To Launch a JMeter Test without Using the JMeter GUI guide.

Using non-GUI mode, each JMeter invocation runs a single test plan, passed with the -t option, like:
jmeter.bat or jmeter.sh -n -t script1.jmx
To run several scripts together, create multiple sessions using a wrapper script in shell or batch programming which will run those scripts in parallel (see the sketch below),
like:
jmeter.bat or jmeter.sh -n -t script1.jmx &
jmeter.bat or jmeter.sh -n -t script2.jmx &
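As a minimal sketch of such a wrapper (assuming a Linux/macOS machine with the jmeter launcher on the PATH; the test plan and result file names are placeholders):
#!/bin/bash
# Hypothetical wrapper: run every .jmx file in the current directory in parallel.
# Assumes the jmeter launcher script is on the PATH; adjust names and paths as needed.
for plan in *.jmx; do
    jmeter -n -t "$plan" -l "${plan%.jmx}.jtl" -j "${plan%.jmx}.log" &
done
wait    # block until every background JMeter process has finished
echo "All test plans have finished"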

The easy way is to start all the scripts from the command line at the same time and have each script configured with the number of users, etc. I use Windows, and create a batch file containing multiple lines like:
start /REALTIME java -jar ApacheJMeter.jar -n -t test_script1.jmx -l results1.jtl -j log1.log -Dthreads=40 -Dduration=1800
However, for the scale of your setup, I would recommend you look into a better method of controlling your testing through automation.
I have set up a Jenkins server, which is able to run JMeter scripts using maven, or shell scripts, and then aggregate results, create graphs, etc. It is able to trigger jobs in parallel, sequence, randomly, or triggered by other events.
I would also look at setting up your scripts to use properties for the number of threads, loops and duration, so you can control them easily from Jenkins without modifying the scripts. Create a User Defined Variables config element, and copy property values to variables, setting defaults, i.e.:
THREADS    ${__P(threads,25)}
This means ${THREADS} can be used in the Thread Group, and will default to 25. If you assign a value on the command line it will be used instead of the default. You can define it on the JVM command line when you start JMeter, like this:
-Dthreads=40
In your script, the value of ${THREADS} will now be 40.

Related

Jenkins run a failing test n times

I'm trying to rerun a test n times on failure in Jenkins. I've seen Naginator, but my team would prefer not to add additional plugins.
(Note: we use EnvInject to inject environmental variables into the build process)
The next idea was to keep a variable with the number of times to retry, and decrement it on each new build. There was a Stack Overflow link (that I'm struggling to find at the moment) suggesting a Groovy script that looks like this:
def map = [:]
int newRetries = RETRIES.toInteger() - 1
map.put("RETRIES", newRetries)
return map
However, Groovy scripts in the "prepare an environment to run" section need admin privileges (which again my team would prefer to avoid).
The next idea was to use a property file and just do something along the lines of echo "RETRIES=$((RETRIES-1))" > env.properties and add an Inject environment variables step that reads in env.properties.
The problem is that within our Jenkins bash script echo "RETRIES=$((RETRIES-1))" prints "RETRIES=$((RETRIES-1))"
In a local terminal I can run
RETRIES=5
echo "RETRIES=$((RETRIES-1))"
> RETRIES=4
But in Jenkins, RETRIES-1 doesn't get evaluated. Do any of you have an idea why?
So I found 3 mistakes that I was making.
Jenkins pipelines execute a single step in parallel. This means you can't write to and then read from a file in different portions of the same step; in practice you end up reading the un-updated file (from this Stack Overflow question: Override environment variable created locally in Jenkins).
The default shell on Jenkins is /usr/bin/sh, which isn't necessarily bash. I explicitly ran my script with bash -c "echo $Var" (the real issue here was that the remote machines were Windows, oops).
My pipeline was failing before reading in the updated variable value from the file. I've moved the reading step earlier in the pipeline.
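For reference, a minimal bash sketch of the property-file approach described above, assuming the EnvInject step later reads env.properties back into the build and RETRIES already exists in the environment (the default value and file names are assumptions):
#!/bin/bash
# Hypothetical sketch: decrement RETRIES and write it to a property file
# that the "Inject environment variables" step can read back in.
# Run it explicitly with bash (e.g. bash -c '...') on agents whose default shell differs.
RETRIES="${RETRIES:-5}"                       # fall back to 5 if the variable is unset
echo "RETRIES=$((RETRIES - 1))" > env.properties
cat env.properties                            # e.g. prints RETRIES=4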

JMeter - Running preprocessor when called from command line?

I'm very new to JMeter.
My team has existing JMeter Test Plans that we generally run through the GUI. I am experimenting with running it from the command line.
In our test plans, we have a bunch of variables declared at the Test Plan level. These variables contain information for setting up different environments (e.g. test, prod, dev). Directly under the Test Plan is a JSR223 PreProcessor that takes the variable indicating which environment you're trying to run in and sets the hostname, port, etc. to the values for that environment.
When I run the test using the GUI, it works perfectly under the selected environment. However, when I run it with the command line using the following command:
jmeter -n -t testName.jmx -l Results.csv
it tries to run under the default environment and doesn't change to the environment I want. I tried adding -Jenv=dev, but that didn't seem to make any difference.
Do preprocessors not run when called from the command line? Could there be something else that I'm missing? Given my inexperience here, I'm not even really sure how I can tell what the problem is.
Thanks so much!
It is difficult to say what the exact problem in your JMeter script is, but from the problem statement it sounds like you need the correct way of passing a variable from the command line and processing it in the script.
An example of passing a variable from the command line and handling it in JMeter is shown below:
Declare a variable whose value is read from a JMeter property. You can do this in several places; here a User Defined Variables element is used, and the variable env is declared as ${__P(env)}.
Use a JSR223 element to process that variable. Here it just prints the value passed from the command line into the JMeter log. The value can now be reused via the ${env} variable (note it is a variable and not a property, because we declared it that way in User Defined Variables).
String valuePassed = vars.get("env");
log.info("Parameter passed from command line: " + valuePassed);
Run from the command line using the following command:
jmeter -n -t <>.jmx -Jenv=Prod -j sample.log
The result is then shown in the log file (sample.log).
PreProcessors are executed only in conjunction with Samplers.
If you don't have any Sampler in your Thread Group, none of the PreProcessors will be executed.
Also be aware that PreProcessors obey JMeter Scoping Rules to wit:
if you have a PreProcessor added as a child of the Sampler - the PreProcessor will be executed before the given Sampler
if you have 2 Samplers and a PreProcessor at the same level - the PreProcessor will be executed before each Sampler

How to run parallel commands to upload directory in gcp in shell script

How can I run parallel commands to upload a directory to GCP in a shell script?
The fastest way to run parallel commands in PowerShell is using Jobs.
A quick search found this blog post:
https://4sysops.com/archives/powershell-background-job-basics/
As for the commands you want to run in parallel, I didn't get enough information from the question you asked. If you want to elaborate I might be able to try and help.
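If the uploads are done with the Cloud SDK from a shell script rather than PowerShell, a minimal bash sketch could run the copies in parallel. This assumes gsutil is installed, and the bucket and directory names are placeholders:
#!/bin/bash
# Hypothetical sketch: upload several directories to a GCS bucket in parallel.
# gsutil's -m flag already parallelises a single copy; the & / wait pattern
# additionally runs the separate uploads concurrently.
for dir in reports logs images; do
    gsutil -m cp -r "$dir" "gs://my-bucket/$dir" &
done
wait    # wait for every background upload to complete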

Hiding user input into a nested bash script

Bit of an odd one I know.
We have an install script provided to us by the product developers. We implement this product across multiple servers, but the default install script they provide is only designed for a single server. As such, we have created a shell script which SSHes across all the servers and runs the install script; the issue is that the inputs into the default script are hidden, but when it is run through our script they are no longer hidden.
Here's the gist of it:
sshCmdString="./INSTALLSCRIPT.sh CONFIG-PARAMETERS"
ssh -T "$sshuser@$HOST" "$sshCmdString"
I'm wondering how I would go about altering the ssh command so that the input is hidden again?
I know read -s hides input within a bash script, but I've found that this hides all the output from the script if I put it before/in the command that runs the script on the other servers.
Is there another option out there that will hide inputs but display outputs/logging from the script?
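As an illustration only (not necessarily the fix for the problem above), here is a bash sketch of the read -s idea: the secret is read with a hidden prompt on the local machine and fed to the remote script's stdin, while the remote script's output still reaches the terminal. The prompt text and the assumption that INSTALLSCRIPT.sh reads the value from stdin are hypothetical:
#!/bin/bash
# Hypothetical sketch: read the sensitive value with a hidden prompt locally,
# then feed it to the remote script's stdin; remote output stays visible.
read -s -p "Enter install password: " INSTALL_PW
echo    # move to a new line after the hidden prompt
ssh -T "$sshuser@$HOST" "./INSTALLSCRIPT.sh CONFIG-PARAMETERS" <<< "$INSTALL_PW"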

Get execution time information for large number of bash scripts

Given a project that consists of a large number of bash scripts launched periodically from crontab, how can one track the execution time of each script?
There is a straightforward approach: edit each of those files to log date at the start and end.
But what I really want is some kind of daemon that could track execution times and submit the results somewhere several times a day.
So the question is:
Is it possible to gather information about execution time of 200 bash scripts without editing each of them?
The time command is considered a fallback solution, if nothing better can be found.
Depending on your system's cron implementation you may define the log level of the cron daemon. With Ubuntu's default vixie-cron, an appropriate log level will log the start and end of each job execution, which can then be analyzed.
On current LTS Ubuntu this works by defining the log level in /etc/init/cron,
appending the -L 3 option to the exec line so that it looks like:
exec cron -L 3
You could change your cron to run your scripts under time?
time scriptname
And pipe the output to your logs.
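A minimal sketch of that idea, assuming GNU time is available at /usr/bin/time; the script and log paths are placeholders, and the same command line can be used directly in a crontab entry:
#!/bin/bash
# Hypothetical wrapper: record how long an existing script takes without editing it.
# GNU time's -a appends, -o selects the log file, -f sets the output format
# (%C = command line, %e = elapsed wall-clock seconds).
/usr/bin/time -a -o /var/log/cron-times.log -f '%C: %e seconds elapsed' /opt/scripts/nightly-job.sh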
