Taurus YAML Runtime Property Change for JMeter

I am trying to invoke a JMeter script with user-defined properties via a Taurus YAML file, which I am able to change and execute with the configuration below.
However, if the test has already started and I need to increase the users on any thread group without stopping the test, how can I achieve that? Let's say the test was started with thread1 set to 30 and now I need to change it to 50 dynamically at runtime. I did not find a way myself. Please advise.
execution:
- #concurrency: ${__P(my_conc,3)} # use `my_conc` prop or default=3 if property isn't found
  ramp-up: 1
  hold-for: ${__P(my_hold,1)}
  scenario: simple

modules:
  jmeter:
    properties:
      my_hold: 60

scenarios:
  simple:
    variables:
      # user-defined variables
      Sampler1: TestSampler1
      Sampler2: TestSampler2
    script: SampleYAMLJMeter.jmx
    properties:
      # thread-level properties
      thread1: 30
      thread2: 45

Add the following global JMeter properties:
modules:
  jmeter:
    properties:
      beanshell.server.port: 9000
      beanshell.server.file: ../extras/startup.bsh
Create the setconc.bsh file under the .bzt/jmeter-taurus/x.x.x/lib folder and put the following code into it:
org.apache.jmeter.util.JMeterUtils.setProperty("my_conc", args[0]);
That's it. Whenever you need to change the concurrency of your test, just execute the following command from the .bzt/jmeter-taurus/x.x.x/lib folder:
java -jar bshclient.jar localhost 9000 setconc.bsh 1234
Replace 1234 with the actual concurrency you want to achieve.
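The same mechanism works for the thread1 property from the question. A slightly more general script (setprop.bsh is a hypothetical name, assuming bshclient.jar passes extra arguments through in the args array, as it does for args[0] above) could take both the property name and the value:

org.apache.jmeter.util.JMeterUtils.setProperty(args[0], args[1]);

and be invoked as:

java -jar bshclient.jar localhost 9000 setprop.bsh thread1 50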
More information:
Beanshell server
How to Change JMeter's Load During Runtime

Related

Jmeter || Error generating the report: java.lang.NullPointerException while running test from CMD

I am trying to execute a test from CMD but am getting the following error:
Command : jmeter -n -t D:\Users\load.test\Desktop\Performance\apache-jmeter-5.5\bin\UserServices.jmx -l D:\Users\load.test\Desktop\Performance\apache-jmeter-5.5\PerformanceData\MICR_Project\MICR_TestResults\DebugOOM\DebugOOM_1.csv -e -o D:\Users\load.test\Desktop\Performance\apache-jmeter-5.5\PerformanceData\MICR_Project\MICR_TestResults\DebugOOM\DebugOOM_1 -JURL=localhost -JPort=7058 -JUser=5 -JRampUp=1 -JDuration=900 -JRampUpSteps=1 -JOutputFileName=OutputOOM.jtl -JErrorFileName=ErrorOOM.jtl -JFlow1=100
What could be the possible reasons for this error, as it's not very informative?
The NullPointerException is related to generating the HTML Reporting Dashboard; the dashboard generation fails because your test failed to execute at all - no Samplers were run.
The reason why no Samplers were run can be found in the jmeter.log file; the most "popular" reasons are:
The number of threads in the Thread Group is 0
The test uses a CSV Data Set Config for parameterization and the CSV file is not present
The test uses a JMeter Plugin and the plugin is not installed
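Since the paths in the command indicate Windows, a quick way to surface the underlying problem is to search jmeter.log for errors (a sketch using the stock findstr tool; by default the log is written to the directory JMeter was started from):

findstr /i "ERROR FATAL" jmeter.log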

Parallel and distributed execution of multiple scenarios in taurus

I have the following requirements for executing a performance test via Taurus.
Requirements:
1. A single .jmx on multiple (distributed) JMeters
2. For every JMeter, a unique IP address to be passed at run time
3. For every JMeter, a set of unique .csv paths has to be provided in the .yml (as data sources)
4. All JMeters should run in parallel, and the report should combine all of them
I tried the following but was unable to achieve this. Let me know, or share a sample .yml, if anyone has done this kind of scenario.
execution:
- scenario:
    # scenario1:
    script: varTest.jmx
  distributed:
  - localhost:1099
  variables:
    host: "10"
- scenario:
    # scenario2:
    script: varTest.jmx
  distributed:
  - localhost:2010
  variables:
    host: "20"
In this, the hosts need to be overridden dynamically with the -o option.
It doesn't contain the CSV data source details; please share how to create a .yml for such a requirement.
Thanks.
You could try something like:
---
execution:
- distributed:
  - localhost:1099
  scenario:
    script: varTest.jmx
    variables:
      host: "10"
- distributed:
  - localhost:2010
  scenario:
    script: varTest.jmx
    variables:
      host: "20"
References:
Run JMeter in Distributed Mode
Running Taurus in Command Line
Taurus - Working with Multiple JMeter Tests

How do I mark the concourse ci build failed if the tests failed?

I am running some automated tests using a Concourse CI pipeline. My requirement is to mark the build as failed if any of the tests fail and to email the results. Is there a way to do this in Concourse? The email feature is working fine, but the build passes even with test failures.
Under the assumption that the exit code is correct, you will need to use the on_failure step in Concourse and add it to your job. It will look something like this:
jobs:
- name: myBuild
  plan:
  - get: your-repo
    passed: []
    trigger: false
  - task: run-tests
    file: runMyTestsTask.yml
    on_failure:
      put: send-an-email
      params:
        subject_text: "Your email subject, i.e. Failed Build"
        body_text: "Your message when the build has failed"
    on_success:
      put: push-my-build

## Define your additional resources here
resources:
- name: send-an-email
  type: email
  source:
    smtp:
      host: smtp.example.com
      port: "587" # this must be a string
      username: a-user
      password: my-password
    from: build-system@example.com
    to: [ "dev-team@example.com", "product@example.net" ] # optional if `params.additional_recipient` is specified

resource_types:
- name: email
  type: docker-image
  source:
    repository: pcfseceng/email-resource
Additionally, if you need to output some relevant information about the build, you can include environment variables that wrap the Concourse build metadata in the body of the email message. For more details on how to do this, please refer to the documentation of the email resource: https://github.com/pivotal-cf/email-resource
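For example (a sketch: these are Concourse's standard build metadata variables, which the email resource can interpolate, assuming your version of the resource supports templating):

params:
  subject_text: "Build failed: ${BUILD_PIPELINE_NAME}/${BUILD_JOB_NAME} #${BUILD_NAME}"
  body_text: "See ${ATC_EXTERNAL_URL}/builds/${BUILD_ID} for details."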
If you're running JMeter in command-line non-GUI mode, your command needs to return a non-zero exit status code.
Even if there are failures in your test, the JMeter process exits with code 0, therefore any CI system will treat the execution as successful.
If you're looking for a JMeter-only solution, you can add a JSR223 Assertion to your Test Plan and put the following code into the "Script" area:
if (!prev.isSuccessful()) {
    props.put('failure', 'true')
}
Then add a tearDown Thread Group to your Test Plan and put a JSR223 Sampler there with the following code:
// 'true'.equals(...) is null-safe in case the property was never set
if ('true'.equals(props.get('failure'))) {
    System.exit(1)
}
If any Sampler in the JSR223 Assertion's scope fails, the whole JMeter process will finish with exit code 1, which is treated as an error by any upstream processing system.
Another option is to use the Taurus tool as a wrapper for your JMeter test. Taurus provides a flexible pass/fail criteria subsystem which gives you the possibility to define thresholds for considering the test successful or not. If the thresholds are exceeded, Taurus will return a non-zero exit code, which will be understood by Concourse (or whatever other software).
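A minimal sketch of such criteria (the threshold values here are arbitrary placeholders; see the Taurus pass/fail documentation for the full syntax):

reporting:
- module: passfail
  criteria:
  - avg-rt>250ms for 30s, stop as failed
  - failures>5% for 5s, stop as failed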

Parsing concurrency '${addressThread}' in group 'ThreadGroup' failed, choose 1

I am trying to define a percentage of threads for each Thread Group in my load testing .jmx file, and to pass the total number of threads from the Taurus config .yaml file.
However, Taurus fails to parse the expression, even though when I debug it using JMeter I can see that the expression works (I am setting the total number of users in the user.properties file in JMeter).
This is my YAML config file:
---
scenarios:
  student_service:
    script: ~/jmeter/TestPlan.jmx
    variables:
      addressThread: 100
    think-time: 500ms

execution:
- scenario: student_service
  hold-for: 5m
Versions I am using:
Taurus CLI Tool
macOS 10.13.6
JMeter 5.0
You're mixing properties and variables.
It should be:
---
scenarios:
  student_service:
    script: ~/jmeter/TestPlan.jmx
    properties:
      addressThread: 100
    think-time: 500ms

execution:
- scenario: student_service
  hold-for: 5m
And in JMeter, you should be using the __P function:
${__P(addressThread)}
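Note that __P also accepts a second argument as a default when the property isn't set, for example (10 here is an arbitrary fallback):

${__P(addressThread,10)}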
Still, there is a bug in the current version of Taurus (1.13.2), so you need to wait for the next version:
https://groups.google.com/d/msg/codename-taurus/QggRz9QDnO0/_FEGllDoGAAJ

Missing values from JMeter results file when run remotely

When running a test which makes use of the JMeter Plugins listeners Response Times vs Threads or Active Threads Over Time, running the test plan remotely produces a results file which is missing the values used to plot the graph; when run locally, all results are returned. E.g. when using Response Times vs Threads:
Example of a local result:
1383659591841,59,Example 1,200,OK,Example 1 1-579,text,true,183,22,22,59
Example of a remote result:
1383659859149,43,Example 1,200,OK,Example 1 1-575,text,true,183,43
Note that the two thread count fields (22,22 in the local result) are missing.
I would check the script definitions on the two servers: maybe some configuration in the "Write results to file" section has been changed. Take the local .jmx file and copy it to the remote server.
Also, look for differences in the "Results file configuration" section of the jmeter.properties file.
Make sure that on all of the slave/remote servers the jmeter.properties file within $JMETER_HOME/bin has the following setting:
jmeter.save.saveservice.thread_counts=true
By default this is set to false (and commented out).
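As an alternative to editing jmeter.properties on every slave, a property can be sent to all remote servers from the client using JMeter's -G flag (a sketch; test.jmx is a placeholder name):

jmeter -n -t test.jmx -r -Gjmeter.save.saveservice.thread_counts=true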
For more information:
JMeter Plugins Installation
