Can JMeter Assert on Throughput? - maven

Is it possible to have a Maven/Jenkins build fail due to JMeter tests failing to achieve a specified throughput?
I'm tinkering with the jmeter-maven-plugin in order to do some basic performance tests as part of a continuous integration process in Jenkins.
One can set a Duration Assertion on a Sampler in the JMeter test to mark the sample as failed if a response time is over a certain threshold. What I'd like is to be able to fail a build (in Maven, or in the Jenkins job that triggers the Maven build) based on the total throughput for a test.
Is this possible with any existing plugins?

Yes, it's possible. You can use the Jenkins Text Finder plugin together with the JMeter Aggregate Report, which can write its results to a CSV or XML file. You can search this file for your throughput value with the Text Finder plugin and then mark the build as failed or unstable. Alternatively, you can use a bash script to search the generated JMeter report file and return a non-zero exit code, which will make your build fail.
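A minimal sketch of the bash-script approach, assuming the aggregate report is a CSV with a TOTAL row and the throughput in the fourth column; the file name, label, and column index are assumptions to adjust to your actual report layout:

```shell
#!/bin/sh
# Sketch: fail a build step when the TOTAL throughput in a JMeter
# aggregate-report CSV drops below a threshold. REPORT, the "TOTAL"
# label, and the column position are assumptions.
REPORT=aggregate.csv
THRESHOLD=40

# Stand-in report so the sketch is self-contained; normally JMeter
# (or JMeterPluginsCMD) writes this file.
cat > "$REPORT" <<'EOF'
label,samples,average,throughput
HTTP Request,100,120,42.5
TOTAL,100,120,42.5
EOF

# Pull the throughput (4th field) from the TOTAL row.
actual=$(awk -F, '$1 == "TOTAL" { print $4 }' "$REPORT")

if awk -v a="$actual" -v t="$THRESHOLD" 'BEGIN { exit (a + 0 >= t + 0) ? 0 : 1 }'; then
  echo "throughput OK: $actual >= $THRESHOLD"
else
  echo "throughput too low: $actual < $THRESHOLD"
  exit 1   # non-zero exit code fails the Maven/Jenkins step
fi
```

Running this as a shell step after the JMeter run makes the build fail exactly like any other failing command.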

Related

How do I read and compare jmeter results in CI/CD pipeline

I don't have any experience in non-functional testing, but I have just written a JMeter test and hooked it up in GitLab CI. I am generating a testresults.jtl and added it to the artifacts.
However, I am not sure how to read the results, or how to compare them with previous results so I can see or get notified of any changes in performance.
What should I do?
You can consider using the Taurus tool, which:
Has a JUnit XML Reporter producing JUnit-style XML result files which can be "understood" by GitLab CI
Has a Pass/Fail Criteria subsystem where you can specify error thresholds; if, for example, response time is higher than the defined value, Taurus will stop with a non-zero exit status code, so GitLab will automatically fail the build on getting a non-zero exit code.
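As a sketch, a Taurus configuration along these lines would cover both points; the .jmx path, output file name, and criteria values are assumptions to adapt:

```yaml
# Hypothetical Taurus config (e.g. test.yml): run an existing JMeter
# plan, emit JUnit-style XML for GitLab CI, and fail on thresholds.
execution:
- executor: jmeter
  scenario:
    script: existing.jmx          # assumed path to your test plan

reporting:
- module: junit-xml               # JUnit-style XML GitLab CI can parse
  filename: results/taurus-junit.xml
- module: passfail
  criteria:
  - avg-rt>500ms for 30s, stop as failed
  - fail>5% for 10s, stop as failed
```

Running `bzt test.yml` in the GitLab CI job then exits non-zero when a criterion fires, which fails the job automatically.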

Jmeter + Jenkins Performance plugin

I have set the thread properties as
${__P(threads,)}
for the number of users and
${__P(rampup,)}
for the ramp-up period in the JMeter GUI.
Then I created a job in Jenkins, chose 'This project is parameterized', and included the String parameters 'THREADS' and 'RAMPUP'. I set the right path for the execution and included -Jthreads=%THREADS% and -Jrampup=%RAMPUP% under Execute Windows batch command.
The right path was set for generating the performance report.
After choosing 'Build with parameters', say with 'THREADS' as 10 and 'RAMPUP' as 0, the build is successful.
The issue is with the 'Performance report': the 'HTTP Request' count is always displayed as 20, irrespective of whatever value is provided at build time. The thread count of 10 is not being taken into account as a parameter; the report shows 20 requests instead of the actual 10.
The Performance Plugin doesn't know anything about Jenkins parameters; most probably it is misconfigured so that it consumes the same .jtl results file every time and never picks up any updates.
Further assistance is not possible without seeing a screenshot of the freestyle project or the code of your Jenkins JMeter pipeline.
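For reference, a sketch of the underlying non-GUI invocation, shown as a shell script; plan.jmx and results.jtl are assumed names. `${__P(threads,)}` in the test plan is resolved from the `-Jthreads` property, and deleting the old .jtl before each run helps rule out the stale-results problem:

```shell
#!/bin/sh
# Sketch of the non-GUI JMeter call behind the Jenkins build step.
# THREADS and RAMPUP come from the parameterized job; plan.jmx and
# results.jtl are assumptions.
THREADS="${THREADS:-10}"
RAMPUP="${RAMPUP:-0}"

# Delete the old results file first: stale numbers in the Performance
# report usually mean the plugin keeps re-reading a leftover .jtl.
rm -f results.jtl

# Written to a file here so the sketch runs without JMeter installed;
# in the real job you would execute the command directly.
printf '%s\n' "jmeter -n -t plan.jmx -Jthreads=$THREADS -Jrampup=$RAMPUP -l results.jtl" > jmeter-cmd.txt
cat jmeter-cmd.txt
```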

Go-CD - How do you stop generating artifacts when JUNIT or JASMINE or Regression Test fails in Go-CD

We are actively using GoCD. We get JUnit, Jasmine, and other results; however, the build artifacts are always published by GoCD and then picked up by other agents to perform automated deployment.
We wish to set percentage-value markers for JUnit, Jasmine, etc., and if the observed value is lower than the marker, we want GoCD not to publish the artifacts.
Any ideas?
Ideally, after report creation another task kicks in which verifies the report results.
It could be, for example, a grep command inside a shell script looking for the words "fail" or "error" in the XML report files. As soon as the task finishes with a return code not equal to 0, GoCD considers the task to be failed.
The same applies to the percentage marker: a task is needed which calculates the percentage and then provides an appropriate return code, 0 when the percentage goal is met or exceeded and non-zero when it has not been met. This could also be implemented as a custom task, such as a shell script evaluating the reports.
The pipeline itself can be configured to not publish any artifacts in case the task fails or errors.
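The percentage-marker task described above can be sketched as a small shell script; the report file name, the attribute layout, and the 90% marker are assumptions:

```shell
#!/bin/sh
# Sketch of a GoCD verification task: compute the pass rate from a
# JUnit XML report and exit non-zero below a marker. REPORT and
# MARKER are assumptions; point it at your real report.
REPORT=junit-report.xml
MARKER=90

# Stand-in report so the sketch is self-contained.
cat > "$REPORT" <<'EOF'
<testsuite tests="20" failures="1" errors="0"/>
EOF

tests=$(sed -n 's/.*tests="\([0-9]*\)".*/\1/p' "$REPORT")
failures=$(sed -n 's/.*failures="\([0-9]*\)".*/\1/p' "$REPORT")
errors=$(sed -n 's/.*errors="\([0-9]*\)".*/\1/p' "$REPORT")

# Integer pass percentage; a non-zero exit makes GoCD fail the task,
# so the pipeline can be configured to skip artifact publishing.
pct=$(( (tests - failures - errors) * 100 / tests ))
echo "pass rate: ${pct}%"
[ "$pct" -ge "$MARKER" ] || exit 1
```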

How Can I compare output of 2 different JMeter runs automatically?

I want to compare 2 different JMeter runs.
I have a Jenkins job to do that, which triggers a .jmx which in turn calls REST APIs.
Let's say I executed the JMeter run once; I will refer to this as run 1.
Now, after 30 minutes, I run the same Jenkins job again, and with it the same JMeter test. I will refer to this as run 2.
So now I have 2 runs, and I want to compare run 1 and run 2, specifically the response times.
How can I automate this so that every time it happens, I get the difference in response times of the APIs?
I tried searching and found a few articles (Compare results from a previous test in jmeter), but they did not really help :(
Please let me know how this can be achieved.
You can use the MergeResults plugin in order to compare 2 test executions, plot an execution chart, and/or get the differences in a CSV file.
If you want to run it in non-GUI mode from a Jenkins job, go for the JMeterPluginsCMD Command Line Tool, which can run the MergeResults plugin in command-line mode.
You can install both plugins using the JMeter Plugins Manager.
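If the plugin route doesn't fit, a rough alternative (not MergeResults itself) is to diff the average response times straight from the two .jtl files with awk; the file names and the position of the elapsed column are assumptions:

```shell
#!/bin/sh
# Sketch: compare the average response time of two JMeter runs from
# their .jtl CSV files. Assumes "elapsed" is the 2nd column; the
# stand-in files below would normally come from the two Jenkins runs.
cat > run1.jtl <<'EOF'
timeStamp,elapsed,label
1000,120,API A
2000,180,API A
EOF
cat > run2.jtl <<'EOF'
timeStamp,elapsed,label
3000,200,API A
4000,240,API A
EOF

# Mean of the elapsed column, skipping the header row.
avg() { awk -F, 'NR>1 { s+=$2; n++ } END { print s/n }' "$1"; }
a=$(avg run1.jtl)
b=$(avg run2.jtl)
echo "run1 avg=${a}ms run2 avg=${b}ms diff=$(awk -v a="$a" -v b="$b" 'BEGIN { print b-a }')ms"
```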

How to automate SLA validation using Jmeter in Non-GUI mode, i.e. using command line jmeter utility

How can I create SLA goals and validate them using JMeter from the command line? Please help; I want to be able to run this command-line script on Jenkins and fail/pass the build based on SLA validation: if SLA goals aren't met, the build should simply fail.
If you are using the Jenkins Performance Plugin, you have a handful of options to mark the build as failed or unstable based on response times.
If you need to fail the build based on other metrics, the easiest option is running JMeter via the Taurus framework. It has a flexible and powerful Pass/Fail Criteria subsystem, and in case of test failure you will get a non-zero command-line process exit code, so Jenkins will automatically mark the step as failed.