How do I read and compare JMeter results in a CI/CD pipeline?

I don't have any experience in non-functional testing, but I have just written a JMeter test and hooked it up in GitLab CI. I am generating a testresults.jtl and have added it to the artifacts.
But I am not sure how to read the results, or how to compare them with previous results so I can see or get notified if there are any changes in performance.
What should I do?

You can consider using the Taurus tool, which:
- Has a JUnit XML Reporter producing JUnit-style XML result files which can be "understood" by GitLab CI.
- Has a Pass/Fail Criteria subsystem where you can specify error thresholds; if, for example, the response time is higher than the defined value, Taurus will stop with a non-zero exit status code, so GitLab will automatically fail the build on getting a non-zero exit code.
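As a rough sketch (the test.jmx script, file names, and thresholds are illustrative assumptions; the junit-xml and passfail module names come from the Taurus documentation), a GitLab CI shell step could generate a Taurus config and run it:

cat > load.yml <<'EOF'
execution:
- scenario:
    script: test.jmx        # your existing JMeter test plan

reporting:
- module: junit-xml         # JUnit-style XML that GitLab CI can pick up
  filename: results/junit.xml
- module: passfail          # makes bzt exit non-zero when a criterion fails
  criteria:
  - avg-rt>500ms for 30s, stop as failed
  - failures>5% for 10s, stop as failed
EOF

bzt load.yml    # a non-zero exit code here fails the GitLab CI job

Note that this doesn't diff two .jtl files; instead the thresholds encode the performance you expect, and any run that regresses past them fails the pipeline.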

Related

Go-CD - How do you stop generating artifacts when JUNIT or JASMINE or Regression Test fails in Go-CD

We are actively using GoCD. We get JUnit, Jasmine, and other results; however, the build artifacts are always published by GoCD and then picked up by other agents to perform automated deployment.
We wish to set percentage-value markers for JUnit, Jasmine, etc., and if the observed value is lower than the marker, we want GoCD to not publish the artifacts.
Any ideas?
Ideally, after report creation another task kicks in which verifies the report results.
It could be, for example, a grep command inside a shell script looking for the words fail or error in the XML report files. As soon as the task finishes with a return code not equal to 0, GoCD considers the task to have failed.
The same applies to the percentage marker: a task is needed which calculates the percentage and then provides an appropriate return code, 0 when the percentage goal is met or exceeded and non-zero when the goal has not been met. This could also be implemented as a custom task, such as a shell script evaluating the reports (see the sketch below).
The pipeline itself can be configured not to publish any artifacts in case the task fails or errors.
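A minimal sketch of such a verification task, assuming JUnit-style XML reports under test-reports/ and an illustrative 90% pass-rate goal (both the path and the threshold are placeholders, not GoCD conventions):

#!/bin/sh
REPORT_DIR="test-reports"
THRESHOLD=90

# Sum a numeric testsuite attribute (tests, failures, errors) across reports.
sum_attr() {
  grep -ho "$1=\"[0-9]*\"" "$REPORT_DIR"/*.xml | grep -o '[0-9]*' |
    awk '{ s += $1 } END { print s + 0 }'
}

tests=$(sum_attr tests)
bad=$(( $(sum_attr failures) + $(sum_attr errors) ))

[ "$tests" -gt 0 ] || { echo "no test results found"; exit 1; }

# Exit non-zero below the goal; GoCD then fails the task and, with the
# pipeline configured accordingly, skips publishing the artifacts.
rate=$(( 100 * (tests - bad) / tests ))
echo "pass rate: ${rate}% (goal: ${THRESHOLD}%)"
[ "$rate" -ge "$THRESHOLD" ]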

Quality Gate Failure in SonarQube does not fail the build in Teamcity

I set up a build project in TeamCity and integrated SonarQube with it. The project gets built and even publishes the report successfully to the SonarQube console. But when the quality gate fails, it's not breaking the build. I searched and read about the build breaker, but it's supposedly already supported by the SonarQube plugin for TeamCity, according to this document: https://confluence.jetbrains.com/display/TW/SonarQube+Integration
Am I missing something in the configuration, or is there any gotcha? I tried to search a lot but didn't find any proper documentation or lead on that.
Yeah, I had to write a custom script that uses the exit status to break the build. I used the API to check the status of the Quality Gate:
PROJECTKEY="%teamcity.project.id%"
# Ask the SonarQube web API for the project's Quality Gate status;
# jq -r prints the raw string without the surrounding quotes.
QGSTATUS=$(curl -s -u SONAR_TOKEN: "http://SONAR_URL:9000/api/qualitygates/project_status?projectKey=$PROJECTKEY" | jq -r '.projectStatus.status')
# Exit non-zero (breaking the build) only when the gate reports ERROR.
if [ "$QGSTATUS" = "OK" ]
then
    exit 0
elif [ "$QGSTATUS" = "ERROR" ]
then
    exit 1
fi
I managed to fail the build based on Quality Gate settings using the sonar.qualitygate.wait=true parameter.
There's an example on their GitLab pipeline sample page: https://docs.sonarqube.org/latest/analysis/gitlab-cicd/
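A minimal sketch of that invocation (sonar.qualitygate.wait is available since SonarQube 8.1; the timeout is optional and 300 seconds is its documented default):

# Block until the server computes the Quality Gate; exit non-zero on ERROR.
sonar-scanner \
  -Dsonar.qualitygate.wait=true \
  -Dsonar.qualitygate.timeout=300

With the wait flag set, the scanner itself polls the server for the gate result, so any CI tool that checks exit codes will fail the step.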
The SonarQube plugin doesn't break the build when the quality gate fails. Why? Everything is described here: Why You Shouldn't Use Build Breaker
The main conclusion is:
[...] SonarSource doesn't want to continue the feature. [...]
Once we started using wallboards we stopped using the Build Breaker plugin, but still believed that using it was an okay practice. And then came SonarQube 5.2, which cuts the connection between the analyzer and the database. Lots of good things came with that cut, including a major change in architecture: analysis of source code is done on the analyzer side and all aggregate number computation is now done on the server side. Which means… that the analyzer doesn't know about the Quality Gate anymore. Only the server does, and since analysis reports are processed serially, first come first served, it can take a while before the Quality Gate result for a job is available.
In other words, from our perspective, the Build Breaker feature doesn't make sense anymore.
You have to verify the quality gate status on your own. You can read how to do it here: Access quality gate status from sonarqube api
The answer to xpmatteo's question:
Am I the only one that finds it difficult to understand what the quoted explanation means?
You have two tools: SonarScanner and SonarQube.
1) SonarScanner is executed on the CI server. It analyses the source code and pushes the analysis results to the SonarQube server.
2) The SonarQube server processes the data and knows whether the new changes pass the Quality Gates.
SonarScanner has no idea about the final result (pass or fail), so it cannot fail the build (it had that information before SonarQube 5.2, because it processed all the data itself and pushed only the results to the database). It means the Build Breaker plugin makes no sense anymore, because it cannot work with the current design. After executing SonarScanner you have to poll the server and check the Quality Gate status; then you may decide whether the build should fail or not.
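A rough sketch of that polling, as a shell step right after the scanner run, assuming SONAR_URL and SONAR_TOKEN are set in the environment; the scanner writes the Compute Engine task URL to .scannerwork/report-task.txt:

# The URL where the server reports on this analysis' background task.
CE_TASK_URL=$(sed -n 's/^ceTaskUrl=//p' .scannerwork/report-task.txt)

# Wait until the server has finished processing the analysis report.
STATUS=PENDING
while [ "$STATUS" != "SUCCESS" ] && [ "$STATUS" != "FAILED" ] && [ "$STATUS" != "CANCELED" ]
do
    sleep 5
    STATUS=$(curl -s -u "$SONAR_TOKEN": "$CE_TASK_URL" | jq -r '.task.status')
done
[ "$STATUS" = "SUCCESS" ] || exit 1

# Fetch the Quality Gate result for exactly this analysis and gate on it.
ANALYSIS_ID=$(curl -s -u "$SONAR_TOKEN": "$CE_TASK_URL" | jq -r '.task.analysisId')
QG=$(curl -s -u "$SONAR_TOKEN": "$SONAR_URL/api/qualitygates/project_status?analysisId=$ANALYSIS_ID" | jq -r '.projectStatus.status')
echo "Quality Gate: $QG"
[ "$QG" = "OK" ]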
The post below might help you:
https://docs.sonarqube.org/display/SONARQUBE45/Build+Breaker+Plugin
Run your SonarQube task with the property sonar.buildbreaker.skip, e.g.:
gradle clean build sonarqube publish -Dsonar.buildbreaker.skip=false
In my scenario the CI is GitHub Actions, but irrespective of the CI tool, Sonar's quality gate status (red/green) should be sent to your CI. You can browse the report status at http://<host>:<port>/api/ce/task?id=<taskId> once the report is generated.
You have to run a script like this after the reports are generated to check the status and fail the job if SonarQube fails.

How to automate SLA validation using JMeter in non-GUI mode, i.e. using the jmeter command-line utility

How can I create SLA goals and validate SLAs using JMeter from the command line? Please help. I want to be able to throw this command-line script at Jenkins and fail/pass the build based on SLA validation; if the SLA goals aren't met, the build should just fail.
If you are using the Jenkins Performance Plugin, you have a handful of options to mark the build as failed or unstable based on response times.
If you need to fail the build based on other metrics, the easiest option is running JMeter via the Taurus framework. It has a flexible and powerful Pass/Fail Criteria subsystem, and in case of test failure you will get a non-zero command-line process exit code, so Jenkins will automatically mark the step as failed.

Is it possible to display JMeter 'View Results in Table' listener data

I have one JMeter test plan with several test cases. Also, I use the jmeter-maven-plugin.
If one of the test cases fails (for 350 threads), it looks like:
Tests Run: 1, Failures: 350, Errors: 0
So it's not clear which test case failed.
Is it possible to show more detailed information about the failed test case in the Jenkins UI or in the console? Exactly like the 'View Results in Table' listener shows it in the JMeter GUI.
Is there a plugin that shows formatted output for the resulting JTL file (only the test case status and failure details) in the console or in the Jenkins UI?
Give this Jenkins plugin a try: Performance
The code that checks for failures just searches through your .jtl file looking for instances of failure, nothing more. It's really just there so that you can trigger a failure that Maven can detect; the plugin does not do in-depth analysis of the .jtl file.
There is also a jmeter-analysis-maven-plugin that will give you some more detailed information about the test results; if this doesn't meet your needs, feel free to add feature requests.
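If you just want the failed samples in the console, a small script over the CSV-format .jtl can approximate what View Results in Table shows. A hedged sketch, assuming JMeter's default CSV output with a header row, no embedded commas in the fields, and an illustrative file name:

# Map header names to column numbers, then print every failed sample.
awk -F, '
  NR == 1 { for (i = 1; i <= NF; i++) col[$i] = i; next }
  $col["success"] == "false" {
    printf "%s | HTTP %s %s | %s ms\n",
           $col["label"], $col["responseCode"], $col["responseMessage"], $col["elapsed"]
  }' testresults.jtl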

Can JMeter Assert on Throughput?

Is it possible to have a Maven/Jenkins build fail due to JMeter tests failing to achieve a specified throughput?
I'm tinkering with the jmeter-maven-plugin in order to do some basic performance tests as part of a continuous integration process in Jenkins.
One can set a Duration Assertion on a Sampler in the JMeter test to mark the sample as failed if a response time is over a certain threshold. What I'd like is to be able to fail a build (in Maven, or in the Jenkins job that triggers the Maven build) based on the total throughput for a test.
Is this possible with any existing plugins?
Yes, it's possible. You can use the Jenkins Text Finder plugin and the JMeter "Aggregate Report". With Aggregate Report you can write a CSV or XML file, search that file for your throughput with the Text Finder plugin, and then mark the build as failed or unstable. Alternatively, you can use a BASH script to search the generated JMeter report file and return a non-zero value, which will make your build fail.
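A rough sketch of the script approach, computing overall throughput straight from the raw CSV .jtl (the 50 requests/second target and the file name are illustrative; column 1 is JMeter's epoch-millisecond timeStamp):

awk -F, -v target=50 '
  NR == 1 { next }                        # skip the CSV header row
  {
    if (min == "" || $1 < min) min = $1
    if ($1 > max) max = $1
    n++
  }
  END {
    dur = (max - min) / 1000              # test duration in seconds
    tp = (dur > 0) ? n / dur : 0
    printf "throughput: %.2f requests/s (target: %d)\n", tp, target
    exit (tp >= target) ? 0 : 1           # non-zero exit fails the build step
  }' testresults.jtl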
