How to Integrate JMeter Test Results with TestRail

How do I integrate JMeter test results with TestRail? Could anyone help me with a small example?

You should use the TestRail API, that's true. It's a simple REST one.
To me it looks like you have two options here.
1) Implement a custom Beanshell/JSR223 listener that picks up the SampleResult data and pushes it into TestRail as the test runs.
I wouldn't recommend that, though: it would slow JMeter down, consume resources and may distort the recorded results.
So you'd better opt for
2) Keep your Test Plan lean in JMeter, record the results (like jmeter -n -t test.jmx -l test.jtl) - and implement an auxiliary utility to ingest, parse and push them into TestRail.
That way is more efficient and reliable, for obvious reasons.
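For illustration, here is a minimal Groovy sketch of such a utility, assuming the default CSV JTL layout; the TestRail URL, credentials, run ID and the label-to-case mapping are all placeholders you would adapt:

import groovy.json.JsonOutput

// Parse a CSV JTL and push a pass/fail result per sample to TestRail
// via POST index.php?/api/v2/add_result_for_case/{run_id}/{case_id}.
def testRail = 'https://example.testrail.io/index.php?/api/v2'  // hypothetical instance
def auth = 'user@example.com:api-key'.bytes.encodeBase64().toString()
def runId = 1                                 // assumed TestRail run ID
def caseIds = ['Login': 101, 'Search': 102]   // hypothetical sampler label -> case ID map

def lines = new File('test.jtl').readLines()
def header = lines.head().split(',').toList()
int labelIdx = header.indexOf('label')
int successIdx = header.indexOf('success')

lines.tail().each { line ->
    def cols = line.split(',')                // naive CSV split; fine for simple labels
    def caseId = caseIds[cols[labelIdx]]
    if (caseId == null) return
    def conn = new URL("${testRail}/add_result_for_case/${runId}/${caseId}")
            .openConnection() as HttpURLConnection
    conn.requestMethod = 'POST'
    conn.setRequestProperty('Authorization', "Basic ${auth}")
    conn.setRequestProperty('Content-Type', 'application/json')
    conn.doOutput = true
    // in TestRail's default scheme, status_id 1 = passed, 5 = failed
    def body = JsonOutput.toJson([status_id: cols[successIdx] == 'true' ? 1 : 5])
    conn.outputStream.withWriter { it << body }
    assert conn.responseCode == 200
}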
PS: Basically there is a third way, especially if you need the results coming in live: stream them using a Backend Listener.
TestRail doesn't seem to have a streaming API out of the box, though, so you'd have to write a kind of proxy that transforms the Graphite stream (for example) into a sequence of TestRail calls.
That's going to be interesting, but it is clearly the trickiest of the three solutions.
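To give a flavour of that third option, here is a toy Groovy sketch of such a proxy; port 2003 is Graphite's plaintext port, while the metric naming is only an assumption loosely based on what JMeter's Graphite Backend Listener emits:

// Accept Graphite plaintext frames ("<metric> <value> <timestamp>") from
// JMeter's Backend Listener and react to failure counters.
def server = new ServerSocket(2003)
while (true) {
    def socket = server.accept()
    socket.inputStream.eachLine { line ->
        def (metric, value, timestamp) = line.tokenize(' ')
        def parts = metric.tokenize('.')
        // e.g. "jmeter.Login.ko.count" -> sampler "Login" recorded failures
        if (parts.size() >= 4 && parts[2] == 'ko' && (value as double) > 0) {
            // here you would map parts[1] to a TestRail case ID and
            // POST add_result_for_case, as in the utility sketched above
            println "would mark '${parts[1]}' as failed in TestRail"
        }
    }
    socket.close()
}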

Related

Testing HTTP REST APIs that have long-running queries that are polled for status

I'm struggling to find a framework to help me test the performance of a service I am writing, which fronts a long-running process. A simplified description of the service is:
POST data to the service's /start endpoint; it returns a token.
GET the status of the action at /status/{token}, polling until it returns a status of completed.
GET the results from /result/{token}.
I've dabbled with Locust.io, which is fine for measuring the responsiveness of the API but does little for measuring the overall end-to-end performance. What I would really like to do is measure how long all three steps take to complete, particularly when I run many of them in parallel. I should imagine my service's back end falls over far sooner than the REST API does.
Can anyone recommend any tools / libraries / frameworks I can use to measure this please? I would like to integrate it with my build pipeline so I can measure performance as code is changed.
Many thanks
The easiest option I can think of is Apache JMeter: it provides the Transaction Controller, which generates an additional "transaction" sample holding its children's cumulative response time (along with other metrics).
The "polling" can be implemented using a While Controller.
Example test plan outline:
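A rough sketch of what that outline might look like; the endpoint paths come from the question, while the variable and JSON field names are assumptions:

Test Plan
  Thread Group
    Transaction Controller "end-to-end"
      HTTP Request "POST /start"
        JSON Extractor (token = $.token)
      While Controller, condition: ${__jexl3("${status}" != "completed")}
        Flow Control Action (pause between polls)
        HTTP Request "GET /status/${token}"
          JSON Extractor (status = $.status)
      HTTP Request "GET /result/${token}"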

In JMeter, how do I make a test fail based on the mean response time?

test plan screenshot:
For example, instead of using 2000 ms as a per-sample threshold when doing the assertion, I want to assert based on the mean response time. How can I do that?
You seem to be using the easiest solution already. Implementing an assertion on the mean response time is possible, but a little bit tricky, as you will need a JSR223 Assertion which will (see the sketch after this list):
Store the response time of each request in a JMeter Variable
On each request's completion, calculate the arithmetic mean of all the stored values and compare the result to the expected value.
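A minimal Groovy sketch of such a JSR223 Assertion; the 2000 ms threshold comes from the question, the variable names are assumptions, and since JMeter Variables are thread-local the mean is computed per thread:

// Keep a running total of response times in JMeter Variables and
// fail the sampler once the mean exceeds the threshold.
long threshold = 2000L
long total = (vars.get('rtTotal') ?: '0') as long
long count = (vars.get('rtCount') ?: '0') as long
total += SampleResult.getTime()               // elapsed time of the current sample
count++
vars.put('rtTotal', String.valueOf(total))
vars.put('rtCount', String.valueOf(count))
long mean = total.intdiv(count)               // integer division is enough here
if (mean > threshold) {
    AssertionResult.setFailure(true)
    AssertionResult.setFailureMessage("Mean response time ${mean} ms exceeds ${threshold} ms")
}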
The easier way would be running your JMeter test via the Taurus tool. Taurus can execute existing JMeter tests while providing some extra reporting and other features; for your use case in particular, the Pass/Fail Criteria subsystem seems to be the perfect match.
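For instance, a Taurus config along these lines; the script name and the 2000 ms threshold echo the question, while the 30-second window is an arbitrary choice:

execution:
- scenario:
    script: test.jmx

reporting:
- module: passfail
  criteria:
  - avg-rt>2000ms for 30s, stop as failed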

Plotting JMeter test results dynamically in HTML chart

I want to be able to run a JMeter test for thousands of users and plot the results dynamically using a jQuery-based charting library like HighCharts, i.e. the response from every virtual user must be plotted in near real time to show a stock-ticker-like chart that updates dynamically. I am OK with running the test in non-GUI mode.
I have tried the following:
- Run the JMeter test in non-GUI mode and write the responses to a file. What I notice is that the results get written to the file in a buffered manner, which means that even if I have a program monitoring the file for new records, I won't get them in real time.
I am looking for suggestions on how this can be achieved:
1. Do I need to write a custom JMeter plugin? In that case, how would it work?
2. Is there some listener which can give me the desired data?
3. Can this be done via a post-processor?
I have seen real time reporting being done on some cloud based load testing websites which use JMeter, so I'm sure it can be done, but how?
There is some buffering when writing to a file, but it shouldn't be more than a few seconds' worth of data.
I'd go the route of reading the log file into something like StatsD using something like logstash.net, and from there you can probably find an existing solution that pushes the data to a chart.
You can disable buffering by adding this to the user.properties file:
jmeter.save.saveservice.autoflush=true
This slightly impacts performance for tests that have short or no pauses.
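With autoflush on, a simple tailer is enough to pick up new samples as they are appended; a minimal Groovy sketch, with the file name assumed:

// Poll test.jtl and print newly appended records as they arrive
def f = new File('test.jtl')
long pos = 0
while (true) {
    long len = f.length()
    if (len > pos) {
        f.withInputStream { is ->
            is.skip(pos)                 // jump past what was already read
            is.eachLine { println it }   // push each new record to your chart instead
        }
        pos = len
    }
    Thread.sleep(500)                    // poll twice per second
}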
To do what you want, you could use this kind of library:
http://www.chartjs.org/

JMeter: What is a good test structure for load testing REST APIs?

I am load testing (baseline, capacity, longevity) a bunch of APIs (eg. user service, player service, etc) using JMeter. Each of these services have several endpoints (eg. create, update, delete, etc). I am trying to figure out a good way to organize my test plans in JMeter so that I can load test all of these services.
1) Is it a good idea to create a separate JMeter Test Plan (jmx) for each of the APIs rather than creating one JMeter test plan and adding thread groups like "Thread Group for User Service", "Thread Group for Player Service", etc? I was thinking about adding one test plan per API, and then adding several Thread Groups for different types of load testing (baseline, capacity, longevity, etc).
2) When JMeter calculates the Sample Time (Response Time), does it also include the time taken by the BeanShell Processors?
3) Is it a good idea to put a Listener inside each Simple Controller? I am using JMeter Plugins for reporting, and I wanted to view the reports for each endpoint.
Answers to any or all of the questions would be much appreciated :)
I am using a structure like the one described below when creating a test plan in JMeter.
1) I like a test plan to look like a test suite. JMeter has several ways of separating components and test requirements, so it is hard to set a single rule. One test plan is likely to be more efficient than several, and it can be configured to satisfy most requirements. I find there can be a lot of repetition between plans, which often means maintaining the same code in different places; it is better to use modules and includes within the same plan, together with Test Fragments, to reduce code duplication.
Thread Groups are best used as user groups, but they can be used to separate tests any way you please. Consider the scaling you need for different pages/sites, i.e. user/administrator tests can be done in different Thread Groups, so you can simulate, say, 50 users and 2 admins testing concurrently. Or you may distinguish front end/back end, or even pages/sites.
2) It does not include Beanshell pre- and post-processing times. (But if you use a Beanshell Sampler, that depends on the code.)
3) Listeners are expensive, so fewer is better. To separate the results, you can give each sampler a different title, and the listeners/graphs can then group these as required. You can include timestamps or indexes as part of your sampler title using variables, properties, ${__javaScript}, etc. This will cause more or less grouping depending on the implementation you choose.
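For instance, a sampler label built like this (just an illustrative pattern):

Create User - thread ${__threadNum} - ${__time(HH:mm:ss)}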

Is it possible to use alternative Listeners in JMeter after the tests are done?

I'm familiarising myself with JMeter and I've thought of something that's either pretty cool or a very dumb idea.
Whilst reading about Listeners I noticed the following:
Note that all Listeners save the same data; the only difference is in the way the data is presented on the screen.
And this:
Graph Results MUST NOT BE USED during load test as it consumes a lot of resources (memory and CPU). Use it only for either functional testing or during Test Plan debugging and Validation.
So I was wondering: if all listeners receive the same data, why not save that data to a CSV or even an XML file and feed it to a listener afterwards? It would be very resource-friendly to have the Graph Results listener display a graph after the tests are done, instead of while testing.
Am I missing something, or is this a good possibility?
Yes, you can do that, and I think most people use it exactly that way. Instead of CSV or XML files, use the JTL file format to save the results. In the normal scenario, one uses the command line to run the test and save the data to a file (preferably a JTL). After the test is done, you can use the JTL file to generate reports with the JMeter UI or with other tools.
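That workflow, roughly (file names are assumptions):

jmeter -n -t plan.jmx -l results.jtl

Then open the JMeter GUI, add the listener of your choice (e.g. Aggregate Report or Graph Results), and use its "Browse..." button to load results.jtl into it.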
