Is there a jmeter plugin to compare test duration against a predefined benchmark? - jmeter

I am a C# developer and my QA team implemented a JMeter-based test process, but I don't like their process. What I want is this:
Say I have 10 web API test cases and JMeter simulates 10 users.
When JMeter finishes each test, I want to compare the average duration against a predefined benchmark.
1. Test 1 is an easy one and I expect the average duration to be less than 1 second.
2. Test 2 is a complicated one and I expect the average duration to be less than 5 seconds.
3. Test x: I expect the average duration to be less than N seconds.
4. If any test's average duration is 10% higher than the predefined number, the test result should be a fail.
My QA insists it can't be done in JMeter; is that really the case? As a C# developer I can do the above easily using NBomber with a bit of coding.

1-3 - Duration Assertion
4 - JSR223 Assertion along with any Sampler somewhere in a tearDown Thread Group. There is the TESTSTART.MS pre-defined property, which holds the test start time; getting the current time, comparing it to the test start time and failing the request is not a big deal.
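A minimal JSR223 Assertion sketch in Groovy for point 4, assuming the whole run is given a 60-second budget (the budget value and the 10% margin are illustrative, not from the question):

    // JSR223 Assertion (Groovy), placed on a sampler in the tearDown Thread Group.
    // TESTSTART.MS is a JMeter property that holds the test start time in milliseconds.
    long testStart = Long.parseLong(props.get('TESTSTART.MS').toString())
    long elapsedMs = System.currentTimeMillis() - testStart
    long budgetMs  = 60000L                                  // assumed overall benchmark
    if (elapsedMs > budgetMs * 1.1) {                        // fail when 10% over the benchmark
        AssertionResult.setFailure(true)
        AssertionResult.setFailureMessage("Run took ${elapsedMs} ms, more than 10% over the ${budgetMs} ms budget")
    }

The per-test-case average checks (points 1-3) are better handled by the Duration Assertion mentioned above, which needs no scripting.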
More information on the Assertions concept: How to Use JMeter Assertions in Three Easy Steps

Related

Simulate 100k transactions per second

In JMeter (v5.2.1) I have a JSR223 Sampler with a custom script that generates a payload and sends it to an endpoint. The endpoint is capable of receiving 100,000 transactions per second. However, from JMeter I find that the maximum I can achieve is around 4,000 requests per second per thread.
I tried to increase the thread count to 10 and then 100, however the result stays the same, i.e. JMeter achieves a maximum of about 4k requests per second.
I also tried to have two thread groups running in parallel to increase the transaction rate; this did not push the rate beyond the 4k per second mark, no matter what I do.
Is there any way or method I can use to increase this request rate to 100k per second?
The test plan has one thread group containing a Simple Controller with one JSR223 Sampler inside it, followed by a Summary Report at test plan level.
I have followed all the best practices highlighted in some of the articles on Stack Overflow.
Thank you.
If you really followed the JMeter Best Practices you would be using JMeter 5.5, as the very first one reads:
16.1 Always use latest version of JMeter
The performance of JMeter is being constantly improved, so users are highly encouraged to use the most up to date version.
Given that you have applied all the JMeter performance and tuning tips, done the same for the JVM, and still cannot reach more than 4,000 transactions per second, you can consider the following options:
Switch from the JSR223 Sampler to a Java Request sampler or create your own JMeter plugin; in this case the performance of your code would be higher (see the sketch after this list).
Also consider switching to distributed mode of JMeter test execution: if you can reach 4,000 requests per second from one machine, you will need 25 machines to reach 100,000 requests per second.
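A minimal sketch of what a Java Request sampler client could look like, written here in Groovy; the class name PayloadSampler and the sendPayload() helper are hypothetical placeholders for whatever the JSR223 script currently does, and the class would be compiled into a jar and dropped into JMeter's lib/ext:

    import org.apache.jmeter.protocol.java.sampler.AbstractJavaSamplerClient
    import org.apache.jmeter.protocol.java.sampler.JavaSamplerContext
    import org.apache.jmeter.samplers.SampleResult

    // Hypothetical Java Request sampler client: the work runs as compiled code
    // instead of being re-evaluated as a script on every call.
    class PayloadSampler extends AbstractJavaSamplerClient {
        @Override
        SampleResult runTest(JavaSamplerContext context) {
            SampleResult result = new SampleResult()
            result.sampleStart()
            try {
                sendPayload()                       // placeholder for the payload generation/send logic
                result.setResponseCodeOK()
                result.setSuccessful(true)
            } catch (Exception e) {
                result.setSuccessful(false)
                result.setResponseMessage(e.message)
            } finally {
                result.sampleEnd()
            }
            return result
        }

        private void sendPayload() {
            // the actual transport call to the endpoint would go here
        }
    }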

JMeter Non GUI mode Test Execution number of samples mismatching

I'm executing multiple scripts for 1 hr in non-GUI mode. I have a couple of questions here.
Test Scripts:-
Script1
Script2
Script3
The number of samples differs between the scenarios. I need an equal distribution for all 3 scenarios. How do I achieve this?
I'm saving all 3 scripts in one .jmx file (keeping 3 thread groups and assigning 20 users per script). Is that the correct approach?
I have added assertions for each request to check whether the response is valid or not. In LoadRunner we keep them outside of transactions, but in JMeter I'm not sure. Do we need to keep them enabled during the execution window?
I'm really looking forward to your suggestions.
It is difficult to achieve an equal number of samples in all 3 scripts in JMeter, as the response times for the 3 requests will be different. And there is no such thing as pacing in JMeter as there was in LoadRunner; you can only add think time as a Constant Timer according to your judgement.
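If a fixed Constant Timer is not enough, one way to approximate LoadRunner-style pacing is a JSR223 Timer that pads each iteration up to a target period. This is only a rough sketch, not part of the original answer, and the 5000 ms target is an assumed value:

    // JSR223 Timer (Groovy): the script's return value is used as the delay in milliseconds.
    long pacingMs = 5000L                              // assumed per-iteration target
    def previous  = ctx.getPreviousResult()            // sampler that ran before this timer, if any
    long elapsed  = previous != null ? previous.getTime() : 0L
    long delay    = pacingMs - elapsed
    return delay > 0 ? delay : 0L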
One reason for the discrepancy could be the ramp-up time. You should give the same ramp-up period in all 3 thread groups; if the ramp-up time is different, a discrepancy in the number of samples is expected.
I would be able to help if you could provide some more info, like:
1. What the response time is for each request.
2. How much think time you are giving for each.
3. How much ramp-up time you are giving for each thread group.
4. How much startup delay you are giving.

Users distribution in Load Test Visual Studio

I created a load test project in VS. There are 5 scenarios, which are implemented as normal unit tests.
Test mix model: Test mix percentage based on the number of tests started.
Scenario A: 10%
Scenario B: 65%
Scenario C: 9%
Scenario D: 8%
Scenario E: 8%
Load pattern: Step. Initial user count: 10. Step user count: 10. Step duration: 10sec. Maximum user count: 300.
Run Duration: 10 minutes.
I would like to know how the load is put on all the scenarios. How are the users distributed between the scenarios over time?
If I put 100 users as the initial user count, do 10 virtual users (10% of 100) start replaying scenario A at one time? What happens when they finish? I would be really grateful if someone could explain how the user distribution works.
Please use the correct terminology. Each "Scenario" in a load test has its own load pattern. This answer assumes that there are 5 test cases A to E.
The precise way load tests start test cases is not defined but the documentation is quite clear. Additionally the load test wizard used when initially creating a load test has good descriptions of the test mix models.
Load tests also make use of random numbers for think times and when choosing which test to run next. This tends to mean the final test results show counts of test cases executed that differ from the desired percentages.
My observations of load tests leads me to believe it works as follows. At various times the load test compares the number of tests currently executing against the number of virtual users that should be active. These times are when the load test's clock ticks and a step load pattern changes, also when a test case finishes. If the comparison shows more virtual users than test cases being executed then sufficient new tests are started to make the numbers equal. The test cases are chosen to match the desired test mix, but remember that there is some randomization in the choice.
Your step pattern is initially 10, stepping by 10 every 10 seconds to a maximum of 300. Maximum users should be reached after roughly (10 seconds per step) * (300 users) / (10 users per step) = 300 seconds = 5 minutes. The run duration of 10 minutes means 5 minutes of ramp-up and then 5 minutes steady at the maximum user count.
For the final paragraph of your question: given the same percentages but a constant user count of 100, you would expect the initial number of each test case to be close to the percentages, thus 10 of A, 65 of B, 9 of C, 8 of D and 8 of E. When any test case completes, Visual Studio will choose a new test case, attempting to follow the test mix model, but, as I said earlier, there is some randomization in the choice.
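A quick sanity check of the step-load arithmetic above; counting from the initial 10 users the exact figure comes out at 290 seconds, in line with the roughly 5 minutes stated:

    // Step-load ramp-up arithmetic for the pattern described in the question
    int initialUsers = 10
    int stepUsers    = 10
    int stepSeconds  = 10
    int maxUsers     = 300
    int steps        = (maxUsers - initialUsers).intdiv(stepUsers)   // 29 steps of 10 users
    int rampSeconds  = steps * stepSeconds                            // 290 s, roughly 5 minutes
    println "Ramp-up to ${maxUsers} users takes about ${rampSeconds} seconds"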

How to calculate throughput in a Jmeter test plan

I have a JMeter test plan with which I am running 10 - 500 threads. Each thread submits a job; I am basically collecting results for, say, 10 jobs and measuring the latency of each job. I know the Summary Report gives a nice view of throughput, but that report is not suitable for my test because my test plan has 1 POST plus 11 GET calls in it and the Summary Report gives the throughput of each of those calls. I need to measure throughput for 10, 50 and 100 threads respectively. Could someone let me know how I should do this in JMeter, or do I have to calculate it manually? Note: I'm allowing a 10 sec ramp-up time for 10 threads.
You are right: the Summary Report gives you throughput for each call. To measure multiple calls at once, add them under a Transaction Controller. For example, say you want to measure the overall throughput of all GETs at once: put them all under a Transaction Controller, but not the POST request, so the summary will contain a measurement for each of the GET requests, plus a separate line which includes all of them at once.
Another (non-interactive) option is to save the results as a CSV file, including label and latency, and calculate throughput from the file using Excel (or awk).
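As an illustration of that approach, here is a small standalone Groovy sketch that computes per-label throughput from such a CSV file. It assumes the default results header with timeStamp and label columns, a hypothetical results.csv file name, and labels that contain no commas; the span from first to last sample start is used as an approximation of the total time:

    // Compute per-label throughput (requests/second) from a JMeter CSV results file.
    def lines  = new File('results.csv').readLines()
    def header = lines.head().split(',').toList()
    int tsIdx  = header.indexOf('timeStamp')                 // sample start time, ms since epoch
    int lblIdx = header.indexOf('label')
    def rows   = lines.tail().collect { line ->
        def cols = line.split(',')                           // naive split: assumes no commas in labels
        [ts: cols[tsIdx].toLong(), label: cols[lblIdx]]
    }
    rows.groupBy { it.label }.each { label, samples ->
        long spanMs = samples*.ts.max() - samples*.ts.min()
        double throughput = spanMs > 0 ? samples.size() * 1000.0 / spanMs : samples.size()
        printf('%-30s %8.2f req/s%n', label, throughput)
    }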
To measure for a different number of users, you need to run the test multiple times with that number of concurrent threads/users.

JMeter: Aggregate report analysis

I want to test the capacity that the web app can handle without breaking. How do I get the average requests per second from the Aggregate Report?
Is throughput equal to average requests per second?
I don't really need the Apache definition, please make it simple.
Number of threads: 25
Ramp-up: 1
Loop: 10
I have 3 slaves.
Samples: 250
Avg: 1594
Throughput: 10.4
Your question contains the answer: Throughput:10.4
According to JMeter Glossary:
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
So my assumption is that you generate a load of 10 requests per second (it may be per minute, however; I would need to know how long your 250 samples took to execute to tell for sure).
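Applying the glossary formula to the numbers in the question (assuming the 10.4 figure really is per second) gives the run length it implies:

    // Throughput = (number of requests) / (total time), so total time = requests / throughput
    int samples         = 250
    double throughput   = 10.4                             // from the Aggregate Report
    double totalSeconds = samples / throughput             // ≈ 24 s from first sample to last
    println "250 samples at 10.4 req/s means the test ran for roughly ${Math.round(totalSeconds)} seconds"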
If you're interested in "requests per second over time" you can use the Server Hits Per Second listener, which is available via the JMeter Plugins project.
An alternative option for visualizing your load test results is the Loadosophia.org cloud, where you can upload your test results and see graphs and distributions, export the load test report as a PDF, etc.
