In JMeter (v5.2.1) I have a JSR223 Sampler with a custom script that generates a payload and sends it to an endpoint. The endpoint is capable of receiving 100,000 transactions per second. However, from JMeter I find that the maximum I can achieve is around 4,000 requests per second.
I tried increasing the thread count to 10 and then to 100, but the result stays the same, i.e. JMeter achieves a maximum of 4k requests per second.
I also tried running two Thread Groups in parallel to increase the transaction rate; this did not push the rate beyond the 4k-per-second mark, no matter what I do.
Is there any way or method I can use to increase this request rate to 100k per second?
The test plan has one Thread Group containing a Simple Controller with one JSR223 Sampler inside it, followed by a Summary Report at the test plan level.
I have followed all the best practices highlighted in various Stack Overflow articles.
Thank you.
If you really followed JMeter Best Practices you would be using JMeter 5.5, as the very first one reads:
16.1 Always use latest version of JMeter
The performance of JMeter is being constantly improved, so users are highly encouraged to use the most up to date version.
Given that you have applied all JMeter performance and tuning tips, done the same for the JVM, and still cannot reach more than 4,000 transactions per second, you can consider the following options:
Switch from the JSR223 Sampler to a Java Request sampler or create your own JMeter plugin; in this case the performance of your code will be higher.
Also consider switching to distributed mode of JMeter test execution: if you can reach 4,000 requests per second from one machine, you will need 25 machines to reach 100,000 requests per second.
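The sizing arithmetic behind that last point can be sketched as follows (a minimal sketch; the 4,000 req/s per machine figure is the rate measured in the question):

```python
import math

# Rough sizing for distributed testing: how many load generators are
# needed if one machine tops out at a measured request rate.
def machines_needed(target_rps, per_machine_rps):
    return math.ceil(target_rps / per_machine_rps)

print(machines_needed(100_000, 4_000))  # 25 machines
```

In practice you would round up and add headroom, since a generator running at 100% of its measured maximum is itself a bottleneck.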
I am using JMeter for load testing and I'm new to this. I have an API to which I want to send around 36,000 requests in a given time window of 5 minutes. What should the configuration of threads, ramp-up time, loop count, and Constant Throughput Timer be for this scenario?
I am using the following configuration, but I am unable to reach the desired RPS:
Threads: 1000
Ramp-up: 5 minutes
Loop count: 36
Constant Throughput Timer: 7200
Where is my configuration wrong?
You can try reducing the ramp-up period to close to zero and setting the number of loops to "Infinite"; the total number of requests can be limited using a Throughput Controller.
In general there can be two main reasons for not being able to generate the required load:
JMeter cannot produce the desired number of hits per second. Things to try:
Make sure to follow JMeter Best Practices
Increase number of threads in Thread Group
Consider switching to Distributed Testing mode
Application cannot handle that many requests per second. Things to try:
Inspect configuration and make sure it's suitable for high loads
Inspect CPU, RAM, Disk, etc. usage during the load test; it might simply be a lack of resources, which can be checked using the JMeter PerfMon Plugin
Re-run your test with profiler tool telemetry enabled
Raise a ticket as it is a performance bottleneck
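As a sanity check on the question's configuration (a sketch using only the numbers given in the question), the target rate and the Constant Throughput Timer value actually agree, so the shortfall must come from one of the two reasons above rather than from the timer setting:

```python
# Sanity-check the question's numbers: required rate vs. the
# Constant Throughput Timer setting (which is expressed per minute).
target_requests = 36_000
duration_s = 5 * 60          # the 5-minute window from the question

required_rps = target_requests / duration_s
ctt_per_minute = 7_200
ctt_rps = ctt_per_minute / 60

print(required_rps)  # 120.0 requests per second required
print(ctt_rps)       # 120.0 -- the timer matches the target on paper
```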
I have created a performance test script as below. I am running 4 Thread Groups in parallel (even though there are 14 Thread Groups, only 4 are enabled, and I am running only those 4). I have used default Thread Groups.
I have used a Flow Control Action to simulate user think time and set it to 3 seconds.
My requirement is to achieve a throughput of 6.6/second. What is the best way to achieve it? Also, does user think time have any impact on throughput?
As per JMeter Glossary:
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
So basically throughput is the number of requests which JMeter was able to execute within the test duration time frame. If you introduce an artificial delay of 3 seconds, it will make the overall throughput lower.
The other (and the main) factor is your application's response time, because JMeter waits for the previous request to finish before executing a new one, so there are 2 options:
The number of threads in your Thread Groups is not sufficient to create the desired throughput. If this is the case, just add more threads (virtual users) in the Thread Group(s).
The number of threads in your Thread Groups is too high and you're getting higher throughput than you expect. If this is the case, you can throttle JMeter by adding a Constant Throughput Timer and specifying the desired number of requests per minute. If you need 6.6 requests per second, that means 396 requests per minute.
Also remember that JMeter itself must be able to send requests fast enough, so make sure to follow JMeter Best Practices.
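To see how the 3-second think time interacts with the 6.6/s target, a Little's-law estimate can be sketched. The 0.5 s response time below is an assumption for illustration; only the think time and target come from the question:

```python
import math

# Little's law sketch: threads needed to sustain a target throughput
# when each iteration takes (response time + think time) seconds.
# The 0.5 s response time is an assumed value for illustration.
target_rps = 6.6
think_time_s = 3.0           # Flow Control Action pause from the question
response_time_s = 0.5        # assumption

iteration_s = response_time_s + think_time_s
threads = math.ceil(target_rps * iteration_s)
print(threads)  # 24
```

With roughly that many threads running, a Constant Throughput Timer set to 396 requests/minute can then throttle the rate down to exactly 6.6/s.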
My suggestion is to consider using the Arrivals Thread Group. This TG will allow you to configure the desired average throughput; the TG will instantiate the threads required to achieve that throughput goal.
As for the think time (TT), it should be a business requirement that emulates the pauses users take when using the application in the real world.
I want to understand the actual JMeter throughput behavior achieved at runtime.
Scenario - I'm increasing the JMeter throughput at runtime using Constant Throughput Timer and beanshell script as described here - https://www.blazemeter.com/blog/how-to-change-jmeters-load-during-runtime.
Test plan - Along with the CTT as described above, a simple Thread Group with a fixed number of threads and infinite loop iterations is configured. An HTTP Sampler for a GET call is used. No other timer or plugin is added to the test plan.
As I keep increasing the target throughput at runtime, I noted that the actual achieved throughput is limited mainly by 2 factors:
The threads in my thread group.
The performance bottleneck of target app.
I have questions regarding both the limitations -
Once the highest throughput is achieved using all threads in the current Thread Group (assuming there are no errors from the target app yet), is there a way to increase the number of threads dynamically at runtime to achieve a higher throughput?
As I keep increasing the JMeter throughput, it can't be increased further due to errors from the target app. How does JMeter identify the performance bottleneck of my target app and react to it? Does it add any delay, kill threads, or apply any such mechanism to reduce its throughput to the maximum the target app can sustain?
In continuation with point #2, if JMeter identifies and reacts to the performance bottleneck by any method, what factors (like error rate, response latency, etc.) control its throughput to keep it within the limit of the target app? Are these factors configurable or extensible?
You can play the same trick with the number of threads in the Thread Group: just define it using the __P() function and you will be able to manipulate it via the Beanshell server. Another option is using a JSR223 test element and the Groovy language to add new thread(s) where/when required, like:
ctx.getThreadGroup().addNewThread(0, ctx.getEngine())
JMeter doesn't "identify" anything, it just tries to execute Samplers as fast as it can and the number of requests per second depends on 2 factors:
your application needs to be able to respond fast enough
JMeter itself needs to be able to send requests fast enough, i.e. you need to follow JMeter Best Practices
there are no "mechanisms" which detect the application under test behaviour, the closest solution is Auto Stop Listener
See point 2
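Before calling something like addNewThread() in a loop, it helps to estimate how many threads the new target actually requires. A Little's-law sketch, where the current thread count and average response time are assumed values for illustration:

```python
import math

# Estimate how many extra threads to add at runtime (e.g. via
# ctx.getThreadGroup().addNewThread(...)) to reach a higher target rate.
# current_threads and avg_response_s are assumed values for illustration.
current_threads = 50
avg_response_s = 0.25        # assumed average response time in seconds
target_rps = 400

needed = math.ceil(target_rps * avg_response_s)  # total threads required
extra = max(0, needed - current_threads)         # how many to add now
print(needed, extra)  # 100 50
```

Note the estimate is only as good as the response-time measurement: as the app slows down under load, avg_response_s grows and the same target needs more threads.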
I have a single API to test and I want to achieve 30K TPS with this API. Can anybody explain fully how to test this with JMeter?
It may be with 20, 40 or 60 users, but I want to achieve this using JMeter UI mode.
Your question is way too broad.
Anyway, the simple and straight answer is that you may not be able to achieve this in the UI mode and you may require multiple slaves. If your API has an SLA of 1 second, a single thread executing with 1 second pacing will result in 3600 transactions in an hour (1 TPS). So, you will require 30,000 threads in JMeter to achieve 30,000 TPS.
The JMeter GUI isn't meant for load testing; it's for test plan debugging only. The GUI is very resource-intensive and consumes a lot of memory. It is better, and highly recommended, to perform load testing in non-GUI mode.
When you run JMeter in GUI mode it shows a warning to this effect.
As 30K TPS is very high, to test this you have to provide enough threads. To calculate how many threads you need for this test, you also need to know the API's maximum response time.
Here is the formula for calculating the number of threads:
Threads = TPS × max response time in seconds
For example, if the response time is 1 second, then to generate 30K TPS you need 30K threads. If the API's response time is 500 ms, then you need at least 15K threads.
To generate this high a load you will also need to go for distributed testing, because you won't be able to generate it from a single machine. To determine how many threads one machine can generate, you need to test that yourself: gradually increase the thread count and monitor the machine's health. If the test consumes 70-80% of the machine's resources (CPU, memory, disk), stop the test and note the thread count. From that you can estimate how many machines you need for this scenario.
Finally, I suggest you double-check your test requirements, because 30K TPS (108,000,000 requests per hour) is very high!
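The formula above and the hourly-volume figure can be checked with a few lines (a sketch using only the numbers from this answer):

```python
import math

# The answer's formula: threads = TPS x max response time (in seconds),
# plus a check of the hourly request volume behind 30K TPS.
def threads_for(tps, max_response_s):
    return math.ceil(tps * max_response_s)

print(threads_for(30_000, 1.0))   # 30000 threads at 1 s response time
print(threads_for(30_000, 0.5))   # 15000 threads at 500 ms
print(30_000 * 3600)              # 108000000 requests per hour
```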
See Apache JMeter's Best Practices for more.
I'm trying to understand a significant performance increase in my JMeter test.
In a multi-tenancy database environment, I have a single RESTful service test containing a Thread Group with a single HTTP Request sampler posting an XML payload. The XML payload is then evaluated via stored procedures, and a response is received stating whether the claim was qualified. I run this test from a .bat file (non-GUI mode) in an Apache 7 environment with a single JVM running.
Test Thread Group Properties
# of Threads: ${__P(test.threads,200)}
Ramp-Up Period: ${__P(test.rampup,1)}
Loop Count: Forever
Delay Thread: Enabled
Scheduler: Enabled
Duration: ${__P(test.duration,1800)}
HTTP Request
Method: POST
https://serverName:port/database/.../${__P(tenant,1111)}/Claim/${__property(contractId)}
When I duplicate the HTTP Request sampler within the TG and change the tenant ID within the URL, for some reason the performance increases by > 55% (i.e., the number of claims/second increases by 55%). The test did not report failures, so I cannot attribute the performance increase to an increased error rate.
I would have expected an increase if I had enabled another JVM to let the load balancer optimize, but this is not the case (still using only 1 JVM).
HTTP Request 1
https://serverName:port/database/.../${__P(tenant,1111)}/Claim/${__property(contractId)}
HTTP Request 2
https://serverName:port/database/.../${__P(tenant,2222)}/Claim/${__property(contractId)}
The theory going around here is that JMeter generates load at a higher rate for multiple requests than for a single request. I'm skeptical, but I haven't found anything solid to support my skepticism.
Is this theory true? If so, why would two HTTP Requests increase the performance?
In short: it's OK.
Longer version:
Here is how JMeter works:
JMeter starts all the threads during the ramp-up period.
Each thread executes the samplers top to bottom (or according to the Logic Controllers).
When a thread has no more samplers to execute and no more loops to iterate, it is shut down.
So how does the number of virtual users correlate with "performance"? When you increase the number of virtual users for a load test, it affects throughput:
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
So if you increase the load on a well-behaved system, throughput should increase by the same factor, i.e. linearly.
When you increase the load but throughput does not increase, you have reached the "saturation point" where you get the maximum performance from the system. Increasing the load further will cause throughput to go down.
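The saturation point can be illustrated with a toy model (the capacity figure is an arbitrary assumption; a real system typically degrades past saturation rather than flattening cleanly):

```python
# Toy model of the saturation point: below capacity a well-behaved
# system scales linearly; at capacity throughput flattens.
# capacity_rps = 500 is an arbitrary assumed value for illustration.
def throughput(offered_rps, capacity_rps=500):
    return min(offered_rps, capacity_rps)

for load in (100, 250, 500, 800):
    print(load, throughput(load))
# Offered loads of 100 and 250 come back unchanged (linear scaling),
# 500 is the saturation point, and 800 is capped at 500.
```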
References:
Apache JMeter Glossary
An extended Glossary version
And how do you measure your performance? According to your "theory", your measurements include JMeter overhead, and that would be wrong. Moreover, is the response the same in both cases? What I mean is, is the backend doing the same work in both cases?
Maybe the first request returns different output than the other one. Maybe it is more expensive to generate the output for one of the requests. That is why you notice "increased" performance: normally you would do N heavy tasks in X seconds, while in the second case you do G heavy tasks + H light tasks in the same time, where G < N/2. More requests in the same time? Sure. Increased performance? Nope.
So to completely investigate what is happening, you need to review your measurement method. I would start by comparing the actual response times of both requests.