Why doesn't JMeter reach the expected rate?

I've configured my JMeter test as can be seen in the screenshot below.
However, when I examine the logs I can see that we only got to a rate of 37 requests per second:
2021-10-18 03:20:30,005 INFO o.a.j.r.Summariser: summary = 3510096 in 26:26:03 = 36.9/s Avg: 67 Min: 16 Max: 69589 Err: 61 (0.00%)
Am I missing something? How can I increase the rate?

What rate is "expected"?
1 user will generate 1 hit per second only if the application response time is 1000 milliseconds.
If the response time is 2000 milliseconds you will have 0.5 requests per second.
If the response time is 500 milliseconds you will have 2 requests per second.
There could be 2 explanations for throughput lower than expected:
JMeter cannot send requests fast enough. The reasons could be:
The number of virtual users is too low: just increase the number of threads in the Thread Group.
JMeter itself is overloaded: make sure to follow JMeter Best Practices, and if it's still the case, consider going for Distributed Testing.
Your application cannot respond fast enough. In your case I can see response times as high as 69589 milliseconds, so most probably that is the reason. You need to ensure that the application has enough headroom to operate in terms of CPU, RAM, etc. (using an APM tool), check its logs, check its configuration, perform code profiling, and so on.
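The relationship above is plain arithmetic and can be sketched in a few lines (illustrative only; the function name is mine, not a JMeter API):

```python
# Closed-loop load model: a thread fires its next request only after the
# previous response arrives, so throughput = threads / response time.

def max_throughput(threads: int, response_time_ms: float) -> float:
    """Best-case requests/second a thread group can generate
    when each thread loops with no think time."""
    return threads / (response_time_ms / 1000.0)

print(max_throughput(1, 1000))  # 1 user, 1000 ms response -> 1.0 req/s
print(max_throughput(1, 2000))  # 1 user, 2000 ms response -> 0.5 req/s
print(max_throughput(1, 500))   # 1 user,  500 ms response -> 2.0 req/s
```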

Related

Unable to reach the requested number of requests while running JMeter distributed

I am working with 1 master and 50 slaves. Number of threads: 20.
I have 1 Thread Group with 3 samplers in it. When I start the test, I see only 500-700 requests per second going through.
It should reach 1000 requests per second, but it doesn't.
You have 20*50=1000 threads in total.
You can reach 1000 requests per second only if the response time is 1 second sharp.
If the response time is 0.5 seconds, you will get 2000 requests per second.
If the response time is 2 seconds, you will get 500 requests per second.
If you need to simulate X requests per second, you need to add more threads (or, alternatively, go for the Concurrency Thread Group and Throughput Shaping Timer combination).
But remember that the application under test must be able to respond fast enough, because if it doesn't, no JMeter tweaks will make any difference.
By the way, 1000 requests per second can be simulated from a modern mid-range laptop without any issues (unless your requests are "very heavy"); just make sure to follow JMeter Best Practices.
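The inverse calculation, how many threads a target rate needs at a given response time, can be sketched like this (a rough illustration; `threads_needed` is a name I made up):

```python
import math

def threads_needed(target_rps: float, response_time_ms: float) -> int:
    """Minimum threads required to sustain target_rps when each
    request takes response_time_ms and there is no think time."""
    return math.ceil(target_rps * response_time_ms / 1000.0)

# 50 slaves * 20 threads = 1000 threads hit 1000 req/s only at a 1 s response time:
print(threads_needed(1000, 1000))  # -> 1000
# At a 2 s response time, the same target needs twice the threads:
print(threads_needed(1000, 2000))  # -> 2000
# At 0.5 s, half the threads would already suffice:
print(threads_needed(1000, 500))   # -> 500
```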

How to send 36000 requests within 5 minutes at an RPS of 120 using JMeter?

I am using JMeter for load testing and I'm new to this. I have an API where I want to send around 36000 requests in a given time, which is 5 minutes. What should the configuration of threads, ramp-up time, loop count, and Constant Throughput Timer be for this scenario?
I am using the following configuration, but I am unable to reach the desired RPS:
Threads: 1000
Ramp-up: 5 minutes
Loop count: 36
Constant Throughput Timer: 7200
Where is my configuration wrong?
You can try to reduce the ramp-up period to be close to zero and increase the number of loops to "infinite"; the total number of requests can be limited using the Throughput Controller.
In general there could be 2 main reasons for not being able to conduct the required load:
JMeter cannot produce the desired number of hits per second. Things to try:
Make sure to follow JMeter Best Practices
Increase the number of threads in the Thread Group
Consider switching to Distributed Testing mode
The application cannot handle that many requests per second. Things to try:
Inspect the configuration and make sure it's suitable for high loads
Inspect CPU, RAM, disk, etc. usage during the load test; it might simply be a lack of resources. This can be done using the JMeter PerfMon Plugin
Re-run your test with profiler tool telemetry enabled
Raise a ticket, as it is a performance bottleneck
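The arithmetic behind the question's settings can be verified with a quick sketch (variable names are mine; the numbers come from the question):

```python
# Target: 36,000 requests within 5 minutes.
total_requests = 36_000
duration_s = 5 * 60  # 300 seconds

target_rps = total_requests / duration_s  # desired requests per second
ctt_value = target_rps * 60               # Constant Throughput Timer is per minute

print(target_rps)  # -> 120.0 requests/second
print(ctt_value)   # -> 7200.0, matching the "Constant Throughput Timer: 7200" setting
```

So the timer value itself is consistent with the goal; the problem lies in the ramp-up/loop configuration and the application's response time, as the answer explains.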

Unable to achieve expected throughput when running JMeter scripts: expected throughput is much higher than what is actually reached

I'm running a JMeter script targeting 1000 requests per second (business SLA), so I used a Constant Throughput Timer or Throughput Shaping Timer as suggested in the queries below.
Constant Throughput Timer:
Target: 60,000/min (60 secs), all active threads; Threads (users): 200 with a ramp-up of 1 sec; duration: 1 hour.
Also tried with 2,000 and even 10,000 users.
Result: the execution ended with throughput showing 50/sec and an average response time of 50 seconds.
Jmeter: Scenario to test 5 users with ramp up 1 hour to trigger 10 thousand requests
unable to increase average throughput in jmeter
Throughput Shaping Timer: same setup as above.
In both cases I get a throughput of around 50/sec with an average response time of around 30 seconds.
Looking at server metrics, CPU and memory consumption are very low, only around 3%.
Hence I expected the throughput to be high, since the server resources are barely used. With the throughput staying low, I kept sending requests with "Forever" to see if I could provoke 500 response code errors; it just increases the average response time and reduces the throughput, but no 500 errors appear.
After some time I get socket exceptions and connection reset issues on the JMeter side, while no failures are seen at the server side.
I've gone through similar queries here but cannot understand why, when the server resources are barely used and the server platform SLA supports 1000 RPS, I am unable to achieve it through JMeter.
As per the CTT calculation: threads = RPS * <max response time in ms> / 1000
1000 * 50000 / 1000 = 50000 (should that give up to 50K threads? However, our SLA is only for 200 users.)
It might be the case your server isn't capable of responding fast enough. Low CPU and memory consumption means that the server has enough headroom; however, the application server configuration can be incorrect, so the server doesn't fully utilise its hardware resources. Another reason could be inefficient algorithms in your application code; you can use a profiler tool to see what's going on while you're conducting the load.
It might be the case JMeter is not capable of sending requests fast enough. Make sure that the machine where JMeter is running is not overloaded, that you run your JMeter test in non-GUI mode, and in general follow JMeter Best Practices. You can also try running JMeter in distributed mode in case one machine is not capable of creating the required load.
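The thread-count rule of thumb quoted in the question can be sketched as follows (the function name is mine; the numbers are taken from the question):

```python
def threads_for_ctt(target_rps: float, worst_case_response_ms: float) -> float:
    """Rule-of-thumb thread count for a Constant Throughput Timer setup:
    target RPS * worst expected response time (ms) / 1000."""
    return target_rps * worst_case_response_ms / 1000.0

# With ~50 s responses, 1000 req/s would indeed need ~50,000 threads --
# far beyond the 200-user SLA, so the target is unreachable at that latency:
print(threads_for_ctt(1000, 50_000))  # -> 50000.0
# If the server answered in 200 ms, the 200 SLA users would be enough:
print(threads_for_ctt(1000, 200))     # -> 200.0
```

In other words, the calculation is right: either the response time has to come down dramatically, or the 1000 RPS target cannot be met with 200 users.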

Is the throughput value related to the response time of requests in JMeter?

I'm getting the following results, where the throughput does not change even when I increase the number of threads.
Scenario#1:
Number of threads: 10
Ramp-up period: 60
Throughput: 5.8/s
Avg: 4025
Scenario#2:
Number of threads: 20
Ramp-up period: 60
Throughput: 7.8/s
Avg: 5098
Scenario#3:
Number of threads: 40
Ramp-up period: 60
Throughput: 6.8/s
Avg: 4098
My JMeter file consists of a single Thread Group that contains a single GET request.
When I send the request to an endpoint where the response time is faster (less than 300 ms), I can achieve a throughput greater than 50 requests per second.
Can you see the bottleneck here?
Is there a relationship between response time and throughput?
It's as simple as the JMeter user manual states:
Throughput = (number of requests) / (total time)
Now, assuming your test contains only a single GET, the throughput will correlate with the average response time of your requests.
Notice that a ramp-up period of 60 will create the threads over 1 minute, so it adds to the total execution time; you can try to reduce it to 10, or make it equal to the number of threads.
But you may have other samplers/controllers/components that affect the total time.
Also, in your case, especially in Scenario 3, maybe some requests failed, in which case you are not calculating the throughput of successful transactions only.
In an ideal world, if you increase the number of threads by a factor of 2x, throughput should increase by the same factor.
In reality the "ideal" scenario is hardly achievable so it looks like a bottleneck in your application. The process of identifying the bottleneck normally looks as follows:
Amend your test configuration to increase the load gradually so i.e. start with 1 virtual user and increase the load to i.e. 100 virtual users in 5 minutes
Run your test and look into Active Threads Over Time, Response Times Over Time and Server Hits Per Second listeners. This way you will be able to correlate increasing load with increasing response time and identify the point where performance starts degrading. See What is the Relationship Between Users and Hits Per Second? for more information
Once you figure out what the saturation point is, you need to know what prevents your application from serving more requests. The reasons could be:
The application simply lacks resources (CPU, RAM, network, disk, etc.); make sure to monitor the aforementioned resources, which can be done using e.g. the JMeter PerfMon Plugin
The infrastructure configuration is not suitable for high loads (e.g. incorrect application or database thread pool settings)
The problem is in your application code (inefficient algorithms, large objects, slow DB queries). These items can be identified using a profiler tool
Also make sure you're following JMeter Best Practices, as it might be the case that JMeter is not capable of sending requests fast enough due to either a lack of resources on the JMeter load generator side or incorrect JMeter configuration (too low heap, running the test in GUI mode, using listeners, etc.)
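The manual's formula is plain division; as an illustration, plugging in the summary-line numbers from the first question on this page reproduces the reported rate:

```python
def throughput(num_requests: int, total_time_s: float) -> float:
    """JMeter's definition: Throughput = (number of requests) / (total time)."""
    return num_requests / total_time_s

# 3,510,096 requests over 26:26:03 (from the Summariser line above) works out
# to roughly the 36.9/s that JMeter reported:
total_s = 26 * 3600 + 26 * 60 + 3  # 95,163 seconds
print(round(throughput(3_510_096, total_s), 1))  # -> 36.9
```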

Can someone please explain what these ApacheBench results mean?

I'm trying to figure out how to use ApacheBench and benchmark my website. I installed the default site project (it's ASP.NET MVC, but please don't stop reading if you're not a .NET person).
I didn't change anything: added a new project, set the configuration to RELEASE, and ran without debugging (so it's in LIVE mode). Yes, this is with the built-in web server, not production-grade IIS or Apache or whatever.
So here are the results:
C:\Temp>ab -n 1000 -c 1 http://localhost:50035/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests
Server Software: ASP.NET
Server Hostname: localhost
Server Port: 50035
Document Path: /
Document Length: 1204 bytes
Concurrency Level: 1
Time taken for tests: 2.371 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 1504000 bytes
HTML transferred: 1204000 bytes
Requests per second: 421.73 [#/sec] (mean)
Time per request: 2.371 [ms] (mean)
Time per request: 2.371 [ms] (mean, across all concurrent requests)
Transfer rate: 619.41 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 1.1 0 16
Processing: 0 2 5.5 0 16
Waiting: 0 2 5.1 0 16
Total: 0 2 5.6 0 16
Percentage of the requests served within a certain time (ms)
50% 0
66% 0
75% 0
80% 0
90% 16
95% 16
98% 16
99% 16
100% 16 (longest request)
C:\Temp>
Now, I'm not sure exactly what I should be looking at.
Firstly, I'm after the number of requests per second. So if we have a requirement to handle 300 reqs/sec, is this saying it handles an average of 421 reqs/sec?
Secondly, what is the reason for adding more concurrency? As in, if I have 1000 hits at concurrency 1, how does that differ from 500 hits at concurrency 2? Is it to test whether there's any code that blocks other requests?
Lastly, is there anything important I've missed from the results which I should take note of?
Thanks :)
what is the reason for adding more concurrent? As in, if i have 1000 hits on 1 concurrent, how does that differ to 500 on 2 concurrent? Is it to test if there's any code that blocks other requests?
It's a bit about that, yes: your application is probably doing things where concurrency can bring trouble.
A couple of examples:
A page is trying to access a file, locking it in the process; that means if another page has to access the same file, it'll have to wait until the first page has finished working with it.
Much the same goes for database access: if one page is writing to a database, there is some kind of locking mechanism (be it table-based, row-based, or whatever, depending on your DBMS).
Testing with a concurrency of one is OK... as long as your website will never have more than one user at a time, which is quite unrealistic, I hope for you.
You have to think about how many users will be on the site at the same time when it's in production, and adjust the concurrency accordingly; just remember that 5 users at the same time on your site doesn't mean you have to test with a concurrency of 5 with ab:
Real users will wait a couple of seconds between each request (time to read the page, click on a link, ...)
ab doesn't wait at all: each time a page is loaded (i.e., a request is finished), it launches another request!
Also, two other things:
ab only tests one page; real users will navigate the whole website, which could cause concurrency problems you would not see while testing only one page
ab only loads one page: it doesn't request external resources (think CSS, images, JS, ...), which means you'll have lots of other requests, even if not really costly, when your site is in production
As a side note: you might want to take a look at other tools which can do far more complete tests, like Siege, JMeter, or OpenSTA. ab is really nice when you want to measure whether something you did optimizes your page or not, but if you want to simulate "real" usage of your site, those are far better suited.
Yes, if you want to know how many requests per second your site is able to serve, look at the "Requests per second" line.
In your case it's really quite simple, since you ran ab with a concurrency of 1. Each request, on average, took only 2.371 ms; 421 of those, one after the other, take 1 second.
You really should play with the concurrency a little bit to accurately gauge the capacity of your site.
Up to a certain degree of concurrency you'd expect the throughput to increase, as multiple requests get handled in parallel by IIS.
E.g. if your server has multiple CPUs/cores. Also, if a page relies on external I/O (middle-tier services or DB calls), the CPU can work on one request while another is waiting for I/O to complete.
At a certain point requests/sec will level off with increasing concurrency, and you'll see latency increase. Increase concurrency even more and you'll see your throughput (req/sec) decrease, as the server has to devote more resources to juggling all these concurrent requests.
All that said, the majority of your requests return in about 2 ms. That's pretty darn fast, so I am guessing there is not much going on in terms of DB or middle-tier calls, and your system is probably maxed out on CPU while the test is running. (Or something is wrong and failing really fast. Are you sure ab gets the response page you intend it to? I.e., is the page you think you are testing 1204 bytes large?)
Which brings up another point: ab itself consumes CPU too, especially once you increase the concurrency. So you want to run ab on another machine.
Also, should your site make external calls to middle-tier services or DBs, you want to adjust your machine.config to optimize the number of threads IIS allocates: http://support.microsoft.com/default.aspx?scid=kb;en-us;821268
And just a little trivia: the time-taken statistics are done in increments of ~16 ms, as that appears to be the granularity of the timer used. I.e., 80% of your responses did not take 0 ms; they took some time <16 ms.
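The headline numbers in the ab report are simple arithmetic, and you can sanity-check them yourself (values copied from the report above):

```python
# Numbers from ab's summary: "Complete requests" and "Time taken for tests".
complete_requests = 1000
time_taken_s = 2.371

# "Requests per second" is completed requests over elapsed time:
rps = complete_requests / time_taken_s
# "Time per request" (at concurrency 1) is just the inverse, in milliseconds:
time_per_request_ms = time_taken_s / complete_requests * 1000

print(round(rps, 2))                  # -> 421.76 (report shows 421.73, since
                                      #    ab divides by the unrounded elapsed time)
print(round(time_per_request_ms, 3))  # -> 2.371
```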
