Requests per second in JMeter do not exceed 4 - jmeter

I am trying to find out how to increase the number of requests sent per second.
I used 80 threads and set both a Constant Throughput Timer and a Variable Throughput Timer to 10 transactions per second, but when I check the JMeter logs I see a maximum of 4 transactions per second.
I also changed the heap variable to this, and I have a virtual machine with 32 GB,
but when I monitor the RAM consumed by the JMeter process, I find it uses only 1.4%.

JMeter waits for the response before sending the next request, therefore the throughput (the number of transactions per unit of time) mainly depends on the response time of the system under test.
One possible situation I can think of is that you have too few loops (iterations) defined at the Thread Group level.
If you don't have a sufficient number of iterations to cover the ramp-up period and the "plateau", you may run into a situation where some threads have already finished their work and been shut down while others have not started yet. So double-check the anticipated concurrency using, for example, the Active Threads Over Time listener, as it is possible that you have far fewer than 80 online virtual users. See the JMeter Test Results: Why the Actual Users Number is Lower than Expected article for a more comprehensive explanation if needed.
If you follow JMeter Best Practices and ensure that JMeter has enough headroom to operate in terms of CPU, RAM, network, etc., it might be the case that the application under test cannot process more than 4 requests per second, so whatever you do on the JMeter side won't have any impact. In that case I'd rather check your application logs, CPU and RAM usage, profiler tool output, etc.
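As a rough sanity check, here is a minimal sketch of that relationship (the response time and timer delay below are illustrative assumptions, not measurements from this test): in a closed workload, achievable throughput is roughly the number of active threads divided by the time each iteration takes.

    # Rough closed-workload model: each thread sends a request, waits for the
    # response, then pauses for any timer delay before the next request.
    # All numbers are illustrative assumptions.
    active_threads = 80          # check Active Threads Over Time to confirm this
    avg_response_time_s = 20.0   # assumed response time of the system under test
    timer_delay_s = 0.0          # extra pause added by timers, if any

    max_throughput = active_threads / (avg_response_time_s + timer_delay_s)
    print(f"Theoretical maximum: {max_throughput:.1f} requests/second")
    # With these assumptions the test tops out at ~4 requests/second no matter
    # what the Constant Throughput Timer asks for: timers can only slow threads
    # down, they cannot make the application respond faster.

If your measured response time and actual active thread count give a ceiling near 4 requests/second, the limit is on the application side, not in the timer configuration.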

Related

How to send 36000 requests within 5 minutes at an RPS of 120 requests using Jmeter?

I am using JMeter for load testing and I'm new to this. I have an API to which I want to send around 36000 requests in a given time frame of 5 minutes. What should the configuration of threads, ramp-up time, loop count, and Constant Throughput Timer be for this scenario?
I am using the following configuration, but I am unable to reach the desired RPS:
Threads: 1000
Ramp-up: 5 minutes
Loop count: 36
Constant Throughput Timer: 7200
Where is my configuration wrong?
You can try to reduce the ramp-up period to be close to zero and increase the number of loops to "infinite"; the total number of requests can be limited using the Throughput Controller. A quick back-of-the-envelope check of this configuration is sketched after the list below.
In general there could be 2 main reasons for not being able to generate the required load:
JMeter cannot produce the desired number of hits per second. Things to try:
Make sure to follow JMeter Best Practices
Increase the number of threads in the Thread Group
Consider switching to Distributed Testing mode
The application cannot handle that many requests per second. Things to try:
Inspect its configuration and make sure it's suitable for high loads
Inspect CPU, RAM, Disk, etc. usage during the load test; it might simply be a lack of resources, which can be checked using the JMeter PerfMon Plugin
Re-run your test with profiler tool telemetry enabled
Raise a ticket, as it is a performance bottleneck
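A quick back-of-the-envelope check of the question's numbers, as a sketch (the response time is an assumption; measure your own):

    # Target rate and Constant Throughput Timer value for 36000 requests in 5 minutes.
    total_requests = 36000
    test_duration_s = 5 * 60

    target_rps = total_requests / test_duration_s   # 120 requests/second
    ctt_value = target_rps * 60                     # 7200 samples/minute, matching the question

    # The timer only throttles threads down to the target; the threads must be
    # able to reach it first. Each thread contributes at most 1/response_time
    # requests per second.
    assumed_response_time_s = 2.0                   # assumption, substitute your measured value
    min_threads = target_rps * assumed_response_time_s
    print(f"Target: {target_rps:.0f} req/s, CTT: {ctt_value:.0f}/min, threads needed: >= {min_threads:.0f}")

Note that a 5-minute ramp-up means full concurrency is only reached at the very end of a 5-minute test, which is why reducing the ramp-up (or extending the test) matters here.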

How to configure Jmeter to achieve desired throughput

I have created a performance test script as below. I am running 4 Thread Groups in parallel (even though there are 14 Thread Groups, only 4 are enabled and I am running only those enabled 4 Thread Groups). I have used default Thread Groups.
I have used a Flow Control Action to simulate the user think time and set it to 3 seconds.
My requirement is to achieve a throughput of 6.6/second. What is the best way to achieve it? Also, does user think time have any impact on throughput?
As per JMeter Glossary:
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
So basically throughput is the number of requests which JMeter was able to execute within the test duration. If you introduce an artificial delay of 3 seconds, it will make the overall throughput lower.
The other (and the main) factor is your application response time, because JMeter waits for the previous request to finish before executing a new one, so there could be 2 options:
The number of threads in your Thread Groups is not sufficient to create the desired throughput. If this is the case, just add more threads (virtual users) in the Thread Group(s).
The number of threads in your Thread Groups is too high and you're getting a higher throughput than you expect. If this is the case, you can slow JMeter down further by adding a Constant Throughput Timer and specifying the desired number of requests per minute. If you need 6.6 requests per second, that means 396 requests per minute.
Also remember that JMeter itself must be able to send requests fast enough, so make sure to follow JMeter Best Practices.
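Under a simple closed-workload model, the thread count needed for 6.6/s with a 3-second think time can be estimated as in the sketch below (the response time is an assumption; the think time and target come from the question):

    # Each user cycle = response time + think time, so one thread contributes
    # 1 / (response_time + think_time) requests per second.
    target_rps = 6.6
    think_time_s = 3.0            # Flow Control Action pause from the question
    assumed_response_time_s = 0.5 # assumption, substitute your measured value

    per_thread_rps = 1 / (assumed_response_time_s + think_time_s)
    threads_needed = target_rps / per_thread_rps   # ~23 threads with these numbers
    ctt_value = target_rps * 60                    # 396 requests/minute for the timer

    print(f"~{threads_needed:.0f} threads needed, Constant Throughput Timer: {ctt_value:.0f}/minute")

Slower responses or a longer think time increase the thread count proportionally, which is exactly why think time does affect throughput unless you add threads to compensate.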
My suggestion is to consider using the Arrivals Thread Group. This Thread Group allows you to configure the desired average throughput; it will instantiate the threads required to achieve that throughput goal.
As for the think time, it should be a business requirement that emulates the pauses users take when using the application in the real world.

JMeter sending fewer requests than expected

I'm using JMeter to generate a performance test. To keep things short and straight: I read the initial data from a JSON file, and I have a single Thread Group in which, after reading the data, I randomize certain values to prevent data duplication where I need to; then I pass the final data to the endpoint using variables. This ends up in a JSON body that is received by the endpoint and basically generates a new transaction in the database. I also added a Constant Timer to add a 7-second delay between requests, with a test duration of 10 minutes and no ramp-up. I calculated the requests per second like this:
1 minute has 60 seconds and I have a delay of 7 seconds per request, so it's logical to say that every minute I'm sending approximately 8.5 requests; this is my calculation: (60/7) = 8.5. Now, if the test lasts for 10 minutes, I multiply (8.5*10) = 85, giving me a total of 85 transactions in 10 minutes, so I should be able to see that exact same number of transactions created in the database after the test completes.
This is true when I'm running 10-20-40 users: after the load test run I query the DB and I get the exact same number of transactions. However, as I increase the users in the Thread Group this no longer happens; for example, if I set 1000 users I should be able to generate 8500 transactions in 10 minutes, but this is not the case, as the DB only shows around 5.1k transactions.
What is happening? What is wrong? Why does it initially work as expected but stop doing so as I increase the users? I can provide more information if needed. Please help.
There could be 2 possible reasons for this:
You discovered your application's bottleneck. When you add more users the application response time increases, therefore throughput decreases. There is a term called the saturation point which stands for the maximum performance of the system; if you go beyond this point the system will respond more slowly and you will get less TPS than initially. On the application under test side you should take a look into the following areas:
It might be the case your application simply lacks resources (CPU, RAM, network, etc.); make sure that it has enough headroom to operate, for example using the JMeter PerfMon Plugin
Your application middleware (application server, database, load balancer, etc.) is not properly set up for high loads. Identify your application infrastructure stack and make sure to follow the performance tuning guidelines for each component
It is also possible that your application code needs optimization; you can detect the most time/resource-consuming functions, largest objects, slowest DB queries, idle times, etc. using profiling tools
JMeter is not sending requests fast enough
Just like for the application under test, check that the JMeter machine(s) have enough resources (CPU, RAM, etc.)
Make sure to follow JMeter Best Practices
Consider going for Distributed Testing
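A simplified per-user model shows how a growing response time eats into the 85-transactions-per-user figure the 60/7 calculation predicts (the response times in this sketch are assumptions, not measurements from this test):

    # Closed workload: each iteration is [request -> response -> 7 s Constant Timer],
    # so one user completes at most duration / (response_time + 7) transactions.
    test_duration_s = 10 * 60
    constant_timer_s = 7.0

    for assumed_response_time_s in (0.1, 1.0, 5.0, 15.0):
        per_user = test_duration_s / (constant_timer_s + assumed_response_time_s)
        print(f"response time {assumed_response_time_s:>4}s -> ~{per_user:.0f} transactions per user")

The aggregate is the per-user count multiplied by the users only while the application keeps up; past the saturation point response times climb, the per-user count shrinks, and the database total falls short of the prediction.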
Can you please check the CPU and memory utilization (RAM and Java heap) of the JMeter load generator while running JMeter with 1000 users? If it is high or close to the maximum, it may affect requests/sec. Also, just to confirm the requests/sec from the JMeter side, can you please add a listener to the JMeter script to track hits/sec or TPS?
This (8.5K requests in a 10-minute test duration) will also be true only if your API response time is 1 second and you have provided enough ramp-up time for those 1000 users.
So the possible reasons are:
You did not provide enough ramp-up time for the 1000 users.
Your API's average response time is more than 1 second when you run the test with 1000 users.
Possible workarounds:
First, measure the API response time for 1 user.
Then calculate how many users you need to reach 8500 requests in 10 minutes, using this formula:
TPS * max response time in seconds
Give a proper ramp-up time for the 1000 users. Check this thread to understand how you should calculate the ramp-up time.
Check that your load generator is able to generate 1000 users without any memory or health (i.e. CPU usage) issues. If required, use a distributed architecture.
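Plugging the question's numbers into that formula gives a rough sketch like this (the response time is an assumption; measure your own single-user value first):

    # Required users = target TPS x maximum response time (in seconds).
    target_requests = 8500
    test_duration_s = 10 * 60
    target_tps = target_requests / test_duration_s    # ~14.2 requests/second

    assumed_max_response_time_s = 1.0                 # assumption, use your measured value
    users_needed = target_tps * assumed_max_response_time_s
    print(f"Target TPS: {target_tps:.1f}, users needed: ~{users_needed:.0f}")
    # With the 7 s Constant Timer in the plan, each user cycle actually takes
    # (7 + response time) seconds, so multiply the target TPS by that cycle time
    # instead to size the Thread Group.

Either way, the thread count implied by the formula only helps if every thread is actually running, which is why the ramp-up time matters.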

Is the throughput value related to the response time of requests in JMeter?

I'm getting the following results, where the throughput barely changes even when I increase the number of threads.
Scenario#1:
Number of threads: 10
Ramp-up period: 60
Throughput: 5.8/s
Avg: 4025
Scenario#2:
Number of threads: 20
Ramp-up period: 60
Throughput: 7.8/s
Avg: 5098
Scenario#3:
Number of threads: 40
Ramp-up period: 60
Throughput: 6.8/s
Avg: 4098
My JMeter file consists of a single Thread Group that contains a single GET request.
When I perform the request against an endpoint where the response time is faster (less than 300 ms), I can achieve a throughput greater than 50 requests per second.
Can you see where the bottleneck is?
Is there a relationship between response time and throughput?
It's simple; as the JMeter user manual states:
Throughput = (number of requests) / (total time)
Now, assuming your test contains only a single GET request, the throughput will correlate with the average response time of your requests.
Notice that a ramp-up period of 60 spreads thread creation over 1 minute, so it adds to the total execution time; you can try to reduce it to 10, or make it equal to the number of threads.
But you may have other samplers/controllers/components that affect the total time.
Also, in your case, especially in Scenario 3, maybe some requests failed and then you are not calculating the throughput of successful transactions only.
In an ideal world, if you increase the number of threads by a factor of 2x, the throughput should increase by the same factor.
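To make the "ideal world" point concrete, here is a minimal sketch (the response times are illustrative assumptions, not the figures from the scenarios above):

    # Closed workload: throughput is roughly threads / average response time,
    # so doubling the threads only doubles the throughput if the response time
    # stays flat. Numbers below are illustrative assumptions.
    def throughput(threads, avg_response_time_s):
        return threads / avg_response_time_s

    print(throughput(20, 2.5))   # 8.0/s
    print(throughput(40, 2.5))   # 16.0/s, ideal 2x scaling, response time unchanged
    print(throughput(40, 5.0))   # 8.0/s, response time doubled, no gain at all

When adding threads mostly pushes the response time up rather than the throughput, that is the typical signature of a saturated application (or load generator).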
In reality the "ideal" scenario is hardly ever achievable, so it looks like a bottleneck in your application. The process of identifying the bottleneck normally looks as follows:
Amend your test configuration to increase the load gradually, i.e. start with 1 virtual user and increase the load to, say, 100 virtual users over 5 minutes
Run your test and look into the Active Threads Over Time, Response Times Over Time and Server Hits Per Second listeners. This way you will be able to correlate the increasing load with the increasing response time and identify the point where performance starts degrading. See What is the Relationship Between Users and Hits Per Second? for more information
Once you figure out where the saturation point is, you need to find out what prevents your application from serving more requests; the reasons could be:
The application simply lacks resources (CPU, RAM, network, disk, etc.); make sure to monitor the aforementioned resources, which can be done using, for example, the JMeter PerfMon Plugin
The infrastructure configuration is not suitable for high loads (i.e. application or database thread pool settings are incorrect)
The problem is in your application code (inefficient algorithms, large objects, slow DB queries). These items can be found using a profiler tool
Also make sure you're following JMeter Best Practices, as it might be the case that JMeter is not capable of sending requests fast enough due to either a lack of resources on the JMeter load generator side or incorrect JMeter configuration (too low heap, running the test in GUI mode, using listeners, etc.)

Need 2350 users per second in JMeter

I have a task to load test an application which needs to handle 2350 users per second. For that I have set up something like this in JMeter:
I have added a Thread group. In that I have set:
Number of threads(users): 2350
Ramp-up period: 1 Second
Loop Count: 1
Will it solve my purpose of load testing the application with 2350 users?
It will, but only if the response time for each virtual user is exactly 1 second.
There are 2 common load patterns; for implementing both you will need Timers.
Actually, it might be the case that you don't need as many as 2350 threads to simulate 2350 users, as real-life users don't hammer the server non-stop: they need some time to think between requests. Besides, page loading time also needs to be considered.
Let's imagine you have 2350 users. Each user performs an action every 15 seconds, and the page loading time is 5 seconds. So each user will be able to hit the server 3 times per minute, and 2350 users will produce 7050 requests per minute, which stands for only 117.5 requests per second. If this is what you're looking for, consider adding a Constant Timer or Uniform Random Timer.
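That arithmetic as a quick sketch (the 15-second think time and the 5-second page load time are the illustrative assumptions from the paragraph above):

    # Each user completes one [page load + think time] cycle at a time.
    users = 2350
    think_time_s = 15.0
    page_load_time_s = 5.0

    hits_per_user_per_minute = 60 / (think_time_s + page_load_time_s)  # 3 hits/minute
    requests_per_minute = users * hits_per_user_per_minute             # 7050
    requests_per_second = requests_per_minute / 60                     # 117.5

    print(f"{requests_per_second:.1f} requests/second from {users} realistic users")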
If you need to simulate 2350 requests per second, not users, you need to handle it a little bit differently. There are 2 timers which are designed to set an exact "throughput" (a number of requests per time unit). They are:
Constant Throughput Timer
Throughput Shaping Timer - an advanced version of the Constant Throughput Timer, available via JMeter Plugins project.
Remember that the above timers can only pause the threads; they won't kick off new virtual users if you don't provide enough at the Thread Group level, so make sure you have at least as many as you are trying to simulate, or better, keep 2x more in your virtual pocket just in case (see the sizing sketch below). Also check out the JMeter tuning tips from 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure, as JMeter's default configuration isn't something you can use to create such a load.
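A rough sizing sketch for the 2350 requests/second case (the response time is an assumption; the 2x safety margin follows the advice above):

    # Minimum threads for a target rate: each thread contributes at most
    # 1/response_time requests per second, so threads >= rate x response_time.
    target_rps = 2350
    assumed_response_time_s = 0.5   # assumption, substitute your measured value

    min_threads = target_rps * assumed_response_time_s   # 1175
    recommended_threads = min_threads * 2                 # 2x headroom, per the advice above
    ctt_value = target_rps * 60                           # Constant Throughput Timer works in samples/minute

    print(f"At least {min_threads:.0f} threads, better {recommended_threads:.0f}; CTT value: {ctt_value:.0f}")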
The setup you described just creates 2350 parallel users (2350 separate connections), but it doesn't guarantee that every thread will complete in exactly 1 second. The ramp-up period defines how quickly all threads will start sending requests. So, in the case you described, JMeter will do the following:
Create 2350 separate threads.
The difference between the 1st and the last thread start will be 1 second (approximately).
Every thread will be executed only once (loop count).
For real scenarios, when you need a specific throughput for a continuous period of time, it's better to use a Constant Throughput Timer. It controls the number of requests sent by the thread(s) and changes the delay between requests when necessary to meet the value you defined. So your real throughput doesn't really depend on the total number of threads (users); sometimes fewer users can send requests faster (it depends on your application).
To monitor your throughput while the test is running, just add a Summary Report to your test plan.
Moreover, for this specific scenario (2350 users) it can be difficult to generate that many requests in 1 second. In this case you need to use distributed load with some JMeter slaves and 1 master.
