Backend Listener and reduced throughput - JMeter

Test summary -
JMeter version - 2.13
JMeter machines - 10 AWS EC2 m4.4xlarge instances
Threads per instance - 72, hence 720 threads in total in distributed mode
Test executed in non-GUI mode
I was experimenting with the Backend Listener as described here and came across a drastic reduction in throughput against a static HTML file. These are the results received for a 5-minute test -
Throughput with Backend Listener - 5000/sec
Throughput without Backend Listener - 9800/sec
I have repeated the test over a period of one week and test results have been consistent.
I did not see any significant difference in load average or CPU utilization on the load agents with or without the Backend Listener.
Is this performance degradation a known issue with the Backend Listener?

Hmm, interesting. Could it be that the additional time required to do the backend write means that 1 iteration takes a bit longer to complete, which in turn would mean a drop in throughput per thread? Since you're keeping the number of threads constant, your overall throughput would, as a result, drop a bit.
Here's an experiment I would conduct: Turn off backend listener, but put a 500ms constant timer in your test threads. Does it result in an overall throughput drop?
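For a sense of scale (a back-of-the-envelope calculation using only the numbers from the question): at 9800/sec across 720 threads, one iteration takes roughly 720 / 9800 ≈ 73 ms; at 5000/sec it takes 720 / 5000 = 144 ms. The observed drop is therefore consistent with each iteration picking up about 70 ms of extra time, wherever that time is spent.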

Did you try playing with the Async Queue Size? 5000 might be too small for a test that has no timers.
The difference can be explained by many factors:
Network quality between JMeter and the InfluxDB or Graphite server
Number of Samples that you track
The queue size: if your test runs at high throughput (no timers), a full Async Queue will slow the samplers down compared to not using the Backend Listener.
Bear in mind also that increasing the Async Queue Size will increase the memory footprint (see the rough calculation below).
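As a rough calculation using the throughput figures from the question: at ~5000 samples/sec, a 5000-entry Async Queue buffers only about one second of results, so if writes to InfluxDB or Graphite stall even briefly, the queue fills up and the sampler threads start waiting on it.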


How to achieve consistent throughput using JMeter and Concurrency Thread Group

We are attempting to stress test our system using JMeter, and we hit a wall with inconsistent throughput, so we installed the JMeter plugins Concurrency Thread Group and Throughput Shaping Timer. We're currently running against a single endpoint to make sure we can achieve a consistent throughput. We set:
Throughput Shaping Timer to 30 RPS for 60 seconds, scoped directly under the Concurrency Thread Group
Target Concurrency using the tstFeedback function as tstFeedback(tst,13,50,10), since the max response time was around 1000 ms
We initially achieved the expected RPS; it would reach close to 30 RPS every time we ran the test.
However, after a couple of hours with no change to the configuration or the system, the RPS no longer reached the target; we would consistently get a lower RPS (ranging between 20 and 27).
We tried increasing the Target Concurrency, but it made no difference: the RPS remained below the target. We attempted this using the GUI and CLI on a local machine, and the CLI on a remote instance with JMeter installed. We are running the latest JMeter version (5.5 at this time) both locally and on the remote instance, but it made no difference.
Increasing the heap didn't make a difference either.
When we increased the logging level on the local machine to diagnose the problem, this error appeared frequently:
o.a.j.p.h.s.HTTPHC4Impl: IOException
java.net.SocketException: Socket closed
Test instance specs
The remote instance is set up on an AWS m6i.8xlarge instance
Local machine has 16 GB of memory with an Intel i7 8th-gen processor
All tests were run on Ubuntu machines
Problem/Question
We need to simulate our customer behavior as accurately as possible, so it's critical that we're able to achieve consistent throughput in our requests. How can we do that?
Looking at your Throughput Shaping Timer configuration I can see that it's set up to spin up a maximum of 50 threads.
You can reach 30 requests per second with 50 threads only if your response time is about 1.67 seconds or less (50 / 30 ≈ 1.67 s).
Looking into the response times from your reports I can see that the average response time is above 2 seconds, and this perfectly explains why you're not able to reach the target throughput (see the sketch below).
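To make the arithmetic explicit, here is a minimal sketch of that thread/response-time ceiling (essentially Little's law), using the figures quoted above; Python is used only for illustration:

def max_rps(threads, avg_response_time_s):
    # each thread can complete at most 1/avg_response_time requests per second
    return threads / avg_response_time_s

print(max_rps(50, 2.0))   # 25.0 -> consistent with the observed 20-27 RPS
print(max_rps(50, 1.67))  # ~30  -> with 1.67 s responses, 50 threads can just sustain 30 RPS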
Things to check:
Make sure to follow JMeter Best Practices
Make sure that JMeter has enough headroom to operate in terms of CPU, RAM, Network, Disk, etc. It can be done using JMeter PerfMon Plugin
If you cannot conduct required load from the single JMeter instance - consider going for Distributed Testing
And finally, it might be your application that cannot handle that many requests per second, so it's worth checking its logs, telemetry, and so on.

Requests per second in JMeter do not exceed 4

I am trying to find out how to increase the number of requests sent per second.
I used 80 threads and set a Constant Throughput Timer and a Variable Throughput Timer to 10 transactions per second, but when I check the JMeter logs I see at most 4 transactions per second.
I also changed the heap variable, and I have a virtual machine with 32 GB,
but when I monitor the RAM consumed by the JMeter process, I find it uses only 1.4%.
JMeter waits for the response before sending the next request, therefore the throughput (number of transactions per unit of time) mainly depends on the response time of the system under test.
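As a quick sanity check with the numbers from the question: 80 threads delivering only 4 transactions per second implies roughly 80 / 4 = 20 seconds per iteration if all 80 threads are really running. If your measured response times are much lower than that, fewer threads than expected must be active, which is the situation described next.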
One possible situation I can think of is that you have too few loops (iterations) defined at the Thread Group level.
If you don't have a sufficient number of iterations to cover the ramp-up period and the "plateau", you may run into a situation where some threads have already finished their work and been shut down while others have not started yet. So double-check your actual concurrency using e.g. the Active Threads Over Time listener, as it is possible that you have far fewer than 80 online virtual users. See the JMeter Test Results: Why the Actual Users Number is Lower than Expected article for a more comprehensive explanation if needed.
If you follow JMeter Best Practices and ensure that JMeter has enough headroom to operate in terms of CPU, RAM, network, etc., it might be the case that the application under test cannot process more than 4 requests per second, so whatever you do on the JMeter side won't have any impact. I'd rather check your application logs, CPU and RAM usage, profiler tool output, etc.

Unable to achieve expected throughput running JMeter scripts: expected throughput is high, actual is very low

I am running a JMeter script targeting 1000 requests per second (business SLA), so I used a Constant Throughput Timer or a Throughput Shaping Timer as suggested in the queries below.
Constant Throughput Timer:
Target - 60,000/min (i.e. 1000/sec) - all active threads, Threads (Users) - 200 with Ramp-up - 1 sec, Duration: 1 hour.
I also tried with 2,000 and with 10,000 users.
Result: the execution ended with a throughput of around 50/sec and an average response time of 50 seconds.
Jmeter: Scenario to test 5 users with ramp up 1 hour to trigger 10 thousand requests
unable to increase average throughput in jmeter
Throughput Shaping Timer: same setup as above.
In both cases I get a throughput of around 50/sec with an average response time around 30 seconds.
Looking at the server metrics, CPU and memory consumption is very low, around 3% only.
Hence I expected throughput to be high, since the server resources are barely used. If, while throughput is low, I keep sending requests with Loop Count "Forever" to see whether I can provoke 500 response codes, the average response time just increases and throughput drops further, but no 500 response codes appear.
After some time I get socket exceptions and connection-reset issues on the JMeter side, while no failures are seen on the server side.
I have gone through similar queries here, but I am unable to understand why, when the server resources are barely used and the server platform's SLA supports 1000 RPS, I cannot achieve it through JMeter.
As per the CTT calculation: threads = RPS * <max response time in ms> / 1000
1000 * 50000 / 1000 = 50000 (should that mean up to 50K threads? However, our SLA is only for 200 users).
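Working that sizing rule through (a sketch using the figures from the question; Python only for illustration):

def required_threads(target_rps, response_time_ms):
    # Constant Throughput Timer sizing rule: the timer can only slow threads
    # down, so you need enough of them to reach the target rate
    return target_rps * response_time_ms / 1000

print(required_threads(1000, 50000))  # 50000.0 threads needed at the observed 50 s responses
print(required_threads(1000, 200))    # 200.0   -> 200 threads suffice only if responses take <= 200 ms

In other words, 200 threads can deliver 1000 RPS only if each request completes within 200 ms; with 50-second response times the target is unreachable regardless of the timer settings.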
It might be the case that your server isn't capable of responding fast enough. Low CPU and memory consumption means that the server has enough headroom; however, the application server configuration may be incorrect, so the server doesn't fully utilise its hardware resources. Another reason could be inefficient algorithms in your application code; you can use a profiler tool to see what's going on while you're conducting the load.
It might be the case that JMeter is not capable of sending requests fast enough. Make sure that the machine where JMeter is running is not overloaded, that you run your JMeter test in non-GUI mode, and in general follow JMeter Best Practices. You can also try running JMeter in distributed mode if one machine is not capable of creating the required load.

Is the throughput value related to the response time of requests in JMeter?

I'm getting the following results, where the throughput does not change much even when I increase the number of threads.
Scenario#1:
Number of threads: 10
Ramp-up period: 60
Throughput: 5.8/s
Avg. response time: 4025 ms
Scenario#2:
Number of threads: 20
Ramp-up period: 60
Throughput: 7.8/s
Avg. response time: 5098 ms
Scenario#3:
Number of threads: 40
Ramp-up period: 60
Throughput: 6.8/s
Avg. response time: 4098 ms
My JMeter test plan consists of a single Thread Group that contains a single GET request.
When I hit an endpoint where the response time is faster (less than 300 ms), I can achieve a throughput greater than 50 requests per second.
Can you see where the bottleneck is?
Is there a relationship between response time and throughput?
It's simple, as the JMeter user manual states:
Throughput = (number of requests) / (total time)
Now, assuming your test contains only a single GET, throughput will correlate with the average response time of your requests.
Notice that a ramp-up period of 60 will start creating threads over 1 minute, which adds to the total execution time; you can try reducing it to 10, or to a value equal to the number of threads.
But you may have other samplers/controllers/components that affect the total time.
Also, in your case, especially in Scenario 3, some requests may have failed, in which case you are not calculating the throughput of successful transactions only.
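To illustrate the ramp-up effect mentioned above, here is a small sketch (all numbers are assumed for illustration and are not taken from the question; Python only for illustration):

# 10 threads, 60 s ramp-up (one new thread every 6 s),
# 20 iterations per thread at 4 s average response time
threads, rampup, iterations, resp_time = 10, 60.0, 20, 4.0
starts = [i * rampup / threads for i in range(threads)]
finishes = [s + iterations * resp_time for s in starts]
total_time = max(finishes) - min(starts)  # first sample start to last sample end
print(threads * iterations / total_time)  # ~1.5/s; with zero ramp-up it would be 200 / 80 = 2.5/s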
In an ideal world, if you increase the number of threads by a factor of 2, throughput should increase by the same factor.
In reality the "ideal" scenario is rarely achievable, so it looks like a bottleneck in your application. The process of identifying the bottleneck normally looks as follows:
Amend your test configuration to increase the load gradually, e.g. start with 1 virtual user and increase the load to 100 virtual users over 5 minutes
Run your test and look into Active Threads Over Time, Response Times Over Time and Server Hits Per Second listeners. This way you will be able to correlate increasing load with increasing response time and identify the point where performance starts degrading. See What is the Relationship Between Users and Hits Per Second? for more information
Once you figure out where the saturation point is, you need to determine what prevents your application from serving more requests; the reasons could be:
The application simply lacks resources (CPU, RAM, network, disk, etc.); make sure to monitor these resources, which can be done using e.g. the JMeter PerfMon Plugin
The infrastructure configuration is not suitable for high loads (e.g. incorrect application or database thread pool settings)
The problem is in your application code (inefficient algorithms, large objects, slow DB queries); these can be identified using a profiler tool
Also make sure you're following JMeter Best Practices, as it might be the case that JMeter is not capable of sending requests fast enough due to either a lack of resources on the load generator side or incorrect JMeter configuration (too low heap, running the test in GUI mode, using listeners, etc.)

JMeter: More HTTP Requests Result in Increased Performance?

I'm trying to understand a significant performance increase in my JMeter test.
In a multi-tenancy database environment, I have a single RESTful service test containing a Thread Group with a single HTTP Request sampler posting an XML payload. The XML payload is then evaluated via stored procedures, and a response is received stating whether the claim was qualified. I run this test from a .bat file (non-GUI mode) in an Apache 7 environment with a single JVM running.
Test Thread Group Properties
# of Threads: ${__P(test.threads,200)}
Ramp-Up Period: ${__P(test.rampup,1)}
Loop Count: Forever
Delay Thread: Enabled
Scheduler: Enabled
Duration: ${__P(test.duration,1800)}
HTTP Request
Method: POST
https://serverName:port/database/.../${__P(tenant,1111)}/Claim/${__property(contractId)}
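Side note: the ${__P(...)} defaults shown above can be overridden per run from the JMeter command line with -J (the plan file name below is made up for illustration):
jmeter -n -t claims-test.jmx -Jtest.threads=400 -Jtest.duration=1800 -Jtenant=2222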
When I duplicate the HTTP Request sampler within the Thread Group and change the tenant ID in the URL, the performance seems to increase by more than 55% (i.e., the number of claims/second increases by 55%). The test did not appear to fail, so I cannot attribute the performance increase to an increased error rate.
I would have expected an increase if I had enabled another JVM to let the Load Balancer perform optimization, but this is not the case. (still using only 1 JVM)
HTTP Request 1
https://serverName:port/database/.../${__P(tenant,1111)}/Claim/${__property(contractId)}
HTTP Request 2
https://serverName:port/database/.../${__P(tenant,2222)}/Claim/${__property(contractId)}
The theory going around here is that JMeter generates a workload at a higher rate for multiple requests than for a single request. I'm skeptical, but haven't found anything "solid" to support my skepticism.
Is this theory true? If so, why would two HTTP Requests increase the performance?
In short: it's OK.
Longer version:
Here is how JMeter works:
JMeter starts all the threads during the ramp-up period
Each thread executes samplers top to bottom (or according to the Logic Controllers)
When a thread has no more samplers to execute and no more loops to iterate, it is shut down
So how does the number of virtual users correlate with "performance"? Increasing the number of virtual users for a load test increases the number of requests, which affects throughput:
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
So if you increase the load on a well-behaved system, throughput should increase by the same factor, i.e. linearly.
When you increase the load but throughput does not increase, you have hit the "saturation point", where you get the maximum performance from the system. Increasing the load further will cause throughput to go down.
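A toy model of that behaviour (all numbers here are invented purely for illustration; real systems degrade in their own ways):

# assume each thread generates 0.5 req/s and the server saturates at 25 req/s;
# beyond saturation, queueing pushes response times up and throughput back down
def observed_throughput(threads, per_thread_rate=0.5, server_capacity=25.0):
    offered = threads * per_thread_rate
    if offered <= server_capacity:
        return offered                     # linear region: throughput tracks load
    return server_capacity ** 2 / offered  # degraded region: more load, less throughput

for n in (10, 25, 50, 75, 100):
    print(n, round(observed_throughput(n), 1))  # 5.0, 12.5, 25.0, 16.7, 12.5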
References:
Apache JMeter Glossary
An extended Glossary version
And how do you measure your performance? According to your "theory", your measurements include JMeter overhead, and that would be wrong. Moreover, is the response the same in both cases? What I mean is: is the backend doing the same work in both cases?
Maybe the first request returns different output than the other one. Maybe it is more expensive to generate the output for one of the requests. That is why you would notice "increased" performance: normally you would do N heavy tasks in X seconds, while in the second case you do G heavy tasks + H light tasks in the same time, where G < N/2. More requests in the same time? Sure! Increased performance? Nope.
So to investigate completely what is happening, you need to review your measurement method. I would start by comparing the actual response times of the two requests.
