JMeter - performance degrading with higher loop count

I need a little help on how to debug this. My current JMeter scenario seems to run fine as long as I keep the loop count at 1; when I add more loops, performance starts to degrade a lot.
I have a thread group with 225 threads, 110 s ramp-up, loop count 1 - my total response time is ca. 8-9 s. I ran this several times to confirm; each run shows similar response times.
Now I ran the same test with the loop count changed to 3, all other parameters unchanged, and performance went south: the total response time is ca. 30-40 s.
I was under the impression that three 1-loop runs would be, more or less, equivalent to one 3-loop run. It seems that is not the case. Could anyone explain to me why that is?
Or, if these should be equivalent, any idea where to look for the culprit behind the degrading performance?

What you're saying is that the response times degrade if you increase the throughput (as in requests per second).
Based on 225 threads making a single request with a ramp-up of 110 seconds, your throughput is going to be in the region of 2 requests per second. Increasing the loop count to 3 is going to raise that by around a factor of 3, to 6 requests a second (assuming no timers). Except, of course, if the response times are increasing, then you will not reach this level of throughput, which is your problem.
Given that this request is already taking 8-9 seconds, which is not especially fast, it could be assumed that there is some heavy thinking going on behind the scenes and that you have simply hit a bottleneck, somewhere...
Try using fewer threads and a longer ramp-up, and then monitor the response times and the throughput rate. At some point, as the load increases, you will see response times start to degrade, and at this point you need to roll up your sleeves and have a look at what is happening in your AUT.
Note: 3 x 1 loop is not the same as 1 x 3 loops. The delay between iterations will cause one thread with multiple iterations to have a different throughput vs. more threads with one iteration, where the throughput is decided by the ramp-up, not the delay. That said, this is not what you describe in your question - you mention that the number of threads is consistent.
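As a rough sketch of the arithmetic above (plain Java, not a JMeter API; it assumes no timers and that each extra loop simply multiplies the offered load, as described):

    // Back-of-the-envelope estimate of the offered request rate for the numbers in the question.
    public class OfferedLoadEstimate {
        public static void main(String[] args) {
            int threads = 225;            // Thread Group: Number of Threads
            double rampUpSeconds = 110;   // Thread Group: Ramp-up Period

            // One new thread starts roughly every rampUp / threads seconds, so the
            // arrival rate of first requests is threads / rampUp.
            double baseRate = threads / rampUpSeconds;   // ~2 requests per second

            for (int loops : new int[] {1, 3}) {
                // With no timers, each extra loop multiplies the offered load accordingly.
                System.out.printf("Loop count %d: ~%.1f requests/second offered%n",
                        loops, baseRate * loops);
            }
        }
    }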

In addition to Oliver's answer: try using a custom listener such as the Active Threads Over Time listener to monitor your load scenario.
You can also rerun both scenarios described above with this listener - you'll certainly see the difference in the graphs.

Related

I cannot increase the throughput to the number I want

I am trying to stress test my server.
To do so I am using JMeter, and here is my setup:
Threads: 1000
Scheduler: 3 minutes
So as you see, I keep going with 1000 threads for a period of 3 minutes.
But when I look at the throughput, I only get around 230 requests per second.
(results screenshot)
So what should I do to increase the throughput to, for example, 1,000,000 per second? How come increasing the thread count, which I assume means more load, does not increase the throughput?
According to the JMeter Glossary:
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
Throughput explicitly depends on the application's response time. Looking at your results, the average response time is 3.5 seconds, therefore you will not get more than 1000 / 3.5 ≈ 285 requests per second.
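A minimal sketch of the two relationships used above (plain Java, illustrative names only):

    public class ThroughputBound {

        // Throughput as JMeter defines it in the Glossary: requests / total time.
        static double throughput(long requests, double totalTimeSeconds) {
            return requests / totalTimeSeconds;
        }

        // Upper bound for a closed workload: each of N threads can finish at most one
        // request every R seconds, so the whole test cannot exceed N / R requests/second.
        static double maxThroughput(int threads, double avgResponseTimeSeconds) {
            return threads / avgResponseTimeSeconds;
        }

        public static void main(String[] args) {
            // Numbers from the question: 1000 threads, ~3.5 s average response time.
            System.out.printf("Max achievable: %.1f requests/second%n", maxThroughput(1000, 3.5)); // ~285
        }
    }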
Theoretically you could use a Throughput Shaping Timer and Concurrency Thread Group combination; this way JMeter will kick off extra threads if the current amount is not enough to reach/maintain the desired throughput. However, looking at the 8.5% error rate and a maximum response time of over 2 minutes, my expectation is that you will not be able to get more throughput, because most probably your application is overloaded and cannot respond any faster.
Throughput measures the number of transactions or requests that can be made in a given period of time. Basically, it is the number of requests the server managed to serve in a given time period. The throughput value depends on a lot of factors, and maybe your application under test is not able to handle the expected load.
So with 1000 threads, you can't expect a throughput of 1000 requests per second.
It's up to you to find out how much throughput your application can handle. For that you may need to do different optimizations on your side, like optimizing your script, distributing the load via JMeter distributed execution, increasing the thread count, etc.

JMeter TPS adjustment

Do we need to adjust the throughput reported by JMeter to find out the actual TPS of the system?
For example: I am getting 100 TPS for 250 concurrent users, and the test ran for 10 hours. Can I conclude that my software can handle 100 transactions per second, or do I need to apply some adjustment to arrive at the real value? I am asking because when the load starts, the system takes some time to reach an adequate level of performance (warm-up time). If an adjustment is needed, how do I do it? Please help me to understand this.
By default JMeter sends requests as fast as it can; the main factors affecting the TPS rate are:
number of threads (virtual users) - this you can define in Thread Group
your application response time - this is not something you can control
Ideally, when you increase the number of threads, the TPS should increase by the same factor, i.e. if you have 250 users and are getting 100 TPS, you should get 200 TPS for 500 users. If this is not the case, those 500 users are beyond the saturation point and your application's bottleneck is somewhere between 250 and 500 users (if not earlier).
With regards to "warm-up" time: the recommended approach is to apply the load gradually; this way you will allow your application to get prepared for the increasing load, warm up caches, let the JIT compiler/optimizer do its work, etc. Moreover, this way you will be able to correlate the increasing load with increasing/decreasing throughput, response time, number of errors, etc., while having 250 users released at once doesn't tell the full story.
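A rough sketch of the scaling check described above (plain Java; the 130 TPS measurement at 500 users is a made-up value, purely for illustration):

    public class SaturationCheck {
        public static void main(String[] args) {
            int usersBefore = 250;
            double tpsBefore = 100;   // measured at 250 users (from the question)
            int usersAfter = 500;
            double tpsAfter = 130;    // hypothetical measurement at 500 users

            double expectedTps = tpsBefore * usersAfter / usersBefore;  // 200 if scaling were linear
            double efficiency = tpsAfter / expectedTps;                 // 1.0 = perfect scaling

            System.out.printf("Expected %.0f TPS, got %.0f (%.0f%% of linear scaling)%n",
                    expectedTps, tpsAfter, efficiency * 100);
            if (efficiency < 0.9) {
                System.out.println("Bottleneck likely lies between " + usersBefore
                        + " and " + usersAfter + " users.");
            }
        }
    }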
The system warm-up period varies from one system to another. The warm-up period is where configurations are cached, different libraries are initialized (e.g. Builder.init()) and other initial work happens that usually doesn't happen for subsequent calls. If you study the results of the load test, there is a slow period at the very beginning. For most systems, it could be as small as 5 to 10 minutes. These values could even be negligible if the test is as long as 10 hours. But then again, the average calculation can be affected if the results give extremely low values at the start (it always depends on the jump from the initial warm-up period to normal operation).
As for the JMeter configuration, this thread may explain it: How to exclude warmup time from JMeter summary?
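If you prefer to post-process the results instead, here is a minimal sketch (not a built-in JMeter feature) that recomputes the average response time from a CSV-format JTL file while skipping a warm-up window. It assumes the default column order (timeStamp in epoch milliseconds first, elapsed in milliseconds second), a file named results.jtl, and labels that contain no commas:

    import java.nio.file.*;
    import java.util.*;

    public class ExcludeWarmup {
        public static void main(String[] args) throws Exception {
            Path jtl = Paths.get("results.jtl");      // hypothetical path to your results file
            long warmupMillis = 10 * 60 * 1000L;      // discard the first 10 minutes

            List<String> lines = Files.readAllLines(jtl);
            // First data row (index 1, after the header) gives the test start timestamp.
            long testStart = Long.parseLong(lines.get(1).split(",")[0]);

            long count = 0, totalElapsed = 0;
            for (String line : lines.subList(1, lines.size())) {
                String[] cols = line.split(",");       // naive split; assumes no quoted commas
                long timeStamp = Long.parseLong(cols[0]);
                long elapsed = Long.parseLong(cols[1]);
                if (timeStamp - testStart >= warmupMillis) {   // keep only post-warm-up samples
                    count++;
                    totalElapsed += elapsed;
                }
            }
            if (count == 0) {
                System.out.println("No samples after the warm-up window.");
                return;
            }
            System.out.printf("Average (warm-up excluded): %.1f ms over %d samples%n",
                    (double) totalElapsed / count, count);
        }
    }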

Transactions per second in JMeter not showing the exact value

I have written a JMeter script with 10 threads and 100 loops, so basically it's 1000 samples. If you look at the Transactions per Second PNG, it's approximately 1 transaction per second.
Looking at the Bytes per Second, it's a big figure. Why is that?
What's the relation between these two images? I am a bit confused here. Can someone shed some light on this?
You have 1000 samples in 15 minutes; 15 minutes == 900 seconds, so it is absolutely expected that the throughput would be something like 1 transaction per second.
Bytes per second is basically the size of the response data you are getting; looking at the graph, it appears that you are receiving around 300 kilobytes for each request.
If you need to apply more load, increase the number of virtual users (threads) and decrease the ramp-up time; you should then see higher throughput (provided your server can handle the load, otherwise the server will be the bottleneck).
See the JMeter Test Results: Why the Actual Users Number is Lower than Expected article to get a better understanding of how JMeter applies the load.
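A quick arithmetic check tying the two graphs together (plain Java; the bytes-per-second figure is a hypothetical value read off such a graph, only to show the relationship):

    public class GraphSanityCheck {
        public static void main(String[] args) {
            long samples = 1000;              // 10 threads * 100 loops
            double durationSec = 900;         // ~15-minute test
            double bytesPerSecond = 330_000;  // hypothetical value from the Bytes/s graph

            double tps = samples / durationSec;              // ~1.1 transactions per second
            double bytesPerResponse = bytesPerSecond / tps;  // ~300 KB per response

            System.out.printf("Throughput: %.2f transactions/s%n", tps);
            System.out.printf("Average response size: %.0f KB%n", bytesPerResponse / 1000);
        }
    }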

JMeter: How to set up the Stepping Thread Group?

How do I set up the Stepping Thread Group if my application gives an average response time of 2 s for 100 virtual users using a plain Thread Group?
Actually, it depends on your performance test goals. The Stepping Thread Group won't allow parameters smaller than 1 second; you have to deal with this limitation.
According to JMeter documentation:
Ramp-up needs to be long enough to avoid too large a work-load at the start of a test, and short enough that the last threads start running before the first ones finish (unless one wants that to happen).
Defining the ramp-up time is a very important step of your performance test. In your case, I recommend starting with 1 thread per second, using these parameters:
This group will start 100 threads;
First, wait for 0 seconds;
Then start 1 thread;
Next, add 1 thread every 1 second, using a ramp-up of 0 seconds;
Then hold the load for 900 seconds.
You can choose to stop all threads at once then. It is up to you.
Why am I suggesting running a test for almost 20 minutes? Because you are interested in the performance with 100 running threads, and you want to maximize the number of samples taken at that load level. With the suggested setup, you'll spend approximately 90% of your test time running with the ideal number of threads.
Once you have those numbers, you can experiment by starting more than 1 thread per second and decreasing the overall ramp-up time. Always look at the resource usage (e.g. CPU utilization, available memory, etc.) to understand the system's limits.
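The timing math behind those settings, as a small sketch (plain Java, using the values suggested above):

    public class SteppingPlanMath {
        public static void main(String[] args) {
            int targetThreads = 100;   // "This group will start 100 threads"
            int threadsPerStep = 1;    // "Then start 1 thread"
            int stepIntervalSec = 1;   // "add 1 thread every 1 second"
            int holdSec = 900;         // "hold the load for 900 seconds"

            int rampSec = (targetThreads / threadsPerStep) * stepIntervalSec; // 100 s to reach full load
            int totalSec = rampSec + holdSec;                                 // ~1000 s

            System.out.printf("Ramp: %d s, total: %d s (about %.0f min)%n",
                    rampSec, totalSec, totalSec / 60.0);
            System.out.printf("Share of test at full load: %.0f%%%n",
                    100.0 * holdSec / totalSec);  // ~90%, as mentioned above
        }
    }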

Why does using a higher Loop Count improve site performance in JMeter?

I've noticed that when load testing with JMeter, if I do a single loop I get a fairly long average time for my test. If I have, say, a Loop Count of 10, my average time peaks early on and then drops way down. For example, if I set up a test with a simple GET request for a page with the following settings:
Number of Threads (users): 500
Ramp-up Period(in seconds): 5
Loop Count: 1
My average time is about 4 seconds. If I change it to 10 loops:
Number of Threads (users): 500
Ramp-up Period(in seconds): 5
Loop Count: 10
I get an average time of 1.4 seconds.
Apache's documentation states that the Loop Count is:
The number of times the subelements of this controller will be iterated each time through a test run.
Is it possible that this means the first request will actually do something on the server and the subsequent 9 requests will be pulling from cache?
How exactly is the Loop Count being used that would cause the results I'm seeing?
Yes, the remaining 9 requests must be pulling from cache.
The loop controller is a simple loop executor; it does nothing magic inside.
The improved performance is because of the cached results on the server.
One thing you can try: use the loop controller, but with different substituted parameters each time, so that a different request is sent to the server on every iteration (I know the loop controller is meant for repeating the same values, but this is just to confirm the effect of caching). Then compare the results.
I hope this clears the doubt :)
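To illustrate the "different substituted parameters" idea outside of JMeter, here is a small sketch in plain Java; the endpoint URL is hypothetical, and in JMeter itself you would typically achieve the same thing with a Counter config element or a CSV Data Set Config feeding a variable such as ${id} into the sampler's path:

    import java.net.URI;
    import java.net.http.*;

    public class CacheBustingLoop {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            String baseUrl = "http://localhost:8080/page";   // hypothetical endpoint

            for (int i = 0; i < 10; i++) {
                // A query parameter that changes every iteration defeats server-side caching.
                HttpRequest request = HttpRequest.newBuilder(URI.create(baseUrl + "?id=" + i)).build();
                long start = System.nanoTime();
                client.send(request, HttpResponse.BodyHandlers.discarding());
                System.out.printf("Iteration %d: %.1f ms%n", i, (System.nanoTime() - start) / 1e6);
            }
        }
    }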
