Gatling performance testing: TPS is much lower than JMeter's TPS

I am currently using JMeter for API performance testing, but recently I started to look into Gatling as a potential replacement for JMeter. Below is the PoC I'm doing with Gatling, but I notice the performance results are very different.
Setup:
We hit an HTTPS endpoint with 10 concurrent users for 60 seconds.
Results
JMeter: 10 threads (no ramp-up), 60 seconds
Result: 150 TPS
Gatling: 10 concurrent users, also 60 seconds
Result: 27 TPS (cnt/s?)
Question:
First, I want to confirm the Gatling terminology: in the Gatling results chart I see a column named "mean cnt/s"; hovering over it shows "count of events per second". I imagine that's the same thing as JMeter's TPS?
JMeter:
summary + 2386 in 00:00:16 = 153.1/s Avg
Gatling:
Mean cnt/s: 26.652
If the above assumption is correct, can someone share some insight into why Gatling's number is much lower than JMeter's?
Thank You!

Gatling: 10 concurrent users, also 60 seconds
Do you understand what this does?
This is going to spawn a new user every time an existing one finishes, and hence create new connections. Assuming it takes 100 ms for a virtual user to complete the scenario, you're going to spawn 10 * 10 * 60 = 6,000 virtual users and as many connections.
Is that really what you want and is it the same thing as you do with JMeter?
If you actually want the same 10 users to loop for 60 seconds, you have to inject atOnceUsers(10) and add a during(60) loop in your scenario.
https://gatling.io/docs/gatling/reference/current/core/injection/#open-model
https://gatling.io/docs/gatling/reference/current/core/scenario/#loop-statements
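Something like this (a minimal sketch in the Gatling Scala DSL; the simulation name, URL and request name are just placeholders) keeps the same 10 users looping for the whole minute instead of respawning a fresh user, and connection, per completed iteration:

    import io.gatling.core.Predef._
    import io.gatling.http.Predef._
    import scala.concurrent.duration._

    class ClosedLoopSimulation extends Simulation {

      // Placeholder endpoint - replace with the HTTPS endpoint you are actually testing
      val httpProtocol = http.baseUrl("https://example.com")

      // Each of the 10 users keeps iterating for the full 60 seconds
      val scn = scenario("10 users looping for 60s")
        .during(60.seconds) {
          exec(http("get resource").get("/api/resource"))
        }

      // Start exactly 10 users at once - no new users (or connections) are spawned afterwards
      setUp(scn.inject(atOnceUsers(10))).protocols(httpProtocol)
    }

With this setup the "mean cnt/s" figure should be directly comparable to JMeter's 10-thread TPS.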

Many things can cause deviations.
I assume you use the same setup for both in terms of load generator/target instance. You can start with a fixed number of requests first.
Use loops in JMeter and repeat in Gatling,
sending for example 60 x 10 = 600 requests in total.
Gatling will be able to generate much higher load than JMeter if properly used.
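On the Gatling side, the fixed-request approach could look like this (a minimal sketch, same placeholder URL and names as above): 10 users started at once, each sending exactly 60 requests, i.e. 600 requests in total. The JMeter counterpart would be a Thread Group with 10 threads and a loop count of 60.

    import io.gatling.core.Predef._
    import io.gatling.http.Predef._

    class FixedRequestCountSimulation extends Simulation {

      // Placeholder endpoint
      val httpProtocol = http.baseUrl("https://example.com")

      // Count-based loop: each user performs exactly 60 iterations, then stops
      val scn = scenario("10 users x 60 iterations = 600 requests")
        .repeat(60) {
          exec(http("get resource").get("/api/resource"))
        }

      setUp(scn.inject(atOnceUsers(10))).protocols(httpProtocol)
    }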

Related

Does the Constant Timer added to my HTTP Request affect the results in the Summary Report?

I have an HTTP Request in my Thread Group that takes around 20 to 30 seconds to complete with a single user, so when I add 50 users I sometimes get a 500/Internal Server Error or a 503/Server Has Been Shut Down.
I want to add a Constant Timer of 40 seconds (in milliseconds) under the HTTP Request so that the application will have some time to process it. Am I going the right way?
If I add the Constant Timer will it be calculate as well in the Summary Report?
I need JMeter to give the API (my application) time to complete the process (it needs at least 30 seconds), and I want to know whether or not it will affect my Summary Report.
Pre-Processors, Post-Processors and Timers are not counted in the Elapsed time, so response time will not be impacted.
However, Throughput (the number of requests for the test duration) will be lower.
See JMeter Glossary for more information on the above metrics.
With regards to the "right way": real users don't "hammer" the application non-stop, they need some time to "think" between operations, so if you're simulating a real user you should have a non-zero think time; however, 40 seconds is kind of too much for me. Take a look at the How to make JMeter behave more like a real browser article for more tips on properly configuring your JMeter test.
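To see roughly how much lower the throughput gets, here is a rough closed-workload estimate (plain Scala; the figures are taken from the question and the real numbers will vary): throughput is approximately the number of users divided by the time one iteration takes, i.e. response time plus think time.

    // Rough estimate: each user completes one request every (responseTime + thinkTime) seconds,
    // so throughput ≈ users / (responseTime + thinkTime). Figures are from the question.
    object ThinkTimeEstimate extends App {
      val users = 50
      val responseTimeSec = 30.0 // the request reportedly takes 20-30 seconds
      val thinkTimeSec = 40.0    // the proposed Constant Timer

      println(f"without timer: ~${users / responseTimeSec}%.2f requests/s")
      println(f"with a 40 s timer: ~${users / (responseTimeSec + thinkTimeSec)}%.2f requests/s")
    }

So the Summary Report response times stay the same, but the throughput figure drops from roughly 1.7 to roughly 0.7 requests per second in this example.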

How much load is it?

I have tried, but I have a doubt whether the below-mentioned specification is equivalent to a load of 4000 or not:
Number of threads: 100
Ramp-up period: 10 secs
Loop count: 40
Which is equal to how much load?
You are loading 100 concurrent threads; the loops just add more execution time.
So it isn't equivalent to 4000 concurrent threads hitting your server.
I don't know what you mean by a load of 4000; your test will send 4000 requests for each Sampler in your Thread Group, as fast as it can. The actual test duration will depend on your application response time but will not be less than 10 seconds.
You might want to take a look at the Transactions per Second and Server Hits per Second charts to see how many requests your configuration delivers; both charts can be installed using the JMeter Plugins Manager.
Also, you can generate the HTML Reporting Dashboard, which will have a consolidated aggregate view of your test results.
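To put numbers on the thread/loop arithmetic above (a quick sketch in plain Scala, using the figures from the question):

    // Requests per Sampler = threads * loops; the thread start rate comes from the ramp-up period.
    object LoadArithmetic extends App {
      val threads = 100
      val rampUpSec = 10
      val loops = 40

      println(s"requests per Sampler: ${threads * loops}")                          // 4000
      println(s"threads started per second during ramp-up: ${threads / rampUpSec}") // 10
    }

So 4000 is the total number of requests each Sampler will send, not the number of concurrent users.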

How can I do performance testing on application for 20,000 user-request per second?

I want to put a load of 20000 user requests on my server to test whether it can handle this number of requests in a second.
I have set the Number of Threads (users): 20000
Ramp-Up Period (seconds): 1
Loop Count: 1
Output: it shows an average time of 20 seconds after the script completes.
Interpreting the output:
It is stating that the average server response time is 20 sec.
To check whether your server can handle 20000 users per sec,
run the test plan in non-GUI mode (generating the HTML dashboard report), because 20000 is a huge number.
To get the best out of non-GUI mode, add the JMeter Plugins Manager and install the reports you need.
If I am not wrong, what you are trying to achieve is a throughput of 20000 per second. This can be found using the Transactions per Second graph from the HTML dashboard report generated in non-GUI mode.
This plugin might help your use case: Throughput Shaping Timer
To achieve the desired 20000 users per second you need to experiment with the ramp-up and loop counts based on the output. The end goal is for your Transactions per Second graph to show 20000 or more at some point in time while the error count stays at zero; then your server is able to handle 20000 users per second. But average server response time is also an important metric to watch, because no user wants a slow response from the server.
You can also consider the Concurrency thread group. This thread group is specifically designed to generate the desired number of concurrent users.
If you were trying to do a Spike Test, i.e. access the application with 20 000 users for 1 second, your configuration is fine and the application failed the test, so you can raise an issue.
However, I'm under the impression that your test is kind of short and doesn't tell the full story.
20 000 users will produce 20 000 requests per second only if your application response time is 1 second sharp. If your application response time is 0.5 seconds - the load will be 40 000 requests per second, if response time is 2 seconds - the load will be 10 000 requests per second, if response time is 20 seconds - the load will be 1000 requests per second, etc. See What is the Relationship Between Users and Hits Per Second? article for more details if needed.
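As a back-of-the-envelope sketch of that relationship (plain Scala; the response times are just the example values above):

    // With a closed workload, requests per second ≈ concurrent users / average response time.
    object UsersVsHitsPerSecond extends App {
      val users = 20000
      for (responseTimeSec <- Seq(0.5, 1.0, 2.0, 20.0)) {
        println(f"response time $responseTimeSec%.1f s -> ~${users / responseTimeSec}%.0f requests/s")
      }
    }

This reproduces the 40 000 / 20 000 / 10 000 / 1 000 requests-per-second figures above.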
Normally you should increase the load gradually; this way you will be able to correlate the increasing load with other metrics (response time, throughput, number of errors, etc.), so I would recommend going for a ramp-up -> plateau -> ramp-down setup and checking the number of requests per second during the "plateau" phase.
20000 virtual users is quite a number, so make sure to follow JMeter Best Practices, and it might be the case that you will have to go for Distributed Testing.

How to uniformly distribute JMeter threads throughout the entire run of the test

I have the following
but when I run this with 60 threads I don't see the threads being fired off every second. Instead it seems like the requests in the Transaction Controller are fired off at a rate of 60 requests per minute.
Is there a way I can have JMeter create threads at the rate of 60 per minute without a limit on requests?
You need to use the Ramp-up duration in that case. Again, that is a one-time activity; once threads are created they will execute the request and then die / repeat if you need.
The Ultimate Thread Group plugin is also helpful here.
Generally everybody is interested in how much throughput (requests per unit of time) the server can take, so in your case it is doing the correct job, i.e. sending 60 requests per second.
In practice, creating multiple threads per second is bad practice and IMO such testing is not useful (why is it required?), because the number of client threads in JMeter is not what matters; what matters is how many requests those threads (whether 1, 10 or 100 of them) send to the server and how the server responds.
If your use case is different then share it in detail and we'll discuss. Hope this helps.
If you want JMeter to kick off one thread per second you need to specify Ramp-Up Period equal to the number of threads (virtual users) under Thread Group, if you want 60 virtual users - go for 60 seconds ramp-up.
Also make sure you allow your Thread Group to loop forever, because if you don't, you will run into a situation where some threads have already done their job while others haven't been started yet.
Example configuration:
Example output:
More information: JMeter Ramp-Up - The Ultimate Guide

Loop count and Ramp Up period in JMeter

I have created a test set and am just confused about the Loop Count and the Ramp-Up Period.
I have a test set with the following parameters.
Threads = 30
Ramp Up Period = 30
Loop Count = 100
As per the page on Quora, I suppose:
a) If the Loop Count is zero, then each of the 30 threads will be starting every second. As per the shared web page, I guess 30/30 * 100, i.e. 100 threads/requests, will be hitting the server every second. Please correct me if I am wrong.
b) As per the above parameters, there will be a total of 30 * 100 threads/requests. Does this mean all 3000 threads/requests will be sent within 30 seconds [the Ramp-Up Period]?
Assuming you have 30 users and 30 seconds ramp-up:
1. JMeter will start one virtual user each second.
2. Each virtual user will start executing samplers from top to bottom (or according to the Logic Controllers, if any) as fast as it can (if you don't use timers), so the delivered load can be either more or less than 30 requests/second; it depends on how fast JMeter executes requests and on your application response time, as JMeter waits for the response to the previous sampler before starting a new one.
3. When a virtual user finishes executing all the samplers defined in the test plan, it will start over and do point 2 for 99 more iterations.
4. When a virtual user has no more samplers to execute and no loops left to iterate, it will shut down.
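To put numbers on the points above (a quick Scala sketch using the question's figures; the actual delivered rate still depends on response time):

    // Thread start rate and total iterations for 30 threads, 30 s ramp-up, loop count 100.
    object RampUpArithmetic extends App {
      val threads = 30
      val rampUpSec = 30
      val loops = 100

      println(s"threads started per second: ${threads / rampUpSec}")   // 1
      println(s"total iterations across the test: ${threads * loops}") // 3000
      // Each iteration runs every sampler in the test plan once, so the total number of
      // requests is 3000 * (samplers per iteration), and there is no guarantee they all
      // complete within the 30-second ramp-up window.
    }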
A couple of tips:
You can use Server Hits Per Second listener to see how many requests per second you are actually making given your test plan configuration
You can control the number of requests per second via Constant Throughput Timer
Consider upgrading to JMeter 3.2 as newer JMeter versions normally contain new features, performance improvements and bug fixes
According to the JMeter manual, ramp-up is:
How long JMeter should take to get all the threads started.
If there are 10 threads and a ramp-up time of 100 seconds, then each
thread will begin 10 seconds after the previous thread started, for a
total time of 100 seconds to get the test fully up to speed.
So if your goal is to reach 3000 requests within 30 seconds, the above won't do; it might take more than that, depending on how long it takes to finish the requests you are sending.
If you are looking at throughput, you can add an Aggregate Report listener, which calculates the throughput for you; depending on the results, you can configure your thread properties to reach your goal.
Reference: JMeter user manual
