Need help on response time - jmeter

Need help on JMeter response result from the image
My scenario: I am calculating the Min/Max/Average response time of an API that creates a user account.
1. Login to the site.
2. Create user accounts via an API request (creating 100 user accounts using the API).
3. Logout.
Observation :
Total elapsed time is 32 mins (which is there in the image).
Response time for 100 users is 90852.
I need to understand how the response time units are measured here.
Does 90852 milliseconds mean approximately 90 seconds?
So is a single user account created in 90 seconds by the API?
Please tell me how the response time here relates to the total elapsed time.
Thanks :)

Creating a user took your API 908 ms on average (the entry with 100 samples ending with /api/users).
Since the line (where the name of the transaction is not in the screenshot) has a sample count of 1 and a response time that resembles 100 * 908 ms, I would guess that you have a Transaction Controller that holds the Loop Controller.
The same hierarchy that you use to organize your test plan also applies to transaction controllers and samplers. So if you group several samplers - and/or transaction controllers - under a parent Transaction Controller, that parent will report the combined response time of all its children.
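If that guess is right, the relevant part of your test plan would look roughly like this (element names are made up for illustration):

Transaction Controller "Create 100 users"    <- 1 sample, ~90852 ms (sum of its children)
  Loop Controller (100 iterations)
    HTTP Request /api/users                  <- 100 samples, ~908 ms each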

Response time for 100 users is 90852. - No, only for 1 user. Looking at your image it appears that only 1 sample was collected during the 32 minutes, so this response time is for that 1 sample, not for all 100 users. JMeter only shows you completed responses. Assuming you have a thread group of 100 users, the rest didn't complete / were waiting for the API to respond.
Does 90852 milliseconds mean approximately 90 seconds? - Yes. In your test you seem to be using a Once Only Controller for login and authentication, and everything else seems to run sequentially. So if you are load testing and have a slow API response, you won't be able to measure throughput for the rest of the APIs correctly, as the slowest API will hold up the thread for a long time.
Hope this helps.

It is hard to provide comprehensive analysis without seeing your Test Plan.
When it comes to your questions:
Total elapsed time is 32 mins (which is there in the image).
This looks a little bit high to me: given that you create 100 user accounts and the average response time is 908 milliseconds, I would expect your test to finish in about 90.8 seconds, which is roughly 1.5 minutes.
Does 90852 milliseconds mean approximately 90 seconds?
It rather looks like the sum of all 100 response times; most probably you got it from the Transaction Controller.
Average response time is basically the arithmetic mean, i.e. the sum of all response times divided by their count.
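To make that concrete with the numbers above: 90852 ms / 100 samples ≈ 908.5 ms average per request, which matches the ~908 ms figure shown per sample.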
First of all you need to understand why your test takes that long.
You seem to be creating 100 user accounts using 1 thread (virtual user) in a loop; you might want to consider doing it with multiple threads instead.
You should be using the JMeter GUI only for test development and/or debugging; when it comes to test execution you should run your JMeter tests in command-line non-GUI mode, like:
jmeter -n -t test.jmx -l result.jtl
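If you also want the HTML reporting dashboard generated at the end of the run, the same command can be extended (the result file and output folder names here are just examples; the output folder must be empty or not yet exist):
jmeter -n -t test.jmx -l result.jtl -e -o report-dashboard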

Related

Does the Constant Timer added in my HTTP Request affect the results in the Summary Report?

I have an HTTP Request in my Thread Group that takes around 20 to 30 seconds to complete with a single user, so when I add 50 users I sometimes get a 500/Internal Server Error or a 503/Server has been shut down.
I want to add a Constant Timer of 40 seconds (in milliseconds) under the HTTP Request so that maybe the application will have some time to process it. Am I going the right way?
If I add the Constant Timer, will it be counted as well in the Summary Report?
I need JMeter to give the API (my application) the time to complete the process (it needs at least 30 seconds), and I want to know whether or not this will affect my Summary Report.
Pre-Processors, Post-Processors and Timers are not counted in the Elapsed time, so response time will not be impacted.
However Throughput (the number of requests for the test duration) will be lower.
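To illustrate with the numbers from the question: if a request takes around 30 seconds and each thread then waits 40 seconds in the Constant Timer, one thread completes roughly one request every 70 seconds, so 50 threads would produce at most about 50 / 70 ≈ 0.7 requests per second, versus roughly 50 / 30 ≈ 1.7 requests per second without the timer.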
See JMeter Glossary for more information on the above metrics.
With regards to the "right way" - real users don't "hammer" the application non-stop, they need some time to "think" between operations, so if you're simulating a real user you should have non-zero think time; however 40 seconds is kind of too much for me. Take a look at the How to make JMeter behave more like a real browser article for more tips on properly configuring your JMeter test.

Gatling performance Testing: TPS is much lower than Jmeter's TPS

I am currently using JMeter for API performance testing, but recently I started to look into Gatling as a potential replacement for JMeter. Below is the PoC I'm doing for Gatling, but I notice the performance result is very different.
Setup:
We hit an HTTPS endpoint with 10 concurrent users for 60 seconds.
Results
Jmeter: 10 threads (no ramp up), 60 seconds
Result: 150 TPS
Gatling: 10 concurrent users, also 60 seconds
Result: 27 TPS(cnt/s?)
Question:
First I want to confirm the Gatling terminology: in the Gatling result chart I see a column named "mean cnt/s"; when I hovered over it, it said "count of events per second". I imagine that's the same thing as JMeter's TPS?
Jmeter:
summary + 2386 in 00:00:16 = 153.1/s Avg
Gatling:
Mean cnt/s: 26.652
If the above assumption is correct, can someone share some insight on why Gatling's number is much lower than JMeter's?
Thank You!
Gatling: 10 concurrent users, also 60 seconds
Do you understand what this does?
This is going to spawn a new user every time an existing one finishes, and hence create new connections. Assuming it takes 100 ms for a virtual user to complete the scenario, you're going to spawn 10 * 10 * 60 = 6,000 virtual users and as many connections.
Is that really what you want and is it the same thing as you do with JMeter?
If you actually want the same 10 users to loop for 60 seconds, you have to inject atOnceUsers(10) and add a during(60) loop in your scenario (see the sketch after the links below).
https://gatling.io/docs/gatling/reference/current/core/injection/#open-model
https://gatling.io/docs/gatling/reference/current/core/scenario/#loop-statements
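A minimal Scala sketch of that closed model (Gatling 3.x Scala DSL; the base URL and request path are placeholders, not taken from the question):

import scala.concurrent.duration._
import io.gatling.core.Predef._
import io.gatling.http.Predef._

class ClosedModelSimulation extends Simulation {

  // placeholder endpoint - replace with your real base URL and path
  val httpProtocol = http.baseUrl("https://example.com")

  // the same 10 users keep looping for 60 seconds (closed model)
  val scn = scenario("10 users looping for 60s")
    .during(60.seconds) {
      exec(http("request").get("/api/endpoint"))
    }

  setUp(scn.inject(atOnceUsers(10))).protocols(httpProtocol)
}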
Many things can cause deviations.
I assume you use the same setup for both in terms of load generator / target instance. You can start with a fixed number of requests first.
Use loops in JMeter and repeat in Gatling.
Sending for example 60 x 10 = 600 requests in total.
Gatling will be able to generate a much higher load than JMeter if properly used.
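For that fixed-count comparison, the Gatling side might look roughly like this (a fragment that would live inside the same Simulation class as the sketch above, with the same placeholder endpoint; 10 users x 60 repeats = 600 requests):

val fixedScn = scenario("Fixed request count")
  .repeat(60) {
    exec(http("request").get("/api/endpoint"))
  }

setUp(fixedScn.inject(atOnceUsers(10))).protocols(httpProtocol)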

How to Baseline a Web application using Jmeter

I want to baseline my application. It has two transactions, "Place Order" and "Add A Product To Favorites", and both of them mostly follow the same navigation route. For the Place Order business transaction the steps are: Land on Login Page, Login, Add To Cart, Checkout, Place Order. For the Add A Product To Favorites business transaction the steps are: Land on Login Page, Login, Add To Cart, Add To Favorites. I want to run a test and see the consistent response time the application gives at a particular throughput. I have the below queries.
If I create the Add To Favorites scenario as a Thread Group with 4 samplers and the Place Order scenario as a Thread Group with 5 samplers (as per the steps mentioned), should I add a Throughput Shaping Timer separately for each Thread Group, and if so, what throughput parameters (RPS settings) should I give?
My application has the following max and average response times under a normal load (i.e. with no queue wait time, since I collected the response times by running a single thread so that the wait-time component is zero). N.B.: I have not added think time for simplicity's sake, and the response times are a bit costly since the backend ERPs are legacy systems.
So, the Thread Group that I will create for "Place Order" has the HTTP samplers below, with their respective response times under a 1-user load:
Land on Login page - 6074 ms (avg 4492 ms)
Login using Credentials - 2549 ms (avg is 631 ms)
Add To Cart - 1553 ms (avg is 304 ms)
Land on Cart page: 47044 ms (Avg: 15901 ms)
Place Order : 19126 ms (avg is 17110 ms)
Logout : 4801 ms (avg is 2706 ms)
Below are my queries:
With the above response timings, what is the max throughput (i.e. Place Order transactions) that I can achieve, so that I can set it as the TPS parameter in the Throughput Shaping Timer plugin and then run a load test? Please can somebody explain the calculation process to arrive at that value.
The same samplers (i.e. Land on Login Page, Login and Add To Cart) are used in multiple Thread Groups, such as the Place Order business scenario above and the Add A Product To Favorites business transaction. So my question is: if different Thread Groups are going to pound the same Login controller servlet / Add To Cart servlet (my app is a J2EE app), how do I take that into consideration, so that the queue that builds up at the Login sampler will not affect the TPS to be achieved by the Place Order scenario and the other business scenarios that use the Login sampler? We will be running both Thread Groups (one for Place Order and one for Add To Favorites) during the load test.
How do I set the concurrency for a Thread Group, say for the Place Order transaction? I have this question since we need to know the max response time. Should I add up the response times of all the samplers in the Thread Group, multiply by the TPS that we calculated and divide by 1000? Please explain the logic in this case too.
Nobody apart from you can answer this; from your numbers we can state that 1 user is capable of executing 1 Place Order request in 19 seconds, or 3 Place Order requests per minute. If you add one more user there could be 2 cases:
Response time remains the same. In this case you will be able to execute 6 requests per minute with 2 users, 9 requests per minute with 3 users, etc.
Response time increases. In this case you will NOT be able to execute 6 requests per minute with 2 users due to a performance bottleneck.
Check out What is the Relationship Between Users and Hits Per Second? article for more details.
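As a rough best-case calculation (assuming response time stays flat as the load grows), throughput ≈ number of users / response time: 1 user / 19 s ≈ 0.05 Place Order requests per second (≈ 3 per minute), 10 users ≈ 0.5/s, 20 users ≈ 1/s, and so on, until the application hits its bottleneck and response times start climbing instead.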
I don't think you should be measuring the various business use cases separately; a well-behaved load test should represent real-life application usage as closely as possible, and in reality it is more than possible that one user creates an order while other users are logging in. However, if you explicitly need to test order creation separately, you can perform the login in a setUp Thread Group and then pass the authentication context (in the majority of cases it is a set of cookies) to the main Thread Group where the order creation takes place, using e.g. the Inter-Thread Communication Plugin.
You don't need to know the response time, you need to provide enough virtual users in order to generate the required load (given the application is capable of handling it); consider using the Concurrency Thread Group, which can kick off extra threads if the current amount is not enough for maintaining the desired throughput. It can be connected with the Throughput Shaping Timer via the feedback function.
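As an illustration of that feedback wiring (the element name and limits below are just examples): set the Concurrency Thread Group's Target Concurrency to an expression such as
${__tstFeedback(Throughput Shaping Timer,1,100,10)}
where the first argument is the name of the Throughput Shaping Timer element and the remaining arguments are the minimum concurrency, the maximum concurrency and the spare threads to keep in reserve.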

Concurrent Thread and Ultimate Thread Group and Performance Benchmark

While trying to understand the concepts of the Concurrency Thread Group and the Ultimate Thread Group, I am confused by the results in the Summary/Aggregate report when running them. For example, if I have 200 users and a ramp-up time of 60 seconds, then after the execution completes successfully I don't see 200 samples for every sampler request; only a few sampler requests have 200 samples. When I use a normal Thread Group I always get the same sample count for each sampler request after the execution completes.
For realistic load testing with more users, could you please suggest which thread group I should prefer?
Could you please provide some guidance, with some valuable links/books, and also share standard performance benchmark criteria or key parameter details for load testing (i.e. if any load parameter value does not meet the standard, then we can say there is a performance issue)?
Thanks for giving valuable time in advance.
Thanks
amit
This is due to the fact that:
Your application response time is too high
Your test duration is too low
For example I can see response times > 80 seconds:
It means that if a single virtual user has a cumulative response time of > 160 seconds for 2 samplers and the test duration is 120 seconds, it will not be able to execute all the requests. Just increase your test duration to, for example, 10 minutes and you should see more virtual users capable of executing all the Samplers you defined in the test plan.
Also, given that the first user is capable of executing all the requests successfully and in time, it looks like your application gets overloaded and cannot respond fast enough once the number of concurrent users reaches some "critical threshold". You can add listeners like Active Threads Over Time and Response Times vs Threads; this way you will be able to correlate the increasing load with the increasing response time.
It also makes sense to collect:
Baseline health metrics of your application (CPU, RAM, Network, Disk usage, etc.), it can be done using JMeter PerfMon Plugin.
Lower level details like slowest methods, largest objects, heaviest database queries, etc. This form of information can be obtained using profiling tools specific to your application programming language(s).

JMeter Test Plan Validation

I am creating a JMeter test plan and need some validation to verify I'm going about it the right way.
I have the following GA data for our busiest hour.
Hour: 10
Average session duration: 00:02:56
Avg. Page Load Time (sec): 1.57
Sessions: 2441
Page Views: 8361
Number of threads (users):
I've calculated this using the following formula:
2441 (Hourly Sessions) x 176 (Average Session Duration (in seconds)) / 3600
Which gives me 119.
1) Is this the correct approach?
Getting average page load time
I'm attempting to bench mark against the average page load time as reported by GA. So I have created currently the following test plan:
Thread Group:
- HTTP Request (Main Request)
- Aggregate graph
1) This will make the (main) request 119 times; should I add more pages so that the requests total 8361, as per the page views from GA?
2) I'm unclear about how I should get the test plan to run over an hour, since the GA data covers an hour; currently the 119 requests get executed within a few minutes. Or is it even necessary to run for an hour to get a rough idea of capacity?
3) Is it correct to use the average response time from the Aggregate Graph and compare that against the Avg. Page Load Time from GA?
1.1) Seems like it - but only if you stick to mimicking the actual "average user" way of interacting with your service: do some chain of requests (let's call it a session) lasting 176 seconds.
Then, yes: if, inside one thread, you stretch your chain of requests across 176 seconds, 1 thread could serve ~20.5 sessions per hour.
Which turns into ~119 threads to meet the desired ~2440 sessions per hour.
The other approach would be to stick to Page views (8361).
That's if maintaining the "session" and the particular request sequence doesn't matter, while the load does.
Then it comes to ~2.3 requests per second, flat.
As the response time is expected to be around 1.5 seconds, you would need at least 3 threads to keep the pace; more would be better, to have some room to stretch.
But you won't need a lot of them, because they'd be hanging, blocked on I/O, most of the time.
By checking the actual throughput value JMeter yields during the initial runs, you can adjust the number of threads to the optimum.
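For reference, the arithmetic behind that estimate (a rough application of Little's Law, ignoring think time): 8361 page views / 3600 s ≈ 2.3 requests per second, and 2.3 req/s x 1.57 s average response time ≈ 3.6 requests in flight at any moment, hence "at least 3 threads, preferably a few more".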
