JMeter Test Plan Validation

I am creating a JMeter test plan and need some validation to verify I'm going about it the right way.
I have the following GA data for our busiest hour.
Hour: 10
Average session duration: 00:02:56
Avg. Page Load Time (sec): 1.57
Sessions: 2441
Page Views: 8361
Number of threads (users):
I've calculated this using the following formula:
2441 (Hourly Sessions) x 176 (Average Session Duration (in seconds)) / 3600
Which gives me 119.
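Spelled out as a quick sanity check (a throwaway Python sketch of the same formula):

sessions_per_hour = 2441
avg_session_duration_s = 2 * 60 + 56      # 00:02:56 -> 176 seconds

concurrent_users = sessions_per_hour * avg_session_duration_s / 3600
print(round(concurrent_users))            # -> 119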
1) Is this the correct approach?
Getting average page load time
I'm attempting to benchmark against the average page load time as reported by GA. So I have currently created the following test plan:
Thread Group:
- HTTP Request (Main Request)
- Aggregate graph
1) This will execute the main request 119 times. Should I add more pages so that the requests total 8361, as per the page views from GA?
2) I'm unclear how to get the test plan to run over an hour, since the GA data covers an hour; currently the 119 requests execute within a few minutes. Or is it even necessary to run for an hour to get a rough idea of capacity?
3) Is it correct to use the average response time from the aggregate graph and compare that against the Avg. Page Load Time from GA?

1.1) It seems so - but only if you stick to mimicking the way the actual "average user" interacts with your service: run some chain of requests (let's call it a session) over 176 sec.
Then, yes: if inside one thread you stretch your chain of requests across 176 sec, 1 thread can serve ~20.5 sessions per hour (3600 / 176).
Which turns into ~119 threads to meet the desired ~2441 sessions per hour.
The other approach would be to work from page views (8361).
That's if maintaining the "session" and a particular request sequence doesn't matter, while the load level does.
Then it comes to ~2.3 requests per second flat (8361 / 3600).
Since the response time is expected to be around 1.5 sec, you would need at least 3-4 threads to keep that pace, and more would be better to leave some room to stretch.
But you won't need a lot of them, because they would be hanging blocked on I/O most of the time.
By checking the actual throughput value JMeter yields during initial runs, you can adjust the number of threads to the optimum.
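As a back-of-the-envelope check of this page-view approach (a Python sketch, not part of the test plan; the 1.57 s figure is the GA page load time quoted above):

import math

page_views_per_hour = 8361
avg_page_load_s = 1.57

target_rps = page_views_per_hour / 3600                 # ~2.32 requests per second
min_threads = math.ceil(target_rps * avg_page_load_s)   # Little's Law: ~4 threads
print(round(target_rps, 2), min_threads)

In practice you would add a few spare threads and pin the pace at ~2.3 requests/second with a Constant Throughput Timer or Throughput Shaping Timer, then compare the throughput JMeter actually reports against that target.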

Related

How can I do performance testing on application for 20,000 user-request per second?

I want to put a load of 20,000 user requests on my server to test whether it can handle this number of requests in a second.
I have set the Number of Threads (users) to 20000
Ramp-up period (seconds) - 1
Loop count - 1
Output - it shows a 20-second average time after the script completes.
Interpreting the output:
It is stating that the average server response time is 20 sec.
To check whether your server can handle 20,000 users per second, run the test plan in non-GUI mode (generating the HTML dashboard report), because 20,000 is a huge number.
To get the best out of non-GUI mode, add the JMeter Plugins Manager and install the reports you need.
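For reference, a typical non-GUI run that also generates the HTML dashboard looks like this (the .jmx file name and output folder are placeholders):

jmeter -n -t test.jmx -l result.jtl -e -o ./dashboard

Here -n means non-GUI mode, -t is the test plan, -l the results file, and -e/-o generate the dashboard into the given (empty) folder.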
If I am not wrong, what you are trying to achieve is a throughput of 20,000 per second. That can be read from the Transactions Per Second graph in the HTML dashboard report generated in non-GUI mode.
This plugin might help your use case: Throughput Shaping Timer
To achieve the desired 20,000 users per second you need to experiment with ramp-up and loop counts based on the output. The end goal is for your Transactions Per Second graph to show 20,000 or more at some point in time while the error count stays at zero; then your server is able to handle 20,000 users per second. Average server response time is also an important metric to watch, because no user wants a slow response from the server.
You can also consider the Concurrency thread group. This thread group is specifically designed to generate the desired number of concurrent users.
If you were trying to do a spike test, i.e. hit the application with 20,000 users for 1 second, then your configuration is fine, the application failed the test, and you can raise an issue.
However, I'm under the impression that your test is rather short and doesn't tell the full story.
20,000 users will produce 20,000 requests per second only if your application response time is exactly 1 second. If your application response time is 0.5 seconds, the load will be 40,000 requests per second; if the response time is 2 seconds, the load will be 10,000 requests per second; if the response time is 20 seconds, the load will be 1,000 requests per second; and so on. See the What is the Relationship Between Users and Hits Per Second? article for more details if needed.
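A minimal illustration of that relationship (Python; the response times are just the examples from the paragraph above):

concurrent_users = 20000

# closed workload, no think time: throughput = users / response_time
for response_time_s in (0.5, 1, 2, 20):
    print(response_time_s, concurrent_users / response_time_s)
# 0.5 s -> 40000 req/s, 1 s -> 20000 req/s, 2 s -> 10000 req/s, 20 s -> 1000 req/s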
Normally you should increase the load gradually; this way you will be able to correlate the increasing load with other metrics (response time, throughput, number of errors, etc.). So I would recommend going for a ramp-up -> plateau -> ramp-down setup and checking the number of requests per second during the "plateau" phase.
20,000 virtual users is quite a number, so make sure to follow JMeter Best Practices, and it might be the case that you will have to go for Distributed Testing.

How to baseline a web application using JMeter

I want to baseline my application. It has two transactions, "Place Order" and "Add A Product To Favorites", which mostly follow the same navigation route. For the Place Order business transaction the steps are: Land on Login Page > Login > Add To Cart > Checkout > Place Order. For the Add A Product To Favorites business transaction the steps are: Land on Login Page > Login > Add To Cart > Add To Favorites. I want to run a test and see the consistent response time the application gives at a particular throughput. I have the queries below.
If I create the Add To Favorites scenario as a Thread Group with 4 samplers and the Place Order scenario as a Thread Group with 5 samplers (as given by the steps above), should I add a Throughput Shaping Timer separately to each Thread Group, and if so what throughput parameters (RPS settings) should I use?
My application has the following max and average response times for a normal load (i.e. with no queue wait time, since I collected the response times by running a single thread so that the wait-time component is zero). N.B. I have not added think time for simplicity's sake, and the response times are rather high because the backend ERPs are legacy systems.
So, the Thread Group that I will create for "Place Order" has the HTTP samplers below, with their respective response times for a 1-user load:
Land on Login Page: 6074 ms (avg 4492 ms)
Login using Credentials: 2549 ms (avg 631 ms)
Add To Cart: 1553 ms (avg 304 ms)
Land on Cart Page: 47044 ms (avg 15901 ms)
Place Order: 19126 ms (avg 17110 ms)
Logout: 4801 ms (avg 2706 ms)
Below are my queries:
With the above response timings, what is the max throughput (i.e. Place Order transactions per second) that I can achieve, so that I can set it as the TPS parameter in the Throughput Shaping Timer plugin and then run a load test? Please can somebody explain the calculation process used to arrive at that value.
The same samplers (Land on Login Page, Login and Add To Cart) are used in multiple Thread Groups, e.g. in the Place Order business scenario above and in the Add A Product To Favorites business transaction. My question is: since different Thread Groups will pound the same Login/Add To Cart servlets (my app is a J2EE app), how do I take that into account so that the queue building up at the Login sampler does not affect the TPS to be achieved by the Place Order scenario and by the other business scenarios that use the Login sampler, given that we will be running both Thread Groups (one for Place Order and one for Add To Favorites) during the load test?
How do I set the concurrency for a Thread Group, say the Place Order transaction? I have this question since we need to know the max response time. Should I add up the response times of all the samplers in the Thread Group, multiply by the TPS that we calculated and divide by 1000? Please explain the logic in this case too.
Nobody apart from you can answer this. From your numbers we can state that 1 user is capable of executing 1 Place Order request in 19 seconds, or roughly 3 Place Order requests per minute. If you add one more user there could be 2 cases:
Response time remains the same. In this case you will be able to execute 6 requests per minute with 2 users, 9 requests per minute with 3 users, etc.
Response time increases. In this case you will NOT be able to execute 6 requests per minute with 2 users due to a performance bottleneck.
Check out What is the Relationship Between Users and Hits Per Second? article for more details.
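As a rough illustration of case 1 (a Python sketch; it assumes the response time stays flat as users are added, which is exactly what breaks down in case 2):

place_order_response_s = 19                   # single-user Place Order response time
per_user_tpm = 60 / place_order_response_s    # ~3.2 transactions per minute per user

for users in (1, 2, 3):
    print(users, round(users * per_user_tpm, 1))   # ~3.2, ~6.3, ~9.5 per minute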
I don't think you should be measuring the various business use cases separately; a well-behaved load test should represent real-life application usage as closely as possible, and in reality it is more than possible that one user creates an order while other users are logging in. However, if you explicitly need to test order creation separately, you can perform the login in a setUp Thread Group and then pass the authentication context (in the majority of cases a set of cookies) to the main Thread Group where the order creation takes place, e.g. using the Inter-Thread Communication Plugin.
You don't need to know the response time; you need to provide enough virtual users to generate the required load (given the application is capable of handling it). Consider using the Concurrency Thread Group, which can kick off extra threads if the current number is not enough to maintain the desired throughput. It can be connected to the Throughput Shaping Timer via the Feedback Function.
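If you still want a rough starting value for the concurrency (the calculation asked about in query 3), Little's Law gives a first estimate. This is only a sketch; the target TPS below is a made-up example, and the sampler times are the single-user figures from the question:

target_tps = 0.5                 # example target Place Order rate, not from the question
# sum of the single-user sampler times for the Place Order flow, in seconds
transaction_time_s = (6074 + 2549 + 1553 + 47044 + 19126 + 4801) / 1000   # ~81 s

starting_threads = target_tps * transaction_time_s    # ~41 concurrent users
print(round(starting_threads))

The Concurrency Thread Group with the Feedback Function then adjusts the thread count at runtime, so this number only needs to be in the right ballpark.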

Need help on response time

Need help on JMeter response result from the image
My scenario: I am calculating the min/max/average response time of an API that creates a user account.
1. Login to the site.
2. Using an API request, create a user account (100 user accounts are created via the API).
3. Logout.
Observation:
Total elapsed time is 32 mins (which is there in the image).
Response time for 100 users is 90852.
I need to understand how the response time units are measured here.
Does 90852 milliseconds mean approximately 90 secs?
So is it that a single user account is created in 90 secs by the API?
So, please tell me how the response time works here when compared with the total response time?
Thanks :)
The average creation of a user took your API 908 ms (the entry with 100 samples ending with /api/users).
Since the line (where the name of the transaction is not in the screenshot) has the sample count 1 and the response time resembles 100*908ms I would guess that you have a Transaction Controller that holds the Loop Controller.
The same hierarchy that you use to organize your test plan also applies to transaction controllers and samplers. So if you group several samplers - and/or transaction controllers - under a parent transaction controller, that parent transaction controller will have the combined response time of all its children.
Response time for 100 users is 90852. - No, only for 1 user. Looking at your image it appears that only 1 sample was collected during the 32 mins, so this response time is for that 1 sample, not for all 100 users. JMeter only shows you completed responses. Assuming you have a thread group of 100 users, the rest didn't complete / were waiting for the API to respond.
Does 90852 milliseconds mean approximately 90 secs? - Yes. In your plan you seem to be using a Once Only Controller for login and authentication, and everything else runs sequentially. So if you are load testing and have one slow API response, you won't be able to measure the throughput of the rest of the APIs correctly, as the slowest API will hold up the thread for a long time.
Hope this helps.
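A tiny sketch of how such a parent figure comes about (Python; 908 ms is the per-sample average mentioned above):

avg_create_user_ms = 908      # average of the 100 samples ending with /api/users
samples = 100

parent_transaction_ms = avg_create_user_ms * samples
print(parent_transaction_ms)  # 90800, i.e. roughly the 90852 ms in the report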
It is hard to provide comprehensive analysis without seeing your Test Plan.
When it comes to your questions:
Total elapsed time is 32 mins (which is there in the image).
this looks a little high to me; given that you create 100 user accounts and the average response time is 908 milliseconds, I would expect your test to finish in about 90.8 seconds, which is roughly 1.5 minutes.
Does 90852 milliseconds mean approximately 90 secs?
It rather looks like the sum of all 100 response times; most probably you got it from a Transaction Controller.
Average response time is basically the arithmetic mean, i.e. the sum of all response times divided by their count.
First of all you need to understand why your test takes that long.
You seem to be creating 100 user accounts using 1 thread (virtual user) in a loop; you might want to consider doing it with multiple threads instead.
You should use the JMeter GUI only for test development and/or debugging. When it comes to test execution you should run your JMeter tests in command-line non-GUI mode, like:
jmeter -n -t test.jmx -l result.jtl

Loop count and Ramp Up period in JMeter

I have created a test set and am just confused about the Loop Count and Ramp-Up period.
I have a test set with the following parameters.
Threads = 30
Ramp Up Period = 30
Loop Count = 100
As per a page on Quora, I suppose:
a) If the Loop Count is zero, then each of the 30 threads will be starting every second. As per the shared web page, I guess 30/30 * 100, i.e. 100 threads/requests, will be hitting the server every second. Please correct me if I am wrong.
b) As per the above parameters, there will be a total of 30 * 100 threads/requests. Does this mean all 3000 threads/requests will be sent within 30 seconds (the ramp-up period)?
Assuming you have 30 users and a 30-second ramp-up:
1. JMeter will start one virtual user each second.
2. Each virtual user will start executing samplers from top to bottom (or according to the Logic Controllers, if any) as fast as it can (if you don't use timers), so the delivered load can be either more or less than 30 requests/second; it depends on how fast JMeter executes the requests and on your application's response time, as JMeter waits for the response to the previous sampler before starting a new one.
3. When a virtual user finishes executing all the samplers defined in the test plan, it will start over and repeat point 2 for 99 more iterations.
4. When a virtual user has no more samplers to execute and no more loops to iterate, it shuts down.
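Back-of-the-envelope numbers for this configuration (a Python sketch; the actual request rate depends on response time, as point 2 explains):

threads = 30
ramp_up_s = 30
loop_count = 100

start_rate = threads / ramp_up_s        # 1 new virtual user started per second
total_iterations = threads * loop_count # 3000 iterations, i.e. 3000 executions of each sampler
print(start_rate, total_iterations)
# how long those 3000 iterations take is NOT fixed at 30 s; per thread it is roughly
# loop_count * (sum of sampler response times), plus the ramp-up offset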
A couple of tips:
You can use Server Hits Per Second listener to see how many requests per second you are actually making given your test plan configuration
You can control the number of requests per second via Constant Throughput Timer
Consider upgrading to JMeter 3.2 as newer JMeter versions normally contain new features, performance improvements and bug fixes
According to the JMeter manual, ramp-up is: "How long JMeter should take to get all the threads started. If there are 10 threads and a ramp-up time of 100 seconds, then each thread will begin 10 seconds after the previous thread started, for a total time of 100 seconds to get the test fully up to speed."
So if your goal is to reach 3000 requests within 30 seconds, the above won't do; it might take longer than that, depending on how long it takes to finish the requests you are sending.
If you are looking at throughput, you can add an Aggregate Report listener, which calculates the throughput for you; depending on the results you can configure your thread properties to reach your goal.
Reference: JMeter user manual

Incorrect graph generated by JMeter listeners Hits per Second and Composite Graph

I am learning JMeter and am having a problem reading the graph listener output.
I created a Thread Group with 8 threads, ramp-up 1 and Loop Forever.
I added the listeners Active Threads Over Time, Hits per Second, and Response Times Over Time.
Result:
a. In Active Threads Over Time I get the correct result, with a maximum of 8 threads.
b. In Hits per Second the graph result is really weird; it shows 148 hits/sec.
Trying to debug, I changed the thread count to 1, and Hits per Second still generates a weird graph, with 20 hits/sec.
Any idea why this is happening?
I use the latest release, JMeter 3.0.
As I clarified here, jp@gc - Hits per Second: this listener shows the total number of requests sent to the server per second. Per second is the default granularity; it can be changed in the Settings tab.
When you have 1 user, JMeter sends 18-20 requests/second (Loop Forever keeps sending requests for the user as soon as the user gets the response). So the user was able to make ~19 requests in a second. When you have 8 users, the test plan sends around 133 requests per second. It seems to work fine; nothing weird here.
When you have 8 users, JMeter has no problem sending the first 8 requests (the first request for each thread), but each thread's subsequent requests are sent only once the response to the previous request has been received. (If you have any timers to simulate user think time, the user will additionally wait for that duration after the response before sending the next request.)
If 1 user is able to make 19 requests per second (i.e. the server processed 19 requests per second), then 8 users should be able to send ~152 requests per second. But when you increase the user load / the number of requests sent to the server, its throughput (the number of requests the server can process per unit time) increases only gradually, as shown in the picture. If you keep increasing the users, at some point the server's throughput (hits per second) saturates and does not increase beyond that point. So perhaps the server got saturated at 133 requests/second here; that is why we do not see 152 requests for 8 users. To understand the behaviour, you need to increase the user count (ramp up) slowly.
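Roughly, the numbers line up like this (a Python sketch of the reasoning above):

single_user_hits_per_s = 19          # observed with 1 thread
users = 8

linear_estimate = users * single_user_hits_per_s   # 152 hits/s if scaling stayed linear
observed = 133                                      # what the Hits per Second graph shows
print(linear_estimate, observed)
# the gap (152 vs 133) is the server starting to saturate: response time creeps up,
# so each thread completes fewer requests per second than a single thread did against an idle server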
Check here for a few tips on JMeter.
