How to do a load test using JMeter to find the maximum number of users that the application can handle? I have done load testing for a gaming application using JMeter. A summary report, aggregate report and graphs were generated using the available plugins. How can I find the load capacity from these reports?
Load testing is not about finding the maximum number of users the application can handle; load testing is about checking whether the application can support the anticipated number of users.
If you want to identify the maximum, you need to go for stress testing: start with the anticipated number of users (or just from 1 virtual user) and gradually increase the load until response time starts exceeding the acceptable value or errors start occurring.
At the beginning you will see that as the number of users increases, throughput increases and response time remains the same.
At some point you will see that throughput no longer increases although the number of users keeps growing - it means you have passed the saturation point, the point of maximum performance. This is probably what you're looking for.
If you continue increasing the number of virtual users you will see that response time increases while throughput either remains the same or decreases - this indicates the bottleneck.
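To make the relationship between users, response time and throughput concrete, here is a minimal sketch based on Little's Law (throughput ≈ concurrent users / (response time + think time)); all numbers are made-up illustration values, not measurements from a real test:

```java
// Minimal sketch of why throughput stops growing at the saturation point.
// Little's Law: throughput ≈ concurrent users / (response time + think time).
// All numbers below are made-up illustration values.
public class SaturationSketch {
    public static void main(String[] args) {
        double thinkTimeSec = 0.0; // no think time, for simplicity
        int[] users = {10, 50, 100, 200, 400};
        // Assumed behaviour: response time stays at 1 s up to 100 users
        // (the saturation point), then grows as requests queue up.
        for (int u : users) {
            double responseTimeSec = (u <= 100) ? 1.0 : u / 100.0;
            double throughputPerSec = u / (responseTimeSec + thinkTimeSec);
            System.out.printf("%3d users -> %.1f s response time -> %.0f requests/second%n",
                    u, responseTimeSec, throughputPerSec);
        }
    }
}
```

In this model throughput grows linearly up to 100 users and then stays flat at 100 requests/second no matter how many more users you add - that plateau is the saturation point you are trying to find.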
More information: Why ‘Normal’ Load Testing Isn’t Enough
For a stress test, which is the better approach: to start from a higher load and reduce it, or to gradually increase the load? I have seen that if we start from a very high load, the damage is already done and that has repercussions on the lower volumes too. But I want a second opinion in this case.
Starting with a high load sounds more like a spike test to me.
A stress test is about finding the saturation point and the first bottleneck. If you start with a high load you will only be able to state that your system doesn't support that "high" number of users, but you won't be able to answer the following questions:
what is the number of users which is supported by the system without issues (when throughput increases and response time remains the same)
what is the maximum system performance (number of users and/or requests per second which can be served)
when errors start occurring and what is the root cause or the slowest component of the integrated system
does the system get back to normal when the load decreases
etc.
So I would recommend increasing the load gradually by defining a reasonable ramp-up period.
I am testing a scenario with 400 threads. Although I am getting almost no errors, I have a very high average response time. What can bring about this problem? It seems like the server doesn't time out but returns the response very late. I've added the summary report. It is as follows:
This table doesn't tell the full story. If the response time seems "so high" to you - this is definitely a bottleneck and you can report it already.
What you can do to localize the problem is:
Consider using a longer ramp-up period, i.e. start with 1 user and add 1 more user every 5 seconds (adjust these numbers according to your scenario) so you have an arrival phase, a "plateau" and a load decrease phase. This approach will allow you to correlate increasing load and increasing response time by looking at the Active Threads Over Time and Response Times Over Time charts (see the ramp-up sketch after this list). This way you will be able to state that:
response time remains the same up to X concurrent users
after X concurrent users it starts growing, so throughput goes down
after Z concurrent users response time exceeds the acceptable threshold
It would also be good to see CPU, RAM, etc. usage on the server side, as increased response time might be caused by a lack of resources; you can use the JMeter PerfMon Plugin for this.
Inspect your server configuration, as you might need to tune it for high loads (the same applies to JMeter; make sure to follow JMeter Best Practices).
Use a profiler tool on the server side during the next test execution; it will show you the slowest places in your application code.
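To illustrate the first point above, here is a minimal sketch of how the ramp-up period could be derived for the 400-thread scenario from the question; the "1 new user every 5 seconds" pace is just the example figure used above and should be adjusted to your own scenario:

```java
// Minimal sketch: deriving a gradual ramp-up for the 400-thread scenario.
// Both figures (400 threads, 5 seconds per new user) are example values
// taken from the discussion above, not a universal recommendation.
public class RampUpSketch {
    public static void main(String[] args) {
        int threads = 400;          // target number of virtual users
        int secondsPerNewUser = 5;  // add one more user every 5 seconds

        // Value for the Thread Group "Ramp-up period (seconds)" field
        int rampUpSeconds = threads * secondsPerNewUser;

        System.out.printf("Threads: %d, ramp-up period: %d seconds (~%.0f minutes)%n",
                threads, rampUpSeconds, rampUpSeconds / 60.0);
    }
}
```

With these numbers the ramp-up period comes out to 2000 seconds (about 33 minutes), which gives you enough data points to see exactly where response time starts to climb.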
The load is not increasing in AppDynamics when we increase the thread count in JMeter. For example, we are able to achieve 100k calls/min for 500 users with a 20 ms average response time, but the load stays at 135k with a 25 ms average response time when we configure 1200 threads in JMeter. Even when we increase the load threefold, the extra load does not reach the application. We didn't observe errors in JMeter either. We are adding host entries on all the load generators - could that be a reason the load from JMeter is being limited? Please help to resolve this issue.
If you increase the number of threads and the throughput is not increasing, there could be 2 possible reasons for this:
The throughput is not increasing because response time is increasing, which indicates an application performance bottleneck. Look at the Response Times vs Threads chart; the point where the response time starts increasing marks the maximum number of users your application can support.
JMeter is not capable of sending requests fast enough due to a lack of resources or improper configuration. Make sure to follow JMeter Best Practices, and if that doesn't help - consider allocating more load generator machines and switching to Distributed Mode of JMeter test execution.
As depicted in the typical load-versus-throughput graph, the relationship between the load (users) and KPIs such as Throughput (TP) is non-linear. At a certain point, increasing the load will not result in a proportional/linear increase in throughput (the Heavy Load zone).
At the start of the script the sample time is low, and then it starts increasing as the load increases - is this the correct way to do load testing for a website?
Please help: what is the correct way to do load testing for a website?
Not really. In an ideal world response time should remain the same as the load increases, like:
1 user - response time 1 second - throughput 1 request per second
100 users - response time 1 second - throughput 100 requests per second
200 users - response time 1 second - throughput 200 requests per second
etc.
The point where throughput stops increasing even though you keep adding users is called the saturation point - it is the maximum throughput your application can support.
The situation when response time starts increasing as you start more threads (virtual users) is known as the bottleneck, and the question is whether performance is still acceptable for the number of users defined in your NFR and/or SLA. If yes - you're good to go; if not - you need to report this issue (it would be beneficial if you could try to determine the reason for it).
The correct way of load testing a website is simulating end-user activity as closely as possible, including the workload model. Remember to increase the load gradually; this way you will be able to correlate the increasing load with metrics like response time, throughput and number of errors. It is also good to decrease the load gradually to see whether your website recovers when the load gets back to normal/zero.
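As a rough illustration of the "increase gradually, hold, then decrease gradually" idea, here is a minimal sketch of such a load profile; the stage durations and user counts are made-up example values, and in JMeter itself you would typically model this with a plugin such as the Ultimate Thread Group:

```java
// Minimal sketch of a gradual increase / plateau / gradual decrease load profile.
// The user counts and durations are made-up example values - pick ones that
// match your own workload model.
public class LoadProfileSketch {
    record Stage(String name, int targetUsers, int durationSeconds) {}

    public static void main(String[] args) {
        Stage[] profile = {
            new Stage("ramp-up",   100,  600), // 0 -> 100 users over 10 minutes
            new Stage("plateau",   100, 1800), // hold 100 users for 30 minutes
            new Stage("ramp-down",   0,  600)  // 100 -> 0 users over 10 minutes
        };
        for (Stage s : profile) {
            System.out.printf("%-9s : %3d target users, %4d seconds%n",
                    s.name(), s.targetUsers(), s.durationSeconds());
        }
    }
}
```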
What is the best practice for Max Wait (ms) value in JDBC Connection Configuration?
I am executing 2 types of tests:
20 loops for each number of threads - to get max throughput
30 min runtime for each number of threads - to get response time
With Max Wait = 10000 ms I can execute the JDBC request with 10, 20, 30, 40, 60 and 80 threads without an error. With Max Wait = 20000 ms I can go higher and execute with 100, 120 and 140 threads without an error. It seems to be logical behaviour.
Now the questions:
Can I increase the Max Wait value as desired? Is it a correct way to get more test results?
Should I stop testing and not increase the number of threads if any error occurs in some report? I got e.g. 0.06% errors from 10000 samples. Is this a stop for my testing?
Thanks.
Everything depends on what your requirements are and how you defined the performance baseline.
Can I increase the Max Wait value as desired? Is it a correct way to get more test results?
If you are OK with higher response times and the functionality still works, then you can keep Max Wait as high as you want. But in practice there will be a threshold for response times (like 2 seconds to perform a login transaction) which you define as part of your performance SLA or performance baseline. So, although you are making your requests succeed by increasing Max Wait, eventually they are considered failed requests because of the high response time (crossing the threshold values).
Note: Higher response times for DB operations eventually result in higher response times for the web application (and hence for end users).
Should I stop testing and not increase the number of threads if any error occurs in some report?
The same applies to error rates. If the SLA says some % error rate is acceptable, then you can consider the test to be meeting the SLA or performance baseline if the actual error rate is less than that. E.g. if the requirement says 0% error rate, then even 0.1% is considered a failure.
Is this a stop for my testing?
You can stop the test at whatever point you want; it is completely based on what metrics you want to capture. From my experience, it is suggested to continue the test until it reaches a point where there is no point in continuing, like an error rate of 99%, etc. If you are getting an error rate of 0.06%, then I suggest continuing with the test to find the breaking point of the system: a server crash, response times reaching unacceptable values, memory issues, etc.
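As a rough illustration of comparing observed numbers against an SLA, here is a minimal sketch; the 0.06% error rate comes from the question above, while the SLA thresholds themselves are made-up examples:

```java
// Minimal sketch: judging a test run against SLA / baseline thresholds.
// The SLA values are made-up examples, not recommendations.
public class SlaCheckSketch {
    public static void main(String[] args) {
        double measuredErrorRatePct = 0.06; // from the question: 0.06% of 10000 samples
        double measured90PctRespMs  = 1800; // hypothetical 90th percentile response time

        double slaErrorRatePct = 0.0;       // example SLA: no errors allowed
        double sla90PctRespMs  = 2000;      // example SLA: 90% of requests under 2 seconds

        boolean errorsWithinSla  = measuredErrorRatePct <= slaErrorRatePct;
        boolean latencyWithinSla = measured90PctRespMs  <= sla90PctRespMs;

        System.out.println("Error rate within SLA:    " + errorsWithinSla);  // false
        System.out.println("Response time within SLA: " + latencyWithinSla); // true
    }
}
```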
Following are some good references:
https://www.nngroup.com/articles/response-times-3-important-limits/
http://calendar.perfplanet.com/2011/how-response-times-impact-business/
difference between baseline and benchmark in performance of an application
https://msdn.microsoft.com/en-us/library/ms190943.aspx
https://msdn.microsoft.com/en-us/library/bb924375.aspx
http://searchitchannel.techtarget.com/definition/service-level-agreement
This setting maps to the DBCP -> BasicDataSource -> maxWaitMillis parameter. According to the documentation:
The maximum number of milliseconds that the pool will wait (when there are no available connections) for a connection to be returned before throwing an exception, or -1 to wait indefinitely
It should match the relevant setting of your application's database configuration. If your goal is to determine the maximum performance - just put -1 there and the timeout will be disabled.
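For reference, here is a minimal sketch of how the same parameter is set when configuring Commons DBCP programmatically; the connection details are placeholders, and note that in recent DBCP2 releases the millisecond-based setter has been superseded by a Duration-based one:

```java
import org.apache.commons.dbcp2.BasicDataSource;

// Minimal sketch: the JMeter "Max Wait (ms)" field corresponds to this
// DBCP BasicDataSource setting. All connection details are placeholders.
public class PoolConfigSketch {
    public static void main(String[] args) {
        BasicDataSource ds = new BasicDataSource();
        ds.setUrl("jdbc:mysql://db.example.com:3306/test"); // placeholder URL
        ds.setUsername("user");                             // placeholder
        ds.setPassword("secret");                           // placeholder
        ds.setMaxTotal(100);        // maximum number of active connections in the pool
        ds.setMaxWaitMillis(10000); // wait up to 10 seconds for a free connection;
                                    // -1 means "wait indefinitely"
        // ds would then be used to obtain connections via ds.getConnection()
    }
}
```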
In regards to "Is this a stop for my testing?" - it depends on multiple factors, like what the application is doing, what you are trying to achieve and what type of testing is being conducted. If you test a database which orchestrates nuclear plant operation, then a zero-error threshold is the only acceptable one. If it is a picture gallery of cats, this error level can be considered acceptable.
In the majority of cases performance testing is divided into several test executions like:
Load Testing - putting the system under the anticipated load to see if it is capable of handling the forecasted number of users
Soak Testing - basically the same as Load Testing but keeping the load for a prolonged duration. This allows detecting, for example, memory leaks
Stress Testing - determining the boundaries of the application, saturation points, bottlenecks, etc. Start from zero load and gradually increase it until the application breaks, noting the maximum number of users, the correlation of other metrics like Response Time, Throughput and Error Rate with the increasing number of users, and checking whether the application recovers when the load gets back to normal, etc.
See the Why ‘Normal’ Load Testing Isn’t Enough article for the above testing types described in detail.