JMeter performance benchmark criteria

Could anyone share standard JMeter performance benchmark criteria for performance testing with a minimum of 1000 users? That is, how do we decide which parameters (or thresholds) should drive the pass/fail decision for a load/performance test, together with root-cause analysis and proposed solutions, after generating the HTML report from a non-GUI run of the completed scenario?
Thanks
Amit G

There is no "standard"; acceptance criteria are normally dictated by business requirements, which differ depending on the nature of the application:
For real-time trading, a delay of a couple of milliseconds is critical; companies invest in locating their high-performance servers physically close to the exchange's servers because even the speed of light matters for these scenarios.
For ordinary public-facing applications (news portals, e-commerce websites, etc.) the accepted load time is 2-3 seconds; if people have to wait longer, they will most likely go elsewhere and never return. Moreover, search engines tend to rank slow websites lower.
For internal applications used within the company, response time matters less because people have no choice but to use that particular application. You can still report long response times in business terms, for example: "if a person who earns $18 per hour has to wait 10 seconds for each operation, the number of operations per day is 100, and the headcount is 3000, the organization loses $15,000 every single day, or $5,475,000 a year" (see the sketch below).
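As a quick sanity check, the arithmetic in that example works out; a minimal Python sketch using the hypothetical figures quoted above:

    hourly_wage = 18.0            # dollars per hour
    wait_per_operation_s = 10     # seconds lost per operation
    operations_per_day = 100
    staff = 3000

    wasted_hours = wait_per_operation_s * operations_per_day / 3600
    daily_cost = wasted_hours * hourly_wage * staff
    print(f"Daily cost:  ${daily_cost:,.0f}")        # $15,000
    print(f"Yearly cost: ${daily_cost * 365:,.0f}")  # $5,475,000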
So I would recommend taking the following steps:
Check for existing SLAs or NFRs; it may be that the document from which you got these 1000 users also contains information on maximum response time or minimum throughput (requests per unit of time).
If no formal acceptance criteria are defined, you could go for other performance testing types:
Soak testing - putting your application under prolonged load to see whether it performs consistently and there are no obvious memory leaks.
Stress testing - starting with a low number of users and gradually increasing the load until errors start occurring. This way you can report the correlation between the growing number of users and the growing response time/error count, and identify the saturation point (the maximum number of users the application can efficiently support), the bottleneck (breaking point), etc.; see the sketch below.
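A minimal sketch of how you might flag the saturation point from stress-test results; the samples and thresholds below are hypothetical, not actual JMeter output:

    # Hypothetical (users, avg response time in ms, error %) samples from a stress-test ramp;
    # in practice you would read these from the aggregated JMeter results.
    samples = [
        (100, 250, 0.0), (200, 260, 0.0), (400, 300, 0.0),
        (800, 520, 0.0), (1600, 1400, 0.5), (3200, 4800, 12.0),
    ]

    baseline_rt_ms = samples[0][1]
    for users, rt_ms, err_pct in samples:
        # flag the first load level where errors appear or response time doubles
        if err_pct > 1.0 or rt_ms > 2 * baseline_rt_ms:
            print(f"Saturation around {users} users: {rt_ms} ms avg, {err_pct}% errors")
            break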

Related

How to load test 10k requests per second using JMeter?

I need to load test my website with 10k req/sec for 1 hour using JMeter. I am confused about the values of loop count, number of threads, ramp-up period and duration.
Also, will my laptop (i5, 8 GB) be able to do that? If not, what is the alternative?
PS: I checked every related question/answer on Stack Overflow but couldn't find any help. Please don't mark this as a duplicate.
You can use the "Constant Throughput Timer": define the target throughput and calculate it based on "all active threads".
Define a maximum user count in your script high enough to sustain 10k req/sec.
Also, if you are using a Windows machine, I think you will face this issue: https://www.baselogic.com/2011/11/23/solved-java-net-bindexception-address-use-connect-issue-windows/
I would recommend distributed testing, i.e. using more than one machine.
The easiest way of configuring JMeter to send X requests per second is using either the Precise Throughput Timer or the Throughput Shaping Timer in combination with the Concurrency Thread Group. The number of threads needs to be sufficient; the exact number mainly depends on your application's response time: if response time is 1 second you will need 10k threads, if it's 500 ms you will need 5k threads, if it is 2 seconds you will need 20k threads, and so on.
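That thread-count arithmetic is just Little's Law (required concurrency = target throughput x average response time); a small sketch:

    import math

    def threads_needed(target_rps: float, avg_response_s: float) -> int:
        # Little's Law: concurrency = arrival rate x time each request occupies a thread
        return math.ceil(target_rps * avg_response_s)

    for rt in (0.5, 1.0, 2.0):
        print(f"{rt:.1f} s response time -> {threads_needed(10_000, rt):,} threads")
    # 0.5 s -> 5,000 threads; 1.0 s -> 10,000 threads; 2.0 s -> 20,000 threads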
Only you can answer whether your laptop can kick off the required number of virtual users, as there are too many factors to consider: the nature of the test, the size of the requests/responses, the number of pre/post processors and assertions, etc. Make sure to follow JMeter Best Practices and monitor CPU, RAM, network, etc. usage, e.g. with the JMeter PerfMon Plugin, because if your laptop is overloaded, JMeter won't be able to send requests fast enough and you will not reach 10k requests per second even if the server supports it. If your laptop's hardware specification is too low for the test scenario, you will have to go for Distributed Testing.
You have a number of issues in play
Test design. Use more than one load generator - in fact, use no fewer than three, evenly matched in hardware. Take one and run only one user of each type on it. This is your control set. If this set degrades at the same rate as your other load generators, then you have a common issue, likely the site. If the control set does not degrade but the other load generators do, then you likely have an overloaded generator. On the commercial test tool side of the fence, generating all load from one host has never been considered good practice in performance testing.
10K requests per second. This is substantial. I have worked on some top-20 e-commerce sites and I can tell you that even they do not receive this type of traffic at the origin servers. Why? Cache! Either there is a Content Delivery Network spreading the load across the country, OR there is a cache node directly in front of the load balancer(s) for the site (think Varnish Cache or equivalent), OR both, for a multi-staged cache. You might want to look for an objective reference in production to pin this to as a validation point, if and only if (IFF) your goal is to represent end-user behavior. Running a count of requests grouped by second from the HTTP access logs should validate this number (see the sketch below). Also, check the cache plan for fixed assets - it could be poorly managed, and load would drop significantly just by better managing the site's cache settings towards the client. If your goal is simply to saturate a SOAP/REST interface to the point of destruction, then you might have a better path.
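A rough sketch of that access-log validation, assuming a common/combined log format where the timestamp sits in square brackets (adjust the parsing for your server's format; the file path is a placeholder):

    from collections import Counter

    per_second = Counter()
    with open("access.log") as log:                  # hypothetical path
        for line in log:
            if "[" not in line:
                continue
            # combined log format: ... [10/Oct/2023:13:55:36 +0000] ...
            # take the timestamp up to whole seconds, dropping the timezone
            stamp = line.split("[", 1)[1].split(" ", 1)[0]
            per_second[stamp] += 1

    for second, hits in per_second.most_common(5):   # the five busiest seconds
        print(second, hits)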
If you are looking to take a particular SOAP or REST set of remote procedure calls to the point of destruction, consider a classical stress test. Start your test at zero load and increase it in the smallest step interval possible over the longest possible period of time. The physical analogy would be the classical hospital stress test, where a nurse comes around every minute and increases the speed OR the incline of the treadmill OR both until some end-of-test condition is reached. For the hospital test that is moving into oxygen debt, an inability to keep pace, etc.; for your application/interface it could be a doubling of response times from what is acceptable, saturation of a resource in the finite resource pool (CPU, disk, memory, network) on the back-end hosts, etc.

Importance of baseline testing over SLAs

Why do we compare performance test results with a baseline if we already have SLAs?
How are they related?
For example:
Transaction response time in the main test is 3 seconds.
SLA for the same transaction is 5 seconds.
Baseline for that transaction was 2 seconds.
How to compare these?
If the time is over the SLA, you have a critical production issue that needs to be addressed.
If the time is over the baseline, your server suffers from performance degradation that needs to be analysed, but with lesser importance.
Read more at testingperformance:
Any user action where the response time seems higher than expected can be traced, monitored and checked to determine if there are any inefficiencies.
As the workload is increased, the performance tester can look to see how the response times of transactions deviate from the baseline as the workload increases.
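The rule of thumb above, expressed as a minimal sketch; the 2/3/5-second figures come from the question's example, not any universal standard:

    def classify(measured_s: float, sla_s: float, baseline_s: float) -> str:
        if measured_s > sla_s:
            return "CRITICAL: over SLA - treat as a production issue"
        if measured_s > baseline_s:
            return "WARNING: degraded vs baseline - analyse, lower priority"
        return "OK: within baseline and SLA"

    print(classify(measured_s=3.0, sla_s=5.0, baseline_s=2.0))
    # -> WARNING: degraded vs baseline - analyse, lower priority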
This is a difficult question to answer - are you the recipient of an SLA (as in your system uses an external system with an SLA) or do you have to guarantee an SLA?
Typically people use "baseline" to mean the application as it is now, running in typical conditions and under typical load.
Typically, a response time SLA includes upper limits on load, or some kind of commercial ladder - guaranteeing a response time for unlimited traffic is often impossible without additional financial resources.
If your first performance test suggests that the actual response time is higher than baseline, it suggests that either you disagree about "typical" conditions, or that you've exceeded those typical conditions, or that the application's performance has deteriorated since the baseline was established. That's important information.
In general terms, response times and load do not have a linear relationship - if response time is 1 second with 100 users, it's usually not 10 seconds with 1000 users. Instead, response time tends to rise very slowly with load until you hit a bottleneck, at which point it rises very steeply.
I typically use performance testing to explore those bottlenecks, so I can decide how they fit with my desired performance characteristics, and work out how to move that bottleneck further away.
It's also worth noting that most systems have multiple bottlenecks, and the slowest element determines the overall performance characteristics. So even if you have an SLA for 5 second transactions in one part of your architecture, there may be other parts that are slower (or reach their bottleneck sooner).
So, why do you compare your load tests with baselines, even if you have an SLA?
Make sure that the baseline is still valid.
Make sure you understand the overall performance characteristics and can exceed the SLA in other parts of the system.
Verify you can meet the SLA.

Concurrent users projected to actual users

I need to provide the business with a report estimating the number of users (devices in this case) the system can cope with without extensive delays and errors.
Assuming each device polls/communicates with the server every 5 seconds or so, would it be acceptable to multiply the number of concurrent users I stress-test with by 5 to get the figure required by the business?
In general, what are the best means of answering such a question given the above factors?
I am guessing that the collision rate (what makes them concurrent) may well exceed the ratio of 5 (the number of seconds a device waits before it asks to communicate with the server again).
Any advice?
I am using JMeter to produce concurrent user/device throughput.
Edit as requested to explain further:
From an analytics point of view: if each device attempts to connect and communicate with the server every 5 seconds, and we wish to receive a response before it is ready to communicate again (in other words, within the next 4 seconds), then the chance of collision with other devices running the same software is a function of the elapsed time between the two calls, no?
I am really looking for a statistical methodology to find a factor by which to multiply the concurrent test results to project them onto a real environment.
I know it is a general question without a specific/explicit answer; what I am after is the methodology, if there is one, for projecting the number of "active" users the system can cope with from the known "concurrent" users. Given that the frequency of calls is known and that each call takes 300 ms on average, I would have thought one could project the actual users (maybe by an industry-standard multiplier?).
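One way to frame the projection is Little's Law; a sketch using the question's figures (one 300 ms call per device every 5 seconds), treating the result as an idealised average, since real arrivals bunch up:

    poll_interval_s = 5.0    # each device calls the server every 5 seconds
    avg_call_s = 0.3         # each call takes 300 ms on average

    busy_fraction = avg_call_s / poll_interval_s    # a device occupies the server 6% of the time
    devices_per_concurrent = 1 / busy_fraction      # ~16.7 devices per concurrent request

    tested_concurrency = 200                        # hypothetical: concurrency your JMeter test sustained
    print(f"~{tested_concurrency * devices_per_concurrent:,.0f} devices supported on average")
    # Peaks exceed this average, so apply a safety margin before reporting it to the business.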

How to calculate the average case after an HTTP benchmark

If I do a benchmark and, for example, find the following:
With 1 concurrent user, the API gives 150 req/s (9,000 req/minute).
With more than 300 concurrent users, the API starts throwing exceptions.
Each app instance makes 1 request every 30 minutes.
Is it correct if I say:
The best case is that the API could handle 30 × 9,000 = 270,000 users. That is, within 30 minutes there would be 270,000 sequential requests, each coming from a different user.
The worst case would be 300 users posting requests at the same time.
And if that's true, is there any way to calculate the average case?
Is it the same as calculating the worst-case and average-case complexity of an algorithm?
One theoretical tool to answer these questions is http://en.wikipedia.org/wiki/Queueing_theory. It says that you are very unlikely to get the level of performance that you are assuming, because the load applied to the system fluctuates, so that there are busy periods and quiet periods. If the system has nothing to do in quiet periods it is forced into idleness that you haven't accounted for. In busy periods, on the other hand, it will typically build up long queues of pending work, until the queues get so long that customers walk away, or the queues become longer than the system can support and it collapses, or both.
Figure 1 on page 3 of http://pages.cs.wisc.edu/~dsmyers/cs547/lecture_12_mm1_queue.pdf shows response time vs. applied load for what is probably the most optimistic even vaguely realistic situation. You can see that response time gets very large as you approach maximum load.
By far the most sensible thing to do is to run tests which apply a realistic load to your application - this is important enough that people build tools like http://jmeter.apache.org/. If you want a rule of thumb, I'd say don't plan to stress the system at more than 50% of the theoretical capacity you originally calculated.
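To see why a ~50% margin is reasonable, here is the M/M/1 mean time in system, W = 1 / (mu - lambda), sketched with the question's 150 req/s capacity; a single-server idealisation, not a prediction for your API:

    # M/M/1 mean time in system: W = 1 / (mu - lambda), valid only for lambda < mu.
    service_rate = 150.0   # req/s the API sustains with one user (~6.7 ms service time)

    for utilisation in (0.5, 0.8, 0.9, 0.95, 0.99):
        arrival_rate = utilisation * service_rate
        w_ms = 1000 / (service_rate - arrival_rate)
        print(f"{utilisation:.0%} load -> {w_ms:7.1f} ms mean response time")
    # Response time explodes as load approaches capacity, hence the planning margin.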

How to decide what hardware to deploy a web application on

Suppose you have a web application, no specific stack (Java/.NET/LAMP/Django/Rails, all good).
How would you decide on which hardware to deploy it? What rules of thumb exist when determining how many machines you need?
How would you formulate parameters such as concurrent users, simultaneous connections, daily hits and DB read/write ratio to a decision on how much, and which, hardware you need?
Any resources on this issue would be very helpful...
Specifically - any hard numbers from real world experience and case studies would be great.
Capacity Planning is quite a detailed and extensive area. You'll need to accept an iterative model with a "Theoretical Baseline > Load Testing > Tuning & Optimizing" approach.
Theory
The first step is to decide on the business requirements: how many users are expected at peak usage? Remember, these numbers are usually inaccurate by some margin.
As an example, let's assume that all the peak traffic (at worst case) arrives within 4 hours of the day. So if the website expects 100K hits per day, we don't divide that over 24 hours, but over 4 hours instead. My site now needs to support peak traffic of 25K hits per hour.
This breaks down to 417 hits per minute, or 7 hits per second. This is on the front end alone.
Add to this the number of internal transactions: database operations, any file I/O per user, any batch jobs which might run within the system, reports, etc.
Tally all these up to get the number of transactions per second, per minute, etc. that your system needs to support; see the sketch below.
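The arithmetic above as a small sketch; the internal-transaction multiplier is a hypothetical placeholder for whatever your tally produces:

    hits_per_day = 100_000
    peak_window_h = 4                    # assume all peak traffic lands in 4 hours

    front_end_per_h = hits_per_day / peak_window_h     # 25,000 hits/hour
    front_end_per_s = front_end_per_h / 3600           # ~6.9, round up to 7

    internal_tx_per_hit = 3              # hypothetical: DB ops, file I/O, etc. per hit
    total_tps = front_end_per_s * (1 + internal_tx_per_hit)

    print(f"{front_end_per_h:,.0f} hits/hour, {front_end_per_s:.1f} hits/sec front end")
    print(f"~{total_tps:.0f} transactions/sec including internal work")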
This gets further complicated when you have requirements such as "average response time must be 3 seconds", which means you have to factor in network latency, firewalls, proxies, etc.
Finally - when it comes to choosing hardware, check out the published datasheets from each manufacturer such as Sun, HP, IBM, Windows etc. These detail the maximum transactions per second under test conditions. We usually accept 50% of those peaks under real conditions :)
But ultimately the choice of the hardware is usually a commercial decision.
Also, you need to keep a minimum of 2 servers at each tier (web / app / even DB) for failover clustering.
Load testing
It's recommended to have a separate reference testing environment throughout the project lifecycle and post-launch so you can come back to run dedicated performance tests on the app. Scale this to be a smaller version of production, so if Prod has 4 servers and Ref has 1, then you test for 25% of the peak transactions etc.
Tuning & Optimizing
Too often, people throw expensive hardware together and expect it all to work beautifully. You'll need to tune the hardware and OS for various parameters such as TCP timeouts - these are published by the software vendors and have to be applied once the software is finalized. Set these tuning params on the Ref env, test, and then decide which ones you need to carry over to Production.
Determine your expected load.
Set up a machine and run some tests against it with a load testing tool.
How close are you? If you only achieved 10% of the peak load, with some margin for error, then you know you are going to need load balancing. Design and implement a solution and test again. Make sure your solution is flexible enough to scale.
Trial and error is pretty much the way to go. It really depends on the individual app and usage patterns.
Test your app with a sample load and measure performance and load metrics. DB queries, disk hits, latency, whatever.
Then get an estimate of the expected load when deployed (go ask the domain expert) (you have to consider average load AND spikes).
Multiply the two and add some just to be sure. That's a really rough idea of what you need.
Then implement it, keeping in mind you usually won't scale linearly and you probably won't get the expected load ;)
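A rough sketch of that multiply-and-pad estimate; all figures are hypothetical, and the 50% derating echoes the datasheet advice above:

    import math

    measured_tps_per_server = 400   # hypothetical: from load testing one machine
    expected_peak_tps = 1500        # hypothetical: domain expert's estimate incl. spikes
    derating = 0.5                  # only count on ~50% of benchmarked capacity
    headroom = 1.25                 # "add some just to be sure"

    servers = math.ceil(expected_peak_tps * headroom / (measured_tps_per_server * derating))
    print(f"Plan for ~{servers} servers (and at least 2 per tier for failover)")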