Concurrent User Calculation

I am trying to calculate the average concurrent users using the formula below:
Average Concurrent Users = Visits per Hour / (60 min/hour / Average Visit Duration)
Visits per hour: 750
Average visit duration: 1.6 minutes (the amount of time a user spends on the use case)
Thus the average concurrent users come out to around 20.
Now I made some performance improvements and the average visit duration came down to 1.2 minutes. Using the formula again, the average concurrent users now come out to around 15.
Logically, when we make a performance improvement, the concurrent users should increase rather than decrease. Is there a problem with the calculation?

You are correct. Concurrent user sessions will decrease if the average session time decreases and all else remains the same. This can be a good thing, if users are able to do their business more quickly and get on with their lives.
For performance tuning and capacity planning, measuring concurrent sessions is much less useful than raw requests per second (throughput) and average or median response time (latency).
Think about it this way: when a user is reading a web page they downloaded, the server isn't doing anything. While 1,000 users are reading pages, the server still isn't doing anything. The only parts of a user session that matter are between the click and the response.
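To make the arithmetic explicit, here is a minimal sketch using the numbers from the question. The formula is just Little's Law: concurrent users = arrival rate × time each user spends in the system.

```python
# Little's Law: concurrent users = arrival rate * time each user spends in the system.
# Numbers taken from the question; a "visit" is the time spent on the use case.
def avg_concurrent_users(visits_per_hour, avg_visit_minutes):
    visits_per_minute = visits_per_hour / 60.0
    return visits_per_minute * avg_visit_minutes

print(avg_concurrent_users(750, 1.6))  # 20.0 -- before tuning
print(avg_concurrent_users(750, 1.2))  # 15.0 -- after tuning: fewer users in flight at once
```

Fewer users in flight at the same arrival rate is exactly what a faster site should produce.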

Related

I have got a requirement for load testing. I was given both average and peak TPH, as well as average and peak user load.

I have to do performance testing of an e-commerce application, and I have the details needed: average TPH and peak TPH, as well as average users and peak users.
For example: an average of 1,000 orders/hour, a peak of 3,000 orders/hour during the holiday season, expected to grow to 6,000 orders/hour next holiday season.
I am unsure which values to use for concurrent users and TPH when performing a one-hour load test.
Also, what load would be preferable for stress testing and scalability testing?
This would be a great help, not only from the testing point of view but also for understanding the concepts, which would help me a great deal down the line.
This is a high-business-risk endeavor. Get it wrong and your ledger doesn't go from red to black on the day after Thanksgiving, plus you have a high probability of winding up with a bad public-relations event on Twitter. Add to that, more than 40% of people who hit a website failure will not return.
That being said, do your skills match the risk to the business? If not, the best thing to do is to advise your management to acquire a higher-skilled team. Then you should shadow them in all of their actions.
I think it helps to have some numbers here. There are roughly 35 days in this year's holiday shopping season, which translates to 840 hours. Based on an average of 1,000 sales per hour over those 840 hours:
$25 avg sale: revenue of $21 million
$50 avg sale: $42 million
$100 avg sale: $84 million
Every hour of downtime at peak (3,000 orders per hour) costs you:
$25 avg sale: $75K
$50 avg sale: $150K
$100 avg sale: $300K
If you have downtime, then more than 40% of those people will not return, based on the latest studies. And you have the Twitter effect, where people complain loudly and drive off potential site visitors.
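For what it's worth, those figures are easy to reproduce; a minimal sketch of the arithmetic (the dollar values are just the illustrative averages from the list above):

```python
SEASON_HOURS = 35 * 24          # ~35-day holiday season = 840 hours
AVG_ORDERS_PER_HOUR = 1000      # average load from the question
PEAK_ORDERS_PER_HOUR = 3000     # peak load from the question

for avg_sale in (25, 50, 100):
    season_revenue = SEASON_HOURS * AVG_ORDERS_PER_HOUR * avg_sale
    downtime_cost = PEAK_ORDERS_PER_HOUR * avg_sale  # revenue lost per hour down at peak
    print(f"${avg_sale} avg sale: ${season_revenue / 1e6:.0f}M season revenue, "
          f"${downtime_cost / 1000:.0f}K lost per peak hour of downtime")
```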
I would advise you to bring in a team. Act fast; the really good engineers are quickly being snapped up for holiday work. These are not numbers to take lightly, nor is it work to press someone into who hasn't done it before.
If you are seriously in need, and your marketing department knows exactly how much increased conversion they get from a faster website, then I can find someone for you. They will do the work upfront at no charge, but they will charge a 12-month residual based upon the decrease in response time and the increased conversion that results.
Normally performance testing is not limited to only one scenario; you need to run different performance test types to assess various aspects of your application.
Load Testing - the goal is to check how your application behaves under the anticipated load; in your case that would mean simulating 1,000 orders per hour (see the pacing sketch below).
Stress Testing - putting the application under test under the maximum anticipated load (in your case 3,000 TPH). Another approach is gradually increasing the load until response time starts exceeding acceptable thresholds or errors start occurring (whichever comes first), or up to 6,000 TPH if you don't plan to scale up. This way you will be able to identify the bottleneck and determine which component fails first, which could be:
lack of hardware power
problems with the database
inefficient algorithms used in your application
You can also consider executing a Soak Test - putting your application under prolonged load; this way you will be able to catch the majority of memory leaks.
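One practical detail is turning an orders-per-hour target into per-thread pacing for the test tool. A rough sketch of the arithmetic (the thread count of 50 is an assumption you would take from your own user model; in JMeter a target like this is usually enforced with something like the Constant Throughput Timer):

```python
# Convert a transactions-per-hour target into pacing (seconds between
# iterations) for each virtual user. thread_count is an assumption.
def pacing_seconds(target_tph, thread_count):
    tps = target_tph / 3600.0            # overall transactions per second
    per_thread_tps = tps / thread_count  # each thread's share of the load
    return 1.0 / per_thread_tps

print(pacing_seconds(1000, 50))  # 180.0 s between orders per thread at average load
print(pacing_seconds(3000, 50))  # 60.0 s between orders per thread at peak load
```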

JMeter TPS adjustment

Do we need to adjust the throughput reported by JMeter to find out the actual TPS of the system?
For example: I am getting 100 TPS with 250 concurrent users. This ran for 10 hours. Can I conclude that my software can handle 100 transactions per second, or do I need to make some adjustment to get a value? I am asking because when the load starts, the system takes some time to reach an adequate performance level (warm-up time). If so, how do I do this? Please help me to understand this.
By default JMeter sends requests as fast as it can; the main factors affecting the TPS rate are:
number of threads (virtual users) - this you can define in Thread Group
your application response time - this is not something you can control
Ideally, when you increase the number of threads, the TPS should increase by the same factor: if you have 250 users and are getting 100 TPS, you should get 200 TPS with 500 users. If this is not the case, those 500 users are beyond the saturation point, and your application's bottleneck is somewhere between 250 and 500 users (if not earlier).
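That linear-scaling check is easy to automate; a tiny sketch (the 130 TPS observation for 500 users is hypothetical, just to show the comparison):

```python
# If throughput scales linearly with thread count, the system is below saturation.
baseline_users, baseline_tps = 250, 100
observed = {500: 130}  # hypothetical measurement: 500 users reached only 130 TPS

for users, tps in observed.items():
    expected = baseline_tps * users / baseline_users
    if tps < expected * 0.9:  # allow ~10% tolerance for noise
        print(f"{users} users: {tps} TPS vs ~{expected:.0f} expected -> past saturation")
    else:
        print(f"{users} users: scaling roughly linearly")
```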
With regards to "warm-up" time: the recommended approach is to apply the load gradually. This way you allow your application to prepare for the increasing load, warm up caches, let the JIT compiler/optimizer do its work, etc. Moreover, this way you will be able to correlate the increasing load with increasing/decreasing throughput, response time, number of errors, etc., whereas releasing 250 users at once doesn't tell the full story.
The system warm-up period varies from one system to another. The warm-up period is when configurations are cached, various libraries are initialized (e.g. Builder.init()), and other one-time work happens that usually doesn't recur on subsequent calls. If you study the results of the load test, there is a slow period at the very beginning. For most systems it could be as small as 5 to 10 minutes. These values could even be negligible if the test is as long as 10 hours. But then again, the average calculation can be affected if the results contain extreme values at the start (it always depends on the jump from the initial warm-up period to normal operation).
As for the JMeter configuration, this thread may explain it: How to exclude warmup time from JMeter summary?
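If you post-process the results yourself, the idea is simply to drop samples from the warm-up window before computing the summary. A sketch, assuming a JMeter CSV results file (results.jtl is a placeholder name) with the default timeStamp (epoch ms) and elapsed (ms) columns:

```python
import csv

WARMUP_MINUTES = 10  # assumption: treat the first 10 minutes as warm-up

with open("results.jtl", newline="") as f:
    rows = list(csv.DictReader(f))

start_ms = min(int(r["timeStamp"]) for r in rows)
cutoff_ms = start_ms + WARMUP_MINUTES * 60 * 1000
steady = [r for r in rows if int(r["timeStamp"]) >= cutoff_ms]

avg_ms = sum(int(r["elapsed"]) for r in steady) / len(steady)
duration_s = (max(int(r["timeStamp"]) for r in steady) - cutoff_ms) / 1000.0
print(f"steady-state average response: {avg_ms:.0f} ms, "
      f"throughput: {len(steady) / duration_s:.1f} req/s")
```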

JMeter throughput results differ although average is similar

OK, so I ran some stress tests on an application of mine and I came across some weird results compared to last time.
The throughput was way off, although the averages are similar.
The number of samples did vary; however, as I understand it, the throughput is calculated by dividing the number of samples by the time the run took.
In my understanding, if the average time was similar, the throughput should be similar even though the sample counts varied...
This is what I have:
[screenshots: PREVIOUS and RECENT result tables]
As you can see the throughput difference is pretty substantial...
Can somebody please explain to me whether my logic is correct, or point out why it is not the case?
Throughput is the number of requests per unit of time (seconds, minutes, hours) that are sent to your server during the test.
The throughput is the real load processed by your server during a run but it does not tell you anything about the performance of your server during this same run. This is the reason why you need both measures in order to get a real idea about your server’s performance during a run. The response time tells you how fast your server is handling a given load.
The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
Throughput = (number of requests) / (total time).
Average: the arithmetic mean of the response times of all samples, μ = (1/n) · Σ(i=1..n) xᵢ.
Response time is the elapsed time from the moment when a given request is sent to the server until the moment when the last bit of information has returned to the client.
So these are two different things.
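A small worked example of how the two can diverge: two runs with identical average response times but different gaps between samples (the timestamps are made up for illustration):

```python
# Each run is a list of (start_s, elapsed_s) samples.
# Averages match, but throughput does not, because total wall-clock time differs.
def stats(samples):
    avg = sum(elapsed for _, elapsed in samples) / len(samples)
    total = (samples[-1][0] + samples[-1][1]) - samples[0][0]  # first start -> last end
    return avg, len(samples) / total

run_a = [(0, 1.0), (1, 1.0), (2, 1.0), (3, 1.0)]    # back-to-back requests
run_b = [(0, 1.0), (5, 1.0), (10, 1.0), (15, 1.0)]  # same latency, long idle gaps

print(stats(run_a))  # (1.0, 1.0)  -> avg 1.0 s, 1.00 req/s
print(stats(run_b))  # (1.0, 0.25) -> avg 1.0 s, 0.25 req/s
```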
Think of a trip to Disney or your favorite amusement park. Let's define the capacity of the ride as the number of people that can sit on the ride per turn (think roller coaster). Throughput will be the number of people that exit the ride per unit of time. Let's define service time as the amount of time you get to sit on the ride, and response time as your time queuing for the ride plus the service time.

Estimating maximum users that an application can support

I am analyzing a web application and want to predict the maximum number of users the application can support. Now I have the numbers below from my load test execution:
1. Response Time
2. Throughput
3. CPU
I have the application use-case SLA:
Response Time - 4 Secs
CPU - 65%
When I execute a load test with 10 concurrent users (without think time) for a particular use case, the average response time reaches 3.5 seconds and CPU touches 50%. Next I execute a load test with 20 concurrent users, and response time reaches 6 seconds and CPU 70%, thus surpassing the SLA.
The application server configuration is 4 core 7 GB RAM.
Going by the data, does this suggest that the web application can support only 10 users at a time? Is there any formula or procedure that can suggest the maximum number of users the application can support?
TIA
"Concurrent users" is not a meaningful measurement, unless you also model "think time" and a couple of other things.
Think about the case of people reading books on a Kindle. An average reader will turn the page every 60 seconds, sending a little ping to a central server. If the system can support 10,000 of those pings per second, how many "concurrent users" is that? About 10,000 * 60, or 600,000. Now imagine that people read faster, turning pages every 30 seconds. The same system will only be able to support half as many "concurrent users". Now imagine a game like Halo online. Each user will be emitting multiple transactions / requests per second. In other words, user behavior matters a lot, and you can't control it. You can only model it.
So, for your application, you have to make a reasonable guess at the "think time" between requests, and add that to your benchmark. Only then will you start to approach a reasonable simulation. Other things to think about are session time, variability, time of day, etc.
Chapter 4 of the "Mature Optimization Handbook" discusses a lot of these issues: http://carlos.bueno.org/optimization/mature-optimization.pdf
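To make the think-time point concrete, here is a minimal sketch of the Kindle arithmetic from this answer, which is again just Little's Law:

```python
# Users supported = server capacity (requests/s) * seconds of think time per request.
def supported_users(server_rps, think_time_s):
    return server_rps * think_time_s

print(supported_users(10_000, 60))  # 600,000 readers turning a page every 60 s
print(supported_users(10_000, 30))  # 300,000 if they read twice as fast
```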

How to calculate the average case after doing an HTTP benchmark

If I do a benchmark and, for example, I find the following:
With 1 concurrent user, the API gives 150 req/s (9,000 req/minute).
With more than 300 concurrent users, the API starts throwing exceptions.
An app makes one request every 30 minutes.
Is it correct if I say:
The best case is that the API could handle 30 × 9,000 = 270,000 users. That is, within 30 minutes there would be 270,000 sequential requests, each coming from a different user.
The worst case would be when 300 users post requests at the same time.
And if that's true, would there be any way to calculate the average case?
Is it the same as calculating the worst-case and average-case complexity of an algorithm?
One theoretical tool to answer these questions is http://en.wikipedia.org/wiki/Queueing_theory. It says that you are very unlikely to get the level of performance that you are assuming, because the load applied to the system fluctuates, so that there are busy periods and quiet periods. If the system has nothing to do in quiet periods it is forced into idleness that you haven't accounted for. In busy periods, on the other hand, it will typically build up long queues of pending work, until the queues get so long that customers walk away, or the queues become longer than the system can support and it collapses, or both.
The graph at figure 1 page 3 of http://pages.cs.wisc.edu/~dsmyers/cs547/lecture_12_mm1_queue.pdf shows a graph of response time vs applied load for what is probably the most optimistic even vaguely realistic situation. You can see that response time gets very large as you approach maximum load.
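The shape of that graph falls out of the M/M/1 formula for mean response time, W = 1/(μ − λ). A sketch, using the question's single-user benchmark of 150 req/s as the service rate (an assumption; real systems are rarely this clean):

```python
# M/M/1 queue: mean response time W = 1 / (mu - lam), valid only for lam < mu.
def mm1_response_time(lam, mu):
    assert lam < mu, "queue is unstable at or beyond full utilization"
    return 1.0 / (mu - lam)

mu = 150.0  # service rate: the 150 req/s measured with a single user
for utilization in (0.5, 0.8, 0.9, 0.99):
    lam = utilization * mu  # applied load as a fraction of capacity
    w_ms = mm1_response_time(lam, mu) * 1000
    print(f"utilization {utilization:.0%}: mean response time {w_ms:.0f} ms")
```

Even in this idealized model, the mean response time at 99% utilization is about fifty times what it is at half load, which is why the 50% rule of thumb below is less conservative than it sounds.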
By far the most sensible thing to do is to run tests which apply a realistic load to your application - this is important enough for people to build things like http://jmeter.apache.org/. If you want a rule of thumb I'd say don't plan to stress the system at more than 50% of theoretical capacity as you originally calculated.
