Question about Little's Law - algorithm

I know that Little's Law states (paraphrased):
the average number of things in a system is the product of the average rate at which things leave the system and the average time each one spends in the system,
or:
n = x * (r + z)
x - throughput
r - response time
z - think time
r + z - the average time each request spends in the system (response time plus think time)
Now I have a question about a problem from Programming Pearls:
Suppose that a system makes 100 disk accesses to process a transaction (although some systems require fewer, some will require several hundred disk accesses per transaction). How many transactions per hour per disk can the system handle?
Assumption: disk access takes 20 milliseconds.
Here is the solution given for this problem:
Ignoring slowdown due to queuing, 20 milliseconds (of the seek time) per disk operation gives 2 seconds per transaction or 1800 transactions per hour
I am confused because I do not understand this solution.
Please help.

It will be more intuitive if you forget about that formula and think that the rate at which you can do something is inversely proportional to the time that it takes you to do it. For example, if it takes you 0.5 hour to eat a pizza, you eat pizzas at a rate of 2 pizzas per hour because 1/0.5 = 2.
In this case the rate is the number of transactions per time and the time is how long a transaction takes. According to the problem, a transaction takes 100 disk accesses, and each disk access takes 20 ms. Therefore each transaction takes 2 seconds total. The rate is then 1/2 = 0.5 transactions per second.
Now, more formally:
The rate of transactions per second R is the reciprocal of the transaction time in seconds TT.
R = 1/TT
The transaction time TT in this case is:
TT = disk access time * number of disk accesses per transaction =
20 milliseconds * 100 = 2000 milliseconds = 2 seconds
R = 1/2 transactions per second
= 3600/2 transactions per hour
= 1800 transactions per hour
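If it helps to see the arithmetic spelled out, here is a minimal sketch in Python (the variable names are mine, not from the book):

```python
# Back-of-the-envelope arithmetic from the answer above.
disk_access_ms = 20                # assumed time per disk access
accesses_per_transaction = 100

transaction_time_s = disk_access_ms * accesses_per_transaction / 1000  # 2.0 s
rate_per_second = 1 / transaction_time_s                               # 0.5 tx/s
rate_per_hour = rate_per_second * 3600                                 # 1800 tx/h

print(rate_per_second, rate_per_hour)  # 0.5 1800.0

# Little's Law sanity check with no think time (z = 0): n = x * (r + z)
# n = 0.5 tx/s * 2 s = 1, i.e. exactly one transaction in flight,
# which corresponds to the single disk being busy the whole time.
n_in_system = rate_per_second * transaction_time_s
print(n_in_system)  # 1.0
```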

Related

Why is the average response time decreasing when we increase the number of users?

I am using JMeter to run a performance test with different numbers of users. With 1 user the average response time is 1.4 seconds; with more users it would seem logical for the average response time to go up, but instead it is going down. Can anyone explain why? The test scenario is that I interact a few times (2-3 interactions) with a chat bot.
Please help me understand the confusing results below:
1 user - 30 seconds - 1.3 seconds (average response time)
5 users - 60 seconds - 0.92 seconds (average response time)
10 users - 60 seconds - 0.93 seconds (average response time)
20 users - 120 seconds - 0.92 seconds (average response time)
The first iteration of the first user often involves some overhead on the client side (most commonly DNS resolution) and can have some overhead on the server side (server "warm-up"). That overhead is not incurred in the following iterations or by subsequent users.
Thus what you see as a reduction in average time is actually a reduction of the impact of the slower "first user, first iteration" execution on the overall result. This is why it's important to collect a sufficiently large sample, so that such a local spike no longer matters much. My rule of thumb is at least 10,000 iterations before looking at any averages, although the comfort level is up to each tester to set.
Also, when increasing the number of users, you should not expect the average to get worse unless you have reached a saturation point; it should stay stable. So if you expect your app to support no more than 20 users, then your result is surprising, but if you expect the application to support 20,000 users, you should not see any degradation of the average at 20 users.
To test whether this is what happens, try running 1 user for much longer, so that the total number of iterations is similar to running 20 users. Roughly, you need to increase the duration of the 1-user test to about 20 minutes to get a similar number of iterations (i.e. the same test length would be 120 seconds, but with 20 users you get 20x the iterations, giving roughly 20 minutes total for 1 user).
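To make the "dilution of the first iteration" effect concrete, here is a small sketch with made-up numbers (the 8-second one-off warm-up and the 0.9-second warm response time are illustrative assumptions, not values from the question):

```python
# Illustrative only: one slow first iteration plus uniformly "warm" iterations.
warm_response_s = 0.9       # assumed steady-state response time
first_iteration_s = 8.0     # assumed one-off warm-up cost (DNS, server warm-up, ...)

def average_response(total_iterations: int) -> float:
    """Average when exactly one iteration pays the warm-up cost."""
    total = first_iteration_s + warm_response_s * (total_iterations - 1)
    return total / total_iterations

for n in (20, 60, 120, 10_000):
    print(n, round(average_response(n), 3))
# 20     1.255   <- small sample: the spike dominates the average
# 60     1.018
# 120    0.959
# 10000  0.901   <- large sample: the average converges to the warm time
```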

How do I achieve the expected throughput in JMeter for a given scenario?

I have about 300 users (configured in the thread group) who would each perform an activity (e.g. run an e-learning course) twice. That means I should expect about 600 iterations, i.e. 300 users performing an activity twice.
My thread group contains the following transaction controllers:
Login
Dashboard
Launch Course
Complete Course
Logout
As I need 600 iterations over 5400 seconds, i.e. 3600 + 900 + 900 seconds (1 hour steady state + 15 minutes ramp-up + 15 minutes ramp-down), and the total number of sampler requests within the thread group is 18, would I be correct to say I need about 2 RPS?
Total number of iterations * number of requests per iteration = Total number of requests
600 * 18 = 10800
Total number of requests / Total test duration in seconds = Requests per second
10800 / 5400 = 2
Are my calculations correct?
In addition, what is the best approach to achieve the expected throughput?
Your calculation looks more or less correct. If you need to limit your test throughput to 2 RPS, you can do it using the Constant Throughput Timer or the Throughput Shaping Timer.
However, 2 RPS is nothing more than statistical noise; my expectation is that you need a much higher load to really test your application's performance, i.e.:
Simulate the anticipated number of users for a short period. Don't worry about iterations, just let your test run, e.g. for an hour, with the number of users you expect. This is called load testing.
Do the same, but for a longer period of time (e.g. overnight or over a weekend). This is called soak testing.
Gradually increase the number of users until you see errors or response times start exceeding acceptable thresholds. This is called stress testing.
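For reference, a minimal sketch of the throughput arithmetic above, together with the equivalent Constant Throughput Timer target (the timer is configured in samples per minute); the variable names are my own:

```python
# Figures from the question.
users = 300
activities_per_user = 2
requests_per_iteration = 18
test_duration_s = 3600 + 900 + 900        # steady state + ramp-up + ramp-down = 5400 s

iterations = users * activities_per_user               # 600
total_requests = iterations * requests_per_iteration   # 10800
target_rps = total_requests / test_duration_s          # 2.0 requests per second

# Constant Throughput Timer takes its target in samples per minute.
ctt_target_per_minute = target_rps * 60                # 120.0

print(target_rps, ctt_target_per_minute)               # 2.0 120.0
```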

Unable to increase average throughput in JMeter

I have set the number of threads and ramp-up time to 1/1 and I am iterating my 1000 records from data.csv for 1800 seconds.
Now, given those numbers, I have set the Constant Throughput Timer (CTT) to 2000 per minute and expected the average throughput to be 2000/60 = 33.3/sec, but I get 18.7/sec. When I increased the target to 4000 per minute, I still get 18 or 19/sec.
The Constant Throughput Timer cannot force threads to execute faster; it can only pause threads to limit JMeter's throughput to the defined value.
Each JMeter thread executes samplers as fast as it can, but the next iteration won't start until the previous one has finished, so with 1 thread the throughput can be no higher than 1 / (application response time).
Also be aware that the Constant Throughput Timer is only precise at the minute level, so think in terms of "requests per minute" rather than "requests per second". If your test is shorter than 1 minute, consider using the Throughput Shaping Timer instead.
So I would recommend increasing the number of virtual users, e.g. to 50.
See How to use JMeter's Constant Throughput Timer for more details.
I guess your application's average response time is around 50 ms, which means a single thread can only perform about 20 hits/sec (1 sec / 0.05 sec per hit = 20 hits/sec).
You have 2 solutions:
increase the number of threads to parallelize the requests sent,
or make your app respond faster (obviously harder).
At some point, when your application can't handle more load, you should see the hits/sec drop and the average response time increase.
(The original answer included a graph showing an application with a steady response time up to 20 concurrent threads.)
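A rough sketch of the reasoning in both answers: a single thread's throughput is capped by the response time, so reaching a higher target requires more threads (the 50 ms response time is the guess from the answer above; the calculation ignores think time, timer pauses and any headroom you would normally add):

```python
import math

avg_response_time_s = 0.05            # assumed ~50 ms average response time
target_rps = 2000 / 60                # CTT target of 2000/min from the question

# One thread can complete at most 1 / response_time requests per second.
max_rps_per_thread = 1 / avg_response_time_s          # 20.0 req/s

# Minimum threads needed to reach the target (real tests usually add headroom).
threads_needed = math.ceil(target_rps / max_rps_per_thread)

print(max_rps_per_thread, round(target_rps, 1), threads_needed)  # 20.0 33.3 2
```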

How is throughput calculated and displayed in seconds, minutes and hours in JMeter?

I have an observation and want to understand the throughput calculation. Sometimes throughput is displayed in seconds, sometimes in minutes and sometimes in hours. Can anyone explain exactly how throughput is calculated, and when it is displayed in seconds, minutes or hours in the JMeter Summary Report?
From JMeter Docs:
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server. The formula is: Throughput = (number of requests) / (total time).
The time unit varies based on the throughput value.
Examples:
If 10 requests are sent in 10 seconds, the throughput is 10/10 = 1/sec.
If 1 request is sent in 10 seconds, the throughput is 1/10 = 0.1/sec = 6/min (rather than showing 0.1/sec as a small decimal, it is automatically shown in the next larger time unit).
The point is to avoid small values (like 0.1, 0.001, etc.). In such cases a larger time unit is easier to read, and every time unit is equally correct. It is purely a matter of usability.
So:
1/sec = 60/min = 3600/hour, i.e. the same rate.
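As an illustration of that scaling rule, here is a small sketch that mimics the described behavior (this is just the arithmetic, not JMeter's actual rendering code):

```python
def format_throughput(requests: int, seconds: float) -> str:
    """Show throughput in the smallest time unit that keeps the value >= 1."""
    per_second = requests / seconds
    if per_second >= 1:
        return f"{per_second:.1f}/sec"
    per_minute = per_second * 60
    if per_minute >= 1:
        return f"{per_minute:.1f}/min"
    return f"{per_minute * 60:.1f}/hour"

print(format_throughput(10, 10))    # 1.0/sec
print(format_throughput(1, 10))     # 6.0/min
print(format_throughput(1, 7200))   # 0.5/hour
```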

Concurrent User Calculation

I am trying to calculate the average number of concurrent users using the formula below:
Average Concurrent Users = Visits per hour / (60 min per hour / average visit length in minutes)
Visits per hour: 750
Average visit: 1.6 minutes (the amount of time a user spends on the use case)
Thus the average number of concurrent users comes to around 20.
Now I made some performance improvements and the average visit came down to 1.2 minutes. Using the formula again, the average number of concurrent users comes to around 15.
Logically, when we make a performance improvement, the number of concurrent users should increase rather than decrease. Is there a problem with the calculation?
You are correct. Concurrent user sessions will decrease if the average session time decreases and all else remains the same. This can be a good thing, if users are able to do their business more quickly and get on with their lives.
For performance tuning and capacity planning, measuring concurrent sessions is much less useful than raw requests per second (throughput) and average or median response time (latency).
Think about it this way: when a user is reading a web page they downloaded, the server isn't doing anything. While 1,000 users are reading pages, the server still isn't doing anything. The only parts of a user session that matter are between the click and the response.
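The formula in the question is Little's Law in disguise: average concurrent users = arrival rate x average time in the system. A minimal sketch with the numbers from the question:

```python
def avg_concurrent_users(visits_per_hour: float, avg_visit_min: float) -> float:
    # Little's Law: N = arrival rate * time in system
    visits_per_minute = visits_per_hour / 60
    return visits_per_minute * avg_visit_min

print(avg_concurrent_users(750, 1.6))  # 20.0
print(avg_concurrent_users(750, 1.2))  # 15.0 (shorter visits -> fewer concurrent sessions)
```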

Resources