Recently in an interview I was asked to find the number of Vusers using throughput and response time.
Question:
Find the number of Vusers for a throughput of 1260 bits per second and a response time of 2 milliseconds. The duration of the test run to achieve these results is 1 hour.
When I asked about think time and pacing, he said there is none, so it's zero.
So, as per Little's Law, I calculated it as response time * throughput:
1260 * 0.002 = 2.52, or 3. He said it's wrong.
Is there anything I am missing here? If yes, please let me know. With a response time of 2 milliseconds, which is rare, I think 3 users should be fine. But if I am wrong, what is the correct calculation?
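For reference, the arithmetic the asker applied is Little's Law, N = X * R. A minimal sketch, assuming the 1260 figure is treated as a request rate (the "bits per second" unit in the question is odd for Little's Law, which expects requests per unit time):

```python
# Little's Law: N = X * R
#   N = concurrent users (Vusers), X = throughput, R = response time
# Assumption: 1260 is taken as a request rate per second, as the asker did;
# "bits per second" is not a request rate, so the units in the question are suspect.
throughput = 1260          # requests per second (assumed)
response_time = 0.002      # 2 milliseconds, expressed in seconds

users = throughput * response_time
print(users)               # 2.52 -> rounded up, 3 virtual users
```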
You do not want to work for this person.
By collapsing the time between requests to zero, your interviewer has collapsed the client-server model, which is predicated on a delay between requests from any single client during which requests from other clients can be addressed.
My background is more from the Twitter side, where all stats are recorded minutely, so you might have 120 requests per minute. Inside Twitter someone had the bright idea to divide by 60, so most graphs report per-second figures (except for some teams, who realize that dividing by 60 is NOT the true rps at all, since the rate fluctuates within a minute). So instead of 120 requests per minute, many graphs report 2 requests per second. Google seems to be doing the same, EXCEPT the math does not show that. At Twitter, we could multiply by 60 and the answer was always a whole integer: the number of requests that occurred in that minute.
In Google, however, we see 0.02 requests/second, which multiplied by 60 is 1.2 requests per minute. IF the granularity is one minute, they are definitely counting it wrong, or something is wrong with their math.
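To make the arithmetic in question explicit, here is a minimal sketch of the unit conversion (the 120 figure is the Twitter example above, the 0.02 figure is what the Cloud Run chart shows; the closing comment is my assumption, not something the Cloud Run docs state):

```python
# Per-minute counts vs. averaged per-second rates.
requests_per_minute = 120
avg_rps = requests_per_minute / 60      # 2.0 req/s, an average over the minute, not a true per-second rate

# Going the other way with the value shown on the Cloud Run chart:
reported_rps = 0.02
implied_rpm = reported_rps * 60         # 1.2 requests per minute

# If the underlying metric really were a whole-number count over exactly one
# minute, multiplying back by 60 would always give an integer. A value of 1.2
# suggests the chart is averaging the counter over a window longer than one
# minute (an assumption on my part).
print(avg_rps, implied_rpm)
```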
This is from the Cloud Run metrics when we click into the instance itself.
What am I missing here? AND BETTER yet, can we please report requests per minute? Requests per second is really the average req/second for that minute, and it can be really confusing when we have these discussions of how you can get 0.5 requests/second.
I AM assuming that this is not requests per second 'at' the minute boundary, because that would be VERY hard to calculate BUT would also be a whole number, i.e. 0 requests or 1, not 0.2, and that would be quite useless to be honest.
EVERY Cloud Run instance creates this chart, so I assume it's the same for everyone, but if I click 'view in metrics explorer' it then gives this picture of how 'Google configured it'...
As described in the Metrics section of the Cloud Run documentation, the Request Count metric is sampled every 60 seconds, and it excludes requests that do not reach your container instances; the examples given are unauthorized requests or requests sent after the maximum number of instances has been reached. These are obviously not your case, but they are something to consider.
Assuming that the calculation of the request count is wrong, I did some digging in Google's IssueTracker system for the monitoring and Cloud Run components to check whether there are any open bugs related to this, but I could not find any. I would advise that you create a bug in their system so that Google can address it and you are notified once it is fixed.
I ran a JMeter test for 193 samples,
where I could see my average response time as 5915 ms and throughput as 1.19832.
I just want to know how exactly they are related.
All the answers are in the JMeter Glossary:
Elapsed time. JMeter measures the elapsed time from just before sending the request to just after the last response has been received.
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
The relationship is: the higher the response time, the lower the throughput, and vice versa.
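A small sketch of that formula and relationship using the figures from the question (193 samples, 5915 ms average response time); the single-thread, no-think-time assumption is mine, since the question does not state the thread count:

```python
# Throughput = (number of requests) / (total time), per the JMeter Glossary.
# Assumption: one thread sending requests back to back with no think time,
# so total time is roughly samples * average response time.
samples = 193
avg_response_time = 5.915              # seconds (5915 ms from the question)

total_time = samples * avg_response_time
throughput = samples / total_time      # requests per second
print(round(throughput, 3))            # ~0.169 req/s for a single thread

# Inverse relationship: halve the response time and throughput doubles.
halved = samples / (samples * (avg_response_time / 2))
print(round(halved, 3))                # ~0.338 req/s
```

If the 1.19832 reported in the question is per second, Little's Law would put roughly 1.19832 * 5.915 ≈ 7 threads in flight, but that is an inference, not something the question states.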
You can use charts like Transactions per Second for throughput and Response Times Over Time for response times to get them plotted on your test timeline, and the Composite Graph to put them together. This way you will be able to track the trends.
All 3 charts can be installed using the JMeter Plugins Manager.
TL;DR
No, but yes.
They aren't directly related, but increasing throughput will probably affect server response time due to the load/stress on the server.
If there are timeout errors, response time will probably increase.
But for validation or firewall errors, response time will probably decrease.
There's a long explanation in the JMeter archives; the last one uses Disney to demonstrate:
Think of your last trip to Disney or your favorite amusement park. Let's define the capacity of the ride to be the number of people that can sit on the ride per turn (think roller coaster). Throughput will be the number of people that exit the ride per unit of time. Let's define service time as the amount of time you get to sit on the ride. Let's define response time, or latency, to be your time queuing for the ride (dead time) plus service time.
In terms of load/performance testing, throughput and response times are inversely proportional, i.e.:
With an increase in response time, throughput should decrease.
With an increase in throughput, response time should decrease.
You can get more detailed definitions in this blog:
https://nirajrules.wordpress.com/2009/09/17/measuring-performance-response-vs-latency-vs-throughput-vs-load-vs-scalability-vs-stress-vs-robustness/
Throughput increases up to a certain point and remains stable once all the resources become busy. If user requests keep increasing beyond this point, response time increases. If the response time increase is only due to internal queuing, the system is still taking requests in even while response time grows, so throughput does not change; once the queues are full, additional requests should fail. If the response time increase is due to a delay in processing or serving the request, for example running a query on the database, then the system is not accepting more requests while response time grows, and consequently throughput drops.
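A toy closed-loop sketch of the saturation behaviour described above, built only on Little's Law (X = N / R); the capacity and service-time numbers are invented for illustration:

```python
# Toy model: N users loop with no think time against a server with an assumed
# capacity C (requests/second) and unloaded service time S (seconds).
# Below saturation throughput grows with users; at saturation throughput is
# pinned at C and the extra time shows up as queuing in the response time.
C = 50.0     # assumed capacity, requests/second
S = 0.1      # assumed unloaded response time, seconds

for users in (1, 5, 25, 50, 100, 200):
    if users / S <= C:                  # offered load fits within capacity
        response_time = S
        throughput = users / response_time
    else:                               # saturated: X = C, so R = N / C by Little's Law
        throughput = C
        response_time = users / C
    print(f"{users:>4} users -> {throughput:5.1f} req/s, {response_time:.3f} s response time")
```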
Just a general explanation.
Response Time: the time measured from when the user sends the request until the request finishes.
Throughput: a server property describing how many transactions or requests can be handled in a certain amount of time. Here, 1.19832/minute means the server can handle 1.19832 samples per minute.
As response time increases, throughput decreases.
OK, so I ran some stress tests on an application of mine and I came across some weird results compared to last time.
The throughput was way off, although the averages are similar.
The number of samples did vary; however, as I understood it, the throughput is calculated by dividing the number of samples by the time it took.
In my understanding, if the average time was similar, the throughput should be similar even though the number of samples varied...
This is what I have:
PREVIOUS
RECENT
As you can see, the throughput difference is pretty substantial...
Can somebody please explain to me whether my logic is correct, or point out why that is not the case?
Throughput is the number of requests per unit of time (seconds, minutes, hours) that are sent to your server during the test.
The throughput is the real load processed by your server during a run, but it does not tell you anything about the performance of your server during that same run. This is why you need both measures to get a real idea of your server's performance during a run. The response time tells you how fast your server is handling a given load.
The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
Throughput = (number of requests) / (total time).
Average: the arithmetic mean (μ = (1/n) * Σᵢ₌₁…ₙ xᵢ) of the response times of all your samples.
Response time is the elapsed time from the moment when a given request is sent to the server until the moment when the last bit of information has returned to the client.
So these are two different things.
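A minimal sketch of why the two diverge: with the same average response time, throughput still changes once there are gaps (think time, ramp-up, pauses) between samples, because the total time includes those intervals. The sample values are invented:

```python
# Same samples, same average response time, different throughput,
# because total time includes the intervals between samples.
elapsed = [0.5, 0.5, 0.5, 0.5]          # response times in seconds; average = 0.5 in both runs

# Run A: samples sent back to back, so total time is just the sum of elapsed times.
total_a = sum(elapsed)
# Run B: a 2-second gap between consecutive samples (think time, ramp-up, etc.).
total_b = sum(elapsed) + 2.0 * (len(elapsed) - 1)

print(len(elapsed) / total_a)           # 2.0 requests/second
print(len(elapsed) / total_b)           # 0.5 requests/second
```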
Think of a trip to Disney or your favorite amusement park. Let's define the capacity of the ride to be the number of people that can sit on the ride per turn (think roller coaster). Throughput will be the number of people that exit the ride per unit of time. Let's define service time as the amount of time you get to sit on the ride, and response time as your time queuing for the ride plus the service time.
I am running a 1000-user test, and some of the flows have 25 users with an expected throughput of 0.000011574 per second.
The client is suggesting that I run it with about 1800 seconds of think time.
Using Little's Law, I am getting a think time value of 2,160,000 seconds.
I am suggesting that we just use 1 user and give a 600-second think time, even though the calculation gives me 86,400 seconds of think time, since the flow has to be tested while under load.
What would be the correct approach? Go with the client, or go with my assumption?
Let me know your valuable thoughts.
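For what it's worth, the numbers in the question check out against Little's Law, N = X * (response time + think time), with response time negligible next to think times this large:

```python
# Little's Law for a closed workload: users = throughput * iteration_time,
# where iteration_time ~= think_time when the response time is comparatively tiny.
target_throughput = 0.000011574      # per second, as given; roughly 1 iteration per day (1/86400)

think_time_25_users = 25 / target_throughput
print(round(think_time_25_users))    # ~2,160,000 seconds, the figure in the question

think_time_1_user = 1 / target_throughput
print(round(think_time_1_user))      # ~86,400 seconds, i.e. about one iteration per day
```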
0.000011574 of what per second?
This reads like a requirement from a server admin and not from "the business."
I use JMeter to test my web application. I have an aggregate graph with some values, but I actually don't know what they mean...
The aggregate graph shows, for example:
average
median
min
max
I don't know what these values refer to.
What does the 90% line refer to?
I also don't know what the unit of throughput per second is (bytes?).
Does anybody know?
The JMeter documentation shows only general information about reports and listeners.
This link contains a helpful explanation of JMeter usage, results, tips, considerations and deployment.
Good Luck!
Throughput means the number of requests per second. So if two users open your website at the same time, the throughput will be 2/s: 2 requests in one second.
How it can be useful: check your website analytics and you will see the number of hosts and hits per day. Throughput corresponds to hits per day. If analytics shows 200,000 hits per day, this means: 200,000 / 86,400 (seconds in one day) = 2.31 hits/s.
Average: the average response time. I think you know what response time is: the time between sending a request and getting the response from the server. To get the average response time, you sum the response times of all samples and divide by the number of samples. A sample means a user, request, or hit; the meaning is the same.
Min: the minimal response time among all samples; in other words, the fastest response.
Max: the opposite of Min, the slowest response.
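A sketch of how those values fall out of the raw sample times (milliseconds here). The sample list is invented, and JMeter's own percentile calculation may interpolate slightly differently:

```python
import math
import statistics

# Aggregate-report style statistics from raw sample response times (ms).
samples_ms = [120, 150, 180, 200, 210, 250, 300, 450, 600, 900]

average = sum(samples_ms) / len(samples_ms)
median = statistics.median(samples_ms)
minimum = min(samples_ms)
maximum = max(samples_ms)

# 90% line (90th percentile, nearest-rank): the response time that 90% of
# the samples did not exceed.
ordered = sorted(samples_ms)
rank = max(0, math.ceil(0.9 * len(ordered)) - 1)
ninety_pct_line = ordered[rank]

print(average, median, minimum, maximum, ninety_pct_line)
```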
Throughput is generally measured in requests/second in JMeter.
As far as knowing which requests are within the 90% line, there isn't really a way to do that with this listener. It presents aggregate information, so it only reports on all the samples together, not on specific results.
For some different methods and ideas on getting useful information out of the responses, take a look at the JMeter wiki on log analysis.
If you don't already have it, JMeter Plugins has a lot of useful controllers and listeners that can make understanding the results easier as well.