I wrote a JMeter test with 1000 threads and got a throughput of 330 requests per second. What was the average response time?
I ran the same test a second time with 100 threads and again got a throughput of 330 requests per second. What was the average response time?
I think it has to do with Little's law, but I have no idea how to solve it. Any help is appreciated, thanks.
We don't know; in order to determine the average response time from these numbers alone we would need to know your test duration.
JMeter calculates average response time as the arithmetic mean of all response times for the individual samplers; it can be observed in e.g. the Aggregate Report listener.
Also, the fact that you get the same throughput for 100 and 1000 users looks suspicious; for a well-behaved application you should get roughly 10x the throughput with 1000 users that you get with 100.
The reasons could be:
Your application cannot handle more than 330 requests per second, which indicates a performance bottleneck.
JMeter cannot produce more than 330 requests per second; make sure to follow the JMeter Best Practices, or consider Distributed Testing if your load generator's hardware specification is too low to produce the required load.
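For completeness: if you are willing to assume a closed system with zero think time, Little's law gives a back-of-the-envelope estimate of the average response time without the test duration. A minimal Python sketch (the function name is mine, and the zero-think-time assumption is doing all the work):

# Little's law for a closed system: N = X * R, so R = N / X.
# Assumes zero think time; with think time Z it becomes N = X * (R + Z).

def avg_response_time(threads, throughput_per_sec):
    """Estimate mean response time R from concurrency N and throughput X."""
    return threads / throughput_per_sec

print(avg_response_time(1000, 330))  # ~3.03 s for the 1000-thread run
print(avg_response_time(100, 330))   # ~0.30 s for the 100-thread run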
Recently in an interview I was asked to find the number of Vusers given throughput and response time.
Question:
Find the Vusers for a throughput of 1260 bits per second and a response time of 2 milliseconds. The duration of the test run to achieve these results is 1 hour.
When I asked, he said there was no think time or pacing, so it's zero.
So, as per Little's law, I calculated it as response time * throughput:
1260 * 0.002 = 2.52, or 3. He said it's wrong.
Is there anything I am missing here? If yes, please let me know. Given the response time of 2 milliseconds, which is rare, I think 3 users should be right; but if I am wrong, what is the correct calculation?
You do not want to work for this person.
In collapsing the time between requests to zero, your interviewer has collapsed the client-server model, which is predicated upon a delay between requests from any single client during which requests from other clients can be addressed.
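For the record, your arithmetic is the standard application of Little's law: with think time Z the formula is N = X * (R + Z), and the interviewer's "no think time" stipulation reduces it to N = X * R. A quick Python sketch:

import math

throughput = 1260      # requests per second, as stated in the question
response_time = 0.002  # 2 ms, in seconds
think_time = 0.0       # "no think time or pacing", per the interviewer

# Little's law: N = X * (R + Z)
vusers = throughput * (response_time + think_time)
print(vusers)             # 2.52
print(math.ceil(vusers))  # 3, matching the asker's answer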
How can I tell whether my server is doing fine?
I did some performance testing and the result was like:
No Of Sample: 750
Latest Sample: 3317
Average: 601
Deviation : 1152
Throughput: 2613.24
Median: 386
What do these parameters mean?
And how can I give correct inputs and expect correct results?
I believe the JMeter Glossary can explain all the terms.
Just in case it goes away:
No of Sample - the total number of samples executed.
Latest Sample - self-explanatory: the elapsed time of the most recent sample.
Average - the arithmetic mean of all samplers' execution times: the sum of all sampler durations divided by the "No of Sample".
Throughput - the number of requests per unit of time, e.g. hits per second.
Median and Deviation - statistical terms: the median is the value in the middle of the set of results (the 50th percentile), and the deviation is the standard deviation of the sample times. The sketch below shows how each number is computed.
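A minimal Python sketch, with made-up sample data and an assumed test duration, just to show how those numbers fall out of raw sampler times:

import statistics

samples_ms = [386, 120, 3317, 55, 601, 210, 95, 1022]  # hypothetical sampler times
test_duration_sec = 2.0                                 # assumed wall-clock duration

no_of_samples = len(samples_ms)
latest_sample = samples_ms[-1]                  # the most recent result
average = statistics.mean(samples_ms)           # sum of durations / No of Sample
median = statistics.median(samples_ms)          # 50th percentile
deviation = statistics.pstdev(samples_ms)       # standard deviation of the set
throughput = no_of_samples / test_duration_sec  # requests per second

print(no_of_samples, latest_sample, average, median, deviation, throughput)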
In regards to whether your server's behavior is acceptable or not - it depends on what it is doing. A 601 ms average response time sounds very good for e.g. an online shop, but it may not be acceptable for financial operations, medical equipment, or NASA spaceships. Besides, it is quite unclear how many concurrent users were involved in the load test: with 2-5 virtual users the application under test may behave well, while with 20-50 concurrent users the response time might reach 60 seconds - that would be bad.
See the Performance Metrics for Websites guide to learn about the most common measurements which need to be taken during performance testing.
At jameslist.com we see the following times from request to completed pageview:
Server processing a request (PHP, memcached, DB, Sphinx + internal network latency): 150ms
Time spent in network: 650ms
Time spent in DOM: 1200ms
Time spent render page: 1650ms
That is about 3.7 seconds in total from request to fully loaded webpage. On average, is this good, OK, or bad?
When it comes to the breakdown of the above points, what could be expected of sites with similar content?
I would suggest Google's search times as a good benchmark for simple pages. I just did a search which took 130 ms, and that sounds fine.
The more complex the page, the longer a time is acceptable; e.g. a site which gets you insurance quotes from dozens of suppliers could reasonably take 10 seconds.
The rest sounds pretty lengthy to me, but I know more about high-frequency trading, where 1 ms is pretty poor. ;)
Time spent in network: 650ms
That's a hell of a network; you could send a request around the world in that time.
Time spent in DOM: 1200ms
Time spent render page: 1650ms
I would be wondering why these are significantly higher than the "real" work, which is about 150 ms.
A request from London to New York and back should take about 100 ms. My guess is that 150 ms (request) + 150 ms (parsing and rendering) + 100 ms (internet) would be good.
3.7 seconds end to end is pretty decent - on the fast side of average, I'd say.
I'm assuming your network time is the total time - it's not terrible, and mostly determined by file size and bandwidth. I've had a quick look at your site, and nothing out of the ordinary seems to be going on.
DOM and render time are a little high. Not freakishly so, but there may be some low-hanging fruit.
A target time for the page to be generated and sent to the visitor is 150-300 ms. All the important page content should be loaded within one second of the initial request.
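If you want to sanity-check the first two numbers (server processing and network) without browser tooling, something like the following Python sketch works; the URL is a placeholder, it uses the third-party requests library, and the DOM/render times still need a browser profiler such as the dev tools:

import time
import requests

url = "https://example.com/"  # placeholder; point it at the page under test

start = time.perf_counter()
response = requests.get(url)
total = time.perf_counter() - start

# response.elapsed measures from sending the request until the response
# headers are parsed, i.e. roughly server processing plus one network trip.
headers_sec = response.elapsed.total_seconds()
print(f"headers after {headers_sec * 1000:.0f} ms, "
      f"full body after {total * 1000:.0f} ms, "
      f"{len(response.content)} bytes")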
I'm trying to figure out how to use ApacheBench to benchmark my website. I installed the default site project (it's ASP.NET MVC, but please don't stop reading if you're not a .NET person).
I didn't change anything: added a new project, set the configuration to RELEASE, and ran without debugging (so it's in LIVE mode). Yes, this is with the built-in web server, not production-grade IIS or Apache or whatever.
So here are the results:
C:\Temp>ab -n 1000 -c 1 http://localhost:50035/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests
Server Software: ASP.NET
Server Hostname: localhost
Server Port: 50035
Document Path: /
Document Length: 1204 bytes
Concurrency Level: 1
Time taken for tests: 2.371 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 1504000 bytes
HTML transferred: 1204000 bytes
Requests per second: 421.73 [#/sec] (mean)
Time per request: 2.371 [ms] (mean)
Time per request: 2.371 [ms] (mean, across all concurrent requests)
Transfer rate: 619.41 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   1.1      0      16
Processing:     0    2   5.5      0      16
Waiting:        0    2   5.1      0      16
Total:          0    2   5.6      0      16
Percentage of the requests served within a certain time (ms)
50% 0
66% 0
75% 0
80% 0
90% 16
95% 16
98% 16
99% 16
100% 16 (longest request)
C:\Temp>
Now, I'm not sure exactly what I should be looking at.
Firstly, I'm after the number of requests per second. So if we have a requirement to handle 300 reqs/sec, is this saying it handles an average of 421 reqs/sec?
Secondly, what is the reason for adding more concurrency? As in, if I have 1000 hits on 1 concurrent connection, how does that differ from 500 hits on 2 concurrent connections? Is it to test whether there's any code that blocks other requests?
Lastly, is there anything important I've missed from the results which I should take note of?
Thanks :)
what is the reason for adding more concurrency? As in, if I have 1000 hits on 1 concurrent connection, how does that differ from 500 hits on 2 concurrent connections? Is it to test whether there's any code that blocks other requests?
It's a bit about that, yes: your application is probably doing things where concurrency can bring trouble.
A couple of examples:
a page is trying to access a file, locking it in the process; it means that if another page has to access the same file, it'll have to wait until the first page has finished working with it.
much the same goes for database access: if one page is writing to a database, there is some kind of locking mechanism (be it table-based, row-based, or whatever, depending on your DBMS).
Testing with a concurrency of one is OK... as long as your website will never have more than one user at the same time, which is not realistic, I hope for you.
You have to think about how many users will be on the site at the same time when it's in production, and adjust the concurrency accordingly; just remember that 5 users at the same time on your site doesn't mean you have to test with a concurrency of 5 with ab (see the sketch after this list):
real users will wait a couple of seconds between each request (time to read the page, click on a link, ...);
ab doesn't wait at all: each time a page is loaded (i.e. a request is finished), it launches another request!
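To put a number on that, Little's law suggests how to translate "N real users with think time" into an equivalent ab concurrency; a rough Python sketch, with all the inputs assumed:

# Real users spend most of their time reading; ab never pauses.
# An ab concurrency generating a comparable request rate is roughly:
#   concurrency ~= users * R / (R + Z)
users = 50            # assumed number of simultaneous real users
response_time = 0.3   # seconds per request (assumed)
think_time = 5.0      # seconds a user spends reading between clicks (assumed)

equivalent_concurrency = users * response_time / (response_time + think_time)
print(round(equivalent_concurrency, 1))  # ~2.8, so -c 3 rather than -c 50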
Also, two other things:
ab only tests one page; real users will navigate the whole website, which could cause concurrency problems you would not see while testing only one page.
ab only loads that one page: it doesn't request external resources (think CSS, images, JS, ...), which means you'll have lots of other requests, even if not really costly ones, when your site is in production.
As a side note: you might want to take a look at other tools which can do far more complete tests, like Siege, JMeter, or OpenSTA. ab is really nice when you want to measure whether something you did optimizes your page or not, but if you want to simulate "real" usage of your site, those are far better adapted.
Yes, if you want to know how many requests per second your site is able to serve, look at the "Requests per second" line.
In your case it's really quite simple, since you ran ab with a concurrency of 1. Each request, on average, took only 2.371 ms; 421 of those, one after the other, take 1 second.
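That reciprocal relationship is worth spelling out: at a concurrency of 1, requests per second is just 1000 divided by the mean latency in milliseconds:

mean_latency_ms = 2.371
requests_per_sec = 1000 / mean_latency_ms
print(requests_per_sec)  # ~421.8, matching ab's "Requests per second" line (up to rounding)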
You really should play with the concurrency a little bit, to accurately gauge the capacity of your site.
Up to a certain degree of concurrency you'd expect the throughput to increase, as multiple requests get handled in parallel by IIS, e.g. if your server has multiple CPUs/cores. Also, if a page relies on external I/O (middle-tier services or DB calls), the CPU can work on one request while another is waiting for I/O to complete.
At a certain point requests/sec will level off with increasing concurrency, and you'll see latency increase. Increase concurrency even more and you'll see your throughput (req/sec) decrease, as the server has to devote more resources to juggling all these concurrent requests.
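One way to picture that leveling-off: once the server saturates at some maximum throughput, Little's law (R = N / X) forces latency to grow linearly with concurrency. A toy Python model with assumed numbers, not a measurement:

# Toy saturation model: throughput scales linearly until an assumed cap,
# then flattens; Little's law then pins latency at concurrency / throughput.
max_throughput = 1500.0  # assumed req/s ceiling of the server
service_time = 0.002371  # seconds per request at concurrency 1 (from ab above)

for concurrency in [1, 2, 4, 8, 16, 32, 64]:
    throughput = min(concurrency / service_time, max_throughput)
    latency_ms = concurrency / throughput * 1000
    print(f"c={concurrency:3d}  ~{throughput:7.1f} req/s  ~{latency_ms:5.1f} ms")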
All that said, the majority of your requests return in about 2 ms. That's pretty darn fast, so I am guessing there is not much going on in terms of DB or middle-tier calls, and your system is probably maxed out on CPU while the test is running (or something is wrong and failing really fast; are you sure ab gets the response page you intend it to? I.e. is the page you think you are testing really 1204 bytes large?).
Which brings up another point: ab itself consumes CPU too, especially once you increase the concurrency, so you want to run ab on another machine.
Also, should your site make external calls to middle-tier services or DBs, you want to adjust your machine.config to optimize the number of threads IIS allocates: http://support.microsoft.com/default.aspx?scid=kb;en-us;821268
And just a little trivia: the time-taken statistics are reported in increments of ~16 ms, as that appears to be the granularity of the timer used. I.e. 80% of your responses did not take 0 ms; they took some amount of time under 16 ms.