So I ran the command "ab -c 50 -n 5000 http://lala.la" on the server today, and got these "amazing" results:
Document Path: /
Document Length: 26476 bytes
Concurrency Level: 50
Time taken for tests: 1800.514 seconds
Complete requests: 2427
Failed requests: 164
(Connect: 0, Receive: 0, Length: 164, Exceptions: 0)
Write errors: 0
Total transferred: 65169733 bytes
HTML transferred: 64345285 bytes
Requests per second: 1.35 [#/sec] (mean)
Time per request: 37093.408 [ms] (mean)
Time per request: 741.868 [ms] (mean, across all concurrent requests)
Transfer rate: 35.35 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 2.7 0 22
Processing: 4335 36740 9513.2 33755 102808
Waiting: 7 33050 8655.1 30407 72691
Total: 4355 36741 9512.4 33755 102808
Percentage of the requests served within a certain time (ms)
50% 33754
66% 37740
75% 40977
80% 43010
90% 47742
95% 56277
98% 62663
99% 71301
100% 102808 (longest request)
This is on a newly installed Nginx server, using Cloudflare and APC.
Don't think I've ever seen such poor performance, so what the heck could be causing it?
Thanks.
For starters, try testing directly against the origin and take Cloudflare out of the mix (unless your HTML is cacheable and you're specifically trying to test Cloudflare's ability to serve it). Given that one of Cloudflare's purposes is to protect sites, it's not unreasonable to think your test is getting rate limited; at a minimum, bypassing it removes one possible source of investigation.
Add $request_time to your nginx access log format; that will give you the server-side view of performance. If it still looks horrible, you might have to use something like New Relic or DynaTrace to get more detail on where the time is going (if you don't instrument the app yourself).
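For reference, a minimal sketch of what that logging change might look like (the format name `timed` is made up here; $request_time and $upstream_response_time are standard nginx variables, and log_format belongs in the http context):

```nginx
# Custom log format with server-side timing appended:
#   rt  = total time nginx spent on the request (seconds, ms resolution)
#   urt = time spent waiting on the upstream (e.g. PHP)
log_format timed '$remote_addr [$time_local] "$request" '
                 '$status $body_bytes_sent '
                 'rt=$request_time urt=$upstream_response_time';

access_log /var/log/nginx/access.log timed;
```

If rt is small but ab still reports multi-second times, the problem is in front of nginx; if rt itself is huge, the time is going into the app.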
Are you using php-fpm for connecting nginx to php? If not, you should look into it.
For times that are that bad, odds are it's in the actual application though and not so much in the config.
Related
I've configured my JMeter test as can be seen in the screenshot below.
However, when I examine the logs I can see that we only reached a rate of 37 requests per second:
2021-10-18 03:20:30,005 INFO o.a.j.r.Summariser: summary = 3510096 in 26:26:03 = 36.9/s Avg: 67 Min: 16 Max: 69589 Err: 61 (0.00%)
Am I missing something? How can I increase the rate?
What rate is "expected"?
One user will generate 1 request per second only if the application's response time is 1000 milliseconds.
If the response time is 2000 milliseconds, you will get 0.5 requests per second.
If the response time is 500 milliseconds, you will get 2 requests per second.
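In other words, for a closed-loop test with no think time, throughput = users / response time (Little's law). A quick sketch of the arithmetic above:

```javascript
// Little's law for a closed-loop load test with no think time:
// throughput (req/s) = active users / response time (s)
function throughput(users, responseTimeMs) {
    return users / (responseTimeMs / 1000);
}

console.log(throughput(1, 1000)); // 1 req/s
console.log(throughput(1, 2000)); // 0.5 req/s
console.log(throughput(1, 500));  // 2 req/s
```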
There could be 2 explanations for the throughput lower than expected:
JMeter cannot send requests fast enough. The possible reasons are:
The number of virtual users is too low; just increase the number of threads in the Thread Group.
JMeter itself is overloaded. Make sure to follow JMeter Best Practices, and if that doesn't help, consider going for Distributed Testing.
Your application cannot respond fast enough. In your case I can see response times as high as 69589 milliseconds, so most probably that is the reason. You need to ensure the application has enough headroom in terms of CPU, RAM, etc.: monitor it with an APM tool, check its logs, check its configuration, perform code profiling, and so on.
I am doing ab testing of my application, where I send 3000 concurrent requests and 10000 requests in total. My application is a Spring Boot application with Actuator, and I use Kubernetes and Docker for containerization. During the testing my Actuator endpoints take longer than expected to respond; because of this my pods restart and requests start failing.
I have now disabled the liveness probe, and during the test, if I manually hit the Actuator endpoint, I can see that it takes a long time to respond and sometimes does not return a result at all; it just hangs.
I can see from the logs that each request is served within 10 milliseconds by my application, but the ab test results are completely different. Below are the results from the ab test:
Concurrency Level: 3000
Time taken for tests: 874.973 seconds
Complete requests: 10000
Failed requests: 6
(Connect: 0, Receive: 0, Length: 6, Exceptions: 0)
Non-2xx responses: 4
Total transferred: 1210342 bytes
Total body sent: 4950000
HTML transferred: 20580 bytes
Requests per second: 11.43 [#/sec] (mean)
Time per request: 262491.958 [ms] (mean)
Time per request: 87.497 [ms] (mean, across all concurrent requests)
Transfer rate: 1.35 [Kbytes/sec] received
5.52 kb/s sent
6.88 kb/s total
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 372 772.5 0 3051
Processing: 1152 226664 145414.1 188502 867403
Waiting: 1150 226682 145404.6 188523 867402
Total: 2171 227036 145372.2 188792 868447
Percentage of the requests served within a certain time (ms)
50% 188792
66% 249585
75% 295993
80% 330934
90% 427890
95% 516809
98% 635143
99% 716399
100% 868447 (longest request)
I am not able to understand this behaviour: it shows that only about 11.43 requests are served per second, which is very low. What could be the reason? Also, what would be the right way to keep the liveness probe?
I have below properties set in my application.properties:
server.tomcat.max-connections=10000
server.tomcat.max-threads=2000
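For what it's worth, the ab numbers above are internally consistent: ab derives both "Time per request" lines from the totals. A quick check of that arithmetic (values copied from the output above):

```javascript
// Reconstructing ab's two "Time per request" lines from its totals
const timeTakenSec = 874.973; // "Time taken for tests"
const completed = 10000;      // "Complete requests"
const concurrency = 3000;     // "Concurrency Level"

// mean, across all concurrent requests = total time / completed requests
const meanAcross = (timeTakenSec * 1000) / completed;         // ~87.5 ms

// mean = concurrency * total time / completed requests
const mean = (concurrency * timeTakenSec * 1000) / completed; // ~262492 ms

console.log(meanAcross.toFixed(3), mean.toFixed(3));
```

So even if each request takes ~10 ms inside the app, with 3000 connections queued against a much smaller effective thread pool, nearly all of the ~262 s mean is time spent waiting in line, which is also why the Actuator endpoints appear to hang under load.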
I tested the node_redis benchmark; it shows INCR at more than 100000 ops/s:
$ node multi_bench.js
Client count: 5, node version: 0.10.15, server version: 2.6.4, parser: hiredis
INCR, 1/5 min/max/avg/p95: 0/ 2/ 0.06/ 1.00 1233ms total, 16220.60 ops/sec
INCR, 50/5 min/max/avg/p95: 0/ 4/ 1.61/ 3.00 648ms total, 30864.20 ops/sec
INCR, 200/5 min/max/avg/p95: 0/ 14/ 5.28/ 9.00 529ms total, 37807.18 ops/sec
INCR, 20000/5 min/max/avg/p95: 42/ 508/ 302.22/ 467.00 519ms total, 38535.65 ops/sec
Then I added Redis to a Node.js HTTP server:
var http = require("http"),
    server,
    redis_client = require("redis").createClient();

server = http.createServer(function (request, response) {
    response.writeHead(200, {
        "Content-Type": "text/plain"
    });
    redis_client.incr("requests", function (err, reply) {
        response.write(reply + '\n');
        response.end();
    });
}).listen(6666);

server.on('error', function (err) {
    console.log(err);
    process.exit(1);
});
Using the ab command to test it, I only get 6000 req/s:
$ ab -n 10000 -c 100 localhost:6666/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests
Server Software:
Server Hostname: localhost
Server Port: 6666
Document Path: /
Document Length: 7 bytes
Concurrency Level: 100
Time taken for tests: 1.667 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 1080000 bytes
HTML transferred: 70000 bytes
Requests per second: 6000.38 [#/sec] (mean)
Time per request: 16.666 [ms] (mean)
Time per request: 0.167 [ms] (mean, across all concurrent requests)
Transfer rate: 632.85 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.3 0 2
Processing: 12 16 3.2 15 37
Waiting: 12 16 3.1 15 37
Total: 13 17 3.2 16 37
Percentage of the requests served within a certain time (ms)
50% 16
66% 16
75% 16
80% 17
90% 20
95% 23
98% 28
99% 34
100% 37 (longest request)
Finally, I tested a plain 'hello world' server; it reached 7k req/s:
Requests per second: 7201.18 [#/sec] (mean)
How can I profile this and figure out why adding Redis to the HTTP server loses some performance?
I think you have misinterpreted the results of the multi_bench benchmark.
First, this benchmark spreads the load over 5 connections, while you have only one in your node.js program. More connections mean more communication buffers (allocated on a per socket basis) and better performance.
Then, while a Redis server is able to sustain 100K op/s (provided you open several connections, and/or use pipelining), node.js and node_redis are not able to reach this level. The result of your run of multi_bench shows that when pipelining is not used, only 16K op/s are achieved.
Client count: 5, node version: 0.10.15, server version: 2.6.4, parser: hiredis
INCR, 1/5 min/max/avg/p95: 0/ 2/ 0.06/ 1.00 1233ms total, 16220.60 ops/sec
This result means that with no pipelining, and with 5 concurrent connections, node_redis is able to process 16K op/s globally. Please note that measuring a throughput of 16K op/s while only sending 20K ops (default value of multi_bench) is not very accurate. You should increase num_requests for better accuracy.
The result of your second benchmark is not so surprising: you add an HTTP layer (which is more expensive to parse than the Redis protocol itself), use only 1 connection to Redis while ab opens 100 concurrent connections to node.js, and still get 6K req/s, i.e. only about 1.2K req/s less than a "Hello world" HTTP server. What did you expect?
You could try to squeeze out a bit more performance by leveraging node.js clustering capabilities, as described in this answer.
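For illustration, a minimal sketch of that clustering approach using the standard cluster module (port 6666 and the "requests" counter are taken from the question; each worker gets its own Redis connection, which also addresses the single-connection bottleneck):

```javascript
var cluster = require("cluster"),
    http = require("http"),
    numCPUs = require("os").cpus().length;

if (cluster.isMaster) {
    // Fork one worker per core; workers share the listening socket.
    for (var i = 0; i < numCPUs; i++) {
        cluster.fork();
    }
} else {
    // Each worker opens its own connection to Redis.
    var redis_client = require("redis").createClient();
    http.createServer(function (request, response) {
        response.writeHead(200, { "Content-Type": "text/plain" });
        redis_client.incr("requests", function (err, reply) {
            response.end(reply + "\n");
        });
    }).listen(6666);
}
```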
What is the difference between those 2 fields? :
Time per request (mean)
Time per request (mean, across all concurrent requests)
How is each of them calculated?
Sample Output:
Time per request: 3953.446 [ms] (mean)
Time per request: 39.534 [ms] (mean, across all concurrent requests)
Why is there much difference?
Here is an example of an ab test result. I make 1000 requests with 3 concurrent requests.
C:\>ab -d -e a.csv -v 1 -n 1000 -c 3 http://www.example.com/index.aspx
This is ApacheBench, Version 2.0.41-dev <$Revision: 1.121.2.12 $> apache-2.0
Copyright (c) 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright (c) 2006 The Apache Software Foundation, http://www.apache.org/
Benchmarking www.m-taoyuan.tw (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Finished 1000 requests
Server Software: Microsoft-IIS/6.0
Server Hostname: www.m-taoyuan.tw
Server Port: 80
Document Path: /index.aspx
Document Length: 25986 bytes
Concurrency Level: 3
Time taken for tests: 25.734375 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 26372000 bytes
HTML transferred: 25986000 bytes
Requests per second: 38.86 [#/sec] (mean)
Time per request: 77.203 [ms] (mean)
Time per request: 25.734 [ms] (mean, across all concurrent requests)
Transfer rate: 1000.72 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 4.4 0 15
Processing: 62 75 9.1 78 109
Waiting: 46 64 8.0 62 109
Total: 62 76 9.3 78 109
As you can see, there are two Time per request field.
Time per request (mean)
Time per request (mean, across all concurrent requests)
Please check the Time taken for tests field first. The value is 25.734375 seconds, which is 25734.375 ms.
If we divide 25734.375 ms by 1000 requests, we get 25.734 ms, which is exactly the value of the Time per request (mean, across all concurrent requests) field.
For Time per request (mean), the value is 77.203 ms, which is a bit longer than the other. That is because (mean) reflects the elapsed time of each individual request, averaged.
Let me give you a simple example.
Assume that we make 3 requests over 3 concurrent connections. The Time taken for tests is 90 ms, and the individual requests take 40 ms, 50 ms, and 30 ms. So what are the values of the two Time per request fields?
Time per request (mean) = ( 40 + 50 + 30 ) / 3 = 40ms
Time per request (mean, across all concurrent requests) = 90 / 3 = 30ms
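In ab itself, both fields are in fact computed straight from the totals (mean = concurrency × time taken / requests; mean across = time taken / requests). Reproducing the values from the real output above:

```javascript
// ab's two "Time per request" fields, from the totals of the run above
const timeTakenMs = 25734.375; // "Time taken for tests"
const requests = 1000;         // "Complete requests"
const concurrency = 3;         // "Concurrency Level"

// mean, across all concurrent requests = total / requests
const meanAcross = timeTakenMs / requests;         // 25.734 ms

// mean = concurrency * total / requests
const mean = concurrency * timeTakenMs / requests; // 77.203 ms

console.log(meanAcross.toFixed(3), mean.toFixed(3));
```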
Hope you can understand. :)
It would be helpful to see your input, but I believe the output is telling you that there is no time saving from performing concurrent requests.
Time per request (mean) tells you the average amount of time it took for a concurrent group of requests to process.
Time per request (mean, across all concurrent requests) tells you the average amount of time it took for a single request to process by itself.
If you processed 100 requests concurrently, it took 3953.446ms.
If you processed them individually, it would take 39.534ms * 100 = 3953.4ms
Same number. There is no time savings to performing concurrent requests (at least for the total number of requests you tested).
I'm trying to figure out how to use ApacheBench to benchmark my website. I installed the default site project (it's ASP.NET MVC, but please don't stop reading if you're not a .NET person).
I didn't change anything: added a new project, set the configuration to RELEASE, and ran without debugging (so it's in LIVE mode). Yes, this is with the built-in web server, not production-grade IIS or Apache or whatever.
So here are the results:
C:\Temp>ab -n 1000 -c 1 http://localhost:50035/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests
Server Software: ASP.NET
Server Hostname: localhost
Server Port: 50035
Document Path: /
Document Length: 1204 bytes
Concurrency Level: 1
Time taken for tests: 2.371 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 1504000 bytes
HTML transferred: 1204000 bytes
Requests per second: 421.73 [#/sec] (mean)
Time per request: 2.371 [ms] (mean)
Time per request: 2.371 [ms] (mean, across all concurrent requests)
Transfer rate: 619.41 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 1.1 0 16
Processing: 0 2 5.5 0 16
Waiting: 0 2 5.1 0 16
Total: 0 2 5.6 0 16
Percentage of the requests served within a certain time (ms)
50% 0
66% 0
75% 0
80% 0
90% 16
95% 16
98% 16
99% 16
100% 16 (longest request)
C:\Temp>
Now, I'm not sure exactly what I should be looking at.
Firstly, I'm after the number of requests per second. So if we have a requirement to handle 300 reqs/sec, is this saying it handles an average of 421 reqs/sec?
Secondly, what is the reason for adding more concurrency? As in, if I have 1000 hits at 1 concurrent, how does that differ from 500 at 2 concurrent? Is it to test whether there's any code that blocks other requests?
Lastly, is there anything important I've missed from the results that I should take note of?
Thanks :)
what is the reason for adding more concurrent? As in, if i have 1000 hits on 1 concurrent, how does that differ to 500 on 2 concurrent? Is it to test if there's any code that blocks other requests?
It's partly about that, yes: your application is probably doing things where concurrency can cause trouble.
A couple of examples:
a page is trying to access a file, locking it in the process; if another page has to access the same file, it'll have to wait until the first page has finished working with it.
much the same for database access: if one page is writing to the database, there is some kind of locking mechanism (be it table-based, row-based, or whatever, depending on your DBMS).
Testing with a concurrency of one is OK... as long as your website will never have more than one user at a time, which is not realistic, I hope for your sake.
You have to think about how many users will be on the site at the same time once it's in production, and adjust the concurrency accordingly. Just remember that 5 simultaneous users on your site doesn't mean you have to test with a concurrency of 5 in ab:
real users will wait a couple of seconds between requests (time to read the page, click a link, ...)
ab doesn't wait at all: each time a page is loaded (i.e. a request finishes), it launches another request!
Also, two other things :
ab only tests one page; real users navigate the whole website, which could cause concurrency problems you would not see while testing a single page.
ab only loads the page itself: it doesn't request external resources (think CSS, images, JS, ...), which means that in production you'll have lots of other requests, even if they're not really costly.
As a side note: you might want to take a look at other tools that can do far more complete tests, like siege, JMeter, or OpenSTA. ab is really nice when you want to measure whether something you did optimizes your page or not, but if you want to simulate "real" usage of your site, those tools are far better suited.
Yes, if you want to know how many requests per second your site is able to serve, look at the "Requests per second" line.
In your case it's really quite simple, since you ran ab with a concurrency of 1. Each request, on average, took only 2.371 ms; 421 of those, one after the other, take 1 second.
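That relationship generalizes: at a fixed concurrency, requests per second ≈ concurrency / mean time per request. Checking it against the numbers above:

```javascript
// Expected requests/sec for a given concurrency level and mean latency
function expectedRps(concurrency, meanLatencyMs) {
    return concurrency * 1000 / meanLatencyMs;
}

// Concurrency 1 at 2.371 ms/request: ~421.8 req/s (ab reported 421.73)
console.log(expectedRps(1, 2.371).toFixed(1));
```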
You really should play with the concurrency a little bit, to accurately gauge the capacity of your site.
Up to a certain degree of concurrency you'd expect the throughput to increase, as multiple requests get handled in parallel by IIS.
E.g. if your server has multiple CPUs/cores. Also if a page relies on external IO (middle tier service, or DB calls) the cpu can work on one request, while another is waiting for IO to complete.
At a certain point requests/sec will level off, with increasing concurrency, and you'll see latency increase. Increase concurrency even more and you'll see your throughput (req/sec) decrease, as the server has to devote more resources to juggling all these concurrent requests.
All that said, the majority of your requests return in about 2 ms. That's pretty darn fast, so I'm guessing there is not much going on in terms of DB or middle-tier calls, and your system is probably maxed out on CPU while the test is running. (Or something is wrong and failing really fast: are you sure ab is getting the response page you intend it to? I.e., is the page you think you are testing really 1204 bytes large?)
Which brings up another point: ab itself consumes cpu too, especially once you up the concurrency. So you want to run ab on another machine.
Also, should your site make external calls to middle-tier services or DBs, you'll want to adjust your machine.config to optimize the number of threads IIS allocates: http://support.microsoft.com/default.aspx?scid=kb;en-us;821268
And just a little trivia: the time-taken statistics are reported in increments of ~16 ms, as that appears to be the granularity of the timer used. I.e., 80% of your responses did not take 0 ms; they took some amount of time under 16 ms.