While load testing my Go API, a report is generated, but I don't know what it is or how to read it.
I run this command in the terminal:
echo "GET http://localhost:8080/api" | vegeta attack -rate=100/m | vegeta report
It then produces the report below:
Requests [total, rate] 138, 1.68
Duration [total, attack, wait] 1m22.20931745s, 1m22.200130205s, 9.187245ms
Latencies [mean, 50, 95, 99, max] 8.956174ms, 9.06458ms, 10.682252ms, 16.007578ms, 46.439935ms
Bytes In [total, mean] 19596, 142.00
Bytes Out [total, mean] 0, 0.00
Success [ratio] 100.00%
Status Codes [code:count] 200:138
Error Set:
Or when I run
echo "GET http://localhost:8080/api" | vegeta attack -rate=100/m | vegeta report -type=json
the report is generated in JSON format, like below:
{"latencies:
{"total":103506418,
"mean":9409674,
"50th":9484403,
"95th":11918898,
"99th":12008257,
"max":12008257},
"bytes_in":{"total":1562,"mean":142},
"bytes_out":
{"total":0,"mean":0},
"earliest":"2018-10-16T14:15:13.251091124+05:30",
"latest":"2018-10-16T14:15:19.251141502+05:30",
"end":"2018-10-16T14:15:19.260119671+05:30",
"duration":6000050378,
"wait":8978169,
"requests":11,
"rate":1.8333179401848014,
"success":1,
"status_codes":{"200":11},
"errors":[]}
How do I understand this report? Is there any documentation for it, or does anybody know about it?
Let's go through it line by line.
Requests [total, rate] 138, 1.68
This line shows the total number of requests fired in the session (138) along with the achieved rate per second (1.68 requests per second; 138 requests / 82.2 s ≈ 1.68, close to the requested -rate=100/m ≈ 1.67/s).
Duration [total, attack, wait] 1m22.20931745s, 1m22.200130205s, 9.187245ms
The total duration of the test, which is the sum of the attack time (time spent firing requests) and the wait time (time spent waiting for the final response).
Latencies [mean, 50, 95, 99, max] 8.956174ms, 9.06458ms, 10.682252ms, 16.007578ms, 46.439935ms
This is the simplest and most useful line: the mean latency, the 50th-, 95th- and 99th-percentile latencies, and the latency of the slowest request.
A 99th-percentile latency of ~16 ms means 99% of responses were served within that time (here, all but one or two of the 138 requests).
Depending on your product, you should treat the 95th or the 99th percentile as the real number to improve.
Bytes In [total, mean] 19596, 142.00
The total bytes received across all responses, as well as the mean bytes per response.
Bytes Out [total, mean] 0, 0.00
The total bytes sent across all requests, as well as the mean bytes per request. Since you are sending GET requests, which carry no body, it is 0 for you; if your requests carried a body (e.g. a POST), this would be nonzero.
Success [ratio] 100.00%
The success percentage: 100% of your requests were successful.
Status Codes [code:count] 200:138
The breakdown of responses by status code: in your case, all 138 requests returned a 200 response.
Error Set:
The breakdown of errors: if there were any errors (4xx/5xx responses, timeouts, connection failures), they would be reported here. It is empty because you have a 100% success rate.
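If you want to post-process the report programmatically, here is a minimal Go sketch that decodes the JSON fields shown above into a struct (note that, as the numbers above suggest, the JSON latencies are in nanoseconds, so a mean of 9409674 is about 9.4 ms):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Report mirrors the fields of the `vegeta report -type=json` output above.
type Report struct {
	Latencies struct {
		Mean int64 `json:"mean"` // nanoseconds
		P50  int64 `json:"50th"`
		P95  int64 `json:"95th"`
		P99  int64 `json:"99th"`
		Max  int64 `json:"max"`
	} `json:"latencies"`
	Requests    uint64         `json:"requests"`
	Rate        float64        `json:"rate"`
	Success     float64        `json:"success"`
	StatusCodes map[string]int `json:"status_codes"`
}

func main() {
	var r Report
	if err := json.NewDecoder(os.Stdin).Decode(&r); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	fmt.Printf("requests=%d rate=%.2f/s success=%.0f%%\n",
		r.Requests, r.Rate, r.Success*100)
	fmt.Printf("p99 latency = %.1f ms\n", float64(r.Latencies.P99)/1e6)
}

Pipe the JSON report into it, e.g. vegeta report -type=json | go run parse.go (parse.go being whatever you name the file).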
Related
I have a requirement to create a product which should support 40k concurrent users (I am new to working on concurrency).
To achieve this, I developed a hello-world spring-boot project.
i.e.,
spring-boot (1.5.9)
jetty 9.4.15
rest controller which has get endpoint
code below:
@RestController
public class HelloController { // hypothetical wrapper class

    @GetMapping("/home") // mapped to /home so the ab runs below can hit it
    public String index() {
        return "Greetings from Spring Boot!";
    }
}
The app runs on a Gen10 DL360 machine.
I then benchmarked it using ApacheBench.
75 concurrent users:
ab -t 120 -n 1000000 -c 75 http://10.93.243.87:9000/home/
Server Software:
Server Hostname: 10.93.243.87
Server Port: 9000
Document Path: /home/
Document Length: 27 bytes
Concurrency Level: 75
Time taken for tests: 37.184 seconds
Complete requests: 1000000
Failed requests: 0
Write errors: 0
Total transferred: 143000000 bytes
HTML transferred: 27000000 bytes
Requests per second: 26893.28 [#/sec] (mean)
Time per request: 2.789 [ms] (mean)
Time per request: 0.037 [ms] (mean, across all concurrent requests)
Transfer rate: 3755.61 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 23.5 0 3006
Processing: 0 2 7.8 1 404
Waiting: 0 2 7.8 1 404
Total: 0 3 24.9 2 3007
100 concurrent users:
ab -t 120 -n 1000000 -c 100 http://10.93.243.87:9000/home/
Server Software:
Server Hostname: 10.93.243.87
Server Port: 9000
Document Path: /home/
Document Length: 27 bytes
Concurrency Level: 100
Time taken for tests: 36.708 seconds
Complete requests: 1000000
Failed requests: 0
Write errors: 0
Total transferred: 143000000 bytes
HTML transferred: 27000000 bytes
Requests per second: 27241.77 [#/sec] (mean)
Time per request: 3.671 [ms] (mean)
Time per request: 0.037 [ms] (mean, across all concurrent requests)
Transfer rate: 3804.27 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 2 35.7 1 3007
Processing: 0 2 9.4 1 405
Waiting: 0 2 9.4 1 405
Total: 0 4 37.0 2 3009
500 concurrent users:
ab -t 120 -n 1000000 -c 500 http://10.93.243.87:9000/home/
Server Software:
Server Hostname: 10.93.243.87
Server Port: 9000
Document Path: /home/
Document Length: 27 bytes
Concurrency Level: 500
Time taken for tests: 36.222 seconds
Complete requests: 1000000
Failed requests: 0
Write errors: 0
Total transferred: 143000000 bytes
HTML transferred: 27000000 bytes
Requests per second: 27607.83 [#/sec] (mean)
Time per request: 18.111 [ms] (mean)
Time per request: 0.036 [ms] (mean, across all concurrent requests)
Transfer rate: 3855.39 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 14 126.2 1 7015
Processing: 0 4 22.3 1 811
Waiting: 0 3 22.3 1 810
Total: 0 18 129.2 2 7018
1000 concurrent users:
ab -t 120 -n 1000000 -c 1000 http://10.93.243.87:9000/home/
Server Software:
Server Hostname: 10.93.243.87
Server Port: 9000
Document Path: /home/
Document Length: 27 bytes
Concurrency Level: 1000
Time taken for tests: 36.534 seconds
Complete requests: 1000000
Failed requests: 0
Write errors: 0
Total transferred: 143000000 bytes
HTML transferred: 27000000 bytes
Requests per second: 27372.09 [#/sec] (mean)
Time per request: 36.534 [ms] (mean)
Time per request: 0.037 [ms] (mean, across all concurrent requests)
Transfer rate: 3822.47 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 30 190.8 1 7015
Processing: 0 6 31.4 2 1613
Waiting: 0 5 31.4 1 1613
Total: 0 36 195.5 2 7018
From the above runs, I achieved ~27K requests per second with just 75 users, but it looks like increasing the number of users also increases the latency. We can also clearly see that the connect time increases.
My application is required to support 40k concurrent users (assume all are using their own separate browsers), and requests should finish within 250 milliseconds.
Please help me with this.
I am not a grand wizard on the topic myself, but here is some advice:
there is a hard limit on how many requests a single instance can handle, so if you want to support many users you need more instances
if you work with multiple instances, then you have to distribute the requests among them somehow. One popular solution is Netflix Eureka; a minimal round-robin sketch follows this list
if you don't want to maintain additional resources and the product will run in the cloud, then use the provided load-balancing services (e.g. a load balancer on AWS)
you can also fine-tune your server's connection pool settings
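To make the 'distribute the requests among the instances' point concrete, here is a minimal sketch of a round-robin reverse proxy, written in Go like the other servers on this page; the two backend addresses are hypothetical stand-ins for your instances:

package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

func main() {
	// Hypothetical backend instances; in practice these would come from
	// service discovery (e.g. Eureka) or your deployment configuration.
	backends := []*url.URL{
		{Scheme: "http", Host: "localhost:9001"},
		{Scheme: "http", Host: "localhost:9002"},
	}
	var next uint64
	proxy := &httputil.ReverseProxy{
		Director: func(req *http.Request) {
			// Rotate through the backends, one request at a time.
			b := backends[atomic.AddUint64(&next, 1)%uint64(len(backends))]
			req.URL.Scheme = b.Scheme
			req.URL.Host = b.Host
		},
	}
	log.Fatal(http.ListenAndServe(":9000", proxy))
}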
I have a very simple Go server:
package main

import (
	"fmt"
	"log"
	"net/http"
)

func test(w http.ResponseWriter, r *http.Request) {
	fmt.Println("No bid")
	http.Error(w, "NoBid", 204)
}

func main() {
	http.HandleFunc("/test/bid", test)
	// ListenAndServe blocks until the server fails; log the returned error.
	log.Fatal(http.ListenAndServe(":8080", nil))
}
I then run the apache benchmark tool:
ab -c 50 -n 50000 -p post.txt http://127.0.0.1:8080/test/bid
The server runs and responds to about 15,000 requests and then times out. I was wondering why this happens and whether there is something I can do about it.
If you are running on Linux, you may be hitting the limit on open files, so new connections cannot be created. You need to change the system configuration to support more connections.
For example,
edit /etc/security/limits.conf and add
* soft nofile 100000
* hard nofile 100000
to allow more open files.
edit /etc/sysctl.conf
# use more port
net.ipv4.ip_local_port_range = 1024 65000
# keep alive timeout
net.ipv4.tcp_keepalive_time = 300
# allow reuse
net.ipv4.tcp_tw_reuse = 1
# quick recovery
net.ipv4.tcp_tw_recycle = 1
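After applying these settings, you can check from inside the Go process whether the larger file-descriptor limit is actually in effect; a small Linux-specific sketch:

package main

import (
	"fmt"
	"syscall"
)

func main() {
	// Query this process's open-file limit (RLIMIT_NOFILE).
	var rl syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		fmt.Println("getrlimit:", err)
		return
	}
	fmt.Printf("open files: soft=%d hard=%d\n", rl.Cur, rl.Max)
}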
I tried to replicate your problem on my Linux amd64 laptop with no success - it worked fine even with
ab -c 200 -n 500000 -p post.txt http://127.0.0.1:8080/test/bid
There were about 28,000 sockets open though, which may be bumping into a limit on your system.
A more real-world test might be to turn keep-alives on, which maxed out at 400 sockets:
ab -k -c 200 -n 500000 -p post.txt http://127.0.0.1:8080/test/bid
The result for this was
Server Software:
Server Hostname: 127.0.0.1
Server Port: 8080
Document Path: /test/bid
Document Length: 6 bytes
Concurrency Level: 200
Time taken for tests: 33.807 seconds
Complete requests: 500000
Failed requests: 0
Write errors: 0
Keep-Alive requests: 500000
Total transferred: 77000000 bytes
Total body sent: 221500000
HTML transferred: 3000000 bytes
Requests per second: 14790.04 [#/sec] (mean)
Time per request: 13.523 [ms] (mean)
Time per request: 0.068 [ms] (mean, across all concurrent requests)
Transfer rate: 2224.28 [Kbytes/sec] received
6398.43 kb/s sent
8622.71 kb/s total
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 11
Processing: 0 14 5.2 13 42
Waiting: 0 14 5.2 13 42
Total: 0 14 5.2 13 42
Percentage of the requests served within a certain time (ms)
50% 13
66% 16
75% 17
80% 18
90% 20
95% 21
98% 24
99% 27
100% 42 (longest request)
I suggest you try ab with -k, and look at tuning your system to allow lots of open sockets.
I've been testing a simple web server written in Go with http_load. When running the test for 1 second with 100 parallel connections, I see 16k requests completed. However, running the test for 10 seconds results in a similar total number of requests, i.e. completed at around 1/10th of the rate of the 1-second test.
Additionally, if I run several 1-second tests close together, the first test completes 16k requests and the subsequent tests complete just 100-200 requests each.
package main

import "net/http"

func main() {
	// Fill a 1 KB buffer once and serve it on every request.
	bytes := make([]byte, 1024)
	for i := 0; i < len(bytes); i++ {
		bytes[i] = 100
	}
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write(bytes)
	})
	http.ListenAndServe(":8000", nil)
}
I'm wondering whether there is any reason why performance would hit a ceiling at this number of requests, and whether there is something I have missed in the implementation of the above web server.
This is probably a limitation of your own system rather than of the Go server. The same kind of degradation happens if you hit something like google with http_load:
$> http_load -parallel 100 -seconds 10 google.txt
1000 fetches, 100 max parallel, 219000 bytes, in 10.0006 seconds
219 mean bytes/connection
99.9944 fetches/sec, 21898.8 bytes/sec
msecs/connect: 410.409 mean, 4584.36 max, 16.949 min
msecs/first-response: 279.595 mean, 3647.74 max, 35.539 min
HTTP response codes:
code 301 -- 1000
$> http_load -parallel 100 -seconds 50 google.txt
729 fetches, 100 max parallel, 159213 bytes, in 50.0008 seconds
218.399 mean bytes/connection
14.5798 fetches/sec, 3184.21 bytes/sec
msecs/connect: 1588.57 mean, 36192.6 max, 17.944 min
msecs/first-response: 237.376 mean, 33816.7 max, 33.092 min
2 bad byte counts
HTTP response codes:
code 301 -- 727
$> http_load -parallel 100 -seconds 100 google.txt
1091 fetches, 100 max parallel, 223161 bytes, in 100 seconds
204.547 mean bytes/connection
10.91 fetches/sec, 2231.61 bytes/sec
msecs/connect: 1652.16 mean, 35860.4 max, 17.825 min
msecs/first-response: 319.259 mean, 35482.1 max, 31.892 min
HTTP response codes:
code 301 -- 1019
As you can see, the rate goes down quite a bit the longer you hit google (google.txt contains the single URL "http://google.com"). This is most likely due to limitations in your system: the maximum number of open connections your programs can have, memory, CPU, etc.
Can anyone please explain how to analyze JMeter's Summary Report?
Example:
Label : Login Action(sampler)
Sample# : 1
average: 104 // What does this mean actually?
min : 104 // What does this mean actually?
max : 104
stddev : 0 // What does this mean actually?
error% : 0
Throughput : 9.615384615 // What does this mean actually?
Kb/Sec : 91.74053486 // What does this mean actually?
Average Bytes : 9770 // What does this mean actually?
It is pretty straightforward:
Average, min and max are the response times for the request, in milliseconds. The response time is measured from when the request is sent until the response is received. Since you have only one sample, they are of course all equal.
stddev is a measure of the variation of the response times: http://en.wikipedia.org/wiki/Standard_deviation
Throughput is the number of requests per second. With an average response time a little over 100 ms, the throughput is a little below 10.
Kb/Sec is the number of kilobytes transferred per second: Average Bytes (per request) × Throughput. Average Bytes is the size of the response, headers and body included.
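You can cross-check these numbers against each other: throughput ≈ 1000 ms / 104 ms ≈ 9.615 requests per second, and Kb/Sec ≈ 9770 bytes × 9.615 / 1024 ≈ 91.74, both of which match the values in the report.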
I have just found the very nice and simple explanation here:
http://jmeterresults.blogspot.jp/2012/07/jmeterunderstanding-summary-report.html
I am comparing performance of Node.js (0.5.1-pre) vs Apache (2.2.17) for a very simple scenario - serving a text file.
Here's the code I use for node server:
var http = require('http')
, fs = require('fs')
fs.readFile('/var/www/README.txt',
function(err, data) {
http.createServer(function(req, res) {
res.writeHead(200, {'Content-Type': 'text/plain'})
res.end(data)
}).listen(8080, '127.0.0.1')
}
)
For Apache I am just using whatever default configuration comes with Ubuntu 11.04.
When running Apache Bench with the following parameters against Apache
ab -n10000 -c100 http://127.0.0.1/README.txt
I get the following runtimes:
Time taken for tests: 1.083 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 27630000 bytes
HTML transferred: 24830000 bytes
Requests per second: 9229.38 [#/sec] (mean)
Time per request: 10.835 [ms] (mean)
Time per request: 0.108 [ms] (mean, across all concurrent requests)
Transfer rate: 24903.11 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.8 0 9
Processing: 5 10 2.0 10 23
Waiting: 4 10 1.9 10 21
Total: 6 11 2.1 10 23
Percentage of the requests served within a certain time (ms)
50% 10
66% 11
75% 11
80% 11
90% 14
95% 15
98% 18
99% 19
100% 23 (longest request)
When running ApacheBench against the node instance, these are the runtimes:
Time taken for tests: 1.712 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 25470000 bytes
HTML transferred: 24830000 bytes
Requests per second: 5840.83 [#/sec] (mean)
Time per request: 17.121 [ms] (mean)
Time per request: 0.171 [ms] (mean, across all concurrent requests)
Transfer rate: 14527.94 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.9 0 8
Processing: 0 17 8.8 16 53
Waiting: 0 17 8.6 16 48
Total: 1 17 8.7 17 53
Percentage of the requests served within a certain time (ms)
50% 17
66% 21
75% 23
80% 25
90% 28
95% 31
98% 35
99% 38
100% 53 (longest request)
This is clearly slower than Apache, which is especially surprising if you consider that Apache is doing a lot of other stuff too, like logging, etc.
Am I doing it wrong? Or is Node.js really slower in this scenario?
Edit 1: I do notice that node's concurrency handling is better - when increasing the number of simultaneous requests to 1000, Apache starts dropping a few of them, while node works fine with no connections dropped.
Dynamic requests
node.js is very good at handling lots of small dynamic requests (which can be hanging/long-polling), but it is not good at handling large buffers. Ryan Dahl (the author of node.js) explained this in one of his presentations. I recommend you study these slides; I also watched a recording of the talk online somewhere.
Garbage Collector
As you can see from slide 13 of 45, it is bad at big buffers.
Slide 15 of 45:
V8 has a generational garbage
collector. Moves objects around
randomly. Node can’t get a pointer to
raw string data to write to socket.
Use Buffer
Slide 16 of 45:
Using Node’s new Buffer object, the
results change.
Still not as good as, for example, nginx, but a lot better. Also, these slides are pretty old, so Ryan has probably improved on this since.
CDN
Still, I don't think you should be using node.js to host static files. You are probably better off hosting them on a CDN, which is optimized for serving static files. Wikipedia has a list of popular CDNs (some of them even free).
Nginx (+Memcached)
If you don't want to use a CDN to host your static files, I recommend using Nginx with memcached instead, which is very fast.
In this scenario Apache is probably using sendfile, which results in the kernel sending chunks of memory data (cached by the fs driver) directly to the socket. In the case of node there is some overhead in copying data in userspace between v8, libeio and the kernel (see this great article on using sendfile in node).
There are plenty of possible scenarios where node will outperform Apache, like 'sending a stream of data at a constant slow speed to as many TCP connections as possible'.
The result of your benchmark can change in favor of node.js if you increase the concurrency and use a cache in node.js.
Sample code from the book "Node Cookbook":
var http = require('http');
var path = require('path');
var fs = require('fs');

var mimeTypes = {
	'.js'  : 'text/javascript',
	'.html': 'text/html',
	'.css' : 'text/css'
};

var cache = {};

function cacheAndDeliver(f, cb) {
	if (!cache[f]) {
		// First request for this file: read it from disk and cache it.
		fs.readFile(f, function (err, data) {
			if (!err) {
				cache[f] = {content: data};
			}
			cb(err, data);
		});
		return;
	}
	console.log('loading ' + f + ' from cache');
	cb(null, cache[f].content);
}
http.createServer(function (request, response) {
	var lookup = path.basename(decodeURI(request.url)) || 'index.html';
	var f = 'content/' + lookup;
	fs.exists(f, function (exists) {
		if (exists) {
			// Serve via the cache; only the first hit touches the disk.
			cacheAndDeliver(f, function (err, data) {
				if (err) {
					response.writeHead(500);
					response.end('Server Error!');
					return;
				}
				var headers = {'Content-type': mimeTypes[path.extname(lookup)]};
				response.writeHead(200, headers);
				response.end(data);
			});
			return;
		}
		response.writeHead(404); // no such file found!
		response.end('Page Not Found!');
	});
}).listen(8080);
Really all you're doing here is getting the system to copy data between buffers in memory, in different processes' address spaces - the disk cache means you aren't really touching the disk, and you're using local sockets.
So the fewer copies that have to be done per request, the faster it goes.
Edit: I suggested adding caching, but in fact I see now that you're already doing that - you read the file once, then start the server and send back the same buffer each time.
Have you tried prepending the header part to the file data once up front, so you only have to do a single write operation for each request?
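For illustration, here is a minimal sketch of that single-write idea in Go (a different language, but the technique is the same): compose the full HTTP response, headers plus file data, once at startup, then issue one Write per request:

package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	// Read the file once at startup, like the node server above.
	body, err := os.ReadFile("/var/www/README.txt")
	if err != nil {
		panic(err)
	}
	// Precompose headers + body so each request needs a single Write.
	head := fmt.Sprintf("HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\n"+
		"Content-Length: %d\r\nConnection: close\r\n\r\n", len(body))
	resp := append([]byte(head), body...)

	ln, err := net.Listen("tcp", ":8080")
	if err != nil {
		panic(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			continue
		}
		go func(c net.Conn) {
			defer c.Close()
			buf := make([]byte, 4096)
			c.Read(buf) // consume the request; this sketch skips real parsing
			c.Write(resp)
		}(conn)
	}
}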
$ cat /var/www/test.php
<?php
for ($i=0; $i<10; $i++) {
echo "hello, world\n";
}
$ ab -r -n 100000 -k -c 50 http://localhost/test.php
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 10000 requests
Completed 20000 requests
Completed 30000 requests
Completed 40000 requests
Completed 50000 requests
Completed 60000 requests
Completed 70000 requests
Completed 80000 requests
Completed 90000 requests
Completed 100000 requests
Finished 100000 requests
Server Software: Apache/2.2.17
Server Hostname: localhost
Server Port: 80
Document Path: /test.php
Document Length: 130 bytes
Concurrency Level: 50
Time taken for tests: 3.656 seconds
Complete requests: 100000
Failed requests: 0
Write errors: 0
Keep-Alive requests: 100000
Total transferred: 37100000 bytes
HTML transferred: 13000000 bytes
Requests per second: 27350.70 [#/sec] (mean)
Time per request: 1.828 [ms] (mean)
Time per request: 0.037 [ms] (mean, across all concurrent requests)
Transfer rate: 9909.29 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 3
Processing: 0 2 2.7 0 29
Waiting: 0 2 2.7 0 29
Total: 0 2 2.7 0 29
Percentage of the requests served within a certain time (ms)
50% 0
66% 2
75% 3
80% 3
90% 5
95% 7
98% 10
99% 12
100% 29 (longest request)
$ cat node-test.js
var http = require('http');
http.createServer(function (req, res) {
res.writeHead(200, {'Content-Type': 'text/plain'});
res.end('Hello World\n');
}).listen(1337, "127.0.0.1");
console.log('Server running at http://127.0.0.1:1337/');
$ ab -r -n 100000 -k -c 50 http://localhost:1337/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 10000 requests
Completed 20000 requests
Completed 30000 requests
Completed 40000 requests
Completed 50000 requests
Completed 60000 requests
Completed 70000 requests
Completed 80000 requests
Completed 90000 requests
Completed 100000 requests
Finished 100000 requests
Server Software:
Server Hostname: localhost
Server Port: 1337
Document Path: /
Document Length: 12 bytes
Concurrency Level: 50
Time taken for tests: 14.708 seconds
Complete requests: 100000
Failed requests: 0
Write errors: 0
Keep-Alive requests: 0
Total transferred: 7600000 bytes
HTML transferred: 1200000 bytes
Requests per second: 6799.08 [#/sec] (mean)
Time per request: 7.354 [ms] (mean)
Time per request: 0.147 [ms] (mean, across all concurrent requests)
Transfer rate: 504.62 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 3
Processing: 0 7 3.8 7 28
Waiting: 0 7 3.8 7 28
Total: 1 7 3.8 7 28
Percentage of the requests served within a certain time (ms)
50% 7
66% 9
75% 10
80% 11
90% 12
95% 14
98% 16
99% 17
100% 28 (longest request)
$ node --version
v0.4.8
For the benchmarks above, the environment was:
Apache:
$ apache2 -version
Server version: Apache/2.2.17 (Ubuntu)
Server built: Feb 22 2011 18:35:08
The PHP APC cache/accelerator is installed.
Tests were run on my laptop, a Sager NP9280 with a Core i7 920 and 12 GB of RAM.
$ uname -a
Linux presto 2.6.38-8-generic #42-Ubuntu SMP Mon Apr 11 03:31:24 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
Kubuntu Natty