Slow Apigee response on cached items

I've set up a ResponseCache policy and added a header for cache hits and misses, based on the example on GitHub. The responses work as expected: the first call for a given key returns a miss, and the second returns a hit in the headers. The problem is that there is a lot of latency even on a cache hit, in the 500 ms to 1000 ms range, which seems really high to me. Is this because it's on a developer account?
I also ran a trace, and those responses are quick as expected, around 20 ms, but not in the app from my laptop in Chrome.
Here are some ab details for the request.
Compare those times to stackoverflow.com, for example (logged in), which takes about 200 ms for roughly 40 KB of page data. For fun, I added stackoverflow.com to the system, enabled the cache, and got similarly slow responses.
ab -n 100 -c 1 http://frank_zivtech-test.apigee.net/schedule/get?date=12/12/12
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking frank_zivtech-test.apigee.net (be patient).....done
Server Software: Apache/2.2.16
Server Hostname: frank_zivtech-test.apigee.net
Server Port: 80
Document Path: /schedule/get?date=12/12/12
Document Length: 45664 bytes
Concurrency Level: 1
Time taken for tests: 63.421 seconds
Complete requests: 100
Failed requests: 0
Write errors: 0
Total transferred: 4640700 bytes
HTML transferred: 4566400 bytes
Requests per second: 1.58 [#/sec] (mean)
Time per request: 634.208 [ms] (mean)
Time per request: 634.208 [ms] (mean, across all concurrent requests)
Transfer rate: 71.46 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 36 47 20.8 43 166
Processing: 385 587 105.5 574 922
Waiting: 265 435 80.7 437 754
Total: 428 634 114.6 618 1055
Percentage of the requests served within a certain time (ms)
50% 618
66% 624
75% 630
80% 652
90% 801
95% 884
98% 1000
99% 1055
100% 1055 (longest request)
Here it is using webpagetest.org, with a different cached endpoint that returns very little data:
http://www.webpagetest.org/result/140228_7B_40G/

This shouldn't be related to a free vs. paid account.
If trace shows ~20 ms spent inside Apigee, I would factor in the network latency from your client (laptop) to Apigee. Time between the client and Apigee can also be higher depending on the payload size and whether it is compressed (gzip, deflate).
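If you want to see where the client-side time actually goes, a sketch along these lines can split a single request into DNS, connect and time-to-first-byte phases. (This is an illustration, not part of the original answer: it uses Go's net/http/httptrace package, and the URL is simply the endpoint from the question.)

package main

import (
	"fmt"
	"net/http"
	"net/http/httptrace"
	"time"
)

func main() {
	req, err := http.NewRequest("GET",
		"http://frank_zivtech-test.apigee.net/schedule/get?date=12/12/12", nil)
	if err != nil {
		panic(err)
	}

	var start, dns, connect time.Time
	trace := &httptrace.ClientTrace{
		DNSStart: func(httptrace.DNSStartInfo) { dns = time.Now() },
		DNSDone: func(httptrace.DNSDoneInfo) {
			fmt.Printf("DNS lookup:         %v\n", time.Since(dns))
		},
		ConnectStart: func(network, addr string) { connect = time.Now() },
		ConnectDone: func(network, addr string, err error) {
			fmt.Printf("TCP connect:        %v\n", time.Since(connect))
		},
		GotFirstResponseByte: func() {
			fmt.Printf("Time to first byte: %v\n", time.Since(start))
		},
	}
	req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))

	start = time.Now()
	resp, err := http.DefaultTransport.RoundTrip(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Printf("Total to headers:   %v (status %d)\n", time.Since(start), resp.StatusCode)
}

If time to first byte is high while trace shows ~20 ms inside Apigee, the difference is the network path, not the cache.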

Related

NewSingleHostReverseProxy is 10 times slower

I have this very simple RP demo here, which looks to be slower than it should be.
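For context, a minimal NewSingleHostReverseProxy setup looks roughly like this (a sketch, not the linked demo itself; the backend address on :9000 is an assumption):

package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Forward everything received on :8080 to the backend.
	backend, err := url.Parse("http://localhost:9000")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(backend)
	log.Fatal(http.ListenAndServe(":8080", proxy))
}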
Here is an ab run against the RP:
ab -q -k -n 20000 -c 100 http://localhost:8080/home
This is ApacheBench, Version 2.3 <$Revision: 1748469 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient).....done
Server Software:
Server Hostname: localhost
Server Port: 8080
Document Path: /home
Document Length: 0 bytes
Concurrency Level: 100
Time taken for tests: 4.842 seconds
Complete requests: 20000
Failed requests: 0
Keep-Alive requests: 0
Total transferred: 4360000 bytes
HTML transferred: 0 bytes
Requests per second: 4130.48 [#/sec] (mean)
Time per request: 24.210 [ms] (mean)
Time per request: 0.242 [ms] (mean, across all concurrent requests)
Transfer rate: 879.34 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 0.9 0 10
Processing: 1 24 9.5 22 90
Waiting: 1 23 9.4 22 90
Total: 1 24 9.5 23 90
WARNING: The median and mean for the initial connection time are not within a normal deviation
These results are probably not that reliable.
Percentage of the requests served within a certain time (ms)
50% 23
66% 26
75% 29
80% 31
90% 36
95% 42
98% 49
99% 54
100% 90 (longest request)
When I performance-test the backend directly, it can do 40k QPS.
I feel like the RP could be way faster, but I'm not sure how, nor why it is showing the current results.
Thanks!

What does KB/sec mean?

We have got this TOTAL row:
Label: 10
Average: 1288
Median: 1278
90%: 1525
95%: 1525
99%: 1546
Min: 887
Max: 1546
Throughput: 6.406149903907751
KB/sec: 39.21264413837284
What is the meaning of KB/sec? Please help me understand it.
According to the Glossary:
KB/sec (Aggregate Report)
Throughput is measured in bytes and represents the amount of data that the virtual users received from the server. The throughput KPI is measured in kilobytes (KB) per second.
So basically it is the average amount of data received by JMeter from the application under test per second.
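As a sanity check against the numbers above: 39.21 KB/s ÷ 6.41 requests/s ≈ 6.1 KB of response data received per request on average.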
KB/sec is the speed of a connection: KB means kilobyte and sec means per second.
You get faster speeds in MB/sec, which is megabytes per second, and even faster speeds in GB/sec, which is gigabytes per second.
1000 KB = 1 MB
1000 MB = 1 GB
Hope this helps :)

Increase apache requests per second

I want to increase Apache's requests-per-second figure.
I'm using ApacheBench to measure it, and it's not going over 500.
ab -n 100 -c 100 http://localhost/
This is the command I'm using; it gives me roughly 500 RPS:
Concurrency Level: 100
Time taken for tests: 0.212 seconds
Complete requests: 100
Failed requests: 0
Write errors: 0
Total transferred: 17925 bytes
HTML transferred: 900 bytes
Requests per second: 472.05 [#/sec] (mean)
Time per request: 211.843 [ms] (mean)
Time per request: 2.118 [ms] (mean, across all concurrent requests)
Transfer rate: 82.63 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 9 9 0.2 9 9
Processing: 20 150 36.8 160 200
Waiting: 19 148 36.6 159 200
Total: 30 159 36.8 169 209
Percentage of the requests served within a certain time (ms)
50% 169
66% 176
75% 182
80% 187
90% 200
95% 206
98% 209
99% 209
100% 209 (longest request)
That is the whole output.
I'm using the worker MPM for this, with the following config:
<IfModule mpm_worker_module>
ServerLimit 200
StartServers 200
MaxClients 5000
MinSpareThreads 1500
MaxSpareThreads 2000
ThreadsPerChild 64
MaxRequestsPerChild 0
</IfModule>
I suppose these are pretty high figures; nevertheless, I keep increasing them and nothing seems to change.
The application itself doesn't contain anything; it only prints 'Hello World' with CherryPy.
I want to increase it to something like 2000 RPS. My RAM is 5 GB (using a VM).
The numbers you've set in your configuration look wrong (for example, StartServers 200 with ThreadsPerChild 64 asks for 12,800 threads at startup, well above your MaxClients of 5000), but the only way to get the right numbers is by measuring how your system behaves with real traffic.
Measuring response time across the loopback interface is not very meaningful. Measuring response time for a single URL is not very meaningful. Measuring response time with a load generator running on the same machine as the webserver is not very meaningful.
Making your site go faster / increasing its capacity is very difficult, and it needs much more testing, data and analysis than is appropriate for this forum.
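One back-of-the-envelope check is still worth doing: throughput is roughly concurrency divided by mean response time, and 100 ÷ 0.212 s ≈ 472 requests/second, which is exactly the figure ab reported. To reach 2000 RPS at the same concurrency, the mean response time would have to drop to about 50 ms, which suggests per-request latency, not the MPM thread limits, is what caps the number.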

Different results in ApacheBench with and without concurrent requests

I am trying to get some statistics on response time for my production server.
When calling ab -n100 -c1 "http://example.com/search?q=something" I get the following results:
Connection Times (ms)
min mean[+/-sd] median max
Connect: 24 25 0.7 24 29
Processing: 526 874 116.1 868 1263
Waiting: 313 608 105.1 596 1032
Total: 552 898 116.1 892 1288
But when I call ab -n100 -c3 "http://example.com/search?q=something" the results are much worse:
Connection Times (ms)
min mean[+/-sd] median max
Connect: 24 25 0.8 25 30
Processing: 898 1872 1065.6 1689 8114
Waiting: 654 1410 765.5 1299 7821
Total: 923 1897 1065.5 1714 8138
Taking into account that the site is in production, so there are other requests besides mine, I can't explain why calls with no concurrency are so much faster than calls with even a little concurrency.
Any suggestions?
If you have a concurrency of 1, you are telling ab to hit this URL as fast as it can using one thread. The value -c3 tells ab to do the same thing using 3 threads, which is probably going to result in a greater volume of calls and which, in your case, appears to have caused things to slow down. (Note that ab is single-threaded, so it doesn't actually use multiple OS threads, but the analogy still holds.)
It's a bit like having more lanes at a tollbooth: one lane can only process cars so fast, but with three lanes you get more throughput. No matter how many lanes you have, though, the width of the tunnel the cars pass through after the tollbooth also limits throughput, and that is probably what you are seeing.
As a general note, a better approach to load testing is to decide what level of traffic your app needs to support, then design a test that generates that level of throughput and no more. Running threads as fast as they can, as ab does, tends to make any kind of controlled testing hard. JMeter is better.
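As an illustration of that idea (not of JMeter itself), a fixed-rate load generator can be sketched in a few lines of Go; the 50 requests/second target and the request count are arbitrary assumptions, and the URL is the one from the question:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	const targetRPS = 50      // assumed target rate
	const totalRequests = 500 // assumed test size

	// Fire one request per tick so the offered load stays constant,
	// instead of hitting the server as fast as possible like ab does.
	ticker := time.NewTicker(time.Second / targetRPS)
	defer ticker.Stop()

	for i := 0; i < totalRequests; i++ {
		<-ticker.C
		go func() {
			start := time.Now()
			resp, err := http.Get("http://example.com/search?q=something")
			if err != nil {
				fmt.Println("error:", err)
				return
			}
			resp.Body.Close()
			fmt.Println(time.Since(start))
		}()
	}
	time.Sleep(5 * time.Second) // let in-flight requests finish
}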
Also, you might want to think about setting up a test server for this sort of thing; it's less risky...

NodeJS on Ubuntu slow?

I just installed Ubuntu 10.10 Server with Node.js 0.4.6 (using this guide: http://www.codediesel.com/linux/installing-node-js-on-ubuntu-10-04/) on my laptop:
Acer 5920G (Intel Core 2 Duo, 2 GHz, 4 GB RAM)
After that, I created a little test of how Node.js would perform and wrote this little hello-world script:
var http = require('http');

http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/html'});
    res.write('Hello World');
    res.end();
}).listen(8000);
Now, to test the performance, I used ApacheBench on Windows with the following settings:
ab -r -c 1000 -n 10000 http://192.168.1.103:8000/
But the results are very low compared to http://zgadzaj.com/benchmarking-node-js-testing-performance-against-apache-php/
Server Software:
Server Hostname: 192.168.1.103
Server Port: 8000
Document Path: /
Document Length: 12 bytes
Concurrency Level: 1000
Time taken for tests: 23.373 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 760000 bytes
HTML transferred: 120000 bytes
Requests per second: 427.84 [#/sec] (mean)
Time per request: 2337.334 [ms] (mean)
Time per request: 2.337 [ms] (mean, across all concurrent requests)
Transfer rate: 31.75 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 1.3 1 28
Processing: 1236 2236 281.2 2327 2481
Waiting: 689 1522 169.5 1562 1785
Total: 1237 2238 281.2 2328 2484
Percentage of the requests served within a certain time (ms)
50% 2328
66% 2347
75% 2358
80% 2364
90% 2381
95% 2397
98% 2442
99% 2464
100% 2484 (longest request)
Anyone got a clue? (Compilation, hardware problem, drivers, configuration, slow script?)
Edit 4-17 14:04 GMT+1
I am testing the machine over a 1 Gbit local connection. When I ping it, I get 0 ms, so that should be fine, I guess. When I run ApacheBench on my Windows 7 machine, the CPU rises to 100% :|
It seems like you are running the test over a medium with a high bandwidth-delay product; in your case, high latency (>1 s). Assuming a 1 s delay, a 100 Mbit link and 76 bytes per request, you need more than 150,000 requests in parallel to saturate it (the link holds 100 Mbit/s × 1 s = 12.5 MB in flight, and 12.5 MB ÷ 76 bytes ≈ 164,000 requests).
First, test the latency (with ping or similar). Also, watch the CPU and network usage on all participating machines; this will give you an indication of the bottleneck in your tests. What are the benchmark results for an Apache webserver?
Also, it could be a hardware/driver problem. Watch dmesg on both machines. And although it's probably not the reason for this specific problem, don't forget to change the CPU speed governor to performance on both machines!
