Performance tuning in Apache web server - MaxClients value

I have two Apache (IBM HTTP Server) instances with the following settings:
ThreadLimit 150
ServerLimit 8
MaxClients 1200
ThreadsPerChild 150
The server has 8 cores and 24 GB of RAM (Linux box). I'm looking at increasing the MaxClients value. What are the things I should be considering?
Also, when I run ss -s I get:
Transport Total     IP        IPv6
*         1243      -         -
RAW       0         0         0
UDP       20        15        5
TCP       836       803       33
INET      856       818       38
FRAG      0         0         0
Does the TCP Total value (836) correspond to the MaxClients setting?
Thanks

There is no real correspondence. MaxClients is the maximum number of concurrent threads Apache will use. If Apache only ever handled static files, the share of open sockets on Apache's port would be roughly capped by MaxClients.
But backend connections and connection pooling mean you could have far more open sockets than threads (MaxClients).
1200 is pretty modest; 2000-3000 is still reasonable.
You can chart the usage via mod_mpmstats.
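For the worker MPM, MaxClients cannot exceed ServerLimit × ThreadsPerChild, so raising it means raising ServerLimit too. A sketch of what a bump into the suggested 2000-3000 range might look like with the thread counts from the question (the values are illustrative assumptions to validate under real load, not a recommendation):

```apache
<IfModule mpm_worker_module>
    # 16 processes x 150 threads = 2400 worker threads available
    ServerLimit        16
    ThreadLimit        150
    ThreadsPerChild    150
    MaxClients         2400
</IfModule>
```

Each extra thread costs memory (stack plus per-request buffers), so watch resident memory under load before going higher.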

Related

Increase apache requests per second

I want to increase Apache's requests-per-second figure.
I'm using ApacheBench to measure it, and it's not going over 500.
ab -n 100 -c 100 http://localhost/
This is the command I'm using; it gives me about 500 RPS:
Concurrency Level: 100
Time taken for tests: 0.212 seconds
Complete requests: 100
Failed requests: 0
Write errors: 0
Total transferred: 17925 bytes
HTML transferred: 900 bytes
Requests per second: 472.05 [#/sec] (mean)
Time per request: 211.843 [ms] (mean)
Time per request: 2.118 [ms] (mean, across all concurrent requests)
Transfer rate: 82.63 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 9 9 0.2 9 9
Processing: 20 150 36.8 160 200
Waiting: 19 148 36.6 159 200
Total: 30 159 36.8 169 209
Percentage of the requests served within a certain time (ms)
50% 169
66% 176
75% 182
80% 187
90% 200
95% 206
98% 209
99% 209
100% 209 (longest request)
This is the whole output.
I'm using the worker MPM with this configuration:
<IfModule mpm_worker_module>
ServerLimit 200
StartServers 200
MaxClients 5000
MinSpareThreads 1500
MaxSpareThreads 2000
ThreadsPerChild 64
MaxRequestsPerChild 0
</IfModule>
I suppose these are pretty high figures; nevertheless, I keep increasing them and nothing seems to change.
The application itself doesn't do anything; it only prints 'Hello World' with CherryPy.
I want to increase it to around 2000 RPS. My RAM is 5 GB (I'm using a VM).
The numbers you've set in your configuration look wrong (for example, StartServers 200 × ThreadsPerChild 64 is 12,800 threads at startup, far above what MaxClients 5000 allows), but the only way to get the right numbers is by measuring how your system behaves with real traffic.
Measuring response time across the loopback interface is not very meaningful. Measuring response time for a single URL is not very meaningful. Measuring response time with a load generator running on the same machine as the webserver is not very meaningful.
Making your site go faster / increasing the capacity is very difficult and needs much more testing, data and analysis than is appropriate for this forum.
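There is also a hard arithmetic ceiling here: by Little's law, throughput is capped at concurrency divided by mean response time, so with -c 100 and ~212 ms per request, ab cannot report much more than ~470 RPS no matter how the MPM is tuned. A quick check with the numbers from the output above:

```python
# Little's law: throughput <= concurrency / mean response time.
concurrency = 100    # from "ab -c 100"
mean_rt_s = 0.212    # "Time per request: 211.843 ms" (mean), rounded

max_rps = concurrency / mean_rt_s
print(round(max_rps))  # 472, matching ab's "Requests per second"
```

So reaching 2000 RPS at 212 ms per request would need roughly 424 requests in flight, or (more realistically) a much faster response than 200 ms for a 'Hello World' page.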

NS-3 TCP vs. UDP throughput

I'm a new NS-3 user. I'm trying to find and verify the throughput of a TCP wireless network. When experimenting with "ht-wifi-network.cc" (http://www.nsnam.org/doxygen-release/ht-wifi-network_8cc_source.html) from the example files, I used the default settings (a UDP flow) and then tried a TCP flow. I noticed two things:
Throughput is very low compared with the data rate: UDP is 22.78 / 65 and TCP is 11.73 / 65 Mbit/s. Is this how the result should look? I was expecting at least 30 Mbps out of 65 Mbps.
UDP throughput is almost twice the TCP throughput, but I expected TCP throughput to be higher.
Can somebody help and explain why? Thanks!
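For context (not an NS-3-specific answer): 802.11 MAC overhead (preambles, interframe spacing, link-layer ACKs) means goodput is always well below the PHY data rate, and TCP's own ACK segments must contend for the same half-duplex channel, which is consistent with the ratios in the question:

```python
# Ratios implied by the figures in the question (Mbit/s).
phy_rate = 65.0
udp_tput = 22.78
tcp_tput = 11.73

print(round(udp_tput / phy_rate, 2))  # 0.35: even UDP sees ~35% of the PHY rate
print(round(udp_tput / tcp_tput, 2))  # 1.94: TCP roughly halves it again
```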

Slow apigee response on cached items

I've set up a responseCache and added a header for cache hits and misses based on the example on GitHub. The responses work as expected: the first call for a specific key returns a miss, and the second returns a hit in the headers. The problem is that there seems to be a lot of latency even on a cache hit, in the 500-1000 ms range, which seems really high to me. Is this because it's on a developer account?
I also tried the trace, and those responses seem quick as expected (around 20 ms), but not in the app from my laptop in Chrome.
Here are some details for the 10K request.
Compare those times to stackoverflow.com, for example (logged in), which is 200 ms with 40K of page data. For fun, I added Stack Overflow to the system and enabled the cache, and got similarly slow responses.
ab -n 100 -c 1 http://frank_zivtech-test.apigee.net/schedule/get?date=12/12/12
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking frank_zivtech-test.apigee.net (be patient).....done
Server Software: Apache/2.2.16
Server Hostname: frank_zivtech-test.apigee.net
Server Port: 80
Document Path: /schedule/get?date=12/12/12
Document Length: 45664 bytes
Concurrency Level: 1
Time taken for tests: 63.421 seconds
Complete requests: 100
Failed requests: 0
Write errors: 0
Total transferred: 4640700 bytes
HTML transferred: 4566400 bytes
Requests per second: 1.58 [#/sec] (mean)
Time per request: 634.208 [ms] (mean)
Time per request: 634.208 [ms] (mean, across all concurrent requests)
Transfer rate: 71.46 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 36 47 20.8 43 166
Processing: 385 587 105.5 574 922
Waiting: 265 435 80.7 437 754
Total: 428 634 114.6 618 1055
Percentage of the requests served within a certain time (ms)
50% 618
66% 624
75% 630
80% 652
90% 801
95% 884
98% 1000
99% 1055
100% 1055 (longest request)
Here it is using webpagetest.org with a different cached endpoint with very little data:
http://www.webpagetest.org/result/140228_7B_40G/
This shouldn't be related to a free vs. paid account.
If trace is showing ~20 ms for the time spent in Apigee, I would factor in network latency from your client (laptop) to Apigee. Time between the client and Apigee can also be higher depending on the payload size and whether it is compressed (gzip, deflate).
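One way to see this: subtract the time the Apigee trace accounts for from ab's mean waiting time; whatever is left is spent on the network path between the laptop and the edge, outside the trace window. A rough budget from the numbers above (the 20 ms trace figure is taken from the question):

```python
# All values in milliseconds, taken from the ab output and the question.
connect_mean = 47    # TCP handshake to the Apigee edge
waiting_mean = 435   # mean time to first byte after the request was sent
trace_time = 20      # time the Apigee trace attributes to the proxy itself

unattributed = waiting_mean - trace_time
print(unattributed)  # 415 ms not explained by the proxy: network path, not the cache
```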

Cassandra Amazon EC2 , lots of IOWait

We have the following stats for single-node Cassandra on an Amazon EC2/RightScale m1.large instance with 2 ephemeral disks in RAID 0 (7.6 GB total memory).
4 GB of RAM is allocated to the Cassandra heap; 800 MB is the heap NEW size.
The following stats are from OpsCenter Community 2.0:
Read Requests 285 to 340 per second
Write Requests 257 to 720 per second
OS Load 15.15 to 17.15
Write Request Latency 293 to 685 micros
OS Sent Network Traffic 18 MB to 30 MB per second
OS Received Network Traffic 22 MB to 34 MB per second
OS Disk Queue Size 23 to 26 requests
Read Requests Pending 8 to 20
Read Request Latency 69140 to 92885 micros
OS Disk latency 37 to 42 ms
OS Disk Throughput 12 to 14 Mb per second
Disk IOPs Reads 600 to 740 per second
Disk IOPs Writes 2 to 7 per second
IOWait 60 to 70 % CPU avg
Idle 24 to 30 % CPU avg
Rowcache is disabled.
Are the above stats satisfactory for this configuration, or how could we tweak it further to reduce IOWait? We think we are experiencing a lot of IOWait.
Read requests are mixed: some come from one super column family and some from a standard one with more than a million keys. The super column family has up to 14 super columns, each with anywhere from 1 to 10,000 subcolumns; the standard column family has up to 14 columns. The subcolumns are very thin: 0-byte values with 8-byte names.
The process removes data from the super column family and writes the processed data to the standard one.
Would EBS disks work better on Amazon EC2?
I'm not positive whether you can easily tweak your config to get more disk performance, but using Snappy compression could help a good deal by making your app read less overall. It may also help to use the new composite-key layout instead of supercolumns.
One thing I can say for sure: EBS will NOT work better. Stay away from that at all costs if you care about latency.
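If you do try Snappy, sstable compression is set per column family; a sketch in CQL3 (assuming Cassandra 1.1+; `ks.cf` is a placeholder for your keyspace and column family):

```sql
-- Enable Snappy sstable compression on an existing column family
-- (hypothetical keyspace/table names).
ALTER TABLE ks.cf
  WITH compression = {'sstable_compression': 'SnappyCompressor'};
```

Existing sstables only pick up the setting as they are rewritten by flushes and compactions, so the read-size benefit arrives gradually.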

NodeJS on Ubuntu slow?

I just installed Ubuntu 10.10 Server with Node.js 0.4.6 using this guide: http://www.codediesel.com/linux/installing-node-js-on-ubuntu-10-04/ on my laptop:
Acer 5920G (Intel Core 2 Duo (2 GHz), 4 GB RAM)
After that I wrote this little 'Hello World' script to test how Node.js would perform:
var http = require('http');
http.createServer(function(req, res) {
  res.writeHead(200, {'Content-Type': 'text/html'});
  res.write('Hello World');
  res.end();
}).listen(8080);
To test the performance I used ApacheBench on Windows with the following settings:
ab -r -c 1000 -n 10000 http://192.168.1.103:8000/
But the results are very low compared to http://zgadzaj.com/benchmarking-node-js-testing-performance-against-apache-php/
Server Software:
Server Hostname: 192.168.1.103
Server Port: 8000
Document Path: /
Document Length: 12 bytes
Concurrency Level: 1000
Time taken for tests: 23.373 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 760000 bytes
HTML transferred: 120000 bytes
Requests per second: 427.84 [#/sec] (mean)
Time per request: 2337.334 [ms] (mean)
Time per request: 2.337 [ms] (mean, across all concurrent requests)
Transfer rate: 31.75 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 1.3 1 28
Processing: 1236 2236 281.2 2327 2481
Waiting: 689 1522 169.5 1562 1785
Total: 1237 2238 281.2 2328 2484
Percentage of the requests served within a certain time (ms)
50% 2328
66% 2347
75% 2358
80% 2364
90% 2381
95% 2397
98% 2442
99% 2464
100% 2484 (longest request)
Anyone got a clue? (compile issue, hardware problem, drivers, configuration, slow script)
Edit 4-17 14:04 GMT+1
I am testing the machine over a 1 Gbit local connection. Ping gives me 0 ms, so that should be fine, I guess. When I run ApacheBench on my Windows 7 machine, its CPU spikes to 100% :|
It seems like you are running the test over a medium with a high Bandwidth-Delay Product; in your case, high latency (>1s). Assuming 1s delay, a 100MBit link and 76 Bytes per request, you need more than 150000 requests in parallel to saturate it.
First, test the latency (with ping or so). Also, watch the CPU and network usage on all participating machines. This will give you an indication of the bottleneck in your tests. What are the benchmark results for an Apache webserver?
Also, it could be hardware/driver problem. Watch dmesg on both machines. And although it's probably not the reason for this specific problem, don't forget to change the CPU speed governor to performance on both machines!
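The bandwidth-delay-product claim above can be checked directly: with a 100 Mbit/s link, 1 s of delay, and 76 bytes per request, the pipe holds far more requests than ab's -c 1000 ever keeps in flight:

```python
# Bandwidth-delay product, using the assumptions stated in the answer.
link_bps = 100e6   # assumed 100 Mbit/s link
delay_s = 1.0      # assumed 1 s delay
req_bytes = 76     # assumed bytes per request on the wire

bytes_in_flight = link_bps / 8 * delay_s       # 12.5 MB fits "in the pipe"
parallel = bytes_in_flight / req_bytes
print(int(parallel))  # ~164473 concurrent requests needed to saturate the link
```

With only 1000 concurrent requests, the link stays mostly idle and the measured RPS reflects latency, not server capacity.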