Ruby concurrency: non-blocking I/O vs threads

I am playing around with concurrency in Ruby (1.9.3-p0), and have created a very simple, I/O-heavy proxy task. First, I tried the non-blocking approach:
require 'rack'
require 'rack/fiber_pool'
require 'em-http'
require 'em-synchrony'
require 'em-synchrony/em-http'
proxy = lambda {|*|
  result = EM::Synchrony.sync EventMachine::HttpRequest.new('http://google.com').get
  [200, {}, [result.response]]
}
use Rack::FiberPool, :size => 1000
run proxy
=begin
$ thin -p 3000 -e production -R rack-synchrony.ru start
>> Thin web server (v1.3.1 codename Triple Espresso)
$ ab -c100 -n100 http://localhost:3000/
Concurrency Level: 100
Time taken for tests: 5.602 seconds
HTML transferred: 21900 bytes
Requests per second: 17.85 [#/sec] (mean)
Time per request: 5602.174 [ms] (mean)
=end
Hmm, I thought I must be doing something wrong. An average request time of 5.6s for a task where we are mostly waiting for I/O? I tried another one:
require 'sinatra'
require 'sinatra/synchrony'
require 'em-synchrony/em-http'
get '/' do
  EM::HttpRequest.new("http://google.com").get.response
end
=begin
$ ruby sinatra-synchrony.rb -p 3000 -e production
== Sinatra/1.3.1 has taken the stage on 3000 for production with backup from Thin
>> Thin web server (v1.3.1 codename Triple Espresso)
$ ab -c100 -n100 http://localhost:3000/
Concurrency Level: 100
Time taken for tests: 5.476 seconds
HTML transferred: 21900 bytes
Requests per second: 18.26 [#/sec] (mean)
Time per request: 5475.756 [ms] (mean)
=end
Hmm, a little better, but not what I would call a success. Finally, I tried a threaded implementation:
require 'rack'
require 'excon'
proxy = lambda {|*|
  result = Excon.get('http://google.com')
  [200, {}, [result.body]]
}
run proxy
=begin
$ thin -p 3000 -e production -R rack-threaded.ru --threaded --no-epoll start
>> Thin web server (v1.3.1 codename Triple Espresso)
$ ab -c100 -n100 http://localhost:3000/
Concurrency Level: 100
Time taken for tests: 2.014 seconds
HTML transferred: 21900 bytes
Requests per second: 49.65 [#/sec] (mean)
Time per request: 2014.005 [ms] (mean)
=end
That was really, really surprising. Am I missing something here? Why is EM performing so badly? Is there some tuning I need to do? I tried various combinations (Unicorn, several Rainbows! configurations, etc.), but none of them came even close to the simple old blocking-I/O threaded version.
Ideas, comments and - obviously - suggestions for better implementations are very welcome.

See how your "Time per request" exactly equals the total "Time taken for tests"? That is an arithmetic artifact of your request count (-n) being equal to your concurrency level (-c): ab computes the mean time per request as total time × concurrency / number of requests, so when -n == -c the mean equals the total time, i.e. the duration of the longest request (with your numbers: 5.602 s × 100 / 100 = 5,602 ms). Run ab with -n several times larger than -c to get meaningful measurements.
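For example, keeping your concurrency but raising the request count (an illustrative invocation against the proxy from your question; any -n several times larger than -c will do) yields a mean that actually reflects per-request latency:
$ ab -c100 -n1000 http://localhost:3000/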
You also seem to be using an old version of ab, as a relatively current one reports far more detailed results by default. Running directly against Google, I see the same total-time == mean-time behaviour when -n == -c, and much more reasonable numbers when -n > -c. You really want to look at the requests per second, the mean across all concurrent requests, and the final service-level breakdown to get a better understanding.
$ ab -c50 -n50 http://google.com/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking google.com (be patient).....done
Server Software: gws
Server Hostname: google.com
Server Port: 80
Document Path: /
Document Length: 219 bytes
Concurrency Level: 50
Time taken for tests: 0.023 seconds <<== note same as below
Complete requests: 50
Failed requests: 0
Write errors: 0
Non-2xx responses: 50
Total transferred: 27000 bytes
HTML transferred: 10950 bytes
Requests per second: 2220.05 [#/sec] (mean)
Time per request: 22.522 [ms] (mean) <<== note same as above
Time per request: 0.450 [ms] (mean, across all concurrent requests)
Transfer rate: 1170.73 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        1    2   0.6      3       3
Processing:     8    9   2.1      9      19
Waiting:        8    9   2.1      9      19
Total:         11   12   2.1     11      22
WARNING: The median and mean for the initial connection time are not within a normal deviation
These results are probably not that reliable.
Percentage of the requests served within a certain time (ms)
50% 11
66% 12
75% 12
80% 12
90% 12
95% 12
98% 22
99% 22
100% 22 (longest request) <<== note same as total and mean above
$ ab -c50 -n500 http://google.com/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking google.com (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Finished 500 requests
Server Software: gws
Server Hostname: google.com
Server Port: 80
Document Path: /
Document Length: 219 bytes
Concurrency Level: 50
Time taken for tests: 0.110 seconds
Complete requests: 500
Failed requests: 0
Write errors: 0
Non-2xx responses: 500
Total transferred: 270000 bytes
HTML transferred: 109500 bytes
Requests per second: 4554.31 [#/sec] (mean)
Time per request: 10.979 [ms] (mean)
Time per request: 0.220 [ms] (mean, across all concurrent requests)
Transfer rate: 2401.69 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        1    1   0.7      1       3
Processing:     8    9   0.7      9      13
Waiting:        8    9   0.7      9      13
Total:          9   10   1.3     10      16
Percentage of the requests served within a certain time (ms)
50% 10
66% 11
75% 11
80% 12
90% 12
95% 13
98% 14
99% 15
100% 16 (longest request)

Related

Large result is slow anywhere but local

I have a fairly large query running on ClickHouse. When I run it on localhost from the command line it consistently takes about 0.7 s to complete. The issue is that when querying from C# / HTTP / Postman it takes about 10 times as long to return the data. The result is only about 3-4 MB, so I don't think it's a size issue.
I have tried to monitor network latency, but there is nothing to notice there.
On the host it works like a charm, but from outside it does not... what to do?
I expected the latency to be a few hundred ms, but it turns out to be 7 s.
Check timings with curl (see https://clickhouse.yandex/docs/en/interfaces/http/ and https://stackoverflow.com/a/22625150) and compare local vs. remote.
ClickHouse over HTTP usually gives almost the same performance as the native TCP protocol, and HTTP can even be faster for small result sets (around 10 rows).
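A minimal sketch of such a timing check, using curl's documented --write-out variables (substitute your own host, port and query):
$ curl -s -o /dev/null \
    -w 'dns=%{time_namelookup} connect=%{time_connect} ttfb=%{time_starttransfer} total=%{time_total}\n' \
    'http://localhost:8123/?query=SELECT%201'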
Again: the problem is not HTTP. Example:
time clickhouse-client -q "select number, arrayMap(x->sipHash64(number,x), range(10)) from numbers(10000)" >native.out
real 0m0.034s
time curl -S -o http.out 'http://localhost:8123/?query=select%20number%2C%20arrayMap(x-%3EsipHash64(number%2Cx)%2C%20range(10))%20from%20numbers(10000)'
real 0m0.017s
ls -l http.out native.out
2108707 Oct 1 16:17 http.out
2108707 Oct 1 16:17 native.out
10,000 rows ≈ 2 MB
HTTP is faster: 0.017 s vs 0.034 s
Canada -> Germany (openvpn)
time curl -S -o http.out 'http://user:xxx@cl.host.x:8123/?query=select%20number%2C%20arrayMap(x-%3EsipHash64(number%2Cx)%2C%20range(10))%20from%20numbers(10000)'
real 0m1.619s
ping cl.host.x
PING cl.host.x (10.253.52.6): 56 data bytes
64 bytes from 10.253.52.6: icmp_seq=0 ttl=61 time=131.710 ms
64 bytes from 10.253.52.6: icmp_seq=1 ttl=61 time=133.711 ms

minifi java agent uses high CPU on AIX

I noticed that the TailFile processor consumes a lot of CPU on the AIX operating system.
Can I do anything to reduce the consumption?
Processors:
- id: xxxxxxxxxxxxxxxxxxxxxxxxxxx
  name: TailFile
  class: org.apache.nifi.processors.standard.TailFile
  max concurrent tasks: 1
  scheduling strategy: TIMER_DRIVEN
  scheduling period: 0 sec
  penalization period: 30 sec
  yield period: 1 sec
  run duration nanos: 0
  auto-terminated relationships list:
  - success
  Properties:
    File Location: Local
    File to Tail: *.log
    Initial Start Position: Beginning of File
    Rolling Filename Pattern:
    tail-base-directory: /WorkingDir85/log/
    tail-mode: Multiple files
    tailfile-lookup-frequency: 10 minutes
    tailfile-maximum-age: 24 hours
    tailfile-recursive-lookup: 'false'
A scheduling period of 0 sec basically means "run as fast as possible". Setting it to something like '10 ms' or even '1 ms' should lighten the CPU usage.
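For example, the relevant lines of the flow definition above would become (illustrative value; tune it to your latency/CPU trade-off):
  scheduling strategy: TIMER_DRIVEN
  scheduling period: 10 ms   # was "0 sec", i.e. reschedule immediately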

g-wan - reproducing the performance claims

Using gwan_linux64-bit.tar.bz2 under Ubuntu 12.04 LTS: after unpacking and running gwan, I pointed wrk at it (using an empty file, null.html):
wrk --timeout 10 -t 2 -c 100 -d20s http://127.0.0.1:8080/null.html
Running 20s test @ http://127.0.0.1:8080/null.html
2 threads and 100 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 11.65s 5.10s 13.89s 83.91%
Req/Sec 3.33k 3.65k 12.33k 75.19%
125067 requests in 20.01s, 32.08MB read
Socket errors: connect 0, read 37, write 0, timeout 49
Requests/sec: 6251.46
Transfer/sec: 1.60MB
...very poor performance; in fact there seems to be some kind of huge latency issue.
During the test gwan is 200% busy and wrk is 67% busy.
Pointing wrk at nginx instead, wrk is 200% busy and nginx is 45% busy:
wrk --timeout 10 -t 2 -c 100 -d20s http://127.0.0.1/null.html
Thread Stats Avg Stdev Max +/- Stdev
Latency 371.81us 134.05us 24.04ms 91.26%
Req/Sec 72.75k 7.38k 109.22k 68.21%
2740883 requests in 20.00s, 540.95MB read
Requests/sec: 137046.70
Transfer/sec: 27.05MB
Pointing weighttpd at nginx gives even faster results:
/usr/local/bin/weighttp -k -n 2000000 -c 500 -t 3 http://127.0.0.1/null.html
weighttp - a lightweight and simple webserver benchmarking tool
starting benchmark...
spawning thread #1: 167 concurrent requests, 666667 total requests
spawning thread #2: 167 concurrent requests, 666667 total requests
spawning thread #3: 166 concurrent requests, 666666 total requests
progress: 9% done
progress: 19% done
progress: 29% done
progress: 39% done
progress: 49% done
progress: 59% done
progress: 69% done
progress: 79% done
progress: 89% done
progress: 99% done
finished in 7 sec, 13 millisec and 293 microsec, 285172 req/s, 57633 kbyte/s
requests: 2000000 total, 2000000 started, 2000000 done, 2000000 succeeded, 0 failed, 0 errored
status codes: 2000000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 413901205 bytes total, 413901205 bytes http, 0 bytes data
The server is a virtual 8 core dedicated server (bare metal), under KVM
Where do I start looking to identify the problem gwan is having on this platform?
I have tested lighttpd, nginx and node.js on this same OS, and the results are all as one would expect. The server has been tuned in the usual way with expanded ephemeral ports, increased ulimits, adjusted time wait recycling etc.
Nov. 7 UPDATE: We have fixed the empty-file issue in G-WAN v4.11.7, and G-WAN is now twice as fast as Nginx (with the www cache disabled) at this game too.
Recent releases of G-WAN are faster than Nginx with both small and large files, and the G-WAN caches are disabled by default in order to make it easier for people to compare G-WAN with other servers like Nginx.
Nginx has a few caching features (an fd cache to skip stat() calls and a memcached-based module), but both are necessarily much slower than G-WAN's local cache.
Disabling caching was also desirable for certain applications like CDNs. Other applications, like AJAX applications, greatly benefit from G-WAN's caching capabilities, so caching can be re-enabled at will, even on a per-request basis.
Hope this clarifies this question.
"reproducing the performance claims"
First, the title is misleading, as the poorly documented* test above does not use the same tools nor fetch the same HTTP resources as the G-WAN tests.
[*] where is your nginx.conf file? what are the HTTP response headers of the two servers? what is your "bare metal" 8-Core CPU?
G-WAN tests are based on ab.c, a wrapper written by the G-WAN Team for weighttp (a test tool made by the Lighttpd server Team) because the information disclosed by ab.c is much more informative.
Second, the tested file "null.html" is... an empty file.
We won't waste time discussing the irrelevance of such a test (how many empty HTML files does your Web site serve?), but it is likely the reason for the observed "poor performance".
G-WAN was not created to serve empty files (we never tried it, nor were we ever asked to). But we will surely add this feature to avoid the confusion created by such a test.
As you want to "check the claims", I would encourage you to use weighttp (the fastest HTTP load tool in your test) with a 100.bin file (a 100-byte file with an uncompressible MIME type: no gzip will be involved here).
With a non-empty file, Nginx is massively slower than G-WAN, even in independent tests.
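A minimal sketch of such a run, assuming weighttp is installed and 100.bin is placed in the server's document root (the file name and size follow the suggestion above):
$ dd if=/dev/urandom of=100.bin bs=100 count=1    # 100 bytes of incompressible data
$ weighttp -k -n 300000 -c 300 -t 6 "http://127.0.0.1:8080/100.bin"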
We did not know about wrk until now, but it seems to be a tool made by the Nginx team:
"wrk was written specifically to try and push nginx to it's limits,
and in it's first round of tests was pushed up to 0.5Mr/s."
UPDATE (a day later)
Since you did not bother to publish any more data, we did it:
                      wrk                        weighttp
            -------------------------  -------------------------
Web Server   0.html RPS  100.html RPS   0.html RPS  100.html RPS
----------   ----------  ------------   ----------  ------------
G-WAN         80,783.03    649,367.11      175,515       717,813
Nginx        198,800.93    179,939.40      184,046       199,075
As in your test, we can see that wrk is slightly slower than weighttp.
We can also see that G-WAN is faster than Nginx with both HTTP load tools.
Here are the detailed results:
G-WAN
./wrk -c300 -d3 -t6 "http://127.0.0.1:8080/0.html"
Running 3s test @ http://127.0.0.1:8080/0.html
6 threads and 300 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 3.87ms 5.30ms 80.97ms 99.53%
Req/Sec 14.73k 1.60k 16.33k 94.67%
248455 requests in 3.08s, 55.68MB read
Socket errors: connect 0, read 248448, write 0, timeout 0
Requests/sec: 80783.03
Transfer/sec: 18.10MB
./wrk -c300 -d3 -t6 "http://127.0.0.1:8080/100.html"
Running 3s test @ http://127.0.0.1:8080/100.html
6 threads and 300 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 263.15us 381.82us 16.50ms 99.60%
Req/Sec 115.55k 14.38k 154.55k 82.70%
1946700 requests in 3.00s, 655.35MB read
Requests/sec: 649367.11
Transfer/sec: 218.61MB
weighttp -kn300000 -c300 -t6 "http://127.0.0.1:8080/0.html"
progress: 100% done
finished in 1 sec, 709 millisec and 252 microsec, 175515 req/s, 20159 kbyte/s
requests: 300000 total, 300000 started, 300000 done, 150147 succeeded, 149853 failed, 0 errored
status codes: 150147 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 35284545 bytes total, 35284545 bytes http, 0 bytes data
weighttp -kn300000 -c300 -t6 "http://127.0.0.1:8080/100.html"
progress: 100% done
finished in 0 sec, 417 millisec and 935 microsec, 717813 req/s, 247449 kbyte/s
requests: 300000 total, 300000 started, 300000 done, 300000 succeeded, 0 failed, 0 errored
status codes: 300000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 105900000 bytes total, 75900000 bytes http, 30000000 bytes data
Nginx
./wrk -c300 -d3 -t6 "http://127.0.0.1:8080/100.html"
Running 3s test @ http://127.0.0.1:8080/100.html
6 threads and 300 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 1.54ms 1.16ms 11.67ms 72.91%
Req/Sec 34.47k 6.02k 56.31k 70.65%
539743 requests in 3.00s, 180.42MB read
Requests/sec: 179939.40
Transfer/sec: 60.15MB
./wrk -c300 -d3 -t6 "http://127.0.0.1:8080/0.html"
Running 3s test @ http://127.0.0.1:8080/0.html
6 threads and 300 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 1.44ms 1.15ms 9.37ms 75.93%
Req/Sec 38.16k 8.57k 62.20k 69.98%
596070 requests in 3.00s, 140.69MB read
Requests/sec: 198800.93
Transfer/sec: 46.92MB
weighttp -kn300000 -c300 -t6 "http://127.0.0.1:8080/0.html"
progress: 100% done
finished in 1 sec, 630 millisec and 19 microsec, 184046 req/s, 44484 kbyte/s
requests: 300000 total, 300000 started, 300000 done, 300000 succeeded, 0 failed, 0 errored
status codes: 300000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 74250375 bytes total, 74250375 bytes http, 0 bytes data
weighttp -kn300000 -c300 -t6 "http://127.0.0.1:8080/100.html"
progress: 100% done
finished in 1 sec, 506 millisec and 968 microsec, 199075 req/s, 68140 kbyte/s
requests: 300000 total, 300000 started, 300000 done, 300000 succeeded, 0 failed, 0 errored
status codes: 300000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 105150400 bytes total, 75150400 bytes http, 30000000 bytes data
Nginx configuration file trying to match G-WAN's behavior
# ./configure --without-http_charset_module --without-http_ssi_module
# --without-http_userid_module --without-http_rewrite_module
# --without-http_limit_zone_module --without-http_limit_req_module
user www-data;
worker_processes 6;
worker_rlimit_nofile 500000;
pid /var/run/nginx.pid;
events {
    # tried other values up to 100000 without better results
    worker_connections 4096;
    # multi_accept on; seems to be slower
    multi_accept off;
    use epoll;
}
http {
    charset utf-8; # HTTP "Content-Type:" header
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 10;
    keepalive_requests 10; # 1000+ slows down nginx enormously...
    types_hash_max_size 2048;
    include /usr/local/nginx/conf/mime.types;
    default_type application/octet-stream;
    gzip off; # adjust for your tests
    gzip_min_length 500;
    gzip_vary on; # HTTP "Vary: Accept-Encoding" header
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    # cache metadata (file time, size, existence, etc.) to prevent syscalls;
    # this does not cache file contents. It should help in benchmarks where
    # a limited number of files is accessed more often than others (this is
    # our case as we serve one single file fetched repeatedly)
    # THIS IS ACTUALLY SLOWING DOWN THE TEST...
    #
    # open_file_cache max=1000 inactive=20s;
    # open_file_cache_errors on;
    # open_file_cache_min_uses 2;
    # open_file_cache_valid 300s;
    server {
        listen 127.0.0.1:8080;
        access_log off;
        # only log critical errors
        #error_log /usr/local/nginx/logs/error.log crit;
        error_log /dev/null crit;
        location / {
            root /usr/local/nginx/html;
            index index.html;
        }
        location = /nop.gif {
            empty_gif;
        }
        location /imgs {
            autoindex on;
        }
    }
}
Comments are welcome - especially from Nginx experts - to have a discussion based on this fully-documented test.

websocket - Maximum number of clients

I'm running a stress test on a websocket server to measure how many clients it can serve simultaneously and on what depends that number.
The server implementation I'm using is pywebsocket, the extension for apache server.
Apparently, this creates a new thread for every new client.
The problem is I can only go up to 378 clients, always the same number (and pretty low), and for the next one I receive the following trace:
[2013-08-22 07:47:09,454] [ERROR] __main__.WebSocketServer: Exception in processing request from: ('::ffff:10.36.154.147', 41509, 0, 0)
Traceback (most recent call last):
File "/usr/lib/python2.7/SocketServer.py", line 284, in _handle_request_noblock
self.process_request(request, client_address)
File "/usr/lib/python2.7/SocketServer.py", line 594, in process_request
t.start()
File "/usr/lib/python2.7/threading.py", line 495, in start
_start_new_thread(self.__bootstrap, ())
error: can't start new thread
I really don't know where this limit might come from; it seems too low to be the maximum number of threads per process, which I just set to unlimited, or the maximum number of processes for the user, which is also now set to unlimited.
I also checked the Apache configuration files, and this is what I have in apache2.conf, which should be enough:
MaxKeepAliveRequests 0
KeepAliveTimeout 5
<IfModule mpm_prefork_module>
StartServers 50
ServerLimit 2000
MinSpareServers 50
MaxSpareServers 2000
MaxClients 2000
MaxRequestsPerChild 2000
</IfModule>
<IfModule mpm_worker_module>
StartServers 50
ServerLimit 2000
MinSpareThreads 50
MaxSpareThreads 2000
ThreadLimit 0
ThreadsPerChild 2000
MaxClients 2000
MaxRequestsPerChild 2000
</IfModule>
<IfModule mpm_event_module>
StartServers 50
ServerLimit 2000
MinSpareThreads 50
MaxSpareThreads 2000
ThreadLimit 0
ThreadsPerChild 2000
MaxClients 2000
MaxRequestsPerChild 2000
</IfModule>
The server is an Amazon EC2 t1.micro instance running Ubuntu.
What else can be causing this limit?
Try reducing ulimit -s to a much lower value than unlimited/default for whatever piece of code will create many threads, and make sure /proc/sys/kernel/threads-max is not lower than six figures.
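A minimal sketch of what that could look like (illustrative values, not recommendations; run the ulimit in the shell that launches the server):
$ ulimit -s 1024                              # cap each new thread's stack at 1 MB
$ cat /proc/sys/kernel/threads-max            # should print a six-figure number or more
$ sudo sysctl -w kernel.threads-max=200000    # raise it if it is too low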

What is the maximum throughput of Loggly?

How many requests per second from a client can Loggly handle? I am only able to get around 10–20 requests processed per second and I am wondering if this is normal.
I just ran a bunch of tests and found that it can't really handle much via a tcp connection using syslog-ng.
Here are my test results for anyone wanting to try it.
I used Balabit's "loggen" program for this and sent 200-byte messages to the TCP port assigned to me by Loggly.
Note that although the syslog RFC (3164, at least) states that a log message should not exceed 1024 bytes, I used 200-byte packets just to be fair, and because many messages are that small.
Signed up for a free account.
Configured a TCP connection for testing.
Tried sending various amounts, results:
Test 1: FAIL
loggen -iS -r 6000 -s 200 -I 100 logs.loggly.com 16225
Send error Broken pipe, results may be skewed.
average rate = 1392.13 msg/sec, count=18296, time=13.142, (average) msg size=200, bandwidth=271.74 kB/sec
Test 2: FAIL
loggen -iS -r 4000 -s 200 -I 100 logs.loggly.com 16225
Send error Broken pipe, results may be skewed.
average rate = 2767.16 msg/sec, count=121146, time=43.779, (average) msg size=200, bandwidth=540.15 kB/sec
Test 3: FAIL
loggen -iS -r 2500 -s 200 -I 100 logs.loggly.com 16225
Send error Broken pipe, results may be skewed.
average rate = 1931.27 msg/sec, count=85878, time=44.467, (average) msg size=200, bandwidth=376.98 kB/sec
Test 4: FAIL
loggen -iS -r 2000 -s 200 -I 100 logs.loggly.com 16225
Send error Broken pipe, results may be skewed.
average rate = 1617.72 msg/sec, count=83134, time=51.389, (average) msg size=200, bandwidth=315.78 kB/sec
Test 5: FAIL
loggen -iS -r 1000 -s 200 -I 100 logs.loggly.com 16225
Send error Broken pipe, results may be skewed.
average rate = 936.50 msg/sec, count=63331, time=67.624, (average) msg size=200, bandwidth=182.81 kB/sec
Test 6: PASS for duration configured, FAIL for > 100 seconds - SEE TEST 7
loggen -iS -r 500 -s 200 -I 100 logs.loggly.com 16225
average rate = 325.00 msg/sec, count=32501, time=100.001, (average) msg size=200, bandwidth=63.44 kB/sec
Test 7: FAIL - Ran a new test @ 500 EPS for a longer period and the pipe broke after 255 seconds:
loggen -iS -r 500 -s 200 -I 10000 logs.loggly.com 16225
Send error Broken pipe, results may be skewed.
average rate = 323.35 msg/sec, count=82642, time=255.577, (average) msg size=200, bandwidth=63.12 kB/sec
Test 8: FAIL (ran for longer @ 200 EPS, but still failed)
loggen -iS -r 200 -s 200 -I 10000 logs.loggly.com 16225
Send error Broken pipe, results may be skewed.
average rate = 163.53 msg/sec, count=234090, time=1431.470, (average) msg size=200, bandwidth=31.92 kB/sec
Test 9: FAIL (again, ran longer but still failed)
loggen -iS -r 50 -s 200 -I 10000 logs.loggly.com 16225
Send error Broken pipe, results may be skewed.
average rate = 47.36 msg/sec, count=89325, time=1886.014, (average) msg size=200, bandwidth=9.25 kB/sec
Test 10: FAIL? (same results, but lost the connection again. Hard to believe they can’t handle 10 eps?)
loggen -iS -r 10 -s 200 -I 10000 logs.loggly.com 16225
Send error Broken pipe, results may be skewed.
average rate = 9.94 msg/sec, count=1568, time=157.770, (average) msg size=200, bandwidth=1.94 kB/sec
Did some web searching to see what loggly can actually do, but there’s only marketing material that says it is scalable, not how scalable it is.
I did find this:
http://twitter.com/jordansissel/status/5948244626509824
Which is only 22 events per second…
Full Disclosure: I am the founder of LogZilla, so I was testing out the competition because we are launching a cloud-based syslog solution.
My tests show that our software is able to handle anywhere from 2,000 to 12,000 events per second depending on which servers we're using in the cloud.
I really don't know, but I've been searching for a logging solution for node.js as well, without luck.
Why?
Because all of the ones I've checked (I didn't check them all) use synchronous disk writes, which awfully degrades performance.
So if you ask me, you should reconsider your needs and log only the stuff you really need.
I ran tests similar to the ones in Clayton's answer, as his results made me worried that Loggly would drop messages if I sent too many at the same time. I wanted to see if the problems Clayton encountered in 2012 still existed today.
That said, here is what I found running loggen for 60 seconds at a target rate of 100,000 messages a second.
$ loggen -iS -r 100000 -s 200 -I 60 logs-01.loggly.com port
average rate = 34885.98 msg/sec, count=2093163, time=60.000, (average) msg size=200, bandwidth=6809.74 kB/sec
I was also curious what some competitors would return for similar tests and I found the following:
Papertrail
loggen -iS -D -r 100000 -s 200 -I 60 logs2.papertrailapp.com PORT
average rate = 24344.71 msg/sec, count=1461327, time=60.026, (average) msg size=200, bandwidth=4752.09 kB/sec
Logentries
$ loggen -iS -D -r 100000 -s 200 -I 60 api.logentries.com PORT
average rate = 14076.76 msg/sec, count=844609, time=60.000, (average) msg size=200, bandwidth=2747.78 kB/sec
Obviously these are not hard numbers that will always be the same as systems change over time. This just gives us a point in time reference of how they responded when I ran the tests. Your mileage will vary!
Update: I ran a longer (nearly 3 hour) test against Loggly and received the following:
loggen -iS -r 100000 -s 200 -I 10000 logs-01.loggly.com port
average rate = 15869.22 msg/sec, count=158692177, time=10000.000, (average) msg size=200, bandwidth=3097.67 kB/sec
