Is it possible to make thousands of WebSocket connections to a server, for example to do a load test? So far the most I have been able to open is 4,000 using OkHttp before I get java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached.
The answer is: don't use OkHttp for this, ha. I switched to the Vert.x HTTP client and can open as many connections as I want; it doesn't even get my fan spinning.
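For illustration, a minimal Vert.x sketch along those lines might look like the following (it assumes Vert.x 4.x and a test server on localhost:8080 exposing /ws; the host, port, path and connection count are placeholders):

import io.vertx.core.Vertx;
import io.vertx.core.http.HttpClient;
import io.vertx.core.http.HttpClientOptions;

// Open many WebSocket connections from a single event-loop based client.
public class WsLoadTest {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        int connections = 10_000; // adjust to your target load
        HttpClient client = vertx.createHttpClient(
                new HttpClientOptions().setMaxWebSockets(connections)); // raise the per-host WebSocket limit

        for (int i = 0; i < connections; i++) {
            client.webSocket(8080, "localhost", "/ws")
                .onSuccess(ws -> ws.textMessageHandler(msg -> { /* count or record messages */ }))
                .onFailure(err -> System.err.println("connect failed: " + err.getMessage()));
        }
    }
}

Because Vert.x multiplexes all connections over a small number of event-loop threads rather than using a thread per connection, it sidesteps the native-thread exhaustion seen in the question.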
Related
I am struggling to find information on how to gauge the scalability of WebSockets. A scenario:
Let's say a client wants to establish a socket connection from a browser, and the client application and the service layer (Micronaut) both have two instances behind an ELB. The service layer sits in the us-east region, anyone around the world can access the frontend app from a browser, and an open connection lasts 2-5 minutes on average, never longer than 30 minutes.
Is there a ballpark number for how many concurrent WebSocket connections a couple of servers can handle? Or are there factors I didn't mention that are vital to handling WebSocket connections in general?
Thank you in advance.
I'm assuming you want to know about the scalability of the WebSocket implementation in Micronaut and not WebSockets in general. Of course, scalability depends on both the specific implementation and the protocol itself; you probably already know this, but I wanted to state it for the record. You may also want to make sure you raise the file-descriptor limit for your server process to the maximum (you may have to adjust kernel settings to increase the number of FDs).
By the way, don't forget to handle retries and reconnects as you would for a low-level TCP connection.
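A minimal reconnect-with-exponential-backoff sketch on the client side could look like this (OkHttp is used purely as an illustration; the URL and backoff numbers are placeholders, not anything Micronaut prescribes):

import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;
import okhttp3.WebSocket;
import okhttp3.WebSocketListener;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ReconnectingSocket extends WebSocketListener {

    private final OkHttpClient client = new OkHttpClient();
    private final Request request = new Request.Builder().url("wss://example.com/ws").build();
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private int attempt = 0;

    void connect() {
        client.newWebSocket(request, this);
    }

    @Override
    public void onOpen(WebSocket webSocket, Response response) {
        attempt = 0; // connected again, reset the backoff
    }

    @Override
    public void onFailure(WebSocket webSocket, Throwable t, Response response) {
        // back off 1, 2, 4, ... seconds, capped at 30
        long delaySeconds = Math.min(30, 1L << Math.min(attempt++, 5));
        scheduler.schedule(this::connect, delaySeconds, TimeUnit.SECONDS);
    }
}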
I am using OkHttp3 to make a websocket connection to a third-party server to receive data. I want to track the latency of the websocket connection.
This solution seems like the way to go, but with ping/pong not being exposed in OkHttp3, how do I figure out the latency?
This is something that OkHttp should consider exposing to clients, building on the existing ping events.
Unless and until that becomes possible, your best bet is probably to implement application-level pings to the backend to track this.
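A rough sketch of such an application-level ping with OkHttp might look like this (it assumes the backend echoes a "pong:<timestamp>" frame for every "ping:<timestamp>" frame it receives; that small protocol is an assumption you would have to implement on the server, not something OkHttp provides):

import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;
import okhttp3.WebSocket;
import okhttp3.WebSocketListener;

public class LatencyTrackingListener extends WebSocketListener {

    @Override
    public void onOpen(WebSocket webSocket, Response response) {
        webSocket.send("ping:" + System.nanoTime());
    }

    @Override
    public void onMessage(WebSocket webSocket, String text) {
        if (text.startsWith("pong:")) {
            long sentAt = Long.parseLong(text.substring("pong:".length()));
            long latencyMs = (System.nanoTime() - sentAt) / 1_000_000;
            System.out.println("round-trip latency: " + latencyMs + " ms");
            // schedule the next ping from here if you want continuous sampling
        }
    }

    public static void main(String[] args) {
        OkHttpClient client = new OkHttpClient();
        Request request = new Request.Builder().url("wss://example.com/feed").build();
        client.newWebSocket(request, new LatencyTrackingListener());
    }
}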
Just starting to use Spring WebFlux WebClient, and I wanted to know what the default keep-alive time for the HTTP connection is. Is there a way to increase the keep-alive time? In our REST service we get a request roughly every five minutes, and the request takes a long time to process, anywhere between 500 ms and 10 seconds. However, in a load test, if I send frequent requests the processing time is less than 250 ms.
Spring WebFlux WebClient is an HTTP client API that wraps actual HTTP libraries, so things like connection management and timeouts are configured at the library level directly, and behavior may change depending on the chosen library.
The default library with WebClient is Reactor Netty.
Many HTTP clients (and this is the case with Reactor Netty) are maintaining HTTP connections in a connection pool to reuse them. Clients usually acquire a new connection to a remote host, use it to send/receive information and then put it back in the connection pool. This is very useful since sometimes acquiring a new connection can be costly. This seems to be really costly in your case.
HTTP clients leave those unused connections in the pool, but what about keepAlive time?
Most clients leave those connections in the pool as long as possible and test them before acquiring them to see if they're still valid or listen to server events asynchronously to remove them from the pool (I believe Reactor Netty does that). So ultimately, the server is in control and decides when to close connections if they're inactive.
Now your problem description might suggest that connecting to that remote host is very costly, but it could be also the remote host taking a long time to respond to your requests (for example, it might be operating on an empty cache and needs to calculate a lot of things).
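If you do want to control how long idle connections stay in the client-side pool, Reactor Netty exposes that on the ConnectionProvider. A rough sketch (the pool name, sizes and durations are placeholders, not recommendations):

import org.springframework.http.client.reactive.ReactorClientHttpConnector;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.netty.http.client.HttpClient;
import reactor.netty.resources.ConnectionProvider;
import java.time.Duration;

public class WebClientConfig {

    public static WebClient buildClient() {
        ConnectionProvider provider = ConnectionProvider.builder("custom-pool")
                .maxConnections(50)                   // pool size
                .maxIdleTime(Duration.ofSeconds(60))  // evict connections idle longer than this
                .maxLifeTime(Duration.ofMinutes(5))   // hard cap on connection age
                .build();

        return WebClient.builder()
                .clientConnector(new ReactorClientHttpConnector(HttpClient.create(provider)))
                .build();
    }
}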
I'm experimenting with server-sent events (SSE) as an alternative to websockets for real-time data pushing (data in my application is primarily one-directional).
How scalable would this be? I know that each SSE connection uses an HTTP request -- does this mean that a web server can handle as many SSE connections as HTTP requests (something like this answer)? I feel as though this might be the case, but I'm not sure how an SSE connection works and whether it is substantially more complex or resource-hungry than a simple HTTP request.
I'm mostly wondering how this compares to the number of concurrent websockets a browser can keep open. This answer suggests that only ~1400-1800 sockets can be handled by a server at the same time.
Can someone provide some insight on this?
(To clarify, I am not asking about how many SSE connections can be kept open from the client; I am asking about how many can be reasonably kept open by a web server.)
Take Tomcat 8 and above (as an example web server), which uses the NIO connector for handling incoming requests. It can service a maximum of 10,000 concurrent connections (see the docs). The docs do not say anything about a maximum number of connections per se. They also provide another parameter called acceptCount, which is the fallback queue used when connections exceed 10,000.
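For reference, those limits live on the connector in Tomcat's server.xml; an NIO connector with the values above spelled out explicitly might look like this (illustrative only):

<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxConnections="10000"
           acceptCount="100"
           maxThreads="200"
           connectionTimeout="20000" />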
Socket connections are treated as files. Every incoming connection to Tomcat opens a socket, and how many you can open depends on the OS; on Linux it depends on the file-descriptor limits. A common error when too many connections are open, or the maximum has been reached, is the following:
java.net.SocketException: Too many open files
You can change the number of open files by editing
/etc/security/limits.conf
It is not clear what the maximum allowed limit is. Some say the default for Tomcat is 1096, but the default for Linux is 30,000, and it can be changed.
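For example, raising the open-file limit for the user that runs your web server could look like this in /etc/security/limits.conf (the user name and value are placeholders; a re-login or service restart is needed for it to take effect):

tomcat  soft  nofile  65535
tomcat  hard  nofile  65535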
In the article I have shared, the LinkedIn team was able to reach 250K connections on one host.
So that should give you a pretty good idea of the maximum number of SSE connections possible; it depends on your web server's max-connection configuration, OS capacity, etc.
I'm looking into getting an Openfire server started and setting up a Strophe.js client to connect to it. My concern is that using http-bind might be costly in terms of performance versus making a direct XMPP connection.
Can anyone tell me whether my concern is relevant or not? And if so, to what extent?
The alternative would be to use a flash proxy for all communication with OpenFire.
Thank you
BOSH is more verbose than normal XMPP, especially when idle. An idle BOSH connection might be about 2 HTTP requests per minute, while a normal connection can sit idle for hours or even days without sending a single packet (in theory, in practice you'll have pings and keepalives to combat NATs and broken firewalls).
But, the only real way to know is to benchmark. Depending on your use case, and what your clients are (will be) doing, the difference might be negligible, or not.
Basics:
Socket - zero overhead.
HTTP - requests even on an idle session.
I doubt that you will have 1M users at once, but if you are aiming for that, then a connection-less protocol like HTTP will be much better, as I'm not sure any OS can support that volume of connected sockets.
Also, you can tie your Openfire servers together into a farm, and you'll have nice scalability there.
We used Openfire and BOSH with about 400 concurrent users in the same MUC channel.
What we noticed is that Openfire leaks memory. We had about 1.5-2 GB of memory in use and got constant out-of-memory exceptions.
Also, the BOSH implementation in Openfire is pretty bad. We then switched to Punjab, which was better but couldn't solve the Openfire issue.
We're now using ejabberd with its built-in http-bind implementation, and it scales pretty well. Load on the server running ejabberd is nearly zero.
At the moment we face the problem that our five web servers, which we use to handle the chat load, are sometimes overloaded at about 200 connected users.
I'm trying to use websockets now but it seems that it doesn't work yet.
Maybe redirecting the http-bind traffic directly at a load balancer/proxy, instead of via an Apache rewrite rule, would solve the issue, but I couldn't find a way to do this atm.
Hope this helps.
I ended up using node.js and http://code.google.com/p/node-xmpp-bosh as I faced some difficulties connecting directly to Openfire via BOSH.
I have a production site running with node.js configured to proxy all BOSH requests, and it works like a charm (around 50 concurrent users). The only downside so far: in the Openfire admin console you will not see the actual IP address of the connected clients; only the local server address will show up, since Openfire gets the connection from the node.js server.