I saw this question about WebSocket performance. Its conclusion was:
On today's systems, handling 1 million concurrent TCP connections is not an issue.
We had to demonstrate several times, to some of our customers, that 1 million connections can be reached on a single box (and not necessarily a super-monster machine).
With at least 30 GiB RAM you can handle 1 million concurrent sockets. The CPU needed depends on the data throughput you need.
I need to build a service that can connect to multiple peers at large scale. The traffic should be very minimal, mostly passing small messages between the server and the client in real time. Some connections may stay idle for a long time.
I wonder which protocol will give me better performance with fewer resources under those circumstances. I need a protocol that has real-time capabilities but is also supported in web browsers, so I ended up with WebSockets and WebRTC (with WebRTC, the server would establish a WebRTC DataChannel to each peer via some signaling service).
What's the performance of WebRTC at large scale compared to TCP sockets?
Can it handle a large number of connections with fewer resources than TCP sockets?
Related
As HTTP/2 supports multiplexing, do we still need a pool of connections for microservice communication?
If yes, what are the benefits of having such a pool?
Example:
Service A => Service B
Both the above services have only one instance available.
Would multiple connections help overcome the OS buffer size limitation for each connection (socket)? What else?
Yes, you still need a connection pool in a client contacting a microservice.
First, in general it's the server that controls the amount of multiplexing. A particular microservice server may decide to allow only a very small amount of multiplexing.
If a client wants to use that microservice at a higher load, it needs to be prepared to open multiple connections, and this is where the connection pool comes in handy.
This is also useful to handle load spikes.
Second, HTTP/2 has flow control, and that may severely limit the data throughput on a single connection. If the flow control windows are small (the default defined by the HTTP/2 specification is 65535 bytes, which is typically very small for microservices), then client and server will spend a considerable amount of time exchanging WINDOW_UPDATE frames to enlarge the flow control windows, and this is detrimental to throughput.
To overcome this, you either need more connections (and again a client should be prepared for that), or you need larger flow control windows.
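As an illustration of the second option, here is a minimal sketch assuming Node.js's built-in http2 module (the window size and stream limit below are arbitrary example values, not recommendations):

```typescript
import * as http2 from "http2";

// Sketch: raise the per-stream flow control window from the 65535-byte
// default so large payloads need far fewer WINDOW_UPDATE round trips.
const server = http2.createServer({
  settings: {
    initialWindowSize: 1024 * 1024, // 1 MiB per-stream window
    maxConcurrentStreams: 128,      // how much multiplexing this server allows
  },
});

server.on("stream", (stream) => {
  stream.respond({ ":status": 200, "content-type": "text/plain" });
  stream.end("ok");
});

server.listen(8080);
```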
Third, in case of large HTTP/2 flow control windows, you may hit TCP congestion (and this is different from socket buffer size) because the consumer is slower than the producer. It may be a slow server for a client upload (REST request with a large payload), or a slow client for a server download (REST response with a large payload).
Again to overcome TCP congestion the solution is to open multiple connections.
Comparing HTTP/1.1 with HTTP/2 for the microservice use case, it's typical that the HTTP/1.1 connection pools are way larger (e.g. 10x-50x) than HTTP/2 connection pools, but you still want connection pools in HTTP/2 for the reasons above.
[Disclaimer: I'm the HTTP/2 implementer in Jetty.]
We had an initial implementation where the Jetty HttpClient used the HTTP/2 transport with a hardcoded single connection per domain, because that's what HTTP/2 preached for browsers.
When exposed to real world use cases - especially microservices - we quickly realized how bad of an idea that was, and switched back to use connection pooling for HTTP/2 (like HttpClient always did for HTTP/1.1).
I'm experimenting with server-sent events (SSE) as an alternative to websockets for real-time data pushing (data in my application is primarily one-directional).
How scalable would this be? I know that each SSE connection uses an HTTP request -- does this mean that a web server can handle as many SSE connections as HTTP requests (something like this answer)? I feel as though this might be the case, but I'm not sure how an SSE connection works and whether it is substantially more complex/resource-hungry than a simple HTTP request.
I'm mostly wondering how this compares to the number of concurrent websockets a browser can keep open. This answer suggests that only ~1400-1800 sockets can be handled by a server at the same time.
Can someone provide some insight on this?
(To clarify, I am not asking about how many SSE connections can be kept open from the client; I am asking about how many can be reasonably kept open by a web server.)
Take Tomcat 8 and above as an example web server; it uses the NIO connector for handling incoming requests and can service a maximum of 10,000 concurrent connections (docs). The documentation does not state an absolute maximum number of connections per se. It also provides another parameter called acceptCount, which is the fallback queue used once connections exceed 10,000.
Socket connections are treated as files. Every incoming connection to Tomcat opens a socket, and the limit depends on the OS; on Linux, for example, it depends on the file-descriptor policy. When too many connections are open or the maximum has been reached, you will see a common error like the following:
java.net.SocketException: Too many open files
You can change the number of open files by editing
/etc/security/limits.conf
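For example, raising the per-user open-file limit could look like this (the user name and values are just placeholders):

```
# /etc/security/limits.conf
appuser  soft  nofile  1000000
appuser  hard  nofile  1000000
```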
It is not clear what the maximum allowed limit is. Some say the default for Tomcat is 1096, while the default on Linux is 30,000, and it can be changed.
In the article I shared, the LinkedIn team was able to reach 250K connections on one host.
That should give you a pretty good idea of the maximum number of SSE connections possible; it depends on your web server's max-connection configuration, OS capacity, etc.
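To make it concrete, an SSE connection is just an ordinary HTTP response that the server never finishes and keeps writing to, so the per-connection cost is essentially one held-open socket plus a small buffer. A minimal sketch, assuming Node.js and purely for illustration:

```typescript
import * as http from "http";

// Each SSE client is just a long-lived HTTP response the server holds open,
// i.e. one socket / file descriptor per connected client.
const clients = new Set<http.ServerResponse>();

http
  .createServer((req, res) => {
    res.writeHead(200, {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
      Connection: "keep-alive",
    });
    clients.add(res);
    req.on("close", () => clients.delete(res));
  })
  .listen(8080);

// Pushing an event costs one small write per open connection.
setInterval(() => {
  for (const res of clients) res.write(`data: ${Date.now()}\n\n`);
}, 1000);
```

On the browser side, new EventSource(url) opens such a connection and fires a message event each time a data: line arrives.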
When designing a client/server architecture, is there any advantage to multiplexing multiple WebSocket sessions from the same process onto one connection to the server (i.e. sharing one connection) vs. opening one WebSocket connection per thread/session in the client (as is typically done when connecting to memcached or database servers)?
I'm aware of the overhead associated with each connection (e.g. RAM ...), but I expect to have at most 1K-10K connections on each client side.
Specific use case:
Let's assume I have a remote server with multiple sessions on one side, and on the other side I have multiple clients; each client will connect to a different session through the WebSocket server.
On the remote server, there are two ways to implement it: (1) each session creates its own WebSocket connection, or (2) all sessions share the same WebSocket connection.
From a connection point of view, I like the sharing solution (one WebSocket connection for all sessions), because the WebSocket server is limited by the number of connections available (saving servers / easier scaling).
But from a traffic/data speed/performance point of view, if sessions send lots of small packets through the connection, then with one shared connection we may not be able to utilize the bandwidth well (payload framing, collecting several small packets into one, or splitting a big packet into small ones), because we may have to send different packets to different clients from different sessions. In that case we cannot batch small packets together, since they have different destinations and come from different sources, unless we create a "virtual connection" layer that manages each session's data to maximize speed, which would add a lot of implementation complexity.
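If it helps, here is a rough sketch of what such a "virtual connection" layer could look like: every message is wrapped in an envelope carrying a session id, so many logical sessions share one physical WebSocket (the names and framing are made up for illustration):

```typescript
// Hypothetical envelope: every frame carries the session it belongs to.
interface Envelope {
  sessionId: string;
  payload: unknown;
}

class SharedSocket {
  private handlers = new Map<string, (payload: unknown) => void>();

  constructor(private ws: WebSocket) {
    // Demultiplex incoming frames to the session they belong to.
    ws.onmessage = (ev) => {
      const msg: Envelope = JSON.parse(ev.data);
      this.handlers.get(msg.sessionId)?.(msg.payload);
    };
  }

  // Each logical session registers itself and sends through the shared socket.
  openSession(sessionId: string, onMessage: (payload: unknown) => void) {
    this.handlers.set(sessionId, onMessage);
    return {
      send: (payload: unknown) =>
        this.ws.send(JSON.stringify({ sessionId, payload } as Envelope)),
      close: () => this.handlers.delete(sessionId),
    };
  }
}
```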
Any other opinions?
Thanks,
JB.
I think you should consider using a limited connection pool, like they do with database connection architectures.
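As a rough sketch of that idea (not a library API, just an illustration): keep a small fixed set of WebSocket connections and spread new sessions across them round-robin:

```typescript
// Sketch of a fixed-size WebSocket pool; sessions are spread across it
// round-robin instead of opening one connection per session.
class WebSocketPool {
  private sockets: WebSocket[];
  private next = 0;

  constructor(url: string, size = 4) {
    this.sockets = Array.from({ length: size }, () => new WebSocket(url));
  }

  // Pick the next connection for a new session.
  acquire(): WebSocket {
    const ws = this.sockets[this.next];
    this.next = (this.next + 1) % this.sockets.length;
    return ws;
  }
}
```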
Another solution I would consider is a Pub/Sub database middleman such as Redis. This allows you to use existing solutions as well as easier scalability.
To the best of my understanding, both having a single connection and using a multitude of connections have their issues.
For example, one connection can send only one message at a time. A big enough message could block the connection... are you moving big data?
Many connections can cause an overhead that could be very expensive as well as introduce more chances for errors. Consider the following:
Creating new connections is very expensive, uses bandwidth, suffers from longer network delays and requires local resources and this is exactly what websockets allows us to avoid.
You will run into scalability issues. For instance, Heroku limits websocket connections to 600 per server, or at least they did so a short while back (and I think it's reasonable)... How will you connect all the servers together to one data-store?
Remember that every OS has an open-file limit and that WebSockets use the IO architecture (each one is an 'open file'), so WebSockets are a limited resource.
Regarding traffic/data speed/performance, it is a question of server architecture... but I believe you will actually see a slight speed increase by using one connection (or a small pool of connections). It's important to remember that there isn't any effective multi-tasking when you need to send TCP/IP packets.
Also, with a limited number of connections (even with one connection), you will be able to benefit from the OS's packet joining feature that will allow you to send a number of websocket frames over one TCP/IP packet (unless you constantly flush the TCP/IP socket). You will actually waste more bandwidth with more connections - even disregarding the bandwidth used to open each new connection.
Just my 5 cents, we will all think differently, I'm sure.
Good Luck!
If a bunch of "Slow HTTP" connection to a server can consume so much resources so as to cause a denial of service, why wouldn't a bunch of web sockets to a server cause the same problem?
The accepted answer to a different SO question says that it is almost free to maintain an idle connection.
If it costs nothing to maintain an open TCP connection, why does a "Slow HTTP" cause denial of service?
A WebSocket and a "slow" HTTP connection both use an open connection. The difference is in expectations of the server design.
Typical HTTP servers do not need to handle a large number of open connections and are designed around the assumption that the number of open connections is small. If the server does not protect against slow clients, then an attacker can force a server designed around this assumption to hit a resource limit.
Here are a couple of examples showing how the different expectations can impact the design:
If you only have a few HTTP requests in flight at a time, then it's OK to use a thread per connection. This is not a good design for a WebSocket server.
The default file descriptor limits are often adequate for typical HTTP scenarios, but not for a large number of connections.
It is possible to design an HTTP server to handle a large number of open connections and several servers do so out of the box.
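As a concrete illustration of "protecting against slow clients", here is a sketch assuming Node.js's built-in http module (the timeout values are arbitrary examples):

```typescript
import * as http from "http";

const server = http.createServer((req, res) => {
  res.end("hello");
});

// A slow-HTTP attacker dribbles a request in byte by byte to pin a
// connection open; bounding how long the headers and the whole request
// may take frees the connection for other clients.
server.headersTimeout = 10_000; // ms allowed to receive the full headers
server.requestTimeout = 30_000; // ms allowed for the entire request

server.listen(8080);
```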
HTML5 WebSockets currently use a form of TCP communication. However, for real-time games, TCP just won't cut it (and is a great reason to use some other platform, like native). As I probably need UDP to continue a project, I'd like to know if the specs for HTML6 or whatever will support UDP?
Also, are there any reliable benchmarks for WebSockets that would compare the WS protocol to a low-level, direct socket protocol?
On a LAN, you can get round-trip times for messages over WebSocket of 200 microseconds (from browser JS to WebSocket server and back), which is similar to raw ICMP pings. On a MAN it's around 10 ms, on a WAN (over residential ADSL to a server in the same country) around 30 ms, and so on up to around 120-200 ms via 3.5G. The point is: WebSocket adds virtually no latency to the one you will get anyway, based on the network.
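If you want to reproduce such numbers yourself, a rough browser-side measurement against an echo endpoint could look like this (the URL is a placeholder, and the server is assumed to echo every message back unchanged):

```typescript
const ws = new WebSocket("wss://example.com/echo");

ws.onopen = () => {
  const t0 = performance.now();
  ws.onmessage = () => {
    // One round trip: browser -> server -> browser.
    console.log(`round trip: ${(performance.now() - t0).toFixed(2)} ms`);
  };
  ws.send("ping");
};
```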
The wire level overhead of WebSocket (compared to raw TCP) is between 2 octets (unmasked payload of length < 126 octets) and 14 octets (masked payload of length > 64k) per message (the former numbers assume the message is not fragmented into multiple WebSocket frames). Very low.
For a more detailed analysis of WebSocket wire-level overhead, please see this blog post - this includes analysis covering layers beyond WebSocket also.
More so: with a WebSocket implementation capable of streaming processing, you can (after the initial WebSocket handshake) start a single WebSocket message and frame in each direction and then send up to 2^63 octets with no overhead at all. Essentially this renders WebSocket a fancy prelude for raw TCP. Caveat: intermediaries may fragment the traffic at their own discretion. However, if you run WSS (that is, secure WS = TLS), no intermediaries can interfere, and there you are: raw TCP, with an HTTP-compatible prelude (the WS handshake).
WebRTC uses RTP (which is UDP based) for media transport but needs a signaling channel in addition (which can be WebSocket, for example). RTP is optimized for loss-tolerant real-time media transport. "Real-time games" often means transferring not media, but things like player positions. WebSocket will work for that.
Note: WebRTC transport can be over RTP or secured when over SRTP. See "RTP profiles" here.
I would recommend developing your game using WebSockets on a local wired network and then moving to the WebRTC Data Channel API once it is available. As @oberstet correctly notes, WebSocket average latencies are basically equivalent to raw TCP or UDP, especially on a local network, so it should be fine for your development phase. The WebRTC Data Channel API is designed to be very similar to WebSockets (once the connection is established), so it should be fairly simple to integrate once it is widely available.
Your question implies that UDP is probably what you want for a low latency game and there is truth to that. You may be aware of this already since you are writing a game, but for those that aren't, here is a quick primer on TCP vs UDP for real-time games:
TCP is an in-order, reliable transport mechanism and UDP is best-effort. TCP will deliver all the data that is sent, in the order that it was sent. UDP packets are delivered as they arrive, may be out of order, and may have gaps (on a congested network, UDP packets are dropped before TCP packets). TCP sounds like a big improvement, and it is for most types of network traffic, but those features come at a cost: a delayed or dropped packet causes all the following packets to be delayed as well (to guarantee in-order delivery).
Real-time games generally can't tolerate the kind of delays that can result from TCP sockets, so they use UDP for most of the game traffic and have mechanisms to deal with dropped and out-of-order data (e.g. adding sequence numbers to the payload data). It's not such a big deal if you miss one position update of the enemy player, because a couple of milliseconds later you will receive another one (and probably won't even notice). But if you don't get position updates for 500 ms and then suddenly get them all at once, that results in terrible gameplay.
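For example, the sequence-number mechanism mentioned above can be as simple as discarding any update older than the last one applied (the message shape and helper below are made up for illustration):

```typescript
// Hypothetical message shape: each position update carries a sequence number.
interface PositionUpdate {
  seq: number;
  x: number;
  y: number;
}

// Assume applyPosition is defined elsewhere in the game code.
declare function applyPosition(x: number, y: number): void;

let lastSeq = -1;

function onUpdate(update: PositionUpdate) {
  // A stale or duplicate packet arrived late / out of order: drop it,
  // because a newer position has already been applied.
  if (update.seq <= lastSeq) return;
  lastSeq = update.seq;
  applyPosition(update.x, update.y);
}
```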
All that said, on a local wired network, packets are almost never delayed or dropped and so TCP is perfectly fine as an initial development target. Once the WebRTC Data Channel API is available then you might consider moving to that. The current proposal has configurable reliability based on retries or timers.
Here are some references:
WebRTC Introduction
WebRTC FAQ
WebRTC Data Channel Proposal
To make a long story short, if you want to use TCP for multiplayer games, you need to use what we call adaptive streaming techniques. In other words, you need to make sure that the amount of real-time data sent to synchronize the game world among the clients is governed by the currently available bandwidth and latency for each client.
Dynamic throttling, conflation, delta delivery, and other mechanisms are adaptive streaming techniques, which don't magically make TCP as efficient as UDP, but make it usable enough for several types of games.
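As a minimal sketch of conflation plus throttling (illustrative only, not the article's actual code): keep only the latest state per entity and flush it at a fixed rate tuned to what each client can absorb:

```typescript
// Conflation: only the most recent update per entity is kept, so a slow
// client never accumulates a backlog of stale intermediate states.
const pending = new Map<string, unknown>();

function queueUpdate(entityId: string, state: unknown) {
  pending.set(entityId, state); // overwrite any older, unsent state
}

// Throttling: flush at a fixed rate; the interval is the knob you turn
// when a client's bandwidth or latency degrades.
function startFlushing(
  send: (id: string, state: unknown) => void,
  intervalMs = 50,
) {
  setInterval(() => {
    for (const [id, state] of pending) send(id, state);
    pending.clear();
  }, intervalMs);
}
```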
I tried to explain these techniques in an article: Optimizing Multiplayer 3D Game Synchronization Over the Web (http://blog.lightstreamer.com/2013/10/optimizing-multiplayer-3d-game.html).
I also gave a talk on this topic last month at HTML5 Developer Conference in San Francisco. The video has just been made available on YouTube: http://www.youtube.com/watch?v=cSEx3mhsoHg
There's no UDP support for WebSockets (there really should be); however, you can apparently use WebRTC's RTCDataChannel API for UDP-like communication. There's a good article here:
http://www.html5rocks.com/en/tutorials/webrtc/datachannels/
RTCDataChannel actually uses SCTP which has configurable reliability and ordered delivery. You can get it to act like UDP by telling it to deliver messages unordered, and setting the maximum number of retransmits to 0.
I haven't tried any of this though.
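For reference, the configuration described above looks roughly like this with the browser API (the channel label and handler are just an example):

```typescript
const pc = new RTCPeerConnection();

// Unordered + zero retransmits makes the SCTP channel behave UDP-like:
// late or lost messages are simply dropped instead of delaying newer ones.
const channel = pc.createDataChannel("game", {
  ordered: false,
  maxRetransmits: 0,
});

channel.onmessage = (ev) => console.log("got", ev.data);
```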
I'd like to know if the specs for HTML6 or whatever will support UDP?
WebSockets won't. One of the benefits of WebSockets is that it piggybacks the existing HTTP connection. This means that to proxies and firewalls WebSockets looks like HTTP so they don't get blocked.
It's likely that arbitrary UDP connections will never be part of any web specification because of security concerns. The closest thing to what you're after will likely come as part of WebRTC and its associated JSEP protocol.
are there any reliable benchmarks ... that .. compare the WS protocol to a low-level, direct socket protocol?
Not that I'm aware of. I'm going to go out on a limb and predict WebSockets will be slower ;)