How to scale WebSocket connections when a server goes down - websocket

Is it possible to store the WebSocket connection and use that to distribute the load?
For example - similar to what we have in AWS ALB to distribute the traffic.


How does AWS Application Load balancer select a target within a target group? How to load balance the websocket traffic?

I have an AWS Application load balancer to distribute the http(s) traffic.
Problem 1:
Suppose I have a target group with 2 EC2 instances: micro and xlarge. Obviously they can handle different traffic levels. Does the load balancer manage traffic proportionally to instance sizes or just round robin? If only round robin is used and no other factors taken into account, then it's not really balancing load, because at some point the micro instance will be suffering from the traffic, while xlarge will starve.
Problem 2:
Suppose I have a target group with 2 EC2 instances, both the same size. But my service is not using a classic http request/response flow. It is using HTTP websockets, i.e. a client makes an HTTP request just once, to establish a socket, and then keeps the socket open for a long time, sending and receiving messages (e.g. a chat service). Let's suppose my load balancer is using round robin and both EC2 instances have 1000 clients connected each. Now suppose one of the EC2 instances goes down and its 1000 connected clients drop their socket connections. The instance gets back up quickly and is ready to accept websocket calls again. The 1000 clients who dropped are trying to reconnect. Now, if the load balancer uses pure round robin, I'll end up with 1500 clients connected to instance #1 and 500 clients connected to instance #2, thus not really balancing the load correctly.
Basically, I'm trying to find out if some more advanced logic is being used to select a target in a group, or is it just a naive round robin selection. If it's round robin only, then how can I really balance the websocket connections load?
Websockets start out as http or https connections, so a load balancer can dispatch them to a server. Once the server accepts the http connection, both the server and the client "upgrade" the connection to use the websocket protocol. They then leave the connection open to use for websocket traffic. As far as the load balancer can tell, the connection is simply a long-lasting http connection.
Taking a server down when it has websocket connections to clients requires your application to retry lost connections. Reconnecting on connection failure is one of the trickiest parts of websocket client programming. Your application cannot be robust without reconnect logic.
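For what it's worth, a minimal sketch of such reconnect logic on the browser side, using plain WebSockets with exponential backoff (the URL and delay values are placeholders, not anything from the setup above):
function connect(url, attempt) {
  var ws = new WebSocket(url);
  ws.onopen = function () {
    attempt = 0; // reset the backoff once we are connected again
  };
  ws.onmessage = function (event) {
    // handle incoming messages here
  };
  ws.onclose = function () {
    // wait 1s, 2s, 4s, ... capped at 30s, then try again
    var delay = Math.min(30000, 1000 * Math.pow(2, attempt));
    setTimeout(function () { connect(url, attempt + 1); }, delay);
  };
}
connect('wss://www.example.com/socket', 0);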
AWS's load balancer has no built-in knowledge of the capabilities of the servers behind it. You have observed that it sends requests equally to big and small servers. That can overwhelm the small ones.
I have managed this by building a /healthcheck endpoint in my servers. It's a straightforward https://example.com/healthcheck web page. You can put a little bit of content on the page announcing how many websocket connections are currently open, or anything else. Don't password protect it or require a session to hit it.
My /healthcheck endpoints, whenever hit, measure the server load. I simply use the number of current websocket connections, but you can use any metric you want. I compare the current load to a load threshold configured for each server. For example, on a micro instance I can handle 20 open websockets, and on a production instance I can handle 400.
If the server load is too high, my endpoint gives back a 503 http error status along with its content. 503 typically means "I am overloaded, please try again later." It can also mean "I will shut down when all my connections are closed. Please don't use me for any more connections."
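As an illustration only, a bare-bones version of such an endpoint in Node.js with Express might look like this (Express, the connection counter and the threshold value are my assumptions, not details from the setup above):
var express = require('express');
var app = express();

var openSocketCount = 0;   // incremented/decremented by your websocket handlers
var MAX_SOCKETS = 400;     // per-server threshold, e.g. 20 on a micro instance

app.get('/healthcheck', function (req, res) {
  if (openSocketCount >= MAX_SOCKETS) {
    // 503 tells the load balancer to stop sending new connections here
    res.status(503).send('overloaded: ' + openSocketCount + ' open websockets');
  } else {
    res.status(200).send('ok: ' + openSocketCount + ' open websockets');
  }
});

app.listen(8080);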
Then I configure the load balancer to perform those health checks every couple of minutes on all the servers in the server pool (AWS calls the pool a "target group"). The health check operation detects "unhealthy" servers and temporarily takes them out of its rotation. (The health check also detects crashed servers, which is good.)
You need this load balancer health check for a large-scale production setup.
All that being said, you will get best results if all your server instances in your pool have roughly the same capacity as each other.

Load balancer and WebSockets

Our infrastructure is composed of:
1 F5 load balancer
3 nodes
We have an application which uses websockets, so when a user goes to our site, it opens a websocket to the balancer, which connects it to the first available node, and it works as expected.
Our troubles arrive with maintenance tasks: when we have to update our software, we need to take 1 node offline at a time, deploy the new release and then turn it on again. Doing this task, the balancer drops the open websocket connections to that node and the clients retry connecting after a few seconds to the first available node, creating an inconvenience for the client because he could miss a signal (or more).
How can we keep the connection between the client and the balancer while changing the backend websocket server? Is the load balancer enough to achieve our goal, or do we need to change our infrastructure?
To avoid this kind of problem I recommend reading about Azure SignalR. With this you don't need to think about things like a load balancer, a Redis backplane and other infrastructure that you would otherwise need for a WebSockets connection.
Basically the clients will not connect to your node directly but will be redirected to Azure SignalR. You can read more about it here: https://learn.microsoft.com/en-us/azure/azure-signalr/signalr-overview
Since it is important for your application to maintain the connection, I don't see any other way to achieve zero connection drops to your nodes, since you need to shut them down.
It's important to understand that the F5 is a full TCP proxy. This means that the F5 is the server to the client and the client to the server. If you are using the websockets protocol then you must apply a websockets profile to the F5 Virtual Server in order for the websockets application to be handled properly by the Load Balancer.
Details of the websockets profile can be found here: https://support.f5.com/csp/article/K14754
If a websockets and an HTTP profile are applied to the Virtual Server - meaning that you have websockets and web traffic using the same port and LB nodes - then the F5 will allow the websockets traffic as passthrough. Also keep in mind that if this is an HTTPS virtual server you will need to ensure a client and server side HTTPS profile (SSL offload) are applied to the Virtual Server.
While there are a variety of ways that you can fiddle with load balancers to minimize the downtime caused by a software upgrade, none of them solve the problem, which is that your application-layer protocol seems to not tolerate some small network outages.
Even if you have a perfect load balancer and your software deploys cause zero downtime, the customer's computer may be on flaky wifi which causes a network dropout for half a second - or going over ethernet and someone reconfigures some routing on their LAN, etc.
I'd suggest having your server maintain a queue of messages for clients (up to some size/time limit) so that when a client drops a connection - whether it be due to load balancers/upgrades - or any other reason, it can continue without disruption.
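A rough sketch of that queueing idea in Node.js (the Map, the size limit and the replay-on-reconnect hook are all illustrative assumptions, not a prescribed design):
var MAX_QUEUE = 100;          // per-client cap; tune to your size/time limits
var queues = new Map();       // clientId -> array of messages not yet delivered

function sendOrQueue(clientId, socket, message) {
  if (socket && socket.readyState === 1 /* OPEN */) {
    socket.send(JSON.stringify(message));
  } else {
    var q = queues.get(clientId) || [];
    q.push(message);
    if (q.length > MAX_QUEUE) q.shift();   // drop the oldest beyond the cap
    queues.set(clientId, q);
  }
}

// Call this when the client re-establishes its websocket.
function flushOnReconnect(clientId, socket) {
  (queues.get(clientId) || []).forEach(function (message) {
    socket.send(JSON.stringify(message));
  });
  queues.delete(clientId);
}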

Google Cloud Platform - load balancer websocket keeps disconnecting after a few seconds

We are using 2 servers and have set up a load balancer to redirect the traffic. Both servers are Compute Engine instances.
We are also using websockets (socket.io) to keep the connection between users (online and offline status). When a connection is established between users, it gets disconnected after a few seconds. We concluded that it is a load balancer configuration issue, because if we use a single server (without the load balancer), the connection remains alive until the user goes offline.
We need help here: do we need to do anything extra in the load balancer configuration to make it work smoothly with websockets?
Using IP addresses, not a domain name (if that makes any difference).
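One thing worth checking - an assumption on my part, not something stated in the question - is whether the load balancer's idle/backend timeout is shorter than the websocket keepalive interval, since quiet connections then get cut. With socket.io you can shorten the server's ping interval, for example:
var io = require('socket.io')(3000, {
  pingInterval: 25000,  // send a ping every 25 seconds so the connection never looks idle
  pingTimeout: 20000    // treat the connection as dead if no pong arrives within 20 seconds
});
The values above are placeholders; if the balancer's own timeout is shorter than your longest expected connection, it still has to be raised on the balancer side.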

Websockets and scalability

I am a beginner with websockets.
I have a need in my application where server needs to notify clients when something changes and am planning to use websockets.
Single server instance and single client ==> How many websockets will be created and how many connections to websockets?
Single server instance and 10 clients ==> How many websockets will be created and how many connections to websockets?
Single server instance and 1000 clients ==> How many websockets will be created and how many connections to websockets?
How do you scale with websockets when your application has a user base in the 1000’s?
Thanks much for your feedback.
1) Single server instance and single client ==> How many websockets will be created and how many connections to websockets?
If your client creates one webSocket connection, then that's what there will be: one webSocket connection on the client and one on the server. It's the client that creates webSocket connections to the server, so it is the client that determines how many there will be. If it creates 3, then there will be 3. If it creates 1, then there will be 1. Usually, the client would just create 1.
2) Single server instance and 10 clients ==> How many websockets will be created and how many connections to websockets?
As described above, it depends upon what the client does. If each client creates 1 webSocket connection and there are 10 clients connected to the server, then the server will see a total of 10 webSocket connections.
3) Single server instance and 1000 clients ==> How many websockets will be created and how many connections to websockets?
Same as point #2.
How do you scale with webSockets when your application has a user base in the 1000’s?
A single server, configured appropriately can handle hundreds of thousands of simultaneous webSocket connections that are mostly idle since an idle webSocket uses pretty much no server CPU. For even larger scale deployments, one can cluster the server (run multiple server processes) and use sticky load balancing to spread the load.
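As a toy illustration of the single-server case, here is a bare webSocket server in Node.js that just reports how many mostly-idle connections it is holding (the "ws" package and the port are my choices, not something from the question):
var WebSocket = require('ws');
var wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', function (socket) {
  console.log('connections currently open:', wss.clients.size);
  socket.on('message', function (data) {
    // an idle connection costs almost no CPU; work only happens per message
    socket.send('echo: ' + data);
  });
});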
There are many other articles like these on Google worth reading if you're pursuing large scale webSocket or socket.io deployments:
The Road to 2 Million Websocket Connections in Phoenix
600k concurrent websocket connections on AWS using Node.js
10 million concurrent webSockets
Ultimately, the achievable scale per properly configured server will likely have more to do with how much activity there is per connection and how much computation is needed to deliver it.

Loadbalancing web sockets

I have a server which supports web sockets. Browsers connect to my site and each one opens a web socket to www.mydomain.example. That way, my social network app can push messages to the clients.
Traditionally, using just HTTP requests, I would scale up by adding a second server and a load balancer in front of the two web servers.
With web sockets, the connection has to be directly with the web server, not the load balancers, because if a machine has a physical limit of say 64k open ports, and the clients were connecting to the load balancer, then I couldn't support more than 64k concurrent users.
So how do I:
get the client to connect directly to the web server (rather than the load balancer) when the page loads? Do I simply load the JavaScript from a node, and the load balancers (or whatever) randomly modifies the URL for the script, every time the page is initially requested?
handle a ripple start? The browser will notice that the connection is closed as the web server shuts down. I can write JavaScript code to attempt to reopen the connection, but the node will be gone for a while. So I guess I would have to go back to the load balancer to query the address of the next node to use?
I did wonder about the load balancers sending a redirect on the initial request, so that the browser initially requests www.mydomain.example and gets redirected to www34.mydomain.example. That works quite well, until the node goes down - and sites like Facebook don't do that. How do they do it?
Put a L3 load-balancer that distributes IP packets based on source-IP-port hash to your WebSocket server farm. Since the L3 balancer maintains no state (using hashed source-IP-port) it will scale to wire speed on low-end hardware (say 10GbE). Since the distribution is deterministic (using hashed source-IP-port), it will work with TCP (and hence WebSocket).
Also note that a 64k hard limit only applies to outgoing TCP/IP for a given (source) IP address. It does not apply to incoming TCP/IP. We have tested Autobahn (a high-performance WebSocket server) with 200k active connections on a 2 core, 4GB RAM VM.
Also note that you can do L7 load-balancing on the HTTP path announced during the initial WebSocket handshake. In that case the load balancer has to maintain state (which source IP-port pair is going to which backend node). It will probably scale to millions of connections nevertheless on decent setup.
Disclaimer: I am original author of Autobahn and work for Tavendo.
Note that if your websocket server logic runs on nodejs with socket.io, you can tell socket.io to use a shared redis key/value store for synchronization.
This way you don't even have to care about the load balancer, events will propagate among the server instances.
// Create a socket.io server listening on port 3000
var io = require('socket.io')(3000);
var redis = require('socket.io-redis');
// The redis adapter propagates events across all server instances
io.adapter(redis({ host: 'localhost', port: 6379 }));
See: Socket IO - Using multiple nodes
But at some point I guess redis can become the bottleneck...
You can also achieve layer 7 load balancing with inspection and "routing functionality"
See "How to inspect and load-balance WebSockets traffic using Stingray Traffic Manager, and when necessary, how to manage WebSockets and HTTP traffic that is received on the same IP address and port." https://splash.riverbed.com/docs/DOC-1451
