Maintain user session with Route 53

Problem description:
I have 2 EC2 instances, one in the Europe region and one in North America. I have set up Route 53 to direct user requests to these servers with a Weighted Round Robin record set.
When a user is directed to one server and creates a session, the next time the client resolves the domain name it could be directed to the other server, which doesn't have the session (e.g. the user logs in, clicks another link, and has to log in again).
Thoughts:
I could put a load balancer in front of both servers to ensure session stickiness, but in that case the Weighted Round Robin DNS routing can't be used.
I could also increase the TTL of the DNS response, but that almost eliminates the effect of WRR.
Or I could configure the servers to share sessions (which I don't know how to do; the server is an OFBiz server), and I am not sure whether that is good practice.
Question:
Is there a way to maintain user sessions while using a Weighted Round Robin record set on Amazon Web Services Route 53?
Many thanks!

DNS round robin works perfectly for short-lived TCP connections (e.g. performing a single query). However, if you want to provide a longer session, make sure you keep a longer-lived connection between your client and server. You can use an HTTPSession between your servers and clients, and the connection will stay open until the session expires.
As you mentioned, increasing the TTL will not solve the problem, because a TCP connection could be established close to the expiry time of one DNS response, and the next lookup could return a different IP address.
If you don't want to use HTTPSession for any reason, you are probably better off using Elastic Load Balancing (ELB) to do the same thing for you.
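For the ELB option, duration-based stickiness can be switched on per listener. A minimal boto3 sketch (the load balancer name, policy name, and cookie lifetime are placeholders):

    import boto3

    elb = boto3.client("elb")  # Classic Load Balancer API

    # Create a duration-based stickiness policy: the ELB issues its own
    # cookie and pins each client to one backend until the cookie expires.
    elb.create_lb_cookie_stickiness_policy(
        LoadBalancerName="my-elb",       # placeholder
        PolicyName="sticky-sessions",
        CookieExpirationPeriod=3600,     # seconds
    )

    # Attach the policy to the listener on port 80.
    elb.set_load_balancer_policies_of_listener(
        LoadBalancerName="my-elb",
        LoadBalancerPort=80,
        PolicyNames=["sticky-sessions"],
    )

One common pattern is one ELB per region, with the Route 53 weighted records pointing at the ELBs' DNS names, so WRR across regions and stickiness within a region can coexist.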

Related

How does AWS Application Load Balancer select a target within a target group? How to load balance websocket traffic?

I have an AWS Application Load Balancer to distribute HTTP(S) traffic.
Problem 1:
Suppose I have a target group with 2 EC2 instances: a micro and an xlarge. Obviously they can handle different traffic levels. Does the load balancer distribute traffic proportionally to instance size, or just round robin? If it's only round robin, with no other factors taken into account, then it's not really balancing load: at some point the micro instance will be suffering under the traffic while the xlarge one starves.
Problem 2:
Suppose I have a target group with 2 EC2 instances, both the same size, but my service does not use a classic HTTP request/response flow. It uses websockets: a client makes an HTTP request just once, to establish a socket, and then keeps the socket open for a long time, sending and receiving messages (e.g. a chat service). Suppose my load balancer uses round robin and both EC2 instances have 1000 clients connected each. Now one of the EC2 instances goes down and its 1000 connected clients drop their socket connections. The instance comes back up quickly and is ready to accept websocket calls again, and the 1000 dropped clients try to reconnect. If the load balancer uses pure round robin, I'll end up with 1500 clients connected to instance #1 and 500 connected to instance #2, thus not really balancing the load correctly.
Basically, I'm trying to find out whether some more advanced logic is used to select a target in a group, or whether it's just naive round robin selection. If it's round robin only, then how can I really balance the websocket connection load?
Websockets start out as http or https connections, so a load balancer can dispatch them to a server. Once the server accepts the http connection, both the server and the client "upgrade" the connection to use the websocket protocol. They then leave the connection open to use for websocket traffic. As far as the load balancer can tell, the connection is simply a long-lasting http connection.
Taking a server down when it has websocket connections to clients requires your application to retry lost connections. Reconnecting on connection failure is one of the trickiest parts of websocket client programming. Your application cannot be robust without reconnect logic.
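A minimal reconnect loop with jittered exponential backoff, sketched in Python with the websockets package (the URI and message handling are placeholders):

    import asyncio
    import random
    import websockets

    async def run(uri):
        delay = 1
        while True:
            try:
                async with websockets.connect(uri) as ws:
                    delay = 1  # connected: reset the backoff
                    async for message in ws:
                        print(message)  # placeholder for real message handling
            except (OSError, websockets.ConnectionClosed):
                # Random jitter spreads reconnects out, so 1000 dropped
                # clients don't all hit the load balancer in the same instant.
                await asyncio.sleep(delay + random.random())
                delay = min(delay * 2, 60)

    asyncio.run(run("wss://example.com/socket"))  # hypothetical endpoint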
AWS's load balancer has no built-in knowledge of the capabilities of the servers behind it. You have observed that it sends requests equally to big and small servers. That can overwhelm the small ones.
I have managed this by building a /healthcheck endpoint on my servers. It's a straightforward web page, e.g. https://example.com/healthcheck. You can put a little content on the page announcing how many websocket connections are currently open, or anything else. Don't password-protect it or require a session to hit it.
My /healthcheck endpoints, whenever hit, measure the server load. I simply use the number of current websocket connections, but you can use any metric you want. I compare the current load to a load threshold configured for each server. For example, on a micro instance I can handle 20 open websockets, and on a production instance I can handle 400.
If the server load is too high, my endpoint gives back a 503 http error status along with its content. 503 typically means "I am overloaded, please try again later." It can also mean "I will shut down when all my connections are closed. Please don't use me for any more connections."
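A minimal sketch of such an endpoint in Python/Flask (the connection counter and threshold are placeholders for whatever metric and limits you actually track):

    from flask import Flask

    app = Flask(__name__)
    MAX_CONNECTIONS = 400    # per-server threshold, e.g. 20 on a micro
    open_websockets = 0      # placeholder: maintained by your socket code

    @app.route("/healthcheck")
    def healthcheck():
        body = f"open websockets: {open_websockets}/{MAX_CONNECTIONS}"
        if open_websockets >= MAX_CONNECTIONS:
            return body, 503  # overloaded: take me out of rotation
        return body, 200      # healthy: keep sending me traffic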
Then I configure the load balancer to perform those health checks every couple of minutes on all the servers in the server pool (AWS calls the pool a "target group"). The health check operation detects "unhealthy" servers and temporarily takes them out of its rotation. (The health check also detects crashed servers, which is good.)
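Wiring the endpoint into the target group's health check can be done from the console, or as sketched here with boto3 (the ARN and timings are placeholders):

    import boto3

    elbv2 = boto3.client("elbv2")
    elbv2.modify_target_group(
        TargetGroupArn="arn:aws:elasticloadbalancing:...",  # placeholder ARN
        HealthCheckPath="/healthcheck",
        HealthCheckIntervalSeconds=120,  # "every couple of minutes"
        UnhealthyThresholdCount=2,       # failures before removal from rotation
        HealthyThresholdCount=2,         # successes before being put back
    )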
You need this load balancer health check for a large-scale production setup.
All that being said, you will get best results if all your server instances in your pool have roughly the same capacity as each other.

Route requests for the same room to the same server when using web sockets behind a load balancer

I have an online whiteboard where users connect to the same room based on the last part of the URL, which contains the room name. The URLs are dynamic and are created per room.
E.g. https://.../room/123456
I use web sockets to communicate between client and server. Users are subscribed to the same channel based on the room name. I'm going to implement a load balancer to handle the traffic. Since we create a session on the server for that particular room, it is essential that every user in the room is directed to that particular server. How can I achieve this?
I think creating a proxy with the URI balancing method may be what you're looking for. By default, it will distribute traffic based on a hash of your URL path, so every request for the same room lands on the same backend. In HAProxy, for example (the server lines are placeholders you'd replace with your own instances):
backend bk_whiteboard
    balance uri
    server wb1 10.0.0.1:8080 check
    server wb2 10.0.0.2:8080 check

Load balancing web sockets - AWS Elastic Load Balancer

I have a question about how to load balance web sockets with the AWS Elastic Load Balancer.
I have 2 EC2 instances behind an AWS Elastic Load Balancer.
When a user logs in, the user session will be established with one of the servers, say EC2 instance1. From then on, all requests from the same user will be routed to EC2 instance1.
Now, I have a different, stateless request coming from another system. This request will have a userId in it and might end up going to EC2 instance2. We are supposed to send a notification to the user based on the userId in the request.
Now,
1) Assume the user session is with EC2 instance1, but the notification originates from EC2 instance2.
I am not sure how to notify the user browser in this case.
2) Is there any limit on the number of websocket connections (like 64K), and how do we overcome it with multiple servers, given that users come through the load balancer?
Thanks
You will need something else to notify the browser's websocket server end about the event coming from the other system. There are a couple of publish-subscribe based solutions which might help, but without knowing more details it is hard to say which fits best. Redis is generally a good answer, and ElastiCache supports it.
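As an illustration of the publish-subscribe approach with Redis (the endpoint, channel naming, and payload are assumptions): the instance that receives the stateless request publishes to a per-user channel, and whichever instance holds that user's websocket is subscribed to it.

    import redis

    r = redis.Redis(host="my-elasticache-endpoint", port=6379)  # placeholder

    # On EC2 instance2, where the stateless request arrives:
    r.publish("notify:user:42", "your order has shipped")

    # On EC2 instance1, which holds the user's websocket:
    pubsub = r.pubsub()
    pubsub.subscribe("notify:user:42")
    for msg in pubsub.listen():
        if msg["type"] == "message":
            print(msg["data"])  # forward this down the user's websocket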
I found this regarding AWS ELB's limits:
http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_elastic_load_balancer
But none of them seems to be related to your question.
Websocket requests start with HTTP communication before handing over to websockets. In theory if you could include a cookie in that initial HTTP request then the sticky session features of ELB would allow you to direct websockets to specific EC2 instances. However, your websocket client may not support this.
A preferred solution would be to make your EC2 instances stateless. Store the websocket session data in AWS ElastiCache (either Redis or Memcached); incoming connections will then be able to access the session regardless of which EC2 instance serves them.
The advantage of this solution is that you remove the dependency on individual EC2 instances and your application will scale and handle failures better.
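A minimal sketch of externalizing session state in Redis (the endpoint and key layout are assumptions):

    import json
    import redis

    r = redis.Redis(host="my-elasticache-endpoint", port=6379)  # placeholder

    def save_session(user_id, data, ttl=3600):
        # Any instance can write the session...
        r.setex(f"session:{user_id}", ttl, json.dumps(data))

    def load_session(user_id):
        # ...and any instance can read it back on the next connection.
        raw = r.get(f"session:{user_id}")
        return json.loads(raw) if raw else None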
If the ELB has too many incoming connections, it should scale automatically, although I can't find a reference for that. ELBs are relatively slow to scale (minutes rather than seconds), so if you are expecting surges in traffic, AWS can "pre-warm" ELB capacity for you; this is done via a support request.
Also, factor in the ELB idle connection timeout. By default this is 60 seconds; it can be increased via the AWS console or API. Your application needs to send at least 1 byte of traffic before the timeout, or the ELB will drop the connection.
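Raising the timeout is a one-call change; for example, against a Classic ELB with boto3 (the name and value are placeholders):

    import boto3

    elb = boto3.client("elb")
    elb.modify_load_balancer_attributes(
        LoadBalancerName="my-elb",  # placeholder
        LoadBalancerAttributes={
            "ConnectionSettings": {"IdleTimeout": 300}  # seconds; default is 60
        },
    )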
I recently had to hook up crossbar.io websockets with an ALB. Basically there are two things to consider. 1) You need to set stickiness to 1 day in the target group attributes. 2) You either need something on the same port that returns a static webpage if the connection is not upgraded, or a separate port serving a static webpage, with a custom health check specifying that port on the target group. Go for an ALB over an ELB: ALBs support ws:// and wss://, they only lack health checks over websockets.
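For the stickiness setting in 1), a minimal boto3 sketch (the target group ARN is a placeholder):

    import boto3

    elbv2 = boto3.client("elbv2")
    elbv2.modify_target_group_attributes(
        TargetGroupArn="arn:aws:elasticloadbalancing:...",  # placeholder ARN
        Attributes=[
            {"Key": "stickiness.enabled", "Value": "true"},
            {"Key": "stickiness.type", "Value": "lb_cookie"},
            {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "86400"},  # 1 day
        ],
    )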

Google Compute Engine load balancing keep session

I have 2 TomEE servers on Google's machines; they both serve the same application.
The web application has a login page with JAAS, and both servers work with the same DB.
When I try to access the servers separately, everything works fine.
But when I access them via the load balancer, it looks like the load balancer is hopping my requests between the two servers, and therefore my web app does not work well, since the VM that I didn't log in to rejects my requests.
My problem: how do I make sessions work when load balancing the servers?
You want to look at the sessionAffinity feature of the load balancer.
Specifically, per the load balancer target pool docs:
sessionAffinity
[Optional] Controls the method used to select a backend virtual machine instance. You can only set this value during the creation of the target pool. Once set, you cannot modify this value. The hash method selects a backend based on a subset of the following 5 values:
Source / Destination IP
Source / Destination Port
Layer 4 Protocol (TCP, UDP)
Possible hashes are:
NONE (i.e., no hash specified) (default)
5-tuple hashing, which uses the source and destination IPs, source and destination ports, and protocol. Each new connection can end up on any instance, but all traffic for a given connection will stay on the same instance if the instance stays healthy.
CLIENT_IP_PROTO
3-tuple hashing, which uses the source and destination IPs and the protocol. All connections from a client will end up on the same instance as long as they use the same protocol and the instance stays healthy.
CLIENT_IP
2-tuple hashing, which uses the source and destination IPs. All connections from a client will end up on the same instance regardless of protocol as long as the instance stays healthy.
5-tuple hashing provides a good distribution of traffic across many virtual machines. However, a second session from the same client may arrive on a different instance because the source port may change. If you want all sessions from the same client to reach the same backend, as long as the backend stays healthy, you can specify CLIENT_IP_PROTO or CLIENT_IP options.
In general, if you select a 3-tuple or 2-tuple method, it will provide for better session affinity than the default 5-tuple method, but the overall traffic may not be as evenly distributed.
Caution: If a large portion of your clients are behind a proxy server, you should not use CLIENT_IP_PROTO or CLIENT_IP. Using them would end up sending all the traffic from those clients to the same instance.
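As a sketch, creating a target pool with client-IP affinity through the Compute Engine API using google-api-python-client (project, region, and pool name are placeholders; note that affinity can only be set when the pool is created):

    from googleapiclient import discovery

    compute = discovery.build("compute", "v1")
    compute.targetPools().insert(
        project="my-project",    # placeholder
        region="europe-west1",   # placeholder
        body={
            "name": "tomee-pool",
            "sessionAffinity": "CLIENT_IP",  # pin each client IP to one backend
        },
    ).execute()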

How to protect a websocket connection IP from being modified

I am working on a small project to help me understand websockets better. I am making a simple browser game that connects to an IP via a websocket. There will be 3 IP addresses; however, I want to assign the user an IP and not let them modify it, so they are unable to get on the same server as their friends.
I will assign the IP based on how full the games are, etc.; this will be done via PHP. Currently, although the game connects to the assigned IP, the user is able to use the browser console to change the IP to one of the others.
I was thinking of sending a check number: the web server sends it to the user along with the IP, and also sends it to the websocket server. Then, when a user connects, if the check number doesn't match, the connection is rejected.
I'm new to websockets so I'm not sure if this would be easy to implement, so are there any easy solutions to this?
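For what it's worth, the check-number idea is simple on the websocket side: treat the first frame as an auth token and close the connection if it doesn't match. A minimal sketch with Python's websockets package (the token store is a placeholder; in practice the PHP assigner and the socket server would share it, e.g. via Redis):

    import asyncio
    import websockets

    VALID_TOKENS = {"a1b2c3"}  # placeholder; written by the assigning PHP script

    async def handler(ws):
        token = await ws.recv()  # client sends its check number first
        if token not in VALID_TOKENS:
            await ws.close(code=4001, reason="invalid token")
            return
        VALID_TOKENS.discard(token)  # single-use
        async for message in ws:
            await ws.send(message)   # placeholder game logic: echo

    async def main():
        async with websockets.serve(handler, "0.0.0.0", 8765):
            await asyncio.Future()   # run forever

    asyncio.run(main())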
That seems to be the duty of another element, in particular the load balancer. How are you balancing the requests across those 3 servers? Does your load balancer support sticky sessions?
If not, you can probably record which IP address the user connected to first, and then, if they connect to one of the other two later, return an HTTP 302 (Redirect) pointing to the server you want.
Cheers.
