Is it possible to stay in the same server behind a round robin load balancer? - spring

I have a web app with Spring Security, and it's behind a round robin load balancer, so whenever the load balancer jumps from server A to server B the session is lost.
We don't want to use Remember-Me cookies; maybe it's paranoia, but the data is too sensitive.
And we can't configure the load balancer to have sticky sessions (that's another department, and asking them to configure the load balancer for this is our last option).
Is it possible to configure Spring's XML files so a user never goes outside the server they originally landed on?
So all the requests a user makes on server A will always be served by server A?

You could have different URLs for each server:
server A: www1.myapp.com
server B: www2.myapp.com
Then when the user goes to the app, they are redirected to one of the above URLs. This way they will be pinned to that server for future requests.
However, this means that if one server goes down, the user will not be directed to the other server, so it would not be redundant.
You could get around this by having the surviving server take over the other URL if its peer went down.
Here is the flow:
user hits www.myapp.com
load balancer sends traffic to server A
server A notices URL=www.myapp.com, so it redirects to www1.myapp.com
user hits www1.myapp.com
traffic hits server A (load balancer is bypassed)
If you don't want to expose the servers to the WWW, then you could set up extra pools on the LB as follows:
www.myapp.com : server A, server B
www1.myapp.com : server A
www2.myapp.com : server B
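A minimal sketch of the redirect step from the flow above, as a servlet filter deployed on server A (assuming the Servlet 4.0 API, where Filter's init/destroy have default implementations; the class name and hostnames mirror this answer's hypothetical www/www1 setup):

    import java.io.IOException;
    import javax.servlet.*;
    import javax.servlet.http.*;

    public class PinToNodeFilter implements Filter {
        // This node's own hostname: www1.myapp.com on server A, www2.myapp.com on server B.
        private static final String NODE_HOST = "www1.myapp.com";

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            HttpServletResponse response = (HttpServletResponse) res;
            // Request arrived via the shared, load-balanced name: bounce the browser
            // to this node's own name so every later request lands on the same server.
            if ("www.myapp.com".equalsIgnoreCase(request.getServerName())) {
                String query = request.getQueryString();
                response.sendRedirect("https://" + NODE_HOST + request.getRequestURI()
                        + (query == null ? "" : "?" + query));
                return;
            }
            chain.doFilter(req, res);
        }
    }

Server B would ship the same filter with NODE_HOST set to www2.myapp.com.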

Related

Load balancer and WebSockets

Our infrastructure is composed of:
1 F5 load balancer
3 nodes
We have an application which uses websockets, so when a user goes to our site, it opens a websocket to the balancer, which connects it to the first available node, and it works as expected.
Our troubles arrive with maintenance tasks: when we have to update our software, we need to take 1 node offline at a time, deploy the new release and then turn it on again. During this task, the balancer drops the open websocket connections to the node, and the clients retry to connect after a few seconds to the first available node, creating an inconvenience for the client because they could miss a signal (or more).
How can we keep the connection between the client and the balancer while changing the backend websocket server? Is the load balancer enough to achieve our goal, or do we need to change our infrastructure?
To avoid this kind of problem I recommend reading about Azure SignalR. With it you don't need to think about things like the load balancer, a Redis backplane and other infrastructure that you may otherwise need for a WebSockets connection.
Basically the clients will not connect to your node directly but will be redirected to Azure SignalR. You can read more about it here: https://learn.microsoft.com/en-us/azure/azure-signalr/signalr-overview
Since it is important for your application to maintain the connection, I don't see any other way to achieve zero connection drops to your nodes, given that you need to shut them down.
It's important to understand that the F5 is a full TCP proxy. This means that the F5 is the server to the client and the client to the server. If you are using the websockets protocol then you must apply a websockets profile to the F5 Virtual Server in order for the websockets application to be handled properly by the Load Balancer.
Details of the websockets profile can be found here: https://support.f5.com/csp/article/K14754
If a websockets and an HTTP profile are applied to the Virtual Server - meaning that you have websockets and web traffic using the same port and LB nodes - then the F5 will allow the websockets traffic as passthrough. Also keep in mind that if this is an HTTPS virtual server, you will need to ensure a client and server side HTTPS profile (SSL offload) are applied to the Virtual Server.
While there are a variety of ways to fiddle with load balancers to minimize the downtime caused by a software upgrade, none of them solves the underlying problem, which is that your application-layer protocol does not tolerate small network outages.
Even if you have a perfect load balancer and your deploys cause zero downtime, a customer's computer may be on flaky wifi that drops the network for half a second, or be on ethernet when someone reconfigures routing on their LAN, etc.
I'd suggest having your server maintain a queue of messages for each client (up to some size/time limit), so that when a client drops a connection, whether due to load balancers, upgrades, or any other reason, it can resume without disruption.
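A sketch of such a per-client queue (the ReplayBuffer class and its limits are mine, for illustration; a real version would also track acknowledgements or sequence numbers so only unseen messages are replayed):

    import java.util.*;
    import java.util.concurrent.*;

    class ReplayBuffer {
        private static final int MAX_MESSAGES = 1000; // size limit per client

        // clientId -> pending messages, oldest first
        private final Map<String, Deque<String>> queues = new ConcurrentHashMap<>();

        // Called on every outbound message; keeps a copy for replay.
        void record(String clientId, String message) {
            Deque<String> q = queues.computeIfAbsent(clientId, id -> new ConcurrentLinkedDeque<>());
            q.addLast(message);
            while (q.size() > MAX_MESSAGES) {
                q.pollFirst(); // drop the oldest once over the limit
            }
        }

        // Called when a client reconnects: everything it missed, oldest first.
        List<String> drain(String clientId) {
            Deque<String> q = queues.remove(clientId);
            return q == null ? List.of() : new ArrayList<>(q);
        }
    }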

Google Cloud Platform - load balancer websocket keeps disconnecting after a few seconds

We are using 2 servers and have set up a load balancer to distribute the traffic. Both servers are Compute Engine instances.
We are also using websockets (socket.io) to keep the connection between users (online and offline status). When a connection is established between users, it gets disconnected after a few seconds. We concluded that it is a load balancer configuration issue, because if we use a single server (without the load balancer), the connection stays alive until the user goes offline.
We need help here: do we need to do anything extra in the load balancer configuration to make it work smoothly with websockets?
We are using IP addresses, not a domain name (if that makes any difference).

client management with multiple SignalR servers behind a load balancer

Let's say I've got 2 stock ticker servers that are pushing quotes to web browser clients. The 2 servers sit behind a load balancer (round robin mode).
Consider the following scenario:
client A subscribes to Google stock on Server1, like so: Groups.Add(Context.ConnectionId, "Google");
client B subscribes to Yahoo stock on Server2: Groups.Add(Context.ConnectionId, "Yahoo");
client C subscribes to Google stock on Server2: Groups.Add(Context.ConnectionId, "Google");
Now both servers are already synced with the stock market, so when a stock gets updated they both receive the update in real time.
My question is:
when Server2 pushes a new update, like so:
Clients.Group("Google").tick(quote);
Which clients will it send the message to? Will it always be client C? I guess not; we have a load balancer in between, so the connected clients at a given time may change, right? It may be C now, but on the next tick it could be clients A & C, or only A. A websocket connection is supposed to stay open, so how will the load balancer handle that? Will it always forward the connection from one client to a specific server?
A backplane won't help me here, because my 2 servers are already synced and will send the same messages at the same time. So if I force them to route their messages through the backplane to the other server, it will end up sending duplicate messages to the clients, like so:
server1 gets ticker X for Google at 10:00 --> routes it to the backplane --> routed to server2
server2 gets ticker X for Google at 10:00 --> routes it to the backplane --> routed to server1
server1 sends 2 X Google tickers to its clients
server2 sends 2 X Google tickers to its clients
OK, eventually I synced all group subscriptions through a shared cache (Redis), so all servers know all users and their subscriptions. This way each server knows its currently connected clients' registered groups and pushes only the relevant data.
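A minimal sketch of that shared registry, here with the Java Jedis client (the GroupRegistry class and key naming are mine, for illustration; SignalR itself is .NET, but the pattern is the same in any language):

    import java.util.Set;
    import redis.clients.jedis.Jedis;

    class GroupRegistry {
        // A single client for brevity; a pooled client would be used in practice.
        private final Jedis redis = new Jedis("localhost", 6379);

        // Register a connection's interest in a stock group, visible to every server.
        void subscribe(String group, String connectionId) {
            redis.sadd("group:" + group, connectionId);
        }

        void unsubscribe(String group, String connectionId) {
            redis.srem("group:" + group, connectionId);
        }

        // Before pushing a tick, each server intersects the group's members with
        // its own locally connected clients and sends only to those.
        Set<String> members(String group) {
            return redis.smembers("group:" + group);
        }
    }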
Update:
After much thought this is what we've ended up doing:
The load balancer will assign a sticky session to each incoming connection, so every connection will have one constant SignalR server.
Point 1 makes the Redis sync redundant, as each server will know all of its clients.
In case of a server/network failure, the SignalR client will reconnect and be assigned a server (a new one in the case of a server failure) by the load balancer.
After a reconnect the SignalR client will resubscribe to the relevant stocks (this may be redundant if the failure was in the network and the load balancer redirects it to the old SignalR server, but I'll live with that).
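A sketch of the reconnect-and-resubscribe step (points 3 and 4); the StockClient interface here is hypothetical, standing in for whatever SignalR client wrapper is in use:

    import java.util.Set;
    import java.util.concurrent.CopyOnWriteArraySet;

    class ResubscribingClient {
        private final Set<String> subscriptions = new CopyOnWriteArraySet<>();
        private final StockClient client; // hypothetical transport wrapper

        ResubscribingClient(StockClient client) {
            this.client = client;
            // On every (re)connect, replay the full subscription set: harmless if
            // the LB stuck us to the same server, essential if it picked a new one.
            client.onConnected(() -> subscriptions.forEach(client::subscribe));
        }

        void subscribe(String stock) {
            subscriptions.add(stock);
            client.subscribe(stock);
        }
    }

    interface StockClient {
        void onConnected(Runnable callback);
        void subscribe(String stock);
    }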

How to protect a websocket connection IP from being modified

I am working on a small project to help me understand websockets better. I am making a simple browser game that connects to an IP via a websocket. There will be 3 IP addresses; however, I want to assign the user an IP and not let them modify it, so they are unable to get onto the same server as friends.
I will assign the IP based on how full the games are, etc., and this will be done via PHP. Currently, although it connects to the assigned IP, the user is able to use the browser console to change the IP to one of the other ones.
I was thinking of sending a check number: the web server sends this to the user along with the IP, and also sends it to the websocket server. Then, when a user connects, if the check number doesn't match, the connection is rejected.
I'm new to websockets, so I'm not sure whether this would be easy to implement. Are there any easy solutions to this?
That seems to be the duty of another element, in particular the load balancer. How are you balancing the requests across those 3 servers? Does your load balancer support sticky sessions?
If not, you can probably record which IP address the user connected to first, and then, if they connect to one of the other two later, return an HTTP 302 (Redirect) pointing to the server you want.
Cheers.
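For what it's worth, the asker's "check number" idea amounts to a one-time connection ticket. A minimal sketch, assuming the PHP web server and the websocket servers share a secret out of band (the Ticket class and its names are made up for illustration):

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.Base64;

    class Ticket {
        private static final byte[] SECRET = "change-me".getBytes(StandardCharsets.UTF_8);

        // Issued by the web server together with the assigned IP.
        static String issue(String userId, String assignedIp) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(SECRET, "HmacSHA256"));
            byte[] sig = mac.doFinal((userId + "|" + assignedIp).getBytes(StandardCharsets.UTF_8));
            return Base64.getUrlEncoder().withoutPadding().encodeToString(sig);
        }

        // Checked by the websocket server on connect: a client that swapped the
        // assigned IP for a different one will fail verification.
        static boolean verify(String userId, String serverIp, String ticket) throws Exception {
            // Constant-time comparison to avoid timing attacks.
            return MessageDigest.isEqual(
                    issue(userId, serverIp).getBytes(StandardCharsets.UTF_8),
                    ticket.getBytes(StandardCharsets.UTF_8));
        }
    }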

Sticky and NON-Sticky sessions

I want to know the difference between sticky and non-sticky sessions. What I understood after reading on the internet:
Sticky: only a single session object will be there.
Non-sticky session: a session object for each server node
When your website is served by only one web server, for each client-server pair, a session object is created and remains in the memory of the web server. All the requests from the client go to this web server and update this session object. If some data needs to be stored in the session object over the period of interaction, it is stored in this session object and stays there as long as the session exists.
However, if your website is served by multiple web servers which sit behind a load balancer, the load balancer decides which actual (physical) web server each request should go to. For example, if there are 3 web servers A, B and C behind the load balancer, one request to www.mywebsite.com may be served from server A, the next from server B, and a third from server C.
Now, if the requests are being served from 3 physically different servers, each server has created a session object for you, and because these session objects sit on three independent boxes, there's no direct way for one to know what is in the session object of the others. In order to synchronize these server sessions, you may have to write/read the session data to a layer common to all of them, like a DB. But writing and reading data to/from a DB for this use case may not be a good idea. This is where sticky sessions come in.
If the load balancer is instructed to use sticky sessions, all of your interactions will happen with the same physical server, even though other servers are present. Thus, your session object will be the same throughout your entire interaction with this website.
To summarize: in the case of sticky sessions, all your requests will be directed to the same physical web server, while a non-sticky load balancer may choose any web server to serve your requests.
As an example, you may read about Amazon's Elastic Load Balancer and sticky sessions here: http://aws.typepad.com/aws/2010/04/new-elastic-load-balancing-feature-sticky-sessions.html
I've written an answer with some more details here: https://stackoverflow.com/a/11045462/592477, or you can read it below.
When you use load balancing, it means you have several instances of Tomcat and you need to divide the load among them.
If you're using session replication without sticky sessions: Imagine you have only one user using your web app, and you have 3 Tomcat instances. This user sends several requests to your app; the load balancer will send some of these requests to the first Tomcat instance, some others to the second instance, and others to the third.
If you're using sticky sessions without replication: Imagine you have only one user using your web app, and you have 3 Tomcat instances. This user sends several requests to your app; the load balancer will send the first user request to one of the three Tomcat instances, and all the other requests sent by this user during his session will be sent to the same Tomcat instance. During these requests, if you shut down or restart this Tomcat instance (the one being used), the load balancer sends the remaining requests to another Tomcat instance that is still running. BUT, as you don't use session replication, the Tomcat instance which receives the remaining requests doesn't have a copy of the user session, so for this Tomcat the user begins a new session: the user loses his session and is disconnected from the web app, although the web app is still running.
If you're using sticky sessions WITH session replication: Imagine you have only one user using your web app, and you have 3 Tomcat instances. This user sends several requests to your app; the load balancer will send the first user request to one of the three Tomcat instances, and all the other requests sent by this user during his session will be sent to the same Tomcat instance. During these requests, if you shut down or restart this Tomcat instance (the one being used), the load balancer sends the remaining requests to another Tomcat instance that is still running. As you use session replication, the Tomcat instance which receives the remaining requests has a copy of the user session, so the user keeps his session: the user continues to browse your web app without being disconnected; the shutdown of the Tomcat instance doesn't impact the user's navigation.
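For concreteness, the two knobs behind these scenarios in Tomcat's server.xml look roughly like this (values are illustrative; sticky routing additionally needs a jvmRoute-aware balancer such as mod_jk, and replication needs a <distributable/> element in the webapp's web.xml):

    <!-- Stickiness: the balancer appends jvmRoute to the session cookie and
         routes follow-up requests for that session back to the same node. -->
    <Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat1">
      <!-- Replication: the default SimpleTcpCluster copies session changes
           to the other nodes in the cluster. -->
      <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
      <Host name="localhost" appBase="webapps"/>
    </Engine>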
Let's say the user sends a request to get their profile, and there is nothing yet in the memory of our web application instance. We get the user profile from the DB, but before sending the response, we save the data in the memory of, let's say, Instance3. But the next request from the same user can go to any instance.
When the request first comes to Instance3, it creates a session with a session ID. When the response is sent to the client, the client receives a cookie. So the next time this client makes a request, the cookie is attached to the request; the load balancer looks at the cookie and knows the request has to be forwarded to Instance3. This is the sticky session solution. Its downside: what if Instance3 goes down? The load balancer will route the requests to the other instances, but they do not have the cached session data, so all the users whose sessions were stored on Instance3 will see high latency. This impacts the reliability of your system.
If you instead store sessions in all instances, you have memory issues. If an instance can store 100 user sessions and you have 3 instances, you could store 300 sessions in total; but if each instance stores a copy of every session, you can store only 100 sessions across all 3 instances. This impacts the scalability of your application.
Sticky and non-sticky sessions are components of stateful replication. If you want higher scalability, you cache nothing on your web application instance, but then your instance hits the DB on every request, which causes high latency.
A better way is stateless replication, where you store nothing on the application instance itself and instead use server-side caching (memcached/Redis).
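Tying this back to the Spring question at the top, one common way to do this in that stack is Spring Session backed by Redis. A minimal sketch, assuming the spring-session-data-redis dependency and a Redis instance on localhost:6379:

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
    import org.springframework.session.data.redis.config.annotation.web.http.EnableRedisHttpSession;

    // Replaces the container's in-memory HttpSession with one backed by Redis,
    // so any instance behind the load balancer can serve any request.
    @Configuration
    @EnableRedisHttpSession
    public class SessionConfig {

        // Connection to the shared Redis; host/port are illustrative.
        @Bean
        public LettuceConnectionFactory connectionFactory() {
            return new LettuceConnectionFactory("localhost", 6379);
        }
    }

With this in place, round robin routing is harmless: losing a node loses no sessions, because none of them live on the web servers.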
