How to disable sticky sessions in OpenShift 3

If you scale up a pod in OpenShift 3, all requests coming from the same client IP address are sent to the container that has the session associated with it.
Is there any configuration to disable sticky sessions? How can I manage the options of the internal HAProxy in OpenShift?

For posterity, and since I had the same problem, I want to document the solution I used from Graham Dumpleton's excellent comment.
As it turns out, a cookie is set during the first request that redirects subsequent requests to the same back-end. To disable this behavior on a per-route basis:
oc annotate routes myroute haproxy.router.openshift.io/disable_cookies='true'
This prevents the cookie from being set and allows the balance algorithm to select the appropriate back-end for subsequent requests from the same client. To change the balance algorithm:
oc annotate routes myroute haproxy.router.openshift.io/balance='roundrobin'
With these two annotations set, requests from the same client IP address are sent to each back-end in turn, instead of the same back-end over and over.
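You can verify that both annotations are present on the route with something like this (a minimal check; myroute is just the example route name used above):
oc get route myroute -o yaml | grep haproxy.router.openshift.io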

oc set env dc/router ROUTER_TCP_BALANCE_SCHEME=roundrobin will change the load-balancing algorithm HAProxy uses for routes it just passes through (default is source). ROUTER_LOAD_BALANCE_ALGORITHM will change it for routes where it terminates TLS (default is leastconn).
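For example, to set both variables in one go (a sketch; dc/router assumes the default router deployment config name):
oc set env dc/router ROUTER_TCP_BALANCE_SCHEME=roundrobin ROUTER_LOAD_BALANCE_ALGORITHM=roundrobin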
More info on changing the internals of how HAProxy works is in the OCP 3.5 docs.

Related

Can I use CloudFlare's Proxy with an AWS Application Load Balancer?

I have created a listener on the LB (load balancer) with rules that route requests for different subdomains. I have set a CNAME for each subdomain in Cloudflare.
The problem is with the Proxy feature: when I turn it off, my page works without a problem, but when I turn it on, requests time out.
Is there a way to use the Proxy feature with an LB?
The very simple answer is yes, this is very possible to do.
The longer answer is that a timeout indicates that some change Cloudflare makes to the request causes a timeout on your AWS servers. A couple of examples:
A firewall rule that drops traffic from Cloudflare's IPs, causing requests to time out
Servers not respecting the X-Forwarded-For header, so that all requests appear to come from a small group of Cloudflare IPs and interfere with application logic
It would help to know whether the requests are reaching your load balancer and, furthermore, whether the servers behind the LB are receiving them.
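One quick way to narrow this down is to bypass Cloudflare and hit the load balancer directly (a sketch; the ALB DNS name and hostname below are placeholders):
curl -vk -H 'Host: app.example.com' https://my-alb-1234567890.us-east-1.elb.amazonaws.com/
If that request succeeds while the proxied hostname times out, the problem most likely sits between Cloudflare and the LB, e.g. a security group or firewall rule affecting Cloudflare's IP ranges.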

Requests sent through JMeter are not distributed to different servers in a GlassFish cluster

The application server is set up as a cluster in GlassFish. I sent requests through JMeter and all the requests hit only one server. I expected the requests to be distributed across the servers in the cluster. But if I send the requests manually, clustering works. Please help me sort out this issue.
There could be different clustering load balancing mechanisms, as far as I can see from the GlassFish Server High Availability Administration Guide:
Cookie Method
The Loadbalancer Plug-In uses a separate cookie to record the route information. The HTTP client (typically, the web browser) must support cookies to use the cookie based method. If the HTTP client is unable to accept cookies, the plug-in uses the following method.
Explicit URL Rewriting
The sticky information is appended to the URL. This method works even if the HTTP client does not support cookies. To implement explicit URL rewriting, the application developer must use HttpResponse.encodeURL() and encodeRedirectURL() calls to ensure that any URLs in the application have the session information appended to them.
So depending on your load balancer configuration you need to
Either define different cookies in the HTTP Cookie Manager
Or make sure different threads send requests to different URLs, e.g. via the HTTP URL Re-writing Modifier
In any case it is recommended to add a DNS Cache Manager so that each virtual user resolves the underlying IP address of the application under test on its own.

Persistent & clustered connections with traefik reverse proxy

Let's say I have a cluster of database replicas that I would like to make available under a frontend. These databases replicate with each other. Can I have Traefik serve the same backend to the same client IP, so that the UI can be kept consistent even while the DBs are still replicating the newest state?
What you seem to be asking for is sticky sessions (aka session affinity) on a per-IP address basis.
Traefik supports cookie-based stickiness: a cookie is assigned on the initial request if the relevant Traefik option is enabled. Subsequent requests will then reach the same backend unless that backend becomes unreachable, at which point a new sticky backend is selected.
The option can be enabled like this:
[backends]
  [backends.backend1]
    [backends.backend1.loadbalancer]
      sticky = true
Documentation can be found here (search for "sticky sessions").
If you are running one of the dynamic providers with Traefik (e.g., Docker, Kubernetes, Marathon), there are usually labels/tags/annotations available you can set per-frontend. The TOML configuration file documentation contains all the details.
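With the Docker provider, for instance, the per-container equivalent would look roughly like this (a sketch for Traefik 1.x; the label name can differ between versions, and the image name and rule are placeholders):
docker run -d --label traefik.backend.loadbalancer.sticky=true --label 'traefik.frontend.rule=Host:db.example.com' my-db-frontend-image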
If you are looking for true IP address-based stickiness where the IP address space gets hashed and traffic evenly distributed across all backends: This isn't possible yet, although there's an open feature request.

What does the Amazon ELB automatic health check do and what does it expect?

Here is the thing:
We've implemented a C++ RESTful API server, with a built-in HTTP parser and no standard HTTP server like Apache or anything of the kind
It has been in use for several months on Amazon infrastructure, using both plain and SSL communications, and no problems related to Amazon's infrastructure have been identified
We are deploying our first backend using Amazon ELB
Amazon ELB has a customizable health check system but also an automatic one, as stated here
We've found no documentation of what data is sent by the health check system
The backend simply hangs on the socket read instruction and, eventually, the connection is closed
I'm not looking for a solution to the problem, since the backend is not based on a standard web server; I just want to know what kind of message is sent by the ELB health check system, since we've found no documentation about this anywhere.
Help is much appreciated. Thank you.
Amazon ELB has a customizable health check system but also an automatic one, as stated here
With customizable you are presumably referring to the health check configurable via the AWS Management Console (see Configure Health Check Settings) or via the API (see ConfigureHealthCheck).
The requirements to pass health checks configured this way are outlined in field Target of the HealthCheck data type documentation:
Specifies the instance being checked. The protocol is either TCP, HTTP, HTTPS, or SSL. The range of valid ports is one (1) through 65535.
Note
TCP is the default, specified as a TCP:port pair, for example "TCP:5000". In this case a healthcheck simply attempts to open a TCP connection to the instance on the specified port. Failure to connect within the configured timeout is considered unhealthy.
SSL is also specified as an SSL:port pair, for example, SSL:5000.
For HTTP or HTTPS protocol, the situation is different. You have to include a ping path in the string. HTTP is specified as a HTTP:port;/;PathToPing; grouping, for example "HTTP:80/weather/us/wa/seattle". In this case, a HTTP GET request is issued to the instance on the given port and path. Any answer other than "200 OK" within the timeout period is considered unhealthy.
The total length of the HTTP ping target needs to be 1024 16-bit Unicode characters or less.
[emphasis mine]
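For reference, a health check of this kind can be configured with the AWS CLI roughly as follows (a sketch; my-load-balancer and the /ping path are placeholders):
aws elb configure-health-check --load-balancer-name my-load-balancer --health-check Target=HTTP:80/ping,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2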
With automatic you are presumably referring to the health check described in paragraph Cause within Why is the health check URL different from the URL displayed in API and Console?:
In addition to the health check you configure for your load balancer, a second health check is performed by the service to protect against potential side-effects caused by instances being terminated without being deregistered. To perform this check, the load balancer opens a TCP connection on the same port that the health check is configured to use, and then closes the connection after the health check is completed. [emphasis mine]
The paragraph Solution clarifies that no payload is sent here, i.e. it is similar to the non-HTTP/HTTPS method described for the configurable health check above:
This extra health check does not affect the performance of your application because it is not sending any data to your back-end instances. You cannot disable or turn off this health check.
Summary / Solution
Assuming your RESTful API server with its built-in HTTP parser is indeed supposed to serve HTTP only, you will need to handle two health checks:
The first one you configure yourself as an HTTP:port;/;PathToPing; target - you'll receive an HTTP GET request and must answer with 200 OK within the specified timeout period to be considered healthy.
The second one is configured automatically by the service - it will open a TCP connection on the HTTP port configured above, send no data, and then close the connection after the health check is completed.
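You can reproduce the behaviour of that second check locally to make sure your server tolerates it, e.g. with netcat (a sketch; backend-host and port 80 are placeholders):
nc -z -w 3 backend-host 80
nc -z opens the TCP connection, sends nothing, and closes it again - your server should treat the resulting zero-byte read and connection close as normal rather than as an error.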
In conclusion it seems that your server might be behaving perfectly fine already and you are just irritated by the 2nd health check's behavior - does ELB actually consider your server to be unhealthy?
As far as I know it's just an HTTP GET request looking for a 200 OK HTTP response.

Difference between session affinity and sticky session?

What is the difference between session affinity and sticky session in context of load balancing servers?
I've seen those terms used interchangeably, but there are different ways of implementing it:
Send a cookie on the first response and then look for it on subsequent ones. The cookie says which real server to send to.
Bad if you have to support cookie-less browsers
Partition based on the requester's IP address.
Bad if it isn't static or if many come in through the same proxy.
If you authenticate users, partition based on user name (it has to be an HTTP supported authentication mode to do this).
Don't require state.
Let clients hit any server (send state to the client and have them send it back)
This is not a sticky session, it's a way to avoid having to do it.
I would suspect that sticky might refer to the cookie way, and that affinity might refer to #2 and #3 in some contexts, but that's not how I have seen it used (or use it myself)
As I've always heard the terms used in a load-balancing scenario, they are interchangeable. Both mean that once a session is started, the same server serves all requests for that session.
Sticky session means that when a request comes into a site from a client, all further requests go to the same server the initial client request accessed. I believe that session affinity is a synonym for sticky session.
They are the same.
Both mean that when coming in to the load balancer, the request will be directed to the server that served the first request (and has the session).
Sticky session means routing the requests of a particular session to the same physical machine that served the first request for that session.
This article clarifies the question for me and discusses other types of load balancer persistence.
Dave's Thoughts: Load balancer persistence (sticky sessions)
The difference is explained in this article:
https://www.haproxy.com/blog/load-balancing-affinity-persistence-sticky-sessions-what-you-need-to-know/
Main part from this link:
Affinity: this is when we use information from a layer below the application layer to tie a client's requests to a single server. The client's IP address is used in this case. The IP address may change during the same session, and then the connection may switch to a different server.
Persistence: this is when we use application-layer information to stick a client to a single server. In this case, the load balancer injects a cookie into the response and uses the same cookie in subsequent requests to route to the same server.
Sticky session: a sticky session is a session maintained by persistence.
The main advantage of persistence over affinity is that it is much more accurate, but sometimes persistence is not doable (when the client does not allow cookies, e.g. a cookie-less browser), so we must rely on affinity.
Using persistence, we mean that we’re 100% sure that a user will get redirected to a single server.
Using affinity, we mean that the user may be redirected to the same server…
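You can observe cookie-based persistence from the command line (a sketch; app.example.com is a placeholder and the SERVERID cookie name depends on the load balancer configuration):
curl -si http://app.example.com/ | grep -i '^set-cookie'
curl -s -b 'SERVERID=s1' http://app.example.com/ -o /dev/null
The first request shows the cookie the load balancer sets; replaying that cookie pins subsequent requests to the same backend.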
They are Synonyms.
No difference at all.
Sticky Session / Session Affinity:
The affinity/stickiness/contact between the user's session and the server to which the user's requests are sent is retained.

Resources