What are the drawbacks of session replication on Tomcat?

I am trying to decide what is better in a Tomcat + Apache reverse proxy setup: session replication or sticky sessions? Which is more common in deployments? Are there any drawbacks to session replication?
Thanks

I can point out the following considerations if you go for session replication.
Performance
The main drawback is performance. Replicated sessions involve copying session data to the other servers in the cluster, so the more servers you have, the greater the overhead.
Tomcat mitigates this overhead by defining two modes of session replication: the DeltaManager (default) and the BackupManager.
From http://tomcat.apache.org/tomcat-6.0-doc/cluster-howto.html:
Using the above configuration will enable all-to-all session replication using the DeltaManager to replicate session deltas. By all-to-all we mean that the session gets replicated to all the other nodes in the cluster. This works great for smaller clusters, but we don't recommend it for larger clusters (a lot of Tomcat nodes). Also, when using the delta manager it will replicate to all nodes, even nodes that don't have the application deployed.
To get around this problem, you'll want to use the BackupManager. This manager only replicates the session data to one backup node, and only to nodes that have the application deployed. Downside of the BackupManager: not quite as battle tested as the delta manager.
Read that page for good cluster design tips if you enable session replication.
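As a rough sketch, switching from the default DeltaManager to the BackupManager is done in the Cluster element of conf/server.xml. The snippet below is only a minimal example based on the how-to above; the attribute values are illustrative and should be checked against your Tomcat version:

    <!-- Sketch only: BackupManager-based replication in conf/server.xml. -->
    <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" channelSendOptions="8">
      <!-- Replicates each session to a single backup node instead of all-to-all -->
      <Manager className="org.apache.catalina.ha.session.BackupManager"
               expireSessionsOnShutdown="false"
               notifyListenersOnReplication="true"/>
      <!-- The default, all-to-all alternative would be:
           <Manager className="org.apache.catalina.ha.session.DeltaManager" ... /> -->
    </Cluster>

Note also that the web application itself needs to be marked <distributable/> in its web.xml before Tomcat will replicate its sessions.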
Memory
How many concurrent users will be hitting the application? The more users, the more data gets stored in sessions, and hence the more data there is to replicate.
Code considerations
Additionally, you need to ensure that the data the application puts into the session is serializable. Serializing session data adds overhead when replicating session state. It's a good idea to keep the session size reasonably small, so developers need to watch the amount of data being put into the session.
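For illustration (the class and attribute names here are invented, not from the question): everything stored in the session must implement java.io.Serializable, and it pays to keep the stored objects small:

    import java.io.Serializable;

    // A small, serializable value object that is safe to replicate.
    public class CartItem implements Serializable {
        private static final long serialVersionUID = 1L;
        private final String sku;
        private final int quantity;

        public CartItem(String sku, int quantity) {
            this.sku = sku;
            this.quantity = quantity;
        }
    }

    // In a servlet or controller: store only what you really need.
    // request.getSession().setAttribute("lastCartItem", new CartItem("ABC-123", 2));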
Sticky Sessions
Given these considerations, it really depends on how critical your use cases are. If you go for sticky sessions alone, there is a chance of losing user data during a critical journey.
Do you have a way to recover from that, e.g. by persisting critical data to the database at each step of an order or payment journey? If not, the user has to log in and start again. This is fine for websites that are not transactional, e.g. sites serving brochureware-style content or capturing form data that is not payment-related.

Related

How to avoid a single point of failure in a given distributed architecture

I went through this video - Scalability, Harvard Web Development, David Malan.
This is where I got stuck. Explaining the problem:
Let's assume the LB uses a round-robin kind of approach.
As per the first image, each server stores sessions in its own local space, which is not accessible to the other servers. If the same client's next request is sent by the LB to a different server, that server will ask for authentication again. That is very irritating from the user's point of view.
As per the second image, all servers share sessions. In this case, when the next request from the same client is routed by the LB to another server, instead of asking for authentication again it will fetch the session information from the session host.
This is mentioned in the video linked above.
Question:
Now the session host becomes a single point of failure. If that host goes down, availability is badly affected. How can we avoid this?
You have these options (assuming the session is something that cannot be lost at any cost):
1) Make the session data store a highly available data store. For example, you can use a MongoDB replica set as the session store. It consists of three MongoDB nodes (minimum), a master and two slaves; when the master goes down, one of the remaining nodes is promoted to master. The election may take a few seconds, but the session is not lost.
2) Use an in-memory data-sharing library that does data partitioning as well as replication. An example would be Hazelcast for Java. It gives you object-level sharing across the web tier, and that is where you store the shared session data. Please note that, AFAIK, there is no on-disk persistence in this case.
3) The most scalable approach I have used so far is client-side sessions with no server-side data/session storage. Store a very long secret key on each app server, encrypt the session data with that key, and put the result in a cookie. The only problem with this approach is that you need to be very selective about what you store in the session, as there is a limit on cookie size. The encryption is two-way. Most SaaS-based tools use this approach (a minimal sketch follows this list).
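A minimal sketch of option 3, assuming AES-GCM from the standard javax.crypto API; the key handling and class names are illustrative only, not a production recipe:

    import javax.crypto.Cipher;
    import javax.crypto.spec.GCMParameterSpec;
    import javax.crypto.spec.SecretKeySpec;
    import java.nio.charset.StandardCharsets;
    import java.security.SecureRandom;
    import java.util.Base64;

    public class CookieSessionCodec {
        private final SecretKeySpec key;              // the shared secret held by every app server
        private final SecureRandom random = new SecureRandom();

        public CookieSessionCodec(byte[] secretKeyBytes) {
            this.key = new SecretKeySpec(secretKeyBytes, "AES"); // 16/24/32-byte key
        }

        // Encrypt the session payload; the result becomes the cookie value.
        public String encrypt(String sessionPayload) throws Exception {
            byte[] iv = new byte[12];
            random.nextBytes(iv);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
            byte[] ciphertext = cipher.doFinal(sessionPayload.getBytes(StandardCharsets.UTF_8));
            byte[] ivAndCiphertext = new byte[iv.length + ciphertext.length];
            System.arraycopy(iv, 0, ivAndCiphertext, 0, iv.length);
            System.arraycopy(ciphertext, 0, ivAndCiphertext, iv.length, ciphertext.length);
            return Base64.getUrlEncoder().withoutPadding().encodeToString(ivAndCiphertext);
        }

        // Decrypt the cookie value back into the session payload on any server.
        public String decrypt(String cookieValue) throws Exception {
            byte[] ivAndCiphertext = Base64.getUrlDecoder().decode(cookieValue);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, ivAndCiphertext, 0, 12));
            byte[] plaintext = cipher.doFinal(ivAndCiphertext, 12, ivAndCiphertext.length - 12);
            return new String(plaintext, StandardCharsets.UTF_8);
        }
    }

Because every app server holds the same secret key, whichever server receives the next request can decrypt the cookie, so no server-side session state needs to be shared at all.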
Implementing the session host as a replicated data store helps remove the single point of failure. For example, using a replicated cache like Hazelcast keeps the cache replicated and distributed, thus eliminating the single point of failure. There are others, such as Memcached and MongoDB. Automatic failover on these can be achieved via virtual IP addresses.
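As a hedged sketch of that idea (the map name and stored values are invented): Hazelcast members that join the same cluster share a distributed map, so session data written on one node is readable on the others even if a node dies:

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import java.util.Map;

    public class SharedSessionStore {
        public static void main(String[] args) {
            // Each web server embeds (or connects to) a Hazelcast member;
            // members discover each other and partition/replicate the data.
            HazelcastInstance hz = Hazelcast.newHazelcastInstance();

            // A cluster-wide map keyed by session id; no single host owns all of it.
            Map<String, String> sessions = hz.getMap("sessions");
            sessions.put("session-123", "userId=42;role=customer");

            // Any other node in the cluster can now read the same entry.
            System.out.println(sessions.get("session-123"));

            hz.shutdown();
        }
    }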
For this exact reason, session hosts (e.g. memcached) are usually fronted with a VIP (virtual IP) and run on more than one host. In a distributed architecture you generally want to have 1-N hosts. Most companies that operate at scale generally use data stores like Couchbase (memcached buckets) for session state because they are fast, redundant, and highly scalable.

Why do I need to support sticky sessions in Jetty when storing sessions in a data store?

I'm looking to store sessions in an external data store instead of in memory. I want to do this in order to have session data available to all my Jetty server instances.
So I read the Jetty docs:
Session Clustering with a Database
Session Clustering with MongoDB
Both docs have the following statement:
The persistent session mechanism works in conjunction with a load balancer that supports stickiness.
My question is: why do I need stickiness in this mechanism? The whole point is to allow any server in the cluster to serve any request, because the session data is stored externally.
The problem is that the servlet specification does not provide any transactional semantics around sessions. Thus, unless your sessions are sticky, it's possible for concurrent requests involving the same session to go to multiple servers and produce inconsistent results, depending on the interleaving of the writes. If your sessions are read-mostly you may get away with non-sticky sessions, although you may find that sessions time out a little sooner or a little later than you'd expect, because multiple servers are trying to manage the same session.
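To make the interleaving problem concrete, here is a small, self-contained sketch (the attribute name and values are invented) of the classic lost update that can occur when two non-sticky requests land on two servers that both load, modify, and write back the same session:

    import java.util.HashMap;
    import java.util.Map;

    public class LostUpdateDemo {
        public static void main(String[] args) {
            // The external session store holds one attribute: a page-view counter.
            Map<String, Integer> sessionStore = new HashMap<>();
            sessionStore.put("views", 5);

            // Two concurrent requests for the same session land on two servers.
            int loadedByServerA = sessionStore.get("views"); // server A reads 5
            int loadedByServerB = sessionStore.get("views"); // server B also reads 5

            // Each server increments its local copy and writes it back.
            sessionStore.put("views", loadedByServerA + 1);  // store now holds 6
            sessionStore.put("views", loadedByServerB + 1);  // still 6: one increment is lost

            // With sticky sessions both requests would have hit one server,
            // which would have applied the increments sequentially (5 -> 6 -> 7).
            System.out.println("views = " + sessionStore.get("views")); // prints 6, not 7
        }
    }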

Alternative to session replication / Tomcat clustering

We have 3 Tomcats running the same web app and using the same DB.
We want to use non-sticky sessions.
This means we will have to share (replicate) the session between the Tomcats (a cluster?).
We don't like the idea of the DeltaManager, since it does all-to-all replication with a performance cost.
However, we don't really like the BackupManager either (still multiple copies).
My question is:
Is it possible to define a single Tomcat that will act as a "session manager", so that the other Tomcats don't keep sessions themselves?
This way no broadcasting of sessions would be needed...
My reading of the Tomcat docs finds:
... when using the delta manager it will replicate to all nodes, even nodes that don't have the application deployed.
exactly as you say, but then says:
To get around this problem, you'll want to use the BackupManager. This manager only replicates the session data to one backup node
You seem to object to "multiple copies", but this doesn't seem very different from your proposed suggestion: the BackupManager is, so far as I can see, acting as a session manager.
When you don't have sticky sessions you are pretty much guaranteeing that 2 of every 3 requests will need to get a copy of the session data from somewhere else. With only 3 Tomcats, how much performance cost would all-to-all replication really impose?
I suspect that tuning your session sizes is more important. Large sessions tend to be a problem for any sort of replication.

Pros & Cons of Session Replication

Do I really need Session Replication?
I am working on a number of web projects for a firm. Most of the projects are about one or two pages of input followed by a save to a MySQL database. Very basic projects. My SAs are pushing to get session replication working in JBoss, but I don't really see any need for it and all of its overhead.
We need load balancing and clustering so that if a server goes down we can move new requests to the backup service, but I am not too keen on session replication.
These are very low-volume projects. In my eyes, what are the odds of a user being in the middle of those one or two pages at the exact moment the server goes down?
I need to convince the SAs that session replication is an unnecessary complication in this instance. I am looking for the pros and cons of session replication so that I can better structure my argument.
Well, the "pro" is that you have session failover, either in deliberate cluster member restarting or in inadvertent cluster-member failure. That's it.
Some of the "cons" are:
Session objects and their included objects have to be Serializable
You have to choose Session persistence or replication and manage their configurations and/or datastore
You have to think about Session persistence/replication policies (e.g. every write, request end, time scheduled) and still risk losing the session or losing the most current state of it if a failure occurs before recent changes have been stored/replicated
Non-zero performance impact of replicating or persisting, inversely related to how robust the replication policy is. (That is, the more likely it is that every session change gets replicated promptly, the worse the performance.)
We do session replication because we considered failover to be an absolute requirement years ago when we started this, but I think if I had it to do over again I'd suggest we don't bother for the majority of our applications.

Spring + Load balancing/Clustering

I am working on a webapp project and we are considering deploying it on multiple servers.
What solution do you advise for clustering/load-balancing with Spring?
What are the issues to take into account?
For example: How do singletons behave in a cluster of machines? What about session replication? Are there any other issues to take into account?
Here is the list of possible issues (not necessarily Spring-related):
stateful beans - if your beans have state, like collections accumulating something, or counters, you need to think about whether this state should be replicated or not. E.g. should a counter be local to one JVM or global across the whole cluster? In the latter case consider Terracotta or Hazelcast (see the sketch after this list)
filesystem - as long as all instances use the same database, everything is fine. But if one node writes to its local disk, the other instances can't read the file. Solutions? Either use the database for all storage or a distributed file system
HTTP sessions - either use sticky sessions or replicate sessions. If you go for replication, keep sessions as small as possible.
asynchronous jobs - if you have a job running every hour, should it run on every machine, or just on a dedicated one (or maybe on a random one)?
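For the first point, a hedged sketch of a cluster-wide counter (assuming Hazelcast 4+, where IAtomicLong lives in the CP subsystem; the counter name is illustrative):

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.cp.IAtomicLong;

    public class ClusterWideCounter {
        public static void main(String[] args) {
            HazelcastInstance hz = Hazelcast.newHazelcastInstance();

            // A plain field or java.util.concurrent.atomic.AtomicLong would be
            // local to one JVM; this counter is shared by every cluster member.
            IAtomicLong pageHits = hz.getCPSubsystem().getAtomicLong("pageHits");
            long current = pageHits.incrementAndGet();
            System.out.println("cluster-wide page hits: " + current);

            hz.shutdown();
        }
    }

Whether you want the local or the cluster-wide behaviour is exactly the design decision the first bullet asks you to make per piece of state.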
