I have a Spring 2.5.6/Flex application set up and running with Spring Security 2.0.4. Recently a load balancer (a Foundry ServerIron 4G, http://www.foundrynet.com/products/a...ems/si-4g.html) was put into place, and now I am getting cross-domain errors. Basically, the load balancer fires off a request to myloadbalancer.abc.com, but myrealserver1.abc.com is being returned as the domain name. Spring Security is somehow forwarding the request to the real server. How can I get around this?
Also, the ConcurrentSessionFilter is no longer working. The application is set up to disable concurrent logons, but this functionality stopped working after the application was put behind the load balancer. I believe there are multiple Oracle Application Servers being clustered together as well. I have never dealt with clustering or load balancers before, and I wasn't aware that the software had to be written differently in certain areas.
These sound like separate issues to me, but I need help for both.
Concerning your second problem:
If the ConcurrentSessionFilter stopped working (i.e. does not prevent concurrent sessions anymore), that could be due to clustered application containers with sticky sessions.
In such a setup, each of the cluster's nodes works independently and doesn't share state with other nodes. Instead, the load balancer makes sure that existing sessions will always be served by the same node.
Now Spring Security's ConcurrentSessionController works by mapping sessions to principals. The controller itself relies on the HttpSessionEventPublisher sending ApplicationEvents on start and termination of user sessions.
Everything will work fine if someone intending to open more than one session ends up on the same node where he already has a session open: HttpSessionEventPublisher informs the concurrent session mechanism of the new session's creation, and authentication fails because there is already a session associated with this user. On a different node, however, there is no session for that user yet, so the ConcurrentSessionController does not complain and login succeeds.
Fortunately, solving the problem should be easy: Implement your own SessionRegistry and use a shared data store for all nodes (e.g. the application's database).
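As a rough sketch of that idea (the ACTIVE_SESSIONS table, the class name, and its methods below are made up; in practice your SessionRegistry implementation would delegate to something like this instead of keeping its session/principal maps in local memory), the shared store can be a small JDBC-backed helper that every node reads and writes:

    import java.util.List;
    import javax.sql.DataSource;
    import org.springframework.jdbc.core.JdbcTemplate;

    /**
     * Minimal sketch of a session store shared by all cluster nodes.
     * Table (hypothetical): ACTIVE_SESSIONS(SESSION_ID PK, PRINCIPAL_NAME, LAST_REQUEST)
     */
    public class JdbcSessionStore {

        private final JdbcTemplate jdbcTemplate;

        public JdbcSessionStore(DataSource dataSource) {
            this.jdbcTemplate = new JdbcTemplate(dataSource);
        }

        /** Record a new session when HttpSessionEventPublisher reports its creation. */
        public void registerSession(String sessionId, String principalName) {
            jdbcTemplate.update(
                "INSERT INTO ACTIVE_SESSIONS (SESSION_ID, PRINCIPAL_NAME, LAST_REQUEST) VALUES (?, ?, CURRENT_TIMESTAMP)",
                new Object[] { sessionId, principalName });
        }

        /** All session ids currently held by this principal, regardless of node. */
        public List findSessionIds(String principalName) {
            return jdbcTemplate.queryForList(
                "SELECT SESSION_ID FROM ACTIVE_SESSIONS WHERE PRINCIPAL_NAME = ?",
                new Object[] { principalName }, String.class);
        }

        /** Remove the row when the session is destroyed or expires. */
        public void removeSession(String sessionId) {
            jdbcTemplate.update(
                "DELETE FROM ACTIVE_SESSIONS WHERE SESSION_ID = ?",
                new Object[] { sessionId });
        }
    }

Because every node consults the same table before allowing a login, the concurrent-session check works regardless of which node the request lands on.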
I have a design question about Ignite web session clustering.
I have a Spring Boot app with a UI. It is a clustered app, i.e. multiple instances of the Spring Boot app behind a load balancer. I am using org.apache.ignite.cache.websession.WebSessionFilter() to intercept requests and create/manage a session for any incoming request.
I have 2 options:
Embed the Ignite node inside the Spring Boot app, so that these embedded Ignite nodes (one on each Spring Boot JVM) form the cluster. This way the request session is replicated across the entire Spring Boot cluster. On the load balancer I don't have to maintain sticky connections; a request can go to any app node using a round-robin or least-load algorithm.
A few considerations:
The architecture is simple. I don't have to worry about the cache being down, etc.
With the cache embedded, it uses CPU and memory from the app JVM, so it has the potential to starve my app of resources.
Have the Ignite cluster running outside of the app JVMs. In this case I run a client node in the Spring Boot app and connect to the main Ignite cluster.
A few considerations:
If for any reason the client node cannot connect to the main Ignite cluster, do I have to manage the sessions manually and then push those sessions to the Ignite cluster at a later point?
If I manage sessions locally, I will need sticky connections on the load balancer, which I want to avoid if possible.
I am leaning towards approach 2, but want to keep it simple: if the client node cannot create a session (by overriding org.apache.ignite.cache.websession.WebSessionFilter()), it redirects the user to a page indicating the app is down, or to another app node in the cluster.
Are there any other design approaches I can take?
Am I overlooking anything in either approach?
If you have dealt with it, please share your thoughts.
Thanks in advance.
Shri
If you have a local cache for sessions and sticky sessions, why do you need to use Ignite at all?
However, it's better to go with Ignite: your app will have HA, so if some node fails, the whole app will still work fine.
I agree you should split the app cluster and the Ignite cluster; however, I think you shouldn't worry about server/client connection problems.
This kind of problem should lead to a 500 error. Would you emulate your main storage if your DB went down or you couldn't connect to it?
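For approach 2, starting the client node inside each Spring Boot instance could look roughly like this (the host names below are placeholders, and the session cache itself still lives on the external server nodes that WebSessionFilter talks to):

    import java.util.Arrays;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

    public class IgniteClientStarter {

        /**
         * Starts an Ignite client node that joins the external Ignite cluster.
         * The addresses are placeholders for your real cluster nodes.
         */
        public static Ignite startClient() {
            TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
            ipFinder.setAddresses(Arrays.asList("ignite-node-1:47500..47509",
                                                "ignite-node-2:47500..47509"));

            TcpDiscoverySpi discovery = new TcpDiscoverySpi();
            discovery.setIpFinder(ipFinder);

            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setClientMode(true);      // join the cluster as a client, not a data node
            cfg.setDiscoverySpi(discovery);

            return Ignition.start(cfg);
        }
    }

If Ignition.start() fails because the cluster is unreachable, letting the request fail (a 500, or your "app is down" page) is simpler and more honest than trying to buffer sessions locally.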
I'm looking to store sessions in an external data store instead of in memory. I want to do this in order to have session data available to all my Jetty server instances.
So I read the Jetty docs:
Session Clustering with a Database
Session Clustering with MongoDB
Both docs have the following statement:
The persistent session mechanism works in conjunction with a load balancer that supports stickiness.
My question is: why do I need stickiness in this mechanism? The whole point is to allow any server in the cluster to serve any request, because the session data is stored externally.
The problem is that the servlet specification does not provide any transactional semantics around sessions. Thus, unless your sessions are sticky, it's possible for concurrent requests involving the same session to go to multiple servers and produce inconsistent results, depending on the interleaving of the writes. If your sessions are read-mostly you may get away with non-sticky sessions, although you may find that sessions time out a little sooner or a little later than you'd expect, due to having multiple servers trying to manage the same session.
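To make the interleaving concrete, the hypothetical servlet below does a plain read-modify-write on a session attribute. With sticky sessions, concurrent requests for the same session land on the same node and at least share one in-memory session object; without stickiness, two nodes can each load the same stored value from the external store, both write back the incremented value, and one update is silently lost.

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    /** Hypothetical servlet: counts requests per session. */
    public class CounterServlet extends HttpServlet {

        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            HttpSession session = req.getSession(true);

            // Read-modify-write with no transactional guarantees.
            Integer count = (Integer) session.getAttribute("count");
            int next = (count == null) ? 1 : count + 1;
            session.setAttribute("count", next);

            // Without stickiness, node A and node B can each read "count" = 5
            // from the external store, both write 6, and one increment is lost.
            resp.getWriter().println("count=" + next);
        }
    }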
To be more specific, I'm studying sessions, and I'm reading about the <distributable> tag in the deployment descriptor (for example). The text states,
"...it is possible - for the sake of load balancing of fail-over or both - to mark a web application as distributable, if it supported by your application server."
Can someone provide a little more info/context? If possible, I don't need a full background on how the mechanism works (I'm studying for the Web Components exam), just enough to understand in the context of sessions.
Thanks!
Here are some useful lines,
If an application is run in a cluster without being marked as distributable, session changes will only occur on a single JVM. Therefore, when the user connects to one of the other JVM's, their session will not be recognised, and a new session will be created. This may force them to log in again, establishing a 2nd session on the other JVM. As they switch between the two servers, various other problems may arise.
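One practical consequence in the context of sessions: when an app is marked as distributable, the container may need to copy or move sessions between JVMs, so objects you put into the session should implement java.io.Serializable (the container is allowed to reject non-serializable attributes in a distributable app). A small sketch, with a made-up attribute class:

    import java.io.Serializable;
    import javax.servlet.http.HttpSession;

    /**
     * Session attributes in a distributable web app should be Serializable so the
     * container can replicate them to, or migrate them between, other JVMs.
     */
    public class ShoppingCart implements Serializable {

        private static final long serialVersionUID = 1L;

        private int itemCount;

        public void addItem() {
            itemCount++;
        }

        public int getItemCount() {
            return itemCount;
        }

        /** In a distributable app the container may reject attributes that are
            not Serializable, so store only Serializable objects like this one. */
        public static void storeInSession(HttpSession session, ShoppingCart cart) {
            session.setAttribute("cart", cart);
        }
    }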
I have a web application that does most of its logged-in-user-dependent actions based on a persistent cookie that maps to a database record, such that each ajax POST does not need or want a Tomcat session.
In a very small number of ajax requests, I have a small amount of servlet session data to save for the logged in user, and I want the session to persist for a very long time (as long as he has the browser open), even if there is a long period of inactivity.
Now, my understanding is that there are sticky sessions or replicated sessions for a tomcat cluster, your choice. In most cases, I would want my load balancer to send the traffic to the least loaded tomcat instance, and the servlet never gets or creates a session. In a very few cases, I need a session and access to the small amount of session data.
Also, I am using Apache mod_proxy. Does this constrain the choice?
If I were to choose a sticky session load balancing, then the vast majority of my ajax requests that don't need to be sticky would go to the same tomcat server anyway. Yet some have said that sticky sessions give better performance if you are not worried about failover.
Can someone tell me what the right choice is in my case?
One idea that I had was that whenever I create a session in Tomcat, I also create a MYSESSIONID cookie, scoped to just one particular servlet (path), set to the same value as the Tomcat session id. Then, in all of my very few servlet requests that require access to the session data, I go through this one routing servlet, and the load balancer can create a sticky session tied to the MYSESSIONID cookie. Is this a good solution?
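Roughly, what I have in mind is something like this (the servlet name and path are made up):

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.Cookie;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    /** Hypothetical routing servlet that all session-dependent ajax calls go through. */
    public class SessionRoutingServlet extends HttpServlet {

        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            HttpSession session = req.getSession(true);

            // Mirror the Tomcat session id into a cookie that is only sent for this
            // servlet's path, so the load balancer can make just these requests sticky.
            Cookie routing = new Cookie("MYSESSIONID", session.getId());
            routing.setPath("/myapp/sessionRouting");
            resp.addCookie(routing);

            // ... handle the small amount of session data here ...
        }
    }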
Andy
A session is global to a web app. It isn't tied to a particular servlet in the webapp. Your routing servlet doesn't make much sense.
If you're not worried about failover, then sticky sessions are easier. If you need clustering, this probably means that you have an awful lot of concurrent users, so on average the load should be similar on all the servers anyway.
On the other hand, if you have very little data in the session, and your application modifies it rarely, replicating the session should not cost much. You would have failover as an added advantage, and the load balancer could use a pure round-robin algorithm, making sure that each server gets the same number of requests as the others.
I have several tomcat instances running in physically independent machines.
I want to configure Tomcat to share sessions between these instances.
I have tried to configure org.apache.catalina.session.PersistentManager from http://tomcat.apache.org/tomcat-6.0-doc/config/manager.html. But I only see the session file when I shut down the Tomcat instances, and I don't know whether the instances are sharing this session. I think not, because it doesn't make sense for Tomcat to write out the session only on shutdown.
The other thing that I found is the cluster how-to, but I can't do that because the machines can't see each other. They only share a storage path.
Another thing I think I could do is implement a Manager myself, but that seems a little bit tricky.
I have to add that I am using Tomcat for deploying Grails WAR files and I am using the Grails session. I think it has something to do with Spring.
So, the question is: what do you think is the best way to accomplish this task more effectively? Or maybe I am missing something? Can you give me any pointers?
You have the F5 BigIP doing the load balancing in front of the Tomcat servers, so it will handle the session ID for you by sending you back to the correct Tomcat server. Use the sticky round-robin algorithm.
As per the use case in your comments:
What I am trying to do is to save some data in the session, then redirect to a login server, which in a success scenario redirects back to my servers. My concern is what happens if the load balancer redirects the request to a server that hasn't previously saved the needed data in its session. Maybe sticky sessions are what I need. So: can I configure sticky sessions in a non-Tomcat-cluster environment?
On successful login, you redirect back first to the BigIP. It will pick up the session ID from the browser and send you to the correct Tomcat, and you should be able to retrieve the session data.
If not, it looks like you need to store the "sessionID" itself against some "user ID" in a database, but that's a bad design. I think the former should work.
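In code, that flow looks roughly like this (URLs, servlet names, and attribute names are all made up); the key point is that the browser keeps sending its session cookie when the login server redirects it back, so the sticky BigIP routes the callback to the same Tomcat that stored the data:

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    /** Hypothetical servlet that saves data and hands off to the external login server. */
    public class LoginRedirectServlet extends HttpServlet {

        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            HttpSession session = req.getSession(true);

            // 1. Save the data we will need after login in the session on this Tomcat.
            session.setAttribute("returnData", req.getParameter("target"));

            // 2. Redirect to the external login server; on success it redirects the
            //    browser back to /afterLogin on our side.
            resp.sendRedirect("https://login.example.com/sso?callback=https://myapp.example.com/afterLogin");
        }
    }

    /** Hypothetical callback the login server redirects back to on success. */
    class AfterLoginServlet extends HttpServlet {

        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            HttpSession session = req.getSession(false);

            // This only finds the data if the load balancer routed us back to the
            // Tomcat that stored "returnData" - exactly what stickiness guarantees.
            String returnData = (session == null) ? null : (String) session.getAttribute("returnData");
            resp.getWriter().println("returnData=" + returnData);
        }
    }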