I have two instances of Oracle Application Server (OAS) clustered together and replicating sessions. Whenever I terminate one of the instances by killing the process, the other instance picks up the session and everything works as expected. However, if I gracefully shut down one instance (using opmn stopall), HttpSessionDestroyedEvent events are fired and session information is deleted, so the application does not fail over gracefully. This is my first experience with a clustered environment, and I am curious whether this is common. I know and expect that HttpSessionDestroyedEvent events are fired in a non-clustered environment when the server instance is stopped, but it just doesn't seem correct here. How would one perform any kind of maintenance on one server? I am using the Spring Framework, which is where the HttpSessionDestroyedEvent event comes from.
It seems that this is a common problem with clustering and web servers. Basically, when a single node belonging to a cluster is gracefully shut down, that node fires session destroyed events for all of the sessions it holds, even if other nodes in the cluster are still up and running. Here are a few more links that describe the same problem I am having.
Tomcat Issues
JBoss Issues
A workaround is to load a properties file (see the JBoss link) that contains a shutdown flag wherever you listen for a session destroyed event, and skip the cleanup when the flag indicates a planned shutdown; a sketch follows below. One drawback is that the system admin has to remember to update the properties file before and after the restart.
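For illustration, here is a minimal sketch of that workaround as a Spring ApplicationListener. The flag file path, the maintenance.shutdown property, and the listener class are all invented for the example, and the HttpSessionDestroyedEvent package differs between Spring Security versions (the 3.x location is shown).

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

import javax.servlet.http.HttpSession;

import org.springframework.context.ApplicationEvent;
import org.springframework.context.ApplicationListener;
// Package differs by Spring Security version; 3.x location shown here.
import org.springframework.security.web.session.HttpSessionDestroyedEvent;

// Hypothetical listener: skips session cleanup when a "planned shutdown" flag is set
// in an external properties file that the admin edits before stopping the node.
public class SessionCleanupListener implements ApplicationListener {

    private static final String FLAG_FILE = "/etc/myapp/cluster.properties"; // assumed path

    public void onApplicationEvent(ApplicationEvent event) {
        if (!(event instanceof HttpSessionDestroyedEvent)) {
            return;
        }
        if (isPlannedShutdown()) {
            // Node is being taken down for maintenance; another cluster member still
            // holds a replica of this session, so leave its application data alone.
            return;
        }
        HttpSession session = (HttpSession) event.getSource();
        cleanUp(session.getId()); // the user's session really ended: clean up
    }

    private boolean isPlannedShutdown() {
        Properties props = new Properties();
        try {
            FileInputStream in = new FileInputStream(FLAG_FILE);
            try {
                props.load(in);
            } finally {
                in.close();
            }
        } catch (IOException e) {
            return false; // no flag file: treat as a real session end
        }
        return Boolean.parseBoolean(props.getProperty("maintenance.shutdown", "false"));
    }

    private void cleanUp(String sessionId) {
        // application-specific cleanup (release locks, delete temp files, ...)
    }
}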
Related
I am new to caching mechanisms and just started learning about Hazelcast. I have gone through a couple of tutorials and the Hazelcast site, but I am still not clear.
I am trying to build caching for my Spring Boot & Angular application. It is a single standalone application.
So in my case, since my application is a single instance and there is no plan to run multiple instances, can I just go with a Hazelcast member without a client? Is a client needed?
No, the client is not mandatory, and for your case it would seem unnecessary.
The idea is abstraction: you ask Hazelcast for item X, and it is returned if it exists. Hazelcast works out where that item is held, and mostly this is hidden from you.
X could be found in your process:
Your process is a client, has near-caching active, and has a copy.
Your process is one of one or more servers, and happens to be the one responsible for storing item X.
X could be found in another process:
Your process is a client with no near-caching, so it is not storing anything.
Your process is one of several servers, and it happens that one of the other servers is responsible for item X.
"Mostly this is hidden from you" == There will be a retrieval time difference between data found in the same process and data retrieved from another process, as it has to pass across the network. If this is a significant difference at low volumes, it's time to upgrade the network.
We are trying to replicate the WebSphere Traditional (5/6/7/8/9) behaviour regarding session persistence for servlets and HTTP, but with Hazelcast and Tomcat. Let me explain...
WebSphere, even when configured as a client to a replication domain, keeps a local copy of the session data. And this local copy works fine even if the server processes that should hold the replicated data are shut down from the very first moment. That is, you start the client, and session persistence works within the servlet container. Obviously, you cannot expect to recover your session in another servlet container if the first one crashes, but your applications work anyway.
On the other hand, the Hazelcast client on Tomcat containers expects the Hazelcast server (at least one member of the cluster) to be up and running in order to initialize. If no cluster member is available, initialization fails and the web applications in the Tomcat servlet container do not start correctly: they won't answer any request.
Furthermore, once initialization fails, the only way to recover is to shut down and restart the Tomcat web containers (once a Hazelcast cluster member is online).
This behaviour is a bit harsh on system administrators: no one can guarantee that a backing service such as distributed session persistence is online at all times. That means launching a Tomcat client becomes a risky task, with a single point of failure by design, which is undesirable.
Now, maybe I overlooked something, maybe I got something wrong. So, did anyone ever manage to start a Hazelcast client without servers, and how? For us, the difference is decisive: if we cannot make the web container start with the Hazelcast server offline, then we must keep going with WebSphere.
We have been trying it on CentOS 7.5 on VirtualBox 5.2.22, and our Tomcat version is 8.5. The Hazelcast client and server are 3.11.1/2.
<group>
<name>Integracion</name>
<password></password>
</group>
<network>
<cluster-members>
<address>hazelcastsrv1</address>
<address>hazelcastsrv2</address>
</cluster-members>
</network>
Sadly, we expect exactly what we get: our reading of the Hazelcast manual suggests that offline servers won't allow Tomcat to serve the applications. But we can hardly believe what we read, because it would make the library unsafe in a distributed context. We expect to be wrong, and that there is good news around the corner.
Hazelcast is not "a single point of failure by design". The design is to avoid a single point of failure. Data is mirrored across the nodes by default.
It's a data grid, you run as many nodes as capacity and resilience requires, and they cluster together.
If you need 3 nodes to be up for successful operations, and also anticipate that 1 might go down, then you need to run 4 in total. Should that 1 failure happen, you have a cluster surviving that is big enough.
Power-on/power-off order is not relevant in Hazelcast, as long as you give the remaining nodes, during power-off, enough time to let repartitioning complete. For example, in a 4-node cluster, if you take out 1 node and give the other 3 room to complete repartitioning, you don't lose any data. If you take out 2 nodes together, the cluster will lose the entries whose primary copy was on one of those nodes and whose only backup was on the other.
For starting up, the startup sequence is not relevant either, as each node owns a certain set of partitions determined by consistent hashing. And this ownership keeps changing as nodes leave or join a running cluster.
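To make the backup and repartitioning behaviour concrete, here is a hedged sketch using the Hazelcast 3.x embedded API (the map name and key are invented): data written through one member survives that member's graceful shutdown, because the default backup count of 1 leaves a copy on the other member.

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class RepartitioningDemo {
    public static void main(String[] args) {
        // Two members form a cluster; by default each partition has one backup,
        // so every entry lives on both members.
        HazelcastInstance member1 = Hazelcast.newHazelcastInstance();
        HazelcastInstance member2 = Hazelcast.newHazelcastInstance();

        IMap<String, String> sessions = member1.getMap("sessions");
        sessions.put("user-42", "some session state");

        // Graceful shutdown gives the cluster time to promote backups and repartition.
        member1.shutdown();

        // The surviving member still serves the data.
        System.out.println(member2.getMap("sessions").get("user-42"));

        member2.shutdown();
    }
}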
I am working on a webapp project and we are considering deploying it on multiple servers.
What solution do you advise for clustering/load-balancing with Spring?
What are the issues to take into account?
For example: How do singletons behave in a cluster of machines? What about session replication? Are there any other issues to take into account?
Here is the list of possible issues (not necessarily Spring-related):
stateful beans - if your beans have state, like collections accumulating something or counters, you need to think about whether this state should be replicated or not. E.g. should this counter be local to one JVM or global across the whole cluster? In the latter case consider Terracotta and Hazelcast (see the sketch after this list)
filesystem - as long as all instances use the same database, everything is fine. But if one node writes to disk, another instance can't read it. Solutions? Either use the database for all storage or a distributed file system
HTTP sessions - either use sticky sessions or replicate sessions. If you go for replication, keep sessions as small as possible.
asynchronous jobs - if you have a job running every hour, should it run on every machine, or just on a dedicated one (or maybe a random one)?
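For the counter case mentioned in the first item, here is a minimal sketch of the "global in the whole cluster" option using Hazelcast's IAtomicLong (3.x API; the counter name and class are invented):

import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IAtomicLong;

// Each node runs this; the counter value is shared by the whole cluster,
// whereas a plain java.util.concurrent.atomic.AtomicLong would stay local to one JVM.
public class PageViewCounter {

    private final IAtomicLong counter;

    public PageViewCounter(HazelcastInstance hazelcast) {
        this.counter = hazelcast.getAtomicLong("page-views"); // assumed counter name
    }

    public long increment() {
        return counter.incrementAndGet(); // cluster-wide, not per-node
    }
}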
I have a .NET 1.1 application hosted on two different servers, but on one of them whenever the application pool is recycled, all sessions are dropped.
Both applications are using “StateServer” session mode and as far as I could tell, both servers have exactly the same configuration and have the “ASP .NET State Server” service running.
This is a particularly troublesome issue, due to the fact that this application pool is recycling every 2-3 hours (that’s another issue that I have to solve).
Does anyone have any idea of what might be causing this?
Thanks in advance,
Zeon
Monitor the number of active user sessions in each State Server instance with the performance counter "State Server Sessions Active" to (a) ensure that both servers are using State Server and (b) see that this really is caused by the app pool recycling.
It's a bit unclear from your question whether these two apps share session state or are two different applications; that could be important for the solution.
I have a Spring 2.5.6/Flex application set up and running with Spring Security 2.0.4. Recently a load balancer (a Foundry ServerIron 4g http://www.foundrynet.com/products/a...ems/si-4g.html) was put into place, and now I am getting cross domain errors. Basically the load balancer is firing off a request to myloadbalancer.abc.com and myrealserver1.abc.com is being returned as the domain name. Spring Security is forwarding the request to the real server somehow. How can I get around this?
Also the ConcurrentSessionFilter is no longer working. The application is set up to disable concurrent logons, but this functionality stopped after the application was put behind the load balancer. I believe there are multiple Oracle Application Servers being clustered together as well. I have never dealt with clustering or load balancers before and I wasn't aware that the software had to be written differently in certain areas.
These sound like separate issues to me, but I need help for both.
Concerning your second problem:
If the ConcurrentSessionFilter stopped working (i.e. does not prevent concurrent sessions anymore), that could be due to clustered application containers with sticky sessions.
In such a setup, each of the cluster's nodes works independently and doesn't share state with other nodes. Instead, the load balancer makes sure that existing sessions will always be served by the same node.
Now Spring Security's ConcurrentSessionController works by mapping sessions to principals. The controller itself relies on the HttpSessionEventPublisher sending ApplicationEvents on start and termination of user sessions.
Everything will work fine if someone intending to open more than one session ends up on the same node where they already have a session open: HttpSessionEventPublisher informs the concurrent session mechanism of the session's creation, and authentication fails because there is already a session associated with this user. On a different node, however, there is no session for that user yet, so the ConcurrentSessionController does not complain and the login succeeds.
Fortunately, solving the problem should be easy: Implement your own SessionRegistry and use a shared data store for all nodes (e.g. the application's database).
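As a rough sketch of that direction, assuming a shared active_sessions table (the table, its columns, and the helper class are invented; a real implementation would plug this into the SessionRegistry interface of your Spring Security version, whose package and method signatures differ between releases):

import java.sql.Timestamp;
import java.util.List;

import javax.sql.DataSource;

import org.springframework.jdbc.core.JdbcTemplate;

// Hypothetical shared-store backing for a custom SessionRegistry. Because every node
// reads and writes the same table, the concurrent-session check also sees sessions
// that were opened on other nodes behind the load balancer.
public class JdbcSessionStore {

    private final JdbcTemplate jdbc;

    public JdbcSessionStore(DataSource dataSource) {
        this.jdbc = new JdbcTemplate(dataSource);
    }

    public void registerNewSession(String sessionId, String username) {
        jdbc.update("insert into active_sessions (session_id, username, last_request) values (?, ?, ?)",
                new Object[] { sessionId, username, new Timestamp(System.currentTimeMillis()) });
    }

    public void refreshLastRequest(String sessionId) {
        jdbc.update("update active_sessions set last_request = ? where session_id = ?",
                new Object[] { new Timestamp(System.currentTimeMillis()), sessionId });
    }

    public void removeSession(String sessionId) {
        jdbc.update("delete from active_sessions where session_id = ?",
                new Object[] { sessionId });
    }

    public List<String> findSessionsForUser(String username) {
        // Used by the concurrent-session check to count a user's open sessions cluster-wide.
        return jdbc.queryForList("select session_id from active_sessions where username = ?",
                new Object[] { username }, String.class);
    }
}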