Tomcat Hazelcast Session Store Session Attributes Vanishing

I am trying to set up a Tomcat cluster on AWS, and since AWS does not support IP multicast, one of the options is Tomcat clustering using a database.
That is well understood; however, because of the performance penalty of DB calls, I am currently considering Hazelcast as the session store. The stock Hazelcast filter approach does not work out for me, because other filters on the web app interfere with it. A cleaner approach is to configure a PersistentManager with a custom store implementation and set it up in tomcat/conf/context.xml; the configuration section is provided below:
<Manager className="org.apache.catalina.session.PersistentManager"
         distributable="true"
         maxActiveSessions="-1"
         maxIdleBackup="2"
         maxIdleSwap="5"
         processingTime="1000"
         saveOnRestart="true"
         maxInactiveInterval="1200">
    <Store className="com.hm.vigil.platform.session.HC_SessionStore"/>
</Manager>
The sessions are being saved in the Hazelcast instance, and the trace from Tomcat is below:
---------------------------------------------------------------------------------------
HC_SessionStore == Saving Session ID == C19A496F2BB9E6A4A55E70865261FC9F SESSION == StandardSession[C19A496F2BB9E6A4A55E70865261FC9F]
SESSION ATTRIBUTE :: USER_IDENTIFIER :: 50
SESSION ATTRIBUTE :: APPLICATION_IDENTIFIER :: APPLICATION_1
SESSION ATTRIBUTE :: USER_EMAIL :: x#y.com
SESSION ATTRIBUTE :: USER_ROLES :: [PLATFORM_ADMIN, CLIENT_ADMIN, PEN_TESTER, USER]
SESSION ATTRIBUTE :: CLIENT_IDENTIFIER :: 1
---------------------------------------------------------------------------------------
03-Nov-2015 15:12:02.562 FINE [ContainerBackgroundProcessor[StandardEngine[Catalina]]] org.apache.catalina.session.PersistentManagerBase.processExpires End expire sessions PersistentManager processingTime 75 expired sessions: 0
03-Nov-2015 15:12:02.563 FINE [ContainerBackgroundProcessor[StandardEngine[Catalina]]] org.apache.catalina.session.PersistentManagerBase.processExpires Start expire sessions PersistentManager at 1446543722563 sessioncount 0
03-Nov-2015 15:12:02.577 FINE [ContainerBackgroundProcessor[StandardEngine[Catalina]]] org.apache.catalina.session.PersistentManagerBase.processExpires End expire sessions PersistentManager processingTime 14 expired sessions: 0
The above trace is from the 'save' method overridden by the store implementation; the code is provided below:
@Override
public void save(Session session) throws IOException {
    //System.out.println("HC_SessionStore == Saving Session ID == "+session.getId()+" SESSION == "+session);
    try{
        String sessionId=session.getId();
        ByteArrayOutputStream baos=new ByteArrayOutputStream();
        ObjectOutputStream oos=new ObjectOutputStream(baos);
        oos.writeObject(session);
        oos.close();
        byte[] serializedSession=baos.toByteArray();
        sessionStore.put(sessionId,serializedSession);
        sessionCounter++;
        System.out.println("---------------------------------------------------------------------------------------");
        System.out.println("HC_SessionStore == Saving Session ID == "+sessionId+" SESSION == "+session);
        Enumeration<String> attributeNames=((StandardSession)session).getAttributeNames();
        while(attributeNames.hasMoreElements()){
            String attributeName=attributeNames.nextElement();
            System.out.println("SESSION ATTRIBUTE :: "+attributeName+" :: "+((StandardSession)session).getAttribute(attributeName));
        }//while closing
        System.out.println("---------------------------------------------------------------------------------------");
    }catch(Exception e){throw new IOException(e);}
}//save closing
Where the 'sessionStore' is a Hazelcast distributed Map.
The corresponding 'load' method of the store is as follows:
@Override
public Session load(String sessionId) throws ClassNotFoundException, IOException {
    Session session=null;
    try{
        byte[] serializedSession=(byte[])sessionStore.get(sessionId);
        ObjectInputStream ois=new ObjectInputStream(new ByteArrayInputStream(serializedSession));
        //Read the saved session from serialized state
        //StandardSession session_=new StandardSession(manager);
        StandardSession session_=(StandardSession)ois.readObject();
        session_.setManager(manager);
        ois.close();
        //Initialize the transient properties of the session
        ois=new ObjectInputStream(new ByteArrayInputStream(serializedSession));
        session_.readObjectData(ois);
        session=session_;
        ois.close();
        System.out.println("===========================================================");
        System.out.println("HC_SessionStore == Loading Session ID == "+sessionId+" SESSION == "+session);
        Enumeration<String> attributeNames=session_.getAttributeNames();
        while(attributeNames.hasMoreElements()){
            String attributeName=attributeNames.nextElement();
            System.out.println("SESSION ATTRIBUTE :: "+attributeName+" :: "+session_.getAttribute(attributeName));
        }//while closing
        System.out.println("===========================================================");
    }catch(Exception e){throw new IOException(e);}
    return session;
}//load closing
Now, the most intriguing thing is that while the 'save' method gets called at the default interval of 60 seconds, the 'load' method is never called. The net effect is that any session attributes added later are lost after a while, which is most unusual. Technically, any new attribute bound to the session should be saved to Hazelcast once 'save' is called, and the manager is configured to swap out every 5 seconds.
However, the new session attribute is lost, while the old ones are still there. Whatever the cause, the 'load' method is not called (at least I don't see its trace).
Some help on this will be really appreciated.

Hope this helps someone; the problem is actually in the following code sections.
In the public void save(Session session) throws IOException method:
String sessionId=session.getId();
ByteArrayOutputStream baos=new ByteArrayOutputStream();
ObjectOutputStream oos=new ObjectOutputStream(baos);
oos.writeObject(session);
oos.close();
byte[] serializedSession=baos.toByteArray();
sessionStore.put(sessionId,serializedSession);
In the public Session load(String sessionId) throws ClassNotFoundException, IOException method:
byte[] serializedSession=(byte[])sessionStore.get(sessionId);
ObjectInputStream ois=new ObjectInputStream(new ByteArrayInputStream(serializedSession));
//Read the saved session from serialized state
//StandardSession session_=new StandardSession(manager);
StandardSession session_=(StandardSession)ois.readObject();
session_.setManager(manager);
ois.close();
//Initialize the transient properties of the session
ois=new ObjectInputStream(new ByteArrayInputStream(serializedSession));
session_.readObjectData(ois);
session=session_;
ois.close();
If you notice, the session is simply serialized in its entirety and saved to Hazelcast; that by itself is not a problem.
Now, if we look at the Tomcat code for StandardSession, we see that it contains a number of transient properties that will not be serialized. During deserialization these properties must be given values, which the 'load' method tries to do, but it does so wrongly: it deserializes the whole session with readObject, then opens a second stream over the same bytes and calls 'readObjectData' to initialize the transient properties. In StandardSession, 'readObjectData' calls 'doReadObject', a protected method that reinitializes the transient properties and expects the input stream to contain the session state as a series of objects. In our case, however, the stream contains the entire serialized session object, not the series of objects it expects.
In fact, the resulting exception only shows up after enabling FINE-level logging on Tomcat; it is not visible otherwise.
The workaround is simple: StandardSession has a 'writeObjectData' method, which internally calls the protected 'doWriteObject' and writes the session state as a series of objects to the output stream. Saving those serialized bytes, and reading them back with 'readObjectData', solves the problem.
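For reference, here is a minimal sketch of the two methods with that workaround applied. It reuses the sessionStore map and manager reference from the code above and omits the diagnostic logging; treat it as an illustration of the fix, not a tested drop-in implementation.
@Override
public void save(Session session) throws IOException {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    ObjectOutputStream oos = new ObjectOutputStream(baos);
    // Let StandardSession write its own state as the series of objects that
    // readObjectData/doReadObject expects, instead of serializing the whole session object.
    ((StandardSession) session).writeObjectData(oos);
    oos.close();
    sessionStore.put(session.getId(), baos.toByteArray());
}

@Override
public Session load(String sessionId) throws ClassNotFoundException, IOException {
    byte[] serializedSession = (byte[]) sessionStore.get(sessionId);
    if (serializedSession == null) {
        return null;
    }
    ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(serializedSession));
    // Create an empty session bound to this Store's manager and let it read back
    // the same series of objects that writeObjectData produced.
    StandardSession session = (StandardSession) manager.createEmptySession();
    session.readObjectData(ois);
    session.setManager(manager);
    ois.close();
    return session;
}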

Related

Updating global store from data within transform

I currently have a simple topology:
KStream<String, Event> eventsStream = builder.stream(sourceTopic);
eventsStream.transformValues(processorSupplier, "nameCache")
        .to(destinationTopic);
My events sometimes have a key/value pair and other times have just the key. I want to be able to add the value to those events that are missing it. I have this working fine with a local state store, but when I add more tasks, the key/value events and the key-only events sometimes end up in different threads, so the key-only events aren't updated correctly.
I'd like to use a global state store for this but I'm having difficulty figuring out how to update the global store when new key/value pairs come in. I've created a global state store with the following code:
builder.addGlobalStore(stateStore, "global_store", Consumed.with(Serdes.String(), Serdes.String()), new ProcessorSupplier<String, String>() {
    @Override
    public Processor<String, String> get() {
        return new Processor<String, String>() {
            private ProcessorContext context;

            @Override
            public void init(final ProcessorContext processorContext) {
                this.context = processorContext;
            }

            @Override
            public void process(final String key, final String value) {
                context.forward(key, value);
            }

            @Override
            public void close() {
            }
        };
    }
});
As far as I can tell it is working, but since there is no data in the topic, I'm not sure.
So my question is: how do I update the global store from inside transformValues? store.put() fails with an error saying the global store is read-only.
I found Write to GlobalStateStore on Kafka Streams, but the accepted answer just says to update the underlying topic, and I don't see how I can do that since that topic isn't in my stream.
---Edited---
I updated the code per #1 in the accepted answer. I see the new key/value pairs show up in global_store, but the globalStore doesn't seem to see the new keys. If I restart the application, it fills the cache with the data in the topic, but new keys aren't visible until after I stop/start the application.
I added logging to process(String, String) in the global store processor and it shows new keys being processed. Any ideas?
You only get read-only access to a global state store inside transformValues. If you want to update a global state store, you have to send the update to the global store's underlying input topic; your store will be updated when that message is consumed. The reason is that global state stores are populated on all application instances and use this input topic for fault tolerance. You can do this by branching your topology:
KStream<String, Event> eventsStream = builder.stream(sourceTopic);

// process messages as normal
eventsStream.transformValues(processorSupplier, "nameCache")
        .to(destinationTopic);

// this transform produces the update messages for the global store's input topic
eventsStream.transform(updateGlobalStateProcessorSupplier, "nameCache")
        .to("global_store");
Alternatively, use the low-level Processor API to construct your Topology manually; then you can forward each message to both the destinationTopic sink and the global_store sink via ProcessorContext.forward, addressing each sink processor node by name.
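As an illustration only, here is a hedged sketch of what updateGlobalStateProcessorSupplier might look like for the branching approach above. The Event#getValue() accessor is an assumption (the Event type is not shown in the question); the transformer simply re-emits key/value pairs destined for the global store's input topic and drops events without a value.
// uses org.apache.kafka.streams.KeyValue, org.apache.kafka.streams.kstream.Transformer,
// org.apache.kafka.streams.kstream.TransformerSupplier and org.apache.kafka.streams.processor.ProcessorContext
TransformerSupplier<String, Event, KeyValue<String, String>> updateGlobalStateProcessorSupplier =
        () -> new Transformer<String, Event, KeyValue<String, String>>() {
            @Override
            public void init(final ProcessorContext context) {
                // no per-task state needed for this sketch
            }

            @Override
            public KeyValue<String, String> transform(final String key, final Event event) {
                // Hypothetical Event#getValue(); returning null drops the record,
                // so only events that carry a value reach the global store topic.
                return (event != null && event.getValue() != null)
                        ? KeyValue.pair(key, event.getValue())
                        : null;
            }

            @Override
            public void close() {
            }
        };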

Does Hazelcast trigger the onRemoved listener for expired cache values?

I've been trying to integrate Hazelcast into my application but am running into a behaviour I had not anticipated with the onExpired vs onRemoved listener.
Ideally, I would like to execute some code whenever a value is removed from my cache. I configured an Expiry policy on the cache, and am expecting that my onRemoved listener will follow after my cache value expires, but it does not seem to be the case.
Does Hazelcast call the onRemoved listener after when it removes an expired value from the cache, or only on an explicit cache.remove() call?
My configuration is:
hazelcastInstance = HazelcastInstanceFactory.getOrCreateHazelcastInstance(getHazelcastConfig());

// Add cache used by adams
CacheSimpleConfig cacheSimpleConfig = new CacheSimpleConfig()
        .setName(CACHE_NAME)
        .setKeyType(UserRolesCacheKey.class.getName())
        .setValueType((new String[0]).getClass().getName())
        .setReadThrough(true)
        .setInMemoryFormat(InMemoryFormat.OBJECT)
        .setEvictionConfig(new EvictionConfig()
                .setEvictionPolicy(EvictionPolicy.LRU)
                .setSize(1000)
                .setMaximumSizePolicy(EvictionConfig.MaxSizePolicy.ENTRY_COUNT))
        .setExpiryPolicyFactoryConfig(
                new ExpiryPolicyFactoryConfig(
                        new TimedExpiryPolicyFactoryConfig(ACCESSED,
                                new DurationConfig(120, TimeUnit.SECONDS))));

hazelcastInstance.getConfig().addCacheConfig(cacheSimpleConfig);

ICache<UserRolesCacheKey, String[]> userRolesCache = hazelcastInstance.getCacheManager().getCache(CACHE_NAME);
userRolesCache.registerCacheEntryListener(new MutableCacheEntryListenerConfiguration<>(
        new UserRolesCacheListenerFactory(), null, false, false));
My Listener is fairly simple:
public class UserRolesCacheListenerFactory implements Factory<CacheEntryListener<UserRolesCacheKey, String[]>> {
    @Override
    public CacheEntryListener create() {
        return new UserRolesCacheEntryListener();
    }
}
And:
public class UserRolesCacheEntryListener implements CacheEntryRemovedListener<UserRolesCacheKey, String[]> {
    private final static Logger LOG = LoggerFactory.getLogger(UserRolesCacheEntryListener.class);

    @Override
    public void onRemoved(Iterable<CacheEntryEvent<? extends UserRolesCacheKey, ? extends String[]>> cacheEntryEvents) throws CacheEntryListenerException {
        cacheEntryEvents.forEach(this::deleteDBData);
    }
}
I would expect that sometime after 120s my onRemoved method would be called by Hazelcast as it removes the expired value from the cache, but it never seems to be.
Is this expected behaviour? Is something missing in my cache configuration?
According to the JCache specification, section 8.4, the REMOVED event is only for explicit operations.
Listening for the EXPIRED event is better, but still not ideal.
Note the wording in the specification: EXPIRED events are implementation dependent -- a caching provider is allowed to never notice the data has expired, never remove it, and so never generate the event.
Hazelcast does notice expired entries and does generate the event, but this makes the timely appearance of the event you need dependent on the implementation.
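As a hedged sketch of that approach, the listener from the question could additionally implement the JCache CacheEntryExpiredListener so that expired entries run through the same clean-up path; deleteDBData is shown here as a placeholder for the original clean-up logic.
import javax.cache.event.CacheEntryEvent;
import javax.cache.event.CacheEntryExpiredListener;
import javax.cache.event.CacheEntryListenerException;
import javax.cache.event.CacheEntryRemovedListener;

public class UserRolesCacheEntryListener implements
        CacheEntryRemovedListener<UserRolesCacheKey, String[]>,
        CacheEntryExpiredListener<UserRolesCacheKey, String[]> {

    @Override
    public void onRemoved(Iterable<CacheEntryEvent<? extends UserRolesCacheKey, ? extends String[]>> cacheEntryEvents)
            throws CacheEntryListenerException {
        cacheEntryEvents.forEach(this::deleteDBData);
    }

    @Override
    public void onExpired(Iterable<CacheEntryEvent<? extends UserRolesCacheKey, ? extends String[]>> cacheEntryEvents)
            throws CacheEntryListenerException {
        // Same clean-up path as an explicit removal.
        cacheEntryEvents.forEach(this::deleteDBData);
    }

    private void deleteDBData(CacheEntryEvent<? extends UserRolesCacheKey, ? extends String[]> event) {
        // placeholder for the original DB clean-up logic
    }
}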

Does Google Guava Cache do deduplication when refreshing value of the same key

I implemented a non-blocking cache using Google Guava. There is only one key in the cache, and its value is refreshed only asynchronously (by overriding reload()).
My question is: does the Guava cache de-duplicate refreshes if a new get() request comes in before the first reload() task has finished?
// Cache is defined like below
this.cache = CacheBuilder
        .newBuilder()
        .maximumSize(1)
        .refreshAfterWrite(10, TimeUnit.MINUTES)
        .recordStats()
        .build(loader);

// reload is overridden to refresh asynchronously
@Override
public ListenableFuture<Map<String, CertificateInfo>> reload(final String key, Map<String, CertificateInfo> prevMap) throws IOException {
    LOGGER.info("Refreshing certificate cache.");
    ListenableFutureTask<Map<String, CertificateInfo>> task = ListenableFutureTask.create(new Callable<Map<String, CertificateInfo>>() {
        @Override
        public Map<String, CertificateInfo> call() throws Exception {
            return actuallyLoad();
        }
    });
    executor.execute(task);
    return task;
}
Yes, see the documentation for LoadingCache.get(K) (and its sibling, Cache.get(K, Callable)):
If another call to get(K) or getUnchecked(K) is currently loading the value for key, simply waits for that thread to finish and returns its loaded value.
So if a cache entry is currently being computed (or reloaded/recomputed), other threads that try to retrieve that entry will simply wait for the computation to finish - they will not kick off their own redundant refresh.
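For illustration, here is a minimal self-contained sketch of that de-duplication, using a plain synchronous loader rather than the async reload above: two threads request the same absent key concurrently and the loader runs only once.
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import java.util.concurrent.atomic.AtomicInteger;

public class GuavaDedupDemo {
    public static void main(String[] args) throws Exception {
        AtomicInteger loaderCalls = new AtomicInteger();

        LoadingCache<String, String> cache = CacheBuilder.newBuilder()
                .maximumSize(1)
                .build(new CacheLoader<String, String>() {
                    @Override
                    public String load(String key) throws Exception {
                        loaderCalls.incrementAndGet();
                        Thread.sleep(500); // simulate a slow load
                        return "value-for-" + key;
                    }
                });

        Runnable get = () -> System.out.println(cache.getUnchecked("only-key"));
        Thread t1 = new Thread(get);
        Thread t2 = new Thread(get);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        // Both threads see the same value; the loader ran exactly once.
        System.out.println("loader invocations: " + loaderCalls.get());
    }
}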

Spring Security current user in thread

Hi, I have a problem when using Spring Security inside a spawned thread:
System.out.println(((User) SecurityContextHolder.getContext().getAuthentication().getPrincipal()).getId());
new Thread(() -> System.out.println(((User) SecurityContextHolder.getContext().getAuthentication().getPrincipal()).getId())).start();
These two lines should both print the current user id.
The first line works as expected.
The second line throws a NullPointerException, because inside the new thread there is no current user; the authentication is null.
I found this problem because I want to save many rows to the song table, which has a @CreatedBy user field; the auditing asks for the current user inside the thread and fails because the current user is null.
If you want spawned threads to inherit the SecurityContext of the parent thread, you should set the MODE_INHERITABLETHREADLOCAL strategy:
SecurityContextHolder.setStrategyName(SecurityContextHolder.MODE_INHERITABLETHREADLOCAL)
There was an issue when using this with thread pools, but it seems to have been fixed.
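For the thread-pool case mentioned above, a hedged alternative sketch is to wrap the pool with Spring Security's DelegatingSecurityContextExecutorService, so each submitted task runs with the SecurityContext captured from the submitting thread (pooled threads are reused, which makes the inheritable strategy less predictable there):
// uses java.util.concurrent.ExecutorService/Executors and
// org.springframework.security.concurrent.DelegatingSecurityContextExecutorService
ExecutorService delegate = Executors.newFixedThreadPool(4);
ExecutorService executor = new DelegatingSecurityContextExecutorService(delegate);

executor.submit(() -> {
    // Runs with the SecurityContext that was present on the submitting thread.
    System.out.println(SecurityContextHolder.getContext().getAuthentication().getName());
});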
You can also transfer the SecurityContext from one thread to another:
Runnable originalRunnable = new Runnable() {
    public void run() {
        // invoke secured service
    }
};

SecurityContext context = SecurityContextHolder.getContext();
DelegatingSecurityContextRunnable wrappedRunnable =
        new DelegatingSecurityContextRunnable(originalRunnable, context);

new Thread(wrappedRunnable).start();
See Concurrency Support
https://docs.spring.io/spring-security/reference/features/integrations/concurrency.html
If you want all your child threads to inherit the SecurityContext from the parent thread's ThreadLocal, you can set the strategy globally in a method annotated with @PostConstruct. Your child threads will then have access to the same SecurityContext.
@PostConstruct
void setGlobalSecurityContext() {
    SecurityContextHolder.setStrategyName(SecurityContextHolder.MODE_INHERITABLETHREADLOCAL);
}
Cheers

Invalidate session on all clusters in WebLogic

I try to save the user session in a HashMap on every cluster. When I need to invalidate it, I take the specified session id and invalidate it where the session was created, using the normal way to invalidate a session.
public class SessionListener implements HttpSessionListener {

    public HashMap<String, HttpSession> sessionHolder = new HashMap<String, HttpSession>();

    @Override
    public void sessionCreated(HttpSessionEvent se) {
        sessionHolder.put(se.getSession().getId(), se.getSession());
    }

    public void invalidate(String sessionId) {
        if (this.sessionHolder.get(sessionId) != null) {
            System.out.println("Invalidate session ID : " + sessionId);
            HttpSession session = sessionHolder.get(sessionId);
            session.invalidate();
        } else {
            System.out.println("Session is not created in this cluster ID : " + sessionId);
        }
    }

    @Override
    public void sessionDestroyed(HttpSessionEvent se) {
        System.out.println("Session " + se.getSession().getId() + " has been destroyed");
        sessionHolder.remove(se.getSession().getId());
    }
}
The session perishes where the invalidation occurs, but on the other cluster the session is still available.
Why is the session on the other cluster still there, and how can I also invalidate the session on the other cluster?
Thanks.
(it would be good to confirm whether we're talking about servers or clusters - the config.xml for your domain would be a help)
The session object is only managed by WebLogic Server inside the web container. If you manually create a copy of the session object in a HashMap, whether in another server or in another cluster entirely, WebLogic Server won't automatically invalidate that copy, because it's outside of the web container.
WebLogic Server will automatically, and quite transparently, create a replicated copy of HttpSession objects, provided the <session-descriptor> element in your weblogic.xml deployment descriptor has the correct settings (replicated_if_clustered for persistent-store-type is recommended).
Doc available here: http://download.oracle.com/docs/cd/E13222_01/wls/docs103/webapp/weblogic_xml.html#wp1071982
It would be good to understand what you're trying to achieve here - cross-cluster replication shouldn't be necessary unless you're talking about a monster application or spanning a wide network segment, although it is supported with WebLogic Server.
