I try to save the user session in a HashMap on every cluster. When I need to invalidate it, I take the specified session ID and invalidate it on the cluster where the session was created, using the normal way to invalidate a session.
public class SessionListener implements HttpSessionListener {

    public HashMap<String, HttpSession> sessionHolder = new HashMap<String, HttpSession>();

    @Override
    public void sessionCreated(HttpSessionEvent se) {
        sessionHolder.put(se.getSession().getId(), se.getSession());
    }

    public void invalidate(String sessionId) {
        if (this.sessionHolder.get(sessionId) != null) {
            System.out.println("Invalidate session ID : " + sessionId);
            HttpSession session = sessionHolder.get(sessionId);
            session.invalidate();
        } else {
            System.out.println("Session is not created in this cluster ID : " + sessionId);
        }
    }

    @Override
    public void sessionDestroyed(HttpSessionEvent se) {
        System.out.println("Session " + se.getSession().getId() + " has been destroyed");
        sessionHolder.remove(se.getSession().getId());
    }
}
The session perishes on the cluster where the invalidation occurs, but on the other cluster the session is still available.
Why is the session on the other cluster still there, and how can I also invalidate the session on the other cluster?
thanks.
(it would be good to confirm whether we're talking about servers or clusters - the config.xml for your domain would be a help)
The session object is only managed by WebLogic Server inside the web container - if you create a copy of the session object in a HashMap manually either in another server or another cluster entirely, WebLogic Server won't automatically invalidate the copy of that session object, because it's outside of the web container.
WebLogic Server will automatically, and quite transparently, create a replicated copy of HttpSession objects, provided that the <session-descriptor> element in your weblogic.xml deployment descriptor has the correct settings (replicated_if_clustered for persistent-store-type is recommended).
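For reference, a minimal session-descriptor sketch along those lines in weblogic.xml (the element values are taken from the linked documentation; adjust to your descriptor version):

<session-descriptor>
    <persistent-store-type>replicated_if_clustered</persistent-store-type>
</session-descriptor>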
Doc available here: http://download.oracle.com/docs/cd/E13222_01/wls/docs103/webapp/weblogic_xml.html#wp1071982
It would be good to understand what you're trying to achieve here - cross-cluster replication shouldn't be necessary unless you're talking about a monster application or spanning a wide network segment, although it is supported with WebLogic Server.
I have a REST service, and when it gets a request it has to do some inserts and updates on almost 25 databases. When I tried the code below, it worked on my localhost, but when I deployed it to my staging server I got FATAL: too many connections for role "user123"
List<String> databaseUrls = null; // in the real code this holds the ~25 JDBC URLs
databaseUrls.forEach(databaseUrl -> {
    DataSource dataSource = DataSourceBuilder.create()
            .driverClassName("org.postgresql.Driver")
            .url(databaseUrl)
            .username("user123")
            .password("some-password")
            .build();
    JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
    jdbcTemplate.update("Some...Update...Query");
});
As per my understanding, a DataSource need not be closed because it is never opened.
Note:
A DataSource implementation need not be closed, because it is never
“opened”. A DataSource is not a resource, is not connected to the
database, so it is not holding networking connections nor resources on
the database server. A DataSource is simply information needed when
making a connection to the database, with the database server's
network name or address, the user name, user password, and various
options you want specified when a connection is eventually made.
Can someone tell me why I am getting this issue?
The problem is in DataSourceBuilder: it actually creates one of the following connection pools, which spawns some number of connections and keeps them running:
private static final String[] DATA_SOURCE_TYPE_NAMES = new String[] {
"org.apache.tomcat.jdbc.pool.DataSource",
"com.zaxxer.hikari.HikariDataSource",
"org.apache.commons.dbcp.BasicDataSource" };
Javadoc says:
/**
 * Convenience class for building a {@link DataSource} with common implementations and
 * properties. If Tomcat, HikariCP or Commons DBCP are on the classpath one of them will
 * be selected (in that order with Tomcat first). In the interest of a uniform interface,
 * and so that there can be a fallback to an embedded database if one can be detected on
 * the classpath, only a small set of common configuration properties are supported. To
 * inject additional properties into the result you can downcast it, or use
 * <code>@ConfigurationProperties</code>.
 */
Try to use e.g. SingleConnectionDataSource; then your problem will be gone:
List<String> databaseUrls = null; // in the real code this holds the JDBC URLs
Class.forName("org.postgresql.Driver"); // or declare throws ClassNotFoundException on the enclosing method
databaseUrls.forEach(databaseUrl -> {
    SingleConnectionDataSource dataSource = null;
    try {
        dataSource = new SingleConnectionDataSource(
                databaseUrl, "user123", "some-password", true /*suppressClose*/);
        JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
        jdbcTemplate.update("Some...Update...Query");
    } catch (Exception e) {
        log.error("Failed to run queries for {}", databaseUrl, e);
    } finally {
        // release resources
        if (dataSource != null) {
            dataSource.destroy();
        }
    }
});
First of all, it is a very bad architecture decision to have a single application managing 50 databases. Anyway, instead of creating a DataSource inside the loop, you should use the Factory design pattern to create a DataSource for each DB, and you should add a connection pooling mechanism to your system. HikariCP and Tomcat JDBC Pool are the most widely used. Analyse the logs of the failing thread for any further issues.
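To make that concrete, here is a minimal sketch of such a factory. The class name, the loadDatabaseUrls idea and the credentials are illustrative assumptions (not the poster's code), and the DataSourceBuilder import path depends on your Spring Boot version; the point is that each database URL gets exactly one pooled DataSource that is created once and then reused:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.sql.DataSource;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.jdbc.core.JdbcTemplate;

public class TenantDataSourceFactory {

    // one pooled DataSource per database URL, created lazily and then reused
    private final Map<String, DataSource> dataSources = new ConcurrentHashMap<>();

    public DataSource forUrl(String databaseUrl) {
        return dataSources.computeIfAbsent(databaseUrl, url -> DataSourceBuilder.create()
                .driverClassName("org.postgresql.Driver")
                .url(url)
                .username("user123")
                .password("some-password")
                .build());
    }

    public JdbcTemplate templateFor(String databaseUrl) {
        return new JdbcTemplate(forUrl(databaseUrl));
    }
}

The REST endpoint then calls templateFor(databaseUrl).update(...) instead of building a new DataSource per request, so the number of open connections stays bounded by the pool sizes.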
I've been trying to integrate Hazelcast into my application but am running into a behaviour I had not anticipated with the onExpired vs onRemoved listener.
Ideally, I would like to execute some code whenever a value is removed from my cache. I configured an expiry policy on the cache and expected that my onRemoved listener would be called after a cache value expires, but that does not seem to be the case.
Does Hazelcast call the onRemoved listener when it removes an expired value from the cache, or only on an explicit cache.remove() call?
My configuration is:
hazelcastInstance = HazelcastInstanceFactory.getOrCreateHazelcastInstance(getHazelcastConfig());

// Add cache used by adams
CacheSimpleConfig cacheSimpleConfig = new CacheSimpleConfig()
        .setName(CACHE_NAME)
        .setKeyType(UserRolesCacheKey.class.getName())
        .setValueType((new String[0]).getClass().getName())
        .setReadThrough(true)
        .setInMemoryFormat(InMemoryFormat.OBJECT)
        .setEvictionConfig(new EvictionConfig()
                .setEvictionPolicy(EvictionPolicy.LRU)
                .setSize(1000)
                .setMaximumSizePolicy(EvictionConfig.MaxSizePolicy.ENTRY_COUNT))
        .setExpiryPolicyFactoryConfig(
                new ExpiryPolicyFactoryConfig(
                        new TimedExpiryPolicyFactoryConfig(ACCESSED,
                                new DurationConfig(
                                        120,
                                        TimeUnit.SECONDS))));

hazelcastInstance.getConfig().addCacheConfig(cacheSimpleConfig);

ICache<UserRolesCacheKey, String[]> userRolesCache = hazelcastInstance.getCacheManager().getCache(CACHE_NAME);
userRolesCache.registerCacheEntryListener(new MutableCacheEntryListenerConfiguration<>(
        new UserRolesCacheListenerFactory(), null, false, false));
My Listener is fairly simple:
public class UserRolesCacheListenerFactory implements Factory<CacheEntryListener<UserRolesCacheKey, String[]>> {

    @Override
    public CacheEntryListener create() {
        return new UserRolesCacheEntryListener();
    }
}
And:
public class UserRolesCacheEntryListener implements CacheEntryRemovedListener<UserRolesCacheKey, String[]> {

    private final static Logger LOG = LoggerFactory.getLogger(UserRolesCacheEntryListener.class);

    @Override
    public void onRemoved(Iterable<CacheEntryEvent<? extends UserRolesCacheKey, ? extends String[]>> cacheEntryEvents) throws CacheEntryListenerException {
        cacheEntryEvents.forEach(this::deleteDBData);
    }
}
I would expect that sometime after 120s my onRemoved method would be called by Hazelcast as it removes the expired value from the cache, but it never seems to be.
Is this expected behaviour? Is something missing in my cache configuration?
According to the JCache specification, section 8.4, the REMOVED event is only for explicit operations.
Listening for the EXPIRED event will be better, but still not ideal.
Note the wording in the specification and the code here. EXPIRED events are implementation dependent -- a caching provider is allowed to never notice the data has expired, never remove it, and so never generate the event.
Hazelcast does notice (see here), but this makes the timely appearance of the event you need dependent on the implementation.
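With that caveat, a minimal sketch of the listener from the question extended to also handle EXPIRED events (same key/value types as in the question; deleteDBData stands in for the original cleanup logic):

import javax.cache.event.CacheEntryEvent;
import javax.cache.event.CacheEntryExpiredListener;
import javax.cache.event.CacheEntryListenerException;
import javax.cache.event.CacheEntryRemovedListener;

public class UserRolesCacheEntryListener implements
        CacheEntryExpiredListener<UserRolesCacheKey, String[]>,
        CacheEntryRemovedListener<UserRolesCacheKey, String[]> {

    @Override
    public void onExpired(Iterable<CacheEntryEvent<? extends UserRolesCacheKey, ? extends String[]>> cacheEntryEvents)
            throws CacheEntryListenerException {
        // same cleanup as for explicit removals; fired when the provider notices the expiry
        cacheEntryEvents.forEach(this::deleteDBData);
    }

    @Override
    public void onRemoved(Iterable<CacheEntryEvent<? extends UserRolesCacheKey, ? extends String[]>> cacheEntryEvents)
            throws CacheEntryListenerException {
        cacheEntryEvents.forEach(this::deleteDBData);
    }

    private void deleteDBData(CacheEntryEvent<? extends UserRolesCacheKey, ? extends String[]> event) {
        // cleanup logic from the original listener goes here
    }
}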
What's the best way to reuse sessions in Spring JavaMailSender?
In a scenario where a consumer reads messages from a queue and triggers emails based on the messages, the emails will be sent one after the other. If a new session is created every time, isn't that an overhead? If JavaMailSender is a singleton bean, does it always use the same session? What's the best solution here?
I saw samples of JNDI sessions being set into the JavaMailSender bean configuration. We don't have support for JNDI, so that's not an option.
If you use the standard JavaMailSender for the MailSendingMessageHandler, you already reuse the Session — see the JavaMailSenderImpl internals:
// Check transport connection first...
if (transport == null || !transport.isConnected()) {
...
try {
transport = connectTransport();
}
...
Transport transport = getTransport(getSession());
transport.connect(getHost(), getPort(), username, password);
return transport;
...
public synchronized Session getSession() {
if (this.session == null) {
this.session = Session.getInstance(this.javaMailProperties);
}
return this.session;
}
Not sure from where you heard that a new session is created for each message...
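For completeness, a minimal sketch of defining JavaMailSenderImpl as a singleton bean without JNDI (host, port and credentials are placeholder assumptions). Because the bean is a singleton, the lazily created Session shown above is shared across all sends:

import java.util.Properties;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.mail.javamail.JavaMailSender;
import org.springframework.mail.javamail.JavaMailSenderImpl;

@Configuration
public class MailConfig {

    @Bean
    public JavaMailSender mailSender() {
        JavaMailSenderImpl sender = new JavaMailSenderImpl();
        sender.setHost("smtp.example.com");   // placeholder SMTP host
        sender.setPort(587);
        sender.setUsername("mailer");
        sender.setPassword("secret");

        Properties props = sender.getJavaMailProperties();
        props.put("mail.smtp.auth", "true");
        props.put("mail.smtp.starttls.enable", "true");
        return sender;
    }
}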
I've got a backend Spring application and an OrientDB graph database. I use Tinkerpop Frames to map OrientDB vertices to Java objects and OPS4J for Spring transaction management. Now I want to implement multitenancy, where several customers (tenants) use this one application instance. The application works completely on REST principles and is exposed to several Angular applications, one per customer. So there are as many frontend Angular applications as we have customers, and only one backend REST Spring application. The backend recognizes the tenant from the HTTP request.
Now I'm not sure about the best solution...
First solution
When I read the OrientDB documentation, I found a way to implement multitenancy in OrientDB - http://orientdb.com/docs/2.1/Partitioned-Graphs.html. However, I don't know how to use it through the Java API without creating a new database connection for each request, because right now the Spring transaction manager takes connections from a connection pool that is centrally configured in the Spring transaction management configuration. I didn't find any Java example of this.
Spring transaction management config:
@Configuration
@EnableTransactionManagement
public class TransactionConfig {

    @Bean
    @Qualifier("graphDbTx")
    public OrientTransactionManager graphDbTransactionManager() {
        OrientTransactionManager bean = new OrientTransactionManager();
        bean.setDatabaseManager(graphDatabaseFactory());
        return bean;
    }

    @Bean
    public OrientBlueprintsGraphFactory graphDatabaseFactory() {
        OrientBlueprintsGraphFactory dbf = new OrientBlueprintsGraphFactory();
        dbf.setMaxPoolSize(6);
        dbf.setUrl(DbConfig.DATABASE_URL);
        dbf.setUsername("admin");
        dbf.setPassword("admin");
        return dbf;
    }

    @Bean
    public FramedGraphFactory framedGraphFactory() {
        return new FramedGraphFactory(new JavaHandlerModule());
    }
}
Getting connection:
protected FramedGraph<OrientGraph> framedGraph() {
    return framedGraphFactory.create(gdbf.graph());
}
Second solution
Another solution is to use the TinkerPop PartitionGraph class, which works on OrientDB, but I didn't find any mention of this possibility in the OrientDB documentation, just this in TinkerPop - https://github.com/tinkerpop/blueprints/wiki/Partition-Implementation. It works, but in the end it just creates a non-indexed property on every OrientDB vertex, so I'm worried about query performance here.
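For reference, wrapping an OrientGraph in a PartitionGraph looks roughly like this under the Blueprints 2.x API from the linked wiki page (the database URL, the "_tenant" property name and the partition value are assumptions):

// com.tinkerpop.blueprints.util.wrappers.partition.PartitionGraph
OrientGraph baseGraph = new OrientGraphFactory("remote:localhost/mydb").getTx();
PartitionGraph<OrientGraph> tenantGraph = new PartitionGraph<OrientGraph>(
        baseGraph,
        "_tenant",      // the (non-indexed) property stamped on every vertex/edge
        "customerA");   // current read/write partition
Vertex v = tenantGraph.addVertex(null);  // created with _tenant = "customerA"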
Does anyone have any experiences with this? Any suggestion?
Using the Java API to create a partitioned DB (if I understand what you're interested in), the macro steps are:
get a connection (using the pool, the DB instances are reused);
alter classes V and E, and create new users enabled to write;
when you log in to the DB, user1 can write vertices that are invisible to user2, and vice versa.
// WRITE IN YOUR CONTROLLER: CREATE USERS ENABLED TO WRITE ON THE DB ..............
Connection con = new Connection();
OrientGraph noTx = con.getConnection();

// create the partition
noTx.begin();
noTx.command(new OCommandSQL("ALTER CLASS V superclass orestricted")).execute();
noTx.command(new OCommandSQL("ALTER CLASS E superclass orestricted")).execute();
noTx.commit();

// create the different users
noTx.begin();
String ridRule = "";
Iterable<Vertex> rule = noTx.command(new OCommandSQL("select from ORole where name = 'writer'")).execute();
ridRule = rule.iterator().next().getId().toString();
noTx.command(new OCommandSQL("INSERT INTO ouser SET name = 'user1', status = 'ACTIVE', password = 'user1', roles = [" + ridRule + "]")).execute();
noTx.command(new OCommandSQL("INSERT INTO ouser SET name = 'user2', status = 'ACTIVE', password = 'user2', roles = [" + ridRule + "]")).execute();
noTx.commit();

// will not close the graph instance, but will keep it open and available for the next requester
noTx.shutdown();

// finally, to release all the instances and free all the resources
con.closeAllConnections();

// WRITE IN YOUR CONTROLLER: LOG IN WITH THE APPROPRIATE USER .....................
// code to log in with user1 or user2, CREATE VERTEX SET label = 'food', name = 'Pizza', etc.
// beans
public static class Connection {

    private OrientGraphFactory factory = null;

    public Connection() {
        // recyclable pool of instances
        factory = new OrientGraphFactory("remote:localhost/blog").setupPool(1, 10);
    }

    // return a connection from the pool
    public OrientGraph getConnection() {
        OrientGraph txGraph = factory.getTx();
        return txGraph;
    }

    public void closeAllConnections() {
        factory.close();
    }
}
To adapt these steps and wire them into Spring, this link to an OrientDB - Spring implementation might be useful. It isn't much, but I hope it will be of help.
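As a rough sketch of the login step (the URL and credentials match the users created above; the pool sizes are arbitrary), each request can open the graph as the tenant's own OrientDB user so that ORestricted hides the other tenant's records:

// per-tenant factory: connects as user1, so user2's vertices stay invisible
OrientGraphFactory tenantFactory =
        new OrientGraphFactory("remote:localhost/blog", "user1", "user1").setupPool(1, 10);

OrientGraph graph = tenantFactory.getTx();
try {
    graph.addVertex("class:V", "label", "food", "name", "Pizza");
    graph.commit();
} finally {
    graph.shutdown();   // returns the instance to the pool
}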
I am trying to set up a Tomcat cluster on AWS, and since AWS does not support IP multicasting, one of the options is Tomcat clustering using a DB.
That is well understood; however, due to the performance penalties of DB calls, I am currently considering Hazelcast as the session store. The current Hazelcast filter approach does not work out for me, as there are other filters on the web app and they interfere somewhat. A better and cleaner approach would be to configure the PersistentManager with a custom store implementation and configure it in tomcat/conf/context.xml; the configuration section is provided below:
<Manager className="org.apache.catalina.session.PersistentManager"
distributable="true"
maxActiveSessions="-1"
maxIdleBackup="2"
maxIdleSwap="5"
processingTime="1000"
saveOnRestart="true"
maxInactiveInterval="1200">
<Store className="com.hm.vigil.platform.session.HC_SessionStore"/>
</Manager>
The sessions are being saved in the Hazelcast instance and the trace from tomcat is below:
---------------------------------------------------------------------------------------
HC_SessionStore == Saving Session ID == C19A496F2BB9E6A4A55E70865261FC9F SESSION == StandardSession[
C19A496F2BB9E6A4A55E70865261FC9F]
SESSION ATTRIBUTE :: USER_IDENTIFIER :: 50
SESSION ATTRIBUTE :: APPLICATION_IDENTIFIER :: APPLICATION_1
SESSION ATTRIBUTE :: USER_EMAIL :: x#y.com
SESSION ATTRIBUTE :: USER_ROLES :: [PLATFORM_ADMIN, CLIENT_ADMIN, PEN_TESTER, USER]
SESSION ATTRIBUTE :: CLIENT_IDENTIFIER :: 1
---------------------------------------------------------------------------------------
03-Nov-2015 15:12:02.562 FINE [ContainerBackgroundProcessor[StandardEngine[Catalina]]] org.apache.ca
talina.session.PersistentManagerBase.processExpires End expire sessions PersistentManager processing
Time 75 expired sessions: 0
03-Nov-2015 15:12:02.563 FINE [ContainerBackgroundProcessor[StandardEngine[Catalina]]] org.apache.ca
talina.session.PersistentManagerBase.processExpires Start expire sessions PersistentManager at 14465
43722563 sessioncount 0
03-Nov-2015 15:12:02.577 FINE [ContainerBackgroundProcessor[StandardEngine[Catalina]]] org.apache.ca
talina.session.PersistentManagerBase.processExpires End expire sessions PersistentManager processing
Time 14 expired sessions: 0
The above trace is from the 'save' method as overridden by the store implementation; the code is provided below:
@Override
public void save(Session session) throws IOException {
    //System.out.println("HC_SessionStore == Saving Session ID == "+session.getId()+" SESSION == "+session);
    try{
        String sessionId=session.getId();
        ByteArrayOutputStream baos=new ByteArrayOutputStream();
        ObjectOutputStream oos=new ObjectOutputStream(baos);
        oos.writeObject(session);
        oos.close();
        byte[] serializedSession=baos.toByteArray();
        sessionStore.put(sessionId,serializedSession);
        sessionCounter++;
        System.out.println("---------------------------------------------------------------------------------------");
        System.out.println("HC_SessionStore == Saving Session ID == "+sessionId+" SESSION == "+session);
        Enumeration<String> attributeNames=((StandardSession)session).getAttributeNames();
        while(attributeNames.hasMoreElements()){
            String attributeName=attributeNames.nextElement();
            System.out.println("SESSION ATTRIBUTE :: "+attributeName+" :: "+((StandardSession)session).getAttribute(attributeName));
        }//while closing
        System.out.println("---------------------------------------------------------------------------------------");
    }catch(Exception e){throw new IOException(e);}
}//save closing
Where the 'sessionStore' is a Hazelcast distributed Map.
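For context, 'sessionStore' could be obtained along these lines when the store starts (the map name and the way the HazelcastInstance is acquired are assumptions, not the poster's actual code):

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

HazelcastInstance hazelcast = Hazelcast.newHazelcastInstance();
// distributed map keyed by session ID, holding the serialized session bytes
IMap<String, byte[]> sessionStore = hazelcast.getMap("tomcat-sessions");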
The corresponding 'load' method of the store is as follows:
@Override
public Session load(String sessionId) throws ClassNotFoundException, IOException {
    Session session=null;
    try{
        byte[] serializedSession=(byte[])sessionStore.get(sessionId);
        ObjectInputStream ois=new ObjectInputStream(new ByteArrayInputStream(serializedSession));
        //Read the saved session from serialized state
        //StandardSession session_=new StandardSession(manager);
        StandardSession session_=(StandardSession)ois.readObject();
        session_.setManager(manager);
        ois.close();
        //Initialize the transient properties of the session
        ois=new ObjectInputStream(new ByteArrayInputStream(serializedSession));
        session_.readObjectData(ois);
        session=session_;
        ois.close();
        System.out.println("===========================================================");
        System.out.println("HC_SessionStore == Loading Session ID == "+sessionId+" SESSION == "+session);
        Enumeration<String> attributeNames=session_.getAttributeNames();
        while(attributeNames.hasMoreElements()){
            String attributeName=attributeNames.nextElement();
            System.out.println("SESSION ATTRIBUTE :: "+attributeName+" :: "+session_.getAttribute(attributeName));
        }//while closing
        System.out.println("===========================================================");
    }catch(Exception e){throw new IOException(e);}
    return session;
}//load closing
Now, one of the most intriguing things is that while the store's 'save' method gets called at the default interval of 60 seconds, the 'load' method is never called, with the net effect that any session attributes that are saved are lost after a while, which is most unusual. Technically, any new session attribute bound to the session will be saved in Hazelcast once the 'save' method is called, and the manager is configured to swap sessions out every 5 seconds.
However, the new session attribute is lost while the old ones are still there. But whatever the case, the 'load' method is not called (at least I don't see the trace).
Some help on this will be really appreciated.
Hope this helps someone, the problem is actually in the following code sections:
public void save(Session session) throws IOException method:
String sessionId=session.getId();
ByteArrayOutputStream baos=new ByteArrayOutputStream();
ObjectOutputStream oos=new ObjectOutputStream(baos);
oos.writeObject(session);
oos.close();
byte[] serializedSession=baos.toByteArray();
sessionStore.put(sessionId,serializedSession);
public Session load(String sessionId) throws ClassNotFoundException, IOException method:
byte[] serializedSession=(byte[])sessionStore.get(sessionId);
ObjectInputStream ois=new ObjectInputStream(new ByteArrayInputStream(serializedSession));
//Read the saved session from serialized state
//StandardSession session_=new StandardSession(manager);
StandardSession session_=(StandardSession)ois.readObject();
session_.setManager(manager);
ois.close();
//Initialize the transient properties of the session
ois=new ObjectInputStream(new ByteArrayInputStream(serializedSession));
session_.readObjectData(ois);
session=session_;
ois.close();
If you notice, the session is simply serialized as a whole and saved to Hazelcast; that is not a problem by itself.
Now if we look at the Tomcat code for StandardSession, we see that it contains a number of transient properties that will not be serialized. So during deserialization these properties must be given values, which is done in the 'load' method; however, it is done wrongly: it first deserializes the whole session from the ObjectInputStream and then calls the 'readObjectData' method to initialize the transient properties. In StandardSession, 'readObjectData' calls 'doReadObject', a protected method that reinitializes the transient properties and expects the provided object input stream to contain a series of individual objects. In our case, however, the stream contains the entire serialized session object, not the series of objects it expects.
In fact, this exception is only seen after enabling fine-level logging on Tomcat, not otherwise.
The workaround is simple: StandardSession has a 'writeObjectData' method, which internally calls the protected 'doWriteObject' method and writes the session state as a series of objects to the output stream; reading those serialized bytes back with 'readObjectData' solves the problem. A sketch of the corrected pair is below.
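A minimal sketch of the corrected save/load pair along those lines, assuming the same 'sessionStore' map and 'manager' field as in the question (error handling, the debug printouts and the session counter are omitted):

@Override
public void save(Session session) throws IOException {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    ObjectOutputStream oos = new ObjectOutputStream(baos);
    // writeObjectData() emits the session state as the series of objects
    // that readObjectData() expects on the other side
    ((StandardSession) session).writeObjectData(oos);
    oos.close();
    sessionStore.put(session.getId(), baos.toByteArray());
}

@Override
public Session load(String sessionId) throws ClassNotFoundException, IOException {
    byte[] serializedSession = (byte[]) sessionStore.get(sessionId);
    if (serializedSession == null) {
        return null;
    }
    ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(serializedSession));
    // an empty session created by the manager already has its transient fields
    // and manager reference set up; readObjectData() then restores the persisted state
    StandardSession session = (StandardSession) manager.createEmptySession();
    session.readObjectData(ois);
    ois.close();
    return session;
}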