I found an open-source component called tomcat-redis-session-manager that can store HTTP sessions in Redis, to provide high availability across many Tomcat servers.
So I want to find out whether there is a way to store GlassFish HTTP sessions in Redis or memcached.
But I have not found what the HTTP session creation/acquisition interception point in GlassFish is.
Can anyone tell me how?
Tomcat does it by adding the following to context.xml:
<Valve className="com.radiadesign.catalina.session.RedisSessionHandlerValve" />
<Manager className="com.radiadesign.catalina.session.RedisSessionManager"
host="localhost"
port="6379"
database="0"
maxInactiveInterval="60"
/>
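For reference, the closest built-in mechanism I could find in GlassFish is the per-application session manager configured in glassfish-web.xml. Out of the box it only seems to offer the memory, file, and replicated persistence types (so not Redis or memcached); a rough sketch of that configuration, with property names taken from the GlassFish high-availability docs, looks like this:
<glassfish-web-app>
    <session-config>
        <session-manager persistence-type="replicated">
            <manager-properties>
                <property name="persistenceFrequency" value="web-method"/>
            </manager-properties>
            <store-properties>
                <property name="persistenceScope" value="session"/>
            </store-properties>
        </session-manager>
    </session-config>
</glassfish-web-app>
What I still haven't found is a documented hook, comparable to Tomcat's Valve/Manager pair above, where a custom Redis- or memcached-backed store could be plugged in.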
I'm using Apache Ignite to cluster web sessions, and Spring Security for form-based authentication. The software versions I use are:
JDK 1.8.0_60
Apache Tomcat 7.0.68
Apache Ignite 1.5.0.final
Spring Security 3.1.3.RELEASE
(Without Apache Ignite, the form-based authentication works fine, and the JSESSIONID cookie gets changed upon successful authentication to protect against session fixation attacks, as expected.)
With Apache Ignite, I cannot log in, and I get the following warning:
2016-04-18 16:49:07,283 WARN org.springframework.security.web.authentication.session.SessionFixationProtectionStrategy/onAuthentication 102 - Your servlet container did not change the session ID when a new session was created. You will not be adequately protected against session-fixation attacks
If I turn off session fixation protection in the Spring configuration as below:
<http>
...
<session-management session-fixation-protection="none" />
...
</http>
It works. (However, as a result, the JSESSIONID cookie does not change upon successful authentication.)
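For reference, what I am turning off there is, as far as I understand it, the default migrateSession strategy, which invalidates the old session and copies its attributes into a new one so that the JSESSIONID changes on login. Spelled out explicitly, that default would be:
<http>
...
<session-management session-fixation-protection="migrateSession" />
...
</http>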
As advised by Valentin (thanks), I tried the nightly build of Apache Ignite, version 1.6.0-SNAPSHOT#20160419-sha1:186c8604. Indeed, it works.
It works with the following Spring Security configuration:
<http>
...
<session-management session-fixation-protection="none" />
...
</http>
And of course the JSESSIONID cookie does not change upon successful Spring Security authentication.
Then I commented out the following configuration:
<session-management session-fixation-protection="none" />
It also works, and upon successful authentication the JSESSIONID cookie gets changed as it is supposed to.
OK, I'll use Ignite version 1.5.0.final for now (with no session-fixation-protection), and wait for the release of version 1.6.x.
Tomcat 7 has built-in functionality for session fixation protection, changing the JSESSIONID on authentication to prevent session fixation attacks altogether.
In this case Tomcat is not letting the application change the session ID.
I have set up persistence caching with EclipseLink on WildFly 8. It works, but I also want to do cache coordination. I have the following setup for EclipseLink cache coordination in my persistence.xml:
<property name="eclipselink.cache.coordination.protocol" value="jms" />
<property name="eclipselink.cache.coordination.jms.topic" value="jms/MemberTopic" />
<property name="eclipselink.cache.coordination.jms.factory" value="jms/MemberConnectionFactory" />
However, when my entity is merged, no messages are sent by EclipseLink. I have logging set to "ALL", but nothing appears in the console.
I tried adding coordinationType=CacheCoordinationType.SEND_NEW_OBJECTS_WITH_CHANGES to the entity's @Cache annotation, but it doesn't change anything. I also tried using an MDB as suggested for WebSphere (http://www.eclipse.org/eclipselink/documentation/2.4/concepts/cache011.htm#CDECEHFH).
The JMS topic and connection factory exist, and WildFly startup / application deployment shows no errors. For server clustering I run WildFly in domain mode.
Ironically, the problem was in my WildFly configuration instead: I didn't have my messaging cluster set up. I used the default messaging cluster settings from the full-ha profile and set EclipseLink's cache coordination host accordingly:
<property name="eclipselink.cache.coordination.jms.host" value="231.7.7.7:9876" />
I have configured Tomcat 6 with in-memory session replication. I am also using IIS 7 (I know, I know) and the AJP connector via isapi_redirector. The cluster is working properly and I am able to replicate session attributes using the SessionExample in the examples war.
The problem is that I am unable to do the same in my custom application. I have added the distributable tag to the web.xml file on both servers in my test cluster (see the snippet below). However, I don't see any message in the logs mentioning the attributes getting sent to the cluster (I do see them for SessionExample). The only notable differences that I can see between my app and the examples:
The examples war uses servlet 2.5. I am still required to use 2.4.
My application uses SSO and requires the user to login.
The application is a portal application.
Also, in the code of the application, I am setting a simple string in the attribute, so nothing fancy.
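For reference, the distributable declaration in a servlet 2.4 web.xml is just the standard empty element; nothing special beyond the 2.4 schema header:
<web-app xmlns="http://java.sun.com/xml/ns/j2ee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd"
         version="2.4">
    <distributable/>
    ...
</web-app>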
So, I was wondering if anyone has some tips to get this working?
Thanks
Here is the cluster section of my server.xml:
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
channelSendOptions="6">
<Manager className="org.apache.catalina.ha.session.DeltaManager"
expireSessionsOnShutdown="false"
notifyListenersOnReplication="true"/>
<Channel className="org.apache.catalina.tribes.group.GroupChannel">
<Membership className="org.apache.catalina.tribes.membership.McastService"
address="228.0.0.104"
port="45564"
frequency="500"
dropTime="10000"/>
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
address="auto"
port="4000"
autoBind="100"
selectorTimeout="7000"
maxThreads="6"
timeout="15000"/>
<Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
<Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"
timeout="70000"/>
</Sender>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
</Channel>
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;"/>
<Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
<Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
tempDir="/apache-tomcat-6.0.37/war-deploy/war-temp/"
deployDir="/apache-tomcat-6.0.37/webapps/"
watchDir="/apache-tomcat-6.0.37/war-deploy/war-listen/"
watchEnabled="true"/>
<ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
Sorry, I found the issue. I was expecting to see messages in the log regarding the creation of the session attributes. I didn't realize that the examples project has a session listener that outputs those messages to the log; I had assumed they came simply from the log level I had set.
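For anyone else hitting the same confusion, the messages come from the session listener the examples app registers in its web.xml, roughly like this (class name from memory, check the examples' web.xml for the exact value):
<listener>
    <listener-class>listeners.SessionListener</listener-class>
</listener>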
Thanks to anyone who read this post.
I tested Tomcat clustering for session replication on Ubuntu servers with Apache as the front-end load balancer. From my testing experience, I'd say it's better not to use Tomcat clustering, but to run each node as a standalone instance, not knowing one another and without any session replication: clustering felt slow, took much more time to start the Tomcat service, and consumed more memory. Also, the FarmWarDeployer is not always reliable in deploying, and the whole configuration has to be placed under the <Host></Host> element for the farm deployer to work, repeated for each virtual host, which results in a huge server.xml file. Below is the Tomcat virtual hosting with cluster configuration from one of the nodes I used.
<Host name="site1.mydomain.net" debug="0" appBase="webapps" unpackWARs="true" autoDeploy="true">
<Logger className="org.apache.catalina.logger.FileLogger"
directory="logs" prefix="virtual_log1." suffix=".log" timestamp="true"/>
<Context path="" docBase="/usr/share/tomcat/webapps/myapp" debug="0" reloadable="true"/>
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster">
<Manager className="org.apache.catalina.ha.session.DeltaManager"
expireSessionsOnShutdown="false"
notifyListenersOnReplication="true"/>
<Channel className="org.apache.catalina.tribes.group.GroupChannel">
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
address="192.168.1.8"
port="4001"
selectorTimeout="100"
maxThreads="6"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor">
<Member className="org.apache.catalina.tribes.membership.StaticMember"
port="4002"
securePort="-1"
host="192.168.1.9"
domain="staging-cluster"
uniqueId="{0,1,2,3,4,5,6,7,8,9}"/>
<!-- <Member className="org.apache.catalina.tribes.membership.StaticMember"
port="4002"
securePort="-1"
host="192.168.1.9"
domain="staging-cluster"
uniqueId="{0,1,2,3,4,5,6,7,8,9}"/> -->
</Interceptor>
</Channel>
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>
<Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
<ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
<Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
tempDir="/usr/share/tomcat/temp/"
deployDir="/usr/share/tomcat/webapps/"
watchDir="/usr/share/tomcat/watch/"
watchEnabled="true"/>
</Cluster>
</Host>
Is Tomcat clustering good to use in production, or is there an alternative way to do session replication? Or am I missing anything in the above configuration that could be fine-tuned?
Any ideas are welcome. Thanks!
One session-failover / session-replication solution for Tomcat is memcached-session-manager (msm), which supports both sticky and non-sticky sessions. msm uses memcached (or any backend speaking the memcached protocol) as the backend for session backup/storage.
In sticky mode, sessions are still kept in Tomcat, and memcached is only used as an additional backup for session failover.
In non-sticky mode, sessions are stored only in memcached and no longer in Tomcat, since with non-sticky sessions the session store must be external (to avoid stale data).
There's also special support for membase / membase buckets, which is useful for hosted solutions where you get access to a certain bucket with the appropriate authentication.
Session serialization is pluggable, so you're not tied to Java serialization (and classes implementing Serializable). For example, there's a Kryo serializer available, which is one of the fastest serialization strategies around.
The msm home page mainly describes the sticky-session approach; for details on non-sticky sessions you might search or ask on the mailing list.
Details and examples regarding the configuration can be found in the msm wiki (SetupAndConfiguration).
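To give a rough idea of what this looks like in practice, here is a sketch of a sticky setup in the webapp's context.xml; the memcached node addresses and the Kryo transcoder are just example choices, the wiki has the authoritative list of options:
<Context>
    <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
             memcachedNodes="n1:host1:11211,n2:host2:11211"
             sticky="true"
             requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
             transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"/>
</Context>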
The iBatis framework has been significantly tweaked between versions 2 and 3, so much so that even the config file (now often referred to as MapperConfig.xml) is different.
That being said, there are lots of examples online on how to create a JDBC connection pool with iBatis, but I couldn't find a single example of how to do it with JNDI. There is an updated user guide at http://svn.apache.org/repos/asf/ibatis/java/ibatis-3/trunk/doc/en/iBATIS-3-User-Guide.pdf which does refer to the JNDI settings on page 19, but I still couldn't get it to communicate correctly with the database.
A working example of a JNDI (container-managed connection pool) setup in iBatis 3 would be greatly appreciated!
Assuming you've already got a JNDI database resource set up, the following environment for iBatis 3's configuration XML file works for me (running on Tomcat):
<environment id="development">
<transactionManager type="JDBC"/>
<dataSource type="JNDI">
<property name="data_source" value="java:comp/env/jdbc/webDb"/>
</dataSource>
</environment>
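If the JNDI resource itself isn't set up yet, a typical declaration on the Tomcat side goes into the webapp's context.xml, roughly like this (the MySQL driver, URL and credentials are placeholders; substitute your own):
<Context>
    <Resource name="jdbc/webDb"
              auth="Container"
              type="javax.sql.DataSource"
              driverClassName="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost:3306/webdb"
              username="dbuser"
              password="dbpass"
              maxActive="20"
              maxIdle="10"/>
</Context>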
This is what I have in my config file, works well in Glassfish and WebSphere:
<dataSource type="JNDI">
<property name ="data_source" value="jdbc/cpswebmon"/>
</dataSource>
"jdbc/cpswebmon" is the JNDI resource name on my application server