Don't stop Infinispan cache containers in WildFly (JBoss) on redeploy

I use WildFly 8.2 with Immutant 2.1 (the application is in Clojure).
Every time I redeploy my application in a WildFly cluster, its Infinispan web cache container is restarted and all user sessions are lost.
Is it possible to keep the cache container from restarting, or to flush its data to disk before redeploying?
Thanks in advance
UPD:
Thanks to sprockets' answer, I was able to find a configuration for the web cache container that does what I needed:
<cache-container name="web" default-cache="repl" module="org.wildfly.clustering.web.infinispan" aliases="standard-session-cache">
<transport lock-timeout="60000"/>
<replicated-cache name="repl" batching="true" mode="ASYNC">
<file-store preload="true" passivation="false" purge="false" relative-to="jboss.home.dir" path="infinispan-file-store">
<write-behind/>
</file-store>
</replicated-cache>
</cache-container>
The problem is that the web cache container is very picky about its configuration and throws rather uninformative exceptions if you use incompatible settings.
So basically, you only need to add
    <file-store preload="true" passivation="false" purge="false" relative-to="jboss.home.dir" path="infinispan-file-store">
        <write-behind/>
    </file-store>
to the container configuration.

Infinispan provides cache passivation and cache activation to do this:
https://docs.jboss.org/author/display/ISPN/Cache+Loaders+and+Stores#CacheLoadersandStores-CachePassivation
A cache loader can be used to enforce entry passivation and
activation on eviction in a cache. Cache passivation is the process of
removing an object from in-memory cache and writing it to a secondary
data store (e.g., file system, database) on eviction. Cache Activation
is the process of restoring an object from the data store into the
in-memory cache when it's needed to be used. In both cases, the
configured cache loader will be used to read from the data store and
write to the data store.
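For contrast with the shared-store configuration below, a passivating store in the WildFly Infinispan subsystem would look roughly like this (a minimal sketch; the cache name, eviction limit, and path are illustrative, not taken from the question):
<!-- Sketch: with passivation="true", entries are written to the store only
     when evicted from memory, and read back in when accessed again -->
<local-cache name="example">
    <eviction strategy="LRU" max-entries="1000"/>
    <file-store passivation="true" purge="true" relative-to="jboss.server.data.dir" path="example-passivation-store"/>
</local-cache>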
We use this to passivate data from a clustered hibernate-search cache-container to a file-store like this (in our standalone-full-ha.xml):
<cache-container name="hibernate-search" jndi-name="java:jboss/infinispan/container/hibernate-search" start="EAGER">
<transport lock-timeout="330000"/>
<replicated-cache name="LuceneIndexesMetadata" start="EAGER" mode="SYNC" remote-timeout="330000">
<locking striping="false" acquire-timeout="330000" concurrency-level="500"/>
<transaction mode="NONE"/>
<eviction strategy="NONE" max-entries="-1"/>
<expiration max-idle="-1"/>
<state-transfer enabled="true" timeout="480000"/>
<file-store preload="true" passivation="false" purge="false" relative-to="jboss.home.dir" path="infinispan-file-store">
<write-behind/>
</file-store>
<indexing index="NONE"/>
</replicated-cache>
The data is then available after the node is restarted.
The schema for the subsystem, describing all valid elements and attributes, can be found in the WildFly distribution, in the docs/schema directory.
See also:
https://docs.jboss.org/author/display/WFLY8/Infinispan+Subsystem
http://infinispan.org/cache-store-implementations/

Related

What is the WildFly Infinispan web cache container for?

I'm using WildFly 12.
The default configuration of the Infinispan subsystem defines a cache-container named "web". I tried to find out why this container is defined and who uses it, but could not find any explanation in the documentation or anywhere on Google so far.
standalone-full-ha-custom.xml:
<cache-container name="web" default-cache="dist" module="org.wildfly.clustering.web.infinispan">
<transport lock-timeout="60000"/>
<distributed-cache name="dist">
<locking isolation="REPEATABLE_READ"/>
<transaction mode="BATCH"/>
<file-store/>
</distributed-cache>
</cache-container>
What kind of data is stored in this cache and which components need it?
The web cache container is used for storing HTTP session information. WildFly's High Availability Guide contains information on all clustered services.
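For example, an application opts into this session cache by marking itself distributable in its deployment descriptor; a minimal web.xml sketch:
<!-- Marking the application distributable tells WildFly to store its
     HTTP sessions in the "web" cache container -->
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="3.1">
    <distributable/>
</web-app>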

How can I enable versioning?

I'd like to enable versioning for a replicated cache in a locally-running Infinispan server (8.2.4.Final; two Infinispan servers form a cluster).
This is documented in the user guide.
Quote:
10.2.5. Configuration
By default versioning will be disabled.
and the user guide contains the following snippet:
<versioning scheme="SIMPLE|NONE" />
I am using locally-running Infinispan servers; the configuration is in clustered.xml.
A fragment thereof:
<subsystem xmlns="urn:infinispan:server:core:8.2" default-cache-container="clustered">
<cache-container name="clustered" default-cache="default" statistics="true">
[...]
<replicated-cache name="demoCache" mode="ASYNC" >
<versioning scheme="SIMPLE"/>
</replicated-cache>
When I add the versioning element, startup fails with:
Caused by: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[186,6]
Message: WFLYCTL0198: Unexpected element '{urn:infinispan:server:core:8.2}versioning' encountered
The XML element versioning indeed exists in urn:infinispan:config:8.2, but not in urn:infinispan:server:core:8.2 (which is used in clustered.xml).
urn:infinispan:config:8.2 is defined in infinispan-server-8.2.4.Final/docs/schema/infinispan-config-8.2.xsd.
urn:infinispan:server:core:8.2 is defined in infinispan-server-8.2.4.Final/docs/schema/jboss-infinispan-core_8_2.xsd
How can I enable (cluster aware) versioning when running Infinispan as a separate server?
Versioning does not make sense when using Infinispan remotely, since versioning is purely used to detect write-skew situations with repeatable-read transactions, and that functionality is not really available to users in server mode.
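For comparison, versioning is configurable in embedded (library) mode, where the urn:infinispan:config:8.2 schema applies. A rough sketch (the cache name and transaction settings are illustrative):
<!-- Embedded-mode sketch (urn:infinispan:config:8.2): versioning backs
     write-skew detection for optimistic, repeatable-read transactions -->
<infinispan xmlns="urn:infinispan:config:8.2">
    <cache-container default-cache="default">
        <replicated-cache name="demoCache" mode="SYNC">
            <locking isolation="REPEATABLE_READ" write-skew="true"/>
            <transaction mode="NON_XA" locking="OPTIMISTIC"/>
            <versioning scheme="SIMPLE"/>
        </replicated-cache>
    </cache-container>
</infinispan>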

Restarting the JVM won't reload Ehcache objects

I am facing issues with Ehcache. I have set my cache to be eternal, and I am bouncing the JVMs one by one so as not to have downtime, but this is not clearing my Ehcache and reloading new objects.
<cache name="SampleServerStartupCache" maxElementsInMemory="10000" eternal="true" overflowToDisk="true" maxElementsOnDisk="10000000"
diskPersistent="false" memoryStoreEvictionPolicy="LRU" diskExpiryThreadIntervalSeconds="120" />
I believed that when the cache is set to eternal="true", a restart of the JVM would flush the cache and load the new objects. However, to avoid downtime we bounce the JVMs one by one; does that make a difference compared to a clean restart?
eternal="true" only means that elements will not expire from the cache. It has no impact on the life of elements across a JVM restart.
If you desire persistence of cache elements, you need to set diskPersistent="true" and define a disk path in your config file. You must also shut down the CacheManager properly when bouncing the JVM, otherwise the data saved on disk might be discarded on restart.
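A minimal sketch of such a configuration, reusing the cache from the question (the diskStore path is a placeholder):
<!-- ehcache.xml sketch: diskPersistent="true" plus a diskStore path keeps
     elements across JVM restarts; /var/myapp/ehcache is a placeholder -->
<ehcache>
    <diskStore path="/var/myapp/ehcache"/>
    <cache name="SampleServerStartupCache"
           maxElementsInMemory="10000"
           eternal="true"
           overflowToDisk="true"
           maxElementsOnDisk="10000000"
           diskPersistent="true"
           memoryStoreEvictionPolicy="LRU"
           diskExpiryThreadIntervalSeconds="120"/>
</ehcache>
The on-disk data is only reliable if CacheManager.shutdown() is invoked before the JVM exits (or the Ehcache shutdown hook is enabled).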
Added after comment
In your scenario, restarting a JVM will clear its cache, but loading of the cache will only happen when your application puts to it. Ehcache does not do pre-loading unless configured to. If that's what you desire, have a look at the documentation on BootstrapCacheLoader.
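For illustration, in a replicated setup a bootstrap cache loader can repopulate a cache from its peers at startup. A sketch, assuming RMI replication is already configured for the cache:
<!-- Sketch: bootstrap this cache asynchronously from cluster peers at
     startup (assumes RMI replication is configured elsewhere) -->
<cache name="SampleServerStartupCache" maxElementsInMemory="10000" eternal="true" overflowToDisk="true">
    <bootstrapCacheLoaderFactory
        class="net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory"
        properties="bootstrapAsynchronously=true, maximumChunkSizeBytes=5000000"/>
</cache>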

Caching settings in WSO2 DSS

We have different services deployed in DSS, and they need different caching behaviors:
no cache
1 hour cache
1 day cache
Is there any way to set this caching directly for each .dbs file without using the administration console?
Another way would be to define these three caches in a configuration file and then refer to them from the .dbs files.
The solution we are looking for does not use the administration console.
It is indeed possible to configure caching for data services via a configuration file, without using the management console. Each data service is deployed in the form of an Axis2 service, so the "services.xml" file that you would typically use to configure Axis2 service parameters can be used with data services too, with a slight modification: if the name of your data service is "TestDS", you have to name your services.xml file "TestDS_services.xml" and place it inside the data services deployment directory, located at "DSS_HOME/repository/deployment/server/dataservices". You can then include a caching policy with your own parameter values inside that configuration file.
It is also important to note that caching can be engaged at three levels for a data service: per service group, per service, or per operation.
A sample services.xml is shown below. Note that the two operations carry different ExpireTime values (70000 and 600000), which is how you would express your different cache durations.
<serviceGroup>
<service name="TestDS">
<!--parameter name="ServiceObjectSupplier">org.apache.axis2.engine.DefaultObjectSupplier</parameter-->
<Description>Enabling caching through sevices.xml</Description>
<operation name="op1">
<messageReceiver class="org.wso2.carbon.dataservices.core.DBInOutMessageReceiver"/>
<module ref="wso2caching"/>
<wsp:Policy
wsu:Id="WSO2CachingPolicy"
xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
<wsch:CachingAssertion xmlns:wsch="http://www.wso2.org/ns/2007/06/commons/caching">
<wsp:Policy>
<wsp:All>
<wsch:XMLIdentifier>org.wso2.caching.digest.DOMHASHGenerator</wsch:XMLIdentifier>
<wsch:ExpireTime>70000</wsch:ExpireTime>
<wsch:MaxCacheSize>1000</wsch:MaxCacheSize>
<wsch:MaxMessageSize>1000</wsch:MaxMessageSize>
</wsp:All>
</wsp:Policy>
</wsch:CachingAssertion>
</wsp:Policy>
</operation>
<operation name="op2">
<messageReceiver class="org.wso2.carbon.dataservices.core.DBInOutMessageReceiver"/>
<module ref="wso2caching"/>
<wsp:Policy
wsu:Id="WSO2CachingPolicy"
xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
<wsch:CachingAssertion xmlns:wsch="http://www.wso2.org/ns/2007/06/commons/caching">
<wsp:Policy>
<wsp:All>
<wsch:XMLIdentifier>org.wso2.caching.digest.DOMHASHGenerator</wsch:XMLIdentifier>
<wsch:ExpireTime>600000</wsch:ExpireTime>
<wsch:MaxCacheSize>1000</wsch:MaxCacheSize>
<wsch:MaxMessageSize>1000</wsch:MaxMessageSize>
</wsp:All>
</wsp:Policy>
</wsch:CachingAssertion>
</wsp:Policy>
</operation>
<operation name="op3">
</operation>
</service>
</serviceGroup>
After placing your "data_service_name"_services.xml file inside the aforesaid directory, you have to comment out the following entry in the axis2.xml configuration file, located in the "DSS_HOME/repository/conf" directory:
<listener class="org.wso2.carbon.core.deployment.DeploymentInterceptor">
Now you're good to go with your deployment. Restart the server and you'll be able to get the aforementioned functionality working.
NOTE: A lot of improvements have been made in this space for the upcoming DSS release (DSS 3.0.0).
Regards,
Prabath

Is Tomcat clustering the only way for session replication?

I tested Tomcat clustering for session replication on Ubuntu servers with Apache as the front-end load balancer. Based on that experience, I'd say it is better not to use Tomcat clustering, and instead to run each node standalone, without knowledge of the others and without any session replication: I found it slow, Tomcat took much longer to start up, and it consumed more memory. The FarmDeployer is also not always reliable at deploying, and the whole configuration must be placed under the <Host></Host> element for the farm deployer to work, and likewise for each virtual host, which leads to a huge server.xml file. Below is the Tomcat virtual-host and cluster configuration from one of the nodes I used:
<Host name="site1.mydomain.net" debug="0" appBase="webapps" unpackWARs="true" autoDeploy="true">
<Logger className="org.apache.catalina.logger.FileLogger"
directory="logs" prefix="virtual_log1." suffix=".log" timestamp="true"/>
<Context path="" docBase="/usr/share/tomcat/webapps/myapp" debug="0" reloadable="true"/>
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster">
<Manager className="org.apache.catalina.ha.session.DeltaManager"
expireSessionsOnShutdown="false"
notifyListenersOnReplication="true"/>
<Channel className="org.apache.catalina.tribes.group.GroupChannel">
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
address="192.168.1.8"
port="4001"
selectorTimeout="100"
maxThreads="6"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor">
<Member className="org.apache.catalina.tribes.membership.StaticMember"
port="4002"
securePort="-1"
host="192.168.1.9"
domain="staging-cluster"
uniqueId="{0,1,2,3,4,5,6,7,8,9}"/>
<!-- <Member className="org.apache.catalina.tribes.membership.StaticMember"
port="4002"
securePort="-1"
host="192.168.1.9"
domain="staging-cluster"
uniqueId="{0,1,2,3,4,5,6,7,8,9}"/> -->
</Interceptor>
</Channel>
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>
<Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
<ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
<Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
tempDir="/usr/share/tomcat/temp/"
deployDir="/usr/share/tomcat/webapps/"
watchDir="/usr/share/tomcat/watch/"
watchEnabled="true"/>
</Cluster>
</Host>
Is Tomcat clustering good to use in production, or is there an alternate way to do session replication? Or am I missing anything in the above configuration that could be fine-tuned?
Any ideas are welcome. Thanks!
One session-failover / session-replication solution for Tomcat is memcached-session-manager (msm), supporting both sticky and non-sticky sessions. msm uses memcached (or any backend speaking the memcached protocol) as the backend for session backup/storage.
In sticky mode, sessions are still kept in Tomcat, and memcached is only used as an additional backup, for session failover.
In non-sticky mode, sessions are only stored in memcached and no longer in Tomcat, as with non-sticky sessions the session store must be external (to avoid stale data).
There's also special support for Membase / Membase buckets, which is useful for hosted solutions where you get access to a certain bucket with the appropriate authentication.
Session serialization is pluggable, so you're not tied to java serialization (and classes implementing Serializable). E.g. there's a kryo serializer available, which is one of the fastest serialization strategies available.
The msm home page mainly describes the sticky-session approach; for details regarding non-sticky sessions, you might search or ask on the mailing list.
Details and examples regarding the configuration can be found in the msm wiki (SetupAndConfiguration).
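To give an idea of the setup, here is a minimal sticky-mode sketch for a webapp's META-INF/context.xml; the memcached node addresses and the failover node are placeholders for your environment:
<!-- memcached-session-manager, sticky mode; node addresses are placeholders.
     A kryo transcoderFactoryClass can be configured for faster serialization. -->
<Context>
    <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
             memcachedNodes="n1:memcached1:11211,n2:memcached2:11211"
             sticky="true"
             failoverNodes="n1"
             requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"/>
</Context>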
