Restarting the JVM won't reload ehcache objects - ehcache

I am facing issues with ehcache. I have set my cache to be eternal, and I am bouncing the JVMs one by one so as not to have downtime, but this is not clearing my ehcache and loading new objects.
<cache name="SampleServerStartupCache" maxElementsInMemory="10000" eternal="true" overflowToDisk="true" maxElementsOnDisk="10000000"
diskPersistent="false" memoryStoreEvictionPolicy="LRU" diskExpiryThreadIntervalSeconds="120" />
I was under the impression that when the cache is set to eternal="true", a restart of the JVM should flush it and load the new objects. However, in order to avoid downtime, we are bouncing the JVMs one by one; will that make a difference compared with a clean restart?

eternal="true" only means that elements will not expire from the cache. It has no impact on the life of elements across a JVM restart.
If you desire persistence of cache elements, you need to set diskPersistent="true" and define a diskStore path in your config file. You must also do a proper shutdown of the CacheManager when bouncing the JVM, otherwise the data saved on disk might be discarded on restart.
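For illustration, here is a minimal sketch of a persistent variant of the cache above, assuming the default java.io.tmpdir location for the diskStore (in practice you would point it at a stable directory that survives restarts):
<!-- a diskStore path is required when diskPersistent="true"; the path here is an assumption -->
<diskStore path="java.io.tmpdir/ehcache"/>
<cache name="SampleServerStartupCache"
       maxElementsInMemory="10000"
       eternal="true"
       overflowToDisk="true"
       maxElementsOnDisk="10000000"
       diskPersistent="true"
       memoryStoreEvictionPolicy="LRU"
       diskExpiryThreadIntervalSeconds="120"/>
A clean CacheManager shutdown can be helped along by enabling Ehcache's shutdown hook (-Dnet.sf.ehcache.enableShutdownHook=true), although an explicit CacheManager.shutdown() call during your bounce procedure is the safer option.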
Added after comment
In your scenario, restarting a JVM will clear its cache. But loading of the cache will only happen when your application puts to it. Ehcache is not configured to do pre-loading. If that's what you desire, have a look at the documentation on BootstrapCacheLoader.
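If pre-loading is what you want, the loader is declared in ehcache.xml through a bootstrapCacheLoaderFactory element. A rough sketch, where com.example.StartupCacheLoaderFactory is a hypothetical custom factory you would implement yourself (for instance by extending net.sf.ehcache.bootstrap.BootstrapCacheLoaderFactory):
<cache name="SampleServerStartupCache" maxElementsInMemory="10000" eternal="true" overflowToDisk="true">
    <!-- com.example.StartupCacheLoaderFactory is an assumed custom class, not part of Ehcache -->
    <bootstrapCacheLoaderFactory class="com.example.StartupCacheLoaderFactory"/>
</cache>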

Related

Don't stop Infinispan cache containers in WildFly (JBoss) on redeploy

I use WildFly 8.2 with Immutant 2.1 (the application is in Clojure).
Every time I redeploy my application in the WildFly cluster, its Infinispan web cache container is restarted, and all user sessions are lost.
Is it possible to not restart the cache container, or flush data to disk before redeploy?
Thanks in advance
UPD:
Thanks to sprockets' answer, I could work out a configuration for the web container that does what I needed:
<cache-container name="web" default-cache="repl" module="org.wildfly.clustering.web.infinispan" aliases="standard-session-cache">
<transport lock-timeout="60000"/>
<replicated-cache name="repl" batching="true" mode="ASYNC">
<file-store preload="true" passivation="false" purge="false" relative-to="jboss.home.dir" path="infinispan-file-store">
<write-behind/>
</file-store>
</replicated-cache>
</cache-container>
The problem is that the web container is very picky about its configuration and throws not very informative exceptions if you use incompatible settings.
So basically, you only need to add
<file-store preload="true" passivation="false" purge="false" relative-to="jboss.home.dir" path="infinispan-file-store">
<write-behind/>
</file-store>
to the container configuration.
Infinispan provides cache passivation and cache activation to do this:
https://docs.jboss.org/author/display/ISPN/Cache+Loaders+and+Stores#CacheLoadersandStores-CachePassivation
A cache loader can be used to enforce entry passivation and activation on eviction in a cache. Cache passivation is the process of removing an object from in-memory cache and writing it to a secondary data store (e.g., file system, database) on eviction. Cache Activation is the process of restoring an object from the data store into the in-memory cache when it's needed to be used. In both cases, the configured cache loader will be used to read from the data store and write to the data store.
We use this to passivate data from a clustered hibernate-search cache-container to a file-store like this (in our standalone-full-ha.xml):
<cache-container name="hibernate-search" jndi-name="java:jboss/infinispan/container/hibernate-search" start="EAGER">
<transport lock-timeout="330000"/>
<replicated-cache name="LuceneIndexesMetadata" start="EAGER" mode="SYNC" remote-timeout="330000">
<locking striping="false" acquire-timeout="330000" concurrency-level="500"/>
<transaction mode="NONE"/>
<eviction strategy="NONE" max-entries="-1"/>
<expiration max-idle="-1"/>
<state-transfer enabled="true" timeout="480000"/>
<file-store preload="true" passivation="false" purge="false" relative-to="jboss.home.dir" path="infinispan-file-store">
<write-behind/>
</file-store>
<indexing index="NONE"/>
</replicated-cache>
</cache-container>
The data is then available after the node is restarted.
The schema for the subsystem, describing all valid elements and attributes, can be found in the WildFly distribution, in the docs/schema directory.
See also:
https://docs.jboss.org/author/display/WFLY8/Infinispan+Subsystem
http://infinispan.org/cache-store-implementations/

Liferay custom entity caching

I am using Liferay version 6.1.
I have created custom entities for a portlet using Service Builder. I want to cache those custom entities.
I have set the following properties in my portal-ext.properties to enable caching.
ehcache.statistics.enabled=true
value.object.entity.cache.enabled=true
value.object.finder.cache.enabled=true
velocity.engine.resource.manager.cache.enabled=true
layout.template.cache.enabled=true
net.sf.ehcache.configurationResourceName=/custom_cache/hibernate-clustered.xml
log4j.logger.net.sf.ehcache=DEBUG
log4j.logger.net.sf.ehcache.config=DEBUG
log4j.logger.net.sf.ehcache.distribution=DEBUG
log4j.logger.net.sf.ehcache.code=DEBUG
I created an ehcache.xml file to override ehcache-failsafe.xml and configure my custom entities so that they are enabled for caching.
My ehcache.xml file is in my classpath [classpath:liferay-portal-6.1.1-ce-ga2/tomcat-7.0.27/webapps/ROOT/WEB-INF/classes].
<diskStore path="java.io.tmpdir/ehcache"/>
<defaultCache
maxElementsInMemory="10000"
eternal="false"
timeToIdleSeconds="120"
timeToLiveSeconds="120"
overflowToDisk="true"
maxElementsOnDisk="10000000"
diskPersistent="false"
diskExpiryThreadIntervalSeconds="120"
memoryStoreEvictionPolicy="LRU"
/>
<cache
eternal="false"
maxElementsInMemory="10000"
name="com.pr.test.model.impl.StudentImpl"
overflowToDisk="false"
timeToIdleSeconds="600"
timeToLiveSeconds="300"
statistics="true"
copyOnRead="true"
copyOnWrite="true"
clearOnFlush="true"
transactionalMode="off"
/>
I also created a hibernate-clustered.xml file under the src path [/docroot/WEB-INF/src], which is the same as my ehcache.xml file.
Since I am using Service Builder, is cache-enabled="true" enough to cache the entities?
I use JConsole to monitor the cache hits, but the problem is that the percentage of cache misses is higher than that of cache hits. Below are my statistics for caching:
Any help will be appreciated.
Caching is enabled by default for services built using Liferay Service Builder.
I believe none of the steps mentioned above are required, as the cache is enabled by default.
The properties below are set to true in the default portal.properties and apply to all entities, not just custom entities.
value.object.entity.cache.enabled=true
value.object.finder.cache.enabled=true
You can open the *PersistenceImpl.java class for your custom entities to observe the caching code. Debugging this class could give you details on why it is not hitting the cache.
For example, calling the API with the cache turned off as an argument won't hit the cache.
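For reference, entity-level caching is controlled per entity in Service Builder's service.xml. A minimal sketch for the Student entity from the question (the columns are placeholders, and cache-enabled already defaults to true):
<entity name="Student" local-service="true" remote-service="false" cache-enabled="true">
    <!-- the columns below are assumed for illustration only -->
    <column name="studentId" type="long" primary="true"/>
    <column name="name" type="String"/>
</entity>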

ehcache.xml configuration for cachedecorator SelfPopulatingCache in Spring framework

After a little research on Ehcache, we found that net.sf.ehcache.constructs.blocking.UpdatingSelfPopulatingCache is the best match for our requirement.
I have followed the suggestions given in the forum post (https://sourceforge.net/forum/message.php?msg_id=7382649) and realized that I can only replace the cache with its decorated version programmatically, with the help of a helper method on the CacheManager.
The question I wanted to ask is this: using the ehcache.xml file, I was able to describe all the Ehcache caches I want to use in my system. I feel it is intuitive to declare the Ehcache decorators in the XML as well, so that I can centralize all my configuration in one place, i.e. the ehcache XML. Is there a way to add configuration for a decorator in ehcache.xml itself?
I found a good example of using custom cache decorators: Specifying global EhCache capacity
<cache name="singleSharedCache" maxElementsInMemory="2000"
eternal="false" overflowToDisk="false">
<cacheDecoratorFactory class="com.xyz.util.CustomEhcacheDecoratorFactory"
properties="name=org.hibernate.tutorial.domain.Person" />
<cacheDecoratorFactory class="com.xyz.util.CustomEhcacheDecoratorFactory"
properties="name=org.hibernate.tutorial.domain.Event" />
</cache>
You should find all the information on this in the cache decorators documentation.
And it is indeed possible to declare decorated caches from XML.
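As a sketch tied to the original requirement: a self-populating decorator can be declared the same way, but the factory class below is hypothetical and would have to be written by extending net.sf.ehcache.constructs.CacheDecoratorFactory (constructing the UpdatingSelfPopulatingCache and its entry factory inside it), since as far as I know Ehcache does not ship a ready-made factory for it:
<cache name="quoteCache" maxElementsInMemory="2000" eternal="false" overflowToDisk="false">
    <!-- com.example.UpdatingSelfPopulatingCacheDecoratorFactory is an assumed custom class -->
    <cacheDecoratorFactory class="com.example.UpdatingSelfPopulatingCacheDecoratorFactory"
                           properties="name=quoteCacheSelfPopulating"/>
</cache>
A decorated cache declared with a distinct name property like this can then be looked up from the CacheManager under that name.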

Ehcache RMI Replication Not Working. Need to know how to visualise replication?

I have set up my ehcache replication using RMI. However, I don't see any replication happening, nor do I get any error.
Can you please take a look and let me know where I'm going wrong? I have tried both automatic and manual modes of discovery, but to no avail.
I have enabled TRACE logging for the net.sf hierarchy but I don't see any activity. Can you please let me know how to visualise the replication via logs? What category to add, etc.?
My test scenario
I access the JSP on host1 and thereafter on host2. When I then go back to host1, I expect to see some logs of replication, or the replicated value coming back, which I don't see.
Any help is appreciated. I have been struggling with this for quite some time now.
My config is as follows
on host 1
<cacheManagerPeerProviderFactory
class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
properties="peerDiscovery=manual,
rmiUrls=//host2:40001/reportsCache"/>
<cacheManagerPeerListenerFactory
class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
properties="hostname=host1, port=40001, socketTimeoutMillis=3000"/>
on host 2
<cacheManagerPeerProviderFactory
class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
properties="peerDiscovery=manual,
rmiUrls=//host1:40001/reportsCache"/>
<cacheManagerPeerListenerFactory
class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
properties="hostname=host2, port=40001, socketTimeoutMillis=3000"/>
My ehcache config is as follows
<cache name="reportsCache"
maxElementsInMemory="1000"
maxElementsOnDisk="100"
eternal="false"
overflowToDisk="true"
timeToIdleSeconds="15"
timeToLiveSeconds="15"
statistics="true">
<cacheEventListenerFactory class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
properties="replicateAsynchronously=true,
replicatePuts=true,
replicatePutsViaCopy=true,
replicateUpdates=true,
replicateUpdatesViaCopy=true,
replicateRemovals=true" />
<bootstrapCacheLoaderFactory
class="net.sf.ehcache.distribution.RMIBootstrapCacheLoaderFactory"/>
The relevant logger used is called net.sf.ehcache.distribution.RMICachePeer, where you have to enable DEBUG level.
How to configure the logger really depends on your logging framework. For log4j you would use something like the following in your log4j.properties file:
log4j.logger.net.sf.ehcache.distribution.RMICachePeer=DEBUG
For log4j2 put the following in your log4j2.xml:
<loggers>
...
<logger name="net.sf.ehcache.distribution.RMICachePeer" level="debug" />
</loggers>
There was no problem as such. I think my testing was a bit off the mark.
I had the same JSP being accessed from two nodes (by two different browser sessions) and was relying on my logs for cues about replication. I changed my approach to check the content/data on the JSP instead, and I see that the other node fetches the same data after a hit on the first node has been updated. So this is enough for my test.
However, I'm still curious to know how to see any logs about replication events.
I turned on an instance of ehcache-debugger too, but I didn't see anything coming up.
If anyone here can tell me how to see the logs pertaining to replication events, that would be great. For the record, I have set the log level to TRACE for the net.sf package.

Is Tomcat clustering the only way to do session replication?

I tested Tomcat clustering for session replication on Ubuntu servers with Apache as the front-end load balancer. From my testing experience, I would say it is better not to use Tomcat clustering and instead run each node standalone, with no knowledge of one another and no session replication: I felt it was slow, took much longer to start the Tomcat service, and consumed more memory. The FarmDeployer is also not always reliable when deploying, and the whole configuration has to be placed under the <Host></Host> element for the farm deployer to work, and repeated for each virtual host, resulting in a huge server.xml file. Below is the Tomcat virtual-host and cluster configuration from one of the nodes I used.
<Host name="site1.mydomain.net" debug="0" appBase="webapps" unpackWARs="true" autoDeploy="true">
<Logger className="org.apache.catalina.logger.FileLogger"
directory="logs" prefix="virtual_log1." suffix=".log" timestamp="true"/>
<Context path="" docBase="/usr/share/tomcat/webapps/myapp" debug="0" reloadable="true"/>
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster">
<Manager className="org.apache.catalina.ha.session.DeltaManager"
expireSessionsOnShutdown="false"
notifyListenersOnReplication="true"/>
<Channel className="org.apache.catalina.tribes.group.GroupChannel">
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
address="192.168.1.8"
port="4001"
selectorTimeout="100"
maxThreads="6"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor">
<Member className="org.apache.catalina.tribes.membership.StaticMember"
port="4002"
securePort="-1"
host="192.168.1.9"
domain="staging-cluster"
uniqueId="{0,1,2,3,4,5,6,7,8,9}"/>
<!-- <Member className="org.apache.catalina.tribes.membership.StaticMember"
port="4002"
securePort="-1"
host="192.168.1.9"
domain="staging-cluster"
uniqueId="{0,1,2,3,4,5,6,7,8,9}"/> -->
</Interceptor>
</Channel>
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>
<Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
<ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
<Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
tempDir="/usr/share/tomcat/temp/"
deployDir="/usr/share/tomcat/webapps/"
watchDir="/usr/share/tomcat/watch/"
watchEnabled="true"/>
</Cluster>
</Host>
Is Tomcat clustering good to use in production, or is there an alternative way to do session replication? Or am I missing anything in the above configuration that could be fine-tuned?
Any ideas are welcome. Thanks!
One session-failover / session-replication solution for Tomcat is memcached-session-manager (msm), supporting both sticky and non-sticky sessions. msm uses memcached (or any backend speaking the memcached protocol) as the backend for session backup/storage.
In sticky mode sessions are still kept in tomcat, and memcached is only used as an additional backup - for session failover.
In non-sticky mode sessions are only stored in memcached and no longer in tomcat, as with non-sticky sessions the session-store must be external (to avoid stale data).
There's also special support for membase / membase buckets, which is useful for hosted solutions where you get access to a certain bucket with the appropriate authentication.
Session serialization is pluggable, so you're not tied to java serialization (and classes implementing Serializable). E.g. there's a kryo serializer available, which is one of the fastest serialization strategies available.
The msm home page mainly describes the sticky session approach, for details regarding non-sticky sessions you might search or ask on the mailing list.
Details and examples regarding the configuration can be found in the msm wiki (SetupAndConfiguration).
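For orientation, a minimal sketch of the msm Manager element in a webapp's META-INF/context.xml for the sticky setup (the host names, ports and ignore pattern are placeholders; see the SetupAndConfiguration wiki page for the authoritative options):
<Context>
    <!-- the memcached nodes and ignore pattern below are example values -->
    <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
             memcachedNodes="n1:host1:11211,n2:host2:11211"
             sticky="true"
             requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"/>
</Context>
For the non-sticky mode you would set sticky="false", since sessions are then stored only in memcached.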
