Spring annotation-driven Ehcache not evicting using timeToLiveSeconds - spring

Sorry in advance for the long post, but I wanted to be precise.
I have a Spring MVC 3.1 web application and have just added Ehcache to it, which I am using to cache all my lists of values (drop-downs) built from the database.
Here is an example of my settings...
<!-- Cache -->
<dependency>
    <groupId>net.sf.ehcache</groupId>
    <artifactId>ehcache-core</artifactId>
    <version>2.5.0</version>
</dependency>
...
<ehcache>
    <diskStore path="java.io.tmpdir"/>
    <cache
        name="lovStore"
        maxElementsInMemory="512"
        eternal="false"
        timeToIdleSeconds="60"
        timeToLiveSeconds="60"
        overflowToDisk="false"
        memoryStoreEvictionPolicy="LRU"/>
</ehcache>
...
<cache:annotation-driven />
<bean id="cacheManager"
      class="org.springframework.cache.ehcache.EhCacheCacheManager"
      p:cache-manager-ref="ehcache"/>
<bean id="ehcache"
      class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean"
      p:config-location="classpath:ehcache.xml"/>
...
@Cacheable(value = "lovStore", key = "#code")
public Map<String, String> getLov(String code) throws ReportingManagerException {
    return MockLOVHelper.getInstance().getLov(code);
}
A lot of the tutorials on the net talk about evicting the cache with a @CacheEvict-annotated method, but that does not suit my case. I think I am better off using the timeToLiveSeconds option to expire the cache.
When I look in the logs the cache is definitely working, but the eviction is not. I've read articles saying that timeToLiveSeconds doesn't truly evict the cache, and others, like this one http://code.google.com/p/ehcache-spring-annotations/wiki/EvictExpiredElements, which say there are special settings you have to add to get the cache to evict.
Can someone please help me understand whether my cache should be evicting, and how I can evict it? I was not able to work out how to implement what that article describes.
Here are what my logs look like. But there are no signs of eviction...
2014-01-20 13:32:41,791 DEBUG [AnnotationCacheOperationSource] - Adding cacheable method 'getLov' with attribute: [CacheableOperation[public java.util.Map com.myer.reporting.dao.mock.MockLovStoreDaoImpl.getLov(java.lang.String) throws com.myer.reporting.exception.ReportingManagerException] caches=[lovStore] | condition='' | key='#code']
thanks

Ehcache does not evict elements until there is a need to do so. That is reasonable, since there is no point wasting CPU on an operation (evicting elements) that wouldn't make much difference.
If someone tries to get an element from the cache and that element's time-to-live/idle has expired, Ehcache evicts it and returns null. If the maximum number of elements or the memory limit for a cache/pool is reached, Ehcache evicts expired elements based on the eviction policy.
And, if I understand correctly, it is not guaranteed that all expired elements will be evicted, since only a sample of elements is selected for eviction (from the documentation):
@param sampledElements this should be a random subset of the population
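For reference, here is a minimal sketch of the approach the linked EvictExpiredElements article describes, adapted to the configuration above: a Spring scheduled task that forces Ehcache to evict expired entries proactively instead of waiting for the next read. The bean, the scheduling setup and the 60-second interval are assumptions, not part of the original question.
import net.sf.ehcache.CacheManager;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

// Hypothetical helper: forces eager eviction of expired elements.
// Requires <task:annotation-driven/> (or @EnableScheduling) in the Spring config.
@Component
public class EhcacheEvictionTask {

    // The native Ehcache CacheManager created by the "ehcache" factory bean above
    @Autowired
    private CacheManager ehcacheManager;

    @Scheduled(fixedDelay = 60000) // run every 60 seconds
    public void evictExpiredElements() {
        for (String name : ehcacheManager.getCacheNames()) {
            ehcacheManager.getCache(name).evictExpiredElements();
        }
    }
}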

Related

ehcache write-behind behaviour

I am using an EhCache-based CacheWriter for a write-behind cache implementation.
Here is the config:
<cache name="CACHE_JOURNALS"
maxElementsInMemory="1000"
eternal="false" timeToIdleSeconds="120" timeToLiveSeconds="120"
overflowToDisk="true" maxElementsOnDisk="10000000" diskPersistent="false"
diskExpiryThreadIntervalSeconds="120" memoryStoreEvictionPolicy="LRU">
<cacheWriter writeMode="write-behind"
maxWriteDelay="2"
rateLimitPerSecond="20"
writeCoalescing="true"
writeBatching="false"
writeBatchSize="1"
retryAttempts="2"
retryAttemptDelaySeconds="2">
<cacheWriterFactory
class="JournalCacheWriterFactory"
properties="just.some.property=test; another.property=test2"
propertySeparator=";" />
</cacheWriter>
</cache>
After I do a cache.putWithWriter:
cache.putWithWriter(new Element(key, newvalue));
another thread then reads from the cache with 'key'.
Observation:
if < 2s have passed, I get the old value
if > 2s have passed, I get the updated value (newvalue)
It seems that the cache is updated with 'key': newvalue only after the write to the datastore.
Q1. Is this the expected behaviour for write-behind?
Q2. Is there any way to get it to update the cache with 'key': newvalue as soon as the putWithWriter call completes, and then have a deferred write-behind?
From the documentation, it seems that the latter is what is implied.
Q1: No. I don't even see how that could actually happen!
Q2: N/A, since what you observe isn't the expected behaviour; the new value should be observable in the cache right away.
Could it be that you are using this cache with some kind of read-through, and are actually observing the cache entry being evicted/expired and repopulated with the old value from the datastore?
This was a naive error on my side: the code calling the @Cacheable method was in the same Spring service. Spring does not intercept calls made within the same service (self-invocation).
I refactored the cache-enabled code out into its own bean and it works as expected.
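To illustrate the self-invocation pitfall, here is a sketch using the getLov() method from the question; the surrounding class and method names are hypothetical.
// BROKEN: caller and @Cacheable method live in the same bean, so the call
// never crosses the Spring proxy and the cache is silently bypassed.
@Service
public class ReportService {

    @Cacheable(value = "lovStore", key = "#code")
    public Map<String, String> getLov(String code) throws ReportingManagerException {
        return MockLOVHelper.getInstance().getLov(code);
    }

    public Map<String, String> buildDropDown(String code) throws ReportingManagerException {
        return getLov(code); // self-invocation: @Cacheable is ignored
    }
}

// FIX (separate file): move the cached method into its own bean and inject it,
// so the call goes through the caching proxy created by <cache:annotation-driven/>.
@Service
public class LovService {

    @Cacheable(value = "lovStore", key = "#code")
    public Map<String, String> getLov(String code) throws ReportingManagerException {
        return MockLOVHelper.getInstance().getLov(code);
    }
}
With the cached method in its own bean, callers inject LovService and the proxy-based cache interceptor fires as expected.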

Hazelcast IMap.values() giving OutOfMemoryError on Tomcat

I'm still trying to get to know Hazelcast and have to decide whether or not to use it.
I wrote a simple application in which I start up the cache on (single node) server startup and load the map at the same time with about 400 entries. The object itself has two String fields. I have a service class that looks up the cache and tries to get all the values from the map.
However, I'm getting an OutOfMemoryError on Java heap space while trying to get the values out of the Hazelcast map. Eventually we plan to move to a 5-node cluster to start with.
Following is the Spring config for the cache:
<hz:hazelcast id="instance">
<hz:config>
<hz:group name="dev" password=""/>
<hz:properties>
<hz:property name="hazelcast.merge.first.run.delay.seconds">5</hz:property>
<hz:property name="hazelcast.merge.next.run.delay.seconds">5</hz:property>
</hz:properties>
<hz:network port="5701" port-auto-increment="false">
<hz:join>
<hz:multicast enabled="true" />
</hz:join>
</hz:network>
</hz:config>
</hz:hazelcast>
<hz:map instance-ref="instance" id="statusMap" name="statuses" />
Following is where the error occurs:
map = instance.getMap("statuses");
Set<Status> statuses = (Set<Status>) map.values();
return statuses;
Every other method of IMap works fine. I tried getting the keySet and the size and both worked. It is only when I try to get the values that the OutOfMemoryError shows up:
java.lang.OutOfMemoryError: Java heap space
I've tried the above with a standalone Java application and it works fine. I've also monitored with VisualVM and don't see any spike in used heap memory when the error occurs, which is all the more confusing. The available heap is 1 GB and the used heap was about 70 MB when the error occurred.
However, when I take the cache implementation out of the application, it works fine going to the database and getting the data.
I've also tried playing around with the Tomcat VM args, with no success. It is the same OutOfMemoryError when I access IMap.values(), with or without a SqlPredicate. Any help or direction in this matter will be greatly appreciated.
Thanks.
As the exception says, you're running out of heap space, since the values() method tries to return all the deserialized values at once. If they don't fit into memory, you're likely to get an OOME.
You can use paging to prevent this from happening: http://hazelcast.org/docs/latest/manual/html-single/hazelcast-documentation.html#paging-predicate-order-limit-
How big are your 400 entries?
And, like Chris said, the whole data set is being pulled into memory.
In the future we'll replace this with an iteration-based approach where we'll only pull small chunks into memory instead of the whole thing.
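A rough sketch of the paging approach mentioned above (PagingPredicate is available from Hazelcast 3.1; the page size, key type and the process(...) handler are illustrative assumptions):
import java.util.Collection;
import com.hazelcast.core.IMap;
import com.hazelcast.query.PagingPredicate;

public class StatusReader {

    // Reads the "statuses" map in fixed-size pages instead of deserializing
    // every value at once with map.values().
    public void readAllStatuses(IMap<String, Status> map) {
        PagingPredicate pagingPredicate = new PagingPredicate(50); // 50 entries per page

        Collection<Status> page = map.values(pagingPredicate);
        while (!page.isEmpty()) {
            process(page);                  // handle one page of results
            pagingPredicate.nextPage();     // advance to the next page
            page = map.values(pagingPredicate);
        }
    }

    private void process(Collection<Status> page) {
        // application-specific handling of one page
    }
}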
I figured out the issue. The Status object was implementing "com.hazelcast.nio.serialization.Portable" for serialization, but I had not configured the corresponding serialization factory. After I configured the factory as follows, it worked fine:
<hz:serialization>
<hz:portable-factories>
<hz:portable-factory factory-id="1" class-name="ApplicationPortableFactory" />
</hz:portable-factories>
</hz:serialization>
Apologies for not giving the complete background initially; I only noticed it myself later on. Thanks for replying though. I wasn't aware of the PagingPredicate and I'm now using it for sorting and paging results. Thanks again.
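For completeness, a sketch of what the ApplicationPortableFactory referenced in the XML above might look like; the class id used for Status is an assumption and must match whatever Status.getClassId() returns.
import com.hazelcast.nio.serialization.Portable;
import com.hazelcast.nio.serialization.PortableFactory;

public class ApplicationPortableFactory implements PortableFactory {

    public static final int FACTORY_ID = 1;      // must match factory-id="1" in the XML above
    public static final int STATUS_CLASS_ID = 1; // assumed class id for Status

    @Override
    public Portable create(int classId) {
        if (classId == STATUS_CLASS_ID) {
            return new Status(); // Status implements Portable and needs a no-arg constructor
        }
        throw new IllegalArgumentException("Unknown Portable class id: " + classId);
    }
}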

JPA 2 + EclipseLink : Caching Issue

I have strange behaviour with caching and JPA entities (EclipseLink 2.4.1 + Guice Persist).
I do not want to use caching, but nevertheless I randomly get an old instance that has already changed in the MySQL database.
I have tried the following:
Add @Cacheable(false) to the JPA entity.
Disable the cache properties in the persistence.xml file:
<class>MyEntity</class>
<shared-cache-mode>NONE</shared-cache-mode>
<properties>
<property name="eclipselink.cache.shared.default" value="false"/>
<property name="eclipselink.cache.size.default" value="0"/>
<property name="eclipselink.cache.type.default" value="None"/>
<property name="eclipselink.refresh" value="true"/>
<property name="eclipselink.query-results-cache" value="false"/>
<property name="eclipselink.weaving" value="false"/>
</properties>
Even with EclipseLink tracing activated, I can see the query being executed (ReadObjectQuery, name="readObject", referenceClass=XX, sql="...") when I simply call find() on the EntityManager, yet it still randomly returns an old value of that class.
Note
Perhaps this happens because different EntityManager instances are used and each has its own cache?
I have seen the following related post: Disable JPA EclipseLink 2.4 cache
If so, is it possible to clear the caches of ALL EntityManagers without using:
em.getEntityManagerFactory().getCache().evictAll();
Is it possible to clear ALL caches without using evictAll()?
evictAll() is for the shared cache, which you have already disabled anyway. EntityManager instances are required by default to have a first-level cache of their own to keep track of all managed instances they have created.
An EntityManager is meant to represent a logical transaction and so should not be long-lived. You need to throw away your EntityManagers and re-obtain them, or just clear them at logical points, rather than letting the number of managed entities in their cache grow endlessly.
This will also help limit the stale-data issue, though nothing other than pessimistic locking can eliminate it. I recommend using optimistic locking, if you aren't already, to avoid overwriting data with stale values.
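As a concrete illustration of the optimistic locking suggestion, here is a minimal sketch: adding a @Version field to the entity (MyEntity from the persistence.xml above) lets EclipseLink detect stale updates and throw an OptimisticLockException instead of silently overwriting newer data. The field name is an assumption.
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class MyEntity {

    @Id
    private Long id;

    @Version
    private long version; // incremented by the provider on every update; stale writes fail

    // ... other fields, getters and setters
}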

Infinispan : Responses contains more than 1 not equal elements

I am using Infinispan as the L2 cache and I have two application nodes. The L2 caches of the two apps are replicated. The two apps are not identical: one fills the database using web services while the other runs a GUI for the database.
Both apps read from and write to the database extensively. After running the apps I saw the following error, and I do not know what causes it.
I wonder why:
- my cache instances are not properly replicating each change to the other
- the L2 cache got two responses
- the L2 responses are not equal
ERROR org.infinispan.interceptors.InvocationContextInterceptor - ISPN000136: Execution error
2013-05-29 06:32:32 ERROR - Exception while processing event, reason: org.infinispan.loaders.CacheLoaderException: Responses contains more than 1 element and these elements are not equal, so can't decide which one to use:
[SuccessfulResponse{responseValue=TransientCacheValue{maxIdle=100000, lastUsed=1369809152081} TransientCacheValue {value=MarshalledValue{instance=, serialized=ByteArray{size=1911, array=0x0301fe0409000000..}, cachedHashCode=1816114786}#57991642}} ,
SuccessfulResponse{responseValue=TransientCacheValue{maxIdle=100000, lastUsed=1369809152116} TransientCacheValue {value=MarshalledValue{instance=, serialized=ByteArray{size=1911, array=0x0301fe0409000000..}, cachedHashCode=1816114786}#6cdaa731}} ]
My Infinispan configuration is
<globalJmxStatistics enabled="true" jmxDomain="org.infinispan" allowDuplicateDomains="true"/>
<transport
transportClass="org.infinispan.remoting.transport.jgroups.JGroupsTransport"
clusterName="infinispan-hibernate-cluster"
distributedSyncTimeout="50000"
strictPeerToPeer="false">
<properties>
<property name="configurationFile" value="jgroups.xml"/>
</properties>
</transport>
</global>
<default>
</default>
<namedCache name="my-cache-entity">
<clustering mode="replication">
<stateRetrieval fetchInMemoryState="false" timeout="60000"/>
<sync replTimeout="20000"/>
</clustering>
<locking isolationLevel="READ_COMMITTED" concurrencyLevel="1000"
lockAcquisitionTimeout="15000" useLockStriping="false"/>
<eviction maxEntries="10000" strategy="LRU"/>
<expiration maxIdle="100000" wakeUpInterval="5000"/>
<lazyDeserialization enabled="true"/>
<!--<transaction useSynchronization="true"
transactionMode="TRANSACTIONAL" autoCommit="false"
lockingMode="OPTIMISTIC"/>-->
<loaders passivation="false" shared="false" preload="false">
<loader class="org.infinispan.loaders.cluster.ClusterCacheLoader"
fetchPersistentState="false"
ignoreModifications="false" purgeOnStartup="false">
<properties>
<property name="remoteCallTimeout" value="20000"/>
</properties>
</loader>
</loaders>
</namedCache>
Replicated entity caches should be configured with state retrieval, as already indicated in the default Infinispan configuration file, and you have already done so. ClusterCacheLoader should only be used in special situations (for query caching). Why not just use the default Infinispan configuration provided? In fact, if you don't configure a config file, it will use the default one.

Coherence Flush Delay Setting

I want a cache that checks whether its own items are expired or not. My cache config is below:
<?xml version="1.0" encoding="UTF-8"?>
<cache-config>
<caching-scheme-mapping>
<cache-mapping>
<cache-name>subscriberinfo</cache-name>
<scheme-name>distributed-scheme</scheme-name>
</cache-mapping>
</caching-scheme-mapping>
<caching-schemes>
<distributed-scheme>
<scheme-name>distributed-scheme</scheme-name>
<lease-granularity>member</lease-granularity>
<service-name>DistributedCache</service-name>
<serializer>
<instance>
<class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
<init-params>
<init-param>
<param-type>String</param-type>
<param-value>rbm-shovel-pof-config.xml</param-value>
</init-param>
</init-params>
</instance>
</serializer>
<backing-map-scheme>
<local-scheme>
<unit-calculator>BINARY</unit-calculator>
<expiry-delay>24h</expiry-delay>
<flush-delay>180</flush-delay>
</local-scheme>
</backing-map-scheme>
<autostart>true</autostart>
</distributed-scheme>
</caching-schemes>
</cache-config>
But the thing is, flush-delay cannot be set. Any ideas?
Thanks
Which version of Coherence do you use?
In Coherence 3.7, flush-delay has been removed from the DTD; it has been deprecated since version 3.5.
Flushing is only active when inserting new objects (have a look at eviction-policy) or accessing expired objects (look at expiry-delay).
Coherence deprecated the FlushDelay and related settings in 3.5. All of that work is done automatically now:
Expired items are automatically removed, and the eviction / expiry events are raised accordingly
You will never see expired data in the cache; even if you try to access it just as it expires, the expiry will occur as an event before the data access occurs
Eviction (for memory limits) is now done asynchronously, so that the "sharp edges" of the side-effects of eviction (such as event storms) are spread out across natural cycles (with the cycle length calculated based on the estimated rate of eviction)
I hope that helps.
For the sake of full disclosure, I work at Oracle. The opinions and views expressed in this post are my own, and do not necessarily reflect the opinions or views of my employer.
But the thing is, flush-delay cannot be set. Any ideas?
What do you mean by this? Does the system throw errors, or are the expired items not getting removed from the cache? Based on the configuration you have, an entry should be removed from the cache within 24 hours plus 180 seconds of the last update to that entry.
