Ehcache write-behind behaviour

I am using an EhCache-based CacheWriter for a write-behind cache implementation.
Here is the config:
<cache name="CACHE_JOURNALS"
       maxElementsInMemory="1000"
       eternal="false" timeToIdleSeconds="120" timeToLiveSeconds="120"
       overflowToDisk="true" maxElementsOnDisk="10000000" diskPersistent="false"
       diskExpiryThreadIntervalSeconds="120" memoryStoreEvictionPolicy="LRU">
    <cacheWriter writeMode="write-behind"
                 maxWriteDelay="2"
                 rateLimitPerSecond="20"
                 writeCoalescing="true"
                 writeBatching="false"
                 writeBatchSize="1"
                 retryAttempts="2"
                 retryAttemptDelaySeconds="2">
        <cacheWriterFactory class="JournalCacheWriterFactory"
                            properties="just.some.property=test; another.property=test2"
                            propertySeparator=";" />
    </cacheWriter>
</cache>
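For reference, a minimal sketch of the kind of writer the cacheWriterFactory above would create; the real JournalCacheWriterFactory is not shown here, so the class bodies below are assumptions against the Ehcache 2.x API:

import java.util.Properties;

import net.sf.ehcache.CacheEntry;
import net.sf.ehcache.Ehcache;
import net.sf.ehcache.Element;
import net.sf.ehcache.writer.AbstractCacheWriter;
import net.sf.ehcache.writer.CacheWriter;
import net.sf.ehcache.writer.CacheWriterFactory;

// Hypothetical factory matching the class name referenced in the XML above.
public class JournalCacheWriterFactory extends CacheWriterFactory {

    @Override
    public CacheWriter createCacheWriter(Ehcache cache, Properties properties) {
        // The properties attribute from ehcache.xml arrives here, e.g.
        // properties.getProperty("just.some.property") -> "test"
        return new JournalCacheWriter();
    }

    // AbstractCacheWriter provides default implementations; only the
    // operations this cache actually uses are overridden.
    static class JournalCacheWriter extends AbstractCacheWriter {

        @Override
        public void write(Element element) {
            // Persist the element to the journal datastore (placeholder).
            System.out.println("writing " + element.getObjectKey());
        }

        @Override
        public void delete(CacheEntry entry) {
            // Remove the corresponding row from the datastore (placeholder).
            System.out.println("deleting " + entry.getKey());
        }
    }
}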
After I do a cache.putWithWriter:
cache.putWithWriter(new Element(key, newvalue));
Another thread then reads from the cache with 'key'.
Observation:
if less than 2 s have elapsed, I get the old value
if more than 2 s have elapsed, I get the updated value (newvalue)
It seems that the cache is updated with 'key': newvalue only after the write to the datastore completes.
Q1. Is this the expected behaviour for write-behind?
Q2. Is there any way to get it to update the cache with 'key': newvalue as soon as the putWithWriter call completes, and then have a deferred write-behind?
From the documentation, it seems that the latter is what is implied.

Q1: No. I don't even see how that would actually happen!
Q2: n/a, since what you observe isn't the expected behaviour; the new value should be observable in the cache right away.
Could it be that you use this cache with some kind of read-through and are actually observing the cache entry being evicted/expired and then repopulated with the old value from the datastore?
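As a sanity check, here is a minimal sketch of the behaviour described above, assuming the CACHE_JOURNALS configuration from the question (key and value are illustrative):

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

// The in-memory value is replaced by putWithWriter immediately; the
// CacheWriter.write() call happens later on the write-behind thread.
public class WriteBehindVisibilityCheck {
    public static void main(String[] args) {
        CacheManager manager = CacheManager.newInstance();   // reads ehcache.xml from the classpath
        Cache cache = manager.getCache("CACHE_JOURNALS");

        cache.putWithWriter(new Element("key", "newvalue"));

        // Expected: "newvalue" right away, not the old value.
        Element e = cache.get("key");
        System.out.println(e == null ? null : e.getObjectValue());

        manager.shutdown();
    }
}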

This was a naive error on my side: the code calling the @Cacheable method was in the same Spring service.
Spring does not intercept calls made from within the same service (self-invocation bypasses the proxy).
I refactored the cache-enabled code out into its own bean and it works as expected.
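For illustration, a minimal sketch of the before/after refactoring (bean, method and cache names are made up; only the structure matters):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

// Before: a @Cacheable method invoked from another method of the *same* bean is a
// plain this-call, so the Spring caching proxy never sees it.
// After: the cacheable method lives in a separate bean, so every call crosses the proxy.

@Service
class JournalService {

    @Autowired
    private JournalLookupService lookupService;   // separate bean -> call is proxied

    public String process(String key) {
        return lookupService.lookup(key);         // cache is consulted here
    }
}

@Service
class JournalLookupService {

    @Cacheable(value = "CACHE_JOURNALS", key = "#key")
    public String lookup(String key) {
        // Expensive datastore read; only executed on a cache miss.
        return "value-for-" + key;                // placeholder
    }
}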

Related

Drools loading session seems to fire rules

I am at a loss with this and can't seem to find an answer in the docs. I am observing the following behaviour. I have this rule:
import function util.CSVParser.parse;

declare Passenger
    #role(event)
    #expires(24h)
end

rule "Parse and Insert CSV"
when
    CSVReadyEvent( $csv_location : reader ) from entry-point "CSVReadyEntryPoint";
    $p : Passenger() from parse($csv_location);
then
    insert( $p );
end
I can then enter my CSVReadyEvent into my session and call fireAllRules and it executes correctly. It hits the safe point at the end, and all is cool.
I then restart my app and load the session like this:
KieSession loadedKieSession = kieServices.getKieService().getStoreServices().loadKieSession(session.getId(), kieBase, ksConf, kieServices.getEnvironment());
The base and config I take from my kmodule.xml.
What happens now is that WITHOUT calling fireAllRules(), loading the session somehow triggers firing all rules.
I do not understand how unmarshalling triggers rule execution, but this is obviously wrong. I have already executed that rule, and it should not be executed twice.
In a test case (my tests do NOT create persistent sessions because I only want the rules to be tested) I can call fireAllRules() twice, and the second time does not trigger any matched rules. I am not exactly sure what goes wrong, but the persistent session seems to be loaded in an odd way. Or the persisting of the session is wonky and forgets that it had executed the rule already.
Does anyone have insight into this? I am more than happy to share any code.
Here's my persistence.xml:
<persistence-unit name="org.jbpm.persistence.jpa" transaction-type="JTA">
    <provider>org.hibernate.ejb.HibernatePersistence</provider>
    <class>org.drools.persistence.info.SessionInfo</class>
    <class>org.drools.persistence.info.WorkItemInfo</class>
    <exclude-unlisted-classes>true</exclude-unlisted-classes>
    <properties>
        <property name="hibernate.dialect" value="org.hibernate.dialect.MySQLDialect" />
        <property name="hibernate.max_fetch_depth" value="30" />
        <property name="hibernate.hbm2ddl.auto" value="update" />
        <property name="hibernate.show_sql" value="true" />
        <property name="hibernate.transaction.jta.platform" value="org.hibernate.service.jta.platform.internal.JBossStandAloneJtaPlatform" />
    </properties>
</persistence-unit>
Thanks!
An update/answer from a painful painful painful day of debugging and testing and running stuff:
I suspected my hibernate setup was wrong, so the wrong thing got persisted. I ended up throwing that approach away and writing a manual marshalling/de-marshalling thing.
After creating/loading/recreating/loading I can confirm the session NEVER changes on file.
This was interesting to me because I could swear that the rules are executed, and I was half right:
The WHEN part is executed when the session is loaded. Why? I have not the slightest idea...
I was chasing a red herring, because I am calling a function in my when part (as you can see in the rule) to iterate over and insert all facts based on the event I am receiving.
My parse function obviously has logging, so each time I reload the session I get a storm of log output flying through my terminal, hinting that my rules are being executed.
I then changed my rules to be very, very specific (as in output everywhere I possibly can). I debugged as deeply as I could and I still cannot pinpoint why on earth recreating the session executes the when part of a rule. I settled on this: magic. And with a little more detail:
The documentation of Drools persistence https://docs.jboss.org/jbpm/v6.2/userguide/jBPMPersistence.html states that the guys implemented their own serialize/deserialize strategy in order to speed up the process. I'm inclined to blame this custom strategy for what I am seeing.
Lessons learned:
Do NOT create objects in the when part (because this will slow you down when loading a session, since all when parts are executed).
Chasing red herrings is a pain in my butt.
So to sum up: I believe (up to say 99%) that loading a session is NOT executing the rules.
Using events in real time in a STREAM-mode session driven by fireUntilHalt on the one hand, and saving and restarting sessions with fireAllRules on the other, are somewhat contradictory paradigms.
If you have events, I suggest that you use the API to set up and start a (stateful) session in a thread, and insert facts (events) as they arrive.
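A minimal sketch of that setup (the kmodule.xml is assumed to define a default stateful session containing the rules from the question; the CSVReadyEvent stand-in below only approximates the question's event class):

import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.rule.EntryPoint;

public class CsvEventFeeder {

    // Stand-in for the question's event class; its field and constructor are assumed.
    public static class CSVReadyEvent {
        private final String reader;
        public CSVReadyEvent(String reader) { this.reader = reader; }
        public String getReader() { return reader; }
    }

    public static void main(String[] args) {
        KieServices ks = KieServices.Factory.get();
        KieContainer container = ks.getKieClasspathContainer();
        KieSession session = container.newKieSession();   // default stateful session from kmodule.xml

        // Rule evaluation runs continuously on its own thread.
        Thread engine = new Thread(session::fireUntilHalt, "drools-engine");
        engine.start();

        // Feed events into the entry point as they arrive, instead of
        // saving and reloading the session between runs.
        EntryPoint csvReady = session.getEntryPoint("CSVReadyEntryPoint");
        csvReady.insert(new CSVReadyEvent("/data/passengers.csv"));

        // ... on shutdown:
        // session.halt();
        // session.dispose();
    }
}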

Mule - Caching Strategy - Session Clear

I am using Mule 3.5.0 and trying to implement a caching strategy. The cache is supposed to be hit by APIs to grab a Sugar CRM OAuth token. Multiple endpoints are hitting this cache.
My requirement is to keep only one active element in the cache, which serves this active token to every API call for 5 minutes. When the TTL expires, the cache should grab another token and cache it for subsequent calls.
The problem arises when multiple inbound endpoints hit the cache: old values are also being spit out by the cache. Is all I need to do to change maxEntries to 1, or is there a better way of achieving this?
<ee:object-store-caching-strategy name="Caching_Strategy" doc:name="Caching Strategy">
    <in-memory-store name="sugar-cache-in-memory" maxEntries="500" entryTTL="300000" expirationInterval="300000"/>
</ee:object-store-caching-strategy>
<flow name="get-oauth-token-cache" doc:name="get-oauth-token-cache" tracking:enable-default-events="true">
    <ee:cache cachingStrategy-ref="Caching_Strategy" doc:name="Cache">
        ..............................
        ..............................
        ..............................
        <logger message="------------------------ Direct Call for Token----------------------" level="INFO" doc:name="Logger"/>
        <DATAMAPPER to set #payload.access_token />
    </ee:cache>
    <set-session-variable variableName="access_token" value="#[payload.access_token]" doc:name="Session Variable"/>
</flow>
The problem was that the first element inside ee:cache was a Set Payload transformer. I had to move it outside the Cache Scope.
Sorry.

Hazelcast IMap.values() giving OutOfMemoryError on Tomcat

I'm still trying to get to know Hazelcast and have to make a decision on whether to use it or not.
I wrote a simple application wherein I start up the cache on (single-node) server startup and load the map at the same time with about 400 entries. The object itself has two String fields. I have a service class that looks up the cache and tries to get all the values from the map.
However, I'm getting an OutOfMemoryError on Java heap space while trying to get the values out of the Hazelcast map. Eventually we plan to move to a 5-node cluster to start with.
Following is the Spring config for the cache:
<hz:hazelcast id="instance">
    <hz:config>
        <hz:group name="dev" password=""/>
        <hz:properties>
            <hz:property name="hazelcast.merge.first.run.delay.seconds">5</hz:property>
            <hz:property name="hazelcast.merge.next.run.delay.seconds">5</hz:property>
        </hz:properties>
        <hz:network port="5701" port-auto-increment="false">
            <hz:join>
                <hz:multicast enabled="true" />
            </hz:join>
        </hz:network>
    </hz:config>
</hz:hazelcast>
<hz:map instance-ref="instance" id="statusMap" name="statuses" />
Following is where the error occurs:
map = instance.getMap("statuses");
Set<Status> statuses = (Set<Status>) map.values();
return statuses;
Any other method of IMap works fine. I tried getting the keySet and the size and both worked fine. It is only when I try to get the values that the OutOfMemoryError shows up:
java.lang.OutOfMemoryError: Java heap space
I've tried the above with a standalone Java application and it works fine. I've also monitored with VisualVM and don't see any spike in used heap memory when the error occurs, which is all the more confusing. The available heap is 1 GB and the used heap was about 70 MB when the error occurred.
However, when I take the cache implementation out of the application, it works fine going to the database and getting the data.
I've also tried playing around with the Tomcat VM args, to no success. It is the same OutOfMemoryError when I access IMap.values(), with or without a SqlPredicate. Any help or direction in this matter will be greatly appreciated.
Thanks.
As the exception mentions, you're running out of heap space, since the values() method tries to return all deserialized values at once. If they don't fit into memory you're likely to get an OOME.
You can use paging to prevent this from happening: http://hazelcast.org/docs/latest/manual/html-single/hazelcast-documentation.html#paging-predicate-order-limit-
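A minimal sketch of that paging approach (Hazelcast 3.x API; the map name comes from the question, the value type and page size are illustrative):

import java.util.Collection;

import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.query.PagingPredicate;

public class PagedStatusReader {

    @SuppressWarnings({"rawtypes", "unchecked"})
    public static void readInPages(HazelcastInstance instance) {
        IMap<String, Object> map = instance.getMap("statuses");

        // 50 values per page; if the stored values are not Comparable,
        // pass a Comparator to the PagingPredicate constructor instead.
        PagingPredicate pagingPredicate = new PagingPredicate(50);

        Collection<Object> page = map.values(pagingPredicate);
        while (!page.isEmpty()) {
            // process this page instead of pulling the whole map at once ...
            pagingPredicate.nextPage();
            page = map.values(pagingPredicate);
        }
    }
}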
How big are your 400 entries?
And like Chris said, the whole data set is being pulled into memory.
In the future we'll replace this with an iteration-based approach, where we'll only pull small chunks into memory instead of the whole thing.
I figured out the issue. The Status object was implementing com.hazelcast.nio.serialization.Portable for serialization, but I had not configured the corresponding serialization factory. After I configured the factory as follows, it worked fine:
<hz:serialization>
    <hz:portable-factories>
        <hz:portable-factory factory-id="1" class-name="ApplicationPortableFactory" />
    </hz:portable-factories>
</hz:serialization>
Apologies for not giving the complete background initially, as I only noticed this later on myself. Thanks for replying though. I wasn't aware of the PagingPredicate and now I'm using it for sorting and paging results. Thanks again.
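For completeness, a hedged sketch of what such a registered factory can look like. The real ApplicationPortableFactory and the fields of Status are not shown in the question, so the class IDs and field names below are assumptions:

import java.io.IOException;

import com.hazelcast.nio.serialization.Portable;
import com.hazelcast.nio.serialization.PortableFactory;
import com.hazelcast.nio.serialization.PortableReader;
import com.hazelcast.nio.serialization.PortableWriter;

public class ApplicationPortableFactory implements PortableFactory {

    public static final int FACTORY_ID = 1;        // must match factory-id in the XML above
    public static final int STATUS_CLASS_ID = 1;   // assumed class id

    @Override
    public Portable create(int classId) {
        if (classId == STATUS_CLASS_ID) {
            return new Status();
        }
        return null;
    }

    // Illustrative version of the question's Status object with two String fields.
    public static class Status implements Portable {
        private String code;
        private String description;

        @Override
        public int getFactoryId() { return FACTORY_ID; }

        @Override
        public int getClassId() { return STATUS_CLASS_ID; }

        @Override
        public void writePortable(PortableWriter writer) throws IOException {
            writer.writeUTF("code", code);
            writer.writeUTF("description", description);
        }

        @Override
        public void readPortable(PortableReader reader) throws IOException {
            code = reader.readUTF("code");
            description = reader.readUTF("description");
        }
    }
}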

Spring annotation-driven Ehcache not evicting using timeToLiveSeconds

Sorry in advance for the long post, however I wanted to be precise.
I have a Spring MVC 3.1 web application and have just built Ehcache into it, which I am using to cache all my lists of values (drop-downs) built from the database.
Here is an example of my settings...
<!-- Cache -->
<dependency>
    <groupId>net.sf.ehcache</groupId>
    <artifactId>ehcache-core</artifactId>
    <version>2.5.0</version>
</dependency>
...
<ehcache>
    <diskStore path="java.io.tmpdir"/>
    <cache
        name="lovStore"
        maxElementsInMemory="512"
        eternal="false"
        timeToIdleSeconds="60"
        timeToLiveSeconds="60"
        overflowToDisk="false"
        memoryStoreEvictionPolicy="LRU"/>
</ehcache>
...
<cache:annotation-driven />
<bean id="cacheManager"
      class="org.springframework.cache.ehcache.EhCacheCacheManager"
      p:cache-manager-ref="ehcache"/>
<bean id="ehcache"
      class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean"
      p:config-location="classpath:ehcache.xml"/>
...
#Cacheable(value = "lovStore", key = "#code")
public Map<String, String> getLov(String code) throws ReportingManagerException {
return MockLOVHelper.getInstance().getLov(code);
}
A lot of the tutorials on the net talk about evicting the cache using a @CacheEvict method, however this does not suit me. I think I am better off using the timeToLiveSeconds option to expire the cache.
When I look in the logs the cache is definitely working, however the eviction of the cache is not. I've read some articles on the net about how timeToLiveSeconds doesn't truly evict the cache, and other articles like this one http://code.google.com/p/ehcache-spring-annotations/wiki/EvictExpiredElements which say there are special settings you have to create to get the cache to evict.
Can someone please help me understand whether my cache should be evicting, and also how I can evict, because I was not able to work out how to implement what is mentioned in that article.
Here is what my logs look like, but there are no signs of eviction...
2014-01-20 13:32:41,791 DEBUG [AnnotationCacheOperationSource] - Adding cacheable method 'getLov' with attribute: [CacheableOperation[public java.util.Map com.myer.reporting.dao.mock.MockLovStoreDaoImpl.getLov(java.lang.String) throws com.myer.reporting.exception.ReportingManagerException] caches=[lovStore] | condition='' | key='#code']
thanks
Ehcache does not evict elements until there is a need to do so. That is reasonable, since there is no point wasting CPU resources on an operation (evicting elements) that wouldn't make much difference.
If someone tries to get an element from the cache and the element's time-to-live/idle has expired, then Ehcache evicts that element and returns null. If the maximum number of elements or the memory limit for the cache/pool is reached, then Ehcache evicts elements that are expired, based on the eviction policy.
And if I understand correctly, it is not guaranteed that all expired elements will be evicted, since only a sample of elements is selected for eviction (from the documentation):
@param sampledElements this should be a random subset of the population
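A small illustration of this lazy expiry, assuming the lovStore cache from the question (timeToLiveSeconds="60") and the Ehcache API used directly; key and value are illustrative:

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

// The element is not proactively removed at the 60 s mark; the next get()
// checks the timestamps, evicts the expired element and returns null.
public class LazyExpiryDemo {
    public static void main(String[] args) throws InterruptedException {
        CacheManager manager = CacheManager.newInstance();   // reads ehcache.xml from the classpath
        Cache lovStore = manager.getCache("lovStore");

        lovStore.put(new Element("COUNTRY", "..."));
        System.out.println(lovStore.get("COUNTRY"));          // hit

        Thread.sleep(61_000);                                 // past timeToLiveSeconds
        System.out.println(lovStore.get("COUNTRY"));          // null: expired on access

        manager.shutdown();
    }
}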

Coherence Flush Delay Setting

I want a cache that checks its own items to see whether they are expired or not. My cache config is below:
<?xml version="1.0" encoding="UTF-8"?>
<cache-config>
    <caching-scheme-mapping>
        <cache-mapping>
            <cache-name>subscriberinfo</cache-name>
            <scheme-name>distributed-scheme</scheme-name>
        </cache-mapping>
    </caching-scheme-mapping>
    <caching-schemes>
        <distributed-scheme>
            <scheme-name>distributed-scheme</scheme-name>
            <lease-granularity>member</lease-granularity>
            <service-name>DistributedCache</service-name>
            <serializer>
                <instance>
                    <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                    <init-params>
                        <init-param>
                            <param-type>String</param-type>
                            <param-value>rbm-shovel-pof-config.xml</param-value>
                        </init-param>
                    </init-params>
                </instance>
            </serializer>
            <backing-map-scheme>
                <local-scheme>
                    <unit-calculator>BINARY</unit-calculator>
                    <expiry-delay>24h</expiry-delay>
                    <flush-delay>180</flush-delay>
                </local-scheme>
            </backing-map-scheme>
            <autostart>true</autostart>
        </distributed-scheme>
    </caching-schemes>
</cache-config>
But the thing is, flush-delay cannot be set. Any ideas?
Thanks
Which version of Coherence do you use?
In Coherence 3.7, flush-delay has been removed from the DTD, as it had been deprecated since version 3.5.
Flushing is only active when inserting new objects (have a look at eviction-policy) or accessing expired objects (look at expiry-delay).
Coherence deprecated the FlushDelay and related settings in 3.5. All of that work is done automatically now:
Expired items are automatically removed, and the eviction / expiry events are raised accordingly
You will never see expired data in the cache; even if you try to access it just as it expires, the expiry will occur as an event before the data access occurs
Eviction (for memory limits) is now done asynchronously, so that the "sharp edges" of the side-effects of eviction (such as event storms) are spread out across natural cycles (with the cycle length calculated based on the estimated rate of eviction)
I hope that helps.
For the sake of full disclosure, I work at Oracle. The opinions and views expressed in this post are my own, and do not necessarily reflect the opinions or views of my employer.
But the thing is, flush-delay cannot be set. Any ideas?
What do you mean by this? Does the system throw errors, or are expired items not getting removed from the cache? Based on the configuration you have, the entry should be removed from the cache after 24 hours and 180 seconds since the last update to the entry.
