EhCache eternal is not behaving as expected - caching

My requirement is to have a disk-based cache. If the in-memory cache is full, I want the LRU element to be pushed to disk. And if the file on disk is full, I want the LRU element on disk to be evicted. This is a pretty simple requirement, but I could not achieve it using EhCache.
I am using EhCache (2.10.1) with the following config:
<defaultCache name="default"
maxBytesLocalHeap="50m"
maxBytesLocalDisk="100g"
eternal="true"
timeToIdleSeconds="0"
timeToLiveSeconds="0"
diskExpiryThreadIntervalSeconds="120"
memoryStoreEvictionPolicy="LRU">
<persistence strategy="localTempSwap"/>
</defaultCache>
My expectation here is that when the cache fills up (i.e. the cache size exceeds 50M), the LRU element(s) should be pushed to the file, freeing some space for new elements in memory.
However, this is not how EhCache is working. I wrote a sample test to check the element count in the cache:
public static void main(String[] args) throws Exception {
ArrayList<String> keys = new ArrayList<String>();
CacheManager cacheManager;
FileInputStream fis = null;
try {
fis = new FileInputStream(
new File("src/config/ehcache.xml").getAbsolutePath());
cacheManager = CacheManager.newInstance(fis);
}
finally {
fis.close();
}
java.lang.Runtime.getRuntime().addShutdownHook(new Thread(){
@Override
public void run() {
try
{
System.out.println("Shutting down Eh Cache manager !!");
cacheManager.clearAll();
cacheManager.shutdown();
System.out.println("done !!");
}catch(Exception e)
{
e.printStackTrace();
}
}
});
System.out.println("starting ...");
System.out.println(System.getProperty("java.io.tmpdir"));
cacheManager.addCache("work_item_111");
Cache ehCache = cacheManager.getCache("work_item_111");
long start_outer = System.currentTimeMillis();
for(int i =0;i<30;i++)
{
long start = System.currentTimeMillis();
String key = UUID.randomUUID().toString();
ehCache.put(new Element(key, getNextRandomString()));
keys.add(key);
//System.out.println("time taken : " + (System.currentTimeMillis()- start));
System.out.println((System.currentTimeMillis()- start) +" - " + (ehCache.getStatistics().getLocalDiskSizeInBytes()/1024/1024) + " - " +(ehCache.getStatistics().getLocalHeapSizeInBytes()/1024/1024));
}
System.out.println("time taken-total : " + (System.currentTimeMillis()- start_outer));
System.out.println(ehCache.getSize());
System.out.println("disk size : " +ehCache.getStatistics().getLocalDiskSizeInBytes()/1024/1024);
System.out.println("memory size : " +ehCache.getStatistics().getLocalHeapSizeInBytes()/1024/1024);
Iterator<String> itr = keys.iterator();
int count =0;
while(itr.hasNext())
{
count++;
String key = itr.next();
if(ehCache.get(key) == null)
{
System.out.println("missingg key : " + key);
}
}
System.out.println("checked for count :" + count);
}
The outcome is quite disappointing. After putting 30 elements in the cache (each element approx. 4 MB in size), I can see only 7 elements in the cache (ehCache.getSize() returns 7), and I don't see the file on disk growing either.
Can any EhCache expert tell me if I am missing anything here? Thanks.

First, regarding the Ehcache topology:
Since at least version 2.6, the tiering model behind Ehcache enforces that all mappings are present in the lowest tier, disk in your case, at all times.
This means it is no longer an overflow model.
Now as to why your test behaves differently than what you expect:
The Ehcache 2.x disk tier keeps the keys in memory, and when your heap tier is sized in bytes, the space occupied by the keys is subtracted from that capacity.
Ehcache 2.x writes asynchronously to disk and has bounded queues for doing that. While this is not a problem in most use cases, in a tight loop like your test you may hit these limits and cause the cache to drop puts by evicting inline.
So in order to better understand what's happening, have a look at the Ehcache logs at debug level and see what is really going on.
If you see evictions, simply loop more slowly or increase some of the settings for the disk write queue, such as the diskSpoolBufferSizeMB attribute on the cache element.
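For illustration, a minimal programmatic sketch (an assumption, not the asker's code) of the same cache with a larger spool buffer; in ehcache.xml this corresponds to adding a diskSpoolBufferSizeMB attribute to the cache element, and the 100 MB value is just an example:
// assumes net.sf.ehcache.config.CacheConfiguration, PersistenceConfiguration and MemoryUnit
CacheConfiguration cfg = new CacheConfiguration();
cfg.name("work_item_111");
cfg.maxBytesLocalHeap(50, MemoryUnit.MEGABYTES);
cfg.maxBytesLocalDisk(100, MemoryUnit.GIGABYTES);
cfg.eternal(true);
// larger buffer for the asynchronous disk writer, so tight put-loops are less likely to evict inline
cfg.diskSpoolBufferSizeMB(100);
cfg.persistence(new PersistenceConfiguration().strategy(PersistenceConfiguration.Strategy.LOCALTEMPSWAP));
cacheManager.addCache(new Cache(cfg));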

I am facing the same problem. In a loop I am adding 10k elements with this cache configuration:
cacheConfiguration.maxBytesLocalHeap(1, MemoryUnit.MEGABYTES);
cacheConfiguration.diskSpoolBufferSizeMB(10);
// cacheConfiguration.maxEntriesLocalHeap(1);
cacheConfiguration.name("smallcache");
config.addCache(cacheConfiguration);
cacheConfiguration.setDiskPersistent(true);
cacheConfiguration.overflowToDisk(true);
cacheConfiguration.eternal(true);
When I increase maxBytesLocalHeap to 10 MB it is fine, and when I use maxEntriesLocalHeap instead of maxBytes it works without problems even when set to a value of 1 (item).
Version 2.6
This is the answer: Ehcache set to eternal but forgets elements anyway?
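For reference, a minimal sketch of the entry-count-based variant described above (an illustration, not the original code; it assumes the same programmatic Ehcache 2.6 setup, where config is the CacheManager Configuration object):
CacheConfiguration cacheConfiguration = new CacheConfiguration();
cacheConfiguration.name("smallcache");
cacheConfiguration.maxEntriesLocalHeap(1); // sizing by entry count works, even with a single on-heap entry
cacheConfiguration.eternal(true);
cacheConfiguration.overflowToDisk(true);
cacheConfiguration.setDiskPersistent(true);
config.addCache(cacheConfiguration);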

Related

When an object was added to Caffeine cache

I'm using Spring Boot caching in a project and am using Caffeine. I've added some default configuration for Caffeine to the project, and I can get the most recent object from the cache using the following code:
private final CaffeineCache caffeineCache = (CaffeineCache) caffeineCacheManager.getCache("myCacheName");
Cache<Object, Object> cache = this.caffeineCache.getNativeCache();
cache.policy().eviction().get().hottest(1);
I don't actually want the object itself, but I want to know when it was added to the cache. Is there a way to find out when this object was added to the cache?
Thanks to Ben Manes' comment, I found a solution to my problem. I calculated the cache time using the expiration timestamp:
CacheResponse cacheResponse = new CacheResponse();
Cache<Object, Object> cache = this.caffeineCache.getNativeCache();
if(cache.policy().eviction().get().hottest(1).keySet().iterator().hasNext()) {
// Time in milliseconds since object was cached
OptionalLong time = cache.policy().expireAfterWrite().get().ageOf(cache.policy().eviction().get().hottest(1).keySet().iterator().next(), TimeUnit.MILLISECONDS);
LOGGER.info("Calculating last UTC cached date");
// Calculate last cached time and set the object in the cache response
cacheResponse.setTime(Instant.now().minusMillis(time.getAsLong()).atZone(ZoneOffset.UTC).toLocalDateTime());
} else {
LOGGER.info("Cache appears to be empty.");
}
cacheResponse.setNumCachedItems(cache.estimatedSize());
return cacheResponse;
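One note on this approach: cache.policy().expireAfterWrite() is only present when the cache was built with expireAfterWrite, so a configuration along these lines is assumed (a standalone sketch; the one-hour duration is just an example):
// assumes com.github.benmanes.caffeine.cache.Caffeine and java.util.concurrent.TimeUnit
Cache<Object, Object> cache = Caffeine.newBuilder()
        .expireAfterWrite(1, TimeUnit.HOURS) // required for policy().expireAfterWrite() / ageOf() to be available
        .build();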

Hazelcast Near Cache: Evict if changed on different node

I am using Spring + Hazelcast 3.8.2 and have configured a map like this using the Spring configuration:
<hz:map name="test.*" backup-count="1"
max-size="0" eviction-percentage="30" read-backup-data="true"
time-to-live-seconds="900"
eviction-policy="NONE" merge-policy="com.hazelcast.map.merge.PassThroughMergePolicy">
<hz:near-cache max-idle-seconds="300"
time-to-live-seconds="0"
max-size="0" />
</hz:map>
I've got two clients connected (both on same machine [test env], using different ports).
When I change a value in the map on one client, the other client still has the old value until it gets evicted from the near cache due to the expired idle time.
I found a similar issue like this here: Hazelcast near-cache eviction doesn't work
But I'm unsure if this is really the same issue, at least it is mentioned that this was a bug in version 3.7 and we are using 3.8.2.
Is this correct behaviour or am I doing something wrong? I know that there is a property invalidate-on-change, but its default is true, so I don't expect to have to set this one.
I also tried setting read-backup-data to false, but it doesn't help.
Thanks for your support
Christian
I found the solution myself.
The issue is that Hazelcast by default sends the invalidations in batches, and thus waits a few seconds before the invalidations are sent out to all other nodes.
You can find more information about this here: http://docs.hazelcast.org/docs/3.8/manual/html-single/index.html#near-cache-invalidation
So I had to set the property hazelcast.map.invalidation.batch.enabled to false, which sends out invalidations to all nodes immediately. But as mentioned in the documentation, this should only be used when not too many put/remove/... operations are expected, as it makes the event system very busy.
Nevertheless, even with this property set there is no guarantee that all nodes invalidate the near-cache entries right away. When accessing the values directly on the other node, sometimes it was fine, sometimes not.
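Apart from the JVM system property, the flag can presumably also be set on the Hazelcast Config instance itself; a minimal sketch (assuming Hazelcast 3.8.x, with the rest of the configuration omitted):
Config config = new Config();
config.setProperty("hazelcast.map.invalidation.batch.enabled", "false"); // send near-cache invalidations immediately
HazelcastInstance instance = Hazelcast.newHazelcastInstance(config);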
Here is the JUnit test I built up for this:
@Test
public void testWithInvalidationBatchEnabled() throws Exception {
System.setProperty("hazelcast.map.invalidation.batch.enabled", "true");
doTest();
}
@Test
public void testWithoutInvalidationBatchEnabled() throws Exception {
System.setProperty("hazelcast.map.invalidation.batch.enabled", "false");
doTest();
}
@After
public void shutdownNodes() {
Hazelcast.shutdownAll();
}
protected void doTest() throws Exception {
// first config for normal cluster member
Config c1 = new Config();
c1.getNetworkConfig().setPort(5709);
// second config for super client
Config c2 = new Config();
c2.getNetworkConfig().setPort(5710);
// map config is the same for both nodes
MapConfig testMapCfg = new MapConfig("test");
NearCacheConfig ncc = new NearCacheConfig();
ncc.setTimeToLiveSeconds(10);
testMapCfg.setNearCacheConfig(ncc);
c1.addMapConfig(testMapCfg);
c2.addMapConfig(testMapCfg);
// start instances
HazelcastInstance h1 = Hazelcast.newHazelcastInstance(c1);
HazelcastInstance h2 = Hazelcast.newHazelcastInstance(c2);
IMap<Object, Object> mapH1 = h1.getMap("test");
IMap<Object, Object> mapH2 = h2.getMap("test");
// initial filling
mapH1.put("a", -1);
assertEquals(mapH1.get("a"), -1);
assertEquals(mapH2.get("a"), -1);
int updatedH1 = 0, updatedH2 = 0, runs = 0;
for (int i = 0; i < 5; i++) {
mapH1.put("a", i);
// without this short sleep sometimes the nearcache is updated in time, sometimes not
Thread.sleep(100);
runs++;
if (mapH1.get("a").equals(i)) {
updatedH1++;
}
if (mapH2.get("a").equals(i)) {
updatedH2++;
}
}
assertEquals(runs, updatedH1);
assertEquals(runs, updatedH2);
}
testWithInvalidationBatchEnabled finishes successfully only sometimes; testWithoutInvalidationBatchEnabled always finishes successfully.

Apache Ignite Cache eviction still in memory

During a stability test of our Apache Ignite cluster we ran into a memory-related problem, where the used heap space increased to 100% and didn't go down as we expected. This is what we did:
Created a cache with eviction policy to FifoEvictionPolicy( max: 10000, batchSize:100)
20 simultaneous threads that executed the following scenario over and over again for a couple of hours:
Added a unique entry to the cache and then fetched the value to verify that it was added.
This scenario created about 2.3 million entries during the test.
Our expectation was that, due to our quite restrictive eviction policy of at most 10000 entries, the memory would stabilize. However, the memory just kept rising until it reached the max heap size. See the attached memory graph.
Our question is:
Why is the memory used by the entries still allocated, even though eviction is done?
One thing to add to this: we executed the same test but deleted each entry right after adding it. The memory was then stable.
Update with test case and comment.
Below you will find a simple JUnit test to demonstrate the memory leak. @a_gura seems to be correct: if we disable the ExpiryPolicy, things work as expected. But if we enable the ExpiryPolicy, the heap seems to get filled up within the ExpiryPolicy duration. Test case:
public class IgniteTest {
String cacheName = "my_cache";
@Test
public void test() throws InterruptedException {
IgniteConfiguration configuration = new IgniteConfiguration();
Ignite ignite = Ignition.start(configuration);
//create a large string to use as test value.
StringBuilder testValue = new StringBuilder();
for (int i = 0; i < 10*1024; i ++) {
testValue.append("a");
}
CacheConfiguration cacheCfg = new CacheConfiguration();
cacheCfg.setName(cacheName);
cacheCfg.setEvictionPolicy(new FifoEvictionPolicy<>(10_000, 100));
Duration duration = new Duration(TimeUnit.HOURS, 12);
cacheCfg.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(duration));
cacheCfg.setCacheMode(CacheMode.LOCAL);
cacheCfg.setBackups(0);
Cache<Object, Object> cache = ignite.getOrCreateCache(cacheCfg);
String lastKey = "";
for (int i = 0; i < 10_000_101; i++){
String key = "key#"+i;
String value = testValue + "value#"+i;
log.trace("storing {} {}", key, value);
if (i % 1_000 == 0) {
log.debug("storing {}", key);
}
cache.put(key, value);
lastKey = key;
Thread.sleep(1);
}
String verifyKey = "key#1";
Assert.assertThat("first key should be evicted", cache.containsKey(verifyKey), CoreMatchers.is(false));
Assert.assertThat("last key should NOT be evicted", cache.containsKey(lastKey), CoreMatchers.is(true));
ignite.destroyCache(cacheName);
}
}
This has been fixed in Ignite 1.8: https://issues.apache.org/jira/browse/IGNITE-3948.
Credit to @a_gura, who filed the bug, and to the development team.

Why does Ehcache 2.8.8 ignore my maxBytesLocalHeap parameter when using -Dnet.sf.ehcache.use.classic.lru=true?

I'm using Ehcache 2.8.8's LRU policy in my webapp.
When -Dnet.sf.ehcache.use.classic.lru=true is not set, Ehcache respects my maxBytesLocalHeap parameter; but it doesn't do so when the system property is set.
In class Cache:
if (useClassicLru && configuration.getMemoryStoreEvictionPolicy().equals(MemoryStoreEvictionPolicy.LRU)) {
    Store disk = createDiskStore();
    store = new LegacyStoreWrapper(new LruMemoryStore(this, disk), disk, registeredEventListeners, configuration);
} else {
    if (configuration.isOverflowToDisk()) {
        store = DiskStore.createCacheStore(this, onHeapPool, onDiskPool);
    } else {
        store = MemoryStore.create(this, onHeapPool);
    }
}
And in class LruMemoryStore:
public LruMemoryStore(Ehcache cache, Store diskStore) {
    status = Status.STATUS_UNINITIALISED;
    this.maximumSize = cache.getCacheConfiguration().getMaxEntriesLocalHeap();
    this.cachePinned = determineCachePinned(cache.getCacheConfiguration());
    this.elementPinningEnabled = !cache.getCacheConfiguration().isOverflowToOffHeap();
    this.cache = cache;
    this.diskStore = diskStore;
    if (cache.getCacheConfiguration().isOverflowToDisk()) {
        evictionObserver = null;
    } else {
        evictionObserver = StatisticBuilder.operation(EvictionOutcome.class).named("eviction").of(this).build();
    }
    map = new SpoolingLinkedHashMap();
    status = Status.STATUS_ALIVE;
    copyStrategyHandler = MemoryStore.getCopyStrategyHandler(cache);
}
So I guess only maxEntriesLocalHeap has an effect?
Is it possible to set it as a JVM system property?
When you explicitly request classic LRU you actually get an older version of internal code that was preserved because some users depend on its behaviour.
This means that you are effectively not able to use features introduced after this, including sizing the heap tier in bytes.
So you are right, only maxEntriesLocalHeap will allow you to size the heap tier. And this cannot be set via a system property.
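So if you must keep the classic LRU flag, a minimal sketch of sizing the heap tier by entry count instead (illustrative only; the cache name and numbers are made up):
CacheConfiguration cfg = new CacheConfiguration("myCache", 10000); // second argument is maxEntriesLocalHeap
cfg.overflowToDisk(true);
cacheManager.addCache(new Cache(cfg));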

Setting Time To Live (TTL) from Java - sample requested

EDIT:
This is basically what I want to do, only in Java
Using ElasticSearch, we add documents to an index by passing IndexRequest items to a BulkRequestBuilder.
I would like the documents to be dropped from the index after some time has passed (time to live/TTL).
This can be done either by setting a default for the index, or on a per-document basis. Either approach is fine by me.
The code below is an attempt to do it per document. It does not work. I think it's because TTL is not enabled for the index. Either show me what Java code I need to add to enable TTL so the code below works, or show me different code that enables TTL and sets a default TTL value for the index in Java. I know how to do it from the REST API, but I need to do it from Java code, if at all possible.
logger.debug("Indexing record ({}): {}", id, map);
final IndexRequest indexRequest = new IndexRequest(_indexName, _documentType, id);
final long debug = indexRequest.ttl();
if (_ttl > 0) {
indexRequest.ttl(_ttl);
System.out.println("Setting TTL to " + _ttl);
System.out.println("IndexRequest now has ttl of " + indexRequest.ttl());
}
indexRequest.source(map);
indexRequest.operationThreaded(false);
bulkRequestBuilder.add(indexRequest);
}
// execute and block until done.
BulkResponse response;
try {
response = bulkRequestBuilder.execute().actionGet();
Later I check in my unit test by polling this method, but the document count never goes down.
public long getDocumentCount() throws Exception {
Client client = getClient();
try {
client.admin().indices().refresh(new RefreshRequest(INDEX_NAME)).actionGet();
ActionFuture<CountResponse> response = client.count(new CountRequest(INDEX_NAME).types(DOCUMENT_TYPE));
CountResponse countResponse = response.get();
return countResponse.getCount();
} finally {
client.close();
}
}
After a LONG day of googling and writing test programs, I came up with a working example of how to use ttl and basic index/object creation from the Java API. Frankly most of the examples in the docs are trivial, and some JavaDoc and end-to-end examples would go a LONG way to help those of us who are using the non-REST interfaces.
Ah well.
Code here: Adding mapping to a type from Java - how do I do it?
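For completeness, a rough sketch (not the linked code) of enabling the legacy _ttl field on a type's mapping via the Java API; this assumes an old ElasticSearch version where _ttl is still supported, an existing client, and illustrative index/type names:
// assumes org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder
XContentBuilder mapping = jsonBuilder()
    .startObject()
        .startObject("myType")
            .startObject("_ttl")
                .field("enabled", true)
                .field("default", "1d") // default TTL for documents of this type
            .endObject()
        .endObject()
    .endObject();
client.admin().indices().preparePutMapping("myIndex")
    .setType("myType")
    .setSource(mapping)
    .execute().actionGet();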
