How can I configure an EhCache cache to use an LRU eviction strategy in version 3.8 of Ehcache?

In version 3.8, how can I configure an EhCache cache to use an LRU eviction strategy?
I've looked at the EvictionAdvisor, but it only seems to get called for the most recently inserted item. So, in essence, I can say "yes" or "no" to evicting the most recently added item, but it is not useful for identifying other items that should be evicted.
I seem to recall that in EhCache 2.8 (it's been a while), I could provide information in the ehcache.xml configuration file to specify that the cache use an LRU eviction strategy.

These two pieces of documentation mention that Ehcache uses LRU as the default eviction strategy:
A cache eviction algorithm is a way of deciding which element to evict when the cache is full. In Ehcache, the MemoryStore may be limited in size (see How to Size Caches for more information). When the store gets full, elements are evicted. The eviction algorithms in Ehcache determine which elements are evicted. The default is LRU.
https://www.ehcache.org/documentation/2.8/apis/cache-eviction-algorithms.html
Ehcache uses Least Recently Used (LRU) as the default eviction strategy for the memory stores. The eviction strategy determines which cache entry is to be evicted when the cache is full.
https://springframework.guru/using-ehcache-3-in-spring-boot/
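
In practice this means you usually do not pick LRU explicitly in Ehcache 3.x: you size the store, and the default eviction applies once it fills up. A minimal Java sketch follows; the cache name "recentItems" and the key/value types are illustrative assumptions, while the builder calls are Ehcache 3's public API:

import org.ehcache.Cache;
import org.ehcache.CacheManager;
import org.ehcache.config.builders.CacheConfigurationBuilder;
import org.ehcache.config.builders.CacheManagerBuilder;
import org.ehcache.config.builders.ResourcePoolsBuilder;

public class LruByDefault {
    public static void main(String[] args) {
        // Heap-only cache capped at 100 entries; once full, Ehcache's internal
        // eviction (LRU per the documentation quoted above) picks the victims.
        CacheManager cacheManager = CacheManagerBuilder.newCacheManagerBuilder()
                .withCache("recentItems",
                        CacheConfigurationBuilder.newCacheConfigurationBuilder(
                                Long.class, String.class,
                                ResourcePoolsBuilder.heap(100)))
                .build(true); // true = initialize the manager immediately

        Cache<Long, String> cache =
                cacheManager.getCache("recentItems", Long.class, String.class);
        cache.put(1L, "one");

        cacheManager.close();
    }
}

As far as the public builder API goes, there is no method for selecting a different algorithm; the opt-in EvictionAdvisor discussed below is the only hook.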

Related

Redis CRDB Eviction Policy

I have read in the Redis documentation that the cache eviction policy for CRDBs should be set to No Eviction:
"Note: Geo-Distributed CRDBs always operate in noeviction mode."
https://docs.redislabs.com/latest/rs/administering/database-operations/eviction-policy/
The reasoning given is that eviction (garbage collection) might cause inconsistencies, since the two data centers synchronize bidirectionally.
I am not getting this point. Can someone explain, with a real-world problem that might occur, what could go wrong if we had an LRU cache eviction policy?
After doing some research, I learned that eviction is often troublesome to handle with active-active replication. For example, if one of the masters runs out of memory and the cache tries to evict keys to make room for the latest data, it will delete those keys from the other master as well, even if there are no memory issues there. So until there is a really good way to handle this, eviction is not supported.

How does EhCache3 handle eviction when cache is full?

Eviction policies seem to have been removed in EhCache3. There is an EvictionAdvisor interface that can be implemented, but what would be the default behaviour?
I am using EhCache3 with SpringBoot.
The exact eviction algorithm depends on the tier configuration. As such, Ehcache does not document it explicitly, so that it can be tweaked in the future.
Eviction advisors are an opt-in way of saying that some elements should really remain in the cache over others. It effectively means they are not considered by the eviction algorithm unless no other eviction candidates can be found.
Note that this is an advanced feature that can have quite severe performance effects if you end up advising against evicting a large portion of the cache.
And as said in another answer, there is no default: by default, all entries are treated equally and are subject to the internal eviction algorithm.
The default is NO_ADVICE.
From the javadoc:
Returns an {@link EvictionAdvisor} where no mappings are advised against eviction.
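
For completeness, here is a sketch of opting in to the advisor mentioned above. The "PINNED:" value prefix is a made-up convention for illustration; EvictionAdvisor and withEvictionAdvisor are the actual Ehcache 3 API:

import org.ehcache.CacheManager;
import org.ehcache.config.EvictionAdvisor;
import org.ehcache.config.builders.CacheConfigurationBuilder;
import org.ehcache.config.builders.CacheManagerBuilder;
import org.ehcache.config.builders.ResourcePoolsBuilder;

public class AdvisedCache {
    public static void main(String[] args) {
        // Advise against evicting values flagged with a "PINNED:" prefix.
        // Advised mappings are skipped by the eviction algorithm unless
        // no other candidate can be found.
        EvictionAdvisor<String, String> keepPinned =
                (key, value) -> value.startsWith("PINNED:");

        CacheManager cacheManager = CacheManagerBuilder.newCacheManagerBuilder()
                .withCache("advisedCache",
                        CacheConfigurationBuilder.newCacheConfigurationBuilder(
                                String.class, String.class,
                                ResourcePoolsBuilder.heap(10))
                                .withEvictionAdvisor(keepPinned))
                .build(true);

        cacheManager.close();
    }
}

Keep the advised fraction of the cache small; as noted above, advising against evicting a large portion of the entries can hurt performance badly.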

Cache eviction in Mondrian

It is not clear from the docs how Mondrian behaves regarding cache eviction.
The "Out of memory" section on configuration is very vague. Is it correct to say that Mondrian never evicts anything from cache? And if the user performs too diverse queries cache eventually grows to infinity?

Distributed Caching algorithms/tutorials

What is the best way to understand how caching frameworks and caching algorithms work? Is there any book which covers the following topics in detail? (A minimal LRU example follows the list below.)
cache hits
cache miss
LFU
LRU
LRU2
Two Queues
ARC
MRU
FIFO
Second Chance
Distributed caching
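
Before reaching for a book, it can help to build the simplest of these yourself. Here is a minimal, self-contained LRU cache in Java (an illustrative sketch using only the standard library, not tied to any framework): an access-ordered LinkedHashMap that drops the least recently used entry once capacity is exceeded.

import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder = true: get() refreshes recency
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the LRU entry when over capacity
    }

    public static void main(String[] args) {
        LruCache<Integer, String> cache = new LruCache<>(2);
        cache.put(1, "a");
        cache.put(2, "b");
        cache.get(1);                       // cache hit: 1 is now most recent
        cache.put(3, "c");                  // evicts 2, the least recently used
        System.out.println(cache.keySet()); // prints [1, 3]
    }
}

LFU, 2Q, ARC and the rest differ mainly in how they track recency and frequency; distributed caching adds the separate question of which node owns which key.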

Configuring redis to consistently evict older data first

I'm storing a bunch of realtime data in redis. I'm setting a TTL of 14400 seconds (4 hours) on all of the keys. I've set maxmemory to 10G, which currently is not enough space to fit 4 hours of data in memory, and I'm not using virtual memory, so redis is evicting data before it expires.
I'm okay with redis evicting the data, but I would like it to evict the oldest data first. So even if I don't have a full 4 hours of data, at least I can have some range of data (3 hours, 2 hours, etc) with no gaps in it. I tried to accomplish this by setting maxmemory-policy=volatile-ttl, thinking that the oldest keys would be evicted first since they all have the same TTL, but it's not working that way. It appears that redis is evicting data somewhat arbitrarily, so I end up with gaps in my data. For example, today the data from 2012-01-25T13:00 was evicted before the data from 2012-01-25T12:00.
Is it possible to configure redis to consistently evict the older data first?
Here are the relevant lines from my redis.cnf file. Let me know if you want to see any more of the configuration:
maxmemory 10gb
maxmemory-policy volatile-ttl
vm-enabled no
AFAIK, it is not possible to configure Redis to consistently evict the older data first.
When the *-ttl or *-lru options are chosen in maxmemory-policy, Redis does not use an exact algorithm to pick the keys to be removed. An exact algorithm would require an extra list (for *-lru) or an extra heap (for *-ttl) in memory, and would have to cross-reference it with the normal Redis dictionary data structure. It would be expensive in terms of memory consumption.
With the current mechanism, evictions occur in the main event loop (i.e. potential evictions are checked at each loop iteration, before each command is executed). Until memory is back under the maxmemory limit, Redis randomly picks a sample of n keys and selects for eviction the most idle one (for *-lru) or the one closest to its expiration time (for *-ttl). By default only 3 samples are considered. The result is non-deterministic.
One way to increase the accuracy of this algorithm and mitigate the problem is to increase the number of considered samples (maxmemory-samples parameter in the configuration file).
Do not set it too high, since it will consume some CPU. It is a tradeoff between eviction accuracy and CPU consumption.
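For example, alongside the settings above (the value 10 is arbitrary, purely to illustrate the directive):
maxmemory-samples 10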
Now if you really require a consistent behavior, one solution is to implement your own eviction mechanism on top of Redis. For instance, you could add a list (for non updatable keys) or a sorted set (for updatable keys) in order to track the keys that should be evicted first. Then, you add a daemon whose purpose is to periodically check (using INFO) the memory consumption and query the items of the list/sorted set to remove the relevant keys.
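
A sketch of such a daemon using the Jedis client is below. The index key eviction:index, the 9 GB threshold, and the batch size are assumptions for illustration; writers would have to register every data key in the sorted set with its insertion timestamp as the score, e.g. jedis.zadd("eviction:index", System.currentTimeMillis(), dataKey).

import java.util.Collection;
import redis.clients.jedis.Jedis;

public class OldestFirstEvictor {
    static final String INDEX_KEY = "eviction:index";          // sorted set: member = data key, score = insert time
    static final long MEMORY_LIMIT = 9L * 1024 * 1024 * 1024;  // evict before Redis's own maxmemory kicks in

    public static void main(String[] args) throws InterruptedException {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            while (true) {
                while (usedMemory(jedis) > MEMORY_LIMIT) {
                    // Lowest scores = oldest entries; delete them in small batches.
                    Collection<String> batch = jedis.zrange(INDEX_KEY, 0, 99);
                    if (batch.isEmpty()) break; // nothing left to evict
                    for (String dataKey : batch) {
                        jedis.del(dataKey);
                        jedis.zrem(INDEX_KEY, dataKey);
                    }
                }
                Thread.sleep(5000); // poll period
            }
        }
    }

    // Parse used_memory out of "INFO memory".
    static long usedMemory(Jedis jedis) {
        for (String line : jedis.info("memory").split("\r\n")) {
            if (line.startsWith("used_memory:")) {
                return Long.parseLong(line.substring("used_memory:".length()).trim());
            }
        }
        return 0L;
    }
}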
Please note other caching systems have their own way to deal with this problem. For instance with memcached, there is one LRU structure per slab (which depends on the object size), so the eviction order is also not accurate (although more deterministic than with Redis in practice).
