How does Ehcache write-behind handle shutdown, eviction because the cache is full, and eviction because the TTL expired?

We're switching to using Ehcache's write-behind feature, but I can't tell from the documentation how these three cases are handled.
If I've put something in the cache via putWithWriter() and my CacheWriter hasn't yet been called, what happens if the element is evicted (due to space or due to TTL)? Is my CacheWriter automatically called with this item prior to eviction?
Similar question at program exit time: if I call getCacheManager().shutdown(), are all of the unwritten items sent to my cache writer?
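In code, the scenario is roughly the following. This is only a sketch against the Ehcache 2.x API; the "orders" cache and the OrderWriter class are made-up names, with OrderWriter assumed to implement net.sf.ehcache.writer.CacheWriter:

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

CacheManager cacheManager = CacheManager.getInstance();
Cache orders = cacheManager.getCache("orders");     // assumed configured with writeMode="write-behind"
orders.registerCacheWriter(new OrderWriter());      // hypothetical CacheWriter implementation

// Queued for asynchronous write-behind; OrderWriter.write(...) runs some time later.
orders.putWithWriter(new Element(42L, "order-42"));

// The three cases in question: the element is evicted for space, expires via TTL,
// or the JVM exits here, all before the queued write has been processed.
cacheManager.shutdown();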

Related

Does Cache Coherence always prevent reading a stale value? Do invalidation queues allow it?

In the MESI protocol you write to a cache line only when holding it in the Exclusive/Modified state. To acquire the Exclusive state, you send an Invalidate request to all the cores holding the same cache line.
But is there a micro-architecture where some core responds with an acknowledgement before actually invalidating the cache line? If that's the case, isn't it a violation of cache coherence?
The reason I'm asking this question is because I'm confused by this answer - Memory barriers force cache coherency?. It says:
Placing an entry into the invalidate queue is essentially a promise by
the CPU to process that entry before transmitting any MESI protocol
messages regarding that cache line. So invalidation queues are the
reason why we may not see the latest value even when doing a simple
read of a single variable.
But how can we read a "stale" variable if there is no new value yet? I mean, the writing core will not write the new value until it has received an Invalidate acknowledgement from all the other cores.
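The invalidate-queue mechanics are below the level any programming language exposes, but the observable effect the quote describes, a plain read returning an old value, can be sketched in Java. This example is purely illustrative and says nothing about which micro-architecture does what:

class StaleReadSketch {
    static int data = 0;           // plain fields: no ordering or visibility guarantees
    static boolean ready = false;  // declaring this volatile would forbid the stale read

    public static void main(String[] args) {
        new Thread(() -> {
            data = 42;
            ready = true;          // may become visible to the reader before data = 42 does
        }).start();

        new Thread(() -> {
            while (!ready) { }     // spin until the flag is observed
            // Without a barrier this is allowed to print 0, which is exactly
            // the "stale" value the quoted answer is talking about.
            System.out.println(data);
        }).start();
    }
}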

Varnish: Save objects from being evicted from the cache

Is it possible to mark objects in the Varnish cache in a way that they will not get evicted from the cache if the cache is full?
Some of the requests on our server take a very long time to render and result in a small XML response. This resource isn't called that often and we want to make sure that it stays in the cache.
When space is running out in the cache, Varnish starts removing objects that are old and not requested often. We would like to assign a priority to the cached objects and influence the algorithm that removes objects from the cache.
Is that possible? And if yes, how?

Redis cache: start LRU eviction at a soft limit

I know Redis can be used as an LRU cache, but is there a soft-limit flag where we can state that after specific criteria are reached, Redis will start cleaning out LRU items?
Actually I'm getting OOM errors on Redis. I've set Redis up as an LRU cache, but it hits the OOM limit and the application stops.
I know of the "maxmemory" flag, but is there a soft limit where, once only some 10% of space is left, eviction of some items can start so that the application doesn't stop?
Did you set a specific eviction policy?
See: Eviction policies http://redis.io/topics/lru-cache
I would then check to make sure that you are not inadvertently calling PERSIST on your Redis keys. Persisted keys (those without a TTL), I believe, cannot be evicted by the LRU policy.
You can use TTL (http://redis.io/commands/ttl) to find out the time limit on your keys, and KEYS (http://redis.io/commands/keys) to get a list of keys (this is dangerous on a production server, as the list could be very long and blocking).
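As a rough sketch of those checks from Java, using the Jedis client (Jedis isn't mentioned above, and "some:key" is just a placeholder):

import redis.clients.jedis.Jedis;

public class RedisEvictionCheck {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Which eviction policy is active? With the default "noeviction",
            // Redis answers writes with OOM errors instead of removing LRU items.
            System.out.println(jedis.configGet("maxmemory-policy"));

            // TTL of -1 means the key never expires; under the volatile-* policies
            // such keys are never candidates for eviction.
            System.out.println(jedis.ttl("some:key"));

            // allkeys-lru lets Redis evict any key, least recently used first.
            jedis.configSet("maxmemory-policy", "allkeys-lru");
        }
    }
}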

Guava cache 'expireAfterWrite' does not seem to always work

private Cache<Long, Response> responseCache = CacheBuilder.newBuilder()
        .maximumSize(10000)
        .expireAfterWrite(10, TimeUnit.MINUTES)
        .build();
I am expecting that response objects that are not sent to the client within 10 minutes are expired and removed from the cache automatically, but I notice that Response objects are not always getting expired, even after 10, 15, or 20 minutes. They do get expired when the cache is being populated in large numbers, but when the system turns idle (with something like the last 500 response objects in it), the cache stops removing these objects.
Can someone help me understand this behavior? Thank you
This is specified in the docs:
If expireAfterWrite or expireAfterAccess is requested entries may be evicted on each cache modification, on occasional cache accesses, or on calls to Cache.cleanUp(). Expired entries may be counted by Cache.size(), but will never be visible to read or write operations.
And there's more detail on the wiki:
Caches built with CacheBuilder do not perform cleanup and evict values "automatically," or instantly after a value expires, or anything of the sort. Instead, it performs small amounts of maintenance during write operations, or during occasional read operations if writes are rare.
The reason for this is as follows: if we wanted to perform Cache
maintenance continuously, we would need to create a thread, and its
operations would be competing with user operations for shared locks.
Additionally, some environments restrict the creation of threads,
which would make CacheBuilder unusable in that environment.
Instead, we put the choice in your hands. If your cache is
high-throughput, then you don't have to worry about performing cache
maintenance to clean up expired entries and the like. If your cache
does writes only rarely and you don't want cleanup to block cache
reads, you may wish to create your own maintenance thread that calls
Cache.cleanUp() at regular intervals.
If you want to schedule regular cache maintenance for a cache which
only rarely has writes, just schedule the maintenance using
ScheduledExecutorService.
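A sketch of that last suggestion, reusing the responseCache field from the question (the one-minute period is an arbitrary choice):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

ScheduledExecutorService cacheMaintenance = Executors.newSingleThreadScheduledExecutor();

// Invoke Cache.cleanUp() once a minute so entries written more than 10 minutes
// ago are actually removed even when the cache sees no further reads or writes.
cacheMaintenance.scheduleAtFixedRate(responseCache::cleanUp, 1, 1, TimeUnit.MINUTES);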

Terracotta timeToIdleSeconds versus timeToLiveSeconds

All,
Here's my understanding of the two elements, I wanted to clarify.
timeToIdleSeconds = An object will be evicted if it's idle for more than X seconds.
From documentation
If a client accesses an element in myCache that has been idle for more
than an hour (timeToIdleSeconds), it evicts that element. The element
is also evicted from the Terracotta Server Array.
If the object in the cache is not requested again, will it ever get evicted? Are the cache sizing constraints the only way to clean up a cached object that is never requested again?
timeToLiveSeconds = An object will be evicted if it's been cached for more than X seconds. Does this work the same way as timeToIdleSeconds, i.e. only evicted when requested again? Or will it get cleaned up by a background process?
Thanks
For your final question ("Or will this get cleaned up by a background process?"):
http://terracotta.org/apidocs/terracotta-toolkit/3.2.0/org/terracotta/cache/CacheConfig.html
The documentation for both setMaxTTISeconds and setMaxTTLSeconds says:
"The background eviction thread sleep interval is based on this value and the Max TTL, so a side effect of changing this value is to change the sleep interval of the eviction thread."
I'm assuming that the different versions of Terracotta will be similar; I believe that there will be a background thread doing cleanup.
If that's correct, then it would seem to imply that a request is not necessary to trigger an eviction, somewhat contrary to the documentation you quote.
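For what it's worth, here are the two settings side by side as a programmatic Ehcache 2.x configuration sketch (the cache name and numbers are arbitrary):

import net.sf.ehcache.config.CacheConfiguration;

// An element expires when either limit is reached:
//  - TTI: more than an hour since it was last accessed, or
//  - TTL: more than a day since it was created, however often it is accessed.
CacheConfiguration myCacheConfig = new CacheConfiguration("myCache", 10000)
        .timeToIdleSeconds(3600)
        .timeToLiveSeconds(86400);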
