Terracotta timeToIdleSeconds versus timeToLiveSeconds - caching

All,
Here's my understanding of the two elements, which I wanted to clarify.
timeToIdleSeconds = An object will be evicted if it's idle for more than X seconds.
From the documentation:
If a client accesses an element in myCache that has been idle for more
than an hour (timeToIdleSeconds), it evicts that element. The element
is also evicted from the Terracotta Server Array.
If the object in the cache is not requested again, will it ever get evicted? Or are the cache sizing constraints the only way to clean up a cached object that's never requested again?
timeToLiveSeconds = An object will be evicted if it's been cached for more than X seconds. Does this work the same way as timeToIdleSeconds? Is it only evicted when requested again, or will it get cleaned up by a background process?
Thanks

For your final question ("Or will this get cleaned up by a background process?"):
http://terracotta.org/apidocs/terracotta-toolkit/3.2.0/org/terracotta/cache/CacheConfig.html
Both setMaxTTISeconds and setMaxTTLSeconds carry this note:
"The background eviction thread sleep interval is based on this value and the Max TTL, so a side effect of changing this value is to change the sleep interval of the eviction thread."
I'm assuming that the different versions of Terracotta will be similar; I believe that there will be a background thread doing cleanup.
If that's correct, then it would seem to imply that a request is not necessary to trigger an eviction, somewhat contrary to the documentation you quote.
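For reference, the two settings appear side by side in an ehcache.xml cache definition like this (a minimal sketch; the cache name and values are illustrative):
<cache name="myCache"
       maxEntriesLocalHeap="10000"
       timeToIdleSeconds="3600"
       timeToLiveSeconds="86400"/>
Here an entry expires one hour after its last access (TTI), and one day after creation regardless of access (TTL).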

Related

Is there a Caffeine feature that will purge a particular item from the cache after a defined time and recreate it at the same time?

ExpireAfter will only purge the item but will not re-create it. What I need is, after a predefined interval, to purge a particular item from the cache and recreate it at the same time. It might be recreated with the same data if nothing has changed; if the data has changed, recreating it will give the latest object.
My idea is to always retrieve the latest item from the cache. In contrast, the Refresh feature (https://github.com/ben-manes/caffeine/wiki/Refresh) will serve the stale item for the first request and load the new value asynchronously, so only the second request gets the latest object.
An asynchronous removal listener that re-fetches the expired entry should work in my case. Can you please provide me some information on how to achieve this?
I'm also curious to know how a scheduled task could do it.
Assuming the cache can address the following two cases:
Subsequent requests case:
I understand that refreshAfterWrite will provide the stale entry the first time, but for the second request, what happens if the cache hasn't yet finished loading the expired entry?
Does the cache block the second request, complete the re-fetch, and then provide the latest value to the second request?
The idea is to make the cache provide the latest data after the defined entry expiry time.
In the case where the cache has to load values equal to its capacity in one shot:
Let's say the cache size is 100 and the time to load all 100 items is 2 minutes.
Assuming the first request loads all 100 items into the cache at the same time, then after the defined expiry time the cache should evict and re-fetch all 100 elements.
For a second request that accesses some of those 100 items, how can I make the cache smart enough to return the entries that have already been re-loaded and asynchronously re-load the others?
The idea is not to block any request for an existing entry: serve the request for an existing entry and re-load the remaining expired entries in the background.
An asynchronous removal listener that re-fetches the expired entry should work in my case. Can you please provide me some information on how to achieve this?
The removal listener requires a reference to the cache, but that is not available during construction. If it calls a private method instead, the uninitialized field isn't captured and the reference can be resolved at runtime.
import java.util.concurrent.TimeUnit;
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.RemovalCause;

class ReloadingCache<K, V> { // wrapper class, so the listener can call a method on `this`
  // The listener delegates to a private method, so the uninitialized
  // field isn't captured and the reference is resolved at runtime.
  private final Cache<K, V> cache = Caffeine.newBuilder()
      .expireAfterWrite(1, TimeUnit.HOURS)
      .removalListener((K key, V value, RemovalCause cause) -> {
        if (cause == RemovalCause.EXPIRED) {
          reload(key);
        }
      }).build();

  private void reload(K key) {
    cache.get(key, k -> /* load the fresh value */ null);
  }
}
I'm also curious to know how a scheduled task could do it.
If you are reloading all entries then you might not even need a key-value cache. In that case the simplest approach would be to reload an immutable map.
// a Guava ImmutableMap snapshot, swapped atomically on each reload
volatile ImmutableMap<K, V> data = load();
scheduledExecutorService.scheduleAtFixedRate(() -> data = load(),
    /* initialDelay */ 1, /* period */ 1, TimeUnit.HOURS);
I understand that refreshAfterWrite will provide the stale entry the first time, but for the second request, what happens if the cache hasn't yet finished loading the expired entry?
The subsequent requests obtain the stale entry until either (a) the refresh completes and updates the mappings or (b) the entry was removed and the caller must reload. The case of (b) can occur if the entry expired while the refresh was in progress, where returning the stale value is no longer an option.
Does the cache block the second request, complete the re-fetch, and then provide the latest value to the second request?
No, the stale but valid value is returned. This lets the refresh hide the latency of reloading a popular entry. For example, an application configuration that is used by all requests would otherwise block when expired, causing periodic delays. The refresh is triggered early, reloads, and the callers never observe the entry as absent. This hides latency while still allowing idle entries to expire and fade away.
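For reference, a minimal sketch of that combination (loadValue is a hypothetical loader; the durations are illustrative):
LoadingCache<K, V> cache = Caffeine.newBuilder()
    .refreshAfterWrite(5, TimeUnit.MINUTES) // an access after 5 minutes triggers an async reload
    .expireAfterWrite(1, TimeUnit.HOURS)    // idle entries still expire and fade away
    .build(key -> loadValue(key));
Reads during the reload keep returning the current (stale but valid) value, so callers never block on a popular entry.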
In the case where the cache has to load values equal to its capacity in one shot... after the defined expiry time, the cache should evict and re-fetch all 100 elements.
The unclear part of your description is whether the cache should reload only the entries being accessed within the refresh period or the entire contents. The former is what Caffeine offers, while the latter is better served by an explicit scheduling thread.

Varnish: Save objects from being evicted from the cache

Is it possible to mark objects in the Varnish cache in a way that they will not get evicted from the cache when it is full?
Some of the requests on our server take a very long time to render and result in a small XML response. This resource isn't called that often, and we want to make sure that it stays in the cache.
When space runs out in the cache, Varnish starts removing objects that are old and not called often. We would like to assign a priority to the cached objects and influence the algorithm that removes objects from the cache.
Is that possible? And if yes, how?

Redis cache lru start softlimit

I know Redis can be used as an LRU cache, but is there a soft-limit flag where we can state that after a specific criterion is reached, "Redis will start cleaning LRU items"?
Actually, I'm getting OOM errors on Redis. I've set Redis up as an LRU cache, but it hits the OOM limit and the application stops.
I know of the "maxmemory" flag, but is there a soft limit where, with some 10% of space left, we can start evicting items so that the application doesn't stop?
Did you set a specific eviction policy?
See: Eviction policies http://redis.io/topics/lru-cache
I would then check to make sure that you are not inadvertently running PERSIST on your Redis keys; PERSISTed keys, I believe, cannot be evicted by LRU.
You can use TTL (http://redis.io/commands/ttl) to find out the time limit on your keys, and KEYS (http://redis.io/commands/keys) to get a list of keys (this is dangerous on a production server, as the list could be very long and blocking).
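For example, a hard memory cap plus a policy that may evict any key (not only those with an expire set) looks like this in redis.conf (a sketch; the 100mb limit is illustrative):
maxmemory 100mb
maxmemory-policy allkeys-lru
Under volatile-lru, by contrast, only keys with an expire set are eviction candidates, which is why a PERSISTed key would never be removed.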
-daniel

APC User Cache Entries not being removed after timeout

I'm running APC mainly to cache objects and query data as user cache entries. Each item is set up with a TTL relevant to how long it's required in the cache; some items are 48 hours, but most are 2-5 minutes.
It's my understanding that when the timeout is reached and the current time passes the created-at time, the item should be automatically removed from the user cache entries?
This doesn't seem to be happening, though, and the items are instead staying in memory. I thought maybe the garbage collector would remove these items, but it doesn't seem to have done so even though it runs once an hour at the moment.
The only other thing I can think of is that the default apc.user_ttl = 0 overrides the individual timeout values and sets entries to never be removed, even after their individual timeouts?
In general, a cache manager SHOULD keep your entries for as long as possible, and MAY delete them if/when necessary.
The Time-To-Live (TTL) mechanism exists to flag entries as "expired", but expired entries are not automatically deleted, nor should they be, because APC is configured with a fixed memory size (via the apc.shm_size configuration item) and there is no advantage in deleting an entry when you don't have to. There is a relevant blurb in the APC documentation:
If APC is working, the Cache full count number (on the left) will
display the number of times the cache has reached maximum capacity and
has had to forcefully clean any entries that haven't been accessed in
the last apc.ttl seconds.
I take this to mean that if the cache never "reached maximum capacity", no garbage collection will take place at all, and that this is the right thing to do.
More specifically, I'm assuming you are using the apc_add/apc_store functions to add your entries. These have a similar effect to apc.user_ttl, which the documentation explains as:
The number of seconds a cache entry is allowed to idle in a slot in
case this cache entry slot is needed by another entry
Note the "in case" statement. Again I take this to mean that the cache manager does not guarantee a precise time to delete your entry, but instead try to guarantee that your entries stays valid before it is expired. In other words, the cache manager puts more effort on KEEPING the entries instead of DELETING them.
apc.ttl doesn't do anything unless there is insufficient memory to store newly incoming variables; if there is sufficient memory, the cached entries will never expire. So you have to specify a TTL for every variable you store via apc_store() or apc_add() to force APC to regenerate it after the TTL passed to the function has elapsed. If you use opcode caching, it will likewise never expire unless the page is modified (when apc.stat=1) or memory runs out. So apc.user_ttl and apc.ttl actually have little effect.
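For example, the per-entry TTL is passed as the third argument to apc_store (a sketch; the key and 300-second TTL are illustrative):
// eligible for removal/regeneration 300 seconds after being stored
apc_store('query_results', $results, 300);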

How does ehcache write-behind handle: shutdown, eviction b/c cache is full, eviction b/c TTL expired?

We're switching to using EhCache's write-behind feature but I can't tell from the documentation how these three cases are handled.
If I've put something in the cache via putWithWriter() and my CacheWriter hasn't yet been called, what happens if the element is evicted (due to space or due to TTL)? Is my CacheWriter automatically called with this item prior to eviction?
A similar question at program exit time: if I call getCacheManager().shutdown(), are all of the unwritten items sent to my cache writer?
