Varnish: Save objects from being evicted from the cache

Is it possible to mark objects in the Varnish cache so that they will not get evicted when the cache is full?
Some of the requests on our server take a very long time to render and result in a small XML response. This resource isn't called very often, and we want to make sure it stays in the cache.
When the cache runs out of space, Varnish starts removing objects that are old and infrequently requested. We would like to assign a priority to cached objects and influence the algorithm that evicts them.
Is that possible? And if so, how?

Related

Is it possible to get a notification when there's a key eviction from a cache application

memcached, redis, varnish, ...
Is there a way to check thousands and thousands of keys every second to verify they were not evicted from the cache?
Is there a specific cache application for that purpose?
I don't think it would be possible to scale that way... but correct me if I am wrong.
Are there cache apps or modules that can send you a notice when a key is evicted ? That would probably make more sense.
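For what it's worth, Redis (2.8 and later) can publish keyspace notifications when keys expire or are evicted, so a subscriber is told about evictions instead of having to poll thousands of keys. Below is a minimal sketch using the redis-py client; the configuration flags and channel names are from the Redis documentation as I recall them, so treat this as a starting point rather than a drop-in solution.

    # Sketch: listen for Redis eviction/expiry events instead of polling keys.
    # Assumes a local Redis server and the redis-py client; adjust host/db to taste.
    import redis

    r = redis.Redis(host="localhost", port=6379, db=0)

    # 'E' = key-event notifications, 'x' = expired events, 'e' = evicted events.
    # This can also be set in redis.conf: notify-keyspace-events "Exe"
    r.config_set("notify-keyspace-events", "Exe")

    p = r.pubsub()
    # The affected key name arrives as the message payload on these channels.
    p.psubscribe("__keyevent@0__:evicted", "__keyevent@0__:expired")

    for msg in p.listen():
        if msg["type"] == "pmessage":
            print(msg["channel"].decode(), "->", msg["data"].decode())

Note that these notifications are fire-and-forget pub/sub: if the subscriber is down when a key is evicted, the event is lost, so this complements rather than replaces an expiry-aware read path.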

What is it called when two requests are being served from the same cache?

I'm trying to find the technical term for the following (and potential solutions), in a distributed system with a shared cache:
request A comes in, gets a cache miss, and begins generating the response for A
request B comes in with the same cache key; since A has not completed yet and hasn't written its result to the cache, B also gets a cache miss and begins generating a response
request A completes and stores its value in the cache
request B completes and stores its value in the cache (overwriting request A's value)
You can see how this can be a problem at scale, if instead of two requests, you have many that all get a cache miss and attempt to generate a cache value as soon as the cache entry expires. Ideally, there would be a way for request B to know that request A is generating a value for the cache, and wait until that is complete and use that value.
I'd like to know the technical term for this phenomenon; it's a cache race of sorts.
It's a kind of thundering herd; in the caching context it's usually called a cache stampede or dog-piling.
Solution: when the first request A arrives, it sets a flag; if request B arrives and finds the flag set, it waits. After A has loaded the data into the cache, the flag is removed.
If all the waiting requests are woken up at once by the cache-loaded event, the wake-up itself triggers a thundering herd of threads, so the solution needs to account for that as well.
In the Linux kernel, for example, only one process is woken up at a time, even when several processes are waiting on the same event.
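To make that concrete, here is a minimal sketch of the flag-and-wait idea (my own illustration; every name in it is made up). It uses one lock per cache key as the "flag": the first miss computes the value while later requests block on the lock, and because waiters acquire the lock one at a time and re-check the cache, the wake-up stampede described above is avoided too.

    # Sketch: "single flight" recomputation on a cache miss.
    # Only the first request for a key computes; concurrent requests wait for it.
    import threading

    cache = {}                  # stand-in for the shared cache (memcached, redis, ...)
    locks = {}                  # one lock per in-flight key
    locks_guard = threading.Lock()

    def get_or_compute(key, compute):
        value = cache.get(key)
        if value is not None:
            return value                   # cache hit, fast path
        with locks_guard:                  # find or create the per-key "flag"
            lock = locks.setdefault(key, threading.Lock())
        with lock:                         # request B blocks here while A computes
            value = cache.get(key)         # re-check: A may have filled it already
            if value is None:
                value = compute()          # only one caller runs the expensive work
                cache[key] = value
            return value

In a distributed system the in-process lock would be replaced by something like a short-lived "being computed" marker in the shared cache itself, but the shape of the solution is the same.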

Terracotta timeToIdleSeconds versus timeToLiveSeconds

All,
Here's my understanding of the two settings; I wanted to confirm it.
timeToIdleSeconds = An object will be evicted if it's idle for more than X seconds.
From the documentation:
If a client accesses an element in myCache that has been idle for more than an hour (timeToIdleSeconds), it evicts that element. The element is also evicted from the Terracotta Server Array.
If the cached object is never requested again, will it ever be evicted? Or are the cache's sizing constraints the only thing that will clean up an object that's no longer requested?
timeToLiveSeconds = An object will be evicted if it's been cached for more than X seconds. Does this work the same way as timeToIdleSeconds, i.e. only evicted when requested again, or will it get cleaned up by a background process?
Thanks
For your final question ("Or will this get cleaned up by a background process?"):
http://terracotta.org/apidocs/terracotta-toolkit/3.2.0/org/terracotta/cache/CacheConfig.html
The documentation for both setMaxTTISeconds and setMaxTTLSeconds says:
"The background eviction thread sleep interval is based on this value and the Max TTL, so a side effect of changing this value is to change the sleep interval of the eviction thread."
I'm assuming that the different versions of Terracotta will be similar; I believe that there will be a background thread doing cleanup.
If that's correct, then it would seem to imply that a request is not necessary to trigger an eviction, somewhat contrary to the documentation you quote.
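To make the two semantics concrete, here is a small illustrative sketch (mine, not Terracotta's code) of how a TTI/TTL expiry check is typically evaluated: TTI is measured from the last access, TTL from creation, and an entry is considered expired as soon as either bound is exceeded.

    # Sketch of typical TTI/TTL expiry semantics (illustrative, not Terracotta's code).
    import time

    class Entry:
        def __init__(self, value, tti_seconds=0, ttl_seconds=0):
            self.value = value
            self.created_at = time.time()
            self.last_accessed = self.created_at
            self.tti = tti_seconds      # timeToIdleSeconds; 0 = no idle limit
            self.ttl = ttl_seconds      # timeToLiveSeconds; 0 = no lifetime limit

        def is_expired(self, now=None):
            if now is None:
                now = time.time()
            idle_expired = self.tti > 0 and (now - self.last_accessed) > self.tti
            life_expired = self.ttl > 0 and (now - self.created_at) > self.ttl
            return idle_expired or life_expired

        def get(self):
            # A read resets the idle clock but never the lifetime clock.
            self.last_accessed = time.time()
            return self.value

Whether an entry that fails this test is removed lazily (on the next access) or proactively (by a background eviction thread, as the CacheConfig javadoc suggests) is a separate question from when it stops being servable.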

APC User Cache Entries not being removed after timeout

I'm running APC mainly to cache objects and query data as user cache entries. Each item is set up with a TTL matching how long it's needed in the cache; some items are cached for 48 hours, but most for 2-5 minutes.
My understanding is that once the timeout is reached, i.e. the current time passes the creation time plus the TTL, the item should be automatically removed from the user cache entries?
This doesn't seem to be happening, though, and the items are staying in memory. I thought the garbage collector would remove them, but it doesn't seem to have, even though it currently runs once an hour.
The only other thing I can think of is that the default apc.user_ttl = 0 overrides the individual timeout values, so entries are never removed even after their individual timeouts expire?
In general, a cache manager SHOULD keep your entries for as long as possible, and MAY delete them if/when necessary.
The Time-To-Live (TTL) mechanism exists to flag entries as "expired", but expired entries are not automatically deleted, nor should they be, because APC is configured with a fixed memory size (via the apc.shm_size configuration item) and there is no advantage in deleting an entry when you don't have to. There is a relevant blurb in the APC documentation:
If APC is working, the Cache full count number (on the left) will display the number of times the cache has reached maximum capacity and has had to forcefully clean any entries that haven't been accessed in the last apc.ttl seconds.
I take this to mean that if the cache never reaches maximum capacity, no garbage collection takes place at all, and that this is the right thing to do.
More specifically, I'm assuming you are using the apc_add/apc_store functions to add your entries. The TTL you pass there has a similar effect to apc.user_ttl, which the documentation describes as:
The number of seconds a cache entry is allowed to idle in a slot in case this cache entry slot is needed by another entry
Note the "in case" wording. Again, I take this to mean that the cache manager does not guarantee a precise time at which your entry will be deleted, but instead tries to guarantee that your entries stay valid until they expire. In other words, the cache manager puts more effort into KEEPING entries than into DELETING them.
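That keep-rather-than-delete behavior can be sketched roughly like this (an illustration of the general pattern, not APC's actual implementation): an expired entry becomes invisible to reads immediately, but its memory is only reclaimed when the cache needs the space.

    # Sketch of lazy expiry: expired entries are invisible to get(),
    # but only physically removed when space is needed, not by a timer.
    import time

    class LazyCache:
        def __init__(self, max_entries):
            self.max_entries = max_entries
            self.store = {}                  # key -> (value, expires_at)

        def set(self, key, value, ttl):
            if len(self.store) >= self.max_entries:
                self._reclaim()              # only now are expired entries deleted
            self.store[key] = (value, time.time() + ttl)

        def get(self, key):
            item = self.store.get(key)
            if item is None:
                return None
            value, expires_at = item
            if time.time() > expires_at:     # expired: treat as a miss...
                return None                  # ...but leave it in memory for now
            return value

        def _reclaim(self):
            now = time.time()
            for key in [k for k, (_, exp) in self.store.items() if exp < now]:
                del self.store[key]

So "the items are staying in memory" is expected; what matters is that a get() on an expired entry behaves as a miss.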
apc.ttl doesn't do anything unless there is insufficient allocated memory to store incoming variables; if there is sufficient memory, the cache will never expire. So you have to specify a TTL for every variable you store using apc_store() or apc_add() to force APC to regenerate it once the TTL passed to the function has elapsed. If you use opcode caching, it will likewise never expire unless the page is modified (when apc.stat = 1) or memory runs out. So apc.user_ttl and apc.ttl actually accomplish very little.

When to Use Azure Caching Local Cache

I want to start using Azure Distributed Caching and came across the concept of LocalCache. But the fact that it can go out of sync with the distributed cache makes me wonder why I would want to use it and how I could use it safely.
When enabled, items retrieved from the cache cluster are locally stored in memory on the client machine. This improves performance of subsequent get requests, but it can result in inconsistency of data between the locally cached version and the actual item in the cache cluster.
Calling DataCache.GetIfNewer is one option to ensure that I get the latest version, but that requires that I still do a call to the Distributed Cache, passing in the object that I want to check, in order to see if the two versions differ.
I could use Notifications to invalidate the LocalCache object, but those are delivered on a polling basis, which opens up the opportunity for an update to occur within the poll period, leaving me with stale data.
So, why would I ever use LocalCache, and if there is a reason to, how do I use it safely?
"There are only two hard things in Computer Science: cache invalidation and naming things" - Phil Karlton
You would use LocalCache when (a) performance is critical and (b) you don't care that the retrieved object might be stale.
There are many cases where the object is never going to be out of date (e.g. a list of public/bank holidays), or where you are not too worried about being 100% up to date (e.g. if an item has > 1000 units in stock, use the local cache; otherwise re-fetch from the database).
Don't try to invalidate the local cache. If you need more up-to-date objects, get them from the cluster. If you cannot tolerate out-of-sync data, get it from the database. Caching is always a compromise between performance and consistency: LocalCache more so than the server cache, but the server cache is still a compromise.
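One way to use it safely, following the advice above, is to put an explicit bound on staleness. A rough sketch (my own, not the Azure Caching API): give the local tier a short TTL that caps the inconsistency window, and send any read that cannot tolerate even that window straight to the cluster or the database.

    # Sketch: a local in-process cache in front of a distributed cache, with a
    # short local TTL that caps how stale a locally served object can be.
    import time

    LOCAL_TTL = 30   # seconds of staleness we are willing to tolerate

    local = {}       # key -> (value, fetched_at); per-machine, in-process

    def get(key, cluster_get):
        """cluster_get is a stand-in for the distributed-cache lookup."""
        item = local.get(key)
        if item is not None:
            value, fetched_at = item
            if time.time() - fetched_at < LOCAL_TTL:
                return value              # fast path: stale by at most LOCAL_TTL
        value = cluster_get(key)          # authoritative copy from the cluster
        local[key] = (value, time.time())
        return value

As I understand it, this roughly mirrors LocalCache's timeout-based invalidation; the real design choice is how large LOCAL_TTL can be before staleness starts to hurt.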
