Wouldn't it make sense to put an on-heap cache (e.g. a Guava cache) in front of a persistent off-heap Chronicle cache? So, fall through to the Chronicle cache when you get a Guava cache miss?
Thanks
When a Chronicle-Map is updated either in process or via replication, you will be called by the MapEventListener which you define when you build the map.
It is a known issue that you don't get events triggered when another process updates the map, although you will be able to get the updated value.
Note: if the cost of deserialization is high, there is often something you can do to reduce this cost, such as using BytesMarshallable or a generated DataValue reference to use the data in place i.e. without deserializing it.
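The layered lookup from the question can be sketched as a read-through front cache over a slower backing store. The sketch below is plain Java: a bounded `LinkedHashMap` stands in for the Guava cache and a generic `Map` stands in for the ChronicleMap, and the class and method names are hypothetical, not any library's API.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical two-tier cache: a small on-heap LRU tier in front of a
// larger (e.g. off-heap, persistent) backing store.
class TwoTierCache<K, V> {
    private final Map<K, V> backingStore;     // stands in for a ChronicleMap
    private final Function<K, V> loader;      // loads the value on a full miss
    private final LinkedHashMap<K, V> onHeap; // stands in for a Guava cache

    TwoTierCache(Map<K, V> backingStore, Function<K, V> loader, int maxOnHeap) {
        this.backingStore = backingStore;
        this.loader = loader;
        this.onHeap = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxOnHeap; // simple LRU bound on the hot tier
            }
        };
    }

    synchronized V get(K key) {
        V v = onHeap.get(key);             // 1. on-heap hit: no deserialization
        if (v == null) {
            v = backingStore.get(key);     // 2. off-heap hit
            if (v == null) {
                v = loader.apply(key);     // 3. full miss: load and store
                backingStore.put(key, v);
            }
            onHeap.put(key, v);            // promote to the on-heap tier
        }
        return v;
    }
}
```

The main caveat (as the answer notes) is keeping the front tier consistent: an update arriving via replication invalidates the off-heap entry but not the on-heap copy, so you would need to evict from the front tier in your MapEventListener.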
I have a Spring Boot project that uses Infinispan in invalidation mode under a cluster for caching.
The question is about Infinispan.
In fact, I read in the official documentation: "In invalidation, the caches on different nodes do not actually share any data", and I am now in this situation.
I use the provided method Cache.putForExternalRead(key, value), and it solves the problem that when I put data into the cache on node A, node B invalidates it. But then I can't use the Spring Boot annotations, such as @Cacheable.
I also read "Invalidation mode can be used with a shared cache store." in the documentation, but I don't know how to do this and I hope you can provide some help.
The goal I hope to achieve is that, in invalidation mode, when I put data into the cache on node A, node B will receive a copy of that data from A. Can I do this with invalidation mode?
I tried using invalidation mode with a ClusterLoader enabled, but there is a risk of getting a stale value when a node fetches data from other nodes.
I use replicated mode now. However, "replication practically only performs well in small clusters (under 10 nodes)" and "Asynchronous replication is not recommended", so I can only use synchronous replication.
Which will perform better: invalidation or synchronous replication?
Looking forward to your help. Thanks
Spring annotations won't fully support INVALIDATION mode unless you use a ClusterLoader. Under the hood, the annotations use put; we might consider adding a feature to support putForExternalRead behavior in the future, but it won't be there very soon.
Annotations work well with LOCAL, REPL and DIST modes.
ConfigurationBuilder b = new ConfigurationBuilder();
b.persistence()
.addClusterLoader()
.remoteCallTimeout(500);
If you are worried about getting stale values, and a replicated cache is not performant enough, you might consider using a distributed cache.
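A distributed cache is configured through the same ConfigurationBuilder API as the ClusterLoader snippet above. The fragment below is a sketch assuming Infinispan's standard clustering configuration; the number of owners is illustrative.

```java
ConfigurationBuilder b = new ConfigurationBuilder();
b.clustering()
    .cacheMode(CacheMode.DIST_SYNC) // distributed, synchronous
    .hash()
        .numOwners(2);              // each entry is stored on 2 nodes
```

With DIST_SYNC you trade the full copy-everywhere cost of replication for one extra network hop on reads that land on a non-owner node.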
Is it possible to mark objects in the varnish cache in a way that they will not get evicted from the cache if the cache is full?
Some of the requests on our server take a very long time to render and result in a small XML response. This resource isn't requested often, and we want to make sure that it stays in the cache.
When space in the cache runs out, Varnish starts removing objects that are old and not requested often. We would like to assign a priority to the cached objects and influence the algorithm that evicts objects from the cache.
Is that possible? And if yes, how?
I want to start using Azure Distributed Caching and came across the concept of LocalCache. But the fact that it can go out of sync with the distributed cache makes me wonder why I would want to use it and how I could use it safely.
When enabled, items retrieved from the cache cluster are locally stored in memory on the client machine. This improves performance of subsequent get requests, but it can result in inconsistency of data between the locally cached version and the actual item in the cache cluster.
Calling DataCache.GetIfNewer is one option to ensure that I get the latest version, but that requires that I still do a call to the Distributed Cache, passing in the object that I want to check, in order to see if the two versions differ.
I could use Notifications to invalidate the LocalCache object, but that is done on a polling basis, which opens up the opportunity for an update to occur within the poll period leaving me with stale data.
So, why would I ever use LocalCache, and if there is a reason to do so, how do I use it safely?
"There are only two hard things in Computer Science: cache invalidation and naming things" - Phil Karlton
You would use LocalCache when a) performance is critical b) you don't care that the retrieved object might be stale.
There are many cases where the object is never going to be out of date (e.g. list of public/bank holidays), or when you are not too worried about being 100% up-to-date (e.g. if item has > 1000 units in stock, use local cache, otherwise re-fetch from database).
Don't try to invalidate the local cache. If you need more up-to-date objects, get them from the cluster. If you cannot tolerate out-of-sync data, get it from the database. Caching is always a compromise between performance and consistency: LocalCache more so than the server cache, but the server cache is still a compromise.
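The stock-threshold example above can be sketched as a simple staleness-tolerance check. Everything here is hypothetical plain Java: the threshold, class, and lookup function are stand-ins for your own logic, not an Azure Caching API.

```java
import java.util.Map;
import java.util.function.Function;

// Hypothetical read path: serve from a possibly-stale local cache when the
// cached value leaves a wide safety margin, otherwise re-fetch fresh data.
class StockReader {
    static final int SAFETY_THRESHOLD = 1000; // illustrative margin

    private final Map<String, Integer> localCache;       // may be stale
    private final Function<String, Integer> freshLookup; // cluster or database

    StockReader(Map<String, Integer> localCache, Function<String, Integer> freshLookup) {
        this.localCache = localCache;
        this.freshLookup = freshLookup;
    }

    int unitsInStock(String sku) {
        Integer cached = localCache.get(sku);
        if (cached != null && cached > SAFETY_THRESHOLD) {
            return cached; // plenty of stock: staleness is harmless here
        }
        int fresh = freshLookup.apply(sku); // low or unknown stock: be exact
        localCache.put(sku, fresh);
        return fresh;
    }
}
```

The point of the sketch is that "use LocalCache safely" is a per-read decision: each call site decides how much staleness its business rule can absorb.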
We are using Infinispan, and in our system we have a big object to which we have to push small changes per transaction. I have implemented the DeltaAware interface for this object, and also the Delta. The problem I am facing is that the changes are not propagated to other nodes; only the initial object state is propagated. Also, the delta and commit methods are not called on the big object that implements DeltaAware. Do I need to register this object somewhere other than simply putting it in the cache?
Thanks
It's probably better if you simply use an AtomicHashMap, which is a construction within Infinispan. This allows you to group a series of key/value pairs as a single value. Infinispan can detect changes in this AtomicHashMap because it implements the DeltaAware interface. AHM is a higher level construct than DeltaAware, and one that probably suits you better.
To give you an example where AtomicHashMaps are used, they're heavily used by JBoss AS7 HTTP session replication, where each session id is mapped to an AtomicHashMap. This means that we can detect when individual session data changes and only replicate that.
Cheers,
Galder
I am looking into using Enterprise Caching Block for my .NET 3.5 service to cache a bunch of static data from the database.
From everything I have read, it seems that FileDependency is the best option for storing static data that does not expire too often. However, when the file changes and the cache is flushed, I need to get a callback once to do some post processing for that particular cache. If I implement ICacheItemRefreshAction and register it during adding an item to the cache, I get a callback for each one of them.
Is there a way to register a callback for the entire cache, so that I don't see thousands of callbacks being invoked when the cache flushes?
Thanks
To address your follow up for a better way than FileDependency: you could wrap a SqlDependency in an ICacheItemExpiration. See SqlCacheDependency with the Caching Application Block for sample code.
That approach would only work with SQL Server and would require setting up Service Broker.
In terms of a cache-level callback, I don't see an out-of-the-box way to achieve that; almost everything is geared to the item level. What you could do is create your own CacheManager implementation that features a cache-level callback.
Another approach might be to have an ICacheItemRefreshAction that only performs its work when the cache is empty (i.e. the last item has been removed).