May I know the difference between CaffeineCacheManager and SimpleCacheManager?
The description says CaffeineCacheManager is a lazy cache, but what is a lazy cache, and in what situation should I pick CaffeineCacheManager?
Have a read of all the different cache providers first and notice how their APIs differ. The Simple Cache manager is Spring's default cache manager, used if you don't specify one. It's 'simple' because its underlying implementation is a Java ConcurrentHashMap, and it doesn't give you a whole lot of customization options.
The Caffeine Cache manager is slightly different in that there are more configuration-driven customization options, such as the ability to specify an expiry timeout (in order to 'bust' entries after a certain time period) and a maximum size limit (in order to cap the capacity of the cache). The default cache manager does not give you this configurability.
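For example, here is a minimal configuration sketch assuming Spring's CaffeineCacheManager and the Caffeine builder (the ten-minute expiry and 500-entry cap are illustrative values):

import java.util.concurrent.TimeUnit;

import com.github.benmanes.caffeine.cache.Caffeine;
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.caffeine.CaffeineCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableCaching
public class CacheConfig {
    @Bean
    public CacheManager cacheManager() {
        CaffeineCacheManager cacheManager = new CaffeineCacheManager();
        cacheManager.setCaffeine(Caffeine.newBuilder()
                .expireAfterWrite(10, TimeUnit.MINUTES) // 'bust' entries after a time period
                .maximumSize(500));                     // cap the capacity of the cache
        return cacheManager;
    }
}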
My team used the Caffeine Cache manager on a project recently and I can definitely recommend it.
As for your question about a 'lazy' cache: have a read about lazy instantiation more broadly. Essentially, it only loads what it needs when it needs it (upon a cache access) rather than loading the entire thing all at once, as in the sketch below.
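In CaffeineCacheManager's case, the named cache instances themselves are also created on first access; a minimal sketch (the cache name and entry are illustrative):

import org.springframework.cache.Cache;
import org.springframework.cache.caffeine.CaffeineCacheManager;

public class LazyCacheDemo {
    public static void main(String[] args) {
        CaffeineCacheManager manager = new CaffeineCacheManager(); // no cache instances exist yet
        Cache books = manager.getCache("books"); // the "books" cache is created here, on first access
        books.put("isbn-1", "The Hobbit");       // subsequent access uses the already-created cache
    }
}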
Are there any best practices around key invalidation for caches like Redis? Should I let the key fall out after the TTL expires, or should I call a delete on the key in question to remove it from the cache?
Would actively calling a delete operation put more load on my cache and hurt its throughput?
It seems you are asking about how to handle memory management in Redis when using it as a cache. Redis implements several configuration options for this: maxmemory, maxmemory-policy, maxmemory-samples, replica-ignore-maxmemory, and active-expire-effort. For more details on these, see the self-documented redis.conf for your version.
These configs allow you to manage the tradeoffs between CPU, memory, and latency without needing to do external memory management. With that said, DEL can be used to invalidate keys when your application is aware of state changes, and the TTLs you set work together with some of the memory-management policies.
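As a minimal sketch of both approaches, assuming the Jedis client (the key name, TTL, and value are illustrative):

import redis.clients.jedis.Jedis;

public class CacheInvalidation {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Option 1: set a TTL and let the key fall out when it expires.
            jedis.setex("user:42:profile", 300, "serialized-profile");

            // Option 2: delete the key explicitly when the application
            // knows the underlying state has changed.
            jedis.del("user:42:profile");
        }
    }
}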
I'm using MemoryCache in a dotnet core C# project. I'm using it to store a list of enums that I read out of a collection (I want to cache it because loading the enums uses reflection, which takes time).
However, since my collection isn't going to change, I don't want it to expire at all. How do I do that? If I don't set any expiration (SlidingExpiration or AbsoluteExpiration), will my cache never expire?
If you do not specify an absolute and/or sliding expiration, then the item will theoretically remain cached indefinitely. In practical terms, persistence is dependent on two factors:
Memory pressure. If the system is resource-constrained, and a running app needs additional memory, cached items are eligible to be removed from memory to free up RAM. You can however disable this by setting the cache priority for the entry to CacheItemPriority.NeverRemove.
Memory cache is process-bound. That means if the server restarts, or your application restarts for whatever reason, anything stored in the memory cache is gone. Additionally, in web farm scenarios each instance of your application will have its own memory cache, since each is a separate process (even if you're simply running multiple instances on the same server).
If you care about only truly doing the operation once, and persisting the result past app shutdown and even across multiple instances of your app, you need to employ distributed caching with a backing store like SQL Server or Redis.
I am using Infinispan 6.0.2 via the WildFly 8.2 subsystem. I have configured a transactional cache that uses a String Based JDBC Cache Store to persist content placed in the Infinispan cache.
My concern, after reading the following in the Infinispan documentation, is that the cache and cache store can get out of sync when putting/updating/removing multiple entries into the cache in the same transaction, because the transaction commits/rolls back in the cache but only partially succeeds/fails in the cache store.
4.5. Cache Loaders and transactional caches
When a cache is transactional and a cache loader is present, the cache loader won’t be enlisted in the transaction in which the cache is part. That means that it is possible to have inconsistencies at cache loader level: the transaction to succeed applying the in-memory state but (partially) fail applying the changes to the store. Manual recovery would not work with cache stores.
Could someone please clarify whether the above statement refers only to loading from a cache store, or whether it also refers to writing to a store as well.
If this is also the case when writing to a cache store, are there any recommended strategies/solutions for ensuring a cache and cache store remain in sync?
The driving factor behind this for me is that I am using Infinispan both for write-through and for overflow of business-critical data, and I need confidence that the cache store correctly represents the state of the data.
I have also asked this question on the Infinispan Forums
Many thanks in advance.
It applies to writes as well; a failure to write to the store does not affect the rest of the transaction.
The reason for this is that the actual persistence API is not transactional (edit: newer versions of Infinispan support transactional persistence, too). With two-phase commit (in the first phase, prepare, all locks are acquired; in the second, commit, the write is executed), the write to the store happens in the second phase, so a failure there cannot roll back the changes on different nodes.
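To make that window concrete, here is a minimal sketch assuming an infinispan.xml that defines a transactional cache named "txCache" with a JDBC cache store attached (the file name, cache name, keys, and values are illustrative):

import javax.transaction.TransactionManager;

import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;

public class TxStoreDemo {
    public static void main(String[] args) throws Exception {
        Cache<String, String> cache = new DefaultCacheManager("infinispan.xml").getCache("txCache");
        TransactionManager tm = cache.getAdvancedCache().getTransactionManager();

        tm.begin();
        cache.put("k1", "v1");
        cache.put("k2", "v2");
        // Prepare (phase 1) acquires the locks; commit (phase 2) applies the writes.
        // The write to the cache store happens in this second phase, where it can
        // (partially) fail without rolling back the committed in-memory state.
        tm.commit();
    }
}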
Although Infinispan is trying to get close to a strongly consistent in-memory database, given these guarantees it is still rather a cache. If you are more interested in the design limitations (some of them also theoretical limitations), I recommend reading the Infinispan wiki.
In my application I use ehcache with several caches that are backed by a terracotta server.
I noticed that there is a correlation between the size of the data that is saved in the server and the time it takes to create a cache manager instance in the client (the bigger the size the longer it takes).
I couldn't find any info about what actually happens when the cache manager is created.
To my understanding, the data would only be pulled when it is actually requested and not when creating the manager, so what is the overhead?
Any thoughts or references to relevant reading would be much appreciated.
First of all, the CacheManager is not itself related to any data pushing or pulling; it creates the caches, which contain the elements as name-value pairs and hold the data for put/get and other operations. The CacheManager handles the creation, access, and removal of caches.
In fact, when you create a CacheManager whose caches participate in the Terracotta cluster, you might see a difference in the time it takes to load. The cache manager will establish a connection to the server specified in the config. Any pre-cache loaders (classes extending BootstrapCacheLoader) will affect the load time too, as will the cache consistency attribute of caches that participate in the cluster. By design, the Terracotta server will also push the most-hit data to clients in order to reduce local cache misses, and likewise for any cache identified for pinning.
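As a minimal sketch of where that time goes, assuming ehcache 2.x with a Terracotta-backed ehcache.xml (the file, cache, and key names are illustrative):

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class TerracottaStartup {
    public static void main(String[] args) {
        // The connection to the Terracotta server (plus any bootstrap cache loading)
        // happens here, which is why creation time grows with the clustered data size.
        CacheManager manager = CacheManager.newInstance("ehcache.xml");

        // Individual elements are still only pulled on access
        // (or pushed ahead of time for hot or pinned data).
        Cache cache = manager.getCache("clusteredCache");
        Element element = cache.get("someKey");

        manager.shutdown();
    }
}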
I'm using Windows Azure Shared Caching. I encountered a few problems:
How to know what keys are present in the cache? Is there something like a GetAllKeys() method?
Is it possible to call clearAll()?
Why can't I use regions?
Thanks.
This section applies to Windows Azure Caching
Windows Azure provides two types of cache modes:
Dedicated Role caching - the role instances are used exclusively for caching (there is no other code running in that instance).
Co-located Role caching - the cache shares the VM resources (bandwidth, CPU, and memory) with the application.
How to know what keys are present in the cache? Is there something like a GetAllKeys() method?
Do you need that information for your application, or more for reporting/auditing?
I think Microsoft did not provide that method for one good reason: the information it returned could be obsolete shortly after. Cache items may expire at any time (depending on each item's expiration time and when it was added to the cache), so the information you would receive from a GetAllKeys() method could be invalid seconds or even milliseconds later.
The standard cache usage pattern would be (see the sketch after this list):
Get the item from the cache by a key.
If the cache returns null, create that item and put/add it into the cache.
Perform the operation on the item (whether taken from the cache or recreated).
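A minimal sketch of that pattern, in Java for illustration (SimpleCache, getOrCreate, and createItem are hypothetical stand-ins, not Azure Shared Caching APIs):

public class CacheAsideExample {
    // Hypothetical cache interface standing in for the Azure caching client.
    interface SimpleCache {
        Object get(String key);
        void put(String key, Object value);
    }

    static Object getOrCreate(SimpleCache cache, String key) {
        Object item = cache.get(key);   // 1. get the item from the cache by key
        if (item == null) {             // 2. on a miss, create the item and cache it
            item = createItem(key);
            cache.put(key, item);
        }
        return item;                    // 3. the caller performs the operation on it
    }

    // Hypothetical expensive creation step.
    static Object createItem(String key) {
        return "expensive result for " + key;
    }
}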
Is it possible to call clearAll()?
I do not think you should worry about purging your cache. If you set the cache eviction policy to LRU (Least Recently Used), then the least recently used items are discarded first, so you will never hit anything like "no space in cache".
Why can't I use regions?
You can, but only with a cache located on the same instance; Dedicated Role caching does not support it.
This section applies to Windows Azure Shared Caching
Windows Azure Shared Caching is very similar to Windows Azure Caching (described above) from a client-side point of view, and all of the explanations above apply to Shared Caching too.
There is a small difference in item eviction:
In Shared Caching, items without a specific expiration time will expire after 48 hours. However, you can add items to the cache (via various overloads of the Add and Put methods) with an explicit expiration time, such as X minutes or Y days.
When you exceed the size of your cache (the cache size you chose during creation), the caching service will start evicting items until there is enough memory to add new cache items. Eviction uses the LRU mechanism: the least recently used items in the cache are removed first.
The get, check, and recreate approach described above for dealing with cache items will work for Shared Caching too.
I hope that will help you to better understand Azure Caching and Shared Caching.
The following method clears all the data in a cache:
public static void InvalidateCache(string cacheName)
{
    DataCache desiredCache = new DataCache(cacheName);

    // Enumerate the cache's system regions and clear each one,
    // which removes every item stored in the cache.
    foreach (string regionName in desiredCache.GetSystemRegions())
    {
        desiredCache.ClearRegion(regionName);
    }
}