Is there a provider-agnostic way of getting up-to-date cache statistics in the Spring framework?

Spring provides a useful Cache Abstraction feature, but what I could not find is a provider-agnostic way to get live cache statistics. Essentially, I just want to show a list of all the cache names and their corresponding keys, together with the count of hits and misses and the size (in KB), either on a web page or via JMX. I know Ehcache provides this feature, and if I use the Ehcache API inside the code I can get it (I have already used it in the past). But I believe using the Ehcache API inside the code takes away the whole notion of the Spring framework's cache abstraction.

The only common, provider-agnostic thing you have is the CacheManager interface, which provides the following method:
Collection<String> getCacheNames()
It returns a collection of the names of the caches known by the cache manager.
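Anything beyond the names requires dropping down to the native cache via Cache.getNativeCache(). Here is a minimal sketch of that idea, assuming Ehcache 2.7+ as the provider (whose getStatistics() returns a StatisticsGateway); for any other provider you would branch on its native type instead:

    import java.util.Collection;
    import org.springframework.cache.Cache;
    import org.springframework.cache.CacheManager;

    public class CacheStatisticsReporter {

        private final CacheManager cacheManager; // injected by Spring

        public CacheStatisticsReporter(CacheManager cacheManager) {
            this.cacheManager = cacheManager;
        }

        public void report() {
            // Provider-agnostic part: the names of all known caches.
            Collection<String> names = cacheManager.getCacheNames();
            for (String name : names) {
                Cache cache = cacheManager.getCache(name);
                Object nativeCache = cache.getNativeCache(); // provider-specific from here on
                if (nativeCache instanceof net.sf.ehcache.Ehcache) {
                    net.sf.ehcache.statistics.StatisticsGateway stats =
                            ((net.sf.ehcache.Ehcache) nativeCache).getStatistics();
                    System.out.printf("%s: hits=%d, misses=%d, entries=%d%n",
                            name, stats.cacheHitCount(), stats.cacheMissCount(), stats.getSize());
                }
            }
        }
    }

Such a reporter could be exposed via a controller or a JMX MBean, but the statistics calls themselves remain provider-specific.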

Related

Spring Cache to Disable Cache by cacheName configuration

I am using Spring Boot, and it is very easy to integrate Spring Cache with other cache components.
To cache data we can use the @Cacheable annotation, but we still need to configure the cache name and add it to the cacheManager; without this step, we get an exception when the method is accessed:
java.lang.IllegalArgumentException: Cannot find cache named 'xxxx' for Builder
My question is: is it possible to disable caching instead of raising the error if the cacheName is not configured? I ask because Spring Cache provides a spring.cache.cacheNames configuration property in CacheProperties.
I am not sure whether the condition attribute of @Cacheable works for this.
Any ideas are appreciated! Thanks in advance!
It really depends on your "caching provider" and, in particular, the implementation of the CacheManager interface. Since Spring's Cache Abstraction is just that, an "abstraction", it allows you to plug in different providers and backing data stores to support the caches required by your application (i.e. as determined by Spring's caching annotations or, alternatively, the JSR-107 JCache annotations; see here).
For instance, if you use the Spring Framework's provided ConcurrentMapCacheManager implementation (not recommended for production except for really simple use cases) and you choose not to declare your caches at configuration/initialization time (using the default, no-arg constructor), then the "Caches" are created lazily. However, if you do declare your "Caches" at configuration/initialization time (using the constructor that accepts cache name arguments) and your application then uses a cache (e.g. @Cacheable("NonExistingCache")) that was not explicitly declared, an Exception is thrown because the getCache(name:String):Cache method returns null and the CacheInterceptor initialization logic throws an IllegalArgumentException for no Cache available for the caching operation (follow from the CacheInterceptor down, here, here, here, here and then here).
There is currently no way to disable this initialization check (i.e. the thrown Exception) for non-existing caches. The best you can do is, like the ConcurrentMapCacheManager implementation, create Caches lazily. However, this heavily depends on your caching provider implementation. Obviously, some cache providers are more sophisticated than others, and creating a Cache on the fly (i.e. lazily) may be more expensive, so it is either not supported by the caching provider or not recommended.
Still, you could work around this limitation by wrapping any CacheManager implementation (of your choice), delegating to the underlying implementation for "existing" Caches and "safely" handling "non-existing" Caches by treating them as a Cache miss, simply by providing some simple wrapper implementations of the core Spring CacheManager and Cache interfaces.
Here is an example integration test class that demonstrates your current problem. Note the test/assertions for non-existing Caches.
Then, here is an example integration test class that demonstrates how to effectively disable caching for non-existing Caches (not provided by the caching provider). Once again, note the test/assertions for safely accessing non-existing Caches.
This is made possible by the wrapper delegate for CacheManager (which wraps and delegates to an existing caching provider, in this case just the ConcurrentMapCacheManager again (see here), but it would work for any caching provider supported by the Spring Cache Abstraction) along with the NoOpNamedCache implementation of the Spring Cache interface. This no-op Cache instance could be a singleton and reused for all non-existing Caches if you did not care about the name. But keeping the name gives you a degree of visibility into which "named" Caches are not backed by an actual Cache, since that will most likely have an impact on your services (i.e. service methods without caching enabled because the "named" cache does not exist).
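A minimal sketch of that wrapper idea (the wrapper class name here is made up for illustration; the linked test classes are the complete reference):

    import java.util.Collection;
    import java.util.concurrent.Callable;
    import org.springframework.cache.Cache;
    import org.springframework.cache.CacheManager;

    // Wraps any caching provider's CacheManager and returns a no-op Cache for
    // unknown names instead of null, so the cache interceptor never fails.
    class MissingCacheTolerantCacheManager implements CacheManager {

        private final CacheManager delegate;

        MissingCacheTolerantCacheManager(CacheManager delegate) {
            this.delegate = delegate;
        }

        @Override
        public Cache getCache(String name) {
            Cache cache = delegate.getCache(name);
            return cache != null ? cache : new NoOpNamedCache(name);
        }

        @Override
        public Collection<String> getCacheNames() {
            return delegate.getCacheNames();
        }
    }

    // Every read is a miss and every write is discarded, i.e. caching is effectively disabled.
    class NoOpNamedCache implements Cache {

        private final String name;

        NoOpNamedCache(String name) { this.name = name; }

        @Override public String getName() { return name; }
        @Override public Object getNativeCache() { return null; }
        @Override public ValueWrapper get(Object key) { return null; }
        @Override public <T> T get(Object key, Class<T> type) { return null; }
        @Override public <T> T get(Object key, Callable<T> valueLoader) {
            try { return valueLoader.call(); }
            catch (Exception e) { throw new ValueRetrievalException(key, valueLoader, e); }
        }
        @Override public void put(Object key, Object value) { }
        @Override public ValueWrapper putIfAbsent(Object key, Object value) { return null; }
        @Override public void evict(Object key) { }
        @Override public void clear() { }
    }

Registering the wrapper as the application's CacheManager bean (delegating to the real provider's manager) is all that is needed; @Cacheable methods that name a non-existing cache then simply run uncached.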
Anyway, this may not be exactly what you want, and I would even caution you to take special care if you push this to production, since (I'd argue) it really ought to fail fast for missing Caches, but it does achieve what you asked for.
Clearly, it is configurable, and you could make it conditional based on the cache name or other criteria: if you really don't care about, or don't want, caching on certain service methods in certain contexts, that is up to you, and this approach is flexible enough to give you that choice if needed.
Hope this gives you some ideas.

Is Spring Cache clusterable?

I've got an application that uses Spring Cache (Ehcache), but now we need to add a second node (same application). Can the cache be shared between the nodes, or can each node have its own instance that is kept in sync?
Or do I need to look at a different solution?
Thanks.
That depends on your cache implementation, not on Spring, which only provides an abstract caching API. You are using Ehcache as your caching implementation, which is open source and comes with a Terracotta server for basic clustering support. See http://www.ehcache.org/documentation/3.1/clustered-cache.html#clustering-concepts for more details.
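For example, with Ehcache 3 the clustered tier is configured against a Terracotta server roughly like this (a sketch only, matching the 3.1 documentation linked above and assuming the Ehcache clustered module is on the classpath; the server URI, cache name, resource name and sizes are placeholders):

    import java.net.URI;
    import org.ehcache.CacheManager;
    import org.ehcache.clustered.client.config.builders.ClusteredResourcePoolBuilder;
    import org.ehcache.clustered.client.config.builders.ClusteringServiceConfigurationBuilder;
    import org.ehcache.config.builders.CacheConfigurationBuilder;
    import org.ehcache.config.builders.CacheManagerBuilder;
    import org.ehcache.config.builders.ResourcePoolsBuilder;
    import org.ehcache.config.units.EntryUnit;
    import org.ehcache.config.units.MemoryUnit;

    public class ClusteredCacheConfig {

        public static CacheManager clusteredCacheManager() {
            // Both application nodes point at the same Terracotta server,
            // so they share the clustered tier of the "shared-cache" cache.
            return CacheManagerBuilder.newCacheManagerBuilder()
                    .with(ClusteringServiceConfigurationBuilder
                            .cluster(URI.create("terracotta://localhost/my-application"))
                            .autoCreate())
                    .withCache("shared-cache",
                            CacheConfigurationBuilder.newCacheConfigurationBuilder(
                                    Long.class, String.class,
                                    ResourcePoolsBuilder.newResourcePoolsBuilder()
                                            .heap(100, EntryUnit.ENTRIES) // local hot set per node
                                            .with(ClusteredResourcePoolBuilder
                                                    .clusteredDedicated("primary-server-resource", 8, MemoryUnit.MB))))
                    .build(true);
        }
    }

On the Spring side you would typically drive the same configuration through JSR-107 (JCache) so that Spring's JCacheCacheManager can pick it up, but the clustering concept stays the same.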

Spring Cache Abstraction for Write-Behind Caching Strategy

I am new to the Spring cache abstraction. I have explored it using the Ehcache and Apache Ignite caching providers.
I want to know whether the Spring cache abstraction supports the write-behind and write-through caching strategies.
Thanks,
bs
There is no direct support for cache-through in the declarative Spring abstraction.
And in a way that makes sense, since the abstraction lets you surround methods with caching-related annotations. But with a cache-through pattern, the whole method would be nothing but a cache interaction: a get for a read, or a put for a write, not the if-then-else that the annotation abstracts away.
However, if you use the CacheManager and Cache interfaces provided by Spring directly in your code, you can certainly use them in a cache-through way.
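For example, a service could use the cache directly in a read-through/write-through fashion (a sketch; the cache name and the in-memory map standing in for a real backing store are made up for illustration):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import org.springframework.cache.Cache;
    import org.springframework.cache.CacheManager;

    public class CustomerService {

        private final Cache cache;
        // Stands in for the real backing store (database, remote service, ...).
        private final Map<Long, String> database = new ConcurrentHashMap<>();

        public CustomerService(CacheManager cacheManager) {
            this.cache = cacheManager.getCache("customers");
        }

        // Read-through: the whole method body is a single cache get with a value loader
        // (Cache.get(key, Callable) requires Spring Framework 4.3 or later).
        public String findById(Long id) {
            return cache.get(id, () -> database.get(id));
        }

        // Write-through: persist first, then update the cache in the same call.
        public void save(Long id, String customer) {
            database.put(id, customer);
            cache.put(id, customer);
        }
    }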
Ignite has the notion of a CacheStore interface, used when the cache needs to be wired to a persistent store (RDBMS, MongoDB, Hadoop, etc.). This interface provides write-through/write-behind and read-through semantics. Please refer to this documentation for more details.
I would also recommend taking a look at the various examples that demonstrate how particular CacheStore implementations are used in Ignite. The examples are available in the Ignite release bundles.
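A rough sketch of what that looks like in Ignite (the store below is a stub; a real implementation would talk to the RDBMS, MongoDB, etc., and the flush frequency is just an illustrative value):

    import javax.cache.Cache;
    import javax.cache.configuration.FactoryBuilder;
    import org.apache.ignite.cache.store.CacheStoreAdapter;
    import org.apache.ignite.configuration.CacheConfiguration;

    public class IgniteCacheStoreConfig {

        // Stands in for a store backed by a persistent system.
        public static class PersonStore extends CacheStoreAdapter<Long, String> {
            @Override public String load(Long key) {
                return null; // e.g. SELECT ... WHERE id = ?
            }
            @Override public void write(Cache.Entry<? extends Long, ? extends String> entry) {
                // e.g. INSERT or UPDATE the underlying store
            }
            @Override public void delete(Object key) {
                // e.g. DELETE FROM ... WHERE id = ?
            }
        }

        public static CacheConfiguration<Long, String> personCacheConfiguration() {
            CacheConfiguration<Long, String> cfg = new CacheConfiguration<>("persons");
            cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(PersonStore.class));
            cfg.setReadThrough(true);
            cfg.setWriteThrough(true);
            // Turn synchronous write-through into asynchronous, batched write-behind.
            cfg.setWriteBehindEnabled(true);
            cfg.setWriteBehindFlushFrequency(5_000); // flush every 5 seconds
            return cfg;
        }
    }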

What are the different parameters for comparing the various caching frameworks?

I am currently aware of the following caching frameworks:
EHCache, MemCache, Redis, OSCache, DynaCache, JBoss Cache, JCS, Cache4J.
Apart from the time taken to access data from the cache, what are the different parameters/attributes for comparing these frameworks? And which framework should one use, and when?
A few things to consider at a broad level:
- The technology you are using
- The API available for the chosen framework
- Each framework has unique features, so depending on your application requirements you can pick the one that fits.
Descriptions of a few, picked from the source mentioned below:
Ehcache:
Ehcache is a Java distributed cache for general-purpose caching, J2EE, and lightweight containers, tuned for large cache sizes. It features memory and disk stores, replication by copy and invalidate, listeners, and a gzip caching servlet filter, and it is fast and simple.
Java Caching System (JCS):
JCS is a distributed caching system written in Java for server-side Java applications. It is intended to speed up dynamic web applications by providing a means to manage cached data of various dynamic natures. Like any caching system, JCS is most useful for high-read, low-put applications.
OSCache:
OSCache is a caching solution that includes a JSP tag library and a set of classes to perform fine-grained dynamic caching of JSP content, servlet responses, or arbitrary objects. It provides both in-memory and persistent on-disk caches, and can allow your site to continue functioning even if the data source is down (for example, if your database goes down, you can serve the cached content so people can still browse the site).
Cache4J:
Cache4j is a cache for Java objects that stores objects in memory only (mainly suitable for Russian speakers, as there is no documentation in English and the JavaDoc is also in Russian).
Redis:
Redis can be used for caching sessions and storing simple data structures for fast retrieval, and when needed it can be used for persistence as well. It is mainly useful for caching POJOs.
Here is an interesting article for further insights:
http://javalandscape.blogspot.in/2009/03/intro-to-cachingcaching-algorithms-and.html

Advantage of using Ehcache over a static HashMap

I have always used a Java singleton class for my basic caching needs.
Now the project is using Ehcache, and without looking deeply into the source code I am not able to figure out what was wrong with the singleton pattern.
In other words, what are the benefits of using the Ehcache framework, apart from the fact that caching can be done via XML configuration and annotations without writing the boilerplate code (i.e. a static HashMap)?
It depends on what you need from your caching mechanism. Ehcache provides a lot of cool features which would require a lot of well-designed code to implement manually:
LRU, LFU and FIFO cache eviction policies
Flexible configuration
Persistence
Replication
many more ...
I would recommend you go through them at http://ehcache.org/about/features and decide whether you really need any of them in your project.
The most important one:
The ability to overflow to disk: this is something you don't have in a normal HashMap, and writing something like that is far from trivial. Ehcache can function as a simple-to-configure key-value database.
Even if you don't use overflow to disk, there is a lot of boilerplate to write in your own implementation. If loading the whole database were possible, then an in-memory store with persistence on write and restoring on startup would be the solution. But memory is limited, so you have to remove elements from memory. Which ones, and based on what? You must also ensure cache elements are not too old; outdated elements should be replaced first when you evict. But should you do that when a user requests something? It would slow down the request. Or should you start your own background thread?
With Ehcache you have a library in which all of those issues are addressed and tested.
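For comparison, here is roughly what a bounded cache with disk overflow and time-to-live expiry looks like with the Ehcache 3 builder API (a sketch; the cache name, sizes, directory, and TTL are placeholders, and ExpiryPolicyBuilder assumes Ehcache 3.5+):

    import java.time.Duration;
    import org.ehcache.Cache;
    import org.ehcache.CacheManager;
    import org.ehcache.config.builders.CacheConfigurationBuilder;
    import org.ehcache.config.builders.CacheManagerBuilder;
    import org.ehcache.config.builders.ExpiryPolicyBuilder;
    import org.ehcache.config.builders.ResourcePoolsBuilder;
    import org.ehcache.config.units.EntryUnit;
    import org.ehcache.config.units.MemoryUnit;

    public class EhcacheOverHashMap {

        public static void main(String[] args) {
            ResourcePoolsBuilder pools = ResourcePoolsBuilder.newResourcePoolsBuilder()
                    .heap(1_000, EntryUnit.ENTRIES)  // bounded heap with eviction
                    .disk(50, MemoryUnit.MB, true);  // overflow to disk, persistent across restarts

            CacheManager cacheManager = CacheManagerBuilder.newCacheManagerBuilder()
                    // Root directory for the disk tier.
                    .with(CacheManagerBuilder.persistence("cache-data"))
                    .withCache("users",
                            CacheConfigurationBuilder
                                    .newCacheConfigurationBuilder(Long.class, String.class, pools)
                                    // Outdated entries expire automatically.
                                    .withExpiry(ExpiryPolicyBuilder.timeToLiveExpiration(Duration.ofMinutes(30))))
                    .build(true);

            Cache<Long, String> users = cacheManager.getCache("users", Long.class, String.class);
            users.put(1L, "alice");
            System.out.println(users.get(1L)); // served from heap or disk until it expires

            cacheManager.close();
        }
    }

All of the eviction, expiry, and overflow decisions discussed above are handled by the library through this configuration rather than by hand-written code around a HashMap.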
There is also a clustered, closed-source version of Ehcache which allows you to have a distributed cache. That might be another reason to consider using Ehcache.
