Spring Ehcache integration with self-populating-cache-scope

I have to integrate Spring and Ehcache, and I'm trying to implement the blocking cache pattern with
<ehcache:annotation-driven/>
There is an option, self-populating-cache-scope, which can be shared (the default) or method. Could you please explain the difference?
There is also the @Cacheable annotation with a selfPopulating flag.
As per a post I read on the ehcache-spring-annotations group,
http://groups.google.com/group/ehcache-spring-annotations/browse_thread/thread/7dbc71ce34f6ee19/b057610167dfb815?lnk=raot
when shared is used, only one instance is created and reused every time the same cache name is referenced. So if I set the selfPopulating flag to true for one method, all threads trying to access other methods annotated with @Cacheable with selfPopulating set to true will block, which I don't want.
When self-populating-cache-scope = method, on the other hand, a separate instance is created for each method annotated with @Cacheable with selfPopulating set to true, so that problem doesn't arise.
But in this case, when I try to remove an element using @TriggersRemove and give the cache name used in @Cacheable, will it search each of those separate instances to find the value? Isn't that an overhead?

Answered by Eric on the ehcache-spring-annotations Google group linked above:
In all cases there is one underlying Ehcache instance. When you set selfPopulating=true, a SelfPopulatingCache wrapper is created.
If cache-scope=shared, then all annotations using that named cache will use the same SelfPopulatingCache wrapper. If cache-scope=method, then one wrapper is created per method.
Note that in both cases the SelfPopulatingCache is a wrapper; there is still only one actual cache backing the wrapper(s).
As for blocking: if you read the docs for SelfPopulatingCache and BlockingCache, you'll notice that Ehcache strikes a compromise between cache-level locking and per-key locking via key striping.
http://ehcache.org/apidocs/net/sf/ehcache/constructs/blocking/BlockingCache.html
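For illustration, a minimal sketch of the situation described above, assuming the ehcache-spring-annotations attributes cacheName, selfPopulating and removeAll; the WeatherService class and its slowLookup stand-in are hypothetical:
import com.googlecode.ehcache.annotations.Cacheable;
import com.googlecode.ehcache.annotations.TriggersRemove;

public class WeatherService {

    // Both methods use the same named cache. With self-populating-cache-scope="shared"
    // they share one SelfPopulatingCache wrapper (and its striped locks); with "method"
    // each method gets its own wrapper, but both wrappers sit in front of the same
    // underlying Ehcache instance.
    @Cacheable(cacheName = "weatherCache", selfPopulating = true)
    public String getWeather(String city) {
        return slowLookup("weather:" + city);
    }

    @Cacheable(cacheName = "weatherCache", selfPopulating = true)
    public String getForecast(String city) {
        return slowLookup("forecast:" + city);
    }

    // Removal targets the named cache, i.e. the single backing Ehcache,
    // no matter how many SelfPopulatingCache wrappers were created.
    @TriggersRemove(cacheName = "weatherCache", removeAll = true)
    public void refresh() {
    }

    private String slowLookup(String key) {
        return "value-for-" + key; // placeholder for the expensive data-source call
    }
}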

Related

How to set TTL based on the response object in Spring @Cacheable

I have the following use case for caching. I am using the @Cacheable annotation and it looks something like this:
@Cacheable(cacheNames = "sampleCache", key = "#customerId", cacheResolver = "sampleCacheResolver")
public @ResponseBody SampleResponse getSampleMethod(String customerId) {
}
The TTL is set in sampleCacheResolver. However, I have a requirement to change the TTL based on the response object. For example, if SampleResponse.month is the current month, I want to set the TTL to 1 minute; otherwise I want to leave the default value of 3 minutes.
Since sampleCacheResolver is called at request time, I am not sure how I can update the TTL based on the response.
This is not possible with Spring's Cache Abstraction out of the box.
Spring is very clear that neither TTL (Time To Live) nor TTI (Idle Timeout) expiration policies (or even Eviction policies for that matter) are handled by the abstraction, and for good reason.
Expiration and Eviction implementations are caching provider specific and many have different possible actions when entries expire or are evicted.
It's rather peculiar to apply TTL to the response, particularly given that TTL is a set timeframe.
At any rate, you have 2 options.
First, some caching providers and their supporting Spring bits offer capabilities close to what you are seeking.
For instance, I work on Spring Boot, Session and Data for Apache Geode, which can serve as a caching provider in Spring's Cache Abstraction.
One of the features I support is entry-level (that is, entity) expiration policies via an annotation-based configuration model. See here. Of course, this is made possible by Apache Geode itself, given its ability to express custom expiration policies. That is, the implementation of entry-level TTL using annotations is based on Apache Geode's CustomExpiry interface.
Another way to handle entry/entity or response TTL is to override/extend Spring's Cache Abstraction. The two primary interfaces in the abstraction are CacheManager and Cache.
In your case, you would implement the Cache interface to include the TTL logic you need. Your Cache (TTL) implementation would wrap the actual caching provider implementation (e.g. the EhCache Cache). Then you must implement the CacheManager interface to wrap the caching provider's CacheManager implementation (again, the EhCache CacheManager). Your CacheManager implementation exists solely to create instances of your TTL-enabled Cache implementation wrapping the provider's Cache implementation, when called upon by the framework. Essentially, your TTL-enabled Cache implementation is enhancing, or decorating, the caching provider's Cache implementation with the TTL logic you require.
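A minimal sketch of that decorator approach, assuming Ehcache 2.x as the underlying provider (per-entry TTL via net.sf.ehcache.Element.setTimeToLive) and a getMonth() accessor on the question's SampleResponse; a matching CacheManager decorator would simply wrap every Cache it returns in this class:
import java.time.LocalDate;
import java.util.concurrent.Callable;

import org.springframework.cache.Cache;

// Minimal stand-in for the question's SampleResponse type.
interface SampleResponse {
    int getMonth();
}

// Decorates the provider's Cache and applies a 1-minute TTL when the cached
// value is current-month data, otherwise falls through to the default TTL.
public class ResponseAwareTtlCache implements Cache {

    private final Cache delegate;                     // the provider-backed Spring cache
    private final net.sf.ehcache.Ehcache nativeCache; // Ehcache 2.x native cache

    public ResponseAwareTtlCache(Cache delegate) {
        this.delegate = delegate;
        this.nativeCache = (net.sf.ehcache.Ehcache) delegate.getNativeCache();
    }

    @Override
    public void put(Object key, Object value) {
        if (value instanceof SampleResponse
                && ((SampleResponse) value).getMonth() == LocalDate.now().getMonthValue()) {
            net.sf.ehcache.Element element = new net.sf.ehcache.Element(key, value);
            element.setTimeToLive(60); // current-month data: 1 minute instead of the default 3
            nativeCache.put(element);
        } else {
            delegate.put(key, value);  // default TTL configured on the cache
        }
    }

    // Everything else simply delegates to the underlying cache.
    @Override public String getName() { return delegate.getName(); }
    @Override public Object getNativeCache() { return delegate.getNativeCache(); }
    @Override public ValueWrapper get(Object key) { return delegate.get(key); }
    @Override public <T> T get(Object key, Class<T> type) { return delegate.get(key, type); }
    @Override public <T> T get(Object key, Callable<T> valueLoader) { return delegate.get(key, valueLoader); }
    @Override public ValueWrapper putIfAbsent(Object key, Object value) { return delegate.putIfAbsent(key, value); }
    @Override public void evict(Object key) { delegate.evict(key); }
    @Override public void clear() { delegate.clear(); }
}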
Yet another possible solution would be to use Spring AOP. Indeed, many of Spring's features, like caching and transaction management (demarcation), are handled by Spring AOP. A carefully crafted AOP aspect could wrap the caching aspect and contain the TTL logic you require. You must be sure that the TTL aspect comes after Spring's caching aspect, which can be handled by bean ordering.
I did not provide an example for your exact use case this time. However, I have answered questions that required a "custom" implementation of Spring's Cache and CacheManager interfaces before; you can refer to my example to see how you might implement your own Cache/CacheManager combination.
Before going down the paths I presented, I encourage you to explore your caching provider to see if it affords you the TTL functionality you are after. You should choose an appropriate caching provider based on your needs and requirements.
I know this is a lot of information, but I simply wanted you to have options and to think about the problem from multiple angles.

CacheLoader is not getting called while trying to find an entity using GemfireRepository

CacheLoader is not getting called while trying to find an entity using GemfireRepository.
As a solution, I am using Region<K,V> for the lookup, which does call the CacheLoader. So I wanted to know whether there is any restriction in the Spring Data Repository abstraction that prevents the CacheLoader from being called when an entry is not present in the cache.
Also, is there any other alternative? I have another scenario where my cache key is a combination of id1 and id2 and I want to get all entries based on id1; if no entry is present in the cache, it should call the CacheLoader to load all entries from the Cassandra store.
There are no limitations nor restrictions in SDG when using the SD Repository abstraction (and SDG's Repository extension) that would prevent a CacheLoader from being invoked so long as the CacheLoader was properly registered on the target Region. Once control is handed over to GemFire/Geode to complete the data access operation (CRUD), it is out of SDG's hands.
However, you should know that GemFire/Geode only invokes CacheLoaders on gets (i.e. Region.get(key) operations), never on (OQL) queries. OQL queries are invoked from derived query methods or custom, user-defined query methods using @Query-annotated methods declared in the application Repository interface.
NOTE: See Apache Geode CacheLoader Javadoc and User Guide for more details.
For a simple CrudRepository.findById(key) call, the call stack follows from...
SimpleGemfireRepository.findById(key)
GemfireTemplate.get(key)
And then, Region.get(key) (from here).
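To make the distinction concrete, here is a minimal sketch of a loader that fires on Region.get(key) misses; the class name and the Cassandra lookup stand-in are hypothetical:
import org.apache.geode.cache.CacheLoader;
import org.apache.geode.cache.CacheLoaderException;
import org.apache.geode.cache.LoaderHelper;

// Invoked by GemFire/Geode when Region.get(key) misses; never invoked for OQL queries.
public class CustomersByIdCacheLoader implements CacheLoader<String, String> {

    @Override
    public String load(LoaderHelper<String, String> helper) throws CacheLoaderException {
        String missingKey = helper.getKey();
        // Placeholder for the real Cassandra read keyed by the missing entry's key.
        return "customer-loaded-from-cassandra-for-" + missingKey;
    }

    @Override
    public void close() {
        // no resources to release
    }
}
Registered on the target Region, this is reached by CrudRepository.findById(key), which bottoms out in Region.get(key), but never by derived or @Query-annotated query methods, which execute as OQL.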
By way of example, and to illustrate this behavior, I added the o.s.d.g.repository.sample.RepositoryDataAccessOnRegionUsingCacheLoaderIntegrationTests to the SDG test suite as part of DATAGEODE-308. You can provide additional feedback in this JIRA ticket, if necessary.
Cheers!

Using #Cacheable Spring annotation and manually add to Infinispan Cache

I am trying to load my cache from a cold start, prior to application startup. This would be done so values are available as soon as a user accesses the server, rather than having to hit my database.
The @Cacheable functionality from Spring works great; the problem is how I manually store objects in the cache so that they can be read when the annotated method is executed.
Spring is storing these objects as bytes somehow, and I need to mimic this while I manually load the cache. I'm just trying to figure out how it processes the methods' return objects to store them in the cache as key/value pairs.
You can programmatically access any cache by using Spring's CacheManager.
See https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/cache/CacheManager.html
var cache = cacheManager.getCache("foo");
cache.put(key, value);
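As a slightly fuller sketch, a warm-up component along these lines could pre-populate the cache before traffic arrives; the cache name "customers", the keys and the loadAllFromDatabase() stub are placeholders, and the keys must match whatever key the @Cacheable method would generate:
import java.util.Map;
import java.util.Objects;

import javax.annotation.PostConstruct;

import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;
import org.springframework.stereotype.Component;

// Pre-populates the cache that a @Cacheable("customers") method reads from.
@Component
public class CacheWarmer {

    private final CacheManager cacheManager;

    public CacheWarmer(CacheManager cacheManager) {
        this.cacheManager = cacheManager;
    }

    @PostConstruct
    public void warmUp() {
        Cache cache = Objects.requireNonNull(cacheManager.getCache("customers"),
                "cache 'customers' is not configured");
        loadAllFromDatabase().forEach(cache::put);
    }

    private Map<String, Object> loadAllFromDatabase() {
        // placeholder for the real database read done at startup
        return Map.of("42", "cached customer 42");
    }
}
Whether the provider keeps those values as objects or serialized bytes is an Infinispan storage/marshalling concern; from the @Cacheable side it only matters that the key and value match what the annotated method would produce.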
I was able to solve this problem by storing values with a String key and an object value, which works wonderfully with Spring's @Cacheable annotations. Objects are cast to the return types by Spring if they are found within the cache.

Method caching with Spring Boot and Hazelcast. How and where do I specify my refresh/reload intervals?

I realise the @Cacheable annotation helps me cache the result of a particular method call, and subsequent calls are returned from the cache if there are no changes to the arguments, etc.
I have a requirement where I'm trying to minimise the number of calls to the database, and hence I am loading the entire table. However, I would like to reload this data, say, every day, just to ensure that my cache is not out of sync with the underlying data in the database.
How can I specify such reload/refresh intervals?
I'm using Spring Boot and Hazelcast. All the examples I have seen talk about specifying LRU, LFU, etc. policies in the config file for maps, but nothing at a method level.
I can't go with the LRU/LFU eviction policies, as I intend to reload the entire table data every x hours or x days.
Kindly help or point me to any such implementation or docs.
Spring's @Cacheable doesn't support this kind of policy at the method level. See, for example, the code for CacheableOperation.
If you are using Hazelcast as your cache provider for Spring, you can explicitly evict elements or load data by using the corresponding IMap from your HazelcastInstance.
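A minimal sketch of that approach using a scheduled full eviction; the cache name "tableData" and the cron expression are placeholders, and scheduling must be enabled with @EnableScheduling:
import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

// Clears the cache once a day so the next @Cacheable call reloads the whole
// table from the database. With Hazelcast you could instead call
// hazelcastInstance.getMap("tableData").evictAll(), or configure a
// time-to-live on the map itself.
@Component
public class TableDataCacheRefresher {

    private final CacheManager cacheManager;

    public TableDataCacheRefresher(CacheManager cacheManager) {
        this.cacheManager = cacheManager;
    }

    @Scheduled(cron = "0 0 3 * * *") // every day at 03:00
    public void refresh() {
        Cache cache = cacheManager.getCache("tableData");
        if (cache != null) {
            cache.clear();
        }
    }
}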

JPA2 Entities Caching

As it stands, I am using a JSF request-scoped bean to do all my CRUD operations. As you most likely know, Tomcat doesn't provide container-managed persistence, so in my CRUD request bean I am using an EntityManagerFactory to get hold of an entity manager. The validity of my choice to use a request-scoped bean for this task is probably open for discussion (again), but I've been trying to put it in the context of what I've read in the articles you linked to, specifically the first and second one. From what I gather, EclipseLink uses a Level 2 cache by default, which stores cached entities. The EclipseLink Examples - JPA Caching page says that:
The shared cache exists for the duration of the persistence unit (EntityManagerFactory, or server)
Now, doesn't that mean my cached entities live only for the fraction of time during the call to the CRUD request bean, because the moment the bean is destroyed, and with it the EntityManagerFactory, so is the cache? Also, the last part of the quoted sentence, "EntityManagerFactory, or server", confuses me: what precisely is meant by "or server" in this context, and how does one control it? If I use the @Cache annotation and set an appropriate expiry attribute, will that do the job and keep the entities stored in the server's L2 cache, regardless of whether my EntityManagerFactory has been destroyed?
I understand there are a lot of considerations and each application has specific requirements. From my point of view, configuring the L2 cache is probably the most desirable (if not the only, on Tomcat) option to get things optimized. Quoting from your first link:
The advantages of L2 caching are:
avoids database access for already loaded entities
faster for reading frequently accessed unmodified entities
The disadvantages of L2 caching are:
memory consumption for large amount of objects
stale data for updated objects
concurrency for write (optimistic lock exception, or pessimistic lock)
bad scalability for frequent or concurrently updated entities
You should configure L2 caching for entities that are:
read often
modified infrequently
not critical if stale
Almost all of the above points apply to my app. At the heart of it, among other things, is constant and relentless reading of entities and displaying them on the website (the app will serve as a portal for listing properties). There's also a small shopping cart being built into the application, but the products sold are not tangible stock items but services. In this case stale entities are no problem, and neither, I think, is concurrency, as the products (here, services) will never be written to. So the entities will be read often and modified infrequently (and those modified are not part of the cart anyway, and even those are modified rarely), and therefore it is not critical if they are stale. Finally, the first two points seem to be exactly what I need, namely avoidance of database access for already loaded entities and fast reading of frequently accessed unmodified entities. But there is one point among the disadvantages that still concerns me a bit: memory consumption for a large number of objects. Isn't that similar to my original problem?
My current understanding is that there are two options, only one of which applies to my situation:
To be able to delegate the job of longer-term caching to the persistence layer, I need to have access to the PersistenceContext, create a session-scoped bean and set PersistenceContextType.EXTENDED. (This option doesn't apply to me; I have no access to the PersistenceContext.)
Configure the L2 @Cache annotation on entities (see the sketch below), or, as in option 1 above, create a session-scoped bean that will handle long-term caching. But aren't these just going back to my original problem?
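Option 2 might look something like this minimal sketch; the Property entity and the 10-minute expiry are just placeholders:
import javax.persistence.Entity;
import javax.persistence.Id;

import org.eclipse.persistence.annotations.Cache;

// Keeps Property entities in EclipseLink's shared (L2) cache and invalidates
// cached instances 10 minutes after they were cached.
@Entity
@Cache(expiry = 600000) // milliseconds
public class Property {

    @Id
    private Long id;

    private String name;
}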
I'd really like to hear your opinion and see what you think could be a reasonable way to approach this, or perhaps how you have approached it in your previous projects. Oh, and one more thing, just to confirm: when annotating an entity with @Cache, will all linked entities be cached along with it, so I don't have to annotate all of them?
Again, all comments and pointers are much appreciated.
Thanks for your answer. When you say
"In Tomcat you would be best to have some static manager that holds onto the EntityManagerFactory for the duration of the server."
Does it mean I could, for example, declare and initialize a static EntityManagerFactory field in an application-scoped bean to be later used by all the beans throughout the life of the application?
EclipseLink uses a shared cache by default. This is shared for all EntityManagers accessed from an EntityManagerFactory. You do not need to do anything to enable caching.
In general, you do not want to be creating a new EntityManagerFactory per request, only a new EntityManager. Creating a new EntityManagerFactory is quite expensive, so not a good idea, even ignoring caching (it has its own connection pool, must initialize the meta-data, etc.).
In Tomcat you would be best to have some static manager that holds onto the EntityManagerFactory for the duration of the server. Either never close it, or close it when a Servlet is destroyed.
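A minimal sketch of such a static holder, assuming a persistence unit named "myPU"; close() would be called from a ServletContextListener's contextDestroyed(..) if you want a clean shutdown:
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

// Holds one EntityManagerFactory for the lifetime of the server, so the
// EclipseLink shared (L2) cache survives across requests.
public final class PersistenceManager {

    private static final EntityManagerFactory EMF =
            Persistence.createEntityManagerFactory("myPU");

    private PersistenceManager() {
    }

    // Request-scoped code creates (and closes) a short-lived EntityManager per call.
    public static EntityManager createEntityManager() {
        return EMF.createEntityManager();
    }

    public static void close() {
        EMF.close();
    }
}
Each request-scoped bean then obtains its EntityManager from PersistenceManager.createEntityManager() and closes it when done; the factory, and with it the shared cache, stays alive for the duration of the server.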
