I have recently started working on caching the result of a method. I am using @Cacheable and @CachePut to implement the desired functionality.
But somehow, the save operation is not updating the cache for the findAll method. Below is the code snippet:
@RestController
@RequestMapping(path = "/test/v1")
@CacheConfig(cacheNames = "persons")
public class CacheDemoController {

    @Autowired
    private PersonRepository personRepository;

    @Cacheable
    @RequestMapping(method = RequestMethod.GET, path = "/persons/{id}")
    public Person getPerson(@PathVariable(name = "id") long id) {
        return this.personRepository.findById(id);
    }

    @Cacheable
    @RequestMapping(method = RequestMethod.GET, path = "/persons")
    public List<Person> findAll() {
        return this.personRepository.findAll();
    }

    @CachePut
    @RequestMapping(method = RequestMethod.POST, path = "/save")
    public Person savePerson(@RequestBody Person person) {
        return this.personRepository.save(person);
    }
}
For the very first call to the findAll method, it stores the result in the "persons" cache, and for all subsequent calls it returns the same result, even if a save() operation has been performed in between.
I am pretty new to caching so any advice on this would be of great help.
Thanks!
So, a few things come to mind regarding your use case (UC) after looking at your code above.
First, I am not a fan of enabling caching in either the UI or the Data tier of the application, though it makes more sense in the Data tier (e.g. DAOs or Repositories). Caching, like Transaction Management, Security, etc., is a service-level concern and therefore belongs in the Service tier, IMO, where your application consists of: [Web|Mobile|CLI]+ UI -> Service -> DAO (a.k.a. Repo). The advantage of enabling caching in the Service tier is that it is more reusable across your application/system architecture. Think of servicing Mobile app clients in addition to Web, for instance; the Controllers for your Web tier may not necessarily be the same as those handling Mobile app clients.
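For illustration only (the PersonService name is made up, and this is a sketch rather than a drop-in fix), pushing the caching concern down into the Service tier might look like this, with the Controller simply delegating to the Service bean:

@Service
@CacheConfig(cacheNames = "persons")
public class PersonService {

    @Autowired
    private PersonRepository personRepository;

    // Caching now lives on the Service method, reusable by any UI-tier client.
    @Cacheable
    public Person findById(long id) {
        return this.personRepository.findById(id);
    }
}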
I encourage you to read the chapter in the core Spring Framework's Reference Documentation on Spring's Cache Abstraction. FYI, Spring's Cache Abstraction, like TX management, is deeply rooted in Spring's AOP support. However, for your purposes here, let's break your Spring Web MVC Controller (i.e. CacheDemoController) down a bit as to what is happening.
So, you have a findAll() method that you are caching the results for.
WARNING: Also, I don't generally recommend that you cache the results of a Repository.findAll() call, especially in production! While this might work just fine locally given a limited data set, the CrudRepository.findAll() method by default returns all results in the backing data structure of the data store (e.g. the Person table in an RDBMS) for that particular object/data type (e.g. Person), unless you are employing paging or some LIMIT on the result set returned. When it comes to caching, always think: a high degree of reuse of data that changes relatively infrequently; such data sets are good candidates for caching.
Given your Controller's findAll() method has NO method parameters, Spring is going to determine a "default" key to use to cache the findAll() method's return value (i.e. the List<Person>).
TIP: see Spring's docs on "Default Key Generation" for more details.
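To make the default behavior concrete, here is a small illustrative snippet (not part of the original answer) showing what Spring's default SimpleKeyGenerator produces for methods like those above. In the tables below, "abc123" simply stands in for whatever default key Spring generates for the no-argument findAll() method.

import org.springframework.cache.interceptor.SimpleKeyGenerator;

public class DefaultKeyDemo {

    public static void main(String[] args) {
        // No method arguments (like findAll()) -> SimpleKey.EMPTY, printed as "SimpleKey []"
        System.out.println(SimpleKeyGenerator.generateKey());
        // A single argument (like getPerson(1L)) -> the argument itself becomes the key
        System.out.println(SimpleKeyGenerator.generateKey(1L));
        // Multiple arguments -> a composite SimpleKey wrapping all of them
        System.out.println(SimpleKeyGenerator.generateKey(1L, "extra"));
    }
}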
NOTE: In Spring, as with caching in general, Key/Value stores (like java.util.Map) are the primary implementations of Spring's notion of a Cache. However, not all "caching providers" are equal (e.g. Redis vs. a java.util.concurrent.ConcurrentHashMap, for instance).
After calling the findAll() Controller method, your cache will have...
KEY | VALUE
------------------------
abc123 | List of People
NOTE: the cache will not store each Person in the list individually as a separate cache entry. That is not how method-level caching works in Spring's Cache Abstraction, at least not by default. However, it is possible.
Then, suppose your Controller's cacheable getPerson(id:long) method is called next. Well, this method includes a parameter, the Person's ID. The argument to this parameter will be used as the key in Spring's Cache Abstraction when the Controller's getPerson(..) method is called and Spring attempts to find the (possibly existing) value in the cache. For example, say the method is called with controller.getPerson(1). A cache entry with key 1 does not exist in the cache, even though that Person (1) is in the list mapped to key abc123. Spring is not going to find Person 1 inside that list and return it, so this operation results in a cache miss. When the method returns, the value (the Person with ID 1) will be cached, and the cache now looks like this...
KEY | VALUE
------------------------
abc123 | List of People
1 | Person(1)
Finally, a user invokes the Controller's savePerson(:Person) method. Again, the savePerson(:Person) Controller method's parameter value is used as the key (i.e. a "Person" object). Let's say the method is called like so: controller.savePerson(person(1)). The @CachePut happens when the method returns, but the existing cache entry for Person 1 is not updated since the "key" is different; instead, a new cache entry is created, and your cache again looks like this...
KEY | VALUE
---------------------------
abc123 | List of People
1 | Person(1)
Person(1) | Person(1)
None of which is probably what you wanted or intended to happen.
So, how do you fix this? Well, as I mentioned in the WARNING above, you probably should not be caching an entire collection of values returned from an operation. And even if you do, you need to extend Spring's out-of-the-box (OOTB) caching infrastructure to handle Collection return types, breaking the elements of the Collection up into individual cache entries based on some key. This is considerably more involved.
You can, however, add better coordination between the getPerson(id:long) and savePerson(:Person) Controller methods. Basically, you need to be a bit more specific about the key for the savePerson(:Person) method. Fortunately, Spring allows you to "specify" the key, either by providing a custom KeyGenerator implementation or simply by using SpEL. Again, see the docs for more details.
So your example could be modified like so...
#CachePut(key = "#result.id"
#RequestMapping(method = RequestMethod.POST, path="/save")
public Person savePerson(#RequestBody Person person) {
return this.personRepository.save(person);
}
Notice the @CachePut annotation with the key attribute containing a SpEL expression. In this case, I indicated that the cache "key" for this Controller savePerson(:Person) method should be the return value's (i.e. the "#result") Person object's ID, thereby matching the Controller getPerson(id:long) method's key, which will then update the single cache entry for the Person keyed on the Person's ID...
KEY | VALUE
---------------------------
abc123 | List of People
1 | Person(1)
Still, this won't handle the findAll() method, but it works for getPerson(id) and savePerson(:Person). Again, see my answers to the posting(s) on Collection values as return types in Spring's caching infrastructure and how to handle them properly. But, be careful! Caching an entire Collection of values as individual cache entries could wreak havoc on your application's memory footprint, resulting in an OutOfMemoryError. You definitely need to "tune" the underlying caching provider in this case (eviction, expiration, compression, etc.) before putting a large number of entries in the cache, particularly at the UI tier where literally thousands of requests may be happening simultaneously; then "concurrency" becomes a factor, too. See Spring's docs on its sync capabilities.
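As one hedged illustration of that sync point (a sketch, not part of the original code), Spring can synchronize the computation of a missing entry so that concurrent requests for the same cold key only hit the Repository once:

// Sketch only: with sync = true, concurrent requests for a cold key compute the
// value once while the other threads wait, instead of all hitting the Repository.
@Cacheable(cacheNames = "persons", sync = true)
public List<Person> findAll() {
    return this.personRepository.findAll();
}

The earlier warning about caching findAll() results still applies, of course.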
Anyway, I hope this aids your understanding of caching, with Spring in particular, as well as caching in general.
Cheers,
-John
Related
I am using a Redis cache for API response caching. I want to store data in the cache only if the HTTP response status code is 200; if the HTTP response status code is 500, then I do not want to cache the data. Also, my response contains certain data (the response is always JSON) which I want to read before deciding whether to cache. Can I do this? How do I achieve it?
I am trying something like this:
#Cacheable(value = "EmployeeInfo", key = "#emp_name")
public EmployeeInformation getEmplInfo(HttpServletRequest request,
HttpServletResponse response, String emp_name) {
return serviceImpl.getEmployeeInfor(emp_name);
}
What you want to do is generally possible and supported by the Spring Cache Abstraction's Conditional Caching feature.
Based on your short code snippet above, it would look similar to the following:
#Cacheable(value = "EmployeeInfo", key = "#emp_name",
unless = "#response.status != 200")
public EmployeeInformation getEmplInfo(HttpServletRequest request,
HttpServletResponse response, String emp_name) {
// ...
}
In this case, you would use the @Cacheable annotation's unless attribute, since the getEmplInfo(..) method's result (and logic) is needed to determine and set the appropriate HTTP status code on the HttpServletResponse object passed into the method.
Given that the JSON returned by the @Cacheable getEmplInfo(..) method is presumably converted to and represented by an EmployeeInformation type, and given that the @Cacheable annotation's condition-based attributes, such as condition and unless, accept SpEL expressions, you can compose the necessary expressions using SpEL's logical operators.
For example:
unless = "#response.status != 200 && !#result.fullTime"
In this configuration, it assumes the employee, represented by your EmployeeInformation type, has a boolean property called fullTime.
The unless condition effectively says only cache the EmployeeInformation by the emp_name key if the HTTP status code set in the method is 200 (SUCCESS) AND the person is a full-time employee.
It isn't too difficult to further imagine more complex configurations and SpEL expressions made possible by the SpEL language, itself.
Of course, if you need some seriously low-level, really complex conditional logic, beyond what is possible with SpEL, then you would need to decorate the caching provider implementation (i.e. Redis in your case) using Spring's Cache API.
Every caching provider integration supported by Spring is represented by two interfaces in the Cache Abstraction: Cache and CacheManager, no exceptions.
Since Cache and CacheManager are interfaces, you can either supply your own implementations, or alternatively (and recommended), you can create "wrappers" around an existing caching provider's (e.g. Redis's) implementation to enhance (i.e. "decorate") the existing caching behavior with additional (perhaps conditional) logic. Make sense?
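As a rough sketch of the decorator idea (the class name and the conditional rule are purely illustrative, and nothing here is Redis-specific), a wrapper around an existing Cache might look like this:

import java.util.concurrent.Callable;

import org.springframework.cache.Cache;

// Illustrative decorator: wraps any existing Cache (e.g. one obtained from a
// RedisCacheManager) and applies custom logic before delegating put(..).
class ConditionalCacheDecorator implements Cache {

    private final Cache delegate;

    ConditionalCacheDecorator(Cache delegate) {
        this.delegate = delegate;
    }

    @Override
    public void put(Object key, Object value) {
        // Complex, low-level conditional logic that SpEL cannot express goes here.
        if (shouldCache(key, value)) {
            this.delegate.put(key, value);
        }
    }

    private boolean shouldCache(Object key, Object value) {
        return value != null; // placeholder rule; replace with application-specific checks
    }

    // The remaining Cache operations simply delegate.
    @Override public String getName() { return this.delegate.getName(); }
    @Override public Object getNativeCache() { return this.delegate.getNativeCache(); }
    @Override public ValueWrapper get(Object key) { return this.delegate.get(key); }
    @Override public <T> T get(Object key, Class<T> type) { return this.delegate.get(key, type); }
    @Override public <T> T get(Object key, Callable<T> valueLoader) { return this.delegate.get(key, valueLoader); }
    @Override public ValueWrapper putIfAbsent(Object key, Object value) { return this.delegate.putIfAbsent(key, value); }
    @Override public void evict(Object key) { this.delegate.evict(key); }
    @Override public void clear() { this.delegate.clear(); }
}

A custom or wrapping CacheManager would then be responsible for handing out these decorated Cache instances.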
The latter is a rather advanced topic and something I have demonstrated in past SO spring-cache posts for other users' use cases. You can usually get pretty far with SpEL by itself, so let's start there, and if you need more assistance I am happy to share working examples in the comments. Just let me know.
I hope this helps kick start some ideas for you.
Cheers!
When using partitioned caching in GemFire, integrated with Spring Data using the @Cacheable annotation, data is put into the cache properly, but when retrieving from the cache, if the key is on a different partition, a PartitionedRegionException is thrown saying the hashCode is inconsistent between cache peers. I have overridden the equals and hashCode methods in the class whose objects are keys for the cache. Any idea where I could be going wrong? The two cache peers are on the same machine, and the locator is started externally.
I'm starting the cache using the following method:
@Bean
@Primary
Cache getGemfireCache() {
    Cache cache = new CacheFactory().create();
    RegionFactory<Object, Object> regionFactory = cache.createRegionFactory(RegionShortcut.PARTITION);
    allCacheNames.forEach(cacheName -> regionFactory.create(cacheName));
    return cache;
}
Any help would be appreciated.
Thanks!
Hmmm.
First, it is hard to know exactly what problem you are experiencing, but I am nearly certain it has less to do with Spring Data, or technically Spring's Cache Abstraction in this case (especially since you mention "caching" using the @Cacheable annotation), than it does with, say, Pivotal GemFire itself, or more likely with your application domain model, specifically.
Second, the problem you are experiencing has very little to do with the configuration shown above. Essentially, in your configuration, you are creating a "peer" Cache instance along with Regions for each of the caches identified in the @Cacheable annotations declared on your application service methods, which is not particularly interesting in this case.
TIP: Regarding configuration, it would have been better to do this:
@SpringBootApplication
@EnableCachingDefinedRegions
public class MyCachingSpringBootApplication { ... }
See here, here and here for more information.
NOTE: SBDG creates a ClientCache instance by default, not a "peer" Cache instance. If you truly want your Spring application to contain an embedded peer Cache instance and be part of the server cluster, then you would additionally override SBDG's preference of auto-configuring a ClientCache instance by declaring the #PeerCacheApplication annotation. See here for more details.
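For example, a minimal sketch of that peer cache variant (the annotation attributes shown are illustrative; point the locators attribute at your externally started locator):

@SpringBootApplication
@PeerCacheApplication(name = "MyPeerCachingSpringBootApplication", locators = "localhost[10334]")
@EnableCachingDefinedRegions
public class MyPeerCachingSpringBootApplication { ... }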
Next, you mention that you "overrode" equals and hashCode, which suggests you are using a complex key. In general, it is better to stick with simple key types when using Pivotal GemFire, such as Long, Integer, String, etc., for reasons like the one you are experiencing.
A better option, if you need to influence your partitioning strategy or data organization across the cluster (e.g. perhaps for colocation), is to implement GemFire's PartitionResolver and register it with the PARTITION Region (PR).
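A minimal sketch of that idea (the type and accessor names are illustrative; it assumes the complex key exposes a simple, stable identifier such as a Long id):

import org.apache.geode.cache.EntryOperation;
import org.apache.geode.cache.PartitionResolver;

// Routes entries by a simple, stable value derived from the complex key, so
// partitioning no longer depends on the key's hashCode() implementation.
public class CustomerIdPartitionResolver implements PartitionResolver<Customer, Object> {

    @Override
    public Object getRoutingObject(EntryOperation<Customer, Object> opDetails) {
        return opDetails.getKey().getId(); // assumes the key type exposes getId()
    }

    @Override
    public String getName() {
        return "customerIdPartitionResolver";
    }

    @Override
    public void close() {
        // no resources to release
    }
}

You would then register the resolver on the PARTITION Region's PartitionAttributes (e.g. via PartitionAttributesFactory.setPartitionResolver(..)) when creating the Region.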
However, it is not uncommon for your cacheable service methods to look like the following:
#Cacheable("CustomersByAccount")
Account findBy(Customer customer) { ... }
As you may well know, the "key" to the @Cacheable "findBy" service method shown above is the Customer, which is clearly a complex object and must have valid equals and hashCode methods when used as a key in the GemFire cache Region backing the application cache "CustomersByAccount".
A few questions:
Is it possible that A) your complex key's class definition (e.g. Customer) changed, such as by adding/removing a [new] field or by changing a field's type, and B) the PARTITION Region backing the cache (e.g. "CustomersByAccount") is persistent?
Are your equals and hashCode methods consistent? That is, do they declare and use the same fields to determine the result of equals and hashCode?
For example, this would not be valid:
class Customer {

    private Long id;
    private String firstName;
    private String lastName;

    ...

    @Override
    public boolean equals(Object obj) {

        if (this == obj) {
            return true;
        }

        if (!(obj instanceof Customer)) {
            return false;
        }

        Customer that = (Customer) obj;

        // equals(..) is based solely on id...
        return this.id.equals(that.id);
    }

    @Override
    public int hashCode() {
        // ...but hashCode() is based on firstName/lastName, so the two are inconsistent.
        int hashValue = 17;
        hashValue = 37 * hashValue + this.firstName.hashCode();
        hashValue = 37 * hashValue + this.lastName.hashCode();
        return hashValue;
    }

    ...
}
Or any other combination where equals/hashCode could potentially yield a different result depending on state previously stored in GemFire.
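For contrast, a consistent pair (a sketch deriving both methods from the same id field) would look like this:

@Override
public boolean equals(Object obj) {

    if (this == obj) {
        return true;
    }

    if (!(obj instanceof Customer)) {
        return false;
    }

    Customer that = (Customer) obj;

    return this.id.equals(that.id);
}

@Override
public int hashCode() {
    // Derived from the same field as equals(..), so the contract holds.
    int hashValue = 17;
    hashValue = 37 * hashValue + this.id.hashCode();
    return hashValue;
}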
You might also try clearing the cache and rehydrating (eagerly or lazily as necessary), particularly if your class definitions have changed and especially if some of those class types are used as keys.
Also, in general, I would recommend immutable keys as much as possible if it is not possible to strictly stick to simple/scalar types (e.g. like Long or String).
Perhaps, if you could share a bit more detail about your application domain model classes, such as the types used as keys, along with your use of Spring's Cache Abstraction on your service methods, that might help.
Also, any examples or test cases reproducing the problem are greatly appreciated.
Thanks!
I want to use Spring's Cache Abstraction to annotate methods as @Cacheable. However, some methods are designed to take an array or collection of parameters and return a collection. For example, consider this method to find entities:
public Collection<Entity> getEntities(Collection<Long> ids)
Semantically, I need to cache Entity objects individually (keyed by id), not based on the collection of IDs as a whole. Similar to what this question is asking about.
Simple Spring Memcached supports what I want, via its ReadThroughMultiCache, but I want to use Spring's abstraction in order to support easy changing of the cache store implementation (Guava, Coherence, Hazelcast, etc), not just memcached.
What strategies exist for caching this kind of method using Spring Cache?
Spring's Cache Abstraction does not support this behavior out-of-the-box. However, it does not mean it is not possible; it's just a bit more work to support the desired behavior.
I wrote a small example demonstrating how a developer might accomplish this. The example uses Spring's ConcurrentMapCacheManager to demonstrate the customizations. This example will need to be adapted to your desired caching provider (e.g. Hazelcast, Coherence, etc).
In short, you need to override the CacheManager implementation's method for "decorating" the Cache. This varies from implementation to implementation. In the ConcurrentMapCacheManager, the method is createConcurrentMapCache(name:String). In Spring Data GemFire, you would override the getCache(name:String) method to decorate the Cache returned. For Guava, it would be the createGuavaCache(name:String) in the GuavaCacheManager, and so on.
Then your custom, decorated Cache implementation (perhaps/ideally delegating to the actual Cache implementation) would handle caching Collections of keys and corresponding values.
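To make that concrete, here is a compressed sketch against the ConcurrentMapCache/ConcurrentMapCacheManager pair (illustrative only; it assumes the key Collection and the value Collection line up positionally, ignores null elements, and leaves operations like putIfAbsent untouched):

import java.util.ArrayList;
import java.util.Collection;
import java.util.Iterator;
import java.util.List;

import org.springframework.cache.Cache;
import org.springframework.cache.concurrent.ConcurrentMapCache;
import org.springframework.cache.concurrent.ConcurrentMapCacheManager;

// Splits Collection keys/values into individual cache entries.
class CollectionKeyCache extends ConcurrentMapCache {

    CollectionKeyCache(String name) {
        super(name);
    }

    @Override
    protected Object lookup(Object key) {
        if (key instanceof Collection) {
            List<Object> values = new ArrayList<>();
            for (Object singleKey : (Collection<?>) key) {
                Object value = super.lookup(singleKey);
                if (value == null) {
                    return null; // any missing element makes the whole call a miss
                }
                values.add(value);
            }
            return values;
        }
        return super.lookup(key);
    }

    @Override
    public void put(Object key, Object value) {
        if (key instanceof Collection && value instanceof Collection) {
            Iterator<?> keys = ((Collection<?>) key).iterator();
            Iterator<?> values = ((Collection<?>) value).iterator();
            while (keys.hasNext() && values.hasNext()) {
                super.put(keys.next(), values.next()); // cache each element individually
            }
            return;
        }
        super.put(key, value);
    }
}

// The decoration hook: override the CacheManager's Cache factory method.
class CollectionAwareCacheManager extends ConcurrentMapCacheManager {

    @Override
    protected Cache createConcurrentMapCache(String name) {
        return new CollectionKeyCache(name);
    }
}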
There are a few limitations of this approach:
A cache miss is all or nothing; i.e. if any single key is missing, the whole lookup is treated as a miss, even if some of the keys are cached. Spring (OOTB) does not let you simultaneously return cached values and invoke the method only for the difference. That would require some very extensive modifications to the Cache Abstraction that I would not recommend.
My implementation is just an example so I chose not to implement the Cache.putIfAbsent(key, value) operation (here).
While my implementation works, it could be made more robust.
Anyway, I hope it provides some insight into how to handle this situation properly.
The test class is self-contained (uses Spring JavaConfig) and can run without any extra dependencies (beyond Spring, JUnit and the JRE).
Cheers!
Worked for me. Here's a link to my answer.
https://stackoverflow.com/a/60992530/2891027
TL;DR
#Cacheable(cacheNames = "test", key = "#p0")
public List<String> getTestFunction(List<String> someIds) {
My example is with String but it also works with Integer and Long, and probably others.
I am using Spring MVC with Hibernate.
Generic Method
// getAllById
#SuppressWarnings("unchecked")
public <T> List<T> getAllById(Class<T> entityClass, long id)
throws DataAccessException {
Criteria criteria = sessionFactory.getCurrentSession().createCriteria(entityClass)
.add(Restrictions.eq("id", id));
return criteria.list();
}
In controller
List<GenCurrencyModel> currencyList=pt.getAllById(GenCurrencyModel.class,1);
Question
How can we use the @Cacheable("abc") annotation on a generic method, and destroy the cache on demand, using Spring MVC + Hibernate with a generic DAO?
According to the example in the Spring docs, the annotation is specified on a simple (non-generic) method:
#Cacheable("books")
public Book findBook(ISBN isbn) {...}
What I actually require: when an id is passed to the generic method, it should first look in the cache, and I should also be able to destroy the cache on demand.
First of all, think about the implications of using generics for a moment:
You don't know which types you will use in the future. You don't know the cache names either for that matter.
You (may) have no type information, so there is no chance of choosing a specific cache.
The last point can be solved by always providing type information, like entityClass in your method.
Solution 1: One cache
Use one cache and generate a key based on the type.
#Cacheable(value="myCache", key="#entityClass.name + #id")
Solution 2: Use @Caching
While you can use expressions for the key, you can't use them for the cache names. @Caching allows you to use multiple @Cacheable annotations, each with a different cache name.
@Caching(cacheable = {
    @Cacheable(value = "books", key = "#id", condition = "#entityClass.name == 'Book'"),
    @Cacheable(value = "students", key = "#id", condition = "#entityClass.name == 'Student'")
})
Solution 3: Write your own cache provider
This is not much of an effort to do. The Spring default cache provider is just a map after all. Your implementation could use different 'subcaches' for each type.
Clearing the cache is more difficult. Solutions 1 and 3 have only one cache; you cannot clear only 'books' but not 'students'. Solution 2 has that option, but you have to provide all possible caches and types up front.
You could use Solution 3 and talk to the cache directly instead of using @CacheEvict.
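A sketch of what talking to the cache directly might look like (the bean and cache names are illustrative; the key mirrors the "#entityClass.name + #id" expression from Solution 1):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cache.CacheManager;
import org.springframework.stereotype.Component;

@Component
public class CacheMaintenance {

    @Autowired
    private CacheManager cacheManager;

    // Evict a single entry for a given type and id, using the same key convention.
    public void evict(Class<?> entityClass, long id) {
        cacheManager.getCache("myCache").evict(entityClass.getName() + id);
    }

    // Destroy all cached entries on demand.
    public void clearAll() {
        cacheManager.getCache("myCache").clear();
    }
}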
I am using @Cacheable with Spring 3.1. I am a little bit confused about the value and key parameters of @Cacheable.
Here is what I am doing:
#Cacheable(value = "message", key = "#zoneMastNo")
public List<Option> getAreaNameOptionList(String local, Long zoneMastNo) {
//..code to fetch data form database..
return list;
}
#Cacheable(value = "message", key = "#areaMastNo")
public List<Option> getLocalityNameOptionList(String local, Long areaMastNo) {
//..code to fetch data form database..
return list;
}
What is happening here is that the second method depends on the selected value from the first method, but the issue is that when I pass zoneMastNo = 1 and areaMastNo = 1, the second method returns the first method's result.
Actually, I have lots of services, hence I am looking to use a common cache value for specific use cases.
Now my questions are:
How can I solve this issue?
Is it a good idea to use @Cacheable for every service?
After a specified time, will the cache be completely removed from memory without using @CacheEvict?
How can I solve this issue?
I assume zoneMastNo and areaMastNo are completely different keys, by which I mean the List<Option> for zoneMastNo = 1 is not the same as the List<Option> for areaMastNo = 1. This means you need two caches - one keyed by zone and the other by area. However, you are explicitly using only one cache, named message. Quoting 29.3.1 @Cacheable annotation:
#Cacheable("books")
public Book findBook(ISBN isbn) {...}
In the snippet above, the method findBook is associated with the cache named books.
So if I understand correctly, you should basically use two different caches:
#Cacheable(value = "byZone", key = "#zoneMastNo")
public List<Option> getAreaNameOptionList(String local, Long zoneMastNo)
//...
#Cacheable(value = "byArea", key = "#areaMastNo")
public List<Option> getLocalityNameOptionList(String local, Long areaMastNo)
Also, are you sure these methods won't have a different result depending on the local parameter? If not, what is it used for?
Is it a good idea to use @Cacheable for every service?
No, for the following reasons:
some methods are just fast enough
...and caching introduces some overhead of its own
some services call other services; do you need caching at every level of the hierarchy?
caching needs memory, a lot of it
cache invalidation is hard
After a specified time, will the cache be completely removed from memory without using @CacheEvict?
That totally depends on your cache implementation. But every sane implementation has such an option, e.g. EhCache.
Question 3:
It depends on your cache expiration configuration. If you use EhCache, change the settings in ehcache.xml.