I am trying to load my cache on a cold start, prior to application startup, so that values are available as soon as a user hits the server rather than having to go to my database.
The @Cacheable functionality from Spring all works great; the problem is how to manually store objects in the cache so that they can be read when the annotated method is executed.
Spring is somehow storing these objects as bytes, and I need to mimic that while I manually load the cache. I'm just trying to figure out how it processes the return objects of the method to store them in the cache as key/value pairs.
You can programmatically access any cache by using Spring's CacheManager.
See https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/cache/CacheManager.html
var cache = cacheManager.getCache("foo");
cache.put(key, value);
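For example, here is a minimal sketch of warming a cache before traffic is served, using a Spring Boot ApplicationRunner. The cache name ("users"), UserRepository, and key choice are assumptions; adapt them to your own beans and to the key your @Cacheable method uses.

import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;
import org.springframework.stereotype.Component;

@Component
public class CacheWarmer implements ApplicationRunner {

    private final CacheManager cacheManager;
    private final UserRepository userRepository; // hypothetical repository

    public CacheWarmer(CacheManager cacheManager, UserRepository userRepository) {
        this.cacheManager = cacheManager;
        this.userRepository = userRepository;
    }

    @Override
    public void run(ApplicationArguments args) {
        // Same cache name and key as the @Cacheable method, so lookups hit these entries.
        Cache cache = cacheManager.getCache("users");
        userRepository.findAll().forEach(user -> cache.put(user.getId(), user));
    }
}

The value you put is stored as-is (serialized or not, depending on the cache provider); as long as the key matches what the @Cacheable method would generate, the annotated method returns the preloaded value without hitting the database.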
I was able to solve this problem by storing values with a string key and an object value, which works wonderfully with Spring's @Cacheable annotation. Objects found in the cache are cast to the method's return type by Spring.
Related
I have a Spring Boot application that is caching some values such as <UserId, MyPojo.class>.
Example MyPojo fields:
id
code
If a change is made to MyPojo.code then I want to clear the cache for all users that have that MyPojo.id.
I've looked at the @CacheEvict annotation with a condition, but that seems to be only for deciding whether to evict based on some condition, without inspecting what is actually present in the cache itself.
Another option would be to iterate through every cache entry, check the value, and clear that key (as sketched below), but that seems expensive.
Any suggestions?
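For reference, a minimal sketch of that brute-force iteration, assuming the native cache behind Spring's Cache abstraction is a ConcurrentMap (true for the default ConcurrentMapCache; other providers expose different native types) and an illustrative cache name:

import java.util.Objects;
import java.util.concurrent.ConcurrentMap;
import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;

public class MyPojoCacheEvictor {

    private final CacheManager cacheManager;

    public MyPojoCacheEvictor(CacheManager cacheManager) {
        this.cacheManager = cacheManager;
    }

    // Evict every user whose cached MyPojo has the given id.
    public void evictByPojoId(Object pojoId) {
        Cache cache = cacheManager.getCache("usersById"); // illustrative cache name
        @SuppressWarnings("unchecked")
        ConcurrentMap<Object, Object> nativeCache =
                (ConcurrentMap<Object, Object>) cache.getNativeCache();

        nativeCache.entrySet().removeIf(entry ->
                entry.getValue() instanceof MyPojo
                        && Objects.equals(((MyPojo) entry.getValue()).getId(), pojoId));
    }
}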
While implementing Hazelcast for the first time in a set of web APIs, the usage of Map and Cache seems inconsistent.
For example, creating a cache using the Spring CacheManager results in the creation of a map:
var sCache = springCacheManager.getCache("testCache");
sCache.putIfAbsent("test", "test2");
However, creating a cache using the CachingProvider CacheManager results in the creation of an actual cache that must be opened and closed (as per the documentation):
try (var cache = Caching.getCachingProvider()
        .getCacheManager(null, null, HazelcastCachingProvider.propertiesByInstanceName("hazelcache"))
        .createCache("actualCache", config)) {
    cache.putIfAbsent("test", "test");
}
Another example: using the @Cacheable annotation will create a map, even though the documentation outlines the usage of a Cache. The following code successfully returns the first computed value using a Map in Hazelcast; a Cache is never used.
@Cacheable(value = "counter")
public Boolean test(Integer addTo) {
    counter += addTo;
    return counter % 2 != 0;
}
Is there a formal definition within Hazelcast of a cache vs a map? Are both usable for the same purpose?
The image below contains a view into a test Hazelcast Management Center that shows the above components, namely the maps and caches. These are all generated by the same client.
[Image: Hazelcast Management Center showing the maps and caches]
There are Cache, Spring Cache and Map to consider here.
For Cache, Hazelcast is an implementation provider for the Java caching standard, JSR107.
These show as "Cache" on the Management Center, and if you run hazelcastInstance.getDistributedObjects() they'll be of type ICache. This is covered in the Hazelcast documentation.
For Map, Hazelcast provides a data structure IMap which is mostly a superset of java.util.Map. These show as "Map" on the Management Center.
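As a quick check, here is a small sketch (assuming Hazelcast 4.x package names) that lists the distributed objects of a member or client and reports whether each one is an ICache or an IMap:

import com.hazelcast.cache.ICache;
import com.hazelcast.core.DistributedObject;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

public class DistributedObjectLister {

    // Prints each distributed object with its kind, mirroring what the
    // Management Center shows as "Cache" vs "Map".
    public static void list(HazelcastInstance hazelcastInstance) {
        for (DistributedObject object : hazelcastInstance.getDistributedObjects()) {
            String kind = object instanceof ICache ? "ICache"
                    : object instanceof IMap ? "IMap"
                    : object.getClass().getSimpleName();
            System.out.println(object.getName() + " -> " + kind);
        }
    }
}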
Spring also provides caching, and you can set the CacheType to JSR107 (JCache) or to Hazelcast directly, or allow Spring to pick. When Spring uses Hazelcast directly, it will use IMap, not ICache, for storage.
If you pick JCache, or configure Spring to use JCache, then you get standards-compliant behaviour. You have caching, and can easily swap the caching provider from Hazelcast to something else should you want to.
Map gives you operations such as executeOnKey to update one or more entries in situ. If the entry is a compound object but only a small part of it is changing, this can be a more efficient way to update it than sending the whole value.
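For illustration, a minimal sketch of such an in-situ update via executeOnKey, assuming Hazelcast 4.x (where an EntryProcessor can be written as a lambda); the map name, MyPojo accessor, and key are made up:

import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

public class PojoFieldUpdater {

    // Updates a single field of the cached value on the member that owns the key,
    // instead of fetching the whole value, mutating it locally, and putting it back.
    public static void updateCode(HazelcastInstance hazelcastInstance, Long userId, String newCode) {
        IMap<Long, MyPojo> users = hazelcastInstance.getMap("users"); // illustrative map name

        users.executeOnKey(userId, entry -> {
            MyPojo pojo = entry.getValue();
            if (pojo != null) {
                pojo.setCode(newCode);  // change only the small part that moved
                entry.setValue(pojo);   // setValue is required for the change to be stored
            }
            return null;
        });
    }
}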
I realise the @Cacheable annotation helps me cache the result of a particular method call, and subsequent calls are returned from the cache if there are no changes to the arguments etc.
I have a requirement where I'm trying to minimise the number of calls to the database, and hence I'm loading the entire table. However, I would like to reload this data, say, every day, just to ensure that my cache does not go out of sync with the underlying data in the database.
How can I specify such reload/refresh intervals?
I'm using Spring Boot and Hazelcast. All the examples I have seen talk about specifying LRU/LFU etc. policies in the config file for maps, but nothing at the method level.
I can't go with the LRU/LFU etc. eviction policies, as I intend to reload the entire table's data every x hours or x days.
Kindly help or point me to any such implementation or docs etc.
Spring's @Cacheable doesn't support this kind of policy at the method level. See, for example, the code for CacheableOperation.
If you are using Hazelcast as your cache provider for Spring, you can explicitly evict elements or load data by using the corresponding IMap from your HazelcastInstance.
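For example, here is a rough sketch of a scheduled daily reload; the cache/map name ("products"), ProductRepository, and cron schedule are assumptions, and @Scheduled needs @EnableScheduling somewhere in your configuration:

import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class DailyCacheReloader {

    private final HazelcastInstance hazelcastInstance;
    private final ProductRepository productRepository; // hypothetical repository

    public DailyCacheReloader(HazelcastInstance hazelcastInstance, ProductRepository productRepository) {
        this.hazelcastInstance = hazelcastInstance;
        this.productRepository = productRepository;
    }

    // Every day at 02:00, clear the IMap backing @Cacheable("products") and
    // re-populate it so the first callers of the day don't all hit the database.
    @Scheduled(cron = "0 0 2 * * *")
    public void reload() {
        IMap<Object, Object> products = hazelcastInstance.getMap("products");
        products.clear();
        productRepository.findAll().forEach(product -> products.put(product.getId(), product));
    }
}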
This question is with reference to Simple Spring Memcached (SSM).
I have a scenario where a list of deals is cached per user, using the userId as the key. Now, if deal data is updated, I need to flush the cache for all users, since this would affect deal data for all of them.
How can I achieve this with SSM annotations? The invalidate*cache and update*cache options seem to invalidate/update key-specific cache entries, whereas I need to clear the entire cache.
Currently it's impossible in plain SSM to flush the entire cache using annotations. If you require such an option, please create a feature request at: https://code.google.com/p/simple-spring-memcached/issues/list
There is another way to flush the entire cache, by using SSM with Spring Cache as described here: https://code.google.com/p/simple-spring-memcached/wiki/Getting_Started#Spring_3.1_Cache_Integration.
Just change allowClear to 'true' and use @CacheEvict(value = YOUR_CACHE_NAME, allEntries = true).
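A minimal sketch of that eviction in use (the cache name, service, and Deal type are illustrative):

import org.springframework.cache.annotation.CacheEvict;
import org.springframework.stereotype.Service;

@Service
public class DealService {

    // Flushes every entry in the "deals" cache whenever any deal is updated;
    // allowClear must be set to true on the SSM-backed cache for allEntries to work.
    @CacheEvict(value = "deals", allEntries = true)
    public void updateDeal(Deal deal) {
        // persist the change; the whole per-user deals cache is cleared afterwards
    }
}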
I have to integrate Spring and Ehcache, and I'm trying to implement it with the blocking cache pattern:
<ehcache:annotation-driven/>
There is an option, self-populating-cache-scope, with the values shared (the default) and method. Could you please explain the difference?
There is also the @Cacheable annotation with a selfPopulating flag.
As per what I read in this post:
http://groups.google.com/group/ehcache-spring-annotations/browse_thread/thread/7dbc71ce34f6ee19/b057610167dfb815?lnk=raot
it says that when shared is used, only one instance is created and the same one is used every time the same cache name is used. So if I set the selfPopulating flag to true for one method, all threads trying to access other methods annotated with @Cacheable and selfPopulating = true will be put on hold, which I don't want.
<ehcache:annotation-driven/>
When self-populating-cache-scope = method, on the other hand, separate instances are created for all methods annotated with @Cacheable and selfPopulating = true, so it doesn't create that problem.
But in that case, when I try to remove an element using @TriggerRemove, giving the cache name used in @Cacheable, will it search each of those separate instances to find the value? Isn't that an overhead?
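For context, here is a rough sketch of the annotations being discussed, from ehcache-spring-annotations; the cache name, methods, and Weather type are made up:

import com.googlecode.ehcache.annotations.Cacheable;
import com.googlecode.ehcache.annotations.TriggerRemove;
import org.springframework.stereotype.Service;

@Service
public class WeatherService {

    // selfPopulating = true wraps the underlying Ehcache in a SelfPopulatingCache,
    // so concurrent callers for the same key wait while the first one computes it.
    @Cacheable(cacheName = "weatherCache", selfPopulating = true)
    public Weather getWeather(String zipCode) {
        return lookUpSlowly(zipCode); // expensive call, cached afterwards
    }

    // Removes the entry for the given key from "weatherCache" when called.
    @TriggerRemove(cacheName = "weatherCache")
    public void updateWeather(String zipCode) {
        // update the backing store; the cached entry for this zipCode is evicted
    }

    private Weather lookUpSlowly(String zipCode) {
        return new Weather();
    }

    public static class Weather { }
}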
Answered by Eric on the Ehcache Google group linked above:
In all cases there is one underlying Ehcache instance. What happens when you set selfPopulating=true is that a SelfPopulatingCache wrapper is created.
If cache-scope=shared, then all annotations using that named cache will use the same SelfPopulatingCache wrapper. If cache-scope=method, then one wrapper is created per method.
Note that in both cases the SelfPopulatingCache is a wrapper; there is still only one actual cache backing the wrapper(s).
As for blocking: if you read the docs for SelfPopulatingCache and BlockingCache, you'll notice that Ehcache makes a compromise between cache-level locking and per-key locking via key striping.
http://ehcache.org/apidocs/net/sf/ehcache/constructs/blocking/BlockingCache.html