Ehcache jsr107:defaults not applying to programmatically created caches - spring-boot

Based on my findings in my previous SO question, I'm trying to set up JCaches with a mix of declarative and imperative configuration, to limit the max size of the caches declaratively.
I keep a list of the caches and the duration (TTL) for their entries in my application.yaml, which I get with a property loader. I then create my caches with the code below:
@Bean
public List<javax.cache.Cache<Object, Object>> getCaches() {
    javax.cache.CacheManager cacheManager = this.getCacheManager();
    List<Cache<Object, Object>> caches = new ArrayList<>();
    Map<String, String> cacheConfigs = // I populate this with a list of cache names and durations;
    for (Map.Entry<String, String> entry : cacheConfigs.entrySet()) {
        String key = entry.getKey();
        String durationMinutes = entry.getValue();
        caches.add(new GenericDefaultCacheConfigurator.GenericDefaultCacheConfig(key,
                new Duration(TimeUnit.MINUTES, Long.valueOf(durationMinutes))).getCache(cacheManager));
    }
    return caches;
}
@Bean
public CacheManager getCacheManager() {
    return Caching.getCachingProvider().getCacheManager();
}
private class GenericDefaultCacheConfig {
    private final String CACHE_ID;
    private final Duration DURATION;
    private final Factory<? extends ExpiryPolicy> EXPIRY_POLICY;

    public GenericDefaultCacheConfig(String cacheName, Duration duration) {
        this(cacheName, duration, CreatedExpiryPolicy.factoryOf(duration));
    }

    public GenericDefaultCacheConfig(String id, Duration duration, Factory<? extends ExpiryPolicy> expiryPolicyFactory) {
        CACHE_ID = id;
        DURATION = duration;
        EXPIRY_POLICY = expiryPolicyFactory;
    }

    private MutableConfiguration<Object, Object> getCacheConfiguration() {
        return new MutableConfiguration<Object, Object>()
            .setTypes(Object.class, Object.class)
            .setStoreByValue(true)
            .setExpiryPolicyFactory(EXPIRY_POLICY);
    }

    public Cache<Object, Object> getCache(CacheManager cacheManager) {
        Cache<Object, Object> cache = cacheManager.getCache(CACHE_ID, Object.class, Object.class);
        if (cache == null)
            cache = cacheManager.createCache(CACHE_ID, getCacheConfiguration());
        return cache;
    }
}
I try to limit the cache size with the following ehcache.xml:
<config
xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
xmlns='http://www.ehcache.org/v3'
xmlns:jsr107='http://www.ehcache.org/v3/jsr107'
xsi:schemaLocation="
http://www.ehcache.org/v3 http://www.ehcache.org/schema/ehcache-core-3.0.xsd
http://www.ehcache.org/v3/jsr107 http://www.ehcache.org/schema/ehcache-107-ext-3.0.xsd">
<service>
<jsr107:defaults default-template="heap-cache" enable-management="true" enable-statistics="true">
</jsr107:defaults>
</service>
<cache-template name="heap-cache">
<resources>
<heap unit="entries">20</heap>
</resources>
</cache-template>
</config>
I set the following declaration in my application.yaml:
spring:
  cache:
    jcache:
      config: classpath:ehcache.xml
However, my caches don't honor the imposed limit. I validate with the following test:
@Test
public void testGetCacheMaxSize() {
    Cache<Object, Object> cache = getCache(MY_CACHE); // I get a cache of type Eh107Cache[myCache]
    CacheRuntimeConfiguration<Object, Object> ehcacheConfig = (CacheRuntimeConfiguration<Object, Object>) cache
        .getConfiguration(Eh107Configuration.class).unwrap(CacheRuntimeConfiguration.class);
    long size = ehcacheConfig.getResourcePools().getPoolForResource(ResourceType.Core.HEAP).getSize(); // returns 9223372036854775807 (Long.MAX_VALUE) instead of the expected 20
    for (int i = 0; i < 30; i++)
        commonDataService.getAllStates("ENTRY_" + i);
    Map<Object, Object> cachedElements = cacheManagerService.getCachedElements(MY_CACHE);
    assertEquals(20, cachedElements.size()); // size() returns 30
}
Can somebody point out what I am doing wrong? Thanks in advance.

The issue comes from getting the cache manager as:
Caching.getCachingProvider().getCacheManager();
By setting the config file URI at the cache manager's initialization, I got it to work:
cachingProvider = Caching.getCachingProvider();
configFileURI = resourceLoader.getResource(configFilePath).getURI();
cacheManager = cachingProvider.getCacheManager(configFileURI, cachingProvider.getDefaultClassLoader());
I was under the impression that Spring Boot would automatically create the cache manager based on the configuration file given in the property spring.cache.jcache.config,
but that was not the case, because I obtained the cache manager as described above instead of simply autowiring it and letting Spring create it.
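For reference, when the config file lives on the filesystem, the URI passed to getCacheManager can also be built without Spring's ResourceLoader; a minimal JDK-only sketch (the path used here is an assumption for illustration):

```java
import java.net.URI;
import java.nio.file.Path;

public class ConfigUriSketch {

    // resolve a filesystem path to the URI expected by
    // CachingProvider.getCacheManager(uri, classLoader)
    static URI configUri(String configFilePath) {
        return Path.of(configFilePath).toUri();
    }

    public static void main(String[] args) {
        URI uri = configUri("src/main/resources/ehcache.xml");
        // this URI would then be passed to cachingProvider.getCacheManager(uri, classLoader)
        System.out.println(uri);
    }
}
```

For a classpath resource, `getClass().getResource("/ehcache.xml").toURI()` achieves the same, which is essentially what the ResourceLoader call in the answer does.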

Related

How to get all data from Java Spring Cache

I need to know how to retrieve, or where to see, all the data stored in my cache.
@Configuration
@EnableCaching
public class CachingConf {
    @Bean
    public CacheManager cacheManager() {
        Caffeine<Object, Object> cacheBuilder = Caffeine.newBuilder()
            .expireAfterWrite(10, TimeUnit.SECONDS)
            .maximumSize(1000);
        CaffeineCacheManager cacheManager = new CaffeineCacheManager("hr");
        cacheManager.setCaffeine(cacheBuilder);
        return cacheManager;
    }
}
private final CacheManager cacheManager;

public CacheFilter(CacheManager cacheManager) {
    this.cacheManager = cacheManager;
}

@Override
public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
    final var cache = cacheManager.getCache("hr");
    ......
I want to somehow see all the data stored in my cache, but the cache does not have a "get all" method or anything like that. Any advice?
The spring cache abstraction does not provide a method to get all the entries in a cache. But luckily they provide a method to get the underlying native cache abstraction which is Caffeine cache in your case.
The Caffeine cache has a method called asMap() to return a map view containing all the entries stored in the cache.
So combining them together will give you the following :
var cache = cacheManager.getCache("hr");
com.github.benmanes.caffeine.cache.Cache<Object, Object> nativeCache =
    (com.github.benmanes.caffeine.cache.Cache<Object, Object>) cache.getNativeCache();
ConcurrentMap<Object, Object> map = nativeCache.asMap();
// loop through the map here to access all the entries in the cache
Please note that this is a quick and effective fix, but it couples your code to Caffeine. If you mind that, you can configure Spring Cache to use JCache and configure JCache to use Caffeine (see this). As the JCache API implements Iterable<Cache.Entry<K, V>>, it allows you to iterate over all of its entries:
var cache = cacheManager.getCache("hr");
javax.cache.Cache<Object, Object> nativeCache = (javax.cache.Cache<Object, Object>) cache.getNativeCache();
for (Cache.Entry<Object, Object> entry : nativeCache) {
    // access the entries here
}
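Either way, dumping the entries once you have a map view is plain Map iteration. A JDK-only sketch, with an ordinary ConcurrentHashMap standing in for the view returned by asMap() (the entries are made up for illustration):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class CacheDumpSketch {

    // stand-in for nativeCache.asMap(); Caffeine returns the same kind of concurrent view
    static ConcurrentMap<String, String> sampleView() {
        ConcurrentMap<String, String> view = new ConcurrentHashMap<>();
        view.put("emp-1", "Alice");
        view.put("emp-2", "Bob");
        return view;
    }

    public static void main(String[] args) {
        // iterate the view to "see" every cached entry
        for (Map.Entry<String, String> entry : sampleView().entrySet()) {
            System.out.println(entry.getKey() + " -> " + entry.getValue());
        }
    }
}
```

Keep in mind that a real cache view is weakly consistent: entries may expire or be evicted while you iterate.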

spring boot with redis

I am working with Spring Boot and Redis for caching. I can cache the data I fetch from the database (Oracle) using @Cacheable(key = "{#input,#page,#size}", value = "on_test").
But when I try to fetch the data for the key ("on_test::0,0,10") with RedisTemplate, the result is 0.
Why?
Redis Config:
@Configuration
public class RedisConfig {
    @Bean
    JedisConnectionFactory jedisConnectionFactory() {
        RedisStandaloneConfiguration redisStandaloneConfiguration = new RedisStandaloneConfiguration("localhost", 6379);
        redisStandaloneConfiguration.setPassword(RedisPassword.of("admin#123"));
        return new JedisConnectionFactory(redisStandaloneConfiguration);
    }

    @Bean
    public RedisTemplate<String, Objects> redisTemplate() {
        RedisTemplate<String, Objects> template = new RedisTemplate<>();
        template.setStringSerializer(new StringRedisSerializer());
        template.setValueSerializer(new StringRedisSerializer());
        template.setConnectionFactory(jedisConnectionFactory());
        return template;
    }
}
// service
@Override
@Cacheable(key = "{#input,#page,#size}", value = "on_test")
public Page<?> getAllByZikaConfirmedClinicIs(Integer input, int page, int size) {
    try {
        Pageable newPage = PageRequest.of(page, size);
        String fromCache = controlledCacheService.getFromCache();
        if (fromCache == null && input != null) {
            log.info("cache is empty, let's initialize it!!!");
            Page<DataSet> all = dataSetRepository.getAllByZikaConfirmedClinicIs(input, newPage);
            List<DataSet> d = redisTemplate.opsForHash().values("on_test::0,0,10");
            System.out.print(d);
            return all;
        }
        return null;
The whole point of using @Cacheable is that you don't need to use RedisTemplate directly. You just need to call getAllByZikaConfirmedClinicIs() (from outside of the class it is defined in) and Spring will automatically check first whether a cached result is available and return that instead of calling the function.
If that's not working, have you annotated one of your Spring Boot configuration classes with @EnableCaching to enable caching?
You might also need to set spring.cache.type=REDIS in application.properties, or spring.cache.type: REDIS in application.yml, to ensure Spring is using Redis and not some other cache provider.
Note also that Spring's Redis cache support stores each entry as a plain key/value pair under "cacheName::key", not as a Redis hash, so reading it back with opsForHash() will find nothing.

how to configure a different TTL for each Redis cache when using @Cacheable in Spring Boot 2.0

I am using @Cacheable in Spring Boot 2.0 with Redis. I have configured RedisCacheManager as follows:
@Bean
public RedisCacheManager redisCacheManager(RedisConnectionFactory connectionFactory) {
    RedisCacheWriter redisCacheWriter = RedisCacheWriter.lockingRedisCacheWriter(connectionFactory);
    SerializationPair<Object> valueSerializationPair = RedisSerializationContext.SerializationPair
        .fromSerializer(new GenericJackson2JsonRedisSerializer());
    RedisCacheConfiguration cacheConfiguration = RedisCacheConfiguration.defaultCacheConfig();
    cacheConfiguration = cacheConfiguration.serializeValuesWith(valueSerializationPair);
    cacheConfiguration = cacheConfiguration.prefixKeysWith("myPrefix");
    cacheConfiguration = cacheConfiguration.entryTtl(Duration.ofSeconds(30));
    RedisCacheManager redisCacheManager = new RedisCacheManager(redisCacheWriter, cacheConfiguration);
    return redisCacheManager;
}
but this makes every key's TTL 30 seconds. How can I configure a different TTL for each Redis cache, i.e. per cache name?
You can configure a different expiry time for each cache using only one CacheManager: create a different configuration for each cache and put them in the map with which you create the CacheManager.
For example:
@Bean
RedisCacheWriter redisCacheWriter() {
    return RedisCacheWriter.lockingRedisCacheWriter(jedisConnectionFactory());
}

@Bean
RedisCacheConfiguration defaultRedisCacheConfiguration() {
    return RedisCacheConfiguration.defaultCacheConfig().entryTtl(Duration.ofSeconds(defaultCacheExpiration));
}

@Bean
CacheManager cacheManager() {
    Map<String, RedisCacheConfiguration> cacheNamesConfigurationMap = new HashMap<>();
    cacheNamesConfigurationMap.put("cacheName1", RedisCacheConfiguration.defaultCacheConfig().entryTtl(Duration.ofSeconds(ttl1)));
    cacheNamesConfigurationMap.put("cacheName2", RedisCacheConfiguration.defaultCacheConfig().entryTtl(Duration.ofSeconds(ttl2)));
    cacheNamesConfigurationMap.put("cacheName3", RedisCacheConfiguration.defaultCacheConfig().entryTtl(Duration.ofSeconds(ttl3)));
    return new RedisCacheManager(redisCacheWriter(), defaultRedisCacheConfiguration(), cacheNamesConfigurationMap);
}
If you need to configure a different expiry time per cache when using @Cacheable,
you can configure a different CacheManager for each TTL and specify the cacheManager when using a cache in your service:
#Cacheable(cacheManager = "expireOneHour", value = "onehour", key = "'_onehour_'+#key", sync = true)
Here is how you can define multiple Redis based caches with different TTL and maxIdleTime using Redisson Java client:
@Bean(destroyMethod = "shutdown")
RedissonClient redisson() throws IOException {
    Config config = new Config();
    config.useClusterServers()
        .addNodeAddress("redis://127.0.0.1:7004", "redis://127.0.0.1:7001");
    return Redisson.create(config);
}

@Bean
CacheManager cacheManager(RedissonClient redissonClient) {
    Map<String, CacheConfig> config = new HashMap<String, CacheConfig>();
    // create "myCache" cache with ttl = 24 minutes and maxIdleTime = 12 minutes
    config.put("myCache", new CacheConfig(24 * 60 * 1000, 12 * 60 * 1000));
    // create "myCache2" cache with ttl = 35 minutes and maxIdleTime = 24 minutes
    config.put("myCache2", new CacheConfig(35 * 60 * 1000, 24 * 60 * 1000));
    return new RedissonSpringCacheManager(redissonClient, config);
}
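Redisson's CacheConfig constructor takes both TTL and maxIdleTime in milliseconds, and hand-written arithmetic like 24 * 60 * 1000 is easy to get wrong, so it can help to let TimeUnit do the minute-to-millisecond conversion. A small JDK-only sketch:

```java
import java.util.concurrent.TimeUnit;

public class TtlMillisSketch {

    // convert a TTL expressed in minutes to the milliseconds CacheConfig expects
    static long minutesToMillis(long minutes) {
        return TimeUnit.MINUTES.toMillis(minutes);
    }

    public static void main(String[] args) {
        long ttl = minutesToMillis(24);      // instead of 24 * 60 * 1000
        long maxIdle = minutesToMillis(12);  // instead of 12 * 60 * 1000
        // these values would then go into new CacheConfig(ttl, maxIdle)
        System.out.println(ttl + " " + maxIdle);
    }
}
```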
This is my code:
1. The shared config in the common module
@Bean
RedisCacheManagerBuilderCustomizer redisCacheManagerBuilderCustomizer(List<RedisTtlConfig> ttlConfigs) {
    RedisCacheConfiguration defaultCacheConfig = RedisCacheConfiguration.defaultCacheConfig();
    return (builder) -> {
        Map<String, RedisCacheConfiguration> ttlConfigMap = new HashMap<>();
        ttlConfigs.forEach(config -> {
            config.forEach((key, ttl) -> {
                ttlConfigMap.put(key, defaultCacheConfig.entryTtl(Duration.ofSeconds(ttl)));
            });
        });
        builder.withInitialCacheConfigurations(ttlConfigMap);
        builder.cacheDefaults(defaultCacheConfig);
    };
}
2. A custom class to collect TTL config by key
public class RedisTtlConfig extends HashMap<String, Long> {
    public RedisTtlConfig setTTL(String key, Long ttl) {
        this.put(key, ttl);
        return this;
    }
}
3. Simple TTL config code in the ref module
@Bean
RedisTtlConfig corpCacheTtlConfig() {
    return new RedisTtlConfig()
        .setTTL("test1", 300L)
        .setTTL("test2", 300L);
}

ClassNotFound in HazelcastMembers for ReplicatedMaps, but ok for Maps

Our Problem might be similar to that one:
Hazelcast ClassNotFoundException for replicated maps
Since the description of the environment is not given in detail there, I'll describe our problematic environment here:
We have a dedicated Hazelcast server (member), out of the box with some config, with no additional classes added (i.e. none of the ones from our project).
Then we have two Hazelcast clients using this member, with several of our own classes.
The clients intend to use replicated maps, so at some point in our software they call hazelcastInstance.getReplicatedMap("MyName") and then do some put operations.
When they do, the dedicated Hazelcast server throws a ClassNotFoundException for the classes we want to put into the replicated map. I understand this; how should it know about our classes?
Then I change to a map instead of a replicated map:
"hazelcastInstance.getMap("MyName")"
With no other change, it works. And this is what makes me wonder: how can that be?
Does this have to do with a different in-memory storage format? Does the replicated map behave differently here?
The Hazelcast version is 3.9.2.
One piece of information might be important: the client configures a near cache for all the maps used:
EvictionConfig evictionConfig = new EvictionConfig()
    .setMaximumSizePolicy(EvictionConfig.MaxSizePolicy.ENTRY_COUNT)
    .setSize(eapCacheId.getMaxAmountOfValues());

new NearCacheConfig()
    .setName(eapCacheId.buildFullName())
    .setInMemoryFormat(InMemoryFormat.OBJECT)
    .setInvalidateOnChange(true)
    .setEvictionConfig(evictionConfig);
I changed the InMemoryFormat to BINARY; still the same ClassNotFoundException.
The start of the stack trace is:
at com.hazelcast.internal.serialization.impl.JavaDefaultSerializers$JavaSerializer.read(JavaDefaultSerializers.java:224)
at com.hazelcast.internal.serialization.impl.StreamSerializerAdapter.read(StreamSerializerAdapter.java:48)
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.toObject(AbstractSerializationService.java:185)
at com.hazelcast.spi.impl.NodeEngineImpl.toObject(NodeEngineImpl.java:339)
EDIT: I wrote a little test to demonstrate my problem:
package de.empic.hazelclient.client;
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.config.EvictionConfig;
import com.hazelcast.config.InMemoryFormat;
import com.hazelcast.config.NearCacheConfig;
import com.hazelcast.core.HazelcastInstance;
import java.util.Map;
import java.util.Random;
public class HazelClient {
private static final String[] MAP_KEYS = {"Mike", "Ben", "Luis", "Adria", "Lena"};
private static final String MAP_NAME = "Regular Map" ;
private static final String REPLICATED_MAP_NAME = "Replicated Map" ;
private static final String CACHE_MEMBERS = "192.168.56.101:5701" ;
private static final String MNGT_CENTER = "192.168.56.101:5701" ;
HazelcastInstance hazelClientInstance = null ;
private static Random rand = new Random(System.currentTimeMillis());
public static void main(String[] args) {
new HazelClient(true).loop();
}
private HazelClient(boolean useNearCache)
{
ClientConfig cfg = prepareClientConfig(useNearCache) ;
hazelClientInstance = HazelcastClient.newHazelcastClient(cfg);
}
private void loop()
{
Map<String, SampleSerializable> testMap = hazelClientInstance.getMap(MAP_NAME);
Map<String, SampleSerializable> testReplicatedMap = hazelClientInstance.getReplicatedMap(REPLICATED_MAP_NAME);
int count = 0 ;
while ( true )
{
// do a random write to map
testMap.put(MAP_KEYS[rand.nextInt(MAP_KEYS.length)], new SampleSerializable());
// do a random write to replicated map
testReplicatedMap.put(MAP_KEYS[rand.nextInt(MAP_KEYS.length)], new SampleSerializable());
if ( ++count == 10)
{
// after a while we print the map contents
System.out.println("MAP Content -------------------------");
printMapContent(testMap) ;
System.out.println("REPLICATED MAP Content --------------");
printMapContent(testReplicatedMap) ;
count = 0 ;
}
// we do not want to drown in system outs....
try {
Thread.sleep(500);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
private void printMapContent(Map<String, SampleSerializable> map)
{
for ( String currKey : map.keySet())
{
System.out.println(String.format(" - %s -> %s", currKey, map.get(currKey)));
}
}
private ClientConfig prepareClientConfig(boolean useNearCache)
{
ClientConfig cfg = new ClientConfig();
cfg.setInstanceName("SampleInstance");
cfg.getProperties().put("hazelcast.client.statistics.enabled", "true");
cfg.getProperties().put("hazelcast.client.statistics.period.seconds", "5");
if ( useNearCache )
{
cfg.addNearCacheConfig(defineNearCache(MAP_NAME));
cfg.addNearCacheConfig(defineNearCache(REPLICATED_MAP_NAME));
}
// we use a single member for demo
String[] members = {CACHE_MEMBERS} ;
cfg.getNetworkConfig().addAddress(members);
return cfg ;
}
private NearCacheConfig defineNearCache(String name)
{
EvictionConfig evictionConfig = new EvictionConfig()
.setMaximumSizePolicy(EvictionConfig.MaxSizePolicy.ENTRY_COUNT)
.setSize(Integer.MAX_VALUE);
NearCacheConfig nearCacheConfig = new NearCacheConfig()
.setName(name)
.setInMemoryFormat(InMemoryFormat.OBJECT)
.setInvalidateOnChange(true)
.setEvictionConfig(evictionConfig) ;
return nearCacheConfig;
}
}
To have the full info, the Hazelcast member is started with the following xml:
<?xml version="1.0" encoding="UTF-8"?>
<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config https://hazelcast.com/schema/config/hazelcast-config-3.8.xsd"
xmlns="http://www.hazelcast.com/schema/config"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<instance-name>server-cache</instance-name>
<network>
<port>5701</port>
<join>
<multicast enabled="false"/>
<tcp-ip enabled="true">
<members>192.168.56.101:5701</members>
</tcp-ip>
</join>
<public-address>192.168.56.101:5701</public-address>
</network>
<management-center enabled="true">http://192.168.56.101:6679/mancenter</management-center>
</hazelcast>
The fact that the Hazelcast member is running in Docker while the clients are not is, I think, not important.
Can you post your configurations for both mapConfig and replicatedMapConfig? I will try to reproduce this.
I'm thinking this has to do with where the serialization happens. A couple of things to keep in mind: there are two different configurations for map and replicated map. When you changed getReplicatedMap("MyName") to getMap("MyName"), if you don't have a map config for "MyName", it will use the default config.
By default, replicated maps store values in OBJECT in-memory format for performance.
I found my mistake: I had configured the near cache to be of memory type BINARY, but the server itself I did not configure.
After defining the replicated maps as BINARY on the server as well, it works.
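The underlying issue is plain Java serialization: with OBJECT in-memory format the member must deserialize the value, which requires the value's class on the member's classpath, while BINARY keeps the bytes opaque. A minimal JDK-only sketch of the round trip the member would have to perform (Payload is an illustrative stand-in for a project class the member does not have):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationSketch {

    // stand-in for one of the project's value classes
    static class Payload implements Serializable {
        private static final long serialVersionUID = 1L;
        final String name;
        Payload(String name) { this.name = name; }
    }

    static byte[] serialize(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        // BINARY format: the member can store these bytes as-is, no class needed
        return bos.toByteArray();
    }

    static Object deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
        // OBJECT format: this step throws ClassNotFoundException if the class is absent
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] bytes = serialize(new Payload("sample"));
        Payload back = (Payload) deserialize(bytes);
        System.out.println(back.name);
    }
}
```

Here the round trip succeeds because Payload is on the classpath; on a bare member storing in OBJECT format, the readObject step is the one that fails.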

Is it possible to set a different specification per cache using caffeine in spring boot?

I have a simple Spring Boot application using Spring Boot 1.5.11.RELEASE with @EnableCaching on the application configuration class.
pom.xml
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-cache</artifactId>
</dependency>
<dependency>
<groupId>com.github.ben-manes.caffeine</groupId>
<artifactId>caffeine</artifactId>
</dependency>
application.properties
spring.cache.type=caffeine
spring.cache.cache-names=cache-a,cache-b
spring.cache.caffeine.spec=maximumSize=100, expireAfterWrite=1d
Question
My question is simple: how can one specify a different size/expiration per cache? E.g. perhaps it's acceptable for cache-a to be valid for 1 day, but cache-b might be OK for 1 week. The specification on a Caffeine cache appears to be global to the CacheManager rather than per Cache. Am I missing something? Perhaps there is a more suitable provider for my use case?
This is your only real option:
@Bean
public CaffeineCache cacheA() {
    return new CaffeineCache("CACHE_A",
        Caffeine.newBuilder()
            .expireAfterAccess(1, TimeUnit.DAYS)
            .build());
}

@Bean
public CaffeineCache cacheB() {
    return new CaffeineCache("CACHE_B",
        Caffeine.newBuilder()
            .expireAfterWrite(7, TimeUnit.DAYS)
            .recordStats()
            .build());
}
Just expose your custom caches as beans. They are automatically added to the CaffeineCacheManager.
I configured multiple cache managers like this:
@Primary
@Bean
public CacheManager template() {
    CaffeineCacheManager cacheManager = new CaffeineCacheManager(CACHE_TEMPLATE);
    cacheManager.setCaffeine(caffeineCacheBuilder(this.settings.getCacheExpiredInMinutes()));
    return cacheManager;
}

@Bean
public CacheManager daily() {
    CaffeineCacheManager cacheManager = new CaffeineCacheManager(CACHE_TEMPLATE);
    cacheManager.setCaffeine(caffeineCacheBuilder(24 * 60));
    return cacheManager;
}
And use the cache normally:
@Cacheable(cacheManager = "template")
@Override
public ArrayList<FmdModel> getData(String arg) {
    return ....;
}
Update
It looks like the above code has a big mistake, so I changed it to:
@Configuration
@Data
@Slf4j
@ConfigurationProperties(prefix = "caching")
public class AppCacheConfig {

    // This cache spec is loaded from the `application.yml` file
    // @ConfigurationProperties(prefix = "caching")
    private Map<String, CacheSpec> specs;

    @Bean
    public CacheManager cacheManager(Ticker ticker) {
        SimpleCacheManager manager = new SimpleCacheManager();
        if (specs != null) {
            List<CaffeineCache> caches = specs.entrySet().stream()
                .map(entry -> buildCache(entry.getKey(), entry.getValue(), ticker)).collect(Collectors.toList());
            manager.setCaches(caches);
        }
        return manager;
    }

    private CaffeineCache buildCache(String name, CacheSpec cacheSpec, Ticker ticker) {
        log.info("Cache {} specified timeout of {} min, max of {}", name, cacheSpec.getTimeout(), cacheSpec.getMax());
        final Caffeine<Object, Object> caffeineBuilder = Caffeine.newBuilder()
            .expireAfterWrite(cacheSpec.getTimeout(), TimeUnit.MINUTES).maximumSize(cacheSpec.getMax())
            .ticker(ticker);
        return new CaffeineCache(name, caffeineBuilder.build());
    }

    @Bean
    public Ticker ticker() {
        return Ticker.systemTicker();
    }
}
This AppCacheConfig class allows you to define as many cache specs as you prefer.
And you can define the cache specs in the application.yml file:
caching:
  specs:
    template:
      timeout: 10    # 10 minutes
      max: 10_000
    daily:
      timeout: 1440  # 1 day
      max: 10_000
    weekly:
      timeout: 10080 # 7 days
      max: 10_000
    ...:
      timeout: ...   # in minutes
      max:
But this class still has a limitation: we can only set the timeout and max size, because of the CacheSpec class:
@Data
public class CacheSpec {
    private Integer timeout;
    private Integer max = 200;
}
Therefore, if you would like to add more config parameters, add more fields to the CacheSpec class and apply them to the cache configuration in the AppCacheConfig.buildCache function.
Hope this helps!
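For instance, extending CacheSpec with an expireAfterAccess-style parameter might look like the following plain-Java sketch (the idleTimeout field is an illustrative assumption, and Lombok's @Data is replaced by explicit accessors so the snippet is self-contained):

```java
import java.util.HashMap;
import java.util.Map;

public class CacheSpecSketch {

    // extended spec: timeout/max as before, plus an illustrative idleTimeout
    static class CacheSpec {
        private Integer timeout;      // minutes until expireAfterWrite
        private Integer max = 200;    // maximum number of entries
        private Integer idleTimeout;  // minutes until expireAfterAccess (new parameter)

        Integer getTimeout() { return timeout; }
        void setTimeout(Integer timeout) { this.timeout = timeout; }
        Integer getMax() { return max; }
        void setMax(Integer max) { this.max = max; }
        Integer getIdleTimeout() { return idleTimeout; }
        void setIdleTimeout(Integer idleTimeout) { this.idleTimeout = idleTimeout; }
    }

    public static void main(String[] args) {
        CacheSpec spec = new CacheSpec();
        spec.setTimeout(10);
        spec.setIdleTimeout(5);
        Map<String, CacheSpec> specs = new HashMap<>();
        specs.put("template", spec);
        // buildCache would then also call caffeineBuilder.expireAfterAccess(idleTimeout, MINUTES)
        System.out.println(specs.get("template").getMax());
    }
}
```

buildCache would read the new field the same way it reads timeout and max, guarding against a null value for caches that don't set it.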
I converted my initial PR into a separate tiny project.
To start using it, just add the latest dependency from Maven Central:
<dependency>
<groupId>io.github.stepio.coffee-boots</groupId>
<artifactId>coffee-boots</artifactId>
<version>2.0.0</version>
</dependency>
Format of properties is the following:
coffee-boots.cache.spec.myCache=maximumSize=100000,expireAfterWrite=1m
If no specific configuration is defined, CacheManager defaults to Spring's behavior.
Instead of using SimpleCacheManager, you can use registerCustomCache() method of CaffeineCacheManager. Below is an example:
CaffeineCacheManager manager = new CaffeineCacheManager();
manager.registerCustomCache(
    "Cache1",
    Caffeine.newBuilder()
        .maximumSize(1000)
        .expireAfterAccess(6, TimeUnit.MINUTES)
        .build()
);
manager.registerCustomCache(
    "Cache2",
    Caffeine.newBuilder()
        .maximumSize(2000)
        .expireAfterAccess(12, TimeUnit.MINUTES)
        .build()
);
