How to configure a different TTL for each Redis cache when using @Cacheable in Spring Boot 2.0 - spring-boot

I am using @Cacheable in Spring Boot 2.0 with Redis. I have configured RedisCacheManager as follows:
@Bean
public RedisCacheManager redisCacheManager(RedisConnectionFactory connectionFactory) {
    RedisCacheWriter redisCacheWriter = RedisCacheWriter.lockingRedisCacheWriter(connectionFactory);
    SerializationPair<Object> valueSerializationPair = RedisSerializationContext.SerializationPair
            .fromSerializer(new GenericJackson2JsonRedisSerializer());
    RedisCacheConfiguration cacheConfiguration = RedisCacheConfiguration.defaultCacheConfig();
    cacheConfiguration = cacheConfiguration.serializeValuesWith(valueSerializationPair);
    cacheConfiguration = cacheConfiguration.prefixKeysWith("myPrefix");
    cacheConfiguration = cacheConfiguration.entryTtl(Duration.ofSeconds(30));
    RedisCacheManager redisCacheManager = new RedisCacheManager(redisCacheWriter, cacheConfiguration);
    return redisCacheManager;
}
But this makes every key's TTL 30 seconds. How can I configure a different TTL for each Redis cache, i.e. per cache name?

You can configure a different expiration time for each cache while using only one CacheManager: create a separate configuration for each cache, put them in a map, and build the CacheManager from that map.
For example:
@Bean
RedisCacheWriter redisCacheWriter() {
    return RedisCacheWriter.lockingRedisCacheWriter(jedisConnectionFactory());
}

@Bean
RedisCacheConfiguration defaultRedisCacheConfiguration() {
    return RedisCacheConfiguration.defaultCacheConfig().entryTtl(Duration.ofSeconds(defaultCacheExpiration));
}

@Bean
CacheManager cacheManager() {
    Map<String, RedisCacheConfiguration> cacheNamesConfigurationMap = new HashMap<>();
    cacheNamesConfigurationMap.put("cacheName1", RedisCacheConfiguration.defaultCacheConfig().entryTtl(Duration.ofSeconds(ttl1)));
    cacheNamesConfigurationMap.put("cacheName2", RedisCacheConfiguration.defaultCacheConfig().entryTtl(Duration.ofSeconds(ttl2)));
    cacheNamesConfigurationMap.put("cacheName3", RedisCacheConfiguration.defaultCacheConfig().entryTtl(Duration.ofSeconds(ttl3)));
    return new RedisCacheManager(redisCacheWriter(), defaultRedisCacheConfiguration(), cacheNamesConfigurationMap);
}

If you need a different expiration time per cache when using @Cacheable, you can also configure several CacheManager beans with different TTLs and specify the cacheManager to use where the cache is used in your service:
@Cacheable(cacheManager = "expireOneHour", value = "onehour", key = "'_onehour_'+#key", sync = true)
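The answer does not show the matching CacheManager bean; below is a minimal sketch of what such a bean could look like. The bean name expireOneHour, the injected RedisConnectionFactory, and the one-hour TTL are assumptions for illustration, not part of the original answer.
// Hypothetical companion bean for the @Cacheable example above: a second
// RedisCacheManager whose default entry TTL is one hour.
@Bean("expireOneHour")
public CacheManager expireOneHourCacheManager(RedisConnectionFactory connectionFactory) {
    return RedisCacheManager.builder(connectionFactory)
            .cacheDefaults(RedisCacheConfiguration.defaultCacheConfig()
                    .entryTtl(Duration.ofHours(1)))
            .build();
}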

Here is how you can define multiple Redis-based caches with different TTL and maxIdleTime values using the Redisson Java client:
@Bean(destroyMethod = "shutdown")
RedissonClient redisson() throws IOException {
    Config config = new Config();
    config.useClusterServers()
          .addNodeAddress("redis://127.0.0.1:7004", "redis://127.0.0.1:7001");
    return Redisson.create(config);
}

@Bean
CacheManager cacheManager(RedissonClient redissonClient) {
    Map<String, CacheConfig> config = new HashMap<String, CacheConfig>();
    // create "myCache" cache with ttl = 24 minutes and maxIdleTime = 12 minutes
    config.put("myCache", new CacheConfig(24 * 60 * 1000, 12 * 60 * 1000));
    // create "myCache2" cache with ttl = 35 minutes and maxIdleTime = 24 minutes
    config.put("myCache2", new CacheConfig(35 * 60 * 1000, 24 * 60 * 1000));
    return new RedissonSpringCacheManager(redissonClient, config);
}

This is my code:
1. The shared config in the common module:
@Bean
RedisCacheManagerBuilderCustomizer redisCacheManagerBuilderCustomizer(List<RedisTtlConfig> ttlConfigs) {
    RedisCacheConfiguration defaultCacheConfig = RedisCacheConfiguration.defaultCacheConfig();
    return (builder) -> {
        Map<String, RedisCacheConfiguration> ttlConfigMap = new HashMap<>();
        ttlConfigs.forEach(config -> {
            config.forEach((key, ttl) -> {
                ttlConfigMap.put(key, defaultCacheConfig.entryTtl(Duration.ofSeconds(ttl)));
            });
        });
        builder.withInitialCacheConfigurations(ttlConfigMap);
        builder.cacheDefaults(defaultCacheConfig);
    };
}
2. A custom class to collect TTL config by key:
public class RedisTtlConfig extends HashMap<String, Long> {
    public RedisTtlConfig setTTL(String key, Long ttl) {
        this.put(key, ttl);
        return this;
    }
}
3. Simple TTL config code in the ref module:
@Bean
RedisTtlConfig corpCacheTtlConfig() {
    return new RedisTtlConfig()
            .setTTL("test1", 300L)
            .setTTL("test2", 300L);
}

Related

Meter registration fails on Spring Boot Kafka consumer with Prometheus MeterRegistry

I am investigating a bug report in our application (Spring Boot) regarding the Kafka metric kafka.consumer.fetch.manager.records.consumed.total being missing.
The application has two Kafka consumers, let's call them the query-routing and query-tracking consumers; they are configured via the @KafkaListener annotation and each consumer has its own instance of ConcurrentKafkaListenerContainerFactory.
The query-routing consumer is configured as:
@Configuration
@EnableKafka
public class QueryRoutingConfiguration {

    @Bean(name = "queryRoutingContainerFactory")
    public ConcurrentKafkaListenerContainerFactory<String, RoutingInfo> kafkaListenerContainerFactory(MeterRegistry meterRegistry) {
        Map<String, Object> consumerConfigs = new HashMap<>();
        // For brevity I removed the configs as they are trivial configs like bootstrap servers and serializers
        DefaultKafkaConsumerFactory<String, RoutingInfo> consumerFactory =
                new DefaultKafkaConsumerFactory<>(consumerConfigs);
        consumerFactory.addListener(new MicrometerConsumerListener<>(meterRegistry));
        ConcurrentKafkaListenerContainerFactory<String, RoutingInfo> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        factory.getContainerProperties().setIdleEventInterval(5000L);
        return factory;
    }
}
And the query-tracking consumer is configured as:
@Configuration
@EnableKafka
public class QueryTrackingConfiguration {

    private static final FixedBackOff NO_ATTEMPTS = new FixedBackOff(Duration.ofSeconds(0).toMillis(), 0L);

    @Bean(name = "queryTrackingContainerFactory")
    public ConcurrentKafkaListenerContainerFactory<String, QueryTrackingMessage> kafkaListenerContainerFactory(MeterRegistry meterRegistry) {
        Map<String, Object> consumerConfigs = new HashMap<>();
        // For brevity I removed the configs as they are trivial configs like bootstrap servers and serializers
        DefaultKafkaConsumerFactory<String, QueryTrackingMessage> consumerFactory =
                new DefaultKafkaConsumerFactory<>(consumerConfigs);
        consumerFactory.addListener(new MicrometerConsumerListener<>(meterRegistry));
        ConcurrentKafkaListenerContainerFactory<String, QueryTrackingMessage> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL);
        factory.setBatchListener(true);
        DefaultErrorHandler deusErrorHandler = new DefaultErrorHandler(NO_ATTEMPTS);
        factory.setCommonErrorHandler(deusErrorHandler);
        return factory;
    }
}
The MeterRegistryConfigurator bean configuration is set as:
@Configuration
public class MeterRegistryConfigurator {

    private static final Logger LOG = LoggerFactory.getLogger(MeterRegistryConfigurator.class);
    private static final String PREFIX = "dps";

    @Bean
    MeterRegistryCustomizer<MeterRegistry> meterRegistryCustomizer() {
        return registry -> registry.config()
            .onMeterAdded(meter -> LOG.info("onMeterAdded: {}", meter.getId().getName()))
            .onMeterRemoved(meter -> LOG.info("onMeterRemoved: {}", meter.getId().getName()))
            .onMeterRegistrationFailed(
                (id, s) -> LOG.info("onMeterRegistrationFailed - id '{}' value '{}'", id.getName(), s))
            .meterFilter(PrefixMetricFilter.withPrefix(PREFIX))
            .meterFilter(
                MeterFilter.deny(id ->
                    id.getName().startsWith(PREFIX + ".jvm")
                        || id.getName().startsWith(PREFIX + ".system")
                        || id.getName().startsWith(PREFIX + ".process")
                        || id.getName().startsWith(PREFIX + ".logback")
                        || id.getName().startsWith(PREFIX + ".tomcat"))
            )
            .meterFilter(MeterFilter.ignoreTags("host", "host.name"))
            .namingConvention(NamingConvention.snakeCase);
    }
}
The @KafkaListener for each consumer is set as:
@KafkaListener(
    id = "query-routing",
    idIsGroup = true,
    topics = "${query-routing.consumer.topic}",
    groupId = "${query-routing.consumer.groupId}",
    containerFactory = "queryRoutingContainerFactory")
public void listenForMessages(ConsumerRecord<String, RoutingInfo> record) {
    // Handle each record ...
}
and
@KafkaListener(
    id = "query-tracking",
    idIsGroup = true,
    topics = "${query-tracking.consumer.topic}",
    groupId = "${query-tracking.consumer.groupId}",
    containerFactory = "queryTrackingContainerFactory"
)
public void listenForMessages(List<ConsumerRecord<String, QueryTrackingMessage>> consumerRecords, Acknowledgment ack) {
    // Handle each record ...
}
When the application starts up, going to the actuator/prometheus endpoint I can see the metric for both consumers:
# HELP dps_kafka_consumer_fetch_manager_records_consumed_total The total number of records consumed
# TYPE dps_kafka_consumer_fetch_manager_records_consumed_total counter
dps_kafka_consumer_fetch_manager_records_consumed_total{client_id="consumer-qf-query-tracking-consumer-1",kafka_version="3.1.2",spring_id="not.managed.by.Spring.consumer-qf-query-tracking-consumer-1",} 7.0
dps_kafka_consumer_fetch_manager_records_consumed_total{client_id="consumer-QF-Routing-f5d0d9f1-e261-407b-954d-5d217211dee0-2",kafka_version="3.1.2",spring_id="not.managed.by.Spring.consumer-QF-Routing-f5d0d9f1-e261-407b-954d-5d217211dee0-2",} 0.0
But a few seconds later there is a new call to io.micrometer.core.instrument.binder.kafka.KafkaMetrics#checkAndBindMetrics which will remove a set of metrics (including kafka.consumer.fetch.manager.records.consumed.total)
onMeterRegistrationFailed - dps.kafka.consumer.fetch.manager.records.consumed.total string Prometheus requires that all meters with the same name have the same set of tag keys. There is already an existing meter named 'dps.kafka.consumer.fetch.manager.records.consumed.total' containing tag keys [client_id, kafka_version, spring_id]. The meter you are attempting to register has keys [client_id, kafka_version, spring_id, topic].
Going again to actuator/prometheus will only show the metric for the query-routing consumer:
# HELP deus_dps_persistence_kafka_consumer_fetch_manager_records_consumed_total The total number of records consumed for a topic
# TYPE deus_dps_persistence_kafka_consumer_fetch_manager_records_consumed_total counter
deus_dps_persistence_kafka_consumer_fetch_manager_records_consumed_total{client_id="consumer-QF-Routing-0a739a21-4764-411a-9cc6-0e60293b40b4-2",kafka_version="3.1.2",spring_id="not.managed.by.Spring.consumer-QF-Routing-0a739a21-4764-411a-9cc6-0e60293b40b4-2",theKey="routing",topic="QF_query_routing_v1",} 0.0
As you can see above the metric for the query-tracking consumer is gone.
As the log says, The meter you are attempting to register has keys [client_id, kafka_version, spring_id, topic]. The issue is that I cannot find where this metric with the topic key is being registered; that registration triggers io.micrometer.core.instrument.binder.kafka.KafkaMetrics#checkAndBindMetrics, which then removes the metric for the query-tracking consumer.
I am using
micrometer-registry-prometheus version 1.9.5
spring boot version 2.7.5
spring kafka (org.springframework.kafka:spring-kafka)
My question is: why does registration of the metric kafka.consumer.fetch.manager.records.consumed.total fail, causing it to be removed for the query-tracking consumer, and how can I fix it?
I believe this is internal in Micrometer KafkaMetrics.
Periodically, it checks for new metrics; presumably, the topic one shows up after the consumer subscribes to the topic.
@Override
public void bindTo(MeterRegistry registry) {
    this.registry = registry;
    commonTags = getCommonTags(registry);
    prepareToBindMetrics(registry);
    checkAndBindMetrics(registry);
    // vvv this scheduled task periodically re-runs checkAndBindMetrics(...) vvv
    scheduler.scheduleAtFixedRate(() -> checkAndBindMetrics(registry), getRefreshIntervalInMillis(),
            getRefreshIntervalInMillis(), TimeUnit.MILLISECONDS);
}
You should be able to write a filter to exclude the one with fewer tags.
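One way to act on that suggestion is a MeterFilter that denies the variant without the topic tag. This is a sketch of the idea rather than code from the answer; the bean method name is made up, and endsWith is used so the check works whether or not the dps prefix filter has already renamed the meter.
// Hedged sketch: deny the records-consumed meter that lacks the "topic" tag,
// so only the per-topic variant (the one with the extra tag key) gets registered.
@Bean
MeterRegistryCustomizer<MeterRegistry> recordsConsumedTagFilter() {
    return registry -> registry.config().meterFilter(
        MeterFilter.deny(id ->
            id.getName().endsWith("kafka.consumer.fetch.manager.records.consumed.total")
                && id.getTag("topic") == null));
}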

How to solve List Mapping Exception on Redis Cache Load in Spring Boot

I want to add caching to my Spring Boot backend. Saving entries to the cache seems to work, since I can see the JSON list in Redis after my first request. But once I send my second request (which should read from the cache) to the backend, Spring throws an internal error and the request fails:
WARN 25224 --- [nio-8080-exec-2] .w.s.m.s.DefaultHandlerExceptionResolver :
Resolved [org.springframework.http.converter.HttpMessageNotWritableException:
Could not write JSON: java.lang.ClassCastException#291f1fc4;
nested exception is com.fasterxml.jackson.databind.JsonMappingException:
java.lang.ClassCastException#291f1fc4
(through reference chain: java.util.ArrayList[0]->java.util.LinkedHashMap["id"])]
My backend looks as follows:
Config:
@Configuration
class RedisConfig {

    @Bean
    fun jedisConnectionFactory(): JedisConnectionFactory {
        val jedisConnectionFactory = JedisConnectionFactory()
        return jedisConnectionFactory
    }

    @Bean
    fun redisTemplate(): RedisTemplate<String, Any> {
        val myRedisTemplate = RedisTemplate<String, Any>()
        myRedisTemplate.setConnectionFactory(jedisConnectionFactory())
        return myRedisTemplate
    }

    @Bean
    fun cacheManager(): RedisCacheManager {
        return RedisCacheManager.RedisCacheManagerBuilder.fromConnectionFactory(jedisConnectionFactory()).cacheDefaults(
            RedisCacheConfiguration.defaultCacheConfig().disableCachingNullValues()
                .serializeKeysWith(RedisSerializationContext.SerializationPair.fromSerializer(RedisSerializer.string()))
                .serializeValuesWith(RedisSerializationContext.SerializationPair.fromSerializer(GenericJackson2JsonRedisSerializer(redisMapper())))
        ).build()
    }

    private fun redisMapper(): ObjectMapper {
        return ObjectMapper() //.enableDefaultTyping(DefaultTyping.NON_FINAL, As.PROPERTY)
            .setSerializationInclusion(JsonInclude.Include.NON_EMPTY)
    }
}
Controller:
fun getPrivateRecipes(@RequestParam(required = false) langCode: String?): List<PrivateRecipeData> {
    val lang = langCode ?: "en"
    val userId = getCurrentUser().userRecord.uid
    return privateRecipeCacheService.getPrivateRecipesCached(lang, userId)
}
Caching-Service
#Cacheable("privateRecipes")
fun getPrivateRecipesCached(lang: String, userId: String): List<PrivateRecipeData> {
return privateRecipeService.getPrivateRecipes(lang, userId)
}
I played around with the @Cacheable annotation and added keys, but that does not change the problem. The list seems to be serialized with one class on write and deserialized with a different one on read. How can I solve this?
In your ObjectMapper tell Jackson to use an ArrayList to hold collections of PrivateRecipeData instances like:
objectMapper.getTypeFactory().constructCollectionType(ArrayList.class, PrivateRecipeData.class);
One possible way to set it up: in your cacheManager config, pass it a RedisTemplate configured with the right ObjectMapper. Off the top of my head:
public RedisTemplate<String, Object> redisTemplate(...) {
    Jackson2JsonRedisSerializer serializer = new ...
    ObjectMapper objectMapper = new ObjectMapper();
    // ...
    serializer.setObjectMapper(objectMapper);
    RedisTemplate<String, Object> redisTemplate = new RedisTemplate<>();
    // ...
    redisTemplate.setValueSerializer(serializer);
    redisTemplate.afterPropertiesSet();
    return redisTemplate;
}
What I did in the end was just using the default ObjectMapper (providing no arguments to GenericJackson2JsonRedisSerializer):
@Bean
fun cacheManager(): RedisCacheManager {
    return RedisCacheManager.RedisCacheManagerBuilder.fromConnectionFactory(jedisConnectionFactory()).cacheDefaults(
        RedisCacheConfiguration.defaultCacheConfig().disableCachingNullValues()
            .serializeKeysWith(RedisSerializationContext.SerializationPair.fromSerializer(RedisSerializer.string()))
            .serializeValuesWith(
                RedisSerializationContext.SerializationPair.fromSerializer(
                    GenericJackson2JsonRedisSerializer()
                )
            )
    ).build()
}
The JSON serialization now looks a bit different (it contains class names as well), but that's totally fine since it is still human-readable :)
[
"java.util.ArrayList",
[
{
"#class": "com.my.package.data.PrivateRecipeData",
"id": 1,
"img_src": "user/xxxxxx/recipeImgs/32758c1c-35cf-4f92-9e8c-0057f4447d6c.jpg",
"name": "VeggieBurger",
"instructions": [
"java.util.ArrayList",
[
{
"id": 6,
"recipeId": 1,
}
]
],
...
}
],
...

Create multiple beans of SftpInboundFileSynchronizingMessageSource dynamically with InboundChannelAdapter

I am using the Spring Integration inbound channel adapter to poll files from an SFTP server. The application needs to poll multiple directories on a single SFTP server. Since the inbound channel adapter does not allow polling multiple directories, I tried creating multiple beans of the same type with different values. Since the number of directories can increase in the future, I want to control it from application properties and register the beans dynamically.
My code:
@Override
public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory) throws BeansException {
    beanFactory.registerSingleton("sftpSessionFactory", sftpSessionFactory(host, port, user, password));
    beanFactory.registerSingleton("sftpInboundFileSynchronizer",
            sftpInboundFileSynchronizer((SessionFactory) beanFactory.getBean("sftpSessionFactory")));
}

public SessionFactory<ChannelSftp.LsEntry> sftpSessionFactory(String host, String port, String user, String password) {
    DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
    factory.setHost(host);
    factory.setPort(Integer.parseInt(port));
    factory.setUser(user);
    factory.setPassword(password);
    factory.setAllowUnknownKeys(true);
    return new CachingSessionFactory<>(factory);
}

private SftpInboundFileSynchronizer sftpInboundFileSynchronizer(SessionFactory sessionFactory) {
    SftpInboundFileSynchronizer fileSynchronizer = new SftpInboundFileSynchronizer(sessionFactory);
    fileSynchronizer.setDeleteRemoteFiles(true);
    fileSynchronizer.setPreserveTimestamp(true);
    fileSynchronizer.setRemoteDirectory("/mydir/subdir");
    fileSynchronizer.setFilter(new SftpSimplePatternFileListFilter("*.pdf"));
    return fileSynchronizer;
}

@Bean
@InboundChannelAdapter(channel = "sftpChannel", poller = @Poller(fixedDelay = "2000"))
public MessageSource<File> sftpMessageSource(String s) {
    SftpInboundFileSynchronizingMessageSource source = new SftpInboundFileSynchronizingMessageSource(
            (AbstractInboundFileSynchronizer<ChannelSftp.LsEntry>) applicationContext.getBean("sftpInboundFileSynchronizer"));
    source.setLocalDirectory(new File("/dir/subdir"));
    source.setAutoCreateLocalDirectory(true);
    source.setLocalFilter(new AcceptOnceFileListFilter<>());
    source.setMaxFetchSize(Integer.parseInt(maxFetchSize));
    source.setAutoCreateLocalDirectory(true);
    return source;
}

@Bean
@ServiceActivator(inputChannel = "sftpChannel")
public MessageHandler handler() {
    return message -> {
        LOGGER.info("Payload - {}", message.getPayload());
    };
}
This code works fine. But if I create sftpMessageSource dynamically, the @InboundChannelAdapter annotation won't work. Please suggest a way to dynamically create the sftpMessageSource and handler beans and add the respective annotations.
Update:
The following code worked:
@PostConstruct
void init() {
    int index = 0;
    for (String directory : directories) {
        index++;
        int finalI = index;
        IntegrationFlow flow = IntegrationFlows
                .from(Sftp.inboundAdapter(sftpSessionFactory())
                                .preserveTimestamp(true)
                                .remoteDirectory(directory)
                                .autoCreateLocalDirectory(true)
                                .localDirectory(new File("/" + directory))
                                .localFilter(new AcceptOnceFileListFilter<>())
                                .maxFetchSize(10)
                                .filter(new SftpSimplePatternFileListFilter("*.pdf"))
                                .deleteRemoteFiles(true),
                        e -> e.id("sftpInboundAdapter" + finalI)
                                .autoStartup(true)
                                .poller(Pollers.fixedDelay(2000)))
                .handle(handler())
                .get();
        this.flowContext.registration(flow).register();
    }
}

@Bean
public SessionFactory<ChannelSftp.LsEntry> sftpSessionFactory() {
    DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
    factory.setHost(host);
    factory.setPort(Integer.parseInt(port));
    factory.setUser(user);
    factory.setPassword(password);
    factory.setAllowUnknownKeys(true);
    return new CachingSessionFactory<>(factory);
}
Annotations in Java are static. You can't add them at runtime to created objects. Plus, the framework reads those annotations at application context startup. So what you are looking for is just not possible with Java as a language per se.
You need to consider switching to the Java DSL in Spring Integration to be able to use its "dynamic flows": https://docs.spring.io/spring-integration/docs/5.3.1.RELEASE/reference/html/dsl.html#java-dsl-runtime-flows.
But, please, first of all study more what Java can and cannot do.

spring boot with redis

I am working with Spring Boot and Redis for caching. I can cache data fetched from the database (Oracle) using @Cacheable(key = "{#input,#page,#size}", value = "on_test").
When I try to fetch the data for the key ("on_test::0,0,10") with RedisTemplate, the result is 0.
Why?
Redis Config:
@Configuration
public class RedisConfig {

    @Bean
    JedisConnectionFactory jedisConnectionFactory() {
        RedisStandaloneConfiguration redisStandaloneConfiguration = new RedisStandaloneConfiguration("localhost", 6379);
        redisStandaloneConfiguration.setPassword(RedisPassword.of("admin#123"));
        return new JedisConnectionFactory(redisStandaloneConfiguration);
    }

    @Bean
    public RedisTemplate<String, Objects> redisTemplate() {
        RedisTemplate<String, Objects> template = new RedisTemplate<>();
        template.setStringSerializer(new StringRedisSerializer());
        template.setValueSerializer(new StringRedisSerializer());
        template.setConnectionFactory(jedisConnectionFactory());
        return template;
    }
}

// service
@Override
@Cacheable(key = "{#input,#page,#size}", value = "on_test")
public Page<?> getAllByZikaConfirmedClinicIs(Integer input, int page, int size) {
    try {
        Pageable newPage = PageRequest.of(page, size);
        String fromCache = controlledCacheService.getFromCache();
        if (fromCache == null && input != null) {
            log.info("cache is empty lets initials it!!!");
            Page<DataSet> all = dataSetRepository.getAllByZikaConfirmedClinicIs(input, newPage);
            List<DataSet> d = redisTemplate.opsForHash().values("on_test::0,0,10");
            System.out.print(d);
            return all;
        }
        return null;
The whole point of using @Cacheable is that you don't need to use RedisTemplate directly. You just need to call getAllByZikaConfirmedClinicIs() (from outside of the class it is defined in) and Spring will automatically check first whether a cached result is available and return that instead of calling the function.
If that's not working, have you annotated one of your Spring Boot configuration classes with @EnableCaching to enable caching?
You might also need to set spring.cache.type=REDIS in application.properties, or spring.cache.type: REDIS in application.yml to ensure Spring is using Redis and not some other cache provider.
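For reference, a minimal sketch of such a configuration class (the class name is arbitrary):
// Minimal caching setup: @EnableCaching turns on Spring's caching infrastructure
// so @Cacheable calls are intercepted; spring.cache.type=redis in
// application.properties then selects Redis as the cache provider.
@Configuration
@EnableCaching
public class CachingConfig {
}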

Is it possible to set a different specification per cache using caffeine in spring boot?

I have a simple Spring Boot application using Spring Boot 1.5.11.RELEASE with @EnableCaching on the application configuration class.
pom.xml
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-cache</artifactId>
</dependency>
<dependency>
<groupId>com.github.ben-manes.caffeine</groupId>
<artifactId>caffeine</artifactId>
</dependency>
application.properties
spring.cache.type=caffeine
spring.cache.cache-names=cache-a,cache-b
spring.cache.caffeine.spec=maximumSize=100, expireAfterWrite=1d
Question
My question is simple, how can one specify a different size/expiration per cache. E.g. perhaps it's acceptable for cache-a to be valid for 1 day. But cache-b might be ok for 1 week. The specification on a caffeine cache appears to be global to the CacheManager rather than Cache. Am I missing something? Perhaps there is a more suitable provider for my use case?
This is your only option:
@Bean
public CaffeineCache cacheA() {
    return new CaffeineCache("CACHE_A",
            Caffeine.newBuilder()
                    .expireAfterAccess(1, TimeUnit.DAYS)
                    .build());
}

@Bean
public CaffeineCache cacheB() {
    return new CaffeineCache("CACHE_B",
            Caffeine.newBuilder()
                    .expireAfterWrite(7, TimeUnit.DAYS)
                    .recordStats()
                    .build());
}
Just expose your custom caches as beans. They are automatically added to the CaffeineCacheManager.
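For illustration, a @Cacheable method then refers to these caches by name; the service method below is a made-up example:
// Hypothetical usage: entries cached under "CACHE_A" expire one day after last
// access, as configured by the cacheA() bean above.
@Cacheable("CACHE_A")
public Report loadReport(String reportId) {
    return reportClient.fetch(reportId); // placeholder for an expensive lookup
}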
I configured multiple cache managers like this:
@Primary
@Bean
public CacheManager template() {
    CaffeineCacheManager cacheManager = new CaffeineCacheManager(CACHE_TEMPLATE);
    cacheManager.setCaffeine(caffeineCacheBuilder(this.settings.getCacheExpiredInMinutes()));
    return cacheManager;
}

@Bean
public CacheManager daily() {
    CaffeineCacheManager cacheManager = new CaffeineCacheManager(CACHE_TEMPLATE);
    cacheManager.setCaffeine(caffeineCacheBuilder(24 * 60));
    return cacheManager;
}
And use the cache normally:
@Cacheable(cacheManager = "template")
@Override
public ArrayList<FmdModel> getData(String arg) {
    return ....;
}
Update:
It looks like the above code has a big mistake, so I changed it to:
@Configuration
@Data
@Slf4j
@ConfigurationProperties(prefix = "caching")
public class AppCacheConfig {

    // This cache spec is loaded from the `application.yml` file
    // via @ConfigurationProperties(prefix = "caching")
    private Map<String, CacheSpec> specs;

    @Bean
    public CacheManager cacheManager(Ticker ticker) {
        SimpleCacheManager manager = new SimpleCacheManager();
        if (specs != null) {
            List<CaffeineCache> caches = specs.entrySet().stream()
                    .map(entry -> buildCache(entry.getKey(), entry.getValue(), ticker)).collect(Collectors.toList());
            manager.setCaches(caches);
        }
        return manager;
    }

    private CaffeineCache buildCache(String name, CacheSpec cacheSpec, Ticker ticker) {
        log.info("Cache {} specified timeout of {} min, max of {}", name, cacheSpec.getTimeout(), cacheSpec.getMax());
        final Caffeine<Object, Object> caffeineBuilder = Caffeine.newBuilder()
                .expireAfterWrite(cacheSpec.getTimeout(), TimeUnit.MINUTES).maximumSize(cacheSpec.getMax())
                .ticker(ticker);
        return new CaffeineCache(name, caffeineBuilder.build());
    }

    @Bean
    public Ticker ticker() {
        return Ticker.systemTicker();
    }
}
This AppCacheConfig class allows you to define as many cache specs as you prefer, and you can define those cache specs in the application.yml file:
caching:
  specs:
    template:
      timeout: 10     # 10 minutes
      max: 10_000
    daily:
      timeout: 1440   # 1 day
      max: 10_000
    weekly:
      timeout: 10080  # 7 days
      max: 10_000
    ...:
      timeout: ...    # in minutes
      max:
Still, this class has a limitation: we can only set the timeout and max size, because of the CacheSpec class:
@Data
public class CacheSpec {
    private Integer timeout;
    private Integer max = 200;
}
Therefore, if you would like to add more config parameters, add more fields to the CacheSpec class and apply them to the cache configuration in the AppCacheConfig.buildCache function. Hope this helps!
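As a sketch of that extension point (the idleTimeout field and its mapping to expireAfterAccess are invented for illustration, not part of the original answer):
// Hypothetical extended CacheSpec with one more per-cache parameter.
@Data
public class CacheSpec {
    private Integer timeout;      // expireAfterWrite, in minutes
    private Integer max = 200;    // maximumSize
    private Integer idleTimeout;  // expireAfterAccess, in minutes (optional)
}
AppCacheConfig.buildCache would then call expireAfterAccess(cacheSpec.getIdleTimeout(), TimeUnit.MINUTES) on the Caffeine builder whenever idleTimeout is set.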
I converted my initial PR into a separate tiny project.
To start using it just add the latest dependency from Maven Central:
<dependency>
<groupId>io.github.stepio.coffee-boots</groupId>
<artifactId>coffee-boots</artifactId>
<version>2.0.0</version>
</dependency>
Format of properties is the following:
coffee-boots.cache.spec.myCache=maximumSize=100000,expireAfterWrite=1m
If no specific configuration is defined, CacheManager defaults to Spring's behavior.
Instead of using SimpleCacheManager, you can use the registerCustomCache() method of CaffeineCacheManager. Below is an example:
CaffeineCacheManager manager = new CaffeineCacheManager();
manager.registerCustomCache(
        "Cache1",
        Caffeine.newBuilder()
                .maximumSize(1000)
                .expireAfterAccess(6, TimeUnit.MINUTES)
                .build()
);
manager.registerCustomCache(
        "Cache2",
        Caffeine.newBuilder()
                .maximumSize(2000)
                .expireAfterAccess(12, TimeUnit.MINUTES)
                .build()
);
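To make those registrations visible to @Cacheable, the manager is typically exposed as the application's CacheManager bean; a minimal sketch combining the snippet above into a bean method:
// Sketch: expose the CaffeineCacheManager as the CacheManager bean so that
// @Cacheable("Cache1") and @Cacheable("Cache2") resolve to the custom caches.
@Bean
public CacheManager cacheManager() {
    CaffeineCacheManager manager = new CaffeineCacheManager();
    manager.registerCustomCache("Cache1",
            Caffeine.newBuilder().maximumSize(1000).expireAfterAccess(6, TimeUnit.MINUTES).build());
    manager.registerCustomCache("Cache2",
            Caffeine.newBuilder().maximumSize(2000).expireAfterAccess(12, TimeUnit.MINUTES).build());
    return manager;
}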
