I need to know how to retrieve, or where to see, all the data stored in my cache.
@Configuration
@EnableCaching
public class CachingConf {

    @Bean
    public CacheManager cacheManager() {
        Caffeine<Object, Object> cacheBuilder = Caffeine.newBuilder()
                .expireAfterWrite(10, TimeUnit.SECONDS)
                .maximumSize(1000);
        CaffeineCacheManager cacheManager = new CaffeineCacheManager("hr");
        cacheManager.setCaffeine(cacheBuilder);
        return cacheManager;
    }
}
private final CacheManager cacheManager;

public CacheFilter(CacheManager cacheManager) {
    this.cacheManager = cacheManager;
}

@Override
public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
    final var cache = cacheManager.getCache("hr");
    ......
I want to somehow see all the data stored in my cache, but the cache does not have a getAll method or anything like that. Any advice?
The Spring cache abstraction does not provide a method to get all the entries in a cache, but luckily it provides a method to get the underlying native cache, which is a Caffeine cache in your case.
The Caffeine cache has a method called asMap() that returns a map view containing all the entries stored in the cache.
So combining them gives you the following:
var cache = cacheManager.getCache("hr");
com.github.benmanes.caffeine.cache.Cache<Object, Object> nativeCache =
        (com.github.benmanes.caffeine.cache.Cache<Object, Object>) cache.getNativeCache();
ConcurrentMap<Object, Object> map = nativeCache.asMap();
// Loop through the map here to access all the entries in the cache
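For example, to simply dump everything in that view (plain System.out is used just for illustration):

map.forEach((key, value) -> System.out.println(key + " -> " + value));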
Please note that this is a quick and effective fix, but it couples your code to Caffeine. If that is a concern, you can configure the Spring cache to use JCache and configure JCache to use a Caffeine cache (see this). As the JCache API implements Iterable<Cache.Entry<K, V>>, it allows you to iterate over all of its entries:
var cache = cacheManager.getCache("hr");
javax.cache.Cache<Object, Object> nativeCache = (javax.cache.Cache<Object, Object>) cache.getNativeCache();
for (Cache.Entry<Object, Object> entry : nativeCache) {
    // access the entries here
}
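For reference, here is a minimal sketch of wiring Spring's cache abstraction to Caffeine through JCache; the provider class name comes from the caffeine-jcache module, and the cache configuration is deliberately left at the JCache defaults:

import javax.cache.Caching;
import javax.cache.configuration.MutableConfiguration;
import javax.cache.spi.CachingProvider;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.jcache.JCacheCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableCaching
public class JCacheConfig {

    @Bean
    public JCacheCacheManager cacheManager() {
        // Caffeine's JSR-107 provider, from the caffeine-jcache artifact
        CachingProvider provider = Caching.getCachingProvider(
                "com.github.benmanes.caffeine.jcache.spi.CaffeineCachingProvider");
        javax.cache.CacheManager jCacheManager = provider.getCacheManager();
        jCacheManager.createCache("hr", new MutableConfiguration<>());
        return new JCacheCacheManager(jCacheManager);
    }
}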
Related
In my application I am using Spring WebFlux, and I am using WebClient to retrieve details from a 3rd-party API. Now I want to store the first WebClient response in some in-memory cache so that the 2nd time I can get the response directly from the cache.
I am trying to use the Spring Boot in-memory caching mechanism and also Caffeine, but neither is working as expected.
application.yml:
spring:
  cache:
    cache-names: employee
    caffeine:
      spec: maximumSize=200,expireAfterAccess=5m
EmployeeApplication.java:
@SpringBootApplication
@EnableCaching
public class EmployeeApplication {
    public static void main(String[] args) {
        SpringApplication.run(EmployeeApplication.class, args);
    }
}
EmployeeController.java:
It has a REST endpoint employee/all which fetches all employees from the 3rd-party API.
EmployeeService.java:
@Service
@Slf4j
public class EmployeeService {

    @Autowired
    private WebClient webClient;

    @Autowired
    private CacheManager cacheManager;

    @Cacheable("employee")
    public Mono<List<Employee>> getAllEmployee() {
        log.info("inside employee service");
        return webClient.get()
                .uri("/employees/")
                .retrieve()
                // collect the streamed employees into a list to match the declared return type
                .bodyToFlux(Employee.class)
                .collectList();
    }
}
Although I have configured the cache name, the 2nd time I hit the URL it still calls the service method. What caching mechanism needs to be used to cache a Mono response? Please suggest.
There are several options to cache reactive publishers.
1. Use the reactive cache API to cache the Mono for a defined duration:
employeeService.getAllEmployee()
        .cache(Duration.ofMinutes(60))
        .flatMap(employees -> {
            // process data
        });
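Note that .cache() only helps if the same Mono instance is reused across calls; building a fresh Mono per request bypasses it. A minimal sketch of holding the cached pipeline in a field (the field name cachedEmployees is mine; the WebClient setup is assumed from the question):

import java.time.Duration;
import java.util.List;
import org.springframework.stereotype.Service;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

@Service
public class EmployeeService {

    private final Mono<List<Employee>> cachedEmployees;

    public EmployeeService(WebClient webClient) {
        // build the pipeline once; cache() replays the fetched list for 60 minutes
        this.cachedEmployees = webClient.get()
                .uri("/employees/")
                .retrieve()
                .bodyToFlux(Employee.class)
                .collectList()
                .cache(Duration.ofMinutes(60));
    }

    public Mono<List<Employee>> getAllEmployee() {
        return cachedEmployees;
    }
}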
2. Use an external cache with Caffeine.
Caffeine supports an async cache based on CompletableFuture that can easily be adapted to the Reactive API:
AsyncLoadingCache<String, List<Employee>> cache = Caffeine.newBuilder()
        .buildAsync((tenant, executor) ->
                employeeService.getAllEmployee(tenant).toFuture()
        );

Mono<List<Employee>> getEmployee(String tenant) {
    return Mono.fromCompletionStage(cache.get(tenant));
}
3. Use an external cache with Guava and CacheMono from reactor-extra. This option is more suitable if you need to cache results based on different inputs (e.g. a multi-tenant environment).
UPDATE: CacheMono has been deprecated since reactor-extra 3.4.7. Better use option 2, an external cache with Caffeine.
Here is an example for Guava, but you could adapt it for CacheManager:
Cache<String, List<Employee>> cache = CacheBuilder.newBuilder()
        .expireAfterWrite(cacheTtl)
        .build();

Mono<List<Employee>> getEmployee(String tenant) {
    return CacheMono.lookup(key -> Mono.justOrEmpty(cache.getIfPresent(key)).map(Signal::next), tenant)
            .onCacheMissResume(() -> employeeService.getAllEmployee(tenant))
            .andWriteWith((key, signal) -> Mono.fromRunnable(() ->
                    Optional.ofNullable(signal.get())
                            .ifPresent(value -> cache.put(key, value))
            ));
}
I am trying to cache Kafka records for a 3-minute interval, after which they expire and are removed from the cache.
Each incoming record, fetched using a Kafka consumer written in Spring Boot, needs to be put into the cache first; then, if the next record matches a cached record, I need to discard it as a duplicate.
I have tried using a Caffeine cache as below:
@Configuration
@EnableCaching
public class AppCacheManagerConfig {

    @Bean
    public CacheManager cacheManager(Ticker ticker) {
        CaffeineCache bookCache = buildCache("declineRecords", ticker, 3);
        SimpleCacheManager cacheManager = new SimpleCacheManager();
        cacheManager.setCaches(Collections.singletonList(bookCache));
        return cacheManager;
    }

    private CaffeineCache buildCache(String name, Ticker ticker, int minutesToExpire) {
        return new CaffeineCache(name, Caffeine.newBuilder()
                .expireAfterWrite(minutesToExpire, TimeUnit.MINUTES)
                .maximumSize(100)
                .ticker(ticker)
                .build());
    }

    @Bean
    public Ticker ticker() {
        return Ticker.systemTicker();
    }
}
and my Kafka consumer is as below:
@Autowired
CachingServiceImpl cachingService;

@KafkaListener(topics = "#{'${spring.kafka.consumer.topic}'}", concurrency = "#{'${spring.kafka.consumer.concurrentConsumers}'}", errorHandler = "#{'${spring.kafka.consumer.errorHandler}'}")
public void consume(Message<?> message, Acknowledgment acknowledgment,
        @Header(KafkaHeaders.RECEIVED_TIMESTAMP) long createTime) {
    logger.info("Received Message: " + message.getPayload());
    try {
        boolean approveTopic = false;
        boolean duplicateRecord = false;
        if (cachingService.isDuplicateCheck(declineRecord)) {
            // do something with records
        } else {
            // do something with records
        }
        cachingService.putInCache(xmlJSONObj, declineRecord, time);
and my caching service is as below:
@Component
public class CachingServiceImpl {

    private static final Logger logger = LoggerFactory.getLogger(CachingServiceImpl.class);

    @Autowired
    CacheManager cacheManager;

    @Cacheable(value = "declineRecords", key = "#declineRecord", sync = true)
    public String putInCache(JSONObject xmlJSONObj, String declineRecord, String time) {
        logger.info("Record is cached for the 3-minute interval check: {}", declineRecord);
        cacheManager.getCache("declineRecords").put(declineRecord, time);
        return declineRecord;
    }

    public boolean isDuplicateCheck(String declineRecord) {
        return cacheManager.getCache("declineRecords").get(declineRecord) != null;
    }
}
But each time a record arrives at the consumer, my cache is always empty; it is not holding the records.
Modifications done:
After going through the suggestions and some more R&D, I added the configuration file below and removed some of the earlier logic. Now the caching is working as expected, but the duplicate check fails when all three consumers send the same records.
@Configuration
public class AppCacheManagerConfig {

    public static Cache<String, Object> jsonCache = Caffeine.newBuilder()
            .expireAfterWrite(3, TimeUnit.MINUTES)
            .maximumSize(10000)
            .recordStats()
            .build();

    @Bean
    public CacheLoader<Object, Object> cacheLoader() {
        CacheLoader<Object, Object> cacheLoader = new CacheLoader<Object, Object>() {
            @Override
            public Object load(Object key) throws Exception {
                return null;
            }

            @Override
            public Object reload(Object key, Object oldValue) throws Exception {
                return oldValue;
            }
        };
        return cacheLoader;
    }
}
Now I am using the above cache with manual put and get.
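For illustration, a manual put/get against that static cache could look like this (declineRecord and xmlJSONObj are the names from the consumer above; the duplicate flag is mine):

// store the record keyed by its identifier
AppCacheManagerConfig.jsonCache.put(declineRecord, xmlJSONObj);

// later: a non-null result means the record was already seen within the last 3 minutes
Object cached = AppCacheManagerConfig.jsonCache.getIfPresent(declineRecord);
boolean duplicate = (cached != null);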
I guess you're trying to implement record deduplication for Kafka.
Here is a similar discussion:
https://github.com/spring-projects/spring-kafka/issues/80
Here is the current abstract class which you may extend to achieve the necessary result:
https://github.com/spring-projects/spring-kafka/blob/master/spring-kafka/src/main/java/org/springframework/kafka/listener/adapter/AbstractFilteringMessageListener.java
Your caching service is definitely incorrect: the @Cacheable annotation is meant to mark data getters and setters so that caching is added through AOP, while in your code you clearly implement some low-level cache-updating logic of your own.
At least the following changes may help you:
Remove @Cacheable. You don't need it because you work with the cache manually, so it may be the source of conflicts (especially since you use sync = true). If it helps, remove @EnableCaching as well - it enables support for cache-related Spring annotations, which you don't need here. (A corrected sketch of the service follows this list.)
Try removing the Ticker bean (and the corresponding parameters of the other beans). It should not be harmful as per your configuration, but usually it is helpful only for tests; there is no need to define it otherwise.
Double-check what declineRecord is. If it is a serialized object, ensure that serialization works properly.
Add recordStats() to the cache and output stats() to the log for further analysis.
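For illustration, here is a minimal sketch of the caching service with the annotation removed and the cache accessed only manually (the names are taken from the question; the structure is an assumption):

import org.springframework.cache.CacheManager;
import org.springframework.stereotype.Component;

@Component
public class CachingServiceImpl {

    private final CacheManager cacheManager;

    public CachingServiceImpl(CacheManager cacheManager) {
        this.cacheManager = cacheManager;
    }

    // manual write: no @Cacheable, so AOP cannot interfere with the explicit put
    public void putInCache(String declineRecord, String time) {
        cacheManager.getCache("declineRecords").put(declineRecord, time);
    }

    // a record is a duplicate if it is already present in the cache
    public boolean isDuplicateCheck(String declineRecord) {
        return cacheManager.getCache("declineRecords").get(declineRecord) != null;
    }
}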
I am using a Redis cache (via the Jedis client), and I would like to use ValueOperations#multiGet, which takes a Collection of keys, and returns a List of objects from the cache, in the same order. My question is, what happens when some of the keys are in the cache, but others are not? I am aware that underneath, Redis MGET is used, which will return nil for any elements that are not in the cache.
I cannot find any documentation on how ValueOperations will interpret this response. I assume the missing values will be null, and I can certainly test it, but it would be dangerous to build a system around undocumented behavior.
For completeness, here is how the cache client is configured:
@Bean
public RedisConnectionFactory redisConnectionFactory() {
    JedisConnectionFactory redisConnectionFactory = new JedisConnectionFactory();
    redisConnectionFactory.setHostName(address);
    redisConnectionFactory.setPort(port);
    redisConnectionFactory.afterPropertiesSet();
    return redisConnectionFactory;
}

@Bean
public ValueOperations<String, Object> someRedisCache(RedisConnectionFactory cf) {
    RedisTemplate<String, Object> redisTemplate = new RedisTemplate<>();
    redisTemplate.setConnectionFactory(cf);
    redisTemplate.setDefaultSerializer(new GenericJackson2JsonRedisSerializer());
    redisTemplate.afterPropertiesSet();
    return redisTemplate.opsForValue();
}
I am using spring-data-redis:2.1.4
So, is there any documentation around this, or some reliable source of truth?
After some poking around, it looks like the answer has something to do with the serializer used - in this case GenericJackson2JsonRedisSerializer. Not wanting to dig too much, I simply wrote a test validating that any (nil) values returned by Redis are converted to null:
@Autowired
ValueOperations<String, SomeObject> valueOperations

@Test
void multiGet() {
    // Given
    SomeObject someObject = SomeObject
            .builder()
            .contentId("key1")
            .build()
    valueOperations.set("key1", someObject)

    // When
    List<SomeObject> someObjects = valueOperations.multiGet(Arrays.asList("key1", "nonexisting"))

    // Then
    assertEquals(2, someObjects.size())
    assertEquals(someObject, someObjects.get(0))
    assertEquals(null, someObjects.get(1))
}
So, in Redis, this:
127.0.0.1:6379> MGET "\"key1\"" "\"nonexisting\""
1) "{\"#class\":\"some.package.SomeObject\",\"contentId\":\"key1\"}"
2) (nil)
will result in a List of {someObject, null}.
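So misses come back as null entries at the corresponding positions. If you only want the hits, filtering out the nulls is enough; a small sketch:

List<SomeObject> hits = valueOperations.multiGet(Arrays.asList("key1", "nonexisting"))
        .stream()
        .filter(Objects::nonNull)
        .collect(Collectors.toList());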
I'm learning Spring WebFlux, and while writing a sample application I found a concern related to reactive types (Mono/Flux) combined with Spring Cache.
Consider the following code snippet (in Kotlin):
@Repository
interface TaskRepository : ReactiveMongoRepository<Task, String>

@Service
class TaskService(val taskRepository: TaskRepository) {

    @Cacheable("tasks")
    fun get(id: String): Mono<Task> = taskRepository.findById(id)
}
Is this a valid and safe way of caching method calls that return Mono or Flux? Maybe there are some other principles for doing this?
The following code works with SimpleCacheResolver, but by default it fails with Redis because Mono is not Serializable. To make it work, e.g. a Kryo serializer needs to be used.
Hack way
For now, there is no fluent integration of @Cacheable with Reactor 3.
However, you may bypass that by adding the .cache() operator to the returned Mono:
@Repository
interface TaskRepository : ReactiveMongoRepository<Task, String>

@Service
class TaskService(val taskRepository: TaskRepository) {

    @Cacheable("tasks")
    fun get(id: String): Mono<Task> = taskRepository.findById(id).cache()
}
That hack caches and shares the data returned from taskRepository. In turn, Spring's caching will cache a reference to the returned Mono and then return that reference. In other words, it is a cache of a Mono which holds the cache :).
Reactor Addons Way
There is an addon for Reactor 3 which allows fluent integration with modern in-memory caches like Caffeine, JCache, etc. Using that technique you will be able to cache your data easily:
@Repository
interface TaskRepository : ReactiveMongoRepository<Task, String>

@Service
class TaskService(val taskRepository: TaskRepository, val manager: CacheManager) {

    fun get(id: String): Mono<Task> = CacheMono
            .lookup({ key: String ->
                // read a previously stored Signal<Task> from the Spring cache
                Mono.justOrEmpty(manager.getCache("tasks")?.get(key)?.get() as? Signal<Task>)
            }, id)
            .onCacheMissResume { taskRepository.findById(id) }
            .andWriteWith { key, signal ->
                // store the emitted Signal<Task> in the Spring cache
                Mono.fromRunnable<Void> { manager.getCache("tasks")?.put(key, signal) }
            }
}
Note: the Reactor addons cache support has its own abstraction, which is Signal<T>, so do not worry about that and just follow that convention.
I have used Oleh Dokuka's hacky solution and it worked great, but there is a catch: you must use a greater Duration in the Flux cache than your cacheable cache's time-to-live value. If you don't pass a duration to the Flux cache, it won't be invalidated (the Flux documentation says: "Turn this Flux into a hot source and cache last emitted signals for further Subscriber.").
So making the Flux cache 2 minutes and the time-to-live 30 seconds can be a valid configuration. If the Ehcache timeout occurs first, a new Flux cache reference is generated and it will be used.
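Schematically, in Java (the durations are the values from the paragraph above; the 30-second time-to-live itself is configured on the backing cache, not here):

@Cacheable("tasks") // backing cache entry time-to-live: 30 seconds
public Mono<Task> get(String id) {
    // the Mono's replay duration deliberately exceeds the backing cache TTL
    return taskRepository.findById(id).cache(Duration.ofMinutes(2));
}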
// In a Facade:
public Mono<HybrisResponse> getProducts(HybrisRequest request) {
    return Mono.just(HybrisResponse.builder().build());
}

// In a service layer:
@Cacheable(cacheNames = "embarkations")
public HybrisResponse cacheable(HybrisRequest request) {
    LOGGER.info("executing cacheable");
    return null;
}

@CachePut(cacheNames = "embarkations")
public HybrisResponse cachePut(HybrisRequest request) {
    LOGGER.info("executing cachePut");
    return hybrisFacade.getProducts(request).block();
}

// In a Controller:
HybrisResponse hybrisResponse = null;
try {
    // get from cache
    hybrisResponse = productFeederService.cacheable(request);
} catch (Throwable e) {
    // if not in cache then cache it
    hybrisResponse = productFeederService.cachePut(request);
}
return Mono.just(hybrisResponse)
        .map(result -> ResponseBody.<HybrisResponse>builder()
                .payload(result).build())
        .map(ResponseEntity::ok);
I am working on a local classifieds website, and right now every time a page loads the database gets queried.
I have noticed that other popular classifieds websites serve a cached version of their site, which would greatly reduce the load time and server load.
How can I achieve this with Spring Boot or Tomcat? I want the website's cache to update every X minutes.
I am using Thymeleaf as my template engine.
First, you should add org.springframework.boot:spring-boot-starter-cache to your dependencies in build.gradle or pom.xml.
Let's say you're using a DataService to get the data that feeds your view. You can put the @Cacheable annotation on it:
@Service
public class DataService {

    @Cacheable("cache")
    public String compute() {
        return "something";
    }
}
Then you should add the following configuration:
@EnableCaching
@EnableScheduling // needed for the @Scheduled eviction below to fire
@Configuration
public class CacheConfiguration {

    public static final String CACHE_NAME = "cache";

    @Bean
    public CacheManager cacheManager() {
        return new ConcurrentMapCacheManager(CACHE_NAME);
    }

    @CacheEvict(allEntries = true, value = CACHE_NAME)
    @Scheduled(fixedDelay = 10 * 60 * 1000, initialDelay = 500)
    public void evictCache() {}
}
The cache will be cleared every 10 minutes.