Use Ehcache to implement a multimap

Is it possible to use Ehcache to implement a multimap? I would like to store duplicate keys with different values that expire after a given time. Ehcache easily handles the expiration of elements, but I am not aware of a configuration that allows duplicate keys.

Ehcache does not support duplicate keys. You could always cache a collection of values as long as a single expiration for all values makes sense.
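For example, a minimal sketch of that workaround with the Ehcache 3 builder API (the cache name, heap size, and 60-second TTL are illustrative assumptions, not from the answer):

import java.time.Duration;
import java.util.ArrayList;

import org.ehcache.Cache;
import org.ehcache.CacheManager;
import org.ehcache.config.builders.CacheConfigurationBuilder;
import org.ehcache.config.builders.CacheManagerBuilder;
import org.ehcache.config.builders.ExpiryPolicyBuilder;
import org.ehcache.config.builders.ResourcePoolsBuilder;

public class MultimapWorkaround {
    public static void main(String[] args) {
        CacheManager cacheManager = CacheManagerBuilder.newCacheManagerBuilder().build(true);

        // One entry per key holds the whole collection of values, so all
        // values of a key share a single 60-second time-to-live.
        Cache<String, ArrayList> cache = cacheManager.createCache("multimap",
                CacheConfigurationBuilder
                        .newCacheConfigurationBuilder(String.class, ArrayList.class, ResourcePoolsBuilder.heap(100))
                        .withExpiry(ExpiryPolicyBuilder.timeToLiveExpiration(Duration.ofSeconds(60))));

        ArrayList<Integer> values = new ArrayList<>();
        values.add(1);
        values.add(2);
        cache.put("a", values); // "duplicate keys" become one key -> many values

        cacheManager.close();
    }
}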

You can use Infinispan for this:
@Test
public void testMultimapCache() throws Exception {
    // Load the Infinispan configuration from the classpath
    EmbeddedCacheManager cacheManager = new DefaultCacheManager("infinispan.xml");
    MultimapCacheManager<String, Object> multimapCacheManager = EmbeddedMultimapCacheManagerFactory.from(cacheManager);
    MultimapCache<String, Object> cache = multimapCacheManager.get("test");
    // put() is asynchronous and returns a CompletableFuture, hence the join()
    cache.put("a", 1).join();
    cache.put("a", 2).join();
    cache.put("a", 3).join();
    cache.put("a", 4).join();
    cache.put("a", 1).join(); // a key/value pair already present is not stored again
    System.out.println(cache.get("a").get());
    // [1, 2, 3, 4]
}

Related

How to get all data from Java Spring Cache

I need to know how to retrieve, or where to see, all the data stored in my cache.
@Configuration
@EnableCaching
public class CachingConf {

    @Bean
    public CacheManager cacheManager() {
        Caffeine<Object, Object> cacheBuilder = Caffeine.newBuilder()
                .expireAfterWrite(10, TimeUnit.SECONDS)
                .maximumSize(1000);
        CaffeineCacheManager cacheManager = new CaffeineCacheManager("hr");
        cacheManager.setCaffeine(cacheBuilder);
        return cacheManager;
    }
}
private final CacheManager cacheManager;

public CacheFilter(CacheManager cacheManager) {
    this.cacheManager = cacheManager;
}

@Override
public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
    final var cache = cacheManager.getCache("hr");
    ......
I want to somehow see all the data stored in my cache, but the cache does not have a getAll method or anything like that. Any advice?
The Spring cache abstraction does not provide a method to get all the entries in a cache. But luckily it provides a method to get the underlying native cache abstraction, which is a Caffeine cache in your case.
The Caffeine cache has a method called asMap() to return a map view containing all the entries stored in the cache.
So combining them together will give you the following :
var cache = cacheManager.getCache("hr");
com.github.benmanes.caffeine.cache.Cache<Object, Object> nativeCache =
        (com.github.benmanes.caffeine.cache.Cache<Object, Object>) cache.getNativeCache();
ConcurrentMap<Object, Object> map = nativeCache.asMap();
// Loop through the map here to access all the entries in the cache
Please note that this is a quick and effective fix, but it couples your code to Caffeine. If you mind, you can configure Spring Cache to use JCache and configure JCache to use Caffeine under the hood (see this). As the JCache API implements Iterable<Cache.Entry<K, V>>, it allows you to iterate over all of its entries:
var cache = cacheManager.getCache("hr");
javax.cache.Cache<Object, Object> nativeCache = (javax.cache.Cache<Object, Object>) cache.getNativeCache();
for (Cache.Entry<Object, Object> entry : nativeCache) {
    // access the entries here
}
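A minimal sketch of that wiring, assuming Caffeine's JCache module (com.github.ben-manes.caffeine:jcache) is on the classpath; the cache name "hr" is carried over from the question, and the MutableConfiguration is left at its defaults for brevity:

@Bean
public org.springframework.cache.CacheManager cacheManager() {
    // Obtain a JCache CacheManager backed by Caffeine
    javax.cache.CacheManager jCacheManager = javax.cache.Caching
            .getCachingProvider("com.github.benmanes.caffeine.jcache.spi.CaffeineCachingProvider")
            .getCacheManager();
    jCacheManager.createCache("hr", new javax.cache.configuration.MutableConfiguration<>());
    // Wrap it in Spring's JCache adapter so @Cacheable and friends use it
    return new org.springframework.cache.jcache.JCacheCacheManager(jCacheManager);
}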

Spring Batch - How to read 5 million records in a faster way?

I'm developing a Spring Boot v2.2.5.RELEASE and Spring Batch example. In this example, I'm reading 5 million records using JdbcPagingItemReader from Postgres in one data-center and writing them into MongoDB in another data-center.
This migration is too slow, and I need better performance from this batch job. I'm not sure how to use partitioning, because the PK of that table holds UUID values, so I can't think of using ColumnRangePartitioner. Is there a best approach to implement this?
Approach-1:
@Bean
public JdbcPagingItemReader<Customer> customerPagingItemReader() {
    // Reading database records using JDBC in a paging fashion
    JdbcPagingItemReader<Customer> reader = new JdbcPagingItemReader<>();
    reader.setDataSource(this.dataSource);
    reader.setFetchSize(1000);
    reader.setRowMapper(new CustomerRowMapper());

    // Sort keys
    Map<String, Order> sortKeys = new HashMap<>();
    sortKeys.put("cust_id", Order.ASCENDING);

    // POSTGRES implementation of a PagingQueryProvider using database-specific features
    PostgresPagingQueryProvider queryProvider = new PostgresPagingQueryProvider();
    queryProvider.setSelectClause("*");
    queryProvider.setFromClause("from customer");
    queryProvider.setSortKeys(sortKeys);
    reader.setQueryProvider(queryProvider);

    return reader;
}
Then for the Mongo writer, I've used Spring Data Mongo as a custom writer.
Job details
@Bean
public Job multithreadedJob() {
    return this.jobBuilderFactory.get("multithreadedJob")
            .start(step1())
            .build();
}

@Bean
public Step step1() {
    ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
    taskExecutor.setCorePoolSize(4);
    taskExecutor.setMaxPoolSize(4);
    taskExecutor.afterPropertiesSet();

    return this.stepBuilderFactory.get("step1")
            .<Transaction, Transaction>chunk(100)
            .reader(fileTransactionReader(null))
            .writer(writer(null))
            .taskExecutor(taskExecutor)
            .build();
}
Approach-2: Would AsyncItemProcessor and AsyncItemWriter be the better option, given that I would still have to read using the same JdbcPagingItemReader?
Approach-3: Partitioning, how do I use it when the PK is a UUID?
Partitioning (approach 3) is the best option IMO. If your primary key is a String, you can try to create a compound key (i.e. a combination of columns that makes up a unique key).
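Another option, not from the original answer, is to partition on the first hex character of the UUID itself, which yields 16 roughly even buckets. A minimal sketch (the class name, the "prefix" context key, and the cust_id column are illustrative assumptions):

import java.util.HashMap;
import java.util.Map;

import org.springframework.batch.core.partition.support.Partitioner;
import org.springframework.batch.item.ExecutionContext;

public class UuidPrefixPartitioner implements Partitioner {

    private static final String HEX = "0123456789abcdef";

    @Override
    public Map<String, ExecutionContext> partition(int gridSize) {
        // gridSize is ignored here; the bucket count is fixed at 16.
        Map<String, ExecutionContext> partitions = new HashMap<>();
        for (int i = 0; i < HEX.length(); i++) {
            ExecutionContext context = new ExecutionContext();
            // Each worker step appends e.g. "WHERE cust_id::text LIKE :prefix || '%'"
            // (Postgres syntax) to its paging query.
            context.putString("prefix", String.valueOf(HEX.charAt(i)));
            partitions.put("partition" + i, context);
        }
        return partitions;
    }
}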

Fetching all keys from a partitioned Coherence cache

I am doing a project where my requirement is to implement a Coherence cache dashboard. The basic idea is to fetch all the keys stored in the Coherence cache. Is there a command or any other way to fetch all cache keys from a distributed Coherence cache?
public void printAllCache() {
    Cache<String, String> cache = cacheManager.getCache(CACHENAME, String.class, String.class);
    Iterator<Cache.Entry<String, String>> allCacheEntries = cache.iterator();
    while (allCacheEntries.hasNext()) {
        Cache.Entry<String, String> currentEntry = allCacheEntries.next();
        System.out.println("Key: " + currentEntry.getKey() + " Value: " + currentEntry.getValue());
    }
}
By doing this, I can iterate over the cache keys created by the current cache manager. But how can I find keys created by other cache managers in a partitioned Coherence cache?
This is a simple call. On a partitioned cache, NamedCache.keySet() is evaluated cluster-wide, so it returns the keys from all members, not just those created by the local cache manager:
NamedCache<String, String> cache = CacheFactory.getCache(CACHENAME, String.class, String.class);
Set<String> keys = cache.keySet();

How are cache misses handled by spring-data-redis multiGet?

I am using a Redis cache (via the Jedis client), and I would like to use ValueOperations#multiGet, which takes a Collection of keys, and returns a List of objects from the cache, in the same order. My question is, what happens when some of the keys are in the cache, but others are not? I am aware that underneath, Redis MGET is used, which will return nil for any elements that are not in the cache.
I cannot find any documentation of how ValueOperations will interpret this response. I assume they will be null, and can certainly test it, but it would be dangerous to build a system around undocumented behavior.
For completeness, here is how the cache client is configured:
@Bean
public RedisConnectionFactory redisConnectionFactory() {
    JedisConnectionFactory redisConnectionFactory = new JedisConnectionFactory();
    redisConnectionFactory.setHostName(address);
    redisConnectionFactory.setPort(port);
    redisConnectionFactory.afterPropertiesSet();
    return redisConnectionFactory;
}

@Bean
public ValueOperations<String, Object> someRedisCache(RedisConnectionFactory cf) {
    RedisTemplate<String, Object> redisTemplate = new RedisTemplate<>();
    redisTemplate.setConnectionFactory(cf);
    redisTemplate.setDefaultSerializer(new GenericJackson2JsonRedisSerializer());
    redisTemplate.afterPropertiesSet();
    return redisTemplate.opsForValue();
}
I am using spring-data-redis:2.1.4
So, is there any documentation around this, or some reliable source of truth?
After some poking around, it looks like the answer has something to do with the serializer used, in this case GenericJackson2JsonRedisSerializer. Not wanting to dig too much, I simply wrote a test validating that any (nil) values returned by Redis are converted to null:
@Autowired
ValueOperations<String, SomeObject> valueOperations;

@Test
void multiGet() {
    // Given
    SomeObject someObject = SomeObject
            .builder()
            .contentId("key1")
            .build();
    valueOperations.set("key1", someObject);

    // When
    List<SomeObject> someObjects = valueOperations.multiGet(Arrays.asList("key1", "nonexisting"));

    // Then
    assertEquals(2, someObjects.size());
    assertEquals(someObject, someObjects.get(0));
    assertNull(someObjects.get(1));
}
So, in Redis, this:
127.0.0.1:6379> MGET "\"key1\"" "\"nonexisting\""
1) "{\"#class\":\"some.package.SomeObject\",\"contentId\":\"key1\"}"
2) (nil)
will result in a List of {SomeObject, null}.

Difference between cacheNames and key in @Cacheable

I am new to caching and Spring. I can't work out the difference between cacheNames and key in the example below, taken from the Spring docs:
@Cacheable(cacheNames = "books", key = "#isbn")
public Book findBook(ISBN isbn, boolean checkWarehouse, boolean includeUsed)
As I understand it, a cache is simply a set of key-value pairs stored in memory. So in the above example, on the first invocation the returned Book is stored in the cache using the value of the isbn parameter as the key. On subsequent invocations with the same isbn value, the Book stored in the cache is returned, looked up by that key. So what is cacheNames?
Am I correct in saying the cache stores key-value pairs like this:
isbn111111 ---> Book,
isbn122222 ---> Book2,
isbn123333 ---> Book3
Thanks in advance.
A cache name identifies a group of cache keys. If you open the class
org.springframework.cache.interceptor.AbstractCacheResolver
you will find this method, which resolves caches by cacheName:
@Override
public Collection<? extends Cache> resolveCaches(CacheOperationInvocationContext<?> context) {
    Collection<String> cacheNames = getCacheNames(context);
    if (cacheNames == null) {
        return Collections.emptyList();
    }
    Collection<Cache> result = new ArrayList<>(cacheNames.size());
    for (String cacheName : cacheNames) {
        Cache cache = getCacheManager().getCache(cacheName);
        if (cache == null) {
            throw new IllegalArgumentException("Cannot find cache named '" +
                    cacheName + "' for " + context.getOperation());
        }
        result.add(cache);
    }
    return result;
}
Later, in org.springframework.cache.interceptor.CacheAspectSupport, Spring gets the value by cache key from that Cache object:
private Object execute(final CacheOperationInvoker invoker, Method method, CacheOperationContexts contexts) {
    // Special handling of synchronized invocation
    if (contexts.isSynchronized()) {
        CacheOperationContext context = contexts.get(CacheableOperation.class).iterator().next();
        if (isConditionPassing(context, CacheOperationExpressionEvaluator.NO_RESULT)) {
            Object key = generateKey(context, CacheOperationExpressionEvaluator.NO_RESULT);
            Cache cache = context.getCaches().iterator().next();
            try {
                return wrapCacheValue(method, cache.get(key, () -> unwrapReturnValue(invokeOperation(invoker))));
            }
            catch (Cache.ValueRetrievalException ex) {
                // The invoker wraps any Throwable in a ThrowableWrapper instance so we
                // can just make sure that one bubbles up the stack.
                throw (CacheOperationInvoker.ThrowableWrapper) ex.getCause();
            }
        }
    }
    //...other logic
The cacheNames are the names of the caches themselves, where the data is stored. You can have multiple caches, e.g. different caches for different entity types, or depending on replication needs, etc.
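For illustration, a hypothetical sketch with one cache per entity type (the service, the repositories, and the cache names are made up, not from the answer):

@Service
public class CatalogService {

    // Books and authors live in two separate named caches, so each can get
    // its own size limit, TTL, or replication settings from the CacheManager.
    // bookRepository and authorRepository are hypothetical collaborators.
    @Cacheable(cacheNames = "books", key = "#isbn")
    public Book findBook(String isbn) {
        return bookRepository.load(isbn); // invoked only on a cache miss
    }

    @Cacheable(cacheNames = "authors", key = "#authorId")
    public Author findAuthor(long authorId) {
        return authorRepository.load(authorId);
    }
}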
One practical significance of cacheNames shows up with default key generation for @Cacheable, which is used when no explicit key is passed to the method: the default key is built from the method parameters alone, so the cache name is what keeps entries from different methods apart. The Spring documentation is not very clear about what would go seriously wrong if cacheNames were not supplied at the class or method level when using Spring Cache.
https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/cache/annotation/CacheConfig.html#cacheNames--
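To make that concrete, a hypothetical sketch: with the default SimpleKeyGenerator, both calls below with the same argument produce the same generated key, and only the distinct cacheNames keep the entries apart.

// findBook(42L) and findAuthor(42L) both get the same generated key (the
// single 42L argument itself); the differing cacheNames separate their
// entries. Both methods are hypothetical.
@Cacheable(cacheNames = "books")
public Book findBook(long id) { /* ... */ }

@Cacheable(cacheNames = "authors")
public Author findAuthor(long id) { /* ... */ }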
