How to refresh the key and value in cache after they are expired in Guava (Spring)

So, I was looking at caching methods in Java (Spring), and Guava looked like it would fit the purpose.
This is the use case:
I query a remote service for some data, a kind of configuration field for my application. This field is used by every inbound request to my application, and it would be expensive to call the remote service every time, since it is essentially a constant that changes only periodically.
So, on the first inbound request to my application, when I call the remote service, I cache the value with an expiry time of 30 minutes. After 30 minutes, when the cache has expired and a request comes in to retrieve the key, I would like a callback or something similar that calls the remote service, repopulates the cache, and returns the value for that key.
How can I do this with a Guava cache?

Here is an example of how to use a Guava cache. Note that if you want the removal listener to fire for expired entries even while the cache is idle, you need to call cleanUp(); below, a background thread calls cleanUp() every 30 minutes.
import com.google.common.cache.*;
import org.springframework.stereotype.Component;

import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

@Component
public class Cache {

    public static LoadingCache<String, String> REQUIRED_CACHE;

    public Cache() {
        // Invoked when an entry is removed; check the cause to react to expiry.
        RemovalListener<String, String> REMOVAL_LISTENER = new RemovalListener<String, String>() {
            @Override
            public void onRemoval(RemovalNotification<String, String> notification) {
                if (notification.getCause() == RemovalCause.EXPIRED) {
                    // do as per your requirement
                }
            }
        };

        // Invoked on a cache miss; load the value here (e.g. from the remote service).
        CacheLoader<String, String> LOADER = new CacheLoader<String, String>() {
            @Override
            public String load(String key) throws Exception {
                // Note: a CacheLoader must not return null -- Guava throws
                // InvalidCacheLoadException if it does. Throw an exception
                // for missing keys instead.
                return null; // replace with your actual lookup
            }
        };

        REQUIRED_CACHE = CacheBuilder.newBuilder().maximumSize(100000000)
                .expireAfterWrite(30, TimeUnit.MINUTES)
                .removalListener(REMOVAL_LISTENER)
                .build(LOADER);

        // Guava performs maintenance lazily; call cleanUp() periodically so the
        // removal listener is invoked for expired entries.
        Executors.newSingleThreadExecutor().submit(() -> {
            while (true) {
                REQUIRED_CACHE.cleanUp();
                TimeUnit.MINUTES.sleep(30L);
            }
        });
    }
}
put & get data:
Cache.REQUIRED_CACHE.get("key");
Cache.REQUIRED_CACHE.put("key","value");
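As an alternative worth mentioning: if the goal is just to reload a stale value on access, Guava's refreshAfterWrite can replace the expiry-plus-listener approach. The first read after the interval triggers the loader again, while concurrent readers keep getting the old value until the reload finishes. A minimal sketch (the remote call is a placeholder you would fill in):

LoadingCache<String, String> cache = CacheBuilder.newBuilder()
        .refreshAfterWrite(30, TimeUnit.MINUTES)
        .build(new CacheLoader<String, String>() {
            @Override
            public String load(String key) throws Exception {
                // placeholder: call the remote configuration service here
                return callRemoteService(key); // hypothetical helper
            }
        });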

Related

(Redis, Spring Boot) Expired events are generated even though the TTL remains

I have been developing a program with Spring and Redis, but I am stuck.
What I want
When an HTTP request comes to the server, SETEX myKey with a value and TTL.
When myKey expires, an expired event comes to the server and I write a log.
Source code
@Configuration
public class RedisListenerContainerConfig {

    // 'log' assumes an SLF4J logger (e.g. Lombok @Slf4j)

    @Bean("redisMyListenerContainer")
    public RedisMessageListenerContainer redisMyListenerContainer(
            RedisConnectionFactory redisConnectionFactory,
            MyListener myListener) {
        // Enable keyspace notifications for expired events (K = keyspace events, x = expired).
        redisConnectionFactory.getConnection().setConfig("notify-keyspace-events", "Kx");
        RedisMessageListenerContainer container = new RedisMessageListenerContainer();
        container.setConnectionFactory(redisConnectionFactory);
        container.addMessageListener(myListener, new PatternTopic("__keyspace@0__:myKey*"));
        container.setErrorHandler(e -> log.error("error"));
        return container;
    }
}
// the expired event is received here
@Component
public class MyListener implements MessageListener {
    @Override
    public void onMessage(Message message, byte[] pattern) {
        // write the log
    }
}
@Service
public class MyService {
    ...
    // the SETEX happens here
    private void setExMyKey(String myKey) {
        String value = "value";
        redisTemplate.opsForValue().set(myKey, value, 60 * 10, TimeUnit.SECONDS);
        final String listKey = "list_" + myKey;
        if (redisTemplate.opsForList().size(listKey) < 1) {
            redisTemplate.opsForList().leftPush(listKey, value);
        }
    }
}
Redis info:
standalone
keys: 610000
expires: 600000
Problem
The HTTP requests keep coming to the server, so the TTL of myKey should never reach zero.
But after a while, expired events are generated many times even though the TTL still remains:
127.0.0.1:6379> ttl myKey_3378
(integer) 1783
The suspicious part
According to the Redis documentation (quoted below), I think my SETEX call does not change the value but only the TTL, and that could mean myKey is not really modified:
IMPORTANT all the commands generate events only if the target key is really modified.
For instance an SREM deleting a non-existing element from a Set will not actually change the value of the key, so no event will be generated.
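For reference, one way to see which notifications Redis actually emits for these keys is to subscribe to all keyspace events for them. A diagnostic sketch using the same Spring Data Redis API as above, assuming notify-keyspace-events is set to "KEA" so every event class is published:

// Diagnostic sketch: print every keyspace event published for myKey* keys.
RedisMessageListenerContainer container = new RedisMessageListenerContainer();
container.setConnectionFactory(redisConnectionFactory);
container.addMessageListener(
        (message, pattern) -> System.out.println(
                new String(message.getChannel()) + " -> " + new String(message.getBody())),
        new PatternTopic("__keyspace@0__:myKey*"));
container.afterPropertiesSet();
container.start();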
Could this be an issue with Redis locking?
English is not my first language, so I apologize in advance if I am hard to understand.
I hope to solve this problem soon.
Thank you so much.

Spring Batch: reading enormous data from a REST web service

I need to process data from a REST web service. A basic example is the following:
import org.springframework.batch.item.ItemReader;
import org.springframework.http.ResponseEntity;
import org.springframework.web.client.RestTemplate;

import java.util.Arrays;
import java.util.List;

class RESTDataReader implements ItemReader<DataDTO> {

    private final String apiUrl;
    private final RestTemplate restTemplate;
    private int nextDataIndex;
    private List<DataDTO> data;

    RESTDataReader(String apiUrl, RestTemplate restTemplate) {
        this.apiUrl = apiUrl;
        this.restTemplate = restTemplate;
        nextDataIndex = 0;
    }

    @Override
    public DataDTO read() throws Exception {
        if (dataIsNotInitialized()) {
            data = fetchDataFromAPI();
        }
        DataDTO nextData = null;
        if (nextDataIndex < data.size()) {
            nextData = data.get(nextDataIndex);
            nextDataIndex++;
        } else {
            // Returning null signals the end of the data to Spring Batch.
            nextDataIndex = 0;
            data = null;
        }
        return nextData;
    }

    private boolean dataIsNotInitialized() {
        return this.data == null;
    }

    private List<DataDTO> fetchDataFromAPI() {
        ResponseEntity<DataDTO[]> response = restTemplate.getForEntity(apiUrl, DataDTO[].class);
        DataDTO[] data = response.getBody();
        return Arrays.asList(data);
    }
}
However, my fetchDataFromAPI method is called with time slots, and it can return more than 20 million objects.
For example: if I call it between 01/01/2020 and 01/01/2021, I get 80 million records.
PS: the web service works by pagination of a single day, i.e. if I want to retrieve the data between 01/09/2020 and 07/09/2020, I have to call it several times (between 01/09 and 02/09, then between 02/09 and 03/09, and so on until 06/09 to 07/09).
My problem in this case is heap space memory when the data is bulky.
To avoid this, I had to create a step for each month in my BatchConfiguration (12 steps); the first step calls the web service between 01/01/2020 and 01/02/2020, and so on.
Is there a solution to read all this volume of data with only one step before going to the processor?
Thanks in advance
Since your web service does not provide pagination within a single day, you need to ensure that the process that calls it (i.e. your Spring Batch job) has enough memory to store all items returned for the requested slot.
For example: if I call it between 01/01/2020 and 01/01/2021, I get 80 million records.
This means that if you called this web service with curl on a machine that does not have enough memory to hold the result, the curl command would fail as well. The point I want to make here is that the only way to solve this issue is to give the JVM that runs your Spring Batch job enough memory (for example via -Xmx) to hold such a big result set.
As a side note: if you have control over this web service, I highly recommend improving it by introducing a more granular pagination mechanism.
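That said, since the service already paginates by day, a single step can still keep memory bounded with a reader that fetches one day at a time and hands items out one by one. A rough sketch, assuming a hypothetical endpoint that accepts from/to date parameters (adjust the URL template and parameters to the real service):

import java.time.LocalDate;
import java.util.Arrays;
import java.util.Collections;
import java.util.Iterator;

import org.springframework.batch.item.ItemReader;
import org.springframework.web.client.RestTemplate;

// Sketch: fetches one day per request, so at most one day's items are in memory.
class DayByDayRESTDataReader implements ItemReader<DataDTO> {

    private final RestTemplate restTemplate = new RestTemplate();
    private final String apiUrlTemplate; // e.g. "https://host/api/data?from={from}&to={to}" (hypothetical)
    private final LocalDate end;
    private LocalDate current;
    private Iterator<DataDTO> currentDay = Collections.emptyIterator();

    DayByDayRESTDataReader(String apiUrlTemplate, LocalDate start, LocalDate end) {
        this.apiUrlTemplate = apiUrlTemplate;
        this.current = start;
        this.end = end;
    }

    @Override
    public DataDTO read() {
        // Advance day by day until an item is available or the range is exhausted.
        while (!currentDay.hasNext() && current.isBefore(end)) {
            DataDTO[] day = restTemplate.getForObject(
                    apiUrlTemplate, DataDTO[].class, current, current.plusDays(1));
            currentDay = Arrays.asList(day == null ? new DataDTO[0] : day).iterator();
            current = current.plusDays(1);
        }
        return currentDay.hasNext() ? currentDay.next() : null; // null ends the step
    }
}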

Spring cache @Cacheable and @CachePut: if an exception is thrown inside a @CachePut method, get data from the cache

I have a Spring cache requirement:
I need to send a request to a server to get some data and store the result in the Spring cache. The same request can give me different results every time, so I decided to use @CachePut so that the method body always runs and the cache gets updated.
@CachePut(value = "mycache", key = "#url")
public String getData(String url) {
    try {
        // get the data from the server
        // update the cache
        // return the data
    } catch (Exception e) {
        // return data from the cache
    }
}
Now there is a twist: if the server is down and I am not able to get a response, I want the data from the cache (stored by previous requests).
If I use @Cacheable, I can't get the updated data. What is the clean way to do this? Something like catching an exception and returning the data from the cache.
You can get the cache implementation like this and handle the cache yourself:
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.CacheConfig;
import org.springframework.cache.annotation.CachePut;
import org.springframework.stereotype.Service;

@Service
@CacheConfig(cacheNames = "mycache") // refer to cache/ehcache-xxxx.xml
public class CacheService {

    @Autowired
    private CacheManager manager;

    @CachePut(key = "#url")
    public String getData(String url) {
        try {
            // call the server and return the fresh value;
            // @CachePut stores the returned value under the key
            return null; // replace with the real server call
        } catch (Exception e) {
            // fall back to the value cached by a previous successful call;
            // note: cache.get(url) is null if nothing was cached yet
            Cache cache = manager.getCache("mycache");
            return (String) cache.get(url).get();
        }
    }
}
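For completeness, this assumes caching is enabled and a CacheManager bean is present. A minimal configuration sketch using Spring's simple in-memory ConcurrentMapCacheManager (swap in your Ehcache-backed manager as needed):

import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.concurrent.ConcurrentMapCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableCaching
public class CacheConfig {

    @Bean
    public CacheManager cacheManager() {
        // Simple in-memory cache for illustration; no eviction or TTL.
        return new ConcurrentMapCacheManager("mycache");
    }
}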

Spring Boot with CXF Client Race Condition/Connection Timeout

I have a CXF client configured in my Spring Boot app like so:
@Bean
public ConsumerSupportService consumerSupportService() {
    JaxWsProxyFactoryBean jaxWsProxyFactoryBean = new JaxWsProxyFactoryBean();
    jaxWsProxyFactoryBean.setServiceClass(ConsumerSupportService.class);
    jaxWsProxyFactoryBean.setAddress("https://www.someservice.com/service?wsdl");
    jaxWsProxyFactoryBean.setBindingId(SOAPBinding.SOAP12HTTP_BINDING);

    WSAddressingFeature wsAddressingFeature = new WSAddressingFeature();
    wsAddressingFeature.setAddressingRequired(true);
    jaxWsProxyFactoryBean.getFeatures().add(wsAddressingFeature);

    ConsumerSupportService service = (ConsumerSupportService) jaxWsProxyFactoryBean.create();
    Client client = ClientProxy.getClient(service);

    AddressingProperties addressingProperties = new AddressingProperties();
    AttributedURIType to = new AttributedURIType();
    to.setValue(applicationProperties.getWex().getServices().getConsumersupport().getTo());
    addressingProperties.setTo(to);
    AttributedURIType action = new AttributedURIType();
    action.setValue("http://serviceaction/SearchConsumer");
    addressingProperties.setAction(action);
    client.getRequestContext().put("javax.xml.ws.addressing.context", addressingProperties);

    setClientTimeout(client);
    return service;
}

private void setClientTimeout(Client client) {
    HTTPConduit conduit = (HTTPConduit) client.getConduit();
    HTTPClientPolicy policy = new HTTPClientPolicy();
    policy.setConnectionTimeout(applicationProperties.getWex().getServices().getClient().getConnectionTimeout());
    policy.setReceiveTimeout(applicationProperties.getWex().getServices().getClient().getReceiveTimeout());
    conduit.setClient(policy);
}
This same service bean is accessed by two different threads in the same application sequence. If I execute this particular sequence 10 times in a row, I will get a connection timeout from the service call at least 3 times. What I'm seeing is:
Caused by: java.io.IOException: Timed out waiting for response to operation {http://theservice.com}SearchConsumer.
at org.apache.cxf.endpoint.ClientImpl.waitResponse(ClientImpl.java:685) ~[cxf-core-3.2.0.jar:3.2.0]
at org.apache.cxf.endpoint.ClientImpl.processResult(ClientImpl.java:608) ~[cxf-core-3.2.0.jar:3.2.0]
If I change the sequence such that one of the threads does not call this service, then the error goes away. So, it seems like there's some sort of a race condition happening here. If I look at the logs in our proxy manager for this service, I can see that both of the service calls do return a response very quickly, but the second service call seems to get stuck somewhere in the code and never actually lets go of the connection until the timeout value is reached. I've been trying to track down the cause of this for quite a while, but have been unsuccessful.
I've read some mixed opinions as to whether or not CXF client proxies are thread-safe, but I was under the impression that they were. If this is actually not the case, should I be creating a new client proxy for each invocation, or using a pool of proxies?
Turns out that it is an issue with the proxy not being thread-safe. What I wound up doing was leveraging a solution kind of like the one posted at the bottom of this post: Is this JAX-WS client call thread safe? - I created a pool for the proxies and I use it to access them from multiple threads in a thread-safe manner. This seems to work out pretty well.
public class JaxWSServiceProxyPool<T> extends GenericObjectPool<T> {

    JaxWSServiceProxyPool(Supplier<T> factory, GenericObjectPoolConfig poolConfig) {
        super(new BasePooledObjectFactory<T>() {
            @Override
            public T create() throws Exception {
                return factory.get();
            }

            @Override
            public PooledObject<T> wrap(T t) {
                return new DefaultPooledObject<>(t);
            }
        }, poolConfig != null ? poolConfig : new GenericObjectPoolConfig());
    }
}
I then created a simple "registry" class to keep references to various pools.
@Component
public class JaxWSServiceProxyPoolRegistry {

    private static final Map<Class, JaxWSServiceProxyPool> registry = new HashMap<>();

    public synchronized <T> void register(Class<T> serviceTypeClass, Supplier<T> factory, GenericObjectPoolConfig poolConfig) {
        Assert.notNull(serviceTypeClass);
        Assert.notNull(factory);
        if (!registry.containsKey(serviceTypeClass)) {
            registry.put(serviceTypeClass, new JaxWSServiceProxyPool<>(factory, poolConfig));
        }
    }

    public <T> void register(Class<T> serviceTypeClass, Supplier<T> factory) {
        register(serviceTypeClass, factory, null);
    }

    @SuppressWarnings("unchecked")
    public <T> JaxWSServiceProxyPool<T> getServiceProxyPool(Class<T> serviceTypeClass) {
        Assert.notNull(serviceTypeClass);
        return registry.get(serviceTypeClass);
    }
}
To use it, I did:
JaxWSServiceProxyPoolRegistry jaxWSServiceProxyPoolRegistry = new JaxWSServiceProxyPoolRegistry();
jaxWSServiceProxyPoolRegistry.register(ConsumerSupportService.class,
        this::buildConsumerSupportServiceClient,
        getConsumerSupportServicePoolConfig());
Where buildConsumerSupportServiceClient uses a JaxWsProxyFactoryBean to build up the client.
To retrieve an instance from the pool I inject my registry class and then do:
JaxWSServiceProxyPool<ConsumerSupportService> consumerSupportServiceJaxWSServiceProxyPool = jaxWSServiceProxyPoolRegistry.getServiceProxyPool(ConsumerSupportService.class);
And then borrow/return the object from/to the pool as necessary.
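A sketch of that borrow/return cycle with the Commons Pool 2 API (searchConsumer is a hypothetical operation on the service; the try/finally guarantees the proxy is returned even if the call throws):

ConsumerSupportService proxy = consumerSupportServiceJaxWSServiceProxyPool.borrowObject();
try {
    proxy.searchConsumer(request); // hypothetical service operation
} finally {
    consumerSupportServiceJaxWSServiceProxyPool.returnObject(proxy);
}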
This seems to work well so far. I've executed some fairly heavy load tests against it and it's held up.

Wicket cluster session store, page store, data store

I am dealing with a custom implementation of the Wicket session store, data store, and page store. I have to cluster Wicket and make it work in the following situation:
There are two nodes in the cluster; node one fails and the user should be able to continue the flow without noticing. The pages are stateful, with a lot of Ajax requests. For now I am storing the Wicket session in custom storage over RMI, and I am trying to extend DiskPageStore. The new challenge is the SessionEntry inner class: it is still held by a ConcurrentMap.
My question is: has anyone done this before? Do you have any suggestions on how to accomplish this?
My suggestion is to forget about DiskPageStore and SessionEntry in your situation. The ConcurrentMap you mentioned is held locally in the heap: once one of the nodes fails, there is no way to access its ConcurrentMap, and the Wicket resources referred to from it can never be released.
Therefore, in a clustered environment, you need to cluster the Wicket page store. Page versions can be expired based on a certain policy, or deliberately removed when their corresponding session expires.
I have enabled web session and data store clustering for Apache Wicket in an enterprise web application in production, and it has been working very well. The software I use:
JDK 1.8.0_60
Apache Tomcat 8.0.33 (Tomcat 7 works too)
Wicket 6.16 (versions 6.22.0 and 7.2.0 should also work)
Apache Ignite 1.7.0
Load balancer: Crossroads
Ubuntu 14.04.1
The idea is to use Apache Ignite for web session clustering, which is pretty straightforward if you follow its instructions for Web Session Clustering.
Once I got the web session clustered, I put the data store (which already includes the page store) into the Ignite distributed data grid, and at the same time I disabled the Wicket application-scoped cache (to make sure all data is clustered). Take a look at the documentation on Wicket's page store to find out how to configure the data store.
Alternatively, you should be able to use Wicket's HttpSessionDataStore to put the data store into the session; as the session is clustered, the data store is clustered automatically. But this approach did not work compatibly with Apache Ignite for me, so I use my own implementation of the IDataStore interface, which puts the data store into the Ignite distributed data grid. See the implementation below.
import java.util.concurrent.TimeUnit;

import javax.cache.expiry.Duration;
import javax.cache.expiry.TouchedExpiryPolicy;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMemoryMode;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.eviction.lru.LruEvictionPolicy;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.wicket.pageStore.IDataStore;
import org.apache.wicket.pageStore.memory.IDataStoreEvictionStrategy;
import org.apache.wicket.pageStore.memory.PageTable;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class IgniteDataStore implements IDataStore {

    private static final Logger log = LoggerFactory.getLogger(IgniteDataStore.class);

    private final IDataStoreEvictionStrategy evictionStrategy;
    private Ignite ignite;
    IgniteCache<String, PageTable> igniteCache;

    public IgniteDataStore(IDataStoreEvictionStrategy evictionStrategy) {
        this.evictionStrategy = evictionStrategy;

        CacheConfiguration<String, PageTable> cacheCfg = new CacheConfiguration<String, PageTable>("wicket-data-store");
        cacheCfg.setCacheMode(CacheMode.PARTITIONED);
        cacheCfg.setBackups(1);
        cacheCfg.setMemoryMode(CacheMemoryMode.OFFHEAP_VALUES);
        cacheCfg.setOffHeapMaxMemory(2 * 1024L * 1024L * 1024L); // 2 gigabytes.
        cacheCfg.setEvictionPolicy(new LruEvictionPolicy<String, PageTable>(10000));
        cacheCfg.setExpiryPolicyFactory(TouchedExpiryPolicy.factoryOf(new Duration(TimeUnit.SECONDS, 14400)));
        log.info("IgniteDataStore timeout is set to 14400 seconds.");

        ignite = Ignition.ignite();
        igniteCache = ignite.getOrCreateCache(cacheCfg);
    }

    @Override
    public synchronized byte[] getData(String sessionId, int id) {
        PageTable pageTable = getPageTable(sessionId, false);
        byte[] pageAsBytes = null;
        if (pageTable != null) {
            pageAsBytes = pageTable.getPage(id);
        }
        return pageAsBytes;
    }

    @Override
    public synchronized void removeData(String sessionId, int id) {
        PageTable pageTable = getPageTable(sessionId, false);
        if (pageTable != null) {
            pageTable.removePage(id);
        }
    }

    @Override
    public synchronized void removeData(String sessionId) {
        PageTable pageTable = getPageTable(sessionId, false);
        if (pageTable != null) {
            pageTable.clear();
        }
        igniteCache.remove(sessionId);
    }

    @Override
    public synchronized void storeData(String sessionId, int id, byte[] data) {
        PageTable pageTable = getPageTable(sessionId, true);
        if (pageTable != null) {
            pageTable.storePage(id, data);
            evictionStrategy.evict(pageTable);
            igniteCache.put(sessionId, pageTable);
        } else {
            log.error("Cannot store the data for page with id '{}' in session with id '{}'", id, sessionId);
        }
    }

    @Override
    public synchronized void destroy() {
        igniteCache.clear();
    }

    @Override
    public boolean isReplicated() {
        return true;
    }

    @Override
    public boolean canBeAsynchronous() {
        return false;
    }

    private PageTable getPageTable(String sessionId, boolean create) {
        if (igniteCache.containsKey(sessionId)) {
            return igniteCache.get(sessionId);
        }
        if (!create) {
            return null;
        }
        PageTable pageTable = new PageTable();
        igniteCache.put(sessionId, pageTable);
        return pageTable;
    }
}
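To wire a custom IDataStore like this into Wicket, one approach is to override DefaultPageManagerProvider in the application class. A sketch against the Wicket 6.x API (PageNumberEvictionStrategy and the page count are placeholder choices; HomePage is a hypothetical page class):

import org.apache.wicket.DefaultPageManagerProvider;
import org.apache.wicket.pageStore.IDataStore;
import org.apache.wicket.pageStore.memory.PageNumberEvictionStrategy;
import org.apache.wicket.protocol.http.WebApplication;

public class MyApplication extends WebApplication {

    @Override
    public Class<? extends org.apache.wicket.Page> getHomePage() {
        return HomePage.class; // hypothetical home page
    }

    @Override
    public void init() {
        super.init();
        setPageManagerProvider(new DefaultPageManagerProvider(this) {
            @Override
            protected IDataStore newDataStore() {
                // Keep at most 100 pages per page table; tune as needed.
                return new IgniteDataStore(new PageNumberEvictionStrategy(100));
            }
        });
    }
}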
Hope it helps.
