When an object was added to Caffeine cache - spring

I'm using Spring Boot caching in a project and am using Caffeine. I've added some default Caffeine configuration to the project, and I can get the most recent object from the cache using the following code:
private final CaffeineCache caffeineCache = (CaffeineCache) caffeineCacheManager.getCache("myCacheName");
Cache<Object, Object> cache = this.caffeineCache.getNativeCache();
cache.policy().eviction().get().hottest(1);
I don't actually want the object itself, but I'd want to know when it was added to the cache. Is there a way to find out when this object was added to the cache?

Thanks to Ben Manes's comment, I found a solution to my problem. I calculated the cache time using the entry's age under the expiration policy:
CacheResponse cacheResponse = new CacheResponse();
Cache<Object, Object> cache = this.caffeineCache.getNativeCache();
Iterator<Object> hottestKeys = cache.policy().eviction().get().hottest(1).keySet().iterator();
if (hottestKeys.hasNext()) {
    // Age of the entry, i.e. time in milliseconds since the object was written to the cache
    OptionalLong age = cache.policy().expireAfterWrite().get()
            .ageOf(hottestKeys.next(), TimeUnit.MILLISECONDS);
    LOGGER.info("Calculating last UTC cached date");
    // Subtract the age from "now" to recover the UTC time the entry was cached
    cacheResponse.setTime(Instant.now().minusMillis(age.getAsLong())
            .atZone(ZoneOffset.UTC).toLocalDateTime());
} else {
    LOGGER.info("Cache appears to be empty.");
}
cacheResponse.setNumCachedItems(cache.estimatedSize());
return cacheResponse;
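Note that cache.policy().expireAfterWrite() is only present when the cache was built with expireAfterWrite, and policy().eviction() requires a maximum size. A minimal sketch of a matching cache manager setup (the cache name and durations here are illustrative, not from the original question):
@Bean
public CaffeineCacheManager caffeineCacheManager() {
    // com.github.benmanes.caffeine.cache.Caffeine / org.springframework.cache.caffeine.CaffeineCacheManager
    CaffeineCacheManager cacheManager = new CaffeineCacheManager("myCacheName");
    cacheManager.setCaffeine(Caffeine.newBuilder()
            // required for policy().expireAfterWrite() / ageOf() to be available
            .expireAfterWrite(1, TimeUnit.HOURS)
            // required for policy().eviction() / hottest() to be available
            .maximumSize(1_000));
    return cacheManager;
}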

Related

Hazelcast TTL (Time to Live) is not working

In a Spring Boot environment, I have the following Hazelcast configuration:
@Bean
public Config hazelCastConfig() {
    final Config config = new Config().setInstanceName("hazelcast-cache")
            .addMapConfig(new MapConfig().setName("hazelcast-cache")
                    .setMaxSizeConfig(
                            new MaxSizeConfig(200, MaxSizeConfig.MaxSizePolicy.FREE_HEAP_SIZE))
                    .setEvictionPolicy(EvictionPolicy.LRU).setTimeToLiveSeconds(5))
            .setClassLoader(Thread.currentThread().getContextClassLoader());
    final UserCodeDeploymentConfig distCLConfig = config.getUserCodeDeploymentConfig();
    distCLConfig.setEnabled(true)
            .setClassCacheMode(UserCodeDeploymentConfig.ClassCacheMode.ETERNAL)
            .setProviderMode(UserCodeDeploymentConfig.ProviderMode.LOCAL_CLASSES_ONLY);
    return config;
}
This is how we use @Cacheable in our code:
@Cacheable(value = "presetCategoryMaster", key = "{#storeCode, #validDisplayFlag}")
public List<PresetCategoryMasterEntity> getPresetMasterCategoryForStoreCdAndValdiDisplayFlag(
        final Integer storeCode, final Short validDisplayFlag) {
    ----------------
    ----------------
}
But the TTL is never honored; we confirmed this in the trace logs as well. After the first call, once the cache entries are created, they never get evicted unless you explicitly call @CacheEvict or @CachePut. Although we set the TTL value to 5 seconds, the cache is not cleared even after an hour.
Any help is appreciated.
The cache name is presetCategoryMaster:
@Cacheable(value = "presetCategoryMaster"
but the configuration uses hazelcast-cache, so the names don't match:
new MapConfig().setName("hazelcast-cache")
You can use the Management Center or a getDistributedObjects() call to find out what has actually been created, and watch for the entries expiring.
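As a minimal sketch of the fix implied above (not from the original answer): name the map config after the cache used by @Cacheable, so the TTL applies to it.
@Bean
public Config hazelCastConfig() {
    return new Config().setInstanceName("hazelcast-cache")
            // The map name must match the @Cacheable cache name; otherwise Hazelcast
            // creates "presetCategoryMaster" on first use with default settings (no TTL).
            .addMapConfig(new MapConfig().setName("presetCategoryMaster")
                    .setEvictionPolicy(EvictionPolicy.LRU)
                    .setTimeToLiveSeconds(5));
}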

Update builder gives a late response when there are multiple versions in Elasticsearch?

Project: Spring Boot
I'm updating my Elasticsearch document in the following way:
@Override
public Document update(DocumentDTO document) {
    try {
        Document doc = documentMapper.documentDTOToDocument(document);
        Optional<Document> fetchDocument = documentRepository.findById(document.getId());
        if (fetchDocument.isPresent()) {
            fetchDocument.get().setTag(doc.getTag());
            Document result = documentRepository.save(fetchDocument.get());
            final UpdateRequest updateRequest = new UpdateRequest(Constants.INDEX_NAME, Constants.INDEX_TYPE, document.getId().toString());
            updateRequest.setRefreshPolicy(WriteRequest.RefreshPolicy.WAIT_UNTIL);
            updateRequest.doc(jsonBuilder().startObject().field("tag", doc.getTag()).endObject());
            UpdateResponse updateResponse = client.update(updateRequest, RequestOptions.DEFAULT);
            log.info("ES result : " + updateResponse.status());
            return result;
        }
    } catch (Exception ex) {
        log.info(ex.getMessage());
    }
    return null;
}
Using this, my document updates successfully and the version increments, but once the version goes past 20 or so, retrieving the data takes a long time (around 14 seconds).
I'm still confused about how versioning works. How does it behave in the update and delete scenarios? Does a search process all the document versions and return the latest one?
Elasticsearch internally uses Lucene, which stores data in immutable segments. Because these segments are immutable, every update in Elasticsearch internally marks the old document as deleted (a soft delete) and inserts a new document (with a new version).
The old documents are cleaned up later during a background segment-merge process.
A newly updated document should be available within 1 second (the default refresh interval), but that setting can be disabled or changed, so please check it on your index. I can see you are using the wait_until refresh policy in your code; remove it and you should see the updated document quickly, provided you have not changed the default refresh_interval.
Note: update and delete work similarly here. The only difference is that a delete does not create a new document; the old document is soft-deleted and removed permanently later during a segment merge.
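For example, with the same high-level REST client as in the question, the update can fall back to the default refresh behaviour instead of blocking (a sketch; Constants, doc, and client come from the question's code):
final UpdateRequest updateRequest =
        new UpdateRequest(Constants.INDEX_NAME, Constants.INDEX_TYPE, document.getId().toString());
updateRequest.doc(jsonBuilder().startObject().field("tag", doc.getTag()).endObject());
// Default policy (NONE): return immediately; the change becomes searchable
// after the index's refresh_interval elapses (1s by default).
updateRequest.setRefreshPolicy(WriteRequest.RefreshPolicy.NONE);
UpdateResponse updateResponse = client.update(updateRequest, RequestOptions.DEFAULT);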

How to have WRO answer with a http 304 not modified?

We are serving JavaScript resources (and others) via wro in our webapp.
In the PROD environment, the browser gets (for example) the Angular webapp's app.js content with an Expires header one year in the future,
meaning that for subsequent requests the browser takes it from cache without making a request to the server.
If we deploy a new version of the webapp, the browser does not get the new version, as it takes it from the local cache.
The goal is to configure wro and/or Spring so that the headers are set to make the browser perform the request each time, and the server return a 304 Not Modified when appropriate. That way clients would automatically be "updated" upon a new deployment.
Has someone already achieved this?
We use Spring's Java Configuration:
@Configuration
public class Wro4jConfiguration {

    @Value("${app.webapp.web.minimize}")
    private String minimize;

    @Value("${app.webapp.web.disableCache}")
    private String disableCache;

    @Autowired
    private Environment env;

    @Bean(name = "wroFilter")
    public WroFilter wroFilter() {
        ConfigurableWroFilter filter = new ConfigurableWroFilter();
        filter.setWroManagerFactory(new Wro4jManagerFactory());
        filter.setWroConfigurationFactory(createProperties());
        return filter;
    }

    private PropertyWroConfigurationFactory createProperties() {
        Properties props = new Properties();
        props.setProperty("jmxEnabled", "false");
        props.setProperty("debug", String.valueOf(!env.acceptsProfiles(EnvConstants.PROD)));
        props.setProperty("gzipResources", "false");
        props.setProperty("ignoreMissingResources", "true");
        props.setProperty("minimizeEnabled", minimize);
        props.setProperty("resourceWatcherUpdatePeriod", "0");
        props.setProperty("modelUpdatePeriod", "0");
        props.setProperty("cacheGzippedContent", "false");
        // let's see if server-side cache is disabled (DEV only)
        if (Boolean.valueOf(disableCache)) {
            props.setProperty("resourceWatcherUpdatePeriod", "1");
            props.setProperty("modelUpdatePeriod", "5");
        }
        return new PropertyWroConfigurationFactory(props);
    }
}
By default, WroFilter sets the following headers: ETag (MD5 checksum of the resource), Cache-Control (public, max-age=315360000), and Expires (one year after resource creation).
There are plenty of details about the significance of those headers. The short explanation is this:
When the server reads the ETag from the client request, the server can determine whether to send the file (HTTP 200) or tell the client to just use their local copy (HTTP 304). An ETag is basically just a checksum for a file that semantically changes when the content of the file changes. If only ETag is sent, the client will always have to make a request.
The Expires and Cache-Control headers are very similar and are used by the client (and proxies/caches) to determine whether or not it even needs to make a request to the server at all.
So really what you want to do is use BOTH headers - set the Expires header to a reasonable value based on how often the content changes. Then configure ETags to be sent so that when clients DO send a request to the server, it can more easily determine whether or not to send the file back.
If you want the client to always check for the latest resource version, you should not send the Expires and Cache-Control headers.
Alternatively, there is a more aggressive caching technique: encode the checksum of the resource into its path. As a result, every time a resource changes, the path to that resource changes too. This approach guarantees that the client always requests the most recent version. With this approach, in theory the resources should never expire, since the checksum changes every time a resource changes.
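As an illustration of the revalidation variant (a sketch, not from the original answer): keep the ETag that WroFilter already emits, but override the freshness headers so the browser must revalidate on every request. Cache-Control: no-cache allows caching but forces a conditional request, which the server can answer with a 304.
@Bean(name = "wroFilter")
public WroFilter wroFilter() {
    ConfigurableWroFilter filter = new ConfigurableWroFilter() {
        @Override
        protected void setResponseHeaders(final HttpServletResponse response) {
            super.setResponseHeaders(response); // keeps the ETag, Date, etc.
            // Overwrite the aggressive defaults: cache, but revalidate every time.
            response.setHeader("Cache-Control", "no-cache");
            response.setDateHeader("Expires", 0L); // already expired
        }
    };
    filter.setWroManagerFactory(new Wro4jManagerFactory());
    filter.setWroConfigurationFactory(createProperties());
    return filter;
}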
Based on Alex's information and documentation reference, I ended up overriding WroFilter.setResponseHeaders to put appropriate expire values.
This is working fine. Wro already takes care of setting ETag, Date and others, so I only overwrite the expiration delay and date.
@Configuration
public class Wro4jConfiguration {

    @Value("${app.webapp.web.browserCache.maxAgeInHours}")
    private String maxAgeInHours;

    @Bean(name = "wroFilter")
    public WroFilter wroFilter() {
        ConfigurableWroFilter filter = createFilter();
        filter.setWroManagerFactory(new Wro4jManagerFactory());
        filter.setWroConfigurationFactory(createProperties());
        return filter;
    }

    private ConfigurableWroFilter createFilter() {
        return new ConfigurableWroFilter() {
            private final int BROWSER_CACHE_HOURS = Integer.parseInt(maxAgeInHours);
            private final int BROWSER_CACHE_SECONDS = BROWSER_CACHE_HOURS * 60 * 60;

            @Override
            protected void setResponseHeaders(final HttpServletResponse response) {
                super.setResponseHeaders(response);
                if (!getConfiguration().isDebug()) {
                    ZonedDateTime cacheExpires = ZonedDateTime.of(LocalDateTime.now(), ZoneId.of("GMT")).plusHours(BROWSER_CACHE_HOURS);
                    String cacheExpiresStr = cacheExpires.format(DateTimeFormatter.RFC_1123_DATE_TIME);
                    response.setHeader(HttpHeader.EXPIRES.toString(), cacheExpiresStr);
                    response.setHeader(HttpHeader.CACHE_CONTROL.toString(), "public, max-age=" + BROWSER_CACHE_SECONDS);
                }
            }
        };
    }

    // Other config methods
}

Ehcache: refresh cache not periodically but conditionally

I am using Ehcache with a Spring architecture.
Right now, I am refreshing the cache from the database at a FIXED interval of every 15 minutes.
@Cacheable(cacheName = "fpodcache", refreshInterval = 60000, decoratedCacheType = DecoratedCacheType.REFRESHING_SELF_POPULATING_CACHE)
public List<Account> getAccount(String key) {
    // Running a database query to fetch the data.
}
Instead of time-based cache refresh, I want CONDITION-BASED cache refresh. There are two reasons behind this:
1. the database doesn't update very frequently (about 15 times a day, but NOT at a fixed interval), and 2. the data fetched and cached is huge.
So I decided to maintain two variables: a version in the database (version_db) and a version in the cache (version_cache). I wish to add a condition so that the cache refreshes only if (version_db > version_cache), and doesn't otherwise. Something like:
@Cacheable(cacheName = "fpodcache", conditionforrefresh = version_db > version_cache, decoratedCacheType = DecoratedCacheType.REFRESHING_SELF_POPULATING_CACHE)
public List<Account> getAccount(String key) {
    // Running a database query to fetch the data.
}
What is the right syntax for conditionforrefresh = version_db > version_cache in the above code?
How do I achieve this?
You can add a check at the beginning of your refresh logic to make the decision you need:
if true, load from the DB; otherwise, use the existing loaded data.
@Cacheable(cacheName = "fpodcache", refreshInterval = 60000, decoratedCacheType = DecoratedCacheType.REFRESHING_SELF_POPULATING_CACHE)
public List<Account> getAccount(String key) {
    // Fetch version_db
    // Fetch version_cache
    // Check if version_db > version_cache
    // If true --> run a database query to fetch the data.
    // Else    --> return the existing data.
}
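Spelled out, that suggestion might look like the sketch below. Here versionDao, accountDao, and the two fields remembering the last loaded state are hypothetical stand-ins for however you track version_db and version_cache:
@Cacheable(cacheName = "fpodcache", refreshInterval = 60000, decoratedCacheType = DecoratedCacheType.REFRESHING_SELF_POPULATING_CACHE)
public List<Account> getAccount(String key) {
    long versionDb = versionDao.currentVersion();        // hypothetical: version_db from the database
    if (versionDb <= lastLoadedVersion) {
        return lastLoadedAccounts;                       // condition false: reuse previously loaded data
    }
    List<Account> accounts = accountDao.findByKey(key);  // hypothetical: the expensive query
    lastLoadedVersion = versionDb;                       // becomes the new version_cache
    lastLoadedAccounts = accounts;
    return accounts;
}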

Setting Time To Live (TTL) from Java - sample requested

EDIT:
This is basically what I want to do, only in Java
Using Elasticsearch, we add documents to an index by passing IndexRequest items to a BulkRequestBuilder.
I would like the documents to be dropped from the index after some time has passed (time to live / TTL).
This can be done either by setting a default for the index or on a per-document basis. Either approach is fine by me.
The code below is an attempt to do it per document. It does not work. I think it's because TTL is not enabled for the index. Either show me what Java code I need to add to enable TTL so the code below works, or show me different code that enables TTL and sets a default TTL value for the index in Java. I know how to do it from the REST API, but I need to do it from Java code, if at all possible.
logger.debug("Indexing record ({}): {}", id, map);
final IndexRequest indexRequest = new IndexRequest(_indexName, _documentType, id);
final long debug = indexRequest.ttl();
if (_ttl > 0) {
indexRequest.ttl(_ttl);
System.out.println("Setting TTL to " + _ttl);
System.out.println("IndexRequest now has ttl of " + indexRequest.ttl());
}
indexRequest.source(map);
indexRequest.operationThreaded(false);
bulkRequestBuilder.add(indexRequest);
}
// execute and block until done.
BulkResponse response;
try {
response = bulkRequestBuilder.execute().actionGet();
Later I check in my unit test by polling this method, but the document count never goes down.
public long getDocumentCount() throws Exception {
    Client client = getClient();
    try {
        client.admin().indices().refresh(new RefreshRequest(INDEX_NAME)).actionGet();
        ActionFuture<CountResponse> response = client.count(new CountRequest(INDEX_NAME).types(DOCUMENT_TYPE));
        CountResponse countResponse = response.get();
        return countResponse.getCount();
    } finally {
        client.close();
    }
}
After a LONG day of googling and writing test programs, I came up with a working example of how to use TTL and basic index/object creation from the Java API. Frankly, most of the examples in the docs are trivial, and some Javadoc and end-to-end examples would go a LONG way to help those of us who are using the non-REST interfaces.
Ah well.
Code here: Adding mapping to a type from Java - how do I do it?
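For reference, a heavily hedged sketch of the kind of mapping change involved. This assumes a legacy Elasticsearch version (1.x/2.x, where the _ttl field still exists; it was removed in 5.0): the _ttl field must be enabled in the type's mapping before per-document TTL values are honored.
// Sketch only: enable _ttl (with an illustrative default) on the type's mapping
// via the admin API; otherwise indexRequest.ttl(...) is silently ignored.
client.admin().indices()
        .preparePutMapping(INDEX_NAME)
        .setType(DOCUMENT_TYPE)
        .setSource(jsonBuilder()
                .startObject()
                    .startObject(DOCUMENT_TYPE)
                        .startObject("_ttl")
                            .field("enabled", true) // turn TTL support on
                            .field("default", "1d") // illustrative default: expire after one day
                        .endObject()
                    .endObject()
                .endObject())
        .get();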
