In a Spring Boot environment, I have the following Hazelcast configuration:
@Bean
public Config hazelCastConfig() {
    final Config config = new Config().setInstanceName("hazelcast-cache")
            .addMapConfig(new MapConfig().setName("hazelcast-cache")
                    .setMaxSizeConfig(
                            new MaxSizeConfig(200, MaxSizeConfig.MaxSizePolicy.FREE_HEAP_SIZE))
                    .setEvictionPolicy(EvictionPolicy.LRU).setTimeToLiveSeconds(5))
            .setClassLoader(Thread.currentThread().getContextClassLoader());

    final UserCodeDeploymentConfig distCLConfig = config.getUserCodeDeploymentConfig();
    distCLConfig.setEnabled(true)
            .setClassCacheMode(UserCodeDeploymentConfig.ClassCacheMode.ETERNAL)
            .setProviderMode(UserCodeDeploymentConfig.ProviderMode.LOCAL_CLASSES_ONLY);

    return config;
}
This is how we use @Cacheable in our code:
@Cacheable(value = "presetCategoryMaster", key = "{#storeCode, #validDisplayFlag}")
public List<PresetCategoryMasterEntity> getPresetMasterCategoryForStoreCdAndValdiDisplayFlag(
        final Integer storeCode, final Short validDisplayFlag) {
----------------
----------------
}
But the TTL is never honored; we confirmed this in the trace logs as well. After the first call, once the cache entries are created, they are never evicted unless we explicitly call @CacheEvict or @CachePut. Although the TTL is set to 5 seconds, the cache is not cleared even after an hour.
Any help is appreciated.
The cache name is presetCategoryMaster
@Cacheable(value = "presetCategoryMaster"
The configuration uses hazelcast-cache, so the names don't match.
new MapConfig().setName("hazelcast-cache")
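A minimal sketch of the fix, keeping the rest of the bean unchanged: rename the map configuration to match the cache name used in @Cacheable (Hazelcast map config names may also use wildcard patterns, e.g. "preset*"):

// The MapConfig name must match the Spring cache name for the TTL/eviction
// settings to apply to that cache's backing IMap.
config.addMapConfig(new MapConfig()
        .setName("presetCategoryMaster")   // was "hazelcast-cache"
        .setMaxSizeConfig(new MaxSizeConfig(200, MaxSizeConfig.MaxSizePolicy.FREE_HEAP_SIZE))
        .setEvictionPolicy(EvictionPolicy.LRU)
        .setTimeToLiveSeconds(5));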
You can use the Management Center or a getDistributedObjects() call to find out what is actually created, and watch the entries expire.
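For example, a quick sketch of checking this from code, assuming a HazelcastInstance bean is available to inject (Spring Boot's Hazelcast auto-configuration provides one):

// List the distributed objects Hazelcast actually created, to confirm which
// IMap backs the "presetCategoryMaster" cache.
@Autowired
private HazelcastInstance hazelcastInstance;

public void dumpDistributedObjects() {
    for (DistributedObject object : hazelcastInstance.getDistributedObjects()) {
        System.out.println(object.getServiceName() + " : " + object.getName());
    }
}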
Related
I'm using Spring Boot caching in a project with Caffeine as the provider. I've added some default configuration for Caffeine to the project, and I can get the most recent object from the cache using the following code:
private final CaffeineCache caffeineCache = (CaffeineCache) caffeineCacheManager.getCache("myCacheName");
Cache<Object, Object> cache = this.caffeineCache.getNativeCache();
cache.policy().eviction().get().hottest(1);
I don't actually want the object itself; I want to know when it was added to the cache. Is there a way to find that out?
Thanks to Ben Manes' comment, I found a solution to my problem. I calculated the time the entry was cached using its age under the expiration policy:
CacheResponse cacheResponse = new CacheResponse();
Cache<Object, Object> cache = this.caffeineCache.getNativeCache();
if (cache.policy().eviction().get().hottest(1).keySet().iterator().hasNext()) {
    // Time in milliseconds since the object was cached
    OptionalLong time = cache.policy().expireAfterWrite().get()
            .ageOf(cache.policy().eviction().get().hottest(1).keySet().iterator().next(), TimeUnit.MILLISECONDS);
    LOGGER.info("Calculating last UTC cached date");
    // Calculate the last cached time and set it on the cache response
    cacheResponse.setTime(Instant.now().minusMillis(time.getAsLong()).atZone(ZoneOffset.UTC).toLocalDateTime());
} else {
    LOGGER.info("Cache appears to be empty.");
}
cacheResponse.setNumCachedItems(cache.estimatedSize());
return cacheResponse;
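For reference, the same calculation can be wrapped in a small helper for an arbitrary key. This is only a sketch, assuming the cache was built with expireAfterWrite (the helper name cachedAtUtc is made up):

// Returns the UTC time the given key was written, or empty if the key is not
// cached or the cache was not built with expireAfterWrite.
private Optional<LocalDateTime> cachedAtUtc(Cache<Object, Object> cache, Object key) {
    if (!cache.policy().expireAfterWrite().isPresent()) {
        return Optional.empty();   // cache not configured with expireAfterWrite
    }
    OptionalLong ageMs = cache.policy().expireAfterWrite().get().ageOf(key, TimeUnit.MILLISECONDS);
    if (!ageMs.isPresent()) {
        return Optional.empty();   // key not present in the cache
    }
    return Optional.of(Instant.now().minusMillis(ageMs.getAsLong())
            .atZone(ZoneOffset.UTC).toLocalDateTime());
}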
I am using Spring + Hazelcast 3.8.2 and have configured a map like this using the Spring configuration:
<hz:map name="test.*" backup-count="1"
max-size="0" eviction-percentage="30" read-backup-data="true"
time-to-live-seconds="900"
eviction-policy="NONE" merge-policy="com.hazelcast.map.merge.PassThroughMergePolicy">
<hz:near-cache max-idle-seconds="300"
time-to-live-seconds="0"
max-size="0" />
</hz:map>
I've got two clients connected (both on the same machine [test env], using different ports).
When I change a value in the map on one client, the other client still sees the old value until it gets evicted from the near cache once the idle time has expired.
I found a similar issue like this here: Hazelcast near-cache eviction doesn't work
But I'm unsure whether this is really the same issue; at least it is mentioned there that this was a bug in version 3.7, and we are using 3.8.2.
Is this correct behaviour, or am I doing something wrong? I know there is a property invalidate-on-change, but it defaults to true, so I don't expect to have to set it.
I also tried setting read-backup-data to false, but that doesn't help.
Thanks for your support
Christian
I found the solution myself.
The issue is that Hazelcast sends invalidations in batches by default, so it waits a few seconds before the invalidations are sent out to all other nodes.
You can find more information about this here: http://docs.hazelcast.org/docs/3.8/manual/html-single/index.html#near-cache-invalidation
So I had to set the property hazelcast.map.invalidation.batch.enabled to false, which sends invalidations to all nodes immediately. But, as mentioned in the documentation, this should only be used when not too many put/remove/... operations are expected, because it makes the event system very busy.
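For completeness, the same switch can also be set programmatically on the member configuration instead of via a JVM system property. A small sketch; the batch-frequency property shown as an alternative is a related knob from the near-cache invalidation documentation and is included here as an assumption:

Config config = new Config();
// send near-cache invalidations immediately instead of batching them
config.setProperty("hazelcast.map.invalidation.batch.enabled", "false");
// alternative: keep batching but flush the batch more often (value in seconds)
// config.setProperty("hazelcast.map.invalidation.batchfrequency.seconds", "1");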
Nevertheless, even with this property set, there is no guarantee that all nodes invalidate their near-cache entries right away. When accessing the values on the other node directly after an update, sometimes they were up to date and sometimes not.
Here is the JUnit test I built up for this:
@Test
public void testWithInvalidationBatchEnabled() throws Exception {
    System.setProperty("hazelcast.map.invalidation.batch.enabled", "true");
    doTest();
}

@Test
public void testWithoutInvalidationBatchEnabled() throws Exception {
    System.setProperty("hazelcast.map.invalidation.batch.enabled", "false");
    doTest();
}

@After
public void shutdownNodes() {
    Hazelcast.shutdownAll();
}
protected void doTest() throws Exception {
    // first config for normal cluster member
    Config c1 = new Config();
    c1.getNetworkConfig().setPort(5709);

    // second config for super client
    Config c2 = new Config();
    c2.getNetworkConfig().setPort(5710);

    // map config is the same for both nodes
    MapConfig testMapCfg = new MapConfig("test");
    NearCacheConfig ncc = new NearCacheConfig();
    ncc.setTimeToLiveSeconds(10);
    testMapCfg.setNearCacheConfig(ncc);
    c1.addMapConfig(testMapCfg);
    c2.addMapConfig(testMapCfg);

    // start instances
    HazelcastInstance h1 = Hazelcast.newHazelcastInstance(c1);
    HazelcastInstance h2 = Hazelcast.newHazelcastInstance(c2);

    IMap<Object, Object> mapH1 = h1.getMap("test");
    IMap<Object, Object> mapH2 = h2.getMap("test");

    // initial filling
    mapH1.put("a", -1);
    assertEquals(mapH1.get("a"), -1);
    assertEquals(mapH2.get("a"), -1);

    int updatedH1 = 0, updatedH2 = 0, runs = 0;
    for (int i = 0; i < 5; i++) {
        mapH1.put("a", i);
        // without this short sleep sometimes the near cache is updated in time, sometimes not
        Thread.sleep(100);
        runs++;
        if (mapH1.get("a").equals(i)) {
            updatedH1++;
        }
        if (mapH2.get("a").equals(i)) {
            updatedH2++;
        }
    }
    assertEquals(runs, updatedH1);
    assertEquals(runs, updatedH2);
}
testWithInvalidationBatchEnabled only finishes successfully some of the time; testWithoutInvalidationBatchEnabled always finishes successfully.
During a stability test of our Apache Ignite cluster we ran into a memory-related problem, where the used heap space increased to 100% and did not go down as we expected. This is what we did:
Created a cache with the eviction policy set to FifoEvictionPolicy (max: 10000, batchSize: 100)
Ran 20 simultaneous threads that executed the following scenario over and over again for a couple of hours:
Add a unique entry to the cache and then fetch the value to verify that it was added.
This scenario created about 2.3 million entries during the test.
Our expectation was that, due to our quite restrictive eviction policy of at most 10000 entries, the memory usage would stabilize. However, the memory just kept rising until it reached the max heap size. See the attached memory graph.
Our question is:
Why is the memory used by the entries still allocated, even though eviction is done?
One thing to add: we also executed the same test but deleted each entry right after adding it. The memory was then stable.
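For reference, a minimal sketch of that put-then-remove variant (the cache, key, and value here are illustrative, not the exact load-test code):

// Add an entry, verify it is readable, then remove it again.
// With this explicit removal the heap stayed stable in our test.
void addVerifyRemove(IgniteCache<String, String> cache, String key, String value) {
    cache.put(key, value);
    if (!value.equals(cache.get(key))) {
        throw new IllegalStateException("entry was not added: " + key);
    }
    cache.remove(key);
}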
Update with testcase and comment.
Below you will find a simple JUnit test to demonstrate the memory leak. @a_gura seems to be correct: if we disable the ExpiryPolicy, things work as expected. But if we enable the ExpiryPolicy, the heap seems to fill up within the ExpiryPolicy duration. Test case:
public class IgniteTest {

    // slf4j logger added so the log.trace/log.debug calls below compile
    private static final Logger log = LoggerFactory.getLogger(IgniteTest.class);

    String cacheName = "my_cache";

    @Test
    public void test() throws InterruptedException {
        IgniteConfiguration configuration = new IgniteConfiguration();
        Ignite ignite = Ignition.start(configuration);

        // create a large string to use as the test value
        StringBuilder testValue = new StringBuilder();
        for (int i = 0; i < 10 * 1024; i++) {
            testValue.append("a");
        }

        CacheConfiguration cacheCfg = new CacheConfiguration();
        cacheCfg.setName(cacheName);
        cacheCfg.setEvictionPolicy(new FifoEvictionPolicy<>(10_000, 100));
        Duration duration = new Duration(TimeUnit.HOURS, 12);
        cacheCfg.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(duration));
        cacheCfg.setCacheMode(CacheMode.LOCAL);
        cacheCfg.setBackups(0);
        Cache<Object, Object> cache = ignite.getOrCreateCache(cacheCfg);

        String lastKey = "";
        for (int i = 0; i < 10_000_101; i++) {
            String key = "key#" + i;
            String value = testValue + "value#" + i;
            log.trace("storing {} {}", key, value);
            if (i % 1_000 == 0) {
                log.debug("storing {}", key);
            }
            cache.put(key, value);
            lastKey = key;
            Thread.sleep(1);
        }

        String verifyKey = "key#1";
        Assert.assertThat("first key should be evicted", cache.containsKey(verifyKey), CoreMatchers.is(false));
        Assert.assertThat("last key should NOT be evicted", cache.containsKey(lastKey), CoreMatchers.is(true));

        ignite.destroyCache(cacheName);
    }
}
This has been fixed in Ignite 1.8: https://issues.apache.org/jira/browse/IGNITE-3948.
Credit to @a_gura, who filed the bug, and to the development team.
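For anyone stuck on a version before 1.8, a workaround sketch consistent with the observation above is to rely on the eviction policy alone and leave the ExpiryPolicy factory unset; this mirrors the test configuration, not a definitive recommendation:

// Sketch only: same cache settings as the test, minus the ExpiryPolicy factory.
CacheConfiguration<Object, Object> cacheCfg = new CacheConfiguration<>("my_cache");
cacheCfg.setEvictionPolicy(new FifoEvictionPolicy<>(10_000, 100));
cacheCfg.setCacheMode(CacheMode.LOCAL);
cacheCfg.setBackups(0);
// no cacheCfg.setExpiryPolicyFactory(...) until the cluster runs Ignite >= 1.8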
We are serving JavaScript resources (and others) via wro4j in our webapp.
In the PROD environment, the browser gets (for example) the Angular webapp's app.js content with an 'Expires' header one year in the future.
This means that for subsequent requests the browser serves it from cache without making a request to the server.
If we deploy a new version of the webapp, the browser does not get the new version, as it takes it from the local cache.
The goal is to configure wro4j and/or Spring so that the headers are set correctly to have the browser perform the request each time, and the server return a 304 Not Modified when nothing has changed. That way the clients would automatically be "updated" upon a new deployment.
Has anyone already achieved this?
We use Spring's Java Configuration:
@Configuration
public class Wro4jConfiguration {

    @Value("${app.webapp.web.minimize}")
    private String minimize;

    @Value("${app.webapp.web.disableCache}")
    private String disableCache;

    @Autowired
    private Environment env;

    @Bean(name = "wroFilter")
    public WroFilter wroFilter() {
        ConfigurableWroFilter filter = new ConfigurableWroFilter();
        filter.setWroManagerFactory(new Wro4jManagerFactory());
        filter.setWroConfigurationFactory(createProperties());
        return filter;
    }

    private PropertyWroConfigurationFactory createProperties() {
        Properties props = new Properties();
        props.setProperty("jmxEnabled", "false");
        props.setProperty("debug", String.valueOf(!env.acceptsProfiles(EnvConstants.PROD)));
        props.setProperty("gzipResources", "false");
        props.setProperty("ignoreMissingResources", "true");
        props.setProperty("minimizeEnabled", minimize);
        props.setProperty("resourceWatcherUpdatePeriod", "0");
        props.setProperty("modelUpdatePeriod", "0");
        props.setProperty("cacheGzippedContent", "false");
        // let's see if server-side cache is disabled (DEV only)
        if (Boolean.valueOf(disableCache)) {
            props.setProperty("resourceWatcherUpdatePeriod", "1");
            props.setProperty("modelUpdatePeriod", "5");
        }
        return new PropertyWroConfigurationFactory(props);
    }
}
By default, WroFilter sets the following headers: ETag (MD5 checksum of the resource), Cache-Control (public, max-age=315360000), and Expires (1 year from resource creation).
There are plenty of details about the significance of those headers. The short explanation is this:
When the server reads the ETag from the client request, the server can determine whether to send the file (HTTP 200) or tell the client to just use their local copy (HTTP 304). An ETag is basically just a checksum for a file that semantically changes when the content of the file changes. If only ETag is sent, the client will always have to make a request.
The Expires and Cache-Control headers are very similar and are used by the client (and proxies/caches) to determine whether or not it even needs to make a request to the server at all.
So really what you want to do is use BOTH headers - set the Expires header to a reasonable value based on how often the content changes. Then configure ETags to be sent so that when clients DO send a request to the server, it can more easily determine whether or not to send the file back.
If you want the client always to check for the latest resource version, you should not send the expires & cache-control headers.
Alternatively, there is a more aggressive caching technique: encode the checksum of the resource into its path. As a result, every time a resource changes, the path to that resource changes. This approach guarantees that the client always requests the most recent version. With this approach, in theory the resources should never expire, since the checksum changes every time a resource changes.
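As a concrete illustration of the "always check, answer with 304" option, here is a sketch that reuses the same protected hook as the follow-up below, but forces revalidation instead of a long max-age. The header values are plain HTTP semantics; everything else mirrors the code further down:

@Override
protected void setResponseHeaders(final HttpServletResponse response) {
    super.setResponseHeaders(response);               // keeps the ETag WroFilter computes
    response.setHeader("Cache-Control", "no-cache");  // always revalidate; 304 when unchanged
    response.setHeader("Expires", "0");               // neutralize the 1-year default
}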
Based on Alex's information and documentation reference, I ended up overriding WroFilter.setResponseHeaders to set appropriate expiration values.
This is working fine. Wro already takes care of setting the ETag, Date and other headers, so I only override the expiration delay and date.
@Configuration
public class Wro4jConfiguration {

    @Value("${app.webapp.web.browserCache.maxAgeInHours}")
    private String maxAgeInHours;

    @Bean(name = "wroFilter")
    public WroFilter wroFilter() {
        ConfigurableWroFilter filter = createFilter();
        filter.setWroManagerFactory(new Wro4jManagerFactory());
        filter.setWroConfigurationFactory(createProperties());
        return filter;
    }

    private ConfigurableWroFilter createFilter() {
        return new ConfigurableWroFilter() {
            private final int BROWSER_CACHE_HOURS = Integer.parseInt(maxAgeInHours);
            private final int BROWSER_CACHE_SECONDS = BROWSER_CACHE_HOURS * 60 * 60;

            @Override
            protected void setResponseHeaders(final HttpServletResponse response) {
                super.setResponseHeaders(response);
                if (!getConfiguration().isDebug()) {
                    ZonedDateTime cacheExpires = ZonedDateTime.of(LocalDateTime.now(), ZoneId.of("GMT"))
                            .plusHours(BROWSER_CACHE_HOURS);
                    String cacheExpiresStr = cacheExpires.format(DateTimeFormatter.RFC_1123_DATE_TIME);
                    response.setHeader(HttpHeader.EXPIRES.toString(), cacheExpiresStr);
                    response.setHeader(HttpHeader.CACHE_CONTROL.toString(), "public, max-age=" + BROWSER_CACHE_SECONDS);
                }
            }
        };
    }

    // Other config methods
}
EDIT:
This is basically what I want to do, only in Java
Using Elasticsearch, we add documents to an index by passing IndexRequest items to a BulkRequestBuilder.
I would like the documents to be dropped from the index after some time has passed (time to live / TTL).
This can be done either by setting a default for the index, or on a per-document basis. Either approach is fine by me.
The code below is an attempt to do it per document. It does not work; I think it's because TTL is not enabled for the index. Either show me what Java code I need to add to enable TTL so the code below works, or show me different code that enables TTL and sets a default TTL value for the index in Java. I know how to do it from the REST API, but I need to do it from Java code, if at all possible.
logger.debug("Indexing record ({}): {}", id, map);
final IndexRequest indexRequest = new IndexRequest(_indexName, _documentType, id);
final long debug = indexRequest.ttl();
if (_ttl > 0) {
indexRequest.ttl(_ttl);
System.out.println("Setting TTL to " + _ttl);
System.out.println("IndexRequest now has ttl of " + indexRequest.ttl());
}
indexRequest.source(map);
indexRequest.operationThreaded(false);
bulkRequestBuilder.add(indexRequest);
}
// execute and block until done.
BulkResponse response;
try {
response = bulkRequestBuilder.execute().actionGet();
Later I check in my unit test by polling this method, but the document count never goes down.
public long getDocumentCount() throws Exception {
    Client client = getClient();
    try {
        client.admin().indices().refresh(new RefreshRequest(INDEX_NAME)).actionGet();
        ActionFuture<CountResponse> response = client.count(new CountRequest(INDEX_NAME).types(DOCUMENT_TYPE));
        CountResponse countResponse = response.get();
        return countResponse.getCount();
    } finally {
        client.close();
    }
}
After a LONG day of googling and writing test programs, I came up with a working example of how to use TTL and basic index/object creation from the Java API. Frankly, most of the examples in the docs are trivial, and some Javadoc and end-to-end examples would go a LONG way toward helping those of us who use the non-REST interfaces.
Ah well.
Code here: Adding mapping to a type from Java - how do I do it?
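For completeness, here is a hedged sketch of what enabling _ttl from the 1.x-era Java client can look like: put a mapping for the type with _ttl enabled before indexing, so that per-request indexRequest.ttl(...) values take effect. The index and type parameters mirror the _indexName/_documentType placeholders above; the method name enableTtl is made up:

// Sketch only (Elasticsearch 1.x-era Java API, where _ttl still exists).
void enableTtl(Client client, String indexName, String documentType) throws IOException {
    XContentBuilder mapping = XContentFactory.jsonBuilder()
            .startObject()
                .startObject(documentType)
                    .startObject("_ttl")
                        .field("enabled", true)
                        .field("default", "1d")   // optional default TTL for the type
                    .endObject()
                .endObject()
            .endObject();

    client.admin().indices()
            .preparePutMapping(indexName)
            .setType(documentType)
            .setSource(mapping)
            .execute()
            .actionGet();
}

Note that, if I recall correctly, expired documents are purged periodically rather than instantly (the indices.ttl.interval setting, 60 seconds by default), so the document count drops a short while after the TTL elapses rather than immediately.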