I am using Spring + Hazelcast 3.8.2 and have configured a map like this using the Spring configuration:
<hz:map name="test.*" backup-count="1"
        max-size="0" eviction-percentage="30" read-backup-data="true"
        time-to-live-seconds="900"
        eviction-policy="NONE" merge-policy="com.hazelcast.map.merge.PassThroughMergePolicy">
    <hz:near-cache max-idle-seconds="300"
                   time-to-live-seconds="0"
                   max-size="0" />
</hz:map>
I've got two clients connected (both on the same machine [test env], using different ports).
When I change a value in the map on one client, the other client still sees the old value until the entry is evicted from its near cache because the idle time has expired.
I found a similar issue here: Hazelcast near-cache eviction doesn't work
But I'm unsure whether this is really the same issue; at least it is mentioned there that this was a bug in version 3.7, and we are using 3.8.2.
Is this correct behaviour or am I doing something wrong? I know that there is a property invalidate-on-change, but since its default is true, I don't expect to have to set it.
I also tried setting read-backup-data to false; that doesn't help.
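If I were to set it explicitly anyway, the near-cache element would look like this (a sketch; the attribute name follows the Hazelcast Spring schema):
<hz:near-cache max-idle-seconds="300"
               time-to-live-seconds="0"
               max-size="0"
               invalidate-on-change="true" />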
Thanks for your support
Christian
I found the solution myself.
The issue is that Hazelcast sends invalidations in batches by default, so it waits a few seconds before sending the invalidations out to the other members.
You can find more information about this here: http://docs.hazelcast.org/docs/3.8/manual/html-single/index.html#near-cache-invalidation
So I had to set the property hazelcast.map.invalidation.batch.enabled to false, which makes invalidations go out to all members immediately. But as mentioned in the documentation, this should only be used when not too many put/remove/... operations are expected, because it makes the event system very busy.
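For reference, a minimal sketch of applying the same setting programmatically per member instead of as a JVM-wide system property (the batch frequency property is taken from the same documentation section and is an alternative to disabling batching entirely):
Config config = new Config();
// send each near-cache invalidation immediately instead of batching them
config.setProperty("hazelcast.map.invalidation.batch.enabled", "false");
// alternative: keep batching, but flush the batches more frequently
// config.setProperty("hazelcast.map.invalidation.batchfrequency.seconds", "1");
HazelcastInstance instance = Hazelcast.newHazelcastInstance(config);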
Nevertheless, even with this property set, there is no guarantee that all members invalidate their near-cache entries immediately. When I accessed the values on the other node right away, sometimes they were up to date, sometimes not.
Here is the JUnit test I built for this:
@Test
public void testWithInvalidationBatchEnabled() throws Exception {
    System.setProperty("hazelcast.map.invalidation.batch.enabled", "true");
    doTest();
}

@Test
public void testWithoutInvalidationBatchEnabled() throws Exception {
    System.setProperty("hazelcast.map.invalidation.batch.enabled", "false");
    doTest();
}

@After
public void shutdownNodes() {
    Hazelcast.shutdownAll();
}

protected void doTest() throws Exception {
    // config for the first cluster member
    Config c1 = new Config();
    c1.getNetworkConfig().setPort(5709);
    // config for the second cluster member
    Config c2 = new Config();
    c2.getNetworkConfig().setPort(5710);
    // map config is the same for both nodes
    MapConfig testMapCfg = new MapConfig("test");
    NearCacheConfig ncc = new NearCacheConfig();
    ncc.setTimeToLiveSeconds(10);
    testMapCfg.setNearCacheConfig(ncc);
    c1.addMapConfig(testMapCfg);
    c2.addMapConfig(testMapCfg);
    // start instances
    HazelcastInstance h1 = Hazelcast.newHazelcastInstance(c1);
    HazelcastInstance h2 = Hazelcast.newHazelcastInstance(c2);
    IMap<Object, Object> mapH1 = h1.getMap("test");
    IMap<Object, Object> mapH2 = h2.getMap("test");
    // initial filling
    mapH1.put("a", -1);
    assertEquals(-1, mapH1.get("a"));
    assertEquals(-1, mapH2.get("a"));
    int updatedH1 = 0, updatedH2 = 0, runs = 0;
    for (int i = 0; i < 5; i++) {
        mapH1.put("a", i);
        // without this short sleep the near cache is sometimes updated in time, sometimes not
        Thread.sleep(100);
        runs++;
        if (mapH1.get("a").equals(i)) {
            updatedH1++;
        }
        if (mapH2.get("a").equals(i)) {
            updatedH2++;
        }
    }
    assertEquals(runs, updatedH1);
    assertEquals(runs, updatedH2);
}
testWithInvalidationBatchEnabled finishes successfully only sometimes; testWithoutInvalidationBatchEnabled always finishes successfully.
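Since invalidation remains asynchronous even with batching disabled, a more robust check polls with a timeout instead of using a fixed sleep. A small sketch of such a helper (the name assertEventuallyEquals is made up for illustration):
// Hypothetical helper: poll the map until the expected value becomes visible
// or the timeout elapses, instead of a fixed Thread.sleep().
private static void assertEventuallyEquals(IMap<Object, Object> map, Object key,
        Object expected, long timeoutMillis) throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMillis;
    while (System.currentTimeMillis() < deadline) {
        if (expected.equals(map.get(key))) {
            return;
        }
        Thread.sleep(50);
    }
    assertEquals(expected, map.get(key));
}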
In a Spring Boot environment, I have the following Hazelcast configuration.
@Bean
public Config hazelCastConfig() {
    final Config config = new Config().setInstanceName("hazelcast-cache")
            .addMapConfig(new MapConfig().setName("hazelcast-cache")
                    .setMaxSizeConfig(
                            new MaxSizeConfig(200, MaxSizeConfig.MaxSizePolicy.FREE_HEAP_SIZE))
                    .setEvictionPolicy(EvictionPolicy.LRU)
                    .setTimeToLiveSeconds(5))
            .setClassLoader(Thread.currentThread().getContextClassLoader());
    final UserCodeDeploymentConfig distCLConfig = config.getUserCodeDeploymentConfig();
    distCLConfig.setEnabled(true)
            .setClassCacheMode(UserCodeDeploymentConfig.ClassCacheMode.ETERNAL)
            .setProviderMode(UserCodeDeploymentConfig.ProviderMode.LOCAL_CLASSES_ONLY);
    return config;
}
This is how we use @Cacheable in our code:
@Cacheable(value = "presetCategoryMaster", key = "{#storeCode, #validDisplayFlag}")
public List<PresetCategoryMasterEntity> getPresetMasterCategoryForStoreCdAndValdiDisplayFlag(
        final Integer storeCode, final Short validDisplayFlag) {
    ----------------
    ----------------
}
But the TTL is never honored; we confirmed this in the trace logs as well. After the first call, once the cache entries are created, they never get evicted unless you explicitly call @CacheEvict or @CachePut. Although we set the TTL to 5 seconds, the cache is not cleared even after an hour.
Any help is appreciated.
The cache name is presetCategoryMaster:
@Cacheable(value = "presetCategoryMaster"
The configuration uses hazelcast-cache, so it doesn't match:
new MapConfig().setName("hazelcast-cache")
You can use the Management Center or a getDistributedObjects() call to find out what's created and watch for the entries expiring.
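Concretely, only the map config name needs to change so that it matches the cache name used in @Cacheable. A sketch of the corrected map config (everything else unchanged from the configuration above; hazelcastInstance stands for the injected HazelcastInstance):
// the MapConfig name must match the @Cacheable cache name so the TTL applies
config.addMapConfig(new MapConfig().setName("presetCategoryMaster")
        .setMaxSizeConfig(new MaxSizeConfig(200, MaxSizeConfig.MaxSizePolicy.FREE_HEAP_SIZE))
        .setEvictionPolicy(EvictionPolicy.LRU)
        .setTimeToLiveSeconds(5));

// inspect which distributed objects were actually created
for (DistributedObject o : hazelcastInstance.getDistributedObjects()) {
    System.out.println(o.getServiceName() + " / " + o.getName());
}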
During a stability test of our Apache Ignite cluster we ran into a memory problem: the used heap space increased to 100% and didn't go down as we expected. This is what we did:
Created a cache with the eviction policy FifoEvictionPolicy(max: 10000, batchSize: 100).
Ran 20 simultaneous threads that executed the following scenario over and over again for a couple of hours:
Added a unique entry to the cache and then fetched the value to verify that it was added.
This scenario created about 2.3 million entries during the test.
Our expectation was that, given our quite restrictive eviction policy of at most 10000 entries, the memory would stabilize. However, it just kept rising until it reached the max heap size. See the attached memory graph.
Our question is:
Why is the memory used by the entries still allocated, even though eviction has taken place?
One thing to add: we executed the same test but deleted each entry right after adding it, and the memory was then stable.
Update with test case and comment.
Below you will find a simple JUnit test that demonstrates the memory leak. @a_gura seems to be correct: if we disable the ExpiryPolicy, things work as expected; but if we enable the ExpiryPolicy, the heap gets filled up within the ExpiryPolicy duration. Test case:
import java.util.concurrent.TimeUnit;

import javax.cache.Cache;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.eviction.fifo.FifoEvictionPolicy;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.hamcrest.CoreMatchers;
import org.junit.Assert;
import org.junit.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class IgniteTest {

    private static final Logger log = LoggerFactory.getLogger(IgniteTest.class);

    String cacheName = "my_cache";

    @Test
    public void test() throws InterruptedException {
        IgniteConfiguration configuration = new IgniteConfiguration();
        Ignite ignite = Ignition.start(configuration);
        // create a large string to use as test value
        StringBuilder testValue = new StringBuilder();
        for (int i = 0; i < 10 * 1024; i++) {
            testValue.append("a");
        }
        CacheConfiguration cacheCfg = new CacheConfiguration();
        cacheCfg.setName(cacheName);
        cacheCfg.setEvictionPolicy(new FifoEvictionPolicy<>(10_000, 100));
        Duration duration = new Duration(TimeUnit.HOURS, 12);
        cacheCfg.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(duration));
        cacheCfg.setCacheMode(CacheMode.LOCAL);
        cacheCfg.setBackups(0);
        Cache<Object, Object> cache = ignite.getOrCreateCache(cacheCfg);
        String lastKey = "";
        for (int i = 0; i < 10_000_101; i++) {
            String key = "key#" + i;
            String value = testValue + "value#" + i;
            log.trace("storing {} {}", key, value);
            if (i % 1_000 == 0) {
                log.debug("storing {}", key);
            }
            cache.put(key, value);
            lastKey = key;
            Thread.sleep(1);
        }
        String verifyKey = "key#1";
        Assert.assertThat("first key should be evicted", cache.containsKey(verifyKey), CoreMatchers.is(false));
        Assert.assertThat("last key should NOT be evicted", cache.containsKey(lastKey), CoreMatchers.is(true));
        ignite.destroyCache(cacheName);
    }
}
This has been fixed in Ignite 1.8: https://issues.apache.org/jira/browse/IGNITE-3948.
Credit to @a_gura, who filed the bug, and to the development team.
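Until an upgrade to 1.8 is possible, the test above suggests a workaround: leave the ExpiryPolicy unset and rely on the eviction policy alone, since disabling the ExpiryPolicy made the heap behave as expected. A sketch against the Ignite 1.x API (ignite being the started instance):
CacheConfiguration<String, String> cacheCfg = new CacheConfiguration<>("my_cache");
// rely on FIFO eviction only; no ExpiryPolicy factory, which is what triggered the leak
cacheCfg.setEvictionPolicy(new FifoEvictionPolicy<>(10_000, 100));
cacheCfg.setCacheMode(CacheMode.LOCAL);
IgniteCache<String, String> cache = ignite.getOrCreateCache(cacheCfg);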
EDIT:
This is basically what I want to do, only in Java
Using ElasticSearch, we add documents to an index by passing IndexRequest items to a BulkRequestBuilder.
I would like the documents to be dropped from the index after some time has passed (time to live / TTL).
This can be done either by setting a default for the index or on a per-document basis. Either approach is fine with me.
The code below is an attempt to do it per document. It does not work, and I think that's because TTL is not enabled for the index. Either show me what Java code I need to add to enable TTL so the code below works, or show me different code that enables TTL and sets a default TTL value for the index in Java. I know how to do it from the REST API, but I need to do it from Java code, if at all possible.
logger.debug("Indexing record ({}): {}", id, map);
final IndexRequest indexRequest = new IndexRequest(_indexName, _documentType, id);
final long debug = indexRequest.ttl();
if (_ttl > 0) {
    indexRequest.ttl(_ttl);
    System.out.println("Setting TTL to " + _ttl);
    System.out.println("IndexRequest now has ttl of " + indexRequest.ttl());
}
indexRequest.source(map);
indexRequest.operationThreaded(false);
bulkRequestBuilder.add(indexRequest);
} // end of the per-document loop (loop header not shown)

// execute and block until done.
BulkResponse response;
try {
    response = bulkRequestBuilder.execute().actionGet();
Later I check in my unit test by polling this method, but the document count never goes down.
public long getDocumentCount() throws Exception {
    Client client = getClient();
    try {
        client.admin().indices().refresh(new RefreshRequest(INDEX_NAME)).actionGet();
        ActionFuture<CountResponse> response = client.count(new CountRequest(INDEX_NAME).types(DOCUMENT_TYPE));
        CountResponse countResponse = response.get();
        return countResponse.getCount();
    } finally {
        client.close();
    }
}
After a LONG day of googling and writing test programs, I came up with a working example of how to use TTL and basic index/object creation from the Java API. Frankly, most of the examples in the docs are trivial; some JavaDoc and end-to-end examples would go a LONG way toward helping those of us who use the non-REST interfaces.
Ah well.
Code here: Adding mapping to a type from Java - how do I do it?
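For anyone landing here, the key piece is that _ttl has to be enabled in the type's mapping before per-document TTLs are honored. A sketch of what that looks like with the pre-2.x Java API (index and type names are placeholders; later ElasticSearch versions removed _ttl entirely):
// enable _ttl with a default when creating the index (pre-2.x mapping syntax)
client.admin().indices().prepareCreate("myindex")
        .addMapping("mytype",
                "{\"mytype\":{\"_ttl\":{\"enabled\":true,\"default\":\"60s\"}}}")
        .execute().actionGet();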
I'm trying to run the benchmark software YCSB on ElasticSearch.
The problem I'm having is that after the load phase, the data seems to get removed during cleanup.
I'm struggling to understand what is supposed to happen.
If I comment out the cleanup, it still fails because it cannot find the index during the "run" phase.
Can someone please explain what is supposed to happen in YCSB? I would expect it to:
1. load phase: load, say, 1,000,000 records
2. run phase: query the records loaded during the "load phase"
Thanks,
Okay, I have discovered by running Couchbase in YCSB that the data shouldn't be removed.
Looking at cleanup() for ElasticSearchClient, I see no reason why the files would be deleted:
@Override
public void cleanup() throws DBException {
    if (!node.isClosed()) {
        client.close();
        node.stop();
        node.close();
    }
}
The init is as follows; is there any reason this would not persist on the filesystem?
public void init() throws DBException {
    // initialize the ElasticSearch driver
    Properties props = getProperties();
    this.indexKey = props.getProperty("es.index.key", DEFAULT_INDEX_KEY);
    String clusterName = props.getProperty("cluster.name", DEFAULT_CLUSTER_NAME);
    Boolean newdb = Boolean.parseBoolean(props.getProperty("elasticsearch.newdb", "false"));
    Builder settings = settingsBuilder()
            .put("node.local", "true")
            .put("path.data", System.getProperty("java.io.tmpdir") + "/esdata")
            .put("discovery.zen.ping.multicast.enabled", "false")
            .put("index.mapping._id.indexed", "true")
            .put("index.gateway.type", "none")
            .put("gateway.type", "none")
            .put("index.number_of_shards", "1")
            .put("index.number_of_replicas", "0");
    // if the properties file contains ElasticSearch user-defined properties,
    // add them to the settings (will overwrite the defaults)
    settings.put(props);
    System.out.println("ElasticSearch starting node = " + settings.get("cluster.name"));
    System.out.println("ElasticSearch node data path = " + settings.get("path.data"));
    node = nodeBuilder().clusterName(clusterName).settings(settings).node();
    node.start();
    client = node.client();
    if (newdb) {
        client.admin().indices().prepareDelete(indexKey).execute().actionGet();
        client.admin().indices().prepareCreate(indexKey).execute().actionGet();
    } else {
        boolean exists = client.admin().indices().exists(Requests.indicesExistsRequest(indexKey)).actionGet().isExists();
        if (!exists) {
            client.admin().indices().prepareCreate(indexKey).execute().actionGet();
        }
    }
}
Thanks,
Okay, what I am finding is as follows (any help from ElasticSearch-ers much appreciated, because I'm obviously doing something wrong): even when the load phase shuts down leaving the data behind, the "run" phase still cannot find the data on startup.
ElasticSearch node data path = C:\Users\Pl_2\AppData\Local\Temp\/esdata
org.elasticsearch.action.NoShardAvailableActionException: [es.ycsb][0] No shard available for [[es.ycsb][usertable][user4283669858964623926]: routing [null]]
at org.elasticsearch.action.support.single.shard.TransportShardSingleOperationAction$AsyncSingleAction.perform(TransportShardSingleOperationAction.java:140)
at org.elasticsearch.action.support.single.shard.TransportShardSingleOperationAction$AsyncSingleAction.start(TransportShardSingleOperationAction.java:125)
at org.elasticsearch.action.support.single.shard.TransportShardSingleOperationAction.doExecute(TransportShardSingleOperationAction.java:72)
at org.elasticsearch.action.support.single.shard.TransportShardSingleOperationAction.doExecute(TransportShardSingleOperationAction.java:47)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:61)
at org.elasticsearch.client.node.NodeClient.execute(NodeClient.java:83)
The GitHub README has been updated.
It looks like you need to specify:
-p path.home=<path to folder to persist data>
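If you are embedding the node yourself, the equivalent fix in the init() code above is to point the node at a stable directory instead of java.io.tmpdir. A sketch (the directory paths are examples, not from the source):
Builder settings = settingsBuilder()
        .put("node.local", "true")
        .put("path.home", "/var/ycsb/es")        // what -p path.home=... ends up setting
        .put("path.data", "/var/ycsb/es/data");  // instead of java.io.tmpdir + "/esdata"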
We are trying to migrate to the Microsoft Enterprise Library Caching block. However, cache manager initialization seems to be pretty tightly tied to the config file entries, and our application creates in-memory "containers" on the fly. Is there any way an instance of CacheManager can be instantiated on the fly using a pre-configured set of values (in-memory only)?
Enterprise Library 5 has a fluent configuration which makes it easy to programmatically configure the blocks. For example:
var builder = new ConfigurationSourceBuilder();
builder.ConfigureCaching()
    .ForCacheManagerNamed("MyCache")
    .WithOptions
        .UseAsDefaultCache()
        .StoreInIsolatedStorage("MyStore")
        .EncryptUsing.SymmetricEncryptionProviderNamed("MySymmetric");
var configSource = new DictionaryConfigurationSource();
builder.UpdateConfigurationWithReplace(configSource);
EnterpriseLibraryContainer.Current
    = EnterpriseLibraryContainer.CreateDefaultContainer(configSource);
Unfortunately, it looks like you need to configure the entire block at once, so you wouldn't be able to add CacheManagers on the fly. (When I call ConfigureCaching() twice on the same builder, an exception is thrown.) You can create a new ConfigurationSource, but then you lose your previous configuration. Perhaps there is a way to retrieve the existing configuration, modify it (e.g. add a new CacheManager), and then replace it? I haven't been able to find one.
Another approach is to use the Caching classes directly.
The following example uses the Caching classes to instantiate two CacheManager instances and stores them in a static Dictionary. No configuration is required since it's not using the container. I'm not sure it's a great idea -- it feels a bit wrong to me. It's pretty rudimentary, but hopefully it helps.
public static Dictionary<string, CacheManager> caches = new Dictionary<string, CacheManager>();

static void Main(string[] args)
{
    IBackingStore backingStore = new NullBackingStore();
    ICachingInstrumentationProvider instrProv = new CachingInstrumentationProvider("myInstance", false, false,
        new NoPrefixNameFormatter());
    Cache cache = new Cache(backingStore, instrProv);
    BackgroundScheduler bgScheduler = new BackgroundScheduler(new ExpirationTask(null, instrProv),
        new ScavengerTask(0, int.MaxValue, new NullCacheOperation(), instrProv), instrProv);
    CacheManager cacheManager = new CacheManager(cache, bgScheduler, new ExpirationPollTimer(int.MaxValue));
    cacheManager.Add("test1", "value1");
    caches.Add("cache1", cacheManager);
    cacheManager = new CacheManager(new Cache(backingStore, instrProv), bgScheduler,
        new ExpirationPollTimer(int.MaxValue));
    cacheManager.Add("test2", "value2");
    caches.Add("cache2", cacheManager);
    Console.WriteLine(caches["cache1"].GetData("test1"));
    Console.WriteLine(caches["cache2"].GetData("test2"));
}

public class NullCacheOperation : ICacheOperations
{
    public int Count { get { return 0; } }
    public Hashtable CurrentCacheState { get { return new System.Collections.Hashtable(); } }
    public void RemoveItemFromCache(string key, CacheItemRemovedReason removalReason) { }
}
If the expiration and scavenging policies are the same, it might be better to create one CacheManager and then use intelligent key names to represent the different "containers". E.g. the key name could be in the format "{container name}:{item key}" (assuming that a colon will not appear in a container or key name).
You can use a UnityContainer:
IUnityContainer unityContainer = new UnityContainer();
IContainerConfigurator configurator = new UnityContainerConfigurator(unityContainer);
configurator.ConfigureCache("MyCache1");
IContainerConfigurator configurator2 = new UnityContainerConfigurator(unityContainer);
configurator2.ConfigureCache("MyCache2");
// here you can access both MyCache1 and MyCache2:
var cache1 = unityContainer.Resolve<ICacheManager>("MyCache1");
var cache2 = unityContainer.Resolve<ICacheManager>("MyCache2");
And this is the extension method for IContainerConfigurator:
public static void ConfigureCache(this IContainerConfigurator configurator, string configKey)
{
    ConfigurationSourceBuilder builder = new ConfigurationSourceBuilder();
    DictionaryConfigurationSource configSource = new DictionaryConfigurationSource();
    // simple in-memory cache configuration
    builder.ConfigureCaching().ForCacheManagerNamed(configKey).WithOptions.StoreInMemory();
    builder.UpdateConfigurationWithReplace(configSource);
    EnterpriseLibraryContainer.ConfigureContainer(configurator, configSource);
}
Using this, you can manage a static IUnityContainer object and add new caches, as well as reconfigure existing cache settings, anywhere you want.