I'm using Hazelcast as the second-level cache for Hibernate in a Spring Boot application, and I'm having trouble getting meaningful cache statistics. Calling
this.hazelcastInstance.getMap(cacheName).getLocalMapStats()
returns values that don't look right to me. For example, for a given cacheName,
this.hazelcastInstance.getMap(cacheName).getLocalMapStats().getPutOperationCount() // = 0
yet the same cacheName shows a map of 14 elements when inspected via
hazelcastInstance.getDistributedObjects()
Do you have any idea why the LocalMapStats object doesn't seem to report the right values?
I have a two-node distributed cache setup that needs persistence configured on both members.
I have MapStore and MapLoader implemented, and the same code is deployed on both nodes.
The MapStore and MapLoader work absolutely fine in a single-member setup, but after another member joins, they keep running only on the first member: all inserts and updates made by the second member are persisted to disk via the first member.
My requirement is that each member should be able to persist to disk independently, so that the distributed cache is backed up on all members and not just the first one.
Is there a setting I can change to achieve this?
Here is my Hazelcast Spring configuration.
@Bean
public HazelcastInstance hazelcastInstance(H2MapStorage h2mapStore) throws IOException {
    MapStoreConfig mapStoreConfig = new MapStoreConfig();
    mapStoreConfig.setImplementation(h2mapStore);
    mapStoreConfig.setWriteDelaySeconds(0);

    // hazelcastConfiglocation holds the path to the external YAML config (e.g. injected via @Value)
    YamlConfigBuilder configBuilder = null;
    if (new File(hazelcastConfiglocation).exists()) {
        configBuilder = new YamlConfigBuilder(hazelcastConfiglocation);
    } else {
        configBuilder = new YamlConfigBuilder();
    }
    Config config = configBuilder.build();
    config.setProperty("hazelcast.jmx", "true");

    MapConfig mapConfig = config.getMapConfig("requests");
    mapConfig.setMapStoreConfig(mapStoreConfig);
    return Hazelcast.newHazelcastInstance(config);
}
Here is my Hazelcast YAML config. It is placed at /opt/hazlecast.yml, which is picked up by my Spring config above.
hazelcast:
  group:
    name: tsystems
  management-center:
    enabled: false
    url: http://localhost:8080/hazelcast-mancenter
  network:
    port:
      auto-increment: true
      port-count: 100
      port: 5701
    outbound-ports:
      - 0
    join:
      multicast:
        enabled: false
        multicast-group: 224.2.2.3
        multicast-port: 54327
      tcp-ip:
        enabled: true
        member-list:
          - 192.168.1.13
The entire code is available here:
https://bitbucket.org/samrat_roy/hazelcasttest/src/master/
This might just be bad luck and low data volumes, rather than an actual error.
On each node, try running the localKeySet() method and printing the results.
This will tell you which keys are on which node in the cluster. The node that owns key "X" will invoke the map store for that key, even if the update was initiated by another node.
If you have low data volumes, it may not be a 50/50 data split. At an extreme, 2 data records in a 2-node cluster could have both data records on the same node.
If you have 1,000 data records, it's pretty unlikely that they'll all be on the same node.
So the other thing to try is add more data and update all data, to see if both nodes participate.
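To see why a small key count can skew, here is a toy plain-Java sketch (no Hazelcast involved; the modulo-based `owner` function is a simplified stand-in for Hazelcast's real partition hashing):

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class KeySkewDemo {
    // Assign a key to one of two nodes, mimicking hash-based partition ownership.
    public static int owner(String key) {
        return Math.abs(key.hashCode()) % 2;
    }

    public static void main(String[] args) {
        // With only two keys, both can land on the same owner ("A" and "C" do here).
        List<String> few = List.of("A", "C");
        Map<Integer, List<String>> fewSplit = few.stream()
                .collect(Collectors.groupingBy(KeySkewDemo::owner, TreeMap::new, Collectors.toList()));
        System.out.println("2 keys:    " + fewSplit);

        // With 1,000 keys the split is close to 50/50.
        Map<Integer, Long> manySplit = IntStream.range(0, 1000)
                .mapToObj(i -> "key-" + i)
                .collect(Collectors.groupingBy(KeySkewDemo::owner, TreeMap::new, Collectors.counting()));
        System.out.println("1000 keys: " + manySplit);
    }
}
```

In a real cluster, `localKeySet()` gives you this split directly, per member.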
OK, after struggling a lot, I noticed a tiny but critical detail:
Datastore needs to be a centralized system that is accessible from all Hazelcast members. Persistence to a local file system is not supported.
This is absolutely in line with what I was observing.
https://docs.hazelcast.org/docs/latest/manual/html-single/#loading-and-storing-persistent-data
However, don't be discouraged: I found out that I could use event listeners to do the same thing I needed to do.
@Component
public class HazelCastEntryListner
        implements EntryAddedListener<String, Object>, EntryUpdatedListener<String, Object>,
        EntryRemovedListener<String, Object>, EntryEvictedListener<String, Object>,
        EntryLoadedListener<String, Object>, MapEvictedListener, MapClearedListener {

    @Autowired
    @Lazy
    private RequestDao requestDao;
I created this class and hooked it into the config like so:
MapConfig mapConfig = config.getMapConfig("requests");
mapConfig.addEntryListenerConfig(new EntryListenerConfig(entryListner, false, true));
return Hazelcast.newHazelcastInstance(config);
This worked flawlessly; I am able to replicate data to the embedded databases on each node.
My use case was to cover HA failover edge cases: during failover, the standby node needs to know the working memory of the active node.
I am not using Hazelcast as a cache, but rather as a data-syncing mechanism.
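For readers who want the shape of this listener-driven approach without a running cluster, here is a minimal single-process sketch; the interface and class names are hypothetical stand-ins for Hazelcast's EntryAddedListener/EntryUpdatedListener machinery, not the real API:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

public class ListenerBackedMap {
    // Hypothetical stand-in for Hazelcast's entry listener interfaces.
    public interface EntryListener {
        void entryWritten(String key, Object value);
        void entryRemoved(String key);
    }

    private final Map<String, Object> data = new ConcurrentHashMap<>();
    private final List<EntryListener> listeners = new CopyOnWriteArrayList<>();

    public void addEntryListener(EntryListener listener) {
        listeners.add(listener);
    }

    public void put(String key, Object value) {
        data.put(key, value);
        // Fan the event out; a listener can persist the entry to a local store.
        listeners.forEach(l -> l.entryWritten(key, value));
    }

    public void remove(String key) {
        data.remove(key);
        listeners.forEach(l -> l.entryRemoved(key));
    }
}
```

The key difference in a real cluster is that Hazelcast delivers these events on every member that registered the listener, which is what makes the per-node local persistence possible.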
I'm using Laravel's out-of-the-box Redis implementation. I'm caching collections of query results needed for the site, so I end up with lots of keys holding heavy serialized objects stored as the String type.
With this, Redis consumes between 22 GB and 25 GB (of max memory). This sometimes leads to key evictions, which we want to avoid at all costs.
Should this be addressed from the code side by optimization (storing only the query result set), or is there something we're doing wrong on the Redis side?
used_memory:25182306344
used_memory_human:23.45G
used_memory_rss:24106418176
used_memory_rss_human:22.45G
used_memory_peak:25238402912
used_memory_peak_human:23.51G
used_memory_peak_perc:99.78%
used_memory_overhead:14926818
used_memory_startup:508096
used_memory_dataset:25167379526
used_memory_dataset_perc:99.94%
total_system_memory:32899166208
total_system_memory_human:30.64G
used_memory_lua:37888
used_memory_lua_human:37.00K
maxmemory:26843545600
maxmemory_human:25.00G
maxmemory_policy:allkeys-lru
mem_fragmentation_ratio:0.96
mem_allocator:jemalloc-4.0.3
active_defrag_running:0
lazyfree_pending_objects:0
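On the first option (storing only the query result set instead of full serialized objects): a quick way to gauge potential savings is to compare serialized sizes of a full model against a slim projection. This plain-Java sketch uses hypothetical stand-in classes, not your actual models:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.io.UncheckedIOException;
import java.util.ArrayList;
import java.util.List;

public class PayloadSizeDemo {
    // Hypothetical "heavy" model with fields the page never reads.
    public static class FullRow implements Serializable {
        long id;
        String title;
        String body;          // large column dragged along into the cache
        String internalNotes; // never rendered
        public FullRow(long id, String title, String body, String internalNotes) {
            this.id = id; this.title = title; this.body = body; this.internalNotes = internalNotes;
        }
    }

    // Compact projection: only what the page actually needs.
    public static class SlimRow implements Serializable {
        long id;
        String title;
        public SlimRow(long id, String title) { this.id = id; this.title = title; }
    }

    public static int serializedSize(Object o) {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(o);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return bytes.size();
    }

    public static void main(String[] args) {
        List<FullRow> full = new ArrayList<>();
        List<SlimRow> slim = new ArrayList<>();
        String filler = "x".repeat(2000); // stands in for a large text column
        for (long i = 0; i < 100; i++) {
            full.add(new FullRow(i, "title-" + i, filler, filler));
            slim.add(new SlimRow(i, "title-" + i));
        }
        System.out.println("full payload: " + serializedSize(full) + " bytes");
        System.out.println("slim payload: " + serializedSize(slim) + " bytes");
    }
}
```

The same measurement idea applies to PHP's serialize() output; if slim projections are an order of magnitude smaller, the fix belongs on the code side.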
I'm working on a microservice using Spring Boot. I have three questions here; answers to any or all are much appreciated. Thanks in advance.
Background: We need to read a key from Vault during application startup and save it in a variable for later use (to avoid repeated hits on Vault). There will be a TTL for this value, so the application should refresh and pick up whatever new value is configured in Vault.
Q1: How do we load the value and ensure it is loaded only once (i.e., Vault is hit only once)?
Q2: How do we get the new value whenever it changes?
Q3: How do we test this locally?
Use a Guava cache to store the values (assuming they are strings, but you can change that to any type), like this:
LoadingCache<String, String> vaultData = CacheBuilder.newBuilder()
        .expireAfterAccess(10, TimeUnit.MINUTES)
        .build(new CacheLoader<String, String>() {
            public String load(String key) throws AnyException {
                return actuallyLoadFromVault(key);
            }
        });
This way, when your code reads some key from vaultData for the first time, it is loaded using actuallyLoadFromVault (which you need to write, of course), and after that any access to that key via vaultData hits the cached value stored in memory.
With proper configuration, after 10 minutes the value will be wiped from the cache (please read https://github.com/google/guava/wiki/CachesExplained#when-does-cleanup-happen and "How does Guava expire entries in its CacheBuilder?" to configure that correctly).
You might also need to set a maximum cache size to limit memory consumption; see the documentation for details.
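If you'd rather not pull in Guava, the same load-once-then-refresh-after-TTL behaviour can be sketched in plain Java; the loader function here is a hypothetical stand-in for your actual Vault client call:

```java
import java.time.Duration;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class TtlCache<K, V> {
    private record Entry<V>(V value, long expiresAtNanos) {}

    private final Map<K, Entry<V>> entries = new ConcurrentHashMap<>();
    private final Function<K, V> loader; // e.g. a call out to Vault
    private final long ttlNanos;

    public TtlCache(Function<K, V> loader, Duration ttl) {
        this.loader = loader;
        this.ttlNanos = ttl.toNanos();
    }

    public V get(K key) {
        // compute() re-loads only when the entry is missing or its TTL elapsed,
        // so the loader (the Vault hit) runs at most once per TTL window per key.
        return entries.compute(key, (k, e) -> {
            if (e != null && System.nanoTime() < e.expiresAtNanos) {
                return e; // still fresh: served from memory, no Vault hit
            }
            return new Entry<>(loader.apply(k), System.nanoTime() + ttlNanos);
        }).value();
    }
}
```

This also answers Q3: for local testing, pass a fake loader that counts its invocations and assert the count stays at one within the TTL window.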
Is it possible to change the Hazelcast configuration at runtime, and if so, which parameters are modifiable?
It seems to be possible using Hazelcast Management Center, but I can't find any examples/references in the official docs/forums.
Might be a bit late to answer your question but better late than never :)
You can modify some of the map config properties after the map has been created using the MapService:
HazelcastInstance instance = Hazelcast.newHazelcastInstance();
// create map
IMap<String, Integer> myMap = instance.getMap("myMap");
// create a new map config
MapConfig newMapConfig = instance.getConfig().getMapConfig("myMap").setAsyncBackupCount(1);
// submit the new map config to the map service
MapService mapService = (MapService)(((AbstractDistributedObject)instance.getDistributedObject(MapService.SERVICE_NAME, "")).getService());
mapService.getMapServiceContext().getMapContainer("myMap").setMapConfig(newMapConfig);
Note that this API is internal and undocumented, so it might not work in future versions.
We are using this in our application when we need to insert several million entries into a distributed map at startup. Disabling the backup cut the insertion time by 30%. After the data are inserted, we re-enable the backup.
The Hazelcast internals are not really designed to be modifiable. What do you want to modify?