We have configured a key-value cache using Ignite with Spring Cache integration; however, we are facing high CPU usage whenever we access the cached objects.
The cache is initialized with the following parameters:
@Bean
public SpringCacheManager getCacheManager(@Autowired Ignite ignite) {
    SpringCacheManager cacheManager = new SpringCacheManager();
    CacheConfiguration<Object, Object> cacheConfig =
            new CacheConfiguration<Object, Object>("defaultDynamicCacheConfig")
                    .setCacheMode(CacheMode.REPLICATED);
    cacheManager.setDynamicCacheConfiguration(cacheConfig);
    return cacheManager;
}
We have tried various settings such as
setOnheapCacheEnabled(true)
setSqlOnheapCacheEnabled(true)
but these settings did not help. We also tried a near cache, but since we are running Ignite in server mode that failed.
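For clarity, this is roughly how those settings were applied (a sketch; it assumes they were set on the same dynamic cache configuration shown above):
cacheConfig.setOnheapCacheEnabled(true);      // keep an on-heap cache layer on top of off-heap storage
cacheConfig.setSqlOnheapCacheEnabled(true);   // on-heap row cache for SQL queries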
We are seeing the following in the stack trace when profiling:
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1717)
org.apache.ignite.internal.binary.BinaryUtils.doReadObject(BinaryUtils.java:1778)
...
org.apache.ignite.internal.binary.BinaryObjectImpl.deserializeValue(BinaryObjectImpl.java:798)
org.apache.ignite.internal.binary.BinaryObjectImpl.value(BinaryObjectImpl.java:143)
org.apache.ignite.internal.processors.cache.CacheObjectUtils.unwrapBinary(CacheObjectUtils.java:177)
org.apache.ignite.internal.processors.cache.CacheObjectUtils.unwrapBinaryIfNeeded(CacheObjectUtils.java:67)
org.apache.ignite.internal.processors.cache.CacheObjectContext.unwrapBinaryIfNeeded(CacheObjectContext.java:125)
org.apache.ignite.internal.processors.cache.GridCacheContext.unwrapBinaryIfNeeded(GridCacheContext.java:1773)
I am using RedisTemplate for caching purposes in my Spring Boot service. Now I want to check cache hits/misses through the actuator/prometheus endpoint, but I cannot see any cache hit/miss metrics for the cache.
The code I have written is something like the following:
@EnableCaching
@Configuration
public class CachingConfiguration {
    @Bean
    public RedisTemplate<String, SomeData> redisTemplate(LettuceConnectionFactory connectionFactory, ObjectMapper objectMapper) {
        RedisTemplate<String, SomeData> template = new RedisTemplate<>();
        template.setConnectionFactory(connectionFactory);
        var valueSerializer = new Jackson2JsonRedisSerializer<SomeData>(SomeData.class);
        valueSerializer.setObjectMapper(objectMapper);
        template.setValueSerializer(valueSerializer);
        return template;
    }
}
Now I am doing the following to get and save into the cache.
To get:
redisTemplate.opsForValue().get(key);
And to save:
redisTemplate.opsForValue().set(key, obj, some_time_limit);
My cache is working properly: I am able to save into the cache and get the proper data back.
But I don't see any cache hit/miss data in actuator/prometheus.
In my application.yml file I have added the following:
cache:
  redis:
    enable-statistics: 'true'
I would assume that in order for Spring Boot cache monitoring to apply (including hits/misses), you need to rely on auto-configuration.
In your case you are creating the RedisTemplate yourself, so enable-statistics is probably not actually applied.
Can you remove the RedisTemplate creation and use the @Cacheable annotation abstraction? That way any supported cache library will work out of the box, without you having to create a @Bean and configure it manually.
Otherwise, if you want to enable statistics on a cache manager manually, you will need to call RedisCacheManager.RedisCacheManagerBuilder.enableStatistics():
https://docs.spring.io/spring-data/redis/docs/current/api/org/springframework/data/redis/cache/RedisCacheManager.RedisCacheManagerBuilder.html
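For example, a minimal sketch (the TTL is an assumption, adjust it to your setup):
@Bean
public RedisCacheManager cacheManager(RedisConnectionFactory connectionFactory) {
    return RedisCacheManager.builder(connectionFactory)
            .cacheDefaults(RedisCacheConfiguration.defaultCacheConfig()
                    .entryTtl(Duration.ofMinutes(10)))   // assumed TTL
            .enableStatistics()                          // exposes hit/miss counters to Micrometer
            .build();
}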
For Reference:
Auto-configuration enables the instrumentation of all available Cache instances on startup, with metrics prefixed with cache. Cache instrumentation is standardized for a basic set of metrics. Additional, cache-specific metrics are also available.
Metrics are tagged by the name of the cache and by the name of the CacheManager, which is derived from the bean name.
Only caches that are configured on startup are bound to the registry. For caches not defined in the cache's configuration, such as caches created on the fly or programmatically after the startup phase, an explicit registration is required. A CacheMetricsRegistrar bean is made available to make that process easier.
I had exactly the same question and spent a good number of hours trying to figure out how to enable cache metrics for my manually created RedisTemplate instance.
What I eventually realised is that only the RedisCache class collects and exposes CacheStatistics through its getStatistics() method. As far as I can see there is nothing like that for RedisTemplate, which means you either need to switch to using RedisCache through RedisCacheManager and the @Cacheable annotation, or implement your own metrics collection.
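As a rough sketch of the @Cacheable route (the cache name "someData" and the loader method are made up for illustration):
@Cacheable(cacheNames = "someData")           // hypothetical cache name, served by the RedisCacheManager
public SomeData findSomeData(String key) {
    return loadSomeDataFromBackend(key);      // hypothetical expensive lookup, only hit on cache misses
}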
I have Ignite server nodes in my application with the following configuration; the application is clustered, hence there can be multiple Ignite servers.
Ignite config looks like this:
@Bean
public Ignite igniteInstance(JdbcIpFinderDialect ipFinderDialect, DataSource dataSource) {
    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setGridLogger(new Slf4jLogger());
    cfg.setMetricsLogFrequency(0);
    TcpDiscoverySpi discoSpi = new TcpDiscoverySpi()
            .setIpFinder(new TcpDiscoveryJdbcIpFinder(ipFinderDialect)
                    .setDataSource(dataSource)
                    .setInitSchema(false));
    cfg.setDiscoverySpi(discoSpi);
    cfg.setCacheConfiguration(cacheConfigurations.toArray(new CacheConfiguration[0]));
    cfg.setFailureDetectionTimeout(igniteFailureDetectionTimeout);
    return Ignition.start(cfg);
}
But at some point, after running for a day or so, Ignite falls over with errors along the following lines.
o.a.i.spi.discovery.tcp.TcpDiscoverySpi : Node is out of topology (probably, due to short-time network problems
o.a.i.i.m.d.GridDiscoveryManager : Local node SEGMENTED: TcpDiscoveryNode [id=db3eb958-df2c-4211-b2b4-ba660bc810b0, addrs=[10.0.0.1], sockAddrs=[sd-9fdb-a8cb.nam.nsroot.net/10.0.0.1:47500], discPort=47500, order=1, intOrder=1, lastExchangeTime=1612755975209, loc=true, ver=2.7.5#20190603-sha1:be4f2a15, isClient=false]
ROOT : Critical system error detected. Will be handled accordingly to configured handler [hnd=StopNodeFailureHandler [super=AbstractFailureHandler [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext [type=SEGMENTATION, err=null]]
o.a.i.i.p.failure.FailureProcessor : Ignite node is in invalid state due to a critical failure.
ROOT : Stopping local node on Ignite failure: [failureCtx=FailureContext [type=SEGMENTATION, err=null]]
o.a.i.i.m.d.GridDiscoveryManager : Node FAILED: TcpDiscoveryNode [id=4d84f811-1c04-4f80-b269-a0003fbf7861, addrs=[10.0.0.1], sockAddrs=[sd-dc95-412b.nam.nsroot.net/10.0.0.1:47500], discPort=47500, order=2, intOrder=2, lastExchangeTime=1612707966704, loc=false, ver=2.7.5#20190603-sha1:be4f2a15, isClient=false]
o.a.i.i.p.cache.GridCacheProcessor : Stopped cache [cacheName=cacheOne]
o.a.i.i.p.cache.GridCacheProcessor : Stopped cache [cacheName=cacheTwo]
And whenever my application's client nodes try to write to the server cache, they fail with an error:
java.lang.IllegalStateException: class org.apache.ignite.internal.processors.cache.CacheStoppedException: Failed to perform cache operation (cache is stopped): cacheOne
I am looking for a way to restart my Ignite server node if it fails with such SEGMENTATION faults (or any other critical failure). Some suggestions say that I have to extend AbstractFailureHandler and pass that implementation to setFailureHandler, but I failed to find any examples.
You cannot restart an Ignite server node, so if you're using it in a Spring context you need a new context (which usually means restarting the application).
A client node will try to reconnect, but if it can't, the same applies.
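If you still want to hook into such failures, Ignite lets you plug a handler into IgniteConfiguration.setFailureHandler(). A minimal sketch (the exit-and-let-the-orchestrator-restart policy is an assumption, not something prescribed above):
// Inside the igniteInstance() bean, before Ignition.start(cfg):
cfg.setFailureHandler(new AbstractFailureHandler() {
    @Override
    protected boolean handle(Ignite ignite, FailureContext failureCtx) {
        // Halt the JVM so an external supervisor (systemd, Kubernetes, ...)
        // restarts the whole application and, with it, a fresh Ignite node.
        System.exit(1);
        return true; // node is treated as invalidated
    }
});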
I need a Java cache with file storage that survives JVM crashes.
Previously I used Ehcache, configured with .heap().disk().
However, it has a problem with unclean JVM shutdowns - the next startup clears the store.
My only requirement is that at least part of the data survives a restart.
I tried Hazelcast; however, with the following code snippet, even a subsequent run of the program prints "null".
Please suggest how to configure Hazelcast so that cache.put is written to disk and loaded on startup.
public class HazelcastTest {
    public static void main(String[] args) throws InterruptedException {
        System.setProperty("hazelcast.jcache.provider.type", "server");
        Config config = new Config();
        HotRestartPersistenceConfig hotRestartPersistenceConfig = new HotRestartPersistenceConfig()
                .setEnabled(true)
                .setBaseDir(new File("cache"))
                .setBackupDir(new File("cache/backup"))
                .setParallelism(1)
                .setClusterDataRecoveryPolicy(HotRestartClusterDataRecoveryPolicy.FULL_RECOVERY_ONLY);
        config.setHotRestartPersistenceConfig(hotRestartPersistenceConfig);
        HazelcastInstance instance = Hazelcast.newHazelcastInstance(config);
        CacheConfig<String, String> cacheConfig = new CacheConfig<>();
        cacheConfig.getHotRestartConfig().setEnabled(true);
        cacheConfig.getHotRestartConfig().setFsync(true);
        CachingProvider cachingProvider = Caching.getCachingProvider();
        Cache<String, String> data = cachingProvider.getCacheManager().createCache("data", cacheConfig);
        System.out.println(data.get("test"));
        data.put("test", "value");
        data.close();
        instance.shutdown();
    }
}
Suggestions for other frameworks that could complete the task are also welcome.
@Igor, Hot Restart is an Enterprise feature of Hazelcast. You need to use the Hazelcast Enterprise edition with a valid license key.
Do you really need to store in a file, or just persist cache data somewhere else? If you can use a database, you can use MapStore, which is available in the open source version, and write data to a persistent data store. You can even use write-behind mode to speed up writes.
See this sample project: https://github.com/hazelcast/hazelcast-code-samples/tree/master/distributed-map/mapstore
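A minimal sketch of wiring a MapStore with write-behind (MyJdbcMapStore is a hypothetical MapStore<String, String> implementation backed by your database; the map name and delay are assumptions):
Config config = new Config();
MapStoreConfig mapStoreConfig = new MapStoreConfig()
        .setEnabled(true)
        .setImplementation(new MyJdbcMapStore())   // hypothetical MapStore<String, String> backed by a DB
        .setWriteDelaySeconds(5);                  // > 0 switches to write-behind
config.addMapConfig(new MapConfig("data").setMapStoreConfig(mapStoreConfig));

HazelcastInstance instance = Hazelcast.newHazelcastInstance(config);
IMap<String, String> data = instance.getMap("data"); // entries are persisted via the MapStore and reloaded on demand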
I have published the project
https://github.com/armdev/ignite-spring-boot
with a Spring Data JPA, MySQL and Apache Ignite configuration.
This is the Ignite cache configuration:
@Bean
public Ignite igniteInstance() {
    IgniteConfiguration cfg = new IgniteConfiguration();
    // Setting some custom name for the node.
    cfg.setIgniteInstanceName("springDataNode");
    // Enabling peer-class loading feature.
    cfg.setPeerClassLoadingEnabled(true);
    // Defining and creating a new cache to be used by Ignite Spring Data repository.
    CacheConfiguration ccfg = new CacheConfiguration("FlightCache");
    // Setting SQL schema for the cache.
    ccfg.setIndexedTypes(Long.class, Flight.class);
    cfg.setActiveOnStart(true);
    cfg.setCacheConfiguration(ccfg);
    return Ignition.start(cfg);
}
The project has two APIs; one works without Ignite, but the repository configured with Ignite does not work, and I do not understand the reason.
You need to configure a CacheStore that will operate on top of the MySQL data source.
You need to enable write-through and read-through behavior as well.
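For example, a rough sketch using Ignite's JDBC POJO store (the data source bean name and the omitted field mappings are assumptions; they must match your MySQL schema and the Flight entity):
CacheJdbcPojoStoreFactory<Long, Flight> storeFactory = new CacheJdbcPojoStoreFactory<>();
storeFactory.setDataSourceBean("dataSource");   // assumed name of the MySQL DataSource bean
// storeFactory.setTypes(...) would describe the Flight <-> table mapping (omitted here)

ccfg.setCacheStoreFactory(storeFactory);
ccfg.setReadThrough(true);    // load missing entries from MySQL on cache misses
ccfg.setWriteThrough(true);   // propagate cache writes back to MySQL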
I would like to configure a distributed cache with Apache Ignite using the JCache API (JSR107, javax.cache). Is this possible?
The examples I have found either create a local cache with the JCache API or create a distributed cache (or datagrid) using the Apache Ignite API.
JCache allows you to provide provider-specific configuration when creating a cache, i.e., you can do this:
// Get or create a cache manager.
CacheManager cacheMgr = Caching.getCachingProvider().getCacheManager();
// This is an Ignite configuration object (org.apache.ignite.configuration.CacheConfiguration).
CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>();
// Specify cache mode and/or any other Ignite-specific configuration properties.
cfg.setCacheMode(CacheMode.PARTITIONED);
// Create a cache based on the configuration created above.
Cache<Integer, String> cache = cacheMgr.createCache("a", cfg);
Also note that partitioned mode is actually the default one in Ignite, so you are not required to specify it explicitly.
UPD. In addition, the CachingProvider.getCacheManager(..) method accepts a provider-specific URI that, in the case of Ignite, should point to an XML configuration file. Discovery, communication and other parameters can be provided there.
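A small sketch of that (the file path is an assumption):
// Pass an Ignite XML configuration to the JCache provider via the manager URI.
CacheManager cacheMgr = Caching.getCachingProvider().getCacheManager(
        URI.create("file:///opt/app/ignite-config.xml"),   // assumed location of the Ignite XML config
        Thread.currentThread().getContextClassLoader());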
Please note that the JCache specification does not cover all of the configuration that applies to individual cache providers when it comes to configuring a grid through the CacheManager. Creating a CacheManager is standardized, but not everything relevant to how the manager itself is configured.
The following code demonstrates how to create a grid using Apache Ignite in Spring Boot:
@Bean
@SuppressWarnings("unchecked")
public org.apache.ignite.cache.spring.SpringCacheManager cacheManager() {
    IgniteConfiguration igniteConfiguration = new IgniteConfiguration();
    igniteConfiguration.setGridName("petclinic-ignite-grid");
    //igniteConfiguration.setClassLoader(dynamicClassLoaderWrapper());
    igniteConfiguration.setCacheConfiguration(this.createDefaultCache("petclinic"),
            this.createDefaultCache("org.hibernate.cache.spi.UpdateTimestampsCache"),
            this.createDefaultCache("org.hibernate.cache.internal.StandardQueryCache"));
    SpringCacheManager springCacheManager = new SpringCacheManager();
    springCacheManager.setConfiguration(igniteConfiguration);
    springCacheManager.setDynamicCacheConfiguration(this.createDefaultCache(null));
    return springCacheManager;
}

private org.apache.ignite.configuration.CacheConfiguration createDefaultCache(String name) {
    org.apache.ignite.configuration.CacheConfiguration cacheConfiguration = new org.apache.ignite.configuration.CacheConfiguration();
    cacheConfiguration.setName(name);
    cacheConfiguration.setCacheMode(CacheMode.PARTITIONED);
    cacheConfiguration.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
    cacheConfiguration.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
    cacheConfiguration.setStatisticsEnabled(true);
    cacheConfiguration.setEvictSynchronized(true);
    return cacheConfiguration;
}
If we were to create another instance of this service and have it register to the same grid via igniteConfiguration.setGridName("petclinic-ignite-grid"), an IMDG would be formed. Please note that the two service instances with this partitioned, embedded distributed cache must be able to talk to each other on the required ports. Please refer to Apache Ignite - Data Grid for more details.
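For instance, a sketch of pinning down discovery between the two instances (the addresses and port range are assumptions):
// Static IP discovery so the two embedded nodes can find each other
// on Ignite's default discovery port range.
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(Arrays.asList("10.0.0.1:47500..47509", "10.0.0.2:47500..47509"));  // assumed hosts
igniteConfiguration.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));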
Hope this helps.