How to restart an Ignite server with Spring config?

I have Ignite server nodes in my application with the following configuration; the application is clustered, so there can be multiple Ignite servers.
The Ignite config looks like this:
@Bean
public Ignite igniteInstance(JdbcIpFinderDialect ipFinderDialect, DataSource dataSource) {
    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setGridLogger(new Slf4jLogger());
    cfg.setMetricsLogFrequency(0);
    TcpDiscoverySpi discoSpi = new TcpDiscoverySpi()
            .setIpFinder(new TcpDiscoveryJdbcIpFinder(ipFinderDialect)
                    .setDataSource(dataSource)
                    .setInitSchema(false));
    cfg.setDiscoverySpi(discoSpi);
    cfg.setCacheConfiguration(cacheConfigurations.toArray(new CacheConfiguration[0]));
    cfg.setFailureDetectionTimeout(igniteFailureDetectionTimeout);
    return Ignition.start(cfg);
}
But at some point, after running for a day or so, Ignite falls over with errors along the lines of the following:
o.a.i.spi.discovery.tcp.TcpDiscoverySpi : Node is out of topology (probably, due to short-time network problems
o.a.i.i.m.d.GridDiscoveryManager : Local node SEGMENTED: TcpDiscoveryNode [id=db3eb958-df2c-4211-b2b4-ba660bc810b0, addrs=[10.0.0.1], sockAddrs=[sd-9fdb-a8cb.nam.nsroot.net/10.0.0.1:47500], discPort=47500, order=1, intOrder=1, lastExchangeTime=1612755975209, loc=true, ver=2.7.5#20190603-sha1:be4f2a15, isClient=false]
ROOT : Critical system error detected. Will be handled accordingly to configured handler [hnd=StopNodeFailureHandler [super=AbstractFailureHandler [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext [type=SEGMENTATION, err=null]]
o.a.i.i.p.failure.FailureProcessor : Ignite node is in invalid state due to a critical failure.
ROOT : Stopping local node on Ignite failure: [failureCtx=FailureContext [type=SEGMENTATION, err=null]]
o.a.i.i.m.d.GridDiscoveryManager : Node FAILED: TcpDiscoveryNode [id=4d84f811-1c04-4f80-b269-a0003fbf7861, addrs=[10.0.0.1], sockAddrs=[sd-dc95-412b.nam.nsroot.net/10.0.0.1:47500], discPort=47500, order=2, intOrder=2, lastExchangeTime=1612707966704, loc=false, ver=2.7.5#20190603-sha1:be4f2a15, isClient=false]
o.a.i.i.p.cache.GridCacheProcessor : Stopped cache [cacheName=cacheOne]
o.a.i.i.p.cache.GridCacheProcessor : Stopped cache [cacheName=cacheTwo]
And whenever my application's client nodes try to write to the server cache, they fail with an error:
java.lang.IllegalStateException: class org.apache.ignite.internal.processors.cache.CacheStoppedException: Failed to perform cache operation (cache is stopped): cacheOne
I am looking for a way to restart my Ignite server node if it fails with such SEGMENTATION faults (or anything else). Some suggestions say I have to implement AbstractFailureHandler and pass that implementation to setFailureHandler, but I have failed to find any examples.

You cannot restart an Ignite server node in place, so if you're running it inside a Spring context you need a new context, which usually means restarting the application.
A client node will try to reconnect on its own, but if it can't, the same applies.
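If the goal is simply to get a fresh node after a SEGMENTATION failure, one common pattern is to let a failure handler bring the whole JVM down so that an external supervisor (systemd, Kubernetes, a service wrapper) restarts the application and, with it, the Spring context. A minimal sketch of such a handler, assuming it is registered on the IgniteConfiguration from the question (the class name and exit code are made up for illustration):

import org.apache.ignite.Ignite;
import org.apache.ignite.failure.AbstractFailureHandler;
import org.apache.ignite.failure.FailureContext;

/** Sketch: stop the JVM on a critical failure so an external supervisor restarts the app. */
public class ExitJvmFailureHandler extends AbstractFailureHandler {
    @Override
    protected boolean handle(Ignite ignite, FailureContext failureCtx) {
        System.err.println("Ignite critical failure, exiting JVM: " + failureCtx);
        // Exit from a separate thread so the failure-processing thread isn't blocked.
        new Thread(() -> System.exit(1), "ignite-failure-exit").start();
        return true; // tells Ignite the node should be treated as invalidated
    }
}

It would be registered before Ignition.start(cfg), e.g. cfg.setFailureHandler(new ExitJvmFailureHandler());. Ignite also ships StopNodeOrHaltFailureHandler, which can halt the JVM for you; either way, the actual restart has to come from outside the stopped process.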

Related

Apache Ignite: enabling Peer Class loading did not auto deploy StoreAdapter and Pojo classes

I am using Apache Ignite 2.10.0. I want the read/write-through feature to load data into the cache from, and write it back to, a third-party persistence store, so I implemented PersonStore, which extends the CacheStoreAdapter class. I want my classes (PersonStore, POJOs and others) to be auto-deployed remotely from the client node to the Ignite server node, so I enabled peerClassLoading in the CacheConfiguration. On starting the server I see:
java.lang.RuntimeException: Failed to create an instance of com.demoIgnite.adapter.PersonStore
at javax.cache.configuration.FactoryBuilder$ClassFactory.create(FactoryBuilder.java:134)
.....
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: com.demoIgnite.adapter.PersonStore
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at javax.cache.configuration.FactoryBuilder$ClassFactory.create(FactoryBuilder.java:130)
However, if I manually place the jar into the Ignite libs directory it works absolutely fine. But with that approach I have to rebuild, replace and restart the Ignite server every time the code changes, which I wanted to avoid.
I am new to Apache Ignite, and after reading the Ignite documentation I assumed this would be handled automatically when peerClassLoading is enabled. Please help me if I am missing something, and please suggest a way to automate this.
My cache configuration:
CacheConfiguration<String, Person> cacheCfg = new CacheConfiguration<>();
cacheCfg.setName("person-store");
cacheCfg.setCacheMode(CacheMode.PARTITIONED);
cacheCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
cacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
cacheCfg.setReadThrough(true);
cacheCfg.setWriteThrough(true);
cacheCfg.setCacheStoreFactory(FactoryBuilder.factoryOf(PersonStore.class));
My IgniteConfiguration:
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setIgniteInstanceName("my-ignite");
cfg.setClientMode(true);
cfg.setPeerClassLoadingEnabled(true);
cfg.setDeploymentMode(DeploymentMode.CONTINUOUS);
cfg.setCacheConfiguration(cacheCfg);
TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
ipFinder.setAddresses(Collections.singletonList("127.0.0.1:10800"));
cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));
Cache stores and POJO classes cannot be peer-loaded; they have to be on the server's classpath (for example, in the libs directory, as you observed).
Peer class loading mostly covers compute callables, services (event-based mode), some listeners, etc.
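For contrast, the kind of thing peer class loading does ship automatically is a compute closure that exists only on the client. An illustrative sketch (not from the original question), assuming ignite is an already-started client instance with peer class loading enabled on both sides:

// This callable exists only on the client's classpath; with peerClassLoadingEnabled(true)
// on client and servers it is peer-loaded to whichever server node executes it.
Integer result = ignite.compute().call((IgniteCallable<Integer>) () -> 21 * 2);
System.out.println("Computed on a server node: " + result);

Cache store factories, by contrast, are instantiated on the server nodes when the cache starts, and Ignite does not peer-load those classes, which is why the jar has to be deployed to the server's libs directory (or otherwise put on its classpath).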

Apache Ignite Key-Value Map high CPU usage

We have configured a key-value cache using the Ignite and Spring Cache integration; however, we are facing high CPU usage when we try to access cached objects.
The cache is initialized with the following parameters:
@Bean
public SpringCacheManager getCacheManager(@Autowired Ignite ignite) {
    SpringCacheManager cacheManager = new SpringCacheManager();
    CacheConfiguration<Object, Object> cacheConfig =
            new CacheConfiguration<Object, Object>("defaultDynamicCacheConfig")
                    .setCacheMode(CacheMode.REPLICATED);
    cacheManager.setDynamicCacheConfiguration(cacheConfig);
    return cacheManager;
}
We have tried various settings such as setOnheapCacheEnabled(true) and setSqlOnheapCacheEnabled(true) (applied roughly as in the sketch below), but these settings did not help. We also tried a near cache, but since we are running Ignite in server mode it failed.
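For reference, this is roughly how those flags would be applied to the dynamic cache configuration. A minimal sketch based on the bean above, not a confirmed fix for the CPU issue:

// Assumed variant of the cache configuration above with the on-heap options enabled.
CacheConfiguration<Object, Object> cacheConfig =
        new CacheConfiguration<Object, Object>("defaultDynamicCacheConfig")
                .setCacheMode(CacheMode.REPLICATED)
                .setOnheapCacheEnabled(true)      // keep an on-heap copy of entries in addition to off-heap storage
                .setSqlOnheapCacheEnabled(true);  // on-heap row cache for SQL queries
cacheManager.setDynamicCacheConfiguration(cacheConfig);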
We are seeing the following in the stack trace when profiling:
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1717)
org.apache.ignite.internal.binary.BinaryUtils.doReadObject(BinaryUtils.java:1778)
...
org.apache.ignite.internal.binary.BinaryObjectImpl.deserializeValue(BinaryObjectImpl.java:798)
org.apache.ignite.internal.binary.BinaryObjectImpl.value(BinaryObjectImpl.java:143)
org.apache.ignite.internal.processors.cache.CacheObjectUtils.unwrapBinary(CacheObjectUtils.java:177)
org.apache.ignite.internal.processors.cache.CacheObjectUtils.unwrapBinaryIfNeeded(CacheObjectUtils.java:67)
org.apache.ignite.internal.processors.cache.CacheObjectContext.unwrapBinaryIfNeeded(CacheObjectContext.java:125)
org.apache.ignite.internal.processors.cache.GridCacheContext.unwrapBinaryIfNeeded(GridCacheContext.java:1773)

Apache Ignite: Disable peer class loading

I am trying to connect to an Apache Ignite server from a Spring Boot application.
Example code:
ClientConfiguration cfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");
try (IgniteClient client = Ignition.startClient(cfg)) {
    Object cachedName = client.query(
            new SqlFieldsQuery("SELECT name from Person WHERE id=?").setArgs("foo").setSchema("PUBLIC")
    ).getAll().iterator().next().iterator().next();
}
I get this error:
Caused by: class org.apache.ignite.IgniteCheckedException: Remote node
has peer class loading enabled flag different from local
[locId8=459833a1, locPeerClassLoading=true, rmtId8=83ea88ca,
rmtPeerClassLoading=false,
rmtAddrs=[ignite-0.ignite.default.svc.cluster.local/0:0:0:0:0:0:0:1%lo,
/10.4.2.49, /127.0.0.1], rmtNode=ClusterNode
[id=83ea88ca-da77-4887-9357-267ac7397767, order=1,
addr=[0:0:0:0:0:0:0:1%lo, 10.x.x.x, 127.0.0.1], daemon=false]]
So the PeerClassLoading needs to be deactivated in my Java code. How can I do that?
As noted in the comments, the error is from a thick client (or another server) connecting to the cluster but the code is from a thin client.
If you’re just reading/writing data and don’t need to execute code, the thin client is a perfectly good option.
To use a thick client, you need to make sure both the thick client and server have the same peer-class loading configuration. That would be either:
<property name="peerClassLoadingEnabled" value="false"/>
in your Spring configuration file. Or:
IgniteConfiguration cfg = new IgniteConfiguration()
...
.setPeerClassLoadingEnabled(false);
(I’ve used false here as that’s your current server configuration. Having said that, you probably want it to be switched on.)
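Putting it together, a thick client matching that server would be started along these lines. This is only a sketch, and the discovery address/port range is an assumption:

// Thick client: peer class loading must match the server (false, per the error above).
IgniteConfiguration cfg = new IgniteConfiguration()
        .setClientMode(true)
        .setPeerClassLoadingEnabled(false)
        .setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(
                new TcpDiscoveryVmIpFinder().setAddresses(
                        Collections.singletonList("127.0.0.1:47500..47509")))); // assumed discovery endpoint
Ignite thickClient = Ignition.start(cfg);

Note that a thick client joins via the discovery port (47500 by default), not the thin-client port 10800 used in the ClientConfiguration above.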

How to safely resume cache operation on client side after Hazelcast restart?

Whenever I restart the Hazelcast server without restarting the client in my Spring Boot application, I get the following error:
03-01-2018 16:44:17.966 [http-nio-8080-exec-7] ERROR o.a.c.c.C.[.[.[.[dispatcherServlet].log - Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is com.hazelcast.client.HazelcastClientNotActiveException: Partition does not have owner. partitionId : 203] with root cause
java.io.IOException: Partition does not have owner. partitionId : 203
at com.hazelcast.client.spi.impl.ClientSmartInvocationServiceImpl.invokeOnPartitionOwner(ClientSmartInvocationServiceImpl.java:43)
at com.hazelcast.client.spi.impl.ClientInvocation.invokeOnSelection(ClientInvocation.java:142)
at com.hazelcast.client.spi.impl.ClientInvocation.invoke(ClientInvocation.java:122)
at com.hazelcast.client.spi.ClientProxy.invokeOnPartition(ClientProxy.java:152)
at com.hazelcast.client.spi.ClientProxy.invoke(ClientProxy.java:147)
at com.hazelcast.client.proxy.ClientMapProxy.getInternal(ClientMapProxy.java:245)
at com.hazelcast.client.proxy.ClientMapProxy.get(ClientMapProxy.java:240)
at com.hazelcast.spring.cache.HazelcastCache.lookup(HazelcastCache.java:139)
at com.hazelcast.spring.cache.HazelcastCache.get(HazelcastCache.java:57)
at org.springframework.cache.interceptor.AbstractCacheInvoker.doGet(AbstractCacheInvoker.java:71)
If I enable hot-restart, the issue is solved. But is there a way to resume the client application without restarting it while hot-restart is disabled?
The Hazelcast client tries to reconnect to the cluster if the connection drops. It uses the ClientNetworkConfig.connectionAttemptLimit and ClientNetworkConfig.connectionAttemptPeriod elements to configure how it retries: connectionAttemptLimit defines the number of attempts after a disconnection, and connectionAttemptPeriod defines the period between two retries in milliseconds. Please see the usage example below:
ClientConfig clientConfig = new ClientConfig();
clientConfig.getNetworkConfig().setConnectionAttemptLimit(5);
clientConfig.getNetworkConfig().setConnectionAttemptPeriod(5000);
Starting with Hazelcast 3.9, you can use the reconnect-mode property to configure how the client will reconnect to the cluster after it disconnects. It has three options:
The option OFF disables the reconnection.
ON enables reconnection in a blocking manner where all the waiting invocations will be blocked until a cluster connection is established or failed.
The option ASYNC enables reconnection in a non-blocking manner where all the waiting invocations will receive a HazelcastClientOfflineException.
Its default value is ON. You can see a configuration example below:
ClientConfig clientConfig = new ClientConfig();
clientConfig.getConnectionStrategyConfig()
.setReconnectMode(ClientConnectionStrategyConfig.ReconnectMode.ON);
By using these configuration elements, you can resume your client without restarting it.
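For completeness, both pieces of configuration take effect when the client instance is created. A minimal sketch combining the two examples above:

ClientConfig clientConfig = new ClientConfig();
clientConfig.getNetworkConfig().setConnectionAttemptLimit(5);      // retry attempts after a disconnect
clientConfig.getNetworkConfig().setConnectionAttemptPeriod(5000);  // wait 5 s between attempts
clientConfig.getConnectionStrategyConfig()
        .setReconnectMode(ClientConnectionStrategyConfig.ReconnectMode.ASYNC);

HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);
// With ASYNC, operations issued while the cluster is unreachable fail fast with
// HazelcastClientOfflineException instead of blocking, so the caller can degrade gracefully.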

ClassNotFound error for Ignite User Defined Function in Flink Cluster

I am trying to cache data streamed by Apache Flink into an Apache Ignite cache. I also want to run a query that uses a user-defined function. As per the Ignite documentation, I am using the cacheConf.setSqlFunctionClasses(GetCacheKey.class) setting while declaring the cache. The class declaration is as follows:
public static class GetCacheKey implements Serializable {
    @QuerySqlFunction
    public static long getCacheKey(int mac, long local) {
        long key = (local << 5) + mac;
        return key;
    }
}
When I run the code locally with Apache Flink, it works. But when I submit the code for execution on the Flink cluster, I get an error that the GetCacheKey class is not found. What could be the reason behind this?
Please check that GetCacheKey.class is on the classpath of the Ignite nodes.
The Flink directory must be available on every worker under the same path. You can use a shared NFS directory, or copy the entire Flink directory to every worker node.
Also make sure the Ignite libs are present on the worker nodes' classpath.
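Once GetCacheKey is actually on the server classpath, registering and calling the function looks roughly like this. A sketch based on the question's setSqlFunctionClasses call, assuming an already-started instance named ignite; the cache name, value type and column names are assumptions:

// Register the class carrying the @QuerySqlFunction method on the cache.
CacheConfiguration<Long, Reading> cacheConf = new CacheConfiguration<>("readings"); // hypothetical cache and value type
cacheConf.setIndexedTypes(Long.class, Reading.class);   // assumes Reading has queryable fields 'mac' and 'local'
cacheConf.setSqlFunctionClasses(GetCacheKey.class);

IgniteCache<Long, Reading> cache = ignite.getOrCreateCache(cacheConf);

// The registered function is then available to SQL by method name.
List<List<?>> rows = cache.query(
        new SqlFieldsQuery("SELECT getCacheKey(mac, local) FROM Reading")).getAll();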
