Apache Ignite: Disable peer class loading - spring-boot

I am trying to connect to an Apache Ignite server from a Spring Boot application.
Example code:
ClientConfiguration cfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");
try (IgniteClient client = Ignition.startClient(cfg)) {
    Object cachedName = client.query(
            new SqlFieldsQuery("SELECT name from Person WHERE id=?").setArgs("foo").setSchema("PUBLIC")
    ).getAll().iterator().next().iterator().next();
}
I get this error:
Caused by: class org.apache.ignite.IgniteCheckedException: Remote node
has peer class loading enabled flag different from local
[locId8=459833a1, locPeerClassLoading=true, rmtId8=83ea88ca,
rmtPeerClassLoading=false,
rmtAddrs=[ignite-0.ignite.default.svc.cluster.local/0:0:0:0:0:0:0:1%lo,
/10.4.2.49, /127.0.0.1], rmtNode=ClusterNode
[id=83ea88ca-da77-4887-9357-267ac7397767, order=1,
addr=[0:0:0:0:0:0:0:1%lo, 10.x.x.x, 127.0.0.1], daemon=false]]
So the PeerClassLoading needs to be deactivated in my Java code. How can I do that?

As noted in the comments, the error is from a thick client (or another server) connecting to the cluster but the code is from a thin client.
If you’re just reading/writing data and don’t need to execute code, the thin client is a perfectly good option.
To use a thick client, you need to make sure both the thick client and server have the same peer-class loading configuration. That would be either:
<property name="peerClassLoadingEnabled" value="false" />
in your Spring configuration file. Or:
IgniteConfiguration cfg = new IgniteConfiguration()
        ...
        .setPeerClassLoadingEnabled(false);
(I’ve used false here as that’s your current server configuration. Having said that, you probably want it to be switched on.)
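For completeness, a minimal thick-client start that matches the server's current setting could look roughly like this (a sketch only; discovery and address details depend on your cluster):

IgniteConfiguration cfg = new IgniteConfiguration()
        .setClientMode(true)                  // thick client: joins the cluster topology
        .setPeerClassLoadingEnabled(false);   // must match the servers' setting

try (Ignite ignite = Ignition.start(cfg)) {
    // run the same SQL here, e.g. via ignite.getOrCreateCache(...).query(new SqlFieldsQuery(...))
}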

Related

Apache Ignite: enabling Peer Class loading did not auto deploy StoreAdapter and Pojo classes

I am using Apache Ignite 2.10.0. I want the read/write-through feature to load/write data into the cache from and to a third-party persistence store, so I implemented a PersonStore which extends the CacheStoreAdapter class. I want my classes (PersonStore, POJOs and others) to be auto-deployed from the client node to the remote Ignite server node. To do this I enabled peerClassLoading in the configuration, but on starting the server I see
java.lang.RuntimeException: Failed to create an instance of com.demoIgnite.adapter.PersonStore
at javax.cache.configuration.FactoryBuilder$ClassFactory.create(FactoryBuilder.java:134)
.....
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: com.demoIgnite.adapter.PersonStore
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at javax.cache.configuration.FactoryBuilder$ClassFactory.create(FactoryBuilder.java:130)
However, if I manually place the jar into the Ignite libs folder it works absolutely fine. But with that approach I have to rebuild, replace and restart the Ignite server every time there is a code modification, which I wanted to avoid.
I am new to Apache Ignite and, after reading the Ignite documentation, I assumed this would be taken care of automatically when peerClassLoading is enabled. Please point out what I am missing, and please suggest a way to automate this.
My cache configuration:
CacheConfiguration<String, Person> cacheCfg = new CacheConfiguration<String, Person>();
cacheCfg.setName("person-store");
cacheCfg.setCacheMode(CacheMode.PARTITIONED);
cacheCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
cacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
cacheCfg.setReadThrough(true);
cacheCfg.setWriteThrough(true);
cacheCfg.setCacheStoreFactory(FactoryBuilder.factoryOf(PersonStore.class));
IgniteConfiguration:
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setIgniteInstanceName("my-ignite");
cfg.setClientMode(true);
cfg.setPeerClassLoadingEnabled(true);
cfg.setDeploymentMode(DeploymentMode.CONTINUOUS);
cfg.setCacheConfiguration(cacheCfg);
TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
ipFinder.setAddresses(Collections.singletonList("127.0.0.1:10800"));
cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));
Cache stores and POJO classes cannot be peer loaded.
Peer loading is mostly for compute callables, services (event based mode), some listeners, etc.
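For contrast, a compute closure is the kind of code peer class loading will ship automatically; a rough sketch using the standard Ignite API (the closure body is purely illustrative):

IgniteConfiguration cfg = new IgniteConfiguration()
        .setClientMode(true)
        .setPeerClassLoadingEnabled(true);   // must also be enabled on the server nodes

try (Ignite client = Ignition.start(cfg)) {
    // The closure class is transferred to the server nodes by peer class loading;
    // a CacheStore or POJO model class would not be.
    client.compute().broadcast((IgniteRunnable) () ->
            System.out.println("Runs on every server node"));
}

A cache store, on the other hand, is instantiated when the cache starts on the server nodes, so its class (and the model classes it uses) must already be on the server's classpath, e.g. as a jar in the libs folder.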

How to set up embedded Jetty to use JDBC sessions

I am using embedded Jetty (group: 'org.eclipse.jetty', name: 'jetty-webapp', version: '9.4.27.v20200227') and I am trying to set it up programmatically to use JDBC for session storage. All the documentation/examples I can find are about standalone Jetty.
Do you know how to set it up?
I don't know all that much about JDBC or session storage, but looking at the documentation Persistent Sessions: JDBC for standalone Jetty, it tells you to enable the module session-store-jdbc. By looking at session-store-jdbc.mod you can see that it uses etc/sessions/jdbc/session-store.xml, and these XML files can be translated directly into Java code.
So it looks like it's adding a JDBCSessionDataStoreFactory as a bean onto the server. Some equivalent code that you could try would look something like:
// Configure a JDBCSessionDataStoreFactory.
JDBCSessionDataStoreFactory sessionDataStoreFactory = new JDBCSessionDataStoreFactory();
sessionDataStoreFactory.setGracePeriodSec(3600);
sessionDataStoreFactory.setSavePeriodSec(0);
sessionDataStoreFactory.setDatabaseAdaptor(...);
JDBCSessionDataStore.SessionTableSchema schema = new JDBCSessionDataStore.SessionTableSchema();
schema.setAccessTimeColumn("accessTime");
schema.setContextPathColumn("contextPath");
// ... more configuration here
sessionDataStoreFactory.setSessionTableSchema(schema);
// Add the SessionDataStoreFactory as a bean on the server.
server.addBean(sessionDataStoreFactory);
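The setDatabaseAdaptor(...) call above needs a DatabaseAdaptor describing your database; a minimal sketch, with the driver class and JDBC URL as placeholders you would replace with your own:

// Placeholder driver class and URL -- substitute your own database here.
DatabaseAdaptor adaptor = new DatabaseAdaptor();
adaptor.setDriverInfo("com.mysql.cj.jdbc.Driver", "jdbc:mysql://localhost:3306/sessions");
// Alternatively, adaptor.setDatasourceName("jdbc/sessions") to look up a JNDI DataSource.
sessionDataStoreFactory.setDatabaseAdaptor(adaptor);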

IIB is not finding the JDBC driver

I have created a JDBCProvider configurable service in IBM Integration Bus (IIB v10) on Windows called TESTDDBB, which is also the name of the database. I have a JavaCompute node where I'm trying to get a connection to call an Oracle function
TESTDDBB
connectionUrlFormat='jdbc:oracle:thin:@x.x.x.x:1521:TESTDDBB;'
connectionUrlFormatAttr1=''
connectionUrlFormatAttr2=''
connectionUrlFormatAttr3=''
connectionUrlFormatAttr4=''
connectionUrlFormatAttr5=''
databaseName='TESTDDBB'
databaseSchemaNames='PROM'
databaseType='Oracle'
databaseVersion='11.2.0.4.0'
description='default_Description'
environmentParms='default_none'
jarsURL='C:\jdbc\lib'
jdbcProviderXASupport='true'
maxConnectionPoolSize='200'
portNumber='1521'
securityIdentity='devCredentials'
serverName='x.x.x.x'
type4DatasourceClassName='oracle.jdbc.xa.client.OracleXADataSource'
type4DriverClassName='oracle.jdbc.OracleDriver'
useDeployedJars='true'
public class GetUserData_JavaCompute extends MbJavaComputeNode {
    public void evaluate(MbMessageAssembly inAssembly) throws MbException {
        ...
        Connection conn = getJDBCType4Connection("TESTDDBB", JDBC_TransactionType.MB_TRANSACTION_AUTO);
        try (CallableStatement callableStmt = conn.prepareCall("{ ? = call PROM.pkg_prop_2.getUserData(?)}")) {
            ...
        }
        ...
    }
}
The problem is that when IIB tries to get the connection, it isn't finding the datasource Java class
...
com.ibm.broker.jdbctype4.jdbcdbasemgr.JDBCType4Connection#-53d4c850.createXAConnection 'java.lang.ClassNotFoundException: oracle.jdbc.xa.client.OracleXADataSource at java.net.URLClassLoader.findClass(URLClassLoader.java:609) at com.ibm.broker.classloading.JavaResourceClassLoader.findClass(JavaResourceClassLoader.java:181) at com.ibm.broker.classloading.SharedClassLoader.findClass(SharedClassLoader.java:215) at java.lang.ClassLoader.loadClassHelper(ClassLoader.java:925)
...
I have the ojdbc6.jar driver in the folder C:\jdbc\lib and deployed in a shared library on the integration server, a library that is referenced by the REST API app that contains the JavaCompute node. What am I missing? I have tried useDeployedJars both true and false, and jarsURL also with 'C:\jdbc\lib\ojdbc6', without success. Where are the common libraries of the integration server on Windows?
On the one hand, you can place the jar in the root of your Java source folder and reference it in the Integration Toolkit, which lets you use the jar directly. On the other hand, you can configure a JDBC provider for the integration node and use the created alias in the getJDBCType4Connection call. More information about working with databases from a JavaCompute node can be found here.

GemFire - Spring Boot Configuration

I am working on a project that has a requirement for Pivotal GemFire.
I am unable to find a proper tutorial about how to configure GemFire with Spring Boot.
I have created a partitioned region and I want to configure locators as well, but I only need the server-side configuration, as the client is handled by someone else.
I am totally new to Pivotal GemFire and really confused. I have tried creating a cache.xml, but then somehow a cache.out.xml gets created and there are many issues.
@Priyanka -
Best place to start is with the Guides on spring.io. Specifically, have a look at...
"Accessing Data with GemFire"
There is also...
"Cache Data with GemFire", and...
"Accessing GemFire Data with REST"
However, these guides focus mostly on "client-side" application concerns, "data access" (over REST), "caching", etc.
Still, you can use Spring Data GemFire (in a Spring Boot application even) to configure a GemFire Server. I have many examples of this. One in particular...
"Spring Boot GemFire Server Example"
This example demonstrates how to bootstrap a Spring Boot application as a GemFire Server (technically, a peer node in the cluster). Additionally, the GemFire properties are specified in Spring config and can use Spring's normal conventions (property placeholders, SpEL expressions) to configure these properties, like so...
https://github.com/jxblum/spring-boot-gemfire-server-example/blob/master/src/main/java/org/example/SpringBootGemFireServer.java#L59-L84
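Roughly, the linked lines boil down to a GemFire Properties bean along these lines (a sketch only; the values shown here are illustrative, see the repository for the exact configuration):

@Bean
Properties gemfireProperties() {
    Properties gemfireProperties = new Properties();
    gemfireProperties.setProperty("name", "SpringBootGemFireServer");
    gemfireProperties.setProperty("mcast-port", "0");
    gemfireProperties.setProperty("log-level", "config");
    gemfireProperties.setProperty("locators", "localhost[10334]");
    gemfireProperties.setProperty("start-locator", "localhost[10334]"); // embedded Locator
    gemfireProperties.setProperty("jmx-manager", "true");               // GemFire Manager
    gemfireProperties.setProperty("jmx-manager-start", "true");
    return gemfireProperties;
}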
This particular configuration makes the GemFire Server a "GemFire Manager", possibly with an embedded "Locator" (indicated by the start-locator GemFire property, not to be confused with the "locators" GemFire property, which allows our node to join an "existing" cluster) as well as a GemFire CacheServer to serve GemFire cache clients (with a ClientCache).
This example creates a "Factorials" Region, with a CacheLoader (definition here) to populate the "Factorials" Region on cache misses.
Since this example starts an embedded GemFire Manager in the Spring Boot GemFire Server application process, you can even connect to it using Gfsh, like so...
gfsh> connect --jmx-manager=localhost[1099]
Then you can run "gets" on the "Factorials" Region to see it compute factorials of the numeric keys you give it.
To see more advanced configuration, have a look at my other repos, in particular the Contacts Application RI (here).
Hope this helps!
-John
Well, I had the same problem. Let me share what worked for me; in this case I'm using Spring Boot and Pivotal GemFire as a cache client.
Install and run GemFire
Read the 15 minutes quick start guide
Create a locator (let's call it locator1), a server (server1) and a region (region1)
Go to the folder where you started 'Gee Fish' (gfsh), then go to the locator's folder and open the log file; in that file you can find the port your locator is using.
Now let's see the Spring Boot side:
In your application class with the main method, add the @EnableGemfireCaching annotation
In the method (wherever it is) you want to cache, add the @Cacheable("region1") annotation (see the usage sketch after the configuration class below).
Now let's create a configuration file for the caching:
//this is my working class
@Configuration
public class CacheConfiguration {

    @Bean
    ClientCacheFactoryBean gemfireCacheClient() {
        return new ClientCacheFactoryBean();
    }

    @Bean(name = GemfireConstants.DEFAULT_GEMFIRE_POOL_NAME)
    PoolFactoryBean gemfirePool() {
        PoolFactoryBean gemfirePool = new PoolFactoryBean();
        gemfirePool.addLocators(Collections.singletonList(new ConnectionEndpoint("localhost", HERE_GOES_THE_PORT_NUMBER_FROM_STEP_4)));
        gemfirePool.setName(GemfireConstants.DEFAULT_GEMFIRE_POOL_NAME);
        gemfirePool.setKeepAlive(false);
        gemfirePool.setPingInterval(TimeUnit.SECONDS.toMillis(5));
        gemfirePool.setRetryAttempts(1);
        gemfirePool.setSubscriptionEnabled(true);
        gemfirePool.setThreadLocalConnections(false);
        return gemfirePool;
    }

    @Bean
    ClientRegionFactoryBean<Long, Long> getRegion(ClientCache gemfireCache, Pool gemfirePool) {
        ClientRegionFactoryBean<Long, Long> region = new ClientRegionFactoryBean<>();
        region.setName("region1");
        region.setLookupEnabled(true);
        region.setCache(gemfireCache);
        region.setPool(gemfirePool);
        region.setShortcut(ClientRegionShortcut.PROXY);
        return region;
    }
}
That's all! Also, do not forget to make the cached class serializable (implements Serializable), that is, the class your cached method returns.
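To tie steps 1 and 2 of the Spring Boot side together, the main class and the cached method could look roughly like this (the names and the factorial logic are purely illustrative; @EnableGemfireCaching comes from more recent Spring Data GemFire versions, otherwise Spring's plain @EnableCaching plus a GemfireCacheManager bean does the same job):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.data.gemfire.cache.config.EnableGemfireCaching;
import org.springframework.stereotype.Service;

@SpringBootApplication
@EnableGemfireCaching
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

@Service
class FactorialService {
    // The first call for a given key computes the value and puts it into "region1";
    // subsequent calls with the same key are served from the GemFire region.
    @Cacheable("region1")
    public Long factorial(Long n) {
        long result = 1;
        for (long i = 2; i <= n; i++) {
            result *= i;
        }
        return result;
    }
}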

How to use HSQLDB as a datasource in Websphere Application Server?

I am trying to set up a local development infrastructure and I want to use HSQLDB as a datasource with my WAS 6.1. I already know that I have to use Apache DBCP to get connection pooling, but I'm stuck when my application tries to get its first connection.
What I've done
In WAS I created a JDBC provider with the class org.apache.commons.dbcp.cpdsadapter.DriverAdapterCPDS and removed everything from the classpath input field. Then I put commons-dbcp.jar, commons-pool.jar and hsqldb.jar in MYAPPSERVERDIRECTORY/lib/ext.
Then I created a new datasource with that provider. I added the following custom properties:
driver=org.hsqldb.jdbc.JDBCDriver
url=jdbc:hsqldb:file:///C:/mydatabase.db;shutdown=true
user=SA
password=
My Problem
When I run my application and the first connection to the database is made, I get the following exception:
---- Begin backtrace for Nested Throwables
java.sql.SQLException: No suitable driver DSRA0010E: SQL state = 08001, error code = 0
at java.sql.DriverManager.getConnection(DriverManager.java:592)
at java.sql.DriverManager.getConnection(DriverManager.java:196)
at org.apache.commons.dbcp.cpdsadapter.DriverAdapterCPDS.getPooledConnection(DriverAdapterCPDS.java:205)
at com.ibm.ws.rsadapter.spi.InternalGenericDataStoreHelper$1.run(InternalGenericDataStoreHelper.java:918)
at com.ibm.ws.security.util.AccessController.doPrivileged(AccessController.java:118)
at com.ibm.ws.rsadapter.spi.InternalGenericDataStoreHelper.getPooledConnection(InternalGenericDataStoreHelper.java:955)
at com.ibm.ws.rsadapter.spi.WSRdbDataSource.getPooledConnection(WSRdbDataSource.java:1437)
at com.ibm.ws.rsadapter.spi.WSManagedConnectionFactoryImpl.createManagedConnection(WSManagedConnectionFactoryImpl.java:1089)
at com.ibm.ejs.j2c.FreePool.createManagedConnectionWithMCWrapper(FreePool.java:1837)
at com.ibm.ejs.j2c.FreePool.createOrWaitForConnection(FreePool.java:1568)
at com.ibm.ejs.j2c.PoolManager.reserve(PoolManager.java:2338)
at com.ibm.ejs.j2c.ConnectionManager.allocateMCWrapper(ConnectionManager.java:909)
at com.ibm.ejs.j2c.ConnectionManager.allocateConnection(ConnectionManager.java:599)
at com.ibm.ws.rsadapter.jdbc.WSJdbcDataSource.getConnection(WSJdbcDataSource.java:439)
at com.ibm.ws.rsadapter.jdbc.WSJdbcDataSource.getConnection(WSJdbcDataSource.java:408)
Any tips on this? I suspect I'm using a wrong class from hsqldb, or maybe my JDBC url is wrong...
In the example given in the DBCP docs, the org.hsqldb.jdbcDriver class is used as the driver. org.hsqldb.jdbc.JDBCDriver is supported only in HSQLDB 2.x, whereas the other class is supported by all versions of HSQLDB.
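With that change, the datasource custom properties from the question would become (everything else unchanged):

driver=org.hsqldb.jdbcDriver
url=jdbc:hsqldb:file:///C:/mydatabase.db;shutdown=true
user=SA
password=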
