Apache Ignite is slow with Hibernate L2 Replicated Cache - spring

I'm trying to set up Apache Ignite 2.13 for Hibernate L2 Cache in a Spring Application.
I have it working, but compared to Ehcache it is really slow (2x latency). As far as I understand, in replicated mode the Ignite cache lives in the same JVM as the application, so no network latency should be involved.
I also know that performance testing should be done in a distributed environment for Ignite, but even with many instances deployed in Kubernetes, latency is high.
Am I missing something?
I have attached the session statistics log and my ignite.xml configuration below.
2022-09-23 13:24:25.268 INFO [xxxx] 1 --- [io-9000-exec-10] i.StatisticalLoggingSessionEventListener : Session Metrics {
513900 nanoseconds spent acquiring 1 JDBC connections;
0 nanoseconds spent releasing 0 JDBC connections;
278500 nanoseconds spent preparing 1 JDBC statements;
1175200 nanoseconds spent executing 1 JDBC statements;
0 nanoseconds spent executing 0 JDBC batches;
0 nanoseconds spent performing 0 L2C puts;
80603200 nanoseconds spent performing 490 L2C hits;
0 nanoseconds spent performing 0 L2C misses;
3686600 nanoseconds spent executing 1 flushes (flushing a total of 306 entities and 573 collections);
328100 nanoseconds spent executing 1 partial-flushes (flushing a total of 1 entities and 1 collections)
}
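(For scale: 80,603,200 ns across 490 L2C hits works out to roughly 164 µs per cache hit; that per-hit cost is the figure being compared against Ehcache here.)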
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans-3.1.xsd
                           http://www.springframework.org/schema/util
                           https://www.springframework.org/schema/util/spring-util.xsd">
    <bean id="hibernateCache" class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
        <property name="cacheMode" value="REPLICATED"/>
        <property name="atomicityMode" value="ATOMIC"/>
        <property name="writeSynchronizationMode" value="FULL_ASYNC"/>
        <property name="onheapCacheEnabled" value="true"/>
        <property name="evictionPolicyFactory">
            <bean class="org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory">
                <!-- max number of entries -->
                <property name="maxSize" value="900000000"/>
                <!-- 1 GB -->
                <property name="maxMemorySize" value="1000000000"/>
            </bean>
        </property>
        <property name="expiryPolicyFactory">
            <bean class="javax.cache.expiry.CreatedExpiryPolicy" factory-method="factoryOf">
                <constructor-arg>
                    <bean class="javax.cache.expiry.Duration">
                        <constructor-arg value="MINUTES"/>
                        <constructor-arg value="10"/>
                    </bean>
                </constructor-arg>
            </bean>
        </property>
    </bean>
    <bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="clientMode" value="false"/>
        <property name="igniteInstanceName" value="hibernate-grid"/>
        <property name="metricsLogFrequency" value="0"/>
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder"/>
                </property>
            </bean>
        </property>
        <property name="cacheConfiguration">
            <list>
                <bean parent="hibernateCache">
                    <property name="name" value="com.example.model0"/>
                </bean>
                <bean parent="hibernateCache">
                    <property name="name" value="com.example.model1"/>
                </bean>
                <bean parent="hibernateCache">
                    <property name="name" value="com.example.modelN"/>
                </bean>
                <bean parent="hibernateCache">
                    <property name="name" value="default-update-timestamps-region"/>
                </bean>
                <bean parent="hibernateCache">
                    <property name="name" value="default-query-results-region"/>
                </bean>
            </list>
        </property>
    </bean>
</beans>
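For context, here is a minimal sketch of the Hibernate side of such a setup, assuming the Ignite Hibernate integration module (ignite-hibernate-ext) for Ignite 2.13; the property names follow the Ignite documentation, and the instance name matches the igniteInstanceName in the ignite.xml above:

import org.hibernate.cfg.Configuration;

public class IgniteL2CacheWiring {
    public static Configuration configure() {
        Configuration cfg = new Configuration();
        // Turn on the second-level cache and plug in Ignite's region factory
        cfg.setProperty("hibernate.cache.use_second_level_cache", "true");
        cfg.setProperty("hibernate.cache.region.factory_class",
                "org.apache.ignite.cache.hibernate.HibernateRegionFactory");
        // Must match igniteInstanceName in ignite.xml ("hibernate-grid")
        cfg.setProperty("org.apache.ignite.hibernate.ignite_instance_name", "hibernate-grid");
        // READ_ONLY is the cheapest access type for read-mostly entities
        cfg.setProperty("org.apache.ignite.hibernate.default_access_type", "READ_ONLY");
        return cfg;
    }
}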

Related

Ignite persistence performance hit and metrics

I am trying out native persistence in Apache Ignite. My setup is currently a local, single-node cluster. I enabled persistence by adding this property to my data region:
<property name="persistenceEnabled" value="true"/>
My full data region configuration is as follows
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
    <property name="name" value="dr.local.input.trade"/>
    <property name="persistenceEnabled" value="true"/>
    <property name="metricsEnabled" value="true"/>
    <property name="initialSize" value="#{200 * 1024 * 1024}"/>
    <property name="maxSize" value="#{500 * 1024 * 1024}"/>
    <property name="pageEvictionMode" value="RANDOM_2_LRU"/>
</bean>
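For reference, a minimal sketch of the same region expressed through Ignite's programmatic API (not from the original post), including the cluster activation step that persistence requires:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cluster.ClusterState;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PersistentRegionExample {
    public static void main(String[] args) {
        DataRegionConfiguration region = new DataRegionConfiguration()
                .setName("dr.local.input.trade")
                .setPersistenceEnabled(true)
                .setMetricsEnabled(true)
                .setInitialSize(200L * 1024 * 1024)
                .setMaxSize(500L * 1024 * 1024);
        // pageEvictionMode is omitted here on purpose: as noted in the
        // answer below, it applies only to non-persistent regions.

        DataStorageConfiguration storage = new DataStorageConfiguration()
                .setDataRegionConfigurations(region);

        IgniteConfiguration cfg = new IgniteConfiguration()
                .setDataStorageConfiguration(storage);

        try (Ignite ignite = Ignition.start(cfg)) {
            // A cluster with persistence starts inactive and must be activated
            ignite.cluster().state(ClusterState.ACTIVE);
        }
    }
}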
Now the entries are being persisted, i.e. if I shut down Ignite and restart it, my data comes back into the cache.
However, I am seeing a significant performance hit: around 35% higher put-operation latency compared to a non-persistent data region. I have referred to the Ignite persistence tuning page and singled out the following properties and values:
Property                 Value
walMode                  LOG_ONLY
walCompactionLevel       3
walCompactionEnabled     true
writeThrottlingEnabled   true
checkpointBufferSize     512 MB
checkpointFrequency      5 minutes
Is there anything more that I can tune? Is the performance hit I mentioned above typical, or can it be lowered much more?
Also, I tried viewing the persistence-related JMX metrics using JConsole. I was checking the metrics under org.apache.368239c8.ignitelocal."Persistent Store", and all of them show as 0. The data is definitely persisted; I can see it in the Ignite work dir and WAL dir. Am I looking at the wrong metrics? Please help.
Attaching entire Ignite config below.
<?xml version="1.0" encoding="UTF-8"?>
<!--
Generated by Chef for ignite1.intranet.com
-->
<beans xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:util="http://www.springframework.org/schema/util"
xmlns="http://www.springframework.org/schema/beans"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/util
http://www.springframework.org/schema/util/spring-util.xsd">
<bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_FALLBACK"/>
<property name="searchSystemEnvironment" value="true"/>
</bean>
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<!-- Set to true to enable distributed class loading for examples, default is false. -->
<property name="sslContextFactory">
<bean class="org.apache.ignite.ssl.SslContextFactory">
<property name="keyStoreFilePath" value="/home/sysSvcDevOps/ssl/ignite1.keystore.jks"/>
<property name="keyStorePassword" value="KeyStore443"/>
<property name="keyStoreType" value="jks"/>
<property name="trustStoreFilePath" value="/home/sysSvcDevOps/ssl/cacerts/java.cacerts.jks"/>
<property name="trustStorePassword" value="changeit"/>
<property name="trustStoreType" value="jks"/>
</bean>
</property>
<property name="igniteInstanceName" value=".dev"/>
<property name="consistentId" value="ignite1.dev"/>
<property name="workDirectory" value="/apps/Svc/dev/Ignite/IgniteData/persistentstore/work"/>
<property name="rebalanceThreadPoolSize" value="8"/>
<property name="publicThreadPoolSize" value="32"/>
<property name="systemThreadPoolSize" value="64"/>
<property name="queryThreadPoolSize" value="64"/>
<property name="failureDetectionTimeout" value="30000"/>
<property name="authenticationEnabled" value="true"/>
<property name="metricsUpdateFrequency" value="30000"/>
<property name="peerClassLoadingEnabled" value="false"/>
<property name="clientMode" value="false"/>
<!-- Enable task execution events for examples. -->
<property name="includeEventTypes">
<list>
<util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_STARTED"/>
<util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_STOPPED"/>
<util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_REBALANCE_PART_DATA_LOST"/>
<util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_NODES_LEFT"/>
</list>
</property>
<property name="dataStorageConfiguration">
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
<property name="walSegmentSize" value="1073741824"/>
<property name="walSegments" value="20"/>
<property name="maxWalArchiveSize" value="10737418240"/>
<property name="walCompactionEnabled" value="true"/>
<property name="walCompactionLevel" value="4"/>
<property name="checkpointFrequency" value="300000"/>
<property name="checkpointThreads" value="16"/>
<property name="checkpointReadLockTimeout" value="60000"/>
<property name="lockWaitTime" value="45000"/>
<property name="checkpointWriteOrder" value="RANDOM"/>
<property name="pageSize" value="4096"/>
<property name="writeThrottlingEnabled" value="true"/>
<!-- wal storage paths -->
<property name="walPath" value="/apps/Svc/dev/Ignite/IgniteData"/>
<property name="walArchivePath" value="/apps/Svc/dev/Ignite/IgniteDataArchive"/>
<property name="storagePath" value="/apps/Svc/dev/Ignite/IgniteData/archive"/>
<property name="dataRegionConfigurations">
<list>
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="name" value="dr.dev.referencedata"/>
<property name="persistenceEnabled" value="true"/>
<property name="initialSize" value="1073741824"/>
<property name="maxSize" value="4294969673"/>
<property name="checkpointPageBufferSize" value="1073741824"/>
</bean>
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="name" value="dr.dev.input"/>
<property name="persistenceEnabled" value="true"/>
<property name="metricsEnabled" value="true"/>
<property name="checkpointPageBufferSize" value="#{4 * 1024 * 1024 * 1024}"/>
<property name="initialSize" value="12884901888"/>
<property name="maxSize" value="81604378624"/>
<property name="pageEvictionMode" value="RANDOM_2_LRU"/>
</bean>
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="name" value="dr.dev.input.exception"/>
<property name="persistenceEnabled" value="true"/>
<property name="metricsEnabled" value="true"/>
<property name="checkpointPageBufferSize" value="#{4 * 1024 * 1024 * 1024}"/>
<property name="initialSize" value="4294967296"/>
<property name="maxSize" value="21474836480"/>
<property name="pageEvictionMode" value="RANDOM_2_LRU"/>
</bean>
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="name" value="dr.dev.output"/>
<property name="initialSize" value="1073741824"/>
<property name="persistenceEnabled" value="true"/>
<property name="metricsEnabled" value="true"/>
<property name="checkpointPageBufferSize" value="#{2 * 1024 * 1024 * 1024}"/>
<property name="maxSize" value="2147483648"/>
</bean>
</list>
</property>
<property name="defaultDataRegionConfiguration">
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="name" value="default_region"/>
<property name="persistenceEnabled" value="true"/>
<property name="initialSize" value="268435456"/>
<property name="maxSize" value="268435456"/>
</bean>
</property>
</bean>
</property>
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.zk.ZookeeperDiscoverySpi">
<property name="zkConnectionString" value="zk1.intranet.com:22001,zk2.intranet.com:22001"/>
<property name="zkRootPath" value="/ignite"/>
<property name="sessionTimeout" value="120000"/>
<property name="joinTimeout" value="10000"/>
</bean>
</property>
<property name="communicationSpi">
<bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
<property name="socketWriteTimeout" value="60000"/>
</bean>
</property>
<property name="cacheConfiguration">
<list>
<bean id="cache-template-bean" abstract="true"
class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="referenceDataCacheTemplate*"/>
<property name="cacheMode" value="REPLICATED"/>
<property name="backups" value="1"/>
<property name="atomicityMode" value="ATOMIC"/>
<property name="dataRegionName" value="dr.dev.referencedata"/>
<property name="partitionLossPolicy" value="READ_WRITE_SAFE"/>
<property name="writeSynchronizationMode" value="PRIMARY_SYNC"/>
<property name="statisticsEnabled" value="true"/>
<property name="sqlIndexMaxInlineSize" value="203"/>
<property name="affinity">
<bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
<property name="partitions" value="256"/>
</bean>
</property>
</bean>
<bean id="cache-template-bean" abstract="true"
class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="inputMetadataCacheTemplate*"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="backups" value="1"/>
<property name="atomicityMode" value="ATOMIC"/>
<property name="dataRegionName" value="dr.dev.input"/>
<property name="partitionLossPolicy" value="READ_WRITE_SAFE"/>
<property name="writeSynchronizationMode" value="PRIMARY_SYNC"/>
<property name="statisticsEnabled" value="true"/>
<property name="readFromBackup" value="false"/>
<property name="sqlIndexMaxInlineSize" value="211"/>
<property name="affinity">
<bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
<property name="partitions" value="256"/>
<property name="affinityBackupFilter">
<bean class="org.apache.ignite.cache.affinity.rendezvous.ClusterNodeAttributeAffinityBackupFilter">
<constructor-arg>
<array value-type="java.lang.String">
<value>RACK_ID</value>
</array>
</constructor-arg>
</bean>
</property>
</bean>
</property>
<property name="expiryPolicyFactory">
<bean class="javax.cache.expiry.ModifiedExpiryPolicy" factory-method="factoryOf">
<constructor-arg>
<bean class="javax.cache.expiry.Duration">
<constructor-arg value="DAYS"/>
<constructor-arg value="5"/>
</bean>
</constructor-arg>
</bean>
</property>
</bean>
<bean id="cache-template-bean" abstract="true"
class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="inputReconCacheTemplate*"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="backups" value="1"/>
<property name="atomicityMode" value="ATOMIC"/>
<property name="dataRegionName" value="dr.dev.input"/>
<property name="partitionLossPolicy" value="READ_WRITE_SAFE"/>
<property name="writeSynchronizationMode" value="PRIMARY_SYNC"/>
<property name="statisticsEnabled" value="true"/>
<property name="readFromBackup" value="false"/>
<property name="sqlIndexMaxInlineSize" value="211"/>
<property name="affinity">
<bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
<property name="partitions" value="256"/>
<property name="affinityBackupFilter">
<bean class="org.apache.ignite.cache.affinity.rendezvous.ClusterNodeAttributeAffinityBackupFilter">
<constructor-arg>
<array value-type="java.lang.String">
<value>RACK_ID</value>
</array>
</constructor-arg>
</bean>
</property>
</bean>
</property>
<property name="expiryPolicyFactory">
<bean class="javax.cache.expiry.CreatedExpiryPolicy" factory-method="factoryOf">
<constructor-arg>
<bean class="javax.cache.expiry.Duration">
<constructor-arg value="DAYS"/>
<constructor-arg value="4"/>
</bean>
</constructor-arg>
</bean>
</property>
</bean>
<bean id="cache-template-bean" abstract="true"
class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="inputExceptionsCacheTemplate*"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="backups" value="1"/>
<property name="atomicityMode" value="ATOMIC"/>
<property name="dataRegionName" value="dr.dev.input.exception"/>
<property name="partitionLossPolicy" value="READ_WRITE_SAFE"/>
<property name="writeSynchronizationMode" value="PRIMARY_SYNC"/>
<property name="statisticsEnabled" value="true"/>
<property name="readFromBackup" value="false"/>
<property name="affinity">
<bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
<property name="partitions" value="256"/>
<property name="affinityBackupFilter">
<bean class="org.apache.ignite.cache.affinity.rendezvous.ClusterNodeAttributeAffinityBackupFilter">
<constructor-arg>
<array value-type="java.lang.String">
<value>RACK_ID</value>
</array>
</constructor-arg>
</bean>
</property>
</bean>
</property>
<property name="expiryPolicyFactory">
<bean class="javax.cache.expiry.CreatedExpiryPolicy" factory-method="factoryOf">
<constructor-arg>
<bean class="javax.cache.expiry.Duration">
<constructor-arg value="DAYS"/>
<constructor-arg value="15"/>
</bean>
</constructor-arg>
</bean>
</property>
</bean>
<bean id="cache-template-bean" abstract="true"
class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="outputDataCacheTemplate*"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="backups" value="1"/>
<property name="atomicityMode" value="ATOMIC"/>
<property name="dataRegionName" value="dr.dev.output"/>
<property name="partitionLossPolicy" value="READ_WRITE_SAFE"/>
<property name="writeSynchronizationMode" value="PRIMARY_SYNC"/>
<property name="sqlSchema" value=""/>
<property name="statisticsEnabled" value="true"/>
<property name="affinity">
<bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
<property name="partitions" value="256"/>
<property name="affinityBackupFilter">
<bean class="org.apache.ignite.cache.affinity.rendezvous.ClusterNodeAttributeAffinityBackupFilter">
<constructor-arg>
<array value-type="java.lang.String">
<value>RACK_ID</value>
</array>
</constructor-arg>
</bean>
</property>
</bean>
</property>
<property name="expiryPolicyFactory">
<bean class="javax.cache.expiry.CreatedExpiryPolicy" factory-method="factoryOf">
<constructor-arg>
<bean class="javax.cache.expiry.Duration">
<constructor-arg value="DAYS"/>
<constructor-arg value="450"/>
</bean>
</constructor-arg>
</bean>
</property>
</bean>
<bean id="cache-template-bean" abstract="true"
class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="reconAuditDataCacheTemplate*"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="backups" value="1"/>
<property name="atomicityMode" value="ATOMIC"/>
<property name="dataRegionName" value="dr.dev.referencedata"/>
<property name="partitionLossPolicy" value="READ_WRITE_SAFE"/>
<property name="writeSynchronizationMode" value="PRIMARY_SYNC"/>
<property name="sqlSchema" value=""/>
<property name="statisticsEnabled" value="true"/>
<property name="affinity">
<bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
<property name="partitions" value="256"/>
<property name="affinityBackupFilter">
<bean class="org.apache.ignite.cache.affinity.rendezvous.ClusterNodeAttributeAffinityBackupFilter">
<constructor-arg>
<array value-type="java.lang.String">
<value>RACK_ID</value>
</array>
</constructor-arg>
</bean>
</property>
</bean>
</property>
</bean>
<bean id="cache-template-bean" abstract="true"
class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="fileDataCacheTemplate*"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="backups" value="1"/>
<property name="atomicityMode" value="ATOMIC"/>
<property name="dataRegionName" value="dr.dev.input"/>
<property name="partitionLossPolicy" value="READ_WRITE_SAFE"/>
<property name="writeSynchronizationMode" value="PRIMARY_SYNC"/>
<property name="statisticsEnabled" value="true"/>
<property name="queryParallelism" value="4"/>
<property name="eagerTtl" value="true"/>
<property name="affinity">
<bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
<property name="partitions" value="256"/>
<property name="affinityBackupFilter">
<bean class="org.apache.ignite.cache.affinity.rendezvous.ClusterNodeAttributeAffinityBackupFilter">
<constructor-arg>
<array value-type="java.lang.String">
<value>RACK_ID</value>
</array>
</constructor-arg>
</bean>
</property>
</bean>
</property>
<property name="expiryPolicyFactory">
<bean class="javax.cache.expiry.CreatedExpiryPolicy" factory-method="factoryOf">
<constructor-arg>
<bean class="javax.cache.expiry.Duration">
<constructor-arg value="DAYS"/>
<constructor-arg value="5"/>
</bean>
</constructor-arg>
</bean>
</property>
</bean>
<bean id="cache-template-bean" abstract="true"
class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="shortLivedReferenceDataTemplate*"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="backups" value="1"/>
<property name="atomicityMode" value="ATOMIC"/>
<property name="dataRegionName" value="dr.dev.input.exception"/>
<property name="partitionLossPolicy" value="READ_WRITE_SAFE"/>
<property name="writeSynchronizationMode" value="PRIMARY_SYNC"/>
<property name="statisticsEnabled" value="true"/>
<property name="managementEnabled" value="true"/>
<property name="affinity">
<bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
<property name="partitions" value="64"/>
<property name="affinityBackupFilter">
<bean class="org.apache.ignite.cache.affinity.rendezvous.ClusterNodeAttributeAffinityBackupFilter">
<constructor-arg>
<array value-type="java.lang.String">
<value>RACK_ID</value>
</array>
</constructor-arg>
</bean>
</property>
</bean>
</property>
<property name="expiryPolicyFactory">
<bean class="javax.cache.expiry.CreatedExpiryPolicy" factory-method="factoryOf">
<constructor-arg>
<bean class="javax.cache.expiry.Duration">
<constructor-arg value="DAYS"/>
<constructor-arg value="2"/>
</bean>
</constructor-arg>
</bean>
</property>
</bean>
</list>
</property>
<property name="sqlSchemas">
<list>
<value>dataInput</value>
</list>
</property>
</bean>
</beans>
Speaking of the possible performance drop on writes.
In comparison to a pure in memory mode, the following disk interactions happen on updates:
In addition to the page modification in RAM, Ignite needs to provide consistency guarantees depending on your WAL mode: unless the WAL is disabled, every update must also be written to a WAL file. No data is flushed to disk yet; the modification happens only in memory, and a WAL record is written.
Once there are too many dirty pages in RAM, or a timeout occurs, Ignite starts a checkpointing process that flushes the dirty pages to the partition files on disk.
If the WAL becomes too big, Ignite might rotate segments, copying them to the WAL archive to free up space for new WAL updates.
As you can see, there are at least 3 major disk-related operations, meaning it's crucial to have really fast disks for the /wal, /walarchive and /db mounted folders. Again, it all depends on your use case, but in general it's strongly recommended to use the fastest available disks for WAL-related activity.
Possible performance drop on reads.
Again, it depends on the scenario, but if you can fit all your data in memory (as it was before you turned persistence on), you will not see any performance difference.
It should be noted that after a restart there will be no data in RAM at the start, and Ignite must preload it first, i.e. do a warm-up.
But if you have more data than your configured data region size, page replacement will take place, rotating data from and to disk. Worst-case scenario: say you have a 10 GB RAM data region and an 11 GB dataset, and you want to scan your data twice in alphabetical order.
There is no data in RAM yet; imagine that you did a restart. Ignite starts to read data from disk and populate the data pages in memory. Imagine that after the letter V our in-memory data set became full, and page rotation is required to load the remaining W-Z data. In that case the oldest pages need to be evicted, meaning that, say, the A-D chunk has to go to disk so the W-Z data can be loaded instead. So your in-memory data set is now something like W-Z, E-V. If we run the same scan query again, the whole data set needs to be replaced in a similar way.
Enable persistence metrics.
Check that you have the following property in your data region configuration, more details here.
<property name="metricsEnabled" value="true"/>
Also, there is no need for
<property name="pageEvictionMode" value="RANDOM_2_LRU"/>
It's only for non-persistent regions.
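If the JMX view keeps showing zeros, the same counters can be cross-checked through the public API once metricsEnabled is set; a minimal sketch, assuming an embedded node started from the same configuration file (the path is a placeholder):

import java.util.Collection;
import org.apache.ignite.DataRegionMetrics;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class RegionMetricsDump {
    public static void main(String[] args) {
        // Config file path is an assumption for illustration
        Ignite ignite = Ignition.start("config/ignite.xml");
        Collection<DataRegionMetrics> regions = ignite.dataRegionMetrics();
        for (DataRegionMetrics m : regions) {
            // Non-zero allocated/dirty page counts confirm the region
            // is actually reporting metrics
            System.out.printf("region=%s allocatedPages=%d dirtyPages=%d fillFactor=%.2f%n",
                    m.getName(), m.getTotalAllocatedPages(), m.getDirtyPages(),
                    m.getPagesFillFactor());
        }
    }
}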

Dynamically create cache in Ignite server node and link it to PostgreSql table

I have one Ignite server node and one thick Java client.
I'm able to create new caches on the server node by calling the client node (I have exposed REST APIs).
I have a PostgreSQL DB where schema-separated multitenancy is implemented, meaning:
Table1 in Schema1 and Schema2 has the same structure; as each copy belongs to a different schema, it holds values for a different tenant.
In PostgreSQL, new schemas and tables can be created dynamically when a new tenant joins the project.
I was able to create a configuration where Table1 from all the existing schemas is loaded into the Ignite server node, and values are loaded into the tables from the DB.
Through the client, I am able to get values as well.
Problem:
I'm not able to create a cache (for the newly created table in the new schema) from the client node and link it to PostgreSQL.
I couldn't find a straightforward solution to my issue in the Ignite developer documentation.
Can anyone tell me the proper way to tackle this, and link to an example where a cache is dynamically created and linked to a DB?
I get the following exception. I know I haven't attached code; if you need code to understand the problem better, I will push it to GitHub and link it here in the question.
Dynamic cache creation: https://www.gridgain.com/docs/latest/developers-guide/key-value-api/basic-cache-operations
Here, using ccfg.setCacheStoreFactory(cacheStoreFactory), I have linked the DB.
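For reference, a minimal sketch (with placeholder names, not the actual project code) of the client-side call shape being described:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory;
import org.apache.ignite.configuration.CacheConfiguration;

public class DynamicCacheCreation {
    // Sketch: dynamic cache creation from the thick client, with a JDBC
    // POJO store linked to PostgreSQL. Names are illustrative placeholders.
    static IgniteCache<Integer, Object> createTenantCache(
            Ignite ignite, CacheJdbcPojoStoreFactory<Integer, Object> storeFactory) {
        CacheConfiguration<Integer, Object> ccfg =
                new CacheConfiguration<>("Table1_newSchema");
        ccfg.setCacheStoreFactory(storeFactory); // links the cache to PostgreSQL
        ccfg.setReadThrough(true);
        ccfg.setWriteThrough(true);
        // Propagates the configuration cluster-wide via a partition map
        // exchange -- the step that fails with the exception below
        return ignite.getOrCreateCache(ccfg);
    }
}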
NOTE: If I run the client in server mode (cfg.setClientMode(false)), then I'm able to successfully create a new cache and link it to the DB.
Does that mean new caches can only be created on server nodes?
2021-07-23 00:03:39.574 ERROR 8036 --- [nio-8081-exec-1] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is javax.cache.CacheException: class org.apache.ignite.IgniteCheckedException: Failed to complete exchange process.] with root cause
org.apache.ignite.IgniteCheckedException: Failed to complete exchange process.
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.createExchangeException(GridDhtPartitionsExchangeFuture.java:3372) ~[ignite-core-8.7.9.jar:8.7.9]
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.sendExchangeFailureMessage(GridDhtPartitionsExchangeFuture.java:3400) ~[ignite-core-8.7.9.jar:8.7.9]
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.finishExchangeOnCoordinator(GridDhtPartitionsExchangeFuture.java:3496) ~[ignite-core-8.7.9.jar:8.7.9]
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onAllReceived(GridDhtPartitionsExchangeFuture.java:3477) ~[ignite-core-8.7.9.jar:8.7.9]
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.distributedExchange(GridDhtPartitionsExchangeFuture.java:1608) ~[ignite-core-8.7.9.jar:8.7.9]
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:929) ~[ignite-core-8.7.9.jar:8.7.9]
at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:3251) ~[ignite-core-8.7.9.jar:8.7.9]
at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:3097) ~[ignite-core-8.7.9.jar:8.7.9]
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:119) ~[ignite-core-8.7.9.jar:8.7.9]
at java.lang.Thread.run(Thread.java:748) ~[na:na]
Suppressed: org.apache.ignite.IgniteCheckedException: Failed to initialize exchange locally [locNodeId=2753929a-755e-4243-b8d0-693e42b1a078]
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onCacheChangeRequest(GridDhtPartitionsExchangeFuture.java:1345) ~[ignite-core-8.7.9.jar:8.7.9]
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:855) ~[ignite-core-8.7.9.jar:8.7.9]
... 4 common frames omitted
Caused by: org.apache.ignite.IgniteException: Failed to enrich cache configuration [cacheName=Users_cuddle_nand]
at org.apache.ignite.internal.processors.cache.CacheConfigurationEnricher.enrich(CacheConfigurationEnricher.java:128)
at org.apache.ignite.internal.processors.cache.CacheConfigurationEnricher.enrich(CacheConfigurationEnricher.java:61)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheContext(GridCacheProcessor.java:1881)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1849)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.lambda$prepareStartCaches$55a0e703$1(GridCacheProcessor.java:1724)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.lambda$prepareStartCachesIfPossible$14(GridCacheProcessor.java:1694)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareStartCaches(GridCacheProcessor.java:1721)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareStartCachesIfPossible(GridCacheProcessor.java:1692)
at org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.processCacheStartRequests(CacheAffinitySharedManager.java:971)
at org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.onCacheChangeRequest(CacheAffinitySharedManager.java:857)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onCacheChangeRequest(GridDhtPartitionsExchangeFuture.java:1334)
... 5 common frames omitted
Caused by: org.apache.ignite.IgniteException: Failed to deserialize field storeFactory
at org.apache.ignite.internal.processors.cache.CacheConfigurationEnricher.deserialize(CacheConfigurationEnricher.java:153)
at org.apache.ignite.internal.processors.cache.CacheConfigurationEnricher.enrich(CacheConfigurationEnricher.java:121)
... 15 common frames omitted
Caused by: org.apache.ignite.IgniteCheckedException: Failed to deserialize object with given class loader: sun.misc.Launcher$AppClassLoader#18b4aac2
at org.apache.ignite.marshaller.jdk.JdkMarshaller.unmarshal0(JdkMarshaller.java:148)
at org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:92)
at org.apache.ignite.marshaller.jdk.JdkMarshaller.unmarshal0(JdkMarshaller.java:162)
at org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:80)
at org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10478)
at org.apache.ignite.internal.processors.cache.CacheConfigurationEnricher.deserialize(CacheConfigurationEnricher.java:150)
... 16 common frames omitted
Caused by: java.lang.ClassCastException: cannot assign instance of java.lang.invoke.SerializedLambda to field org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory.dataSrcFactory of type javax.cache.configuration.Factory in instance of org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory
at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2301)
at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1431)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2411)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2329)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2187)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1667)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:503)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:461)
at org.apache.ignite.marshaller.jdk.JdkMarshaller.unmarshal0(JdkMarshaller.java:140)
... 21 common frames omitted
My guess is that you cannot easily create new caches backed by Postgres via Ignite, because all caches must be provided in the config and they must match the base table in Postgres as well.
Some time ago I integrated Ignite with Postgres, and I was able to use the following configuration:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:util="http://www.springframework.org/schema/util"
xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.1.xsd
http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-3.1.xsd">
<bean id="postgre_con" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
<property name="driverClassName" value="org.postgresql.Driver" />
<property name="url" value="jdbc:postgresql://localhost:5432/postgres" />
<property name="username" value="postgres" />
<property name="password" value="qwerty" />
<property name="connectionProperties">
<props>
<prop key="socketTimeout">10</prop>
</props>
</property>
</bean>
<bean id="grid.cfg"
class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="failureDetectionTimeout" value="60000"/>
<property name="clientFailureDetectionTimeout" value="60000"/>
<property name="cacheConfiguration">
<list>
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="Person"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="atomicityMode" value="TRANSACTIONAL"/>
<property name="sqlSchema" value="PUBLIC"/>
<property name="keyConfiguration">
<list>
<bean class="org.apache.ignite.cache.CacheKeyConfiguration">
<constructor-arg name="keyCls" value="org.gridgain.essilor.model.Person"/>
</bean>
</list>
</property>
<property name="cacheStoreFactory">
<bean class="org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory">
<property name="dataSourceBean" value="postgre_con"/>
<property name="dialect">
<bean class="org.apache.ignite.cache.store.jdbc.dialect.BasicJdbcDialect"/>
</property>
<property name="types">
<list>
<bean class="org.apache.ignite.cache.store.jdbc.JdbcType">
<property name="cacheName" value="Person"/>
<property name="keyType" value="java.lang.Integer"/>
<property name="valueType" value="org.gridgain.essilor.model.Person"/>
<property name="databaseSchema" value="PUBLIC"/>
<property name="databaseTable" value="Person"/>
<property name="keyFields">
<list>
<bean class="org.apache.ignite.cache.store.jdbc.JdbcTypeField">
<constructor-arg>
<util:constant static-field="java.sql.Types.INTEGER"/>
</constructor-arg>
<constructor-arg value="person_id"/>
<constructor-arg value="int"/>
<constructor-arg value="person_id"/>
</bean>
</list>
</property>
<property name="valueFields">
<list>
<bean class="org.apache.ignite.cache.store.jdbc.JdbcTypeField">
<constructor-arg>
<util:constant static-field="java.sql.Types.INTEGER"/>
</constructor-arg>
<constructor-arg value="company_id"/>
<constructor-arg value="int"/>
<constructor-arg value="company_id"/>
</bean>
<bean class="org.apache.ignite.cache.store.jdbc.JdbcTypeField">
<constructor-arg>
<util:constant static-field="java.sql.Types.VARCHAR"/>
</constructor-arg>
<constructor-arg value="person_name"/>
<constructor-arg value="java.lang.String"/>
<constructor-arg value="person_name"/>
</bean>
</list>
</property>
</bean>
</list>
</property>
</bean>
</property>
<property name="readThrough" value="true"/>
<property name="writeThrough" value="true"/>
<!-- Configure type metadata to enable queries. -->
<property name="queryEntities">
<list>
<bean class="org.apache.ignite.cache.QueryEntity">
<property name="keyType" value="java.lang.Integer"/>
<property name="valueType" value="org.gridgain.essilor.model.Person"/>
<property name="tableName" value="Person"/>
<property name="keyFieldName" value="person_id"/>
<property name="fields">
<map>
<entry key="person_id" value="java.lang.Integer"/>
<entry key="company_id" value="java.lang.Integer"/>
<entry key="person_name" value="java.lang.String"/>
</map>
</property>
</bean>
</list>
</property>
</bean>
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="Company"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="atomicityMode" value="TRANSACTIONAL"/>
<property name="sqlSchema" value="PUBLIC"/>
<property name="keyConfiguration">
<list>
<bean class="org.apache.ignite.cache.CacheKeyConfiguration">
<constructor-arg name="keyCls" value="org.gridgain.essilor.model.Company"/>
</bean>
</list>
</property>
<property name="cacheStoreFactory">
<bean class="org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory">
<property name="dataSourceBean" value="postgre_con"/>
<property name="dialect">
<bean class="org.apache.ignite.cache.store.jdbc.dialect.BasicJdbcDialect"/>
</property>
<property name="types">
<list>
<bean class="org.apache.ignite.cache.store.jdbc.JdbcType">
<property name="cacheName" value="Company"/>
<property name="keyType" value="java.lang.Integer"/>
<property name="valueType" value="org.gridgain.essilor.model.Company"/>
<property name="databaseSchema" value="PUBLIC"/>
<property name="databaseTable" value="Company"/>
<property name="keyFields">
<list>
<bean class="org.apache.ignite.cache.store.jdbc.JdbcTypeField">
<constructor-arg>
<util:constant static-field="java.sql.Types.INTEGER"/>
</constructor-arg>
<constructor-arg value="company_id"/>
<constructor-arg value="int"/>
<constructor-arg value="company_id"/>
</bean>
</list>
</property>
<property name="valueFields">
<list>
<bean class="org.apache.ignite.cache.store.jdbc.JdbcTypeField">
<constructor-arg>
<util:constant static-field="java.sql.Types.VARCHAR"/>
</constructor-arg>
<constructor-arg value="company_name"/>
<constructor-arg value="java.lang.String"/>
<constructor-arg value="company_name"/>
</bean>
</list>
</property>
</bean>
</list>
</property>
</bean>
</property>
<property name="readThrough" value="true"/>
<property name="writeThrough" value="true"/>
<property name="queryEntities">
<list>
<bean class="org.apache.ignite.cache.QueryEntity">
<property name="keyType" value="java.lang.Integer"/>
<property name="valueType" value="org.gridgain.essilor.model.Company"/>
<property name="tableName" value="Company"/>
<property name="keyFieldName" value="company_id"/>
<property name="fields">
<map>
<entry key="company_id" value="java.lang.Integer"/>
<entry key="company_name" value="java.lang.String"/>
</map>
</property>
</bean>
</list>
</property>
</bean>
</list>
</property>
<property name="dataStorageConfiguration">
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
<property name="defaultDataRegionConfiguration">
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="persistenceEnabled" value="true"/>
<property name="name" value="Default_Region"/>
<property name="initialSize" value="#{500L * 1024 * 1024}"/>
<property name="maxSize" value="#{1L * 1024 * 1024 * 1024}"/>
</bean>
</property>
<property name="walMode" value="LOG_ONLY"/>
<property name="writeThrottlingEnabled" value="true"/>
<property name="pageSize" value="4096"/>
</bean>
</property>
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean
class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
<property name="addresses">
<list>
<value>127.0.0.1:47500..47501</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
<property name="communicationSpi">
<bean
class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
<property name="localPort" value="48100"/>
<property name="localPortRange" value="10"/>
<property name="socketWriteTimeout" value="300000"/>
</bean>
</property>
</bean>
</beans>
My guess is that when you try to change the config from your client, the server side cannot merge it with its own config. The only way that seems to work here is to apply the new configuration to the entire cluster, discarding all cached data (you can simply load it again from Postgres after the upgrade).
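Worth adding alongside this: the root ClassCastException in the trace complains that a SerializedLambda cannot be assigned to CacheJdbcPojoStoreFactory.dataSrcFactory. A hedged sketch of one way to avoid that particular failure, replacing a lambda data-source factory with a named serializable class (the data-source type and credentials mirror the config above and are illustrative):

import javax.cache.configuration.Factory;
import javax.sql.DataSource;
import org.postgresql.ds.PGSimpleDataSource;

// A lambda serializes as java.lang.invoke.SerializedLambda, which remote
// nodes cannot deserialize back into a Factory. A named serializable class
// avoids that trap.
public class PgDataSourceFactory implements Factory<DataSource> {
    @Override public DataSource create() {
        PGSimpleDataSource ds = new PGSimpleDataSource();
        ds.setUrl("jdbc:postgresql://localhost:5432/postgres"); // placeholder values
        ds.setUser("postgres");
        ds.setPassword("qwerty");
        return ds;
    }
}

On the client, storeFactory.setDataSourceFactory(new PgDataSourceFactory()) would then replace the lambda; the factory class and the JDBC driver still have to be on the classpath of every server node.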

JobRepositoryFactoryBean error spring batch

I started to learn Spring Batch, and I have a problem: when I want to persist the state of the job in a database using JobRepositoryFactoryBean, the application fails with:
"Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'jobRepository' defined in class path resource [springConfig.xml]: Invocation of init method failed; nested exception is java.lang.NoClassDefFoundError: org/springframework/jdbc/core/simple/ParameterizedRowMapper"
but there is no error when I use MapJobRepositoryFactoryBean.
I'm using Spring 5.
springconfig.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:batch="http://www.springframework.org/schema/batch"
xmlns:context="http://www.springframework.org/schema/context"
xmlns:task="http://www.springframework.org/schema/task" xmlns:tx="http://www.springframework.org/schema/tx"
xmlns:jdbc="http://www.springframework.org/schema/jdbc"
xsi:schemaLocation="http://www.springframework.org/schema/batch http://www.springframework.org/schema/batch/spring-batch.xsd
http://www.springframework.org/schema/jdbc http://www.springframework.org/schema/jdbc/spring-jdbc-4.3.xsd
http://www.springframework.org/schema/task http://www.springframework.org/schema/task/spring-task-4.3.xsd
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-4.3.xsd
http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-4.3.xsd">
<context:component-scan base-package="springbatch" />
<context:annotation-config />
<bean id="personneReaderCSV" class="org.springframework.batch.item.file.FlatFileItemReader">
<property name="resource" value="input/personnes.txt" />
<property name="lineMapper">
<bean class="org.springframework.batch.item.file.mapping.DefaultLineMapper">
<property name="lineTokenizer">
<bean
class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer">
<property name="delimiter" value="," />
<property name="names" value="id,nom,prenom,civilite" />
</bean>
</property>
<property name="fieldSetMapper">
<bean
class="org.springframework.batch.item.file.mapping.BeanWrapperFieldSetMapper">
<property name="targetType" value="springbatch.entities.Personne" />
</bean>
</property>
</bean>
</property>
</bean>
<bean id="jobLauncher"
class="org.springframework.batch.core.launch.support.SimpleJobLauncher">
<property name="jobRepository" ref="jobRepository" />
</bean>
<bean id="transactionManager"
class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="dataSource" />
</bean>
<tx:annotation-driven transaction-manager="transactionManager"/>
<bean name="dataSource"
class="org.springframework.jdbc.datasource.DriverManagerDataSource">
<property name="driverClassName" value="com.mysql.jdbc.Driver" />
<property name="url" value="jdbc:mysql://localhost:3306/spring_test" />
<property name="username" value="root" />
<property name="password" value="" />
</bean>
<bean id="sessionFactory"
class="org.springframework.orm.hibernate5.LocalSessionFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="annotatedClasses">
<list>
<value>springbatch.entities.Personne</value>
</list>
</property>
<property name="hibernateProperties">
<props>
<prop key="hibernate.dialect">org.hibernate.dialect.MySQL5Dialect</prop>
<prop key="hibernate.show_sql">true</prop>
<prop key="hibernate.hbm2ddl.auto">create</prop>
</props>
</property>
</bean>
<job id="importPersonnes" xmlns="http://www.springframework.org/schema/batch">
<step id="readWritePersonne">
<tasklet>
<chunk reader="personneReaderCSV"
processor="personProcessor"
writer="personWriter"
commit-interval="2" />
</tasklet>
</step>
</job>
<bean id="daoPersonne" class="springbatch.dao.PersonneDaoImp">
<property name="factory" ref="sessionFactory"></property>
</bean>
<bean id="personWriter" class="springbatch.batch.PersonneWriter">
<property name="dao" ref="daoPersonne"></property>
</bean>
<bean id="personProcessor" class="springbatch.batch.PersonneProcess">
</bean>
<bean id="batchLauncher" class="springbatch.MyBean">
</bean>
<bean id="jobRepository"
class="org.springframework.batch.core.repository.support.JobRepositoryFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="transactionManager" ref="transactionManager" />
<property name="databaseType" value="Mysql" />
</bean>
<task:scheduled-tasks>
<task:scheduled ref="batchLauncher" method="message"
cron=" 59 * * * * * " />
</task:scheduled-tasks>
<jdbc:initialize-database data-source="dataSource">
<jdbc:script location="org/springframework/batch/core/schema-drop-mysql.sql" />
<jdbc:script location="org/springframework/batch/core/schema-mysql.sql" />
</jdbc:initialize-database>
</beans>
but there is no error when I use:
<bean id="jobRepository"
class="org.springframework.batch.core.repository.support.MapJobRepositoryFactoryBean">
<property name="transactionManager" ref="transactionManager" />
</bean>
You are using Spring Batch v2.2 with Spring Framework 5. That's not going to work properly as ParameterizedRowMapper has been removed in Spring Framework 4.2+ (hence the exception).
I recommend that you use Spring Batch v4.1 (since v2.x is not maintained anymore) and your issue should be fixed.
The best way to manage Spring dependencies is to let Spring Boot do it for you either by generating a project from start.spring.io or by using the Spring Boot BOM. With both ways, you will have the correct Spring projects dependencies that are known to work well together.

Deadlock while syncronizing Spring entity manager query

I'm developing a Spring application that accesses the DB through an entity manager injected with @PersistenceContext. The application is executed in multiple threads, each in its own @Transactional scope.
As part of its processing, each thread calls:
entityManager.createQuery("some statement").getResultList()
This checks whether an object exists in the DB and, if not, persists it. Because I am using multiple threads, I have put the call in a synchronized block on a shared object, like this:
synchronized (getLock()) {
    List<SomeObject> listOfObjects =
            entityManager.createQuery("some statement").getResultList();
    if (listOfObjects == null || listOfObjects.size() < 1) {
        createObject();
    }
}
The problem is that when I run this, the process gets stuck on the call to getResultList and is never released. Moreover, I get the same behavior even when I create only one thread.
If I change the flush mode to COMMIT, the process is released, but the results are not reflected correctly, because the different threads can finish at different times and the one creating the object will not necessarily be the first to finish.
my persistence xml is:
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/persistence
http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd"
version="1.0">
<persistence-unit name="syncAdapterUnit"
transaction-type="RESOURCE_LOCAL">
<provider>org.hibernate.ejb.HibernatePersistence</provider>
<properties>
<property name="hibernate.dialect" value="org.hibernate.dialect.Oracle10gDialect"/>
<property name="hibernate.max_fetch_depth" value="3"/>
<property name="hibernate.connection.pool_size" value="10"/>
<property name="format_sql" value="false"/>
<property name="hibernate.jdbc.batch_size" value="50" />
<property name="hibernate.cache.use_query_cache" value="false" />
<property name="hibernate.cache.use_second_level_cache" value="false" />
</properties>
</persistence-unit>
</persistence>
The spring bean xml:
<beans xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://www.springframework.org/schema/beans"
xmlns:tx="http://www.springframework.org/schema/tx"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-4.0.xsd
http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-4.0.xsd">
<tx:annotation-driven transaction-manager="transactionManager"/>
<bean id="managerFactoryBean" class="com.amdocs.dim.ManagerFactory" />
<bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor" />
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="entityManagerFactory" ref="entityManagerFactory"/>
</bean>
<bean id="entityManagerFactory"
class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean" destroy-method="destroy">
<property name="dataSource" ref="myDataSource"/>
<property name="jpaVendorAdapter" ref="hibernateVendorAdapter"/>
<property name="persistenceUnitName" value="syncAdapterUnit"/>
</bean>
<bean id="hibernateVendorAdapter"
class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
<property name="showSql" value="true"/>
<property name="generateDdl" value="false"/>
<property name="databasePlatform" value="org.hibernate.dialect.Oracle10gDialect"/>
<property name="database" value="ORACLE"/>
</bean>
<bean id="cifDBProperties" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="ignoreUnresolvablePlaceholders" value="true" />
<property name="location" value="classpath:APP-INF/conf/database.properties"/>
<property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_FALLBACK"/>
</bean>
<bean id="myDataSource" class="org.apache.commons.dbcp.BasicDataSource" scope="singleton">
<property name="driverClassName" value="${jdbc.driverClassName}"/>
<property name="url" value="${jdbc.url}"/>
<property name="username" value="${jdbc.username}"/>
<property name="password" value="${jdbc.password}"/>
</bean>
</beans>
I believe the flush process is what blocks the app, but I haven't managed to understand why.

Connection link failure

This is my hibernate.cfg.xml file:
<bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource"
destroy-method="close"
p:driverClass="${app.jdbc.driverClassName}"
p:jdbcUrl="${app.jdbc.url}"
p:user="${app.jdbc.username}"
p:password="${app.jdbc.password}"
p:acquireIncrement="5"
p:idleConnectionTestPeriod="60"
p:maxPoolSize="100"
p:maxStatements="50"
p:minPoolSize="10" />
<!-- Declare a JPA entityManagerFactory-->
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean" >
<property name="persistenceXmlLocation" value="classpath*:persistence.xml"></property>
<property name="persistenceUnitName" value="hibernatePersistenceUnit" />
<property name="dataSource" ref="dataSource"/>
<property name="jpaVendorAdapter">
<bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter" >
<property name="showSql" value="true"/>
</bean>
</property>
I am getting the error:
Params:purchaseOption=webfront&purchaseOptionId=-1
Exception:org.springframework.dao.DataAccessResourceFailureException: Communications link failure>
The last packet successfully received from the server was 989,906
milliseconds ago. The last packet sent successfully to the server was
0 milliseconds ago.; nested exception is
org.hibernate.exception.JDBCConnectionException: Communications link
failure
I have checked many blogs but haven't found a proper solution. My DB is on the same server.
Any idea?
I think this is due to a NoRouteToHostException. That points to the network configuration state. Look at this and those.
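For what it's worth, the 989,906 ms idle gap in the error message also matches the classic MySQL wait_timeout pattern, so validating pooled connections may help; a minimal sketch of the relevant c3p0 knobs in programmatic form (values are placeholders mirroring the XML above):

import com.mchange.v2.c3p0.ComboPooledDataSource;

public class PooledDataSourceSetup {
    public static ComboPooledDataSource dataSource() throws Exception {
        ComboPooledDataSource ds = new ComboPooledDataSource();
        ds.setDriverClass("com.mysql.jdbc.Driver");
        ds.setJdbcUrl("jdbc:mysql://localhost:3306/db"); // placeholder URL
        // Validate connections so ones killed by MySQL's wait_timeout
        // are detected before being handed to Hibernate
        ds.setTestConnectionOnCheckout(true);
        ds.setPreferredTestQuery("SELECT 1");
        ds.setIdleConnectionTestPeriod(60); // matches the config above
        return ds;
    }
}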
