Getting maxSize for data region in Apache Ignite - caching

I am using Apache Ignite 2.8.0. Persistence is disabled, and I set maxSize to 400 MiB with:
<property name="dataStorageConfiguration">
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
<property name="metricsEnabled" value="true"/>
<property name="defaultDataRegionConfiguration">
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="metricsEnabled" value="true"/>
<property name="name" value="Default_Region1"/>
<!-- Setting the size of the default region to 4GB. -->
<property name="maxSize" value="#{400L * 1024 * 1024 }"/>
<!--<property name="persistenceEnabled" value="true"/> -->
</bean>
</property>
</bean>
</property>
Now I am running this in my browser:
http://localhost:8080/ignite?cmd=dataregion
This is the response for Default_Region1:
{"name":"Default_Region1","totalAllocatedPages":0,"totalUsedPages":0,"totalAllocatedSize":0,"allocationRate":0.0,"evictionRate":0.0,"largeEntriesPagesPercentage":0.0,"pagesFillFactor":0.0,"dirtyPages":0,"physicalMemoryPages":0,"physicalMemorySize":0,"usedCheckpointBufferPages":0,"usedCheckpointBufferSize":0,"checkpointBufferSize":0,"pageSize":4096,"offHeapSize":0,"pagesReplaceRate":0.0,"pagesReplaced":0,"pagesReplaceAge":0.0,"offheapUsedSize":0,"pagesRead":0,"pagesWritten":0}
I didn't find maxSize of 400 MiB in the response. How can I confirm that my maxSize is 400 MiB using an HTTP request?

I'm not sure about HTTP, but DataRegionMetricsMXBean has a getMaxSize() accessor.
You can execute compute tasks from REST, so you can implement a task that does that and call it via the exe REST command.
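A minimal sketch of such a task; the class name is an assumption, and instead of going through the MXBean it reads the configured limit from the node's DataStorageConfiguration, which is the same value the MXBean reports:

import java.util.Collection;
import java.util.Collections;
import java.util.List;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteException;
import org.apache.ignite.compute.ComputeJob;
import org.apache.ignite.compute.ComputeJobAdapter;
import org.apache.ignite.compute.ComputeJobResult;
import org.apache.ignite.compute.ComputeTaskSplitAdapter;
import org.apache.ignite.resources.IgniteInstanceResource;

// Hypothetical task that returns the configured maxSize of the default data region.
public class MaxSizeTask extends ComputeTaskSplitAdapter<String, Long> {
    @Override
    protected Collection<? extends ComputeJob> split(int gridSize, String arg) {
        return Collections.singletonList(new ComputeJobAdapter() {
            @IgniteInstanceResource
            private Ignite ignite;

            @Override
            public Object execute() {
                // Configured limit for the default data region (400 MiB in the question's config).
                return ignite.configuration().getDataStorageConfiguration()
                        .getDefaultDataRegionConfiguration().getMaxSize();
            }
        });
    }

    @Override
    public Long reduce(List<ComputeJobResult> results) throws IgniteException {
        return results.get(0).getData();
    }
}

Once the class is on the server classpath, a request like http://localhost:8080/ignite?cmd=exe&name=MaxSizeTask (using the fully qualified class name) should return 419430400, i.e. 400 MiB in bytes.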

Related

How to use limit and offset clauses in JdbcPagingItemReader in Spring Batch?

The table has more than 200 million records, but I need to restrict the select to the top 5 million records. I tried JdbcCursorItemReader, which takes around 2-3 hours to select the records and write them to a CSV file in a single step with chunk processing, so I chose the parallel processing that Spring Batch offers:
i.e., a taskExecutor plus JdbcPagingItemReader, producing 5 individual files of a million records each. The problem is that I am not able to specify the limit and offset clause in the query parameters. Please help me with this; an approach better than this one is also appreciated.
<bean id="itemReader" class="org.springframework.batch.item.database.JdbcPagingItemReader" scope="step">
<property name="dataSource" ref="dataSource" />
<property name="rowMapper">
<bean class="MyRowMapper" />
</property>
<property name="queryProvider">
<bean class="org.springframework.batch.item.database.support.SqlPagingQueryProviderFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="sortKeys">
<map>
<entry key="esmeaddr" value="ASCENDING"/>
</map>
</property>
<property name="selectClause" value="elect cust_send,dest,msg,stime,dtime,dn_status,mid,rp,operator,circle,cust_mid,first_attempt,second_attempt,third_attempt,fourth_attempt,fifth_attempt,term_operator,term_circle,bindata,reason,tag1,tag2,tag3,tag4,tag5"
/>
<property name="fromClause" value="FROM bill_log " />
<property name="whereClause" value="where esmeaddr = '70897600000000' and country='India' and apptype='SMS' Limit 0,1000000" />
</bean>
</property>
<property name="pageSize" value="1000000" />
<property name="parameterValues">
<map>
<entry key="param1" value="#{jobExecutionContext[param1]}" />
<entry key="param2" value="#{jobExecutionContext[param2]}" />
</map>
</property>
</bean>
You can't use a SQL LIMIT clause within that reader, since paging is what the reader itself does. Instead, Spring Batch has this functionality built into JdbcPagingItemReader. To set the max number of items to process, configure the reader with JdbcPagingItemReader#setMaxItemCount(5000000), and if there is an offset, set JdbcPagingItemReader#setCurrentItemCount(offset). That being said, the offset will be overridden on a restart with any value found in the ExecutionContext. You can read more in the javadoc here: https://docs.spring.io/spring-batch/trunk/apidocs/org/springframework/batch/item/database/JdbcPagingItemReader.html
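A minimal Java-config sketch of what that looks like; the BillLog item type is an assumption, MyRowMapper is the mapper from the question's config, and the query provider is assumed to be wired elsewhere:

import javax.sql.DataSource;

import org.springframework.batch.core.configuration.annotation.StepScope;
import org.springframework.batch.item.database.JdbcPagingItemReader;
import org.springframework.batch.item.database.PagingQueryProvider;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ReaderConfig {
    @Bean
    @StepScope
    public JdbcPagingItemReader<BillLog> itemReader(DataSource dataSource,
                                                    PagingQueryProvider queryProvider) {
        JdbcPagingItemReader<BillLog> reader = new JdbcPagingItemReader<>();
        reader.setDataSource(dataSource);
        reader.setQueryProvider(queryProvider);
        reader.setRowMapper(new MyRowMapper());
        reader.setPageSize(1_000_000);
        // Replaces the SQL LIMIT: stop after 5 million items in total.
        reader.setMaxItemCount(5_000_000);
        // Replaces the SQL OFFSET: start reading from this item index.
        // Note: overridden by the ExecutionContext on restart.
        reader.setCurrentItemCount(0);
        return reader;
    }
}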

org.apache.ignite.IgniteCheckedException: Cannot enable write-through

Here is the configuration for the cache. I want writeThrough to be enabled. Why did I get the below exception? What does "writer or store is not provided" mean?
Configuration:
<property name="cacheConfiguration">
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="txnCache"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="writeSynchronizationMode" value="FULL_SYNC"/>
<property name="writeThrough" value="true"/>
<property name="backups" value="1"/>
<!--property name="cacheMode" value="REPLICATED"/-->
<!-- <property name="atomicityMode" value="ATOMIC"/>
<property name="readFromBackup" value="true"/>
<property name="copyOnRead" value="true"/>-->
</bean>
</property>
Error:
[13:24:07,176][SEVERE][main][IgniteKernal] Got exception while starting (will rollback startup routine).
class org.apache.ignite.IgniteCheckedException: Cannot enable write-through (writer or store is not provided) for cache: txnCache
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.validate(GridCacheProcessor.java:482)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.createCache(GridCacheProcessor.java:1462)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.onKernalStart(GridCacheProcessor.java:885)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1013)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1895)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1647)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1075)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:573)
at org.apache.ignite.internal.processors.platform.PlatformAbstractBootstrap.start(PlatformAbstractBootstrap.java:48)
at org.apache.ignite.internal.processors.platform.PlatformIgnition.start(PlatformIgnition.java:76)
[13:24:07] Cancelled rebalancing from all nodes [topology=null]
[13:24:07] Cancelled rebalancing from all nodes [topology=null]
To configure write-through, you need to implement the CacheStore interface (or use one of the existing implementations) and set the cacheStoreFactory as well as the writeThrough property of CacheConfiguration. It will look like:
<bean id= "simpleDataSource" class="org.h2.jdbcx.JdbcDataSource"/>
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
...
<property name="cacheConfiguration">
<list>
<bean class="org.apache.ignite.configuration.CacheConfiguration">
...
<property name="writeThrough" value="true"/>
<property name="cacheStoreFactory">
<bean class="org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory">
<property name="dataSourceBean" value = "simpleDataSource" />
</bean>
</property>
</bean>
</list>
</property>
</bean>
Here is more information about cacheStore and writeThrough:
https://apacheignite.readme.io/v2.0/docs/persistent-store#section-read-through-and-write-through
what does "writer or store is not provided" mean?
It means that you didn't provide store in configuration.
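If none of the built-in stores fit, a custom store can be plugged in instead of CacheJdbcPojoStoreFactory. A minimal sketch, where the class name, key/value types, and the println "persistence" are all placeholders:

import javax.cache.Cache;

import org.apache.ignite.cache.store.CacheStoreAdapter;

public class TxnCacheStore extends CacheStoreAdapter<Long, String> {
    @Override
    public String load(Long key) {
        // Read-through: fetch the value from the underlying storage.
        return null; // placeholder
    }

    @Override
    public void write(Cache.Entry<? extends Long, ? extends String> entry) {
        // Write-through: persist the entry to the underlying storage.
        System.out.println("Persisting " + entry.getKey() + " = " + entry.getValue());
    }

    @Override
    public void delete(Object key) {
        // Write-through: remove the key from the underlying storage.
    }
}

It would then be registered on the cache via cacheStoreFactory, e.g. with javax.cache.configuration.FactoryBuilder.factoryOf(TxnCacheStore.class) in Java config or an equivalent FactoryBuilder bean in XML.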

Can we use DBCP 2 or the Tomcat connection pool for distributed transactions in Spring? Can these connection pools be used along with JOTM or Atomikos?

Initially I was using a different transaction manager for each of multiple data sources. But I had problems managing rollback across all data sources when one of them had a transaction failure. I want to manage multiple data sources with a single transaction manager in Spring, so I opted for JOTM or Atomikos. Both of these transaction managers use an XA connection pool (org.enhydra.jdbc.pool.StandardXAPoolDataSource). But in my project I am only allowed to use DBCP 2 (org.apache.commons.dbcp.BasicDataSource) or the Tomcat connection pool (org.apache.tomcat.jdbc.pool.DataSource). Is it possible to use either of these connection pools with JOTM or Atomikos? Please help me with this, along with a configuration example. Below are my configuration details:
<bean id="jotm" class="org.springframework.transaction.jta.JotmFactoryBean"/>
<bean id="txManager" class="org.springframework.transaction.jta.JtaTransactionManager">
    <property name="userTransaction" ref="jotm" />
</bean>
<bean id="dataSource1" class="org.enhydra.jdbc.pool.StandardXAPoolDataSource" destroy-method="shutdown">
    <property name="dataSource">
        <bean class="org.enhydra.jdbc.standard.StandardXADataSource" destroy-method="shutdown">
            <property name="transactionManager" ref="jotm" />
            <property name="driverName" value="${jdbc.d1.driver}" />
            <property name="url" value="${jdbc.d1.url}" />
        </bean>
    </property>
    <property name="user" value="${jdbc.d1.username}" />
    <property name="password" value="${jdbc.d1.password}" />
</bean>
<bean id="dataSource2" class="org.enhydra.jdbc.pool.StandardXAPoolDataSource" destroy-method="shutdown">
    <property name="dataSource">
        <bean class="org.enhydra.jdbc.standard.StandardXADataSource" destroy-method="shutdown">
            <property name="transactionManager" ref="jotm" />
            <property name="driverName" value="${jdbc.d2.driver}" />
            <property name="url" value="${jdbc.d2.url}" />
        </bean>
    </property>
    <property name="user" value="${jdbc.d2.username}" />
    <property name="password" value="${jdbc.d2.password}" />
</bean>
Help with any other possible ways to achieve this is also appreciated.

"client not initialized" error when using SSMCache with AWS elasticache autodiscovery

I am using Spring Cache with the AWS ElastiCache provider. I get this warning:
WARN c.g.code.ssm.spring.SSMCache - An error has occurred for cache defaultCache and key
java.lang.IllegalStateException: Client is not initialized
at net.spy.memcached.MemcachedClient.checkState(MemcachedClient.java:1623) ~[elasticache-java-cluster-client.jar:na]
at net.spy.memcached.MemcachedClient.enqueueOperation(MemcachedClient.java:1617) ~[elasticache-java-cluster-client.jar:na]
at net.spy.memcached.MemcachedClient.asyncGet(MemcachedClient.java:1013) ~[elasticache-java-cluster-client.jar:na]
at net.spy.memcached.MemcachedClient.get(MemcachedClient.java:1235) ~[elasticache-java-cluster-client.jar:na]
at net.spy.memcached.MemcachedClient.get(MemcachedClient.java:1256) ~[elasticache-java-cluster-client.jar:na]
at com.google.code.ssm.providers.elasticache.MemcacheClientWrapper.get(MemcacheClientWrapper.java:147) ~[aws-elasticache-provider.jar:na]
at com.google.code.ssm.CacheImpl.get(CacheImpl.java:271) ~[simple-spring-memcached.jar:na]
at com.google.code.ssm.CacheImpl.get(CacheImpl.java:106) ~[simple-spring-memcached.jar:na]
at com.google.code.ssm.spring.SSMCache.getValue(SSMCache.java:226) [spring-cache.jar:na]
at com.google.code.ssm.spring.SSMCache.get(SSMCache.java:100) [spring-cache.jar:na]
I am using the same memcached setup without Spring Cache and it works fine; I get this error only when I use Spring Cache.
I have verified that the security groups have the inbound port specified, and I am running my code on EC2.
UPDATE 1:
Adding my config:
<bean name="cacheManager" class="com.google.code.ssm.spring.SSMCacheManager">
<property name="caches">
<set>
<bean class="com.google.code.ssm.spring.SSMCache">
<constructor-arg name="cache" index="0" ref="defaultMemcachedClient" />
<!-- 5 minutes -->
<constructor-arg name="expiration" index="1" value="3600" />
<!-- #CacheEvict(..., "allEntries" = true) won't work because allowClear is false,
so we won't flush accidentally all entries from memcached instance -->
<constructor-arg name="allowClear" index="2" value="false" />
</bean>
</set>
</property>
</bean>
<bean name="defaultMemcachedClient" class="com.google.code.ssm.CacheFactory">
<property name="cacheName" value="defaultCache" />
<property name="cacheClientFactory">
<bean name="cacheClientFactory" class="com.google.code.ssm.providers.elasticache.MemcacheClientFactoryImpl" />
</property>
<property name="addressProvider">
<bean class="com.google.code.ssm.config.DefaultAddressProvider">
<property name="address" value="127.0.0.1:11211" />
</bean>
</property>
<property name="configuration">
<bean class="com.google.code.ssm.providers.elasticache.ElastiCacheConfiguration">
<property name="consistentHashing" value="true" />
<property name="clientMode" value="#{T(net.spy.memcached.ClientMode).Dynamic}" />
</bean>
</property>
</bean>
Show your configuration and usage.
It seems that you haven't defined defaultCache, or you used @Cacheable without the 'value' param set.
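For reference, a hedged sketch of the usage the answer is asking about; the service and User types are made up for illustration, and the point is that the 'value' must match the cacheName configured on the CacheFactory above:

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class UserService {

    // Minimal placeholder type for the example.
    public static class User {
        final String id;
        User(String id) { this.id = id; }
    }

    // 'value' must name a cache that the SSMCacheManager actually defines
    // ("defaultCache" in the config above); omitting it or naming an
    // undefined cache leaves the manager unable to resolve the cache.
    @Cacheable(value = "defaultCache", key = "#id")
    public User findUser(String id) {
        return new User(id); // stands in for an expensive lookup
    }
}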

Spring MDP not Consuming Message

I am implementing a Spring MDP + JmsTemplate to send and receive messages. The message send mechanism is working fine; however, the MDP is not getting invoked. I tried testing via a plain receiver and was able to retrieve the message, but not via the MDP. What could be the problem? I can see the messages accumulating in the request queue, but somehow the MDP is not getting triggered. Am I missing anything in the configuration, or does something else need to be taken care of?
Here's the Spring config. The Java classes to send and receive are pretty much standard ones.
<bean id="cookieRequestListener" class="com.text.jms.mq.mdp.RequestQueueMDP">
<property name="logger" ref="mqLogger" />
<property name="scoringEngine" ref="scoringEngine" />
<property name="mqSender" ref="jmsMQSender" />
</bean>
<bean id="CookieRequestContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="connectionFactory" ref="cachedConnectionFactory" />
<property name="destination" ref="jmsRequestQueue" />
<property name="messageListener" ref="cookieRequestListener" />
</bean>
<bean id="jmsConnectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiTemplate">
<ref bean="jndiTemplate" />
</property>
<property name="jndiName">
<value>java:/jms/queueCF</value>
</property>
</bean>
<!-- A cached connection to wrap the Queue connection -->
<bean id="cachedConnectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">
<property name="targetConnectionFactory" ref="jmsConnectionFactory"/>
<property name="sessionCacheSize" value="10" />
</bean>
<!-- jms Request Queue Configuration -->
<bean id="jmsRequestQueue" class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiTemplate">
<ref bean="jndiTemplate" />
</property>
<property name="jndiName">
<value>java:/jms/cookieReqQ</value>
</property>
</bean>
<!-- jms Response Queue Configuration -->
<bean id="jmsResponseQueue" class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiTemplate">
<ref bean="jndiTemplate" />
</property>
<property name="jndiName">
<value>java:/jms/cookieResQ</value>
</property>
</bean>
<bean id="jmsJMSTemplate" class="org.springframework.jms.core.JmsTemplate" >
<property name="connectionFactory" ref="cachedConnectionFactory" />
</bean>
<!-- jms MQ Utility -->
<bean id="jmsMQSender" class="com.text.jms.util.MQSender">
<property name="jmsTemplate">
<ref bean="jmsJMSTemplate"></ref>
</property>
<property name="defaultDestination">
<ref bean="jmsRequestQueue" />
</property>
<property name="logger" ref="mqLogger" />
</bean>
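For the container to invoke cookieRequestListener, RequestQueueMDP must implement javax.jms.MessageListener (or Spring's SessionAwareMessageListener). A hedged sketch of the expected shape; the collaborator fields are typed as Object only because their real types (logger, scoring engine, MQSender) are not shown in the question:

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

public class RequestQueueMDP implements MessageListener {
    private Object logger;        // injected via <property name="logger">
    private Object scoringEngine; // injected via <property name="scoringEngine">
    private Object mqSender;      // injected via <property name="mqSender">

    @Override
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                String payload = ((TextMessage) message).getText();
                // ... score the payload and reply via mqSender ...
            }
        } catch (JMSException e) {
            // Rethrow so the container can log and handle redelivery.
            throw new RuntimeException(e);
        }
    }

    public void setLogger(Object logger) { this.logger = logger; }
    public void setScoringEngine(Object scoringEngine) { this.scoringEngine = scoringEngine; }
    public void setMqSender(Object mqSender) { this.mqSender = mqSender; }
}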
