How to configure DelegatingSessionFactory in the Spring SFTP inbound channel adapter

We want to select the session factory for the Spring SFTP inbound channel adapter at run time, and we have done the configuration below for that.
We have gone through the Spring Integration SFTP documentation, but we are not sure how to set the session-factory value from Java. Could you please suggest how to switch the session factory at run time in the SFTP inbound channel adapter using DelegatingSessionFactory?
XML Configuration
<beans>
    <bean id="defaultSftpSessionFactoryOne" class="org.springframework.integration.sftp.session.DefaultSftpSessionFactory">
        <property name="host" value="**.***.**.***" />
        <property name="port" value="**" />
        <property name="user" value="######" />
        <property name="password" value="######" />
        <property name="allowUnknownKeys" value="true" />
    </bean>
    <bean id="defaultSftpSessionFactoryTwo" class="org.springframework.integration.sftp.session.DefaultSftpSessionFactory">
        <property name="host" value="**.***.**.***" />
        <property name="port" value="**" />
        <property name="user" value="######" />
        <property name="password" value="######" />
        <property name="allowUnknownKeys" value="true" />
    </bean>
    <bean id="delegatingSessionFactory" class="org.springframework.integration.file.remote.session.DelegatingSessionFactory">
        <constructor-arg>
            <bean id="factoryLocator" class="org.springframework.integration.file.remote.session.DefaultSessionFactoryLocator">
                <constructor-arg name="factories">
                    <map>
                        <entry key="one" value-ref="defaultSftpSessionFactoryOne" />
                        <entry key="two" value-ref="defaultSftpSessionFactoryTwo" />
                    </map>
                </constructor-arg>
            </bean>
        </constructor-arg>
    </bean>
    <int:channel id="receiveChannel" />
    <int-sftp:inbound-channel-adapter id="sftpInbondAdapter" auto-startup="false"
            channel="receiveChannel" session-factory="delegatingSessionFactory"
            local-directory="C:\\Users\\sftp" remote-directory="/tmp/archive"
            auto-create-local-directory="true" delete-remote-files="false"
            filename-regex=".*\.txt$">
        <int:poller cron="0/10 * * * * ?" />
    </int-sftp:inbound-channel-adapter>
</beans>
Java code
ApplicationContext ac = new ClassPathXmlApplicationContext("beans.xml");
DelegatingSessionFactory<String> dsf = (DelegatingSessionFactory<String>) ac.getBean("delegatingSessionFactory");
SessionFactory<String> one = dsf.getFactoryLocator().getSessionFactory("one");
SessionFactory<String> two = dsf.getFactoryLocator().getSessionFactory("two");
dsf.setThreadKey("two");
SourcePollingChannelAdapter spca = (SourcePollingChannelAdapter) ac.getBean("sftpInbondAdapter");
spca.start();

The delegating session factory was really intended for outbound adapters and gateways; typically, inbound adapters don't switch between different servers.
Setting the thread key on the main thread like that does nothing. You need to set/clear the key on the thread that invokes the adapter; this is shown for outbound adapters in the documentation. For an inbound adapter, you need to do it on the poller thread.
I don't know what criteria you will use to select the factory, but you can use a "smart poller" (a poller advice) to do it.
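As a minimal sketch (not part of the original answer), such an advice could be a plain AOP MethodInterceptor added to the poller's advice chain; the class name and the hard-coded key "two" are placeholders for whatever selection logic you need:

import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;

import org.springframework.integration.file.remote.session.DelegatingSessionFactory;

// Wraps each poll so the delegating session factory key is set and cleared
// on the poller thread, not on the main thread.
public class SessionFactorySelectingAdvice implements MethodInterceptor {

    private final DelegatingSessionFactory<?> delegatingSessionFactory;

    public SessionFactorySelectingAdvice(DelegatingSessionFactory<?> delegatingSessionFactory) {
        this.delegatingSessionFactory = delegatingSessionFactory;
    }

    @Override
    public Object invoke(MethodInvocation invocation) throws Throwable {
        this.delegatingSessionFactory.setThreadKey("two"); // placeholder selection logic
        try {
            return invocation.proceed(); // the SFTP receive() runs here, on the poller thread
        }
        finally {
            this.delegatingSessionFactory.clearThreadKey();
        }
    }
}

The advice bean would then be referenced from an <int:advice-chain> element nested in the <int:poller>, so that setThreadKey() and clearThreadKey() run around each poll.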

Related

ActiveMQ DefaultMessageListenerContainer Why Only One Connection?

<bean id="amqConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL" value="${mq.activemq.host}" />
<property name="userName" value="${mq.activemq.user}" />
<property name="password" value="${mq.activemq.pass}" />
<property name="maxThreadPoolSize" value="30" />
</bean>
<bean id="amqPooledConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory" destroy-method="stop">
<property name="connectionFactory" ref="amqConnectionFactory" />
<property name="maxConnections" value="10" />
<property name="maximumActiveSessionPerConnection" value="300" />
<property name="idleTimeout" value="60000" />
</bean>
<bean id="queueListenerContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="connectionFactory" ref="amqPooledConnectionFactory" />
<property name="destination" ref="queueDestination" />
<property name="messageListener" ref="queueAwareMessageListener" />
<property name="taskExecutor" ref="queueListenerTaskExecutor" />
<property name="concurrency" value="5-30" />
</bean>
What is the relationship between maxThreadPoolSize, maxConnections, maximumActiveSessionPerConnection, and concurrency?
Why, when I set maxConnections = 10, does the listener only ever use one connection from the pool and never any more?
The number of consumers is correct: it starts at 5 and gradually increases under load.
The bottom line: the number of connections used here is determined by the org.springframework.jms.listener.DefaultMessageListenerContainer you have configured, since that is the only component actually creating connections. As far as I can tell it will only ever create a single connection, so using a connection pool here appears pointless. The concurrency property simply controls the number of concurrent consumers on that connection.
By setting maxConnections = 10 on the org.apache.activemq.pool.PooledConnectionFactory you are just limiting the size of the connection pool. However, since the queueListenerContainer never calls createConnection() more than once, it doesn't really matter.
You can read about the maxThreadPoolSize property of org.apache.activemq.ActiveMQConnectionFactory in the ActiveMQ documentation.
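For illustration only (not part of the original answer), here is the same container as a minimal Java-config sketch, with comments marking which setting controls what; the injected bean names mirror the XML above and the listener type is assumed to be a plain javax.jms.MessageListener:

import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.MessageListener;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

@Configuration
public class QueueListenerConfig {

    @Bean
    public DefaultMessageListenerContainer queueListenerContainer(
            ConnectionFactory amqPooledConnectionFactory,
            Destination queueDestination,
            MessageListener queueAwareMessageListener) {

        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(amqPooledConnectionFactory); // the pool is asked for a connection only once
        container.setDestination(queueDestination);
        container.setMessageListener(queueAwareMessageListener);
        // 5 to 30 consumers (sessions), all multiplexed over the container's single
        // connection, which is why maxConnections=10 on the pool is never exercised
        container.setConcurrency("5-30");
        return container;
    }
}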

Spring Batch & Spring Integration (JMS) & Load Balance Slaves

I'm using:
- Spring Batch:
  - Step 1
  - Step 2: master (partitioner)
  - Step 3
- Spring Integration (JMS) to communicate between the master and the slaves
The issue we are seeing is that the first slave handles all of the JMS messages instead of the messages being distributed evenly across the slaves.
The configuration is below.
Master
<bean id="PreProcess" class="com.job.tasklet.PreProcessTasklet" scope="step">
<constructor-arg index="0" value="${run.slave}"/>
<property name="maxNumberOfSlaves" value="#{jobParameters['max-slave-count']}"/>
</bean>
<bean id="PostProcess" class="com.job.tasklet.PostProcessTasklet" scope="prototype">
<constructor-arg index="0" ref="chpsJobDataSource"/>
</bean>
<bean id="partitioner" class="com.job.partition.DatabasePartitioner" scope="step">
<constructor-arg index="3" value="${max.row.count}"/>
</bean>
<bean id="partitionHandler" class="com.job.handler.StepExecutionAggregatorHandler">
<property name="stepName" value="processAutoHoldSlaveStep"/>
<property name="gridSize" value="${grid.size}"/>
<property name="replyChannel" ref="aggregatedGroupRuleReplyChannel"/>
<property name="messagingOperations">
<bean class="org.springframework.integration.core.MessagingTemplate">
<property name="defaultChannel" ref="groupRuleRequestsChannel"/>
</bean>
</property>
</bean>
<!-- Request Start -->
<int:channel id="groupRuleRequestsChannel" />
<int-jms:outbound-channel-adapter channel="groupRuleRequestsChannel" jms-template="jmsTemplateToSlave"/>
<bean id="jmsTemplateToSlave" class="org.springframework.jms.core.JmsTemplate">
<property name="connectionFactory" ref="connectionFactory"/>
<property name="receiveTimeout" value="5000"/>
<property name="defaultDestinationName" value="defaultRequest"/>
</bean>
<bean id="jmsTemplateFromSlave" class="org.springframework.jms.core.JmsTemplate" parent="jmsTemplateToSlave">
<property name="defaultDestinationName" value="defaultRequest"/>
</bean>
<!-- Response Test Start -->
<int:channel id="groupRuleReplyChannel">
<!-- <int:queue/> -->
</int:channel>
<int-jms:inbound-channel-adapter channel="groupRuleReplyChannel" jms-template="jmsTemplateFromSlave">
<int:poller id="defaultPoller" default="true" max-messages-per-poll="1" fixed-rate="3000" />
</int-jms:inbound-channel-adapter>
<!-- define aggregatedReplyChannel -->
<int:channel id="aggregatedGroupRuleReplyChannel">
<int:queue/>
</int:channel>
<int:aggregator ref="partitionHandler"
input-channel="groupRuleReplyChannel"
output-channel="aggregatedGroupRuleReplyChannel"
send-timeout="3600000"/>
Slave
<int:channel id="requestsChannel" />
<bean id="connectionFactory" class="org.apache.activemq.spring.ActiveMQConnectionFactory">
<property name="brokerURL" value="${spring.activemq.broker-url}" />
<property name="trustAllPackages" value="true" />
</bean>
<int-jms:message-driven-channel-adapter id="jmsIn" destination-name="#{args[0]}" channel="requestsChannel" connection-factory="connectionFactory" max-messages-per-task="1"/>
<int:service-activator input-channel="requestsChannel" output-channel="replyChannel" ref="stepExecutionRequestHandler" />
<int:channel id="replyChannel" />
<int-jms:outbound-channel-adapter connection-factory="connectionFactory" destination-name="#{args[1]}" channel="replyChannel" />
Please advise if you have experienced this issue, and let me know if you need more information.
Note: I have already searched a lot here and on Google, but have had no luck finding a solution yet.
ActiveMQ uses a prefetch of 1000 by default (see the ActiveMQ documentation on prefetch limits).
In other words, the first (up to) 1000 partitions will go to the first consumer, and so on.
You can reduce the prefetch; 1 is probably fine for this application.
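As a minimal sketch (not from the original answer), the prefetch can be lowered on the slave connection factory. This uses the plain org.apache.activemq.ActiveMQConnectionFactory, whereas the configuration above uses the Spring-aware subclass; the factory method is just for illustration:

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.ActiveMQPrefetchPolicy;

public class SlaveConnectionFactoryBuilder {

    public static ActiveMQConnectionFactory withPrefetchOfOne(String brokerUrl) {
        ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory(brokerUrl);
        ActiveMQPrefetchPolicy prefetchPolicy = new ActiveMQPrefetchPolicy();
        prefetchPolicy.setQueuePrefetch(1); // each slave consumer takes one message at a time
        connectionFactory.setPrefetchPolicy(prefetchPolicy);
        return connectionFactory;
    }
}

The same effect can also be achieved declaratively, for example by appending jms.prefetchPolicy.queuePrefetch=1 to the broker URL, or consumer.prefetchSize=1 to the destination name.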

How to join Spring JMS transactions from two different connection factories?

I am using different connection factories for sending and receiving messages, and I am having trouble with partial commits in case of delivery failures. The jms:message-driven-channel-adapter uses the receiveConnectionFactory to receive messages from the queue. The jms:outbound-channel-adapter uses the deliverConnectionFactory to send messages to multiple downstream queues. We have only one JmsTransactionManager, which uses the receiveConnectionFactory, and the jms:outbound-channel-adapter is configured with session-transacted="true".
<beans>
    <bean id="transactionManager" class="org.springframework.jms.connection.JmsTransactionManager">
        <property name="connectionFactory" ref="receiveConnectionFactory" />
    </bean>
    <bean id="receiveConnectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">
        <property name="targetConnectionFactory">
            <bean class="com.ibm.mq.jms.MQQueueConnectionFactory">
                <property name="hostName" value="${mq.host}" />
                <property name="channel" value="${mq.channel}" />
                <property name="port" value="${mq.port}" />
            </bean>
        </property>
        <property name="sessionCacheSize" value="${receive.factory.cachesize}" />
        <property name="cacheProducers" value="${receive.cache.producers.enabled}" />
        <property name="cacheConsumers" value="${receive.cache.consumers.enabled}" />
    </bean>
    <bean id="deliverConnectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">
        <property name="targetConnectionFactory">
            <bean class="com.ibm.mq.jms.MQQueueConnectionFactory">
                <property name="hostName" value="${mq.host}" />
                <property name="channel" value="${mq.channel}" />
                <property name="port" value="${mq.port}" />
            </bean>
        </property>
        <property name="sessionCacheSize" value="${send.factory.cachesize}" />
        <property name="cacheProducers" value="${send.cache.producers.enabled}" />
        <property name="cacheConsumers" value="${send.cache.consumers.enabled}" />
    </bean>
    <tx:advice id="txAdviceNew" transaction-manager="transactionManager">
        <tx:attributes>
            <tx:method name="send" propagation="REQUIRES_NEW" />
        </tx:attributes>
    </tx:advice>
    <aop:config>
        <aop:advisor advice-ref="txAdviceNew" pointcut="bean(inputChannel)" />
        <aop:advisor advice-ref="txAdviceNew" pointcut="bean(errorChannel)" />
    </aop:config>
    <jms:message-driven-channel-adapter
            id="mdchanneladapter" channel="inputChannel" task-executor="myTaskExecutor"
            connection-factory="receiveConnectionFactory" destination="inputQueue"
            error-channel="errorChannel" concurrent-consumers="${num.consumers}"
            max-concurrent-consumers="${max.num.consumers}" max-messages-per-task="${max.messagesPerTask}"
            transaction-manager="transactionManager" />
    <jms:outbound-channel-adapter
            connection-factory="deliverConnectionFactory" session-transacted="true"
            destination-expression="headers.get('Deliver')" explicit-qos-enabled="true" />
</beans>
When there is an MQ exception on any one destination, a partial commit occurs and then the commit to the failure queue happens. I am looking to see whether I am missing some configuration to join the transactions so that a partial commit never happens.
I tried using only one connection factory for both send and receive (the receiveConnectionFactory); the partial commit does not happen and everything works as expected.
Using only one connection factory for both send and receive (the receiveConnectionFactory), as you tried, is the right way to go in your case.
I see that your two ConnectionFactories differ only as objects; everything else points to the same target MQ server.
If you definitely cannot live with a single ConnectionFactory, you should consider using a JtaTransactionManager, or configuring an org.springframework.data.transaction.ChainedTransactionManager over two JmsTransactionManagers, one per connection factory.
See Dave Syer's article on the matter: https://www.javaworld.com/article/2077963/open-source-tools/distributed-transactions-in-spring--with-and-without-xa.html
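If you go the ChainedTransactionManager route, a minimal sketch (not from the original answer; bean and method names are assumptions) could look like the following. Note that chaining gives best-effort ordered commits, not a true XA transaction:

import javax.jms.ConnectionFactory;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.transaction.ChainedTransactionManager;
import org.springframework.jms.connection.JmsTransactionManager;

@Configuration
public class ChainedTransactionConfig {

    @Bean
    public JmsTransactionManager receiveTransactionManager(ConnectionFactory receiveConnectionFactory) {
        return new JmsTransactionManager(receiveConnectionFactory);
    }

    @Bean
    public JmsTransactionManager deliverTransactionManager(ConnectionFactory deliverConnectionFactory) {
        return new JmsTransactionManager(deliverConnectionFactory);
    }

    @Bean
    public ChainedTransactionManager chainedTransactionManager(JmsTransactionManager receiveTransactionManager,
            JmsTransactionManager deliverTransactionManager) {
        // transactions start in the order given and commit in reverse order:
        // deliver commits first, then receive
        return new ChainedTransactionManager(receiveTransactionManager, deliverTransactionManager);
    }
}

The chained manager would then typically replace the single transactionManager referenced by the message-driven adapter, so that both JMS sessions are committed or rolled back together (with best-effort guarantees only).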

Can we use DBCP 2 or the Tomcat connection pool for distributed transactions in Spring? Can these connection pools be used along with JOTM or Atomikos?

Initially I was using a different transaction manager for each of my data sources, but I had trouble rolling back all data sources when a transaction failed on one of them. I want to manage multiple data sources with a single transaction manager in Spring, so I opted for JOTM or Atomikos. Both of these transaction managers use an XA connection pool (org.enhydra.jdbc.pool.StandardXAPoolDataSource), but in my project I am only allowed to use DBCP 2 (org.apache.commons.dbcp.BasicDataSource) or the Tomcat connection pool (org.apache.tomcat.jdbc.pool.DataSource). Is it possible to use either of these connection pools with JOTM or Atomikos? Could someone please help me with this, ideally with a configuration example? My configuration details are below.
<bean id="jotm" class="org.springframework.transaction.jta.JotmFactoryBean"/>
<bean id="txManager" class="org.springframework.transaction.jta.JtaTransactionManager">
    <property name="userTransaction" ref="jotm" />
</bean>
<bean id="dataSource1" class="org.enhydra.jdbc.pool.StandardXAPoolDataSource" destroy-method="shutdown">
    <property name="dataSource">
        <bean class="org.enhydra.jdbc.standard.StandardXADataSource" destroy-method="shutdown">
            <property name="transactionManager" ref="jotm" />
            <property name="driverName" value="${jdbc.d1.driver}" />
            <property name="url" value="${jdbc.d1.url}" />
        </bean>
    </property>
    <property name="user" value="${jdbc.d1.username}" />
    <property name="password" value="${jdbc.d1.password}" />
</bean>
<bean id="dataSource2" class="org.enhydra.jdbc.pool.StandardXAPoolDataSource" destroy-method="shutdown">
    <property name="dataSource">
        <bean class="org.enhydra.jdbc.standard.StandardXADataSource" destroy-method="shutdown">
            <property name="transactionManager" ref="jotm" />
            <property name="driverName" value="${jdbc.d2.driver}" />
            <property name="url" value="${jdbc.d2.url}" />
        </bean>
    </property>
    <property name="user" value="${jdbc.d2.username}" />
    <property name="password" value="${jdbc.d2.password}" />
</bean>
Any other possible way to achieve this would also be welcome.

Spring DefaultMessageListenerContainer MDP Initialization

What is the best way to perform initialization when a DefaultMessageListenerContainer starts up? Currently I wait for the first message and keep track of it with a boolean flag, which isn't very pretty. Is there a better way? I want to read and load certain data into a cache once the message-driven POJO has started, so that message processing is faster.
Edit: Spring config fragment:
<bean id="itemListener" class="com.test.ItemMDPImpl" autowire="byName" />
<bean id="itemListenerAdapter" class="org.springframework.jms.listener.adapter.MessageListenerAdapter">
<property name="delegate" ref="itemListener" />
<property name="defaultListenerMethod" value="processItems" />
<property name="messageConverter" ref="itemMessageConverter" />
</bean>
<bean class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="connectionFactory" ref="itemMqConnectionFactory" />
<property name="destinationName" value="${item_queue_name}" />
<property name="messageListener" ref="itemListenerAdapter" />
<property name="transactionManager" ref="jtaTransactionManager" />
<property name="sessionTransacted" value="true" />
<property name="concurrentConsumers" value="1" />
<property name="receiveTimeout" value="3000" />
</bean>
I would like to have some initialization done before any message is received by the listener.
Can't you just use @PostConstruct to annotate a method on ItemMDPImpl to perform the startup initialization, just like for any other Spring bean? See the Spring reference documentation, section 4.9.6, "@PostConstruct and @PreDestroy".
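A minimal sketch (not from the original answer) of that idea; the cache field and the loadReferenceData() helper are placeholders, and processItems matches the defaultListenerMethod configured above:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import javax.annotation.PostConstruct;

public class ItemMDPImpl {

    private final Map<String, Object> cache = new ConcurrentHashMap<>();

    @PostConstruct
    public void preloadCache() {
        // Runs once, after dependency injection and before the listener container
        // starts delivering messages to processItems().
        cache.put("referenceData", loadReferenceData());
    }

    public void processItems(Object items) {
        // message handling can rely on the pre-loaded cache
    }

    private Object loadReferenceData() {
        return new Object(); // placeholder for the real lookup/load
    }
}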
