Transaction handling while using message-driven channel adapter & service activator - JMS

I am working on a POC that does the following:
Uses a message-driven channel adapter to receive a message in a transaction.
Calls a service activator, which uses a handler to insert the message received from the adapter into the DB and also posts a message to an outbound channel.
Now, if the DB insert fails, I want the JMS message returned to the queue so that it can be retried later.
With my configuration below, it doesn't seem to work (i.e., even if there is a failure while inserting into the database, the message is removed from the queue).
Any pointers or sample configuration would be helpful.
<integration:channel id="jmsInChannel">
<integration:queue/>
</integration:channel>
<int-jms:message-driven-channel-adapter id="jmsIn"
transaction-manager="transactionManager"
connection-factory="sConnectionFactory"
destination-name="emsQueue"
acknowledge="client" channel="jmsInChannel"
extract-payload="false"/>
<integration:service-activator input-channel="jmsInChannel"
output-channel="fileNamesChannel" ref="handler" method="process" />
<bean id="handler" class="com.irebalpoc.integration.MessageProcessor">
<property name="jobHashTable" ref="jobsMapping" />
</bean>

Set acknowledge="transacted"; I presume the transactionManager is a JDBC (or JTA) transaction manager.
You also need to remove <queue/> from jmsInChannel so that the database transaction occurs on the same thread.
Spring will synchronize the database transaction with the JMS transaction.
However, read http://www.javaworld.com/javaworld/jw-01-2009/jw-01-spring-transactions.html for the implications.
If you can't make your service idempotent, you may need to look at an XA transaction manager.
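For illustration, a sketch of the adjusted configuration from above - only the acknowledge mode changes and the <queue/> element is dropped (bean names are the ones from your post):
<integration:channel id="jmsInChannel"/>
<int-jms:message-driven-channel-adapter id="jmsIn"
    transaction-manager="transactionManager"
    connection-factory="sConnectionFactory"
    destination-name="emsQueue"
    acknowledge="transacted"
    channel="jmsInChannel"
    extract-payload="false"/>
With this arrangement the service activator runs on the listener container's thread, so a failure in the DB insert rolls the JMS transaction back and the message returns to the queue.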

Related

JMS Connection constantly "session is closed" on our Oracle queues

We are using Spring Boot to run our queue polling program.
The queue is being polled about every 2 mins, and every 2 mins the session is closed, then refreshed.
The connection is a shared connection from the external tomcat, this connection is shared with a dozen other applications.
2018-11-20 11:59:21.263 WARN [serviceRequestAdapter.container-3] org.springframework.jms.listener.DefaultMessageListenerContainer - Setup of JMS message listener invoker failed for destination 'NPP.SERVICE_REQUEST' -
trying to recover. Cause: JMS-131: Session is closed
2018-11-20 11:59:21.265 INFO [serviceRequestAdapter.container-3] org.springframework.jms.listener.DefaultMessageListenerContainer -
Successfully refreshed JMS Connection
2018-11-20 12:01:21.781 WARN [serviceRequestAdapter.container-4] org.springframework.jms.listener.DefaultMessageListenerContainer - Setup of JMS message listener invoker failed for destination 'NPP.SERVICE_REQUEST' -
trying to recover. Cause: JMS-131: Session is closed
2018-11-20 12:01:21.823 INFO [serviceRequestAdapter.container-4] org.springframework.jms.listener.DefaultMessageListenerContainer -
Successfully refreshed JMS Connection
This doesn't actually appear to be affecting functionality, as messages posted get consumed and processed.
Is this actually a problem, and if so, how do I fix it?
If it isn't a problem, how do I hide these messages without reducing my log level to ERROR?
Our jms-context.xml:
<context:annotation-config/>
<tx:annotation-driven/>
<int:message-history/>
<int:channel id="jms-inbound"/>
<int:channel id="voucher-create-inbound"/>
<int:channel id="voucher-update-inbound"/>
<int:channel id="default-inbound"/>
<orcl:aq-jms-connection-factory id="connectionFactory"
connection-factory-type="QUEUE_CONNECTION"
use-local-data-source-transaction="true"/>
<int:recipient-list-router input-channel="jms-inbound" default-output-channel="default-inbound"
id="action-type-router">
<int:recipient channel="voucher-create-inbound"
selector-expression="headers.actionType == 'CREATE VOUCHER'"/>
<int:recipient channel="voucher-update-inbound"
selector-expression="headers.actionType == 'UPDATE VOUCHER'"/>
</int:recipient-list-router>
<int-jms:message-driven-channel-adapter
id="serviceRequestAdapter"
channel="jms-inbound"
cache-level="3"
connection-factory="connectionFactory"
destination-name="${oracle.rqst-q-name}"/>
<int:service-activator id="createVoucherActivator"
input-channel="voucher-create-inbound"
requires-reply="false"
method="onMessage">
<beans:bean class="VoucherRequestConsumer"/>
</int:service-activator>
<int:service-activator id="updateVoucherActivator"
input-channel="voucher-update-inbound"
requires-reply="false"
method="onMessage">
<beans:bean class="VoucherRequestConsumer"/>
</int:service-activator>
<beans:bean id="defaultRequestConsumer"
class="DefaultRequestConsumer"/>
<int:service-activator id="defaultActivator"
input-channel="default-inbound"
requires-reply="false"
ref="defaultRequestConsumer"
method="onMessage">
</int:service-activator>
</beans:beans>
I am a bit puzzled . . . what are you asking? I mean, I don't see a question.
Are you just looking for confirmation that it's OK?
In any event, consider this doc - https://docs.spring.io/spring-data/jdbc/old-docs/2.0.0.M1/reference/html/orcl.streamsaq.html - specifically Section 4.3, "Configuring the Connection Factory to use the same local transaction as your data access code", which talks about the implications for the JMS Session when use-local-data-source-transaction is set to true.
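As for hiding the log entries: if the periodic refresh turns out to be expected behavior with a local data source transaction, one option is a per-logger override rather than a global level change. A minimal sketch, assuming Logback is the logging backend:
<!-- logback.xml: raise the threshold for this one logger only -->
<logger name="org.springframework.jms.listener.DefaultMessageListenerContainer" level="ERROR"/>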

Using DefaultMessageListenerContainer to connect to WebSphere MQ throws "Connection closed" repeatedly

I need to connect from a web application running on a WebSphere AS 6.1 application server to a remote WebSphere MQ on z/OS queue. On WebSphere AS, I configured both a QueueConnectionFactory and a Queue (an object containing a part of the remote queue data), with most of the settings at their default values - I just needed to set the queue name, channel, host, port, and transport type, which is CLIENT. I inject them into the following Spring 3.2 configuration using JNDI lookup:
<jee:jndi-lookup id="destination" jndi-name="MyMQQueue" expected-type="javax.jms.Queue" />
<jee:jndi-lookup id="targetConnectionFactory" jndi-name="MyMQQCF" expected-type="javax.jms.QueueConnectionFactory" />
<bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate"
p:connectionFactory-ref="targetConnectionFactory"
p:defaultDestination-ref="destination" />
<bean id="simpleMessageListener" class="my.own.SimpleMessageListener"/>
<bean id="msgListenerContainer"
class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="connectionFactory" ref="targetConnectionFactory" />
<property name="destination" ref="destination" />
<property name="messageListener" ref="simpleMessageListener" />
<property name="taskExecutor" ref="managedThreadsTaskExecutor" />
<property name="receiveTimeout" value="5000" />
<property name="recoveryInterval" value="5000" />
</bean>
<bean id="managedThreadsTaskExecutor" class="org.springframework.scheduling.commonj.WorkManagerTaskExecutor">
<property name="workManagerName" value="wm/default" />
</bean>
JmsTemplate sends and receives (synchronously) messages correctly. DefaultMessageListenerContainer, an asynchronous message receiver, reads some (previously sent) messages off the MQ queue during WebSphere AS startup, but chokes soon afterwards and begins to repeatedly throw a "connection closed" exception. On each such occasion it notifies me that
DefaultMessag W org.springframework.jms.listener.DefaultMessageListenerContainer handleListenerSetupFailure Setup of JMS message listener invoker failed for destination 'queue://myqueue' - trying to recover. Cause: Connection closed
DefaultMessag I org.springframework.jms.listener.DefaultMessageListenerContainer refreshConnectionUntilSuccessful Successfully refreshed JMS Connection
but stops taking messages off the queue.
Digging a bit into the Spring code, I found that setting the following on DefaultMessageListenerContainer
<property name="cacheLevel" value="0"/>
solves the problem, in the sense that the messages are now being read off the queue every time I send them. However, looking at the TCP traffic to WebSphere MQ, I find that MQCLOSE/MQOPEN commands are sent to it repeatedly, as in:
(screenshot of Wireshark-captured traffic)
which probably means that the connection gets continuously closed and reopened.
Can anybody suggest what might be the cause for caching not working properly, and whether there is perhaps a relatively simple way to modify Spring code (extending DefaultMessageListenerContainer, for example), or perhaps set some property on MQ queue connection factory/queue, to get it working?
EDIT:
Searching further the internet, I have found the following link
http://forum.spring.io/forum/spring-projects/integration/jms/89532-defaultmessagelistenercontainer-cachingconnectionfactory-tomcat-and-websphere-mq
which seems to describe a similar problem occurring on Tomcat. The solution there is to set a certain exceptionListener on DefaultMessageListenerContainer. However, trying to do this on WebSphere throws the exception "javax.jms.IllegalStateException: Method setExceptionListener not permitted". The underlying cause seems to be that the J2EE 1.4 spec forbids calling setExceptionListener on JMS connections.
https://www.ibm.com/developerworks/library/j-getmess/j-getmess-pdf.pdf
It seems that setting
<property name="cacheLevel" value="0"/>
on DefaultMessageListenerContainer is actually the correct solution.
I misled myself by interpreting the MQCLOSE/MQOPEN I saw in the Wireshark-captured TCP traffic as heavyweight connection opening.
First, the newly created connection factory on the WebSphere AS 6.1 administrative console has, by default, a JMS connection pool (max size 10). By debugging the base class of DefaultMessageListenerContainer, AbstractPollingMessageListenerContainer (specifically the method
protected boolean doReceiveAndExecute(
Object invoker, Session session, MessageConsumer consumer, TransactionStatus status)
), one sees that neither the call to create a connection nor the call to create a session from the connection generates TCP traffic; TCP traffic is generated only by creating a consumer (considered a "lightweight operation", if I understand correctly), trying to receive a message from the queue, and closing the consumer.
So it seems that the connection is taken from the respective pool, and the session is also somehow "cached".
So instead of caching by Spring, the caching appears to be done here by the application server.
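As a footnote, the numeric cache level can also be expressed by its symbolic name, which reads more clearly in the configuration; both properties exist on DefaultMessageListenerContainer:
<property name="cacheLevelName" value="CACHE_NONE"/>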

Update database and send JMS message in a single transaction?

I'm using Spring's DataSourceTransactionManager for transaction management and JmsTemplate for sending messages to an ActiveMQ queue. My problem is forcing the following sequence to run in a single transaction:
Step 1: update DB;
Step 2: send message to queue;
Step 3: update DB;
Step 4: send message to queue.
As I understand from the documentation for JmsTemplate, in my case I must set the parameter "sessionTransacted" to true:
Setting this flag to "true" will use a short local JMS transaction when running outside of a managed transaction, and a synchronized local JMS transaction in case of a managed transaction (other than an XA transaction) being present. The latter has the effect of a local JMS transaction being managed alongside the main transaction (which might be a native JDBC transaction), with the JMS transaction committing right after the main transaction.
My jms-configuration file contains only this:
<bean id="connectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL" ref="url"/>
<property name="userName" ref="username"/>
<property name="password" ref="password"/>
</bean>
<bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate">
<property name="connectionFactory" ref="connectionFactory"/>
<property name="defaultDestinationName" value="SomeQueue"/>
<property name="sessionTransacted" value="true"/>
</bean>
After that, I tried to test it in a simple way:
Case A:
@Transactional
public void sendMessageTransactionalErr(Object message, List<String> queueDestinationNames) throws Exception {
sender.sendMessage(message, queueDestinationNames);
throw new Exception("FatalException!");
}
Case B:
@Transactional
public void sendMessageTransactionalOK(Object message, List<String> queueDestinationNames) throws Exception {
sender.sendMessage(message, queueDestinationNames);
}
But in both cases, after the request executes, the message is sent to the queue. Even if the JDBC transaction is rolled back, the JMS transaction commits successfully.
What should I do to make it work as I need it to?
You need to use a transaction manager that handles BOTH your JMS transaction and your database transaction. Your JMS transaction is separate from the database transaction.
I don't recall exactly, but when I had this problem I created an instance of org.springframework.jms.connection.JmsTransactionManager. Create a JTA transaction manager and make sure it is aware of both this and the database transaction manager.
Use @Transactional("jtaTransactionManager") for the annotation. I may have tried Bitronix or JOTM for this use case.
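For illustration, a minimal sketch of wiring a standalone JTA transaction manager into the configuration above, assuming Atomikos is on the classpath and the tx namespace is declared (the Atomikos beans here are one possible choice, not something from the original post; for full XA the broker side also needs an XA-capable factory such as org.apache.activemq.ActiveMQXAConnectionFactory, and the data source must be XA-aware):
<bean id="atomikosTransactionManager" class="com.atomikos.icatch.jta.UserTransactionManager"
      init-method="init" destroy-method="close"/>
<bean id="atomikosUserTransaction" class="com.atomikos.icatch.jta.UserTransactionImp"/>
<bean id="jtaTransactionManager" class="org.springframework.transaction.jta.JtaTransactionManager">
    <property name="transactionManager" ref="atomikosTransactionManager"/>
    <property name="userTransaction" ref="atomikosUserTransaction"/>
</bean>
<tx:annotation-driven transaction-manager="jtaTransactionManager"/>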
See Spring Integration and Transaction with JMS and DB
Reference: http://www.javaworld.com/article/2077963/open-source-tools/distributed-transactions-in-spring--with-and-without-xa.html?page=2

ServiceMix, Apache ActiveMQ, Camel sending "done" for consuming messages

The issue I have is this:
Using ServiceMix and Camel routing, I am sending a JSON message via ActiveMQ to a consumer.
The issue is that this consumer takes some time X to process the message, so it is possible that the consumer is stopped or crashes while consuming it. In that case the message will be half consumed, yet already deleted from the queue because, well, it was delivered.
Is it possible to make the queue not remove messages when they are consumed, but instead wait for some confirmation from the consumer that the processing of the message is done before deleting it?
In a typical import of files from a filesystem, you remove the file or rename it to "done" only once the file is fully processed and the transaction is fully committed. So how, in the ESB world, can we say "keep the message until I finish and tell you to remove it"?
I am using Spring's jms:listener-container and jms:listeners for consuming these messages currently.
Your problem is what JMS transactions solve every day.
There are some notes about it in the ActiveMQ documentation.
You can easily use local transactions in JMS and set up a listener container like this (note the "true" for sessionTransacted):
<bean id="myListenerContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="concurrentConsumers" value="1" />
<property name="connectionFactory" ref="jmsConnectionFactory" />
<property name="destination" ref="myQueue" />
<property name="messageListener" ref="myConsumerBean" />
<property name="sessionTransacted" value="true" />
</bean>
Then you have a transacted session. The message will be rolled back to the queue if the message listener fails to consume it. The transaction will not commit (= the message will not be removed from the queue) unless the "onMessage" method in the message listener bean returns successfully (no exceptions thrown). I guess this is what you want.
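To make the rollback path concrete, a minimal sketch of such a listener bean (class and method names are illustrative, not from the original post):
import javax.jms.Message;
import javax.jms.MessageListener;

public class MyConsumerBean implements MessageListener {
    @Override
    public void onMessage(Message message) {
        // Any RuntimeException thrown here rolls the transacted session back,
        // and the broker redelivers the message later.
        process(message);
    }

    private void process(Message message) {
        // business logic; throw a RuntimeException on failure
    }
}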

Spring Integration: prevent polling when database not available

We are using Spring Integration 2.1 for persisting messages sent by clients into a database.
There is a queue that is filled by a custom adapter. The configured service activator polls this queue and hands the message to a Spring-managed @Repository bean. All errors are captured on an error channel and handled by a service. The configuration works fine so far.
My concern is that if the database is not available, the service activator polls all incoming messages from the queue and puts them into the error channel. Is there a way to prevent the service activator from polling messages when the database is obviously not available, for example by sending a test query?
My configuration:
<int:channel id="inChannel">
<int:queue />
</int:channel>
<bean id="service" class="some.service.Service" />
<int:service-activator ref="service"
method="write" input-channel="inChannel">
<int:poller fixed-rate="100" task-executor="srvTaskExecutor"
receive-timeout="90" error-channel="errChannel" />
</int:service-activator>
<task:executor id="srvTaskExecutor" pool-size="2-10"
queue-capacity="0" rejection-policy="DISCARD" />
<int:channel id="errChannel" />
<int:service-activator input-channel="errChannel"
ref="errorService" method="write"/>
Regards.
If you give the polling service-activator an "id", you can refer to that instance and call start() or stop() on it based on the DB being available or not. Most likely you'd want to set auto-startup="false" on that service-activator as well.
Additionally, you can even define a "control-bus" element and then send messages like "@myActivator.start()" and "@myActivator.stop()" to that control bus' input-channel.
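For illustration, a sketch of the control-bus variant (the channel name is made up; "myActivator" must match the id you gave the service-activator):
<int:channel id="controlChannel"/>
<int:control-bus input-channel="controlChannel"/>
Sending a message whose payload is the SpEL expression "@myActivator.start()" (or "@myActivator.stop()") to controlChannel then starts or stops the polling endpoint at runtime.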
Hope that helps,
Mark
