WebSphere, Camel JMS, Spring, taskExecutor, hanging threads

I'm trying to integrate Camel with WebSphere. It works fine, except for one thing.
The scenario looks like:
JMS (WMQ) -> routing/transformation -> BEAN (which does a JPA (OpenJPA 1.2/DB2) commit).
To plug into the WAS transaction manager and managed threads, I'm inserting the work manager as the taskExecutor in Camel:
<!-- Selected parts of the Spring config -->
<tx:jta-transaction-manager/>

<bean id="wasTaskExecutor"
      class="org.springframework.scheduling.commonj.WorkManagerTaskExecutor">
    <property name="workManagerName" value="wm/default" />
</bean>

<bean id="camelTransactionRequired" class="org.apache.camel.spring.spi.SpringTransactionPolicy" depends-on="transactionManager">
    <property name="transactionManager" ref="transactionManager"/>
    <property name="propagationBehaviorName" value="PROPAGATION_REQUIRED"/>
</bean>

<bean id="jms" class="org.apache.camel.component.jms.JmsComponent">
    <property name="connectionFactory" ref="connectionFactory"/>
    <property name="taskExecutor" ref="wasTaskExecutor"/>
    <property name="transacted" value="true"/>
    <property name="transactionManager" ref="transactionManager"/>
</bean>
Then a route, something like:

from("jms:queue:MY.QUEUE")
    .transacted("camelTransactionRequired")
    .log(..)
    .bean(storeJPA);
This wasTaskExecutor bean is also used by a standalone Spring message listener (same JMS provider, WMQ) elsewhere in the application, with the expected behaviour.
When deployed/started, ONE message can be processed this way (first log line below); then threads start to hang.
[5/12/12 22:14:55:890 CEST] 00000055 SystemOut O INFO routeFromBackend - Message pulled from queue to message box
[5/12/12 22:27:00:638 CEST] 00000031 ThreadMonitor W WSVR0605W: Thread "Default : 1" (0000001e) has been active for 739306 milliseconds and may be hung. There is/are 1 thread(s) in total in the server that may be hung.
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:196)
at com.ibm.ws.util.BoundedBuffer.waitPut_(BoundedBuffer.java:214)
at com.ibm.ws.util.BoundedBuffer.put(BoundedBuffer.java:324)
at com.ibm.ws.util.ThreadPool.execute(ThreadPool.java:1296)
at com.ibm.ws.util.ThreadPool.execute(ThreadPool.java:1100)
at com.ibm.ws.asynchbeans.WorkItemImpl$PoolExecuteProxy.run(WorkItemImpl.java:198)
at com.ibm.ws.asynchbeans.WorkItemImpl.executeOnPool(WorkItemImpl.java:219)
at com.ibm.ws.asynchbeans.WorkManagerImpl.queueWorkItemForDispatch(WorkManagerImpl.java:433)
at com.ibm.ws.asynchbeans.WorkManagerImpl.schedule(WorkManagerImpl.java:1074)
at com.ibm.ws.asynchbeans.WorkManagerImpl.schedule(WorkManagerImpl.java:846)
at org.springframework.scheduling.commonj.WorkManagerTaskExecutor.execute(WorkManagerTaskExecutor.java:154)
at org.springframework.jms.listener.DefaultMessageListenerContainer.doRescheduleTask(DefaultMessageListenerContainer.java:669)
at org.springframework.jms.listener.AbstractJmsListeningContainer.resumePausedTasks(AbstractJmsListeningContainer.java:536)
at org.springframework.jms.listener.AbstractJmsListeningContainer.doStart(AbstractJmsListeningContainer.java:285)
at org.springframework.jms.listener.AbstractJmsListeningContainer.start(AbstractJmsListeningContainer.java:263)
at org.springframework.jms.listener.DefaultMessageListenerContainer.start(DefaultMessageListenerContainer.java:555)
at org.apache.camel.component.jms.JmsConsumer.startListenerContainer(JmsConsumer.java:84)
Has anyone seen this?

The thread is "hung" waiting to submit work to a full queue, and the work manager is configured to block rather than throw an error when the queue is full. To resolve the "hang", either increase the number of threads in the work manager thread pool, or change the queue full action to be "error" rather than "wait". Alternatively, investigate if the work items being submitted to the work manager are taking too long for some reason.

Related

Huge latency observed while committing transacted persistent AMQ messages

I have the following AMQ consumer configuration file, which tries to consume 'persistent' messages from the queue. The messages are also 'transacted', as I need to roll back if a message can't be processed in the expected way.
I see a problem with this configuration:
whenever the consumer calls session.commit() after consuming a message, the commit call takes ~8 seconds to return. I believe this is not expected.
Can someone point out whether there are any issues with the config below?
<bean id="amqConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL" value="tcp://localhost:61616"/>
</bean>
<bean id="simpleMessageListener" class="listener.StompSpringListener" scope="prototype" />
<bean id="destination" class="org.apache.activemq.command.ActiveMQQueue">
<property name="physicalName" value="JOBS.notifications"/>
</bean>
<bean id="poolMessageListener" class="org.springframework.aop.target.CommonsPoolTargetSource">
<property name="targetBeanName" value="simpleMessageListener"/>
<property name="maxSize" value="200"/>
</bean>
<bean id="messageListenerBean" class="org.springframework.aop.framework.ProxyFactoryBean">
<property name="targetSource" ref="poolMessageListener"/>
</bean>
<jms:listener-container
container-type="default"
connection-factory="amqConnectionFactory"
acknowledge="transacted"
concurrency="200-200"
cache="consumer"
prefetch="10">
<jms:listener destination="JOBS.notifications" ref="messageListenerBean" method="onMessage"/>
</jms:listener-container>
Take a look at the ActiveMQ Spring support page. What I can see immediately is that you should be using a PooledConnectionFactory instead of a plain ActiveMQConnectionFactory. This ensures that only a single TCP connection is set up to the broker from your code, shared among the polling threads.
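A sketch of that change, assuming the activemq-pool module is on the classpath; the broker URL is the one from the question:

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.pool.PooledConnectionFactory;

public class AmqConnectionSetup {

    // Wrap the plain factory in a PooledConnectionFactory so the 200 listener
    // threads share pooled connections/sessions instead of each opening its
    // own TCP connection to the broker.
    public static PooledConnectionFactory pooledFactory() {
        ActiveMQConnectionFactory target = new ActiveMQConnectionFactory("tcp://localhost:61616");
        PooledConnectionFactory pooled = new PooledConnectionFactory();
        pooled.setConnectionFactory(target);
        pooled.setMaxConnections(1); // a single shared broker connection
        return pooled;
    }
}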

Setup of JMS message listener invoker failed for destination 'queue:XYZ': No JTA UserTransaction available

We are upgrading our project from Spring 2.5.6 to 3.2.3, and Hibernate/JPA to 4.2.3.
In spring-ds.xml, for transaction management, we replaced the original config below:
<bean id="transactionManager"
class="org.springframework.transaction.jta.WebSphereUowTransactionManager">
<!-- This property is specifically required for JMS -->
<property name="transactionManager" ref="baseTransactionManager" />
</bean>
<bean id="baseTransactionManager"
class="org.springframework.transaction.jta.WebSphereTransactionManagerFactoryBean" />
<tx:annotation-driven transaction-manager="transactionManager" />
with the config below, since the WebSphereTransactionManagerFactoryBean class is superseded in the latest WAS:
<bean id="transactionManager"
class="org.springframework.transaction.jta.WebSphereUowTransactionManager" />
The JMS message listener config looks like this:
<bean id="xxtMsgListenerContainer"
class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="connectionFactory" ref="jmsxxConnectionFactory" />
<property name="destination" ref="jmsxxQueue" />
<property name="messageListener" ref="xxMessageListener" />
<property name="transactionManager" ref="transactionManager" />
<property name="taskExecutor" ref="taskExecutor" />
</bean>
With the above config we are getting the following error in the WAS logs:
Setup of JMS message listener invoker failed for destination
'queue://xxQueue?busName=zzBus' - trying to recover. Cause: No JTA UserTransaction available - programmatic PlatformTransactionManager.getTransaction usage not supported
Is there any other config/property required to upgrade to Spring 3.2.3? Or, to configure WebSphereUowTransactionManager, do we need to set any property?
If you are using Hibernate in your application, the actual Hibernate version used can be the root cause of the problem.
We spent half a day debugging this (on a WebSphere box) and then found that it was indeed the Hibernate version upgrade (from 4.2.7.Final to 4.2.12.Final) that caused the issue, not the JMS configuration.
UPDATE: It seems that Hibernate pulls in the transaction API jboss-transaction-api_1.1_spec, which is not compatible with the one present on WebSphere. Simply excluding this dependency from Hibernate resolved the issue.
On the DefaultMessageListenerContainer, try setting the sessionTransacted property to true. This should enable transaction support with WebSphere.
The error happens because you are using a JTA transaction manager while your connection factory is not XA-capable. Essentially, the injected ConnectionFactory implementation does not implement the JTA interfaces, so the transaction manager is unable to enlist message consumption in a new UserTransaction.
To fix this, use an XA-capable ConnectionFactory, or a non-JTA transaction manager such as Spring's JmsTransactionManager.
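A minimal sketch of the second option, reusing the bean names from the question; JmsTransactionManager gives you transacted sessions without any JTA involvement:

import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.MessageListener;

import org.springframework.jms.connection.JmsTransactionManager;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class LocalTxListenerSetup {

    // Drive the listener container with a local JMS transaction manager
    // instead of the JTA-based WebSphereUowTransactionManager.
    public static DefaultMessageListenerContainer container(ConnectionFactory jmsxxConnectionFactory,
                                                            Destination jmsxxQueue,
                                                            MessageListener xxMessageListener) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(jmsxxConnectionFactory);
        container.setDestination(jmsxxQueue);
        container.setMessageListener(xxMessageListener);
        container.setTransactionManager(new JmsTransactionManager(jmsxxConnectionFactory));
        // Alternatively, drop the transactionManager entirely and just call
        // container.setSessionTransacted(true) for plain local transactions.
        return container;
    }
}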

WebSphere MQ and Atomikos - Messages Lost on process termination

My app (a Spring message listener) reads from a queue and writes to the database in a single transaction. I use Atomikos to provide the XA transaction behaviour. When the app is abruptly terminated, for example with kill commands, I see that messages are lost. Is there any specific configuration I need to use? Should the queues be persistent? Currently the queues are non-persistent. My MQ version is 7.1.
The Spring config for the listener container looks like:
<bean id="listenerContainer" class="com.miax.test.TestListenerMDPImpl" autowire="byName">
<property name="connectionFactory" ref="mqConnFactory" />
<property name="destinationName" value="QUEUE" />
<property name="messageListener" ref="listenerAdapter" />
<property name="transactionManager" ref="jtaTransactionManager" />
<property name="sessionTransacted" value="true" />
<property name="concurrentConsumers" value="1" />
<!-- receive time out, should be less than tranaction time out -->
<property name="receiveTimeout" value="3000" />
<!-- retry connection every 1 seconds -->
<property name="recoveryInterval" value="1000" />
<property name="autoStartup" value="true" />
<property name="sessionAcknowledgeMode" value="0" />
</bean>
Any other info will be given as needed.
Thanks.
First, the client you are using must be the Extended Transactional Client if it was downloaded prior to May of this year. Any of the v7.0.1 and higher clients as of May 2012 have the XA capability built in. If in doubt, download and install a current release of the WMQ client.
Second, the XA transaction manager must have its own connection to the queue manager, independent of the application, so that it can connect and reconcile transactions if the application fails to restart. To do this, the transaction manager must be configured with an xa_open string and a switch file, as described in the Infocenter topic "Configuring XA-compliant transaction managers".
For what it's worth, there is no such thing as a persistent queue in WMQ. It is the messages themselves that are persistent (or not); for more on that, please see my blog post on the topic. This is an important distinction, because when people assume that the queue itself is persistent, they tend to devise solutions that produce unexpected results. Please read the blog post!
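Since persistence is a per-message property, the sending side has to request it. A sketch using Spring's JmsTemplate (the explicit QoS flag is required for the delivery mode to take effect; names here are illustrative):

import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;

import org.springframework.jms.core.JmsTemplate;

public class PersistentSenderSetup {

    // Mark outgoing messages persistent so they survive a queue manager or
    // process crash; non-persistent messages are what gets lost on a kill.
    public static JmsTemplate persistentTemplate(ConnectionFactory connectionFactory) {
        JmsTemplate template = new JmsTemplate(connectionFactory);
        template.setExplicitQosEnabled(true); // without this, deliveryMode is ignored
        template.setDeliveryMode(DeliveryMode.PERSISTENT);
        return template;
    }
}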

Handle activemq-spring connection errors

I have configured (with Spring) my application to listen to a JMS queue with ActiveMQ, and everything works fine.
My ActiveMQ server is installed on another machine, and it can sometimes go offline; I would like to handle the connection error. Is that possible?
Here is my Spring configuration:
<amq:connectionFactory id="jmsFactory" brokerURL="tcp://xxx.xxx.xxx.xxx:61616" />

<bean id="messageConverter" class="com.unic.thesting.main.jms.message.TheStingMessageConverter" scope="tenant"/>

<jms:listener-container concurrency="10" connection-factory="thestingJmsFactory" destination-type="queue" message-converter="thestingMessageConverter">
    <jms:listener destination="in" ref="orderStatusConsumer" method="consume"/>
</jms:listener-container>

<bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate" scope="tenant">
    <property name="messageConverter" ref="messageConverter" />
    <property name="connectionFactory">
        <bean class="org.springframework.jms.connection.SingleConnectionFactory" scope="tenant">
            <property name="targetConnectionFactory">
                <ref local="jmsFactory" />
            </property>
        </bean>
    </property>
</bean>
The DefaultMessageListenerContainer, which gets registered when you use <jms:listener-container>, handles recovering the connection to the JMS provider if it gets dropped for any reason (by default it retries every 5 seconds until the connection is restored), so you don't have to do anything on the listener side.
On the sending side with JmsTemplate, you would receive a runtime org.springframework.jms.JmsException if there is any issue sending a message. You should be able to catch it for any custom processing.
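For example (a sketch; the queue name "in" is taken from the listener config above, the class name is illustrative):

import org.springframework.jms.JmsException;
import org.springframework.jms.core.JmsTemplate;

public class OrderStatusSender {

    private final JmsTemplate jmsTemplate;

    public OrderStatusSender(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    // JmsException is an unchecked hierarchy wrapping the provider's
    // JMSException, so a broker outage surfaces here at send time.
    public void send(String payload) {
        try {
            jmsTemplate.convertAndSend("in", payload);
        } catch (JmsException e) {
            // custom processing: log, buffer for retry, alert, etc.
        }
    }
}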

Spring JMS not sending to queue in transaction

I am trying to use Spring to send a message to a queue. It works fine when I don't try to enable transaction handling. However, when I add transaction handling, the message doesn't seem to be sent to the appropriate queue. All I add is an @Transactional annotation on the method and the following to the application context:
<tx:annotation-driven/>

<bean id="transactionManager" class="org.springframework.jms.connection.JmsTransactionManager">
    <property name="connectionFactory" ref="connectionFactory"/>
</bean>
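For reference, a sketch of the kind of method being described (class and queue names are illustrative, not from the question). With JmsTransactionManager, the send is buffered in the transacted session and only reaches the queue when the surrounding transaction commits, so a transaction that never commits looks like the message was never sent:

import org.springframework.jms.core.JmsTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class QueueSender {

    private final JmsTemplate jmsTemplate;

    public QueueSender(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    // The send participates in the JmsTransactionManager transaction;
    // the message is delivered only when this method commits.
    @Transactional
    public void sendMessage(String payload) {
        jmsTemplate.convertAndSend("my.queue", payload);
    }
}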
