My app (a Spring message listener) reads from a queue and writes to the database in a single transaction. I use Atomikos to provide the XA transaction behaviour. When the app is abruptly terminated, for example with a kill command, I see that messages are lost. Is there any specific configuration I need to use? Should the queues be persistent? Currently the queues are non-persistent. My MQ version is v7.1.
Spring config for listener container looks like:
<bean id="listenerContainer" class="com.miax.test.TestListenerMDPImpl" autowire="byName">
<property name="connectionFactory" ref="mqConnFactory" />
<property name="destinationName" value="QUEUE" />
<property name="messageListener" ref="listenerAdapter" />
<property name="transactionManager" ref="jtaTransactionManager" />
<property name="sessionTransacted" value="true" />
<property name="concurrentConsumers" value="1" />
<!-- receive timeout; should be less than the transaction timeout -->
<property name="receiveTimeout" value="3000" />
<!-- retry connection every 1 second -->
<property name="recoveryInterval" value="1000" />
<property name="autoStartup" value="true" />
<property name="sessionAcknowledgeMode" value="0" />
</bean>
Any other info will be given as needed.
Thanks.
The client you are using must be the Extended Transactional Client if it was downloaded prior to May 2012. Any of the v7.0.1 and higher clients as of May 2012 have the XA capability built in. If in doubt, download a current release of the WMQ client and install it.
Second, the XA transaction manager must have its own connection to the queue manager, independent of the application, so that it can connect and reconcile in-doubt transactions if the application fails to restart. To do this, the transaction manager must be configured with an xa_open string and a switch file, as described in the Infocenter topic "Configuring XA-compliant transaction managers".
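For illustration only, a client-mode xa_open string generally follows this form (the channel, host, port, and queue manager name below are placeholders, not values from your setup):
channel=MY.SVRCONN,trptype=tcp,conname=mqhost(1414),qmname=QM1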
For what it's worth, there is no such thing as a persistent queue in WMQ. It is the messages themselves that are persistent (or not). For more on that, please see my blog post on the topic. This is a rather important topic, because when people assume that the queue itself is persistent, they tend to devise solutions that produce unexpected results. Please read the blog post!
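On the sending side, message persistence is set on the producer (or per send call). A minimal JMS sketch, assuming an existing Session; the queue name is a placeholder:
import javax.jms.DeliveryMode;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

public class PersistentSender {
    // Sends one persistent message on an existing session.
    static void sendPersistent(Session session, String body) throws JMSException {
        Queue queue = session.createQueue("QUEUE"); // placeholder queue name
        MessageProducer producer = session.createProducer(queue);
        // PERSISTENT messages survive a queue manager restart; NON_PERSISTENT ones do not.
        producer.setDeliveryMode(DeliveryMode.PERSISTENT);
        try {
            producer.send(session.createTextMessage(body));
        } finally {
            producer.close();
        }
    }
}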
In order to process JMS messages in parallel, I have configured the JmsComponent and connectionFactory as below.
After reading some posts and the official tutorial, it seems that the configuration below should work for ActiveMQ. However, my testing shows that it doesn't work on Solace. Can someone give me a hint on this? Thanks.
// Route Definition - Camel Java DSL
from(INBOUND_ENDPOINT).setExchangePattern(ExchangePattern.InOnly).threads(5).bean(ThroughputMeasurer.class);
<!-- JMS Config -->
<bean id="jms" class="org.apache.camel.component.jms.JmsComponent">
<property name="connectionFactory" ref="cachedConnectionFactory" />
<property name="acknowledgementModeName" value="AUTO_ACKNOWLEDGE" />
<property name="deliveryPersistent" value="false" />
<property name="asyncConsumer" value="true" />
<property name="concurrentConsumers" value="5" />
</bean>
<!-- jndiTemplate is omitted here -->
<bean id="connectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiTemplate" ref="jndiTemplate" />
<property name="jndiName" value="ceConnectionFactory" />
</bean>
<bean id="cachedConnectionFactory"
class="org.springframework.jms.connection.CachingConnectionFactory">
<property name="targetConnectionFactory" ref="connectionFactory" />
<property name="sessionCacheSize" value="30" />
</bean>
I believe the problem here is that your consumer is bound to an exclusive queue, where only one consumer is allowed to process messages. Binding to a non-exclusive queue should solve the problem.
Exclusive queues only allow the first consumer to consume from the queue.
All other consumers that are bound to the queue will not receive data. In the event that the first consumer disconnects, the next oldest consumer will begin receiving data. The other consumers can be thought of as being "standby" consumers that will take over the moment the first consumer disconnects.
Non-exclusive queues are used for load balancing. Messages that are spooled to the queue will be distributed to all consumers in a round-robin manner.
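To see the difference in practice, here is a hedged plain-JMS sketch with two competing consumers on one queue (the connection factory and queue are assumed to exist already; on Solace the exclusive/non-exclusive access type is provisioned on the broker, not in this code). On a non-exclusive queue both consumers print messages roughly round-robin; on an exclusive queue only the first one does.
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageListener;
import javax.jms.Session;

public class CompetingConsumersDemo {
    static void start(ConnectionFactory factory, String queueName) throws JMSException {
        Connection connection = factory.createConnection();
        // One session per consumer; JMS sessions are single-threaded.
        for (int i = 1; i <= 2; i++) {
            final int id = i;
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(session.createQueue(queueName));
            consumer.setMessageListener(new MessageListener() {
                public void onMessage(Message message) {
                    System.out.println("consumer-" + id + " received " + message);
                }
            });
        }
        connection.start(); // delivery only begins after start()
    }
}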
Check:
Are there any exception messages in the log?
Is the jndiName correct? Perhaps it should be jms/ceConnectionFactory?
Is the URI INBOUND_ENDPOINT correct?
...
Try to set up ActiveMQ first, then migrate the configuration to Solace.
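For that baseline, a throwaway embedded ActiveMQ broker is enough for a smoke test. A minimal sketch, assuming the Camel and ActiveMQ jars are on the classpath; the endpoint and class names here are illustrative, not your real config:
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.jms.JmsComponent;
import org.apache.camel.impl.DefaultCamelContext;

public class EmbeddedBrokerSmokeTest {
    public static void main(String[] args) throws Exception {
        // vm:// starts a throwaway, non-persistent broker inside this JVM.
        ActiveMQConnectionFactory cf =
                new ActiveMQConnectionFactory("vm://localhost?broker.persistent=false");
        JmsComponent jms = new JmsComponent();
        jms.setConnectionFactory(cf);
        jms.setConcurrentConsumers(5);

        DefaultCamelContext context = new DefaultCamelContext();
        context.addComponent("jms", jms);
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("jms:queue:inbound").threads(5).to("log:throughput");
            }
        });
        context.start();
        Thread.sleep(10000); // let the route run briefly
        context.stop();
    }
}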
I have a use case where I want to create multiple listeners (6) in an application. I want to subscribe to multiple destinations (6 topics). All the subscriptions are durable. I am using a separate DefaultMessageListenerContainer (DMLC) for each listener, with a different client ID for each, but I am confused about how the connection factory should be used.
Should I use a single ActiveMQ PooledConnectionFactory with maxConnections set to 6, or a different pooled connection factory for each listener?
Is there any harm in using a PooledConnectionFactory with maxConnections for durable subscribers?
Source code:
Connection Factory:
<bean id="jmsFactory" class="org.apache.activemq.pool.PooledConnectionFactory"
destroy-method="stop">
<property name="connectionFactory">
<bean class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL">
<value>${jms.broker.url}</value>
</property>
</bean>
</property>
<property name="maxConnections" value="6" />
and my listeners use it as follows (I have 6 listeners similar to this one, each with a different destination and client ID):
<bean id="listenerContainer"
class="org.springframework.jms.listener.DefaultMessageListenerContainer"
<property name="connectionFactory" ref="jmsFactory" />
<property name="destination" ref="topic_pnlCompleteTopic" />
<property name="durableSubscriptionName" value="FAGCompletion" />
<property name="pubSubDomain" value="true" />
<property name="subscriptionDurable" value="${jms.fagsListener.durable}" />
<property name="clientId" value="${jms.fagsListener.clientId}" />
<property name="messageListener" ref="pnlMessageListener" />
<property name="messageSelector" value="JMSType = 'FAG Completion'" />
</bean>
That sounds fine to me, as long as you're not using this connection factory for anything else. There is no reason to limit the number of connections to 6; you can set a higher number if you want, and it will only be used when necessary. The ConnectionFactory is typically something you share for your whole app; think of it as a kind of DataSource for JMS access.
Your listeners will typically use only one session each, since you are using topics. There is no reason to put a limit on your pool, or to use multiple pools. You typically use one pooled connection factory for your application, unless you see a real reason to limit it or split it up into different pools.
I am currently building a Mule ESB server application which uses a request-response JMS connector. Since it is used in a highly concurrent environment, we enabled the Spring JMS cache in our MQ config.
<spring:beans>
<mule>
<!-- MQ Factory -->
<spring:bean id="testMsgMqFactoryBean1" name="testMsgMqFactory1" class="com.ibm.mq.jms.MQQueueConnectionFactory">
<spring:property name="channel" value="${test.msg.mq.channel.1}" />
<spring:property name="queueManager" value="${test.msg.mq.queueManager.1}" />
<spring:property name="hostName" value="${test.msg.mq.hostName.1}" />
<spring:property name="port" value="${test.msg.mq.port.1}" />
<spring:property name="transportType" value="${mq.jms.transportType}" />
</spring:bean>
<spring:bean id="testMsgMqFactoryBeanCache1" class="org.springframework.jms.connection.CachingConnectionFactory">
<spring:property name="targetConnectionFactory" ref="testMsgMqFactoryBean1" />
<spring:property name="sessionCacheSize" value="${test.threading.profile.maxThreadsActive}" />
<spring:property name="cacheConsumers" value="false" />
<!-- <spring:property name="cacheProducers" value="false" /> -->
</spring:bean>
<!-- MQ Connector 1 -->
<jms:custom-connector name="testMsgMqConnector.1" class="org.mule.transport.jms.websphere.WebsphereJmsConnector" doc:name="Custom JMS">
<spring:property name="specification" value="1.1" />
<spring:property name="connectionFactory" ref="testMsgMqFactoryBeanCache1" />
<spring:property name="persistentDelivery" value="false" />
<spring:property name="disableTemporaryReplyToDestinations" value="true" />
<spring:property name="numberOfConsumers" value="${test.threading.profile.maxThreadsActive}" />
<spring:property name="maxRedelivery" value="-1" />
<receiver-threading-profile maxThreadsActive="${test.threading.profile.maxThreadsActive}" maxBufferSize="${test.threading.profile.maxBufferSize}" maxThreadsIdle="${test.threading.profile.maxThreadsIdle}"/>
<reconnect frequency="${mq.jms.reconnection.frequency}" count="${mq.jms.reconnection.count}" blocking="false" />
</jms:custom-connector>
<!-- msgworks inbound and outbound MQ setup -->
<!-- Rewards -->
<jms:endpoint exchange-pattern="request-response" queue="${test.msg.mq.inbound.account.queue}" name="testQueue1" connector-ref="testMsgMqConnector.1" doc:name="JMS" />
</mule>
</spring:beans>
This configuration runs fine when the client uses a static replyTo queue. However, we have some customers who are using dynamic/temporary replyTo queue. Since org.springframework.jms.connection.CachingConnectionFactory caches producers, for every temporary replyTo queue, a producer object is cached and never closed. After processing hundreds of requests, the application started throwing exceptions:
********************************************************************************
Message : Failed to create and dispatch response event over Jms destination "queue://QMGR1/TESTret5a975v53AF980F2006BE02?targetClient=1". Failed to route event via endpoint: null. Message payload is of type: JMSTextMessage
Code : MULE_ERROR-42999
--------------------------------------------------------------------------------
Exception stack is:
1. MQJE001: Completion Code 2, Reason 2017 (com.ibm.mq.MQException)
com.ibm.mq.MQQueueManager:2808 (null)
2. MQJMS2008: failed to open MQ queue TESTret5a975v53AF980F2006BE02(JMS Code: MQJMS2008) (javax.jms.ResourceAllocationException)
com.ibm.mq.jms.MQQueueServices:398 (http://java.sun.com/j2ee/sdk_1.3/techdocs/api/javax/jms/ResourceAllocationException.html)
3. Failed to create and dispatch response event over Jms destination "queue://QMGR1/TESTret5a975v53AF980F2006BE02?targetClient=1". Failed to route event via endpoint: null. Message payload is of type: JMSTextMessage (org.mule.api.transport.DispatchException)
org.mule.transport.jms.JmsReplyToHandler:173 (http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/api/transport/DispatchException.html)
After investigating the MQ error code (MQJE001: Completion Code 2, Reason 2017), I found that the cause of this error is that we never closed producers, and the producers exhausted MQ handles on the queue manager. The quick and easy fix is to uncomment the line in the Spring JMS cache config so that producers are closed every time:
<spring:bean id="testMsgMqFactoryBeanCache1" class="org.springframework.jms.connection.CachingConnectionFactory">
<spring:property name="targetConnectionFactory" ref="testMsgMqFactoryBean1" />
<spring:property name="sessionCacheSize" value="${test.threading.profile.maxThreadsActive}" />
<spring:property name="cacheConsumers" value="false" />
<spring:property name="cacheProducers" value="false" />
</spring:bean>
Now I am not seeing the MQ issue, but I have run into a performance issue: because no producers are cached, a new producer is created every single time.
My question is: how should I deal with this scenario? Since the client won't change the way they receive reply messages from temporary queues, how can we avoid exhausting MQ handles without impacting performance?
Thank you very much
- Lei
This is a very interesting use case. However, I'm afraid there is nothing out of the box to fix this. There are the more obvious solutions: disable the caching, or extend the Spring cache provider.
Temporary queues and performance are definitely not two things that you can have at the same time. I would suggest another possibility:
If you are using the temporary queues only to have the responses come back to a given consumer (probably in addition to having old messages discarded on reconnection), you could use a well-known queue for the replies, combining a header carrying the hostname that should receive each message, a selector that differs on each consumer node, and a TTL on the reply messages so they disappear after a while.
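A rough plain-JMS sketch of that pattern follows; the shared reply queue, the "targetHost" header name, and the 30-second TTL are illustrative assumptions, not fixed API:
import java.net.InetAddress;
import javax.jms.Destination;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

public class SharedReplyQueuePattern {

    // Requestor: route replies to one well-known queue, tagged with this host's name.
    static void sendRequest(Session session, Queue requestQueue, Queue sharedReplyQueue,
                            String body) throws Exception {
        TextMessage request = session.createTextMessage(body);
        request.setJMSReplyTo(sharedReplyQueue);
        request.setStringProperty("targetHost",
                InetAddress.getLocalHost().getHostName());
        MessageProducer producer = session.createProducer(requestQueue);
        producer.send(request);
        producer.close();
    }

    // Replier: copy the routing header onto the reply and give it a TTL,
    // so unconsumed replies expire instead of piling up.
    static void sendReply(Session session, Message request, String body) throws Exception {
        Destination replyTo = request.getJMSReplyTo();
        TextMessage reply = session.createTextMessage(body);
        reply.setStringProperty("targetHost", request.getStringProperty("targetHost"));
        MessageProducer producer = session.createProducer(replyTo);
        producer.setTimeToLive(30000); // 30s; tune to your reply window
        producer.send(reply);
        producer.close();
    }

    // Each consumer node only sees replies addressed to it.
    static MessageConsumer replyConsumer(Session session, Queue sharedReplyQueue)
            throws Exception {
        String selector = "targetHost = '"
                + InetAddress.getLocalHost().getHostName() + "'";
        return session.createConsumer(sharedReplyQueue, selector);
    }
}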
I am trying to configure a JMS connection pool in Spring/Camel for WebSphere MQ. I am seeing a ClassCastException when I try to use CachingConnectionFactory from Spring. I could not find a pool from WMQ; has anybody done connection pooling with WMQ? I didn't find any examples, though there are lots of examples for ActiveMQ.
Here is what I have so far, which is producing the ClassCastException:
<bean id="inCachingConnectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">
<property name="targetConnectionFactory" ref="inboundMqConnectionFactory1" />
<property name="sessionCacheSize" value="5" />
</bean>
<bean id="inboundWebsphereMq1" class="org.apache.camel.component.jms.JmsComponent">
<property name="connectionFactory" ref="inCachingConnectionFactory" />
<property name="destinationResolver" ref="jmsDestinationResolver" />
<property name="transacted" value="true" />
<property name="transactionManager" ref="txManager1" />
</bean>
<bean id="inboundMqConnectionFactory1" class="com.ibm.mq.jms.MQQueueConnectionFactory">
<property name="hostName" value="${isi.inbound.queue.host2}" />
<property name="port" value="${isi.inbound.queue.port}" />
<property name="queueManager" value="${isi.inbound.queue.queuemanager2}" />
<property name="channel" value="${isi.inbound.queue.channel2}" />
<property name="transportType" value="${isi.queue.transportType}" />
</bean>
The exception I see is:
trying to recover. Cause: com.sun.proxy.$Proxy37 cannot be cast to com.ibm.mq.jms.MQQueueSession
In general:
do not use QueueConnectionFactory or TopicConnectionFactory, as ConnectionFactory (JMS 1.1) is a replacement for both;
each ConnectionFactory from the v7 WMQ JMS client jars provides its own caching logic, so in general you don't need CachingConnectionFactory.
Now try it this way:
<bean id="mqConnectionFactory" class="com.ibm.mq.jms.MQConnectionFactory"
p:queueManager="${QM_NAME}"
p:hostName="${QM_HOST_NAME}"
p:port="${QM_HOST_PORT}"
p:channel="${QM_CHANNEL}"
p:clientID="${QM_CLIENT_ID}">
<property name="transportType">
<util:constant static-field="com.ibm.msg.client.wmq.WMQConstants.WMQ_CM_CLIENT" />
</property>
</bean>
<bean id="userConnectionFactory" class="org.springframework.jms.connection.UserCredentialsConnectionFactoryAdapter"
p:targetConnectionFactory-ref="mqConnectionFactory"
p:username="${QM_USERNAME}"
p:password="${QM_PASSWORD}" />
<!-- this will work -->
<bean id="connectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory"
p:targetConnectionFactory-ref="userConnectionFactory"
p:cacheConsumers="true"
p:reconnectOnException="true" />
Of course you can cache sessions instead of consumers if you want it that way. In my experience, WMQ session caching is a measurable performance improvement, but only if you are limited by CPU power on the WMQ machine or by actual message throughput; both situations are rare in the majority of real-world applications. Caching consumers avoids excessive MQOPEN calls, which are expensive operations on WMQ, so it helps too.
My rule of thumb is that the performance benefit of consumer + session caching is about half that of connection caching, and it is usually not worth pursuing in your everyday JEE stack unless you are hardware-limited.
Since WMQ v7, asynchronous consumers are really, really fast, with practically no CPU overhead compared to a Spring message listener container, and they are the preferred way of consuming messages if you are hardware-limited. Most days I still use Spring, as I prefer its easy-going nature.
Hope it helps.
I am using Spring 3.0 - DefaultMessageListenerContainer to connect to WebSphere MQ 6. There are some messages already present on the queue. When I run my test, the listener implementing SessionAwareMessageListener is started, but onMessage() does not get invoked. So the problem is that the messages already in the queue are not read.
As per the docs, autoStartup is true by default (and I have not changed this). As per my understanding, on startup the listener should read the queue for any existing messages and onMessage() should get invoked. Please let me know if this understanding is wrong.
Here is the snippet from the config file:
<bean id="jmsContainer"
class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="connectionFactory" ref="jmsQueueConnectionFactory" />
<property name="destinationName">
<value>${queue}</value>
</property>
<property name="messageListener" ref="exampleMessageListener" />
<property name="concurrentConsumers" value="1" />
<property name="maxConcurrentConsumers" value="1" />
<property name="idleTaskExecutionLimit" value="4" />
<property name="maxMessagesPerTask" value="4" />
<property name="receiveTimeout" value="5000" />
<property name="recoveryInterval" value="5000" />
<property name="sessionTransacted" value="true" />
<property name="transactionManager" ref="jmsTransActionManager" />
</bean>
Note: There is no error/exception, the test app starts up just fine.
Any pointers to resolve this will be of great help.
Thanks,
RJ
The issue is resolved. The test class was terminating after the listener got hold of the message, but before it could print the message. So the first message (the highest-priority one) was getting lost from the queue.
Later, since I had included a transaction manager, the listener was putting the message back on the queue (logging a warning: "Rejecting received message because of the listener container having been stopped in the meantime"). As this was only a warning and my logger was at DEBUG level, I missed it earlier.
Putting a Thread.sleep in the test class made sure it ran long enough for the listener to read all the messages in the queue in priority order :)
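For reference, a minimal sketch of that workaround (the Spring config file name is a placeholder; a CountDownLatch counted down in onMessage() would be cleaner than sleeping):
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class ListenerSmokeTest {
    public static void main(String[] args) throws InterruptedException {
        ClassPathXmlApplicationContext context =
                new ClassPathXmlApplicationContext("jms-config.xml"); // placeholder file name
        try {
            // Give the listener time to drain the queue before the JVM exits.
            Thread.sleep(60000);
        } finally {
            context.close(); // stops the listener container gracefully
        }
    }
}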
cheers,
RJ