Handle out of sync point issue via IBM JMS from Mule

I am connecting to MQ 8.x from Mule via JMS. Recently I faced an issue where MQ write operations appear to be going outside the sync point, and because of this, together with the huge inbound load, MQ went into a deadlock state.
<spring:bean id="ConnectionFactory" class="com.ibm.mq.jms.MQConnectionFactory" name="ConnectionFactory">
<spring:property name="hostName" value="xxxx" />
<spring:property name="port" value="xxxx"/>
<spring:property name="queueManager" value="xxxx"/>
<spring:property name="transportType" value="1"/>
<spring:property name="channel" value="xxxx"/>
</spring:bean>
<jms:connector name="JmsConsumer" username="xxxx" password="xxxx" specification="1.1" connectionFactory-ref="ConnectionFactory" numberOfConsumers="1" validateConnections="true" persistentDelivery="true" doc:name="JMS"/>
<jms:outbound-endpoint queue="xxxx" connector-ref="JmsConsumer" doc:name="Audits"/>
My operation volume will be high, but it is just a PUT operation, so I am really not sure whether XA or another transaction manager is needed here.

This is handled in the MQ 9.x version: MQ itself manages the sync point implicitly, so upgrading to MQ 9.x is one solution for this kind of issue.
https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_9.0.0/com.ibm.mq.dev.doc/q026865_.html

This message is produced because persistent messages are being put outside of a transaction. MQ is highly optimized for processing persistent messages transactionally, and this warning tells us that the queue is not being processed as efficiently as possible. You will see a significant performance improvement if you perform the puts inside a transaction/sync point, or, if non-persistent delivery is good enough, turn the persistent flag off.
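For concreteness, here is a minimal plain-JMS sketch of the first option, putting a persistent message under a sync point via a transacted session. This is an illustration, not the original poster's code; host, channel, queue, and credential values are the same placeholders used in the config above.

import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class TransactedPut {
    public static void main(String[] args) throws Exception {
        MQConnectionFactory cf = new MQConnectionFactory();
        cf.setHostName("xxxx");                          // placeholders, as in the Spring bean
        cf.setPort(1414);
        cf.setQueueManager("xxxx");
        cf.setChannel("xxxx");
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT); // transportType "1" in the XML

        Connection conn = cf.createConnection("xxxx", "xxxx");
        // transacted=true: every send happens under a sync point until commit()
        Session session = conn.createSession(true, Session.SESSION_TRANSACTED);
        MessageProducer producer = session.createProducer(session.createQueue("xxxx"));
        producer.setDeliveryMode(DeliveryMode.PERSISTENT);
        try {
            producer.send(session.createTextMessage("audit record"));
            session.commit();     // hardens the persistent put and ends the sync point
        } catch (JMSException e) {
            session.rollback();   // nothing is left half-done on failure
            throw e;
        } finally {
            conn.close();
        }
    }
}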

Related

Mule WMQ connections not spreading across queue managers

We have built a Mule application with one Mule node and two WMQ queue managers. We are using HAProxy to route the traffic to the multiple queue managers (in the properties file we specify queuemanager = *).
Although we create multiple connections from Mule, all of the connections go to the same queue manager (say I keep numberOfConsumers = 16; all 16 connections go to the same queue manager).
Has anyone encountered this issue? Is there any workaround? A composite component solves this issue, but the composite element is not available in Mule 4.
I am using the IBM jar com.ibm.mq.allclient-8.0.0.3.jar, with the connection factory and MQ connector below.
<wmq:connector name="drs-Request" port="${drs.mq.port}" transportType="CLIENT_MQ_TCPIP" specification="1.1" targetClient="JMS_COMPLIANT" validateConnections="true" maxRedelivery="-1" numberOfConsumers="${drs.mq.no.of.consumers}" connectionFactory-ref="drsConnectionFactory" doc:name="WMQ">
<reconnect-forever blocking="false" frequency="${drs.mq.reconnection.frequency}"/>
</wmq:connector>
<spring:beans>
<spring:bean id="drsConnectionFactory" name="drsConnectionFactory" class="com.ibm.mq.jms.MQConnectionFactory">
<spring:property name="channel" value="${drs.mq.channel}"/>
<spring:property name="hostName" value="${drs.mq.hostname}"/>
<spring:property name="port" value="${drs.mq.port}"/>
<spring:property name="queueManager" value="${drs.mq.queuemanager}"/>
<spring:property name="transportType" value="1"/>
<spring:property name="sSLCipherSuite" value="${drs.mq.ciphersuite}"/>
</spring:bean>
Thanks
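One approach worth considering (an assumption on my part, not something confirmed in this thread) is to let the MQ client spread connections itself through a client channel definition table (CCDT) instead of a TCP proxy, since the factory above pins every connection to a single hostName; queueManager can then be set to "*" so that any queue manager in the table is acceptable. A sketch with placeholder values:

import java.net.URL;
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class CcdtConnectionFactory {
    public static MQConnectionFactory create() throws Exception {
        MQConnectionFactory cf = new MQConnectionFactory();
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);
        // CCDT generated from the queue managers; the path is a placeholder
        cf.setCCDTURL(new URL("file:///opt/mq/ccdt/AMQCLCHL.TAB"));
        // "*" means any queue manager defined in the CCDT may satisfy the connection,
        // so new connections are no longer pinned to one host
        cf.setQueueManager("*");
        return cf;
    }
}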

How to resolve a JMS server performance issue when the client uses a temporary replyTo queue?

I am currently building a Mule ESB server application which uses a request-response JMS connector. Since it is used in a highly concurrent environment, we enabled the Spring JMS cache in our MQ config:
<spring:beans>
<mule>
<!-- MQ Factory -->
<spring:bean id="testMsgMqFactoryBean1" name="testMsgMqFactory1" class="com.ibm.mq.jms.MQQueueConnectionFactory">
<spring:property name="channel" value="${test.msg.mq.channel.1}" />
<spring:property name="queueManager" value="${test.msg.mq.queueManager.1}" />
<spring:property name="hostName" value="${test.msg.mq.hostName.1}" />
<spring:property name="port" value="${test.msg.mq.port.1}" />
<spring:property name="transportType" value="${mq.jms.transportType}" />
</spring:bean>
<spring:bean id="testMsgMqFactoryBeanCache1" class="org.springframework.jms.connection.CachingConnectionFactory">
<spring:property name="targetConnectionFactory" ref="testMsgMqFactoryBean1" />
<spring:property name="sessionCacheSize" value="${test.threading.profile.maxThreadsActive}" />
<spring:property name="cacheConsumers" value="false" />
<!-- <spring:property name="cacheProducers" value="false" /> -->
</spring:bean>
<!-- MQ Connector 1 -->
<jms:custom-connector name="testMsgMqConnector.1" class="org.mule.transport.jms.websphere.WebsphereJmsConnector" doc:name="Custom JMS">
<spring:property name="specification" value="1.1" />
<spring:property name="connectionFactory" ref="testMsgMqFactoryBeanCache1" />
<spring:property name="persistentDelivery" value="false" />
<spring:property name="disableTemporaryReplyToDestinations" value="true" />
<spring:property name="numberOfConsumers" value="${test.threading.profile.maxThreadsActive}" />
<spring:property name="maxRedelivery" value="-1" />
<receiver-threading-profile maxThreadsActive="${test.threading.profile.maxThreadsActive}" maxBufferSize="${test.threading.profile.maxBufferSize}" maxThreadsIdle="${test.threading.profile.maxThreadsIdle}"/>
<reconnect frequency="${mq.jms.reconnection.frequency}" count="${mq.jms.reconnection.count}" blocking="false" />
</jms:custom-connector>
<!-- msgworks inbound and outbound MQ setup -->
<!-- Rewards -->
<jms:endpoint exchange-pattern="request-response" queue="${test.msg.mq.inbound.account.queue}" name="testQueue1" connector-ref="testMsgMqConnector.1" doc:name="JMS" />
</mule>
</spring:beans>
This configuration runs fine when the client uses a static replyTo queue. However, we have some customers who use a dynamic/temporary replyTo queue. Since org.springframework.jms.connection.CachingConnectionFactory caches producers, a producer object is cached for every temporary replyTo queue and never closed. After processing hundreds of requests, the application started throwing exceptions:
********************************************************************************
Message : Failed to create and dispatch response event over Jms destination "queue://QMGR1/TESTret5a975v53AF980F2006BE02?targetClient=1". Failed to route event via endpoint: null. Message payload is of type: JMSTextMessage
Code : MULE_ERROR-42999
--------------------------------------------------------------------------------
Exception stack is:
1. MQJE001: Completion Code 2, Reason 2017 (com.ibm.mq.MQException)
com.ibm.mq.MQQueueManager:2808 (null)
2. MQJMS2008: failed to open MQ queue TESTret5a975v53AF980F2006BE02(JMS Code: MQJMS2008) (javax.jms.ResourceAllocationException)
com.ibm.mq.jms.MQQueueServices:398 (http://java.sun.com/j2ee/sdk_1.3/techdocs/api/javax/jms/ResourceAllocationException.html)
3. Failed to create and dispatch response event over Jms destination "queue://QMGR1/TESTret5a975v53AF980F2006BE02?targetClient=1". Failed to route event via endpoint: null. Message payload is of type: JMSTextMessage (org.mule.api.transport.DispatchException)
org.mule.transport.jms.JmsReplyToHandler:173 (http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/api/transport/DispatchException.html)
After investigating the MQ error code (MQJE001: Completion Code 2, Reason 2017), I found that the reason behind this error is that we never closed producers, and the producers exhausted the MQ handles on the queue manager. The quick and easy fix is to uncomment the line in the Spring JMS cache config so that producers are closed every time:
<spring:bean id="testMsgMqFactoryBeanCache1" class="org.springframework.jms.connection.CachingConnectionFactory">
<spring:property name="targetConnectionFactory" ref="testMsgMqFactoryBean1" />
<spring:property name="sessionCacheSize" value="${test.threading.profile.maxThreadsActive}" />
<spring:property name="cacheConsumers" value="false" />
<spring:property name="cacheProducers" value="false" />
</spring:bean>
Now I am no longer seeing the MQ issue, but I have run into another performance issue: because no producers are cached, a new producer is created every single time.
My question is: how do we deal with this scenario? Since the clients won't change the way they receive reply messages from temporary queues, how can we avoid exhausting MQ handles without impacting performance?
Thank you very much
- Lei
This is a very interesting use case. However, I'm afraid there is nothing out of the box to fix this. There are the more obvious solutions: disable the caching, or extend the Spring cache provider.
Temporary queues and performance are definitely not two things you can have at the same time. I would suggest another possibility:
If you are using the temporary queues to have the responses come back to a given consumer only, then, probably in addition to having old messages discarded on reconnection:
You could use a well-known queue for the replies, using a combination of a header carrying the hostname that should receive the message, a selector that differs on each consumer node, and a TTL on the sent messages to make them disappear after a while.
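A rough JMS sketch of that pattern, assuming a shared reply queue named APP.REPLIES and a custom targetNode header (both names are made up for illustration):

import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

public class SelectorReplyPattern {
    // Each node consumes only the replies addressed to it, via a selector on the shared queue.
    public static MessageConsumer startReplyConsumer(Session session, String nodeName)
            throws JMSException {
        Queue replyQueue = session.createQueue("APP.REPLIES"); // hypothetical shared reply queue
        return session.createConsumer(replyQueue, "targetNode = '" + nodeName + "'");
    }

    // The replying side stamps the header and sets a TTL so stale replies expire on their own.
    public static void sendReply(Session session, Destination replyQueue, String nodeName,
            String body) throws JMSException {
        MessageProducer producer = session.createProducer(replyQueue);
        producer.setTimeToLive(30_000); // e.g. 30 s; tune to how long replies stay relevant
        TextMessage msg = session.createTextMessage(body);
        msg.setStringProperty("targetNode", nodeName);
        producer.send(msg);
    }
}

Because the reply queue is permanent and shared, the producer for it can be cached safely, which restores the performance lost when producer caching was turned off.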

How to configure a (Spring) JMS connection pool for WMQ

I am trying to configure a JMS connection pool in Spring/Camel for WebSphere MQ. I am seeing a ClassCastException when I try to use CachingConnectionFactory from Spring. I could not find a pool from WMQ; has anybody done connection pooling with WMQ? I didn't find any examples, though there are lots of examples for ActiveMQ.
Here is what I have so far, which produces the ClassCastException:
<bean id="inCachingConnectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">
<property name="targetConnectionFactory" ref="inboundMqConnectionFactory1" />
<property name="sessionCacheSize" value="5" />
</bean>
<bean id="inboundWebsphereMq1" class="org.apache.camel.component.jms.JmsComponent">
<property name="connectionFactory" ref="inCachingConnectionFactory" />
<property name="destinationResolver" ref="jmsDestinationResolver" />
<property name="transacted" value="true" />
<property name="transactionManager" ref="txManager1" />
</bean>
<bean id="inboundMqConnectionFactory1" class="com.ibm.mq.jms.MQQueueConnectionFactory">
<property name="hostName" value="${isi.inbound.queue.host2}" />
<property name="port" value="${isi.inbound.queue.port}" />
<property name="queueManager" value="${isi.inbound.queue.queuemanager2}" />
<property name="channel" value="${isi.inbound.queue.channel2}" />
<property name="transportType" value="${isi.queue.transportType}" />
</bean>
The exception I see is:
trying to recover. Cause: com.sun.proxy.$Proxy37 cannot be cast to com.ibm.mq.jms.MQQueueSession
In general:
do not use QueueConnectionFactory or
TopicConnectionFactory, as ConnectionFactory (JMS 1.1) is the replacement for both;
each ConnectionFactory from the v7 WMQ JMS client jars provides its own caching logic, so in general you don't need CachingConnectionFactory.
Now try it this way:
<bean id="mqConnectionFactory" class="com.ibm.mq.jms.MQConnectionFactory"
p:queueManager="${QM_NAME}"
p:hostName="${QM_HOST_NAME}"
p:port="${QM_HOST_PORT}"
p:channel="${QM_CHANNEL}"
p:clientID="${QM_CLIENT_ID}">
<property name="transportType">
<util:constant static-field="com.ibm.msg.client.wmq.WMQConstants.WMQ_CM_CLIENT" />
</property>
</bean>
<bean id="userConnectionFactory" class="org.springframework.jms.connection.UserCredentialsConnectionFactoryAdapter"
p:targetConnectionFactory-ref="mqConnectionFactory"
p:username="${QM_USERNAME}"
p:password="${QM_PASSWORD}" />
<!-- this will work -->
<bean id="connectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory"
p:targetConnectionFactory-ref="userConnectionFactory"
p:cacheConsumers="true"
p:reconnectOnException="true" />
Of course you can cache sessions instead of consumers if you want it that way. In my experience, WMQ session caching is a measurable performance improvement, but only if you are limited by CPU power on the WMQ machine or by actual message throughput; both situations are rare in the majority of real-world applications. Caching consumers avoids excessive MQOPEN calls, which are expensive operations on WMQ, so it helps too.
My rule of thumb is that the performance benefit of consumer + session caching is about half the benefit of connection caching, and it is usually not worth pursuing in your everyday JEE stack unless you are hardware limited.
Since WMQ v7, asynchronous consumers are really, really fast, with literally no CPU overhead compared to the Spring MC, and they are the preferred way of consuming messages if you are hardware limited. Most days I still use Spring, as I prefer its easy-going nature.
Hope it helps.

WebSphere MQ and Atomikos - Messages Lost on process termination

My app (a Spring message listener) reads from a queue and writes to the database in a single transaction. I use Atomikos to provide the XA transaction behaviour. When the app is abruptly terminated, for example with kill statements, I see that messages are lost. Is there any specific configuration I need to use? Should the queues be persistent? Currently the queues are non-persistent. My MQ version is 7.1.
Spring config for listener container looks like:
<bean id="listenerContainer" class="com.miax.test.TestListenerMDPImpl" autowire="byName">
<property name="connectionFactory" ref="mqConnFactory" />
<property name="destinationName" value="QUEUE" />
<property name="messageListener" ref="listenerAdapter" />
<property name="transactionManager" ref="jtaTransactionManager" />
<property name="sessionTransacted" value="true" />
<property name="concurrentConsumers" value="1" />
<!-- receive timeout; should be less than the transaction timeout -->
<property name="receiveTimeout" value="3000" />
<!-- retry connection every 1 second -->
<property name="recoveryInterval" value="1000" />
<property name="autoStartup" value="true" />
<property name="sessionAcknowledgeMode" value="0" />
</bean>
Any other info will be given as needed.
Thanks.
The client you are using must be the Extended Transactional Client if it was downloaded prior to May of this year. All of the v7.0.1 and higher clients as of May 2012 have the XA capability built in. If in doubt, download and install a current release of the WMQ client.
Second, the XA transaction manager must have its own connection to the queue manager, independent of the application. This is so that it can connect and reconcile in-doubt transactions if the application fails to restart. To do this, the transaction manager must be configured with an XA open string and a switch file, as described in the Infocenter topic Configuring XA-compliant transaction managers.
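As a rough sketch of that wiring, assuming Atomikos' AtomikosConnectionFactoryBean around an XA-capable MQ factory (all values below are placeholders, not from the original question):

import javax.jms.JMSException;
import com.atomikos.jms.AtomikosConnectionFactoryBean;
import com.ibm.mq.jms.MQXAConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class XaFactoryConfig {
    public static AtomikosConnectionFactoryBean create() throws JMSException {
        // XA-capable MQ connection factory; host/port/QM/channel are placeholders
        MQXAConnectionFactory xaCf = new MQXAConnectionFactory();
        xaCf.setHostName("mqhost");
        xaCf.setPort(1414);
        xaCf.setQueueManager("QMGR");
        xaCf.setChannel("SYSTEM.DEF.SVRCONN");
        xaCf.setTransportType(WMQConstants.WMQ_CM_CLIENT);

        // Atomikos wraps the XA factory so sessions enlist in the JTA transaction;
        // the unique resource name identifies this resource during recovery
        AtomikosConnectionFactoryBean cf = new AtomikosConnectionFactoryBean();
        cf.setUniqueResourceName("MQ_XA");
        cf.setXaConnectionFactory(xaCf);
        return cf;
    }
}

Even with XA configured correctly, non-persistent messages can still be lost across a failure, so message persistence matters here as well.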
For what it's worth, there is no such thing as a persistent queue in WMQ. It is the messages themselves that are persistent (or not). For more on that, please see my blog post on the topic. This is a rather important point, because when people assume that the queue itself is persistent, they tend to devise solutions that produce unexpected results. Please read the blog post!

No new consumers on activemq queue after a while

For about a month we have had a recurring issue with ActiveMQ and Spring: after some time (between a day and a week) we have no more consumers, no new ones get started, and the queue starts to fill up.
This setup ran for over a year without any issues, and as far as we can see nothing relevant has been changed.
Another queue we use has also started to show the same behavior, but less frequently.
From the ActiveMQ web console (as you can see, lots of pending messages and no consumers):
Name: queue.readresult | Pending Messages: 19595 | Consumers: 0 | Enqueued: 40747 | Dequeued: 76651
Contents of our bundle-context.xml:
<!-- JMS -->
<bean id="jmsConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory" destroy-method="stop">
<property name="maxConnections" value="5" />
<property name="maximumActive" value="5" />
<property name="connectionFactory">
<bean class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL">
<value>tcp://localhost:61616</value>
</property>
</bean>
</property>
</bean>
<bean id="ResultMessageConverter" class="com.bla.ResultMessageConverter" />
<jms:listener-container connection-factory="jmsConnectionFactory" destination-resolver="jmsDestinationResolver"
concurrency="2" prefetch="10" acknowledge="auto" cache="none" message-converter="ResultMessageConverter">
<jms:listener destination="queue.readresult" ref="ReaderRequestManager" method="handleReaderResult" />
</jms:listener-container>
There are no exceptions in any of the logs. Does anyone know of a reason why, after a while, no new consumers can be started?
Thanks
I've run into issues before where "consumers stop consuming," but I haven't seen consumers stop existing. You may be running out of memory and/or connections in the pool. Do you have to restart ActiveMQ to fix the problem, or just your application?
Here are a couple of suggestions:
Set the queue prefetch to 0
Add useKeepAlive=false to the connection string (see the sketch after this list)
Increase the memory limits for the queues
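For the second suggestion, useKeepAlive is a transport option appended to the broker URL. A sketch of how the pooled factory above might be built with it (my assumption, mirroring the XML config; values are the same as in bundle-context.xml):

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.pool.PooledConnectionFactory;

public class PooledFactory {
    public static PooledConnectionFactory create() {
        // useKeepAlive=false is passed as a transport option on the broker URL
        ActiveMQConnectionFactory amq =
                new ActiveMQConnectionFactory("tcp://localhost:61616?useKeepAlive=false");
        PooledConnectionFactory pool = new PooledConnectionFactory();
        pool.setConnectionFactory(amq);
        pool.setMaxConnections(5);
        return pool;
    }
}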
I can see no obvious reason in the config provided why it should fail, so you need to resort to classic troubleshooting.
Try setting logging to debug and recreating the issue. Then you should be able to see more, and you might be able to detect the root cause.
Also, check out the JMS ExceptionListener and try attaching your own implementation of it to get a grasp of the real problem:
http://docs.oracle.com/javaee/6/api/javax/jms/ExceptionListener.html
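A minimal sketch of attaching one, assuming the pooled connection factory from the config above:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;

public class ConnectionWatcher {
    // An ExceptionListener surfaces asynchronous connection failures that otherwise
    // kill consumers silently, which matches the symptom described above.
    public static Connection watch(ConnectionFactory factory) throws JMSException {
        Connection conn = factory.createConnection();
        conn.setExceptionListener(e ->
                System.err.println("JMS connection problem: " + e.getMessage()));
        conn.start();
        return conn;
    }
}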
