We have built a Mule application with 1 Mule node and 2 WMQ queue managers. We are using HAProxy to route traffic to the queue managers (in the properties file, we specify queueManager = *).
Although we create multiple connections from Mule, all of them go to the same queue manager. For example, with numberOfConsumers = 16, all 16 connections end up on the same queue manager.
Has anyone encountered this issue? Is there a workaround? The composite component solves this, but it is not available in Mule 4.
I am using the IBM client jar com.ibm.mq.allclient-8.0.0.3.jar, with the connector and connection factory below:
<wmq:connector name="drs-Request" port="${drs.mq.port}" transportType="CLIENT_MQ_TCPIP" specification="1.1" targetClient="JMS_COMPLIANT" validateConnections="true" maxRedelivery="-1" numberOfConsumers="${drs.mq.no.of.consumers}" connectionFactory-ref="drsConnectionFactory" doc:name="WMQ">
<reconnect-forever blocking="false" frequency="${drs.mq.reconnection.frequency}"/>
</wmq:connector>
<spring:beans>
<spring:bean id="drsConnectionFactory" name="drsConnectionFactory" class="com.ibm.mq.jms.MQConnectionFactory">
<spring:property name="channel" value="${drs.mq.channel}"/>
<spring:property name="hostName" value="${drs.mq.hostname}"/>
<spring:property name="port" value="${drs.mq.port}"/>
<spring:property name="queueManager" value="${drs.mq.queuemanager}"/>
<spring:property name="transportType" value="1"/>
<spring:property name="sSLCipherSuite" value="${drs.mq.ciphersuite}"/>
</spring:bean>
</spring:beans>
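Bear in mind that the MQ client multiplexes several JMS sessions over a single TCP connection (channel conversation sharing), so a TCP-level proxy such as HAProxy may see far fewer connections than consumers. As a sketch only (host names and ports are placeholders), one alternative is to point the factory at both queue managers directly via connectionNameList, keeping queueManager = *; note the client treats the list as an ordered failover list rather than a true load balancer:
<spring:bean id="drsConnectionFactory" name="drsConnectionFactory" class="com.ibm.mq.jms.MQConnectionFactory">
<spring:property name="channel" value="${drs.mq.channel}"/>
<!-- host1/host2 are placeholders for the two queue manager endpoints -->
<spring:property name="connectionNameList" value="host1(1414),host2(1414)"/>
<spring:property name="queueManager" value="*"/>
<spring:property name="transportType" value="1"/>
</spring:bean>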
Thanks
Related
I am connecting to MQ 8.x from Mule via JMS, and recently I faced an issue where MQ write operations were happening outside of syncpoint; due to this, combined with the huge inbound load, MQ went into a deadlock state.
<spring:bean id="ConnectionFactory" class="com.ibm.mq.jms.MQConnectionFactory" name="ConnectionFactory">
<spring:property name="hostName" value="xxxx" />
<spring:property name="port" value="xxxx"/>
<spring:property name="queueManager" value="xxxx"/>
<spring:property name="transportType" value="1"/>
<spring:property name="channel" value="xxxx"/>
</spring:bean>
<jms:connector name="JmsConsumer" username="xxxx" password="xxxx" specification="1.1" connectionFactory-ref="ConnectionFactory" numberOfConsumers="1" validateConnections="true" persistentDelivery="true" doc:name="JMS"/>
<jms:outbound-endpoint queue="xxxx" connector-ref="JmsConsumer" doc:name="Audits"/>
My message volume will be high, but it is just a PUT operation, so I am really not sure whether XA or another transaction manager is needed here.
This has been handled in MQ 9.x: MQ itself manages the out-of-syncpoint condition implicitly, so upgrading to MQ 9.x is one solution for this kind of issue.
https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_9.0.0/com.ibm.mq.dev.doc/q026865_.html
This message is produced because persistent messages are being put outside of a transaction. MQ is highly optimized for processing transactional persistent messages, and this warning is telling us that the queue isn't being processed as efficiently as possible. You will see a significant performance improvement if you put the actions inside a transaction/syncpoint, or, if non-persistent delivery is good enough, turn the persistent flag off.
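For illustration, a minimal sketch of the same outbound endpoint with a local JMS transaction added, so the put happens under syncpoint (Mule 3 syntax, using the connector from the question):
<jms:outbound-endpoint queue="xxxx" connector-ref="JmsConsumer" doc:name="Audits">
<!-- the send now runs under syncpoint; the message commits with the transaction -->
<jms:transaction action="ALWAYS_BEGIN"/>
</jms:outbound-endpoint>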
We are trying to connect from our Mule service to a queue. This queue is on WebSphere Application Server, and we are using the WebSphere default messaging provider.
How can we set our connector configuration to match this queue?
We are using the default JMS connector.
You need to refer to the section of the IBM Knowledge Center explaining JNDI connections to the SIBus.
If you are using EE, then I recommend using the WMQ connector. Documentation is available at: http://www.mulesoft.org/documentation...
If you have to use JMS, you need to create a Spring bean for the WebSphere connection factory and reference it from your JMS connector's connectionFactory-ref attribute.
<spring:bean name="MQConnectionFactory" class="com.ibm.mq.jms.MQQueueConnectionFactory">
<spring:property name="hostName" value="localhost"/>
<spring:property name="port" value="1414"/>
<spring:property name="queueManager" value="localmanager"/>
<spring:property name="transportType" value="1"/>
</spring:bean>
Don't forget to copy com.ibm.mqjms.jar into your Mule's classpath.
You need to use the com.ibm.mq.jms.MQXAQueueConnectionFactory class instead if you are using XA transactions.
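For example, a minimal sketch of the XA variant, with the same placeholder connection details as above:
<spring:bean name="MQXAConnectionFactory" class="com.ibm.mq.jms.MQXAQueueConnectionFactory">
<!-- MQXAQueueConnectionFactory extends MQQueueConnectionFactory, so the properties are the same -->
<spring:property name="hostName" value="localhost"/>
<spring:property name="port" value="1414"/>
<spring:property name="queueManager" value="localmanager"/>
<spring:property name="transportType" value="1"/>
</spring:bean>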
I have a use case where I want to create multiple listeners (6) in one application, subscribing to multiple destinations (6 topics). All the subscriptions are durable. I am using a separate DefaultMessageListenerContainer (DMLC) for each listener, each with a different client ID, but I am confused about how the connection factory should be used.
Should I use a single ActiveMQ pooled connection factory with maxConnections set to 6, or a different pooled connection factory for each listener?
Is there any harm in using a PooledConnectionFactory with maxConnections for durable subscribers?
Source code:
Connection Factory:
<bean id="jmsFactory" class="org.apache.activemq.pool.PooledConnectionFactory"
destroy-method="stop">
<property name="connectionFactory">
<bean class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL">
<value>${jms.broker.url}</value>
</property>
</bean>
</property>
<property name="maxConnections" value="6" />
and my listeners use it as follows (I have 6 listeners similar to this one, each with a different destination and client ID):
<bean id="listenerContainer"
class="org.springframework.jms.listener.DefaultMessageListenerContainer"
<property name="connectionFactory" ref="jmsFactory" />
<property name="destination" ref="topic_pnlCompleteTopic" />
<property name="durableSubscriptionName" value="FAGCompletion" />
<property name="pubSubDomain" value="true" />
<property name="subscriptionDurable" value="${jms.fagsListener.durable}" />
<property name="clientId" value="${jms.fagsListener.clientId}" />
<property name="messageListener" ref="pnlMessageListener" />
<property name="messageSelector" value="JMSType = 'FAG Completion'" />
</bean>
That all sounds fine to me, as long as you're not using this connection factory for something else. There is no reason to limit the number of connections to 6; you can set a higher number if you want, and it will only be used if necessary. A ConnectionFactory is typically something you share across your whole app; think of it as a kind of DataSource for JMS access.
Your listeners will typically use only one session each, since you are using topics. There is no reason to put a limit on your pool, or to use multiple pools. You typically use one pooled connection factory for the whole application unless you see a real reason to limit it or split it into separate pools.
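For illustration, the shared factory from the question with the cap simply removed; all six listener containers can reference this one bean:
<bean id="jmsFactory" class="org.apache.activemq.pool.PooledConnectionFactory" destroy-method="stop">
<!-- no maxConnections cap: the pool grows only as far as the listeners actually need -->
<property name="connectionFactory">
<bean class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL" value="${jms.broker.url}" />
</bean>
</property>
</bean>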
I am currently building a Mule ESB server application which uses a request-response JMS connector. Since it is used in a highly concurrent environment, we enabled the Spring JMS cache in our MQ config.
<mule>
<spring:beans>
<!-- MQ Factory -->
<spring:bean id="testMsgMqFactoryBean1" name="testMsgMqFactory1" class="com.ibm.mq.jms.MQQueueConnectionFactory">
<spring:property name="channel" value="${test.msg.mq.channel.1}" />
<spring:property name="queueManager" value="${test.msg.mq.queueManager.1}" />
<spring:property name="hostName" value="${test.msg.mq.hostName.1}" />
<spring:property name="port" value="${test.msg.mq.port.1}" />
<spring:property name="transportType" value="${mq.jms.transportType}" />
</spring:bean>
<spring:bean id="testMsgMqFactoryBeanCache1" class="org.springframework.jms.connection.CachingConnectionFactory">
<spring:property name="targetConnectionFactory" ref="testMsgMqFactoryBean1" />
<spring:property name="sessionCacheSize" value="${test.threading.profile.maxThreadsActive}" />
<spring:property name="cacheConsumers" value="false" />
<!-- <spring:property name="cacheProducers" value="false" /> -->
</spring:bean>
</spring:beans>
<!-- MQ Connector 1 -->
<jms:custom-connector name="testMsgMqConnector.1" class="org.mule.transport.jms.websphere.WebsphereJmsConnector" doc:name="Custom JMS">
<spring:property name="specification" value="1.1" />
<spring:property name="connectionFactory" ref="testMsgMqFactoryBeanCache1" />
<spring:property name="persistentDelivery" value="false" />
<spring:property name="disableTemporaryReplyToDestinations" value="true" />
<spring:property name="numberOfConsumers" value="${test.threading.profile.maxThreadsActive}" />
<spring:property name="maxRedelivery" value="-1" />
<receiver-threading-profile maxThreadsActive="${test.threading.profile.maxThreadsActive}" maxBufferSize="${test.threading.profile.maxBufferSize}" maxThreadsIdle="${test.threading.profile.maxThreadsIdle}"/>
<reconnect frequency="${mq.jms.reconnection.frequency}" count="${mq.jms.reconnection.count}" blocking="false" />
</jms:custom-connector>
<!-- msgworks inbound and outbound MQ setup -->
<!-- Rewards -->
<jms:endpoint exchange-pattern="request-response" queue="${test.msg.mq.inbound.account.queue}" name="testQueue1" connector-ref="testMsgMqConnector.1" doc:name="JMS" />
</mule>
This configuration runs fine when the client uses a static replyTo queue. However, some of our customers use dynamic/temporary replyTo queues. Since org.springframework.jms.connection.CachingConnectionFactory caches producers, a producer object is cached for every temporary replyTo queue and never closed. After processing hundreds of requests, the application started throwing exceptions:
********************************************************************************
Message : Failed to create and dispatch response event over Jms destination "queue://QMGR1/TESTret5a975v53AF980F2006BE02?targetClient=1". Failed to route event via endpoint: null. Message payload is of type: JMSTextMessage
Code : MULE_ERROR-42999
--------------------------------------------------------------------------------
Exception stack is:
1. MQJE001: Completion Code 2, Reason 2017 (com.ibm.mq.MQException)
com.ibm.mq.MQQueueManager:2808 (null)
2. MQJMS2008: failed to open MQ queue TESTret5a975v53AF980F2006BE02(JMS Code: MQJMS2008) (javax.jms.ResourceAllocationException)
com.ibm.mq.jms.MQQueueServices:398 (http://java.sun.com/j2ee/sdk_1.3/techdocs/api/javax/jms/ResourceAllocationException.html)
3. Failed to create and dispatch response event over Jms destination "queue://QMGR1/TESTret5a975v53AF980F2006BE02?targetClient=1". Failed to route event via endpoint: null. Message payload is of type: JMSTextMessage (org.mule.api.transport.DispatchException)
org.mule.transport.jms.JmsReplyToHandler:173 (http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/api/transport/DispatchException.html)
After investigating the MQ error code (MQJE001: Completion Code 2, Reason 2017), I found that the cause is that we never closed producers, and the cached producers exhausted the MQ handles on the queue manager. The quick and easy fix is to uncomment the line in the Spring JMS cache config so that producers are closed every time:
<spring:bean id="testMsgMqFactoryBeanCache1" class="org.springframework.jms.connection.CachingConnectionFactory">
<spring:property name="targetConnectionFactory" ref="testMsgMqFactoryBean1" />
<spring:property name="sessionCacheSize" value="${test.threading.profile.maxThreadsActive}" />
<spring:property name="cacheConsumers" value="false" />
<spring:property name="cacheProducers" value="false" />
</spring:bean>
Now I am no longer seeing the MQ issue, but I have hit another performance problem: because no producers are cached, a new producer is created every single time.
My question is: how should I deal with this scenario? Since the client won't change the way they receive reply messages via temporary queues, how can we avoid exhausting MQ handles without impacting performance?
Thank you very much
- Lei
This is a very interesting use case. However, I'm afraid there is nothing out of the box to fix this. There are the more obvious solutions: disable the caching, or extend the Spring cache provider.
Temporary queues and performance are definitely not two things that you can have at the same time. I would suggest another possibility:
If you are using the temporary queues so that responses come back to a given consumer only, you could use a well-known queue for the replies, combining a header carrying the hostname that should receive the message, a selector that differs on each consumer node, and a TTL on the sent messages to make them disappear after a while (probably in addition to having old messages discarded on reconnection).
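As a rough sketch only (queue name, header name, and selector value are assumptions), the consumer side of that idea in Mule 3 could look like this; the sender would set the matching targetHost property and a timeToLive so unclaimed replies expire:
<!-- each node consumes only the replies addressed to it, from one shared, permanent reply queue -->
<jms:inbound-endpoint queue="SHARED.REPLY.QUEUE" connector-ref="testMsgMqConnector.1" doc:name="Replies">
<jms:selector expression="targetHost = 'node-1'"/>
</jms:inbound-endpoint>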
My app (a Spring message listener) reads from a queue and writes to the database in a single transaction. I use Atomikos to provide the XA transaction behaviour. When the app is abruptly terminated, for example with kill commands, I see that messages are lost. Is there any specific configuration I need to use? Should the queues be persistent? Currently the queues are non-persistent. My MQ version is v7.1.
The Spring config for the listener container looks like:
<bean id="listenerContainer" class="com.miax.test.TestListenerMDPImpl" autowire="byName">
<property name="connectionFactory" ref="mqConnFactory" />
<property name="destinationName" value="QUEUE" />
<property name="messageListener" ref="listenerAdapter" />
<property name="transactionManager" ref="jtaTransactionManager" />
<property name="sessionTransacted" value="true" />
<property name="concurrentConsumers" value="1" />
<!-- receive timeout; should be less than the transaction timeout -->
<property name="receiveTimeout" value="3000" />
<!-- retry the connection every second -->
<property name="recoveryInterval" value="1000" />
<property name="autoStartup" value="true" />
<property name="sessionAcknowledgeMode" value="0" />
</bean>
Any other info will be given as needed.
Thanks.
The client you are using must be the Extended Transactional Client if it was downloaded prior to May of this year. Any of the v7.0.1 and higher clients available as of May 2012 have the XA capability built in. If in doubt, download a current release of the WMQ client and install it.
Second, the XA transaction manager must have its own connection to the queue manager, independent of the application, so that it can connect and reconcile in-doubt transactions if the application fails to restart. To do this, the transaction manager must be configured with an xa_open string and a switch file, as described in the Infocenter topic Configuring XA-compliant transaction managers.
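Since you are already on Atomikos, here is a minimal sketch (bean names and connection details are placeholders) of wiring the MQ XA factory through Atomikos so that sessions enlist in the JTA transaction and register for recovery:
<bean id="mqXAConnectionFactory" class="com.ibm.mq.jms.MQXAConnectionFactory">
<property name="hostName" value="xxxx"/>
<property name="port" value="xxxx"/>
<property name="queueManager" value="xxxx"/>
<property name="channel" value="xxxx"/>
<property name="transportType" value="1"/>
</bean>
<!-- Atomikos wraps the XA factory; use this bean as the listener container's connectionFactory -->
<bean id="mqConnFactory" class="com.atomikos.jms.AtomikosConnectionFactoryBean" init-method="init" destroy-method="close">
<property name="uniqueResourceName" value="MQXA"/>
<property name="xaConnectionFactory" ref="mqXAConnectionFactory"/>
</bean>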
For what it's worth, there is no such thing as a persistent queue in WMQ. It is the messages themselves that are persistent (or not). For more on that, please see my blog post on the topic. This is a rather important point, because when people assume that the queue itself is persistent, they tend to devise solutions that produce unexpected results. Please read the blog post!
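And if the sending side is under your control, a minimal sketch of forcing persistent delivery from a Spring JmsTemplate (bean names are assumptions):
<bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate">
<property name="connectionFactory" ref="mqConnFactory"/>
<!-- explicitQosEnabled must be true for deliveryMode to take effect -->
<property name="explicitQosEnabled" value="true"/>
<!-- 2 = javax.jms.DeliveryMode.PERSISTENT -->
<property name="deliveryMode" value="2"/>
</bean>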