How to configure a (Spring) JMS connection pool for WMQ

I am trying to configure a JMS connection pool in Spring/Camel for WebSphere MQ. I am seeing a class cast exception when I try to use Spring's CachingConnectionFactory. I could not find a pool from WMQ itself; has anybody done connection pooling with WMQ? I didn't find any examples, though there are lots of examples for ActiveMQ.
Here is what I have so far, which produces the class cast exception:
<bean id="inCachingConnectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">
<property name="targetConnectionFactory" ref="inboundMqConnectionFactory1" />
<property name="sessionCacheSize" value="5" />
</bean>
<bean id="inboundWebsphereMq1" class="org.apache.camel.component.jms.JmsComponent">
<property name="connectionFactory" ref="inCachingConnectionFactory" />
<property name="destinationResolver" ref="jmsDestinationResolver" />
<property name="transacted" value="true" />
<property name="transactionManager" ref="txManager1" />
</bean>
<bean id="inboundMqConnectionFactory1" class="com.ibm.mq.jms.MQQueueConnectionFactory">
<property name="hostName" value="${isi.inbound.queue.host2}" />
<property name="port" value="${isi.inbound.queue.port}" />
<property name="queueManager" value="${isi.inbound.queue.queuemanager2}" />
<property name="channel" value="${isi.inbound.queue.channel2}" />
<property name="transportType" value="${isi.queue.transportType}" />
</bean>
The exception I see is:
trying to recover. Cause: com.sun.proxy.$Proxy37 cannot be cast to com.ibm.mq.jms.MQQueueSession

In general:
do not use QueueConnectionFactory or TopicConnectionFactory, as ConnectionFactory (JMS 1.1) is the replacement for both;
each ConnectionFactory from the v7 WMQ JMS client jars provides its own caching logic, so in general you don't need CachingConnectionFactory.
Now try it this way:
<bean id="mqConnectionFactory" class="com.ibm.mq.jms.MQConnectionFactory"
p:queueManager="${QM_NAME}"
p:hostName="${QM_HOST_NAME}"
p:port="${QM_HOST_PORT}"
p:channel="${QM_CHANNEL}"
p:clientID="${QM_CLIENT_ID}">
<property name="transportType">
<util:constant static-field="com.ibm.msg.client.wmq.WMQConstants.WMQ_CM_CLIENT" />
</property>
</bean>
<bean id="userConnectionFactory" class="org.springframework.jms.connection.UserCredentialsConnectionFactoryAdapter"
p:targetConnectionFactory-ref="mqConnectionFactory"
p:username="${QM_USERNAME}"
p:password="${QM_PASSWORD}" />
<!-- this will work -->
<bean id="connectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory"
p:targetConnectionFactory-ref="userConnectionFactory"
p:cacheConsumers="true"
p:reconnectOnException="true" />
Of course you can cache sessions instead of consumers if you want it that way (see the sketch below). In my experience, WMQ session caching is a measurable performance improvement, but only if you are limited on CPU power on the WMQ machine or by actual message throughput; both situations are rare in the majority of real-world applications. Caching consumers avoids excessive MQOPEN calls, which are expensive operations on WMQ, so it helps too.
My rule of thumb is that the performance benefit of consumer + session caching is about half the benefit of connection caching, and it is usually not worth pursuing in your everyday JEE stack unless you are hardware limited.
Since WMQ v7, asynchronous consumers are really, really fast, with literally no CPU overhead compared to the Spring message listener container, and they are the preferred way of consuming messages if you are hardware limited. Most days I still use Spring, as I prefer its easy-going nature.
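For reference, a minimal sketch of the session-caching variant, written as Java config; the cache size is illustrative, and the target is the userConnectionFactory adapter bean defined above:
import javax.jms.ConnectionFactory;
import org.springframework.jms.connection.CachingConnectionFactory;

public class SessionCachingSketch {
    // Sketch only: cache sessions rather than consumers. The target factory
    // is the UserCredentialsConnectionFactoryAdapter bean defined above.
    static CachingConnectionFactory sessionCaching(ConnectionFactory userConnectionFactory) {
        CachingConnectionFactory ccf = new CachingConnectionFactory(userConnectionFactory);
        ccf.setSessionCacheSize(10);      // illustrative size, tune to your load
        ccf.setCacheConsumers(false);     // cache sessions, re-create consumers
        ccf.setReconnectOnException(true);
        return ccf;
    }
}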
Hope it helps.

Related

Using ActiveMQ/JMS on Tomcat server?

We have a Spring-based web application deployed in Tomcat. The web application uses Spring's JmsTemplate and DefaultMessageListenerContainer to produce and consume messages, respectively. Our JMS provider is ActiveMQ "Classic", and we have used the ActiveMQ client libraries to establish a connection with the native broker, including PooledConnectionFactory from the activemq-pool library. Since it's a Spring web application, we have defined the connection factory bean and wired the connectionFactory into the JmsTemplate and DefaultMessageListenerContainer beans respectively. We assume the pooling is driven by Spring through the connection factory.
The behavior we are seeing is that JMS sessions are created and destroyed continuously, and under load the application stops consuming messages.
After reading different articles, we are trying to understand the role of JCA JMS. Can anybody say whether implementing JMS through JCA would resolve the issue, and whether we should enlist JMS as an XA resource so that connections and sessions are maintained through JCA adapters?
We have tried both PooledConnectionFactory and the plain ActiveMQConnectionFactory from the ActiveMQ client libraries; in both cases sessions are created and destroyed frequently, and this stops the JMS client from consuming messages.
The snippet below shows how we configured the Spring JmsTemplate and DefaultMessageListenerContainer beans, wired with the connection factory:
<!--bean id="jmsQueueConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL" value="ssl://localhost:61616"/>
<property name="trustAllPackages" value="true"/>
</bean-->
<bean id="jmsQueueConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory" destroy-method="stop">
<property name="connectionFactory">
<bean class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL" value="ssl://localhost:61616"/>
<property name="trustAllPackages" value="true"/>
</bean>
</property>
<property name="maxConnections" value="5"/>
<property name="maximumActiveSessionPerConnection" value="400"/>
</bean>
<bean id="dmDefaultMessageListenerContainer" class="com.crsoftwareinc.crs.core.jmsListener.DMDefaultMessageListenerContainer">
<property name="autoStartup" value="false"/>
<property name="concurrentConsumers" value="1" />
<property name="maxConcurrentConsumers" value="5" />
<property name="cacheLevelName" value="CACHE_NONE"/>
<property name="connectionFactory" ref="jmsQueueConnectionFactory" />
<property name="sessionTransacted" value="true"/>
</bean>
<bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate">
<property name="connectionFactory" ref="jmsQueueConnectionFactory"/>
<property name="sessionTransacted" value="true"/>
</bean>
We are looking for guidance: how can we use JMS with ActiveMQ in a Spring web application in production deployments so that connections and sessions are maintained seamlessly?
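For what it's worth, with cacheLevelName set to CACHE_NONE, Spring's DefaultMessageListenerContainer re-creates its session and consumer for every receive attempt, which matches the churn described above. Below is a minimal sketch of raising the container's cache level; the listener and destination names are placeholders, and whether this alone resolves the issue here is an assumption:
import javax.jms.ConnectionFactory;
import javax.jms.MessageListener;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class ContainerCachingSketch {
    // Sketch only: let the container cache its connection, session and
    // consumer instead of re-creating them per receive (CACHE_NONE).
    static DefaultMessageListenerContainer container(ConnectionFactory cf, MessageListener listener) {
        DefaultMessageListenerContainer c = new DefaultMessageListenerContainer();
        c.setConnectionFactory(cf);
        c.setDestinationName("SOME.QUEUE");    // placeholder destination
        c.setConcurrentConsumers(1);
        c.setMaxConcurrentConsumers(5);
        c.setCacheLevelName("CACHE_CONSUMER"); // cache connection, session, consumer
        c.setSessionTransacted(true);
        c.setMessageListener(listener);
        return c;
    }
}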

Solace JMS doesn't run with parallel threads in Camel

In order to process JMS messages in parallel, I have configured the JmsComponent and connectionFactory as below.
After reading some posts and the official tutorial, it seems that the configuration below should work for ActiveMQ. However, my testing shows that it doesn't work with Solace. Can someone give me a hint on this? Thanks.
// Route Definition - Camel Java DSL
from(INBOUND_ENDPOINT).setExchangePattern(ExchangePattern.InOnly).threads(5).bean(ThroughputMeasurer.class);
<!-- JMS Config -->
<bean id="jms" class="org.apache.camel.component.jms.JmsComponent">
<property name="connectionFactory" ref="cachedConnectionFactory" />
<property name="acknowledgementModeName" value="AUTO_ACKNOWLEDGE" />
<property name="deliveryPersistent" value="false" />
<property name="asyncConsumer" value="true" />
<property name="concurrentConsumers" value="5" />
</bean>
<!-- jndiTemplate is omitted here -->
<bean id="connectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiTemplate" ref="jndiTemplate" />
<property name="jndiName" value="ceConnectionFactory" />
</bean>
<bean id="cachedConnectionFactory"
class="org.springframework.jms.connection.CachingConnectionFactory">
<property name="targetConnectionFactory" ref="connectionFactory" />
<property name="sessionCacheSize" value="30" />
</bean>
I believe the problem here is that your consumer is bound to an exclusive queue, and only one consumer is allowed to process messages. Binding to a non-exclusive queue should solve the problem.
Exclusive queues only allow the first consumer to consume from the queue.
All other consumers bound to the queue will not receive data. In the event that the first consumer disconnects, the next-oldest consumer begins receiving data; the other consumers can be thought of as "standby" consumers that take over the moment the first consumer disconnects.
Non-exclusive queues are used for load balancing. Messages spooled to the queue are distributed among all consumers in a round-robin manner.
Check:
Are there any exception messages in the log?
Is the jndiName correct? Perhaps it should be jms/ceConnectionFactory?
Is the URI INBOUND_ENDPOINT correct?
...
Try to set up ActiveMQ first, and then migrate the configuration to Solace.

Spring JMS DMLC messages not picked up until restart

I am using a DMLC to listen to Tibco EMS queues (on Tomcat). After some time, messages stop being delivered; after a restart, they are delivered again. I am using SingleConnectionFactory.
Connection Factory:
<bean id="jmsConnectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiTemplate" ref="jndiTemplate" />
<property name="jndiName" value="${connectionQueueFactory}" />
<property name="cache" value="false"/>
<property name="lookupOnStartup" value="false"/>
<property name="proxyInterface" value="javax.jms.ConnectionFactory"/>
</bean>
Authenticated Connection Factory:
<bean id="authenticationConnectionFactory"
class="com.my.service.AuthenticationConnectionFactory"> <-- extends SingleConnectionFactory
<property name="targetConnectionFactory" ref="jmsConnectionFactory" />
<property name="username" value="${userName}" />
<property name="password" value="${password}" />
<property name="sessionCacheSize" value="1"/>
</bean>
Destination Resolver:
<bean id="destinationResolver" class="org.springframework.jms.support.destination.JndiDestinationResolver">
<property name="jndiTemplate" ref="jndiTemplate" />
<property name="cache" value="true"/>
</bean>
Container:
<jms:listener-container concurrency="10-15" container-type="default"
                        connection-factory="authenticationConnectionFactory"
                        destination-type="queue"
                        cache="consumer"
                        prefetch="1"
                        destination-resolver="destinationResolver"
                        acknowledge="transacted">
    ..... listeners.....
</jms:listener-container>
Thank you.
acknowledge="transacted" will not confirm the message if an exception is thrown during processing, which can result in this behaviour as the EMS daemon thinks your application is still busy processing the messages that have been delivered but not confirmed.
However transacted is also the only acknowledge mode that guarantees re-delivery in case an exception is thrown.
What this means is that you must not let exceptions be thrown, unless your application is shutting down, which can be a real pain. I've covered off the various options in an article The Chain of Custody Problem. The short version is:
Discard the message and ignore the error;
Send the message (and/or the error) to an error log or queue, which is by monitored by people;
Send the message (and/or the error) to an error queue, which is processed by the sending application.
All of these options have problems, which is why Fire-and-Forget messaging sucks, but in your case you'll probably find option 2 to be the most pragmatic.
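A minimal sketch of option 2, assuming a plain MessageListener and a Spring JmsTemplate; the error queue name and the process() method are placeholders, not from the original answer:
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.Session;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.jms.core.MessageCreator;

public class ForwardingListener implements MessageListener {

    private final JmsTemplate jmsTemplate;

    public ForwardingListener(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    @Override
    public void onMessage(final Message message) {
        try {
            process(message);
        } catch (Exception e) {
            // Option 2: forward the failed message to a monitored error queue
            // instead of rethrowing, so the transacted session still commits
            // and EMS does not redeliver the same message forever.
            jmsTemplate.send("APP.ERROR.QUEUE", new MessageCreator() {
                @Override
                public Message createMessage(Session session) {
                    return message; // forward the original message as-is
                }
            });
        }
    }

    private void process(Message message) throws Exception {
        // placeholder for the real business logic
    }
}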

How to create multiple listeners in ActiveMQ using Spring JMS

I have a use case where I want to create multiple listeners (6) in an application, subscribing to multiple destinations (6 topics). All the subscriptions are durable. I am using a separate DefaultMessageListenerContainer (DMLC) for each listener, each with a different client ID, but I am confused about how the connection factory should be used.
Should I use a single ActiveMQ pooled connection factory with maxConnections set to 6, or should I use a different pooled connection factory for each listener?
Is there any harm in using a PooledConnectionFactory with maxConnections for durable subscribers?
Source code:
Connection Factory:
<bean id="jmsFactory" class="org.apache.activemq.pool.PooledConnectionFactory"
destroy-method="stop">
<property name="connectionFactory">
<bean class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL">
<value>${jms.broker.url}</value>
</property>
</bean>
</property>
<property name="maxConnections" value="6" />
My listeners use it as follows (I have 6 listeners similar to this one, using different destinations and client IDs):
<bean id="listenerContainer"
class="org.springframework.jms.listener.DefaultMessageListenerContainer"
<property name="connectionFactory" ref="jmsFactory" />
<property name="destination" ref="topic_pnlCompleteTopic" />
<property name="durableSubscriptionName" value="FAGCompletion" />
<property name="pubSubDomain" value="true" />
<property name="subscriptionDurable" value="${jms.fagsListener.durable}" />
<property name="clientId" value="${jms.fagsListener.clientId}" />
<property name="messageListener" ref="pnlMessageListener" />
<property name="messageSelector" value="JMSType = 'FAG Completion'" />
</bean>
That all sounds fine to me, as long as you're not using this connection factory for something else. There is no reason to limit the number of connections to 6; you can set a higher number if you want, and it will only be used when necessary. A ConnectionFactory is typically something you share for your whole app; think of it as a kind of DataSource for JMS access.
Your listeners will typically use only one session each, since you are using topics. There is no reason to specify a limit on your pool, or to use multiple pools. You typically use one pooled connection factory for your whole application, unless you see a real reason to limit it or split it up into different pools.
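As an illustration, here is a minimal sketch of several containers sharing one connection factory; the helper method and its parameter names are hypothetical, and each DMLC still opens its own connection with its own client ID, which durable subscriptions require:
import javax.jms.ConnectionFactory;
import javax.jms.MessageListener;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class SharedFactorySketch {
    // Sketch only: one shared factory for all six durable subscribers; each
    // container gets its own subscription name and a unique client ID.
    static DefaultMessageListenerContainer durableContainer(
            ConnectionFactory sharedFactory, String topic, String subscriptionName,
            String clientId, MessageListener listener) {
        DefaultMessageListenerContainer c = new DefaultMessageListenerContainer();
        c.setConnectionFactory(sharedFactory);   // same factory for every listener
        c.setDestinationName(topic);
        c.setPubSubDomain(true);                 // topics, not queues
        c.setSubscriptionDurable(true);
        c.setDurableSubscriptionName(subscriptionName);
        c.setClientId(clientId);                 // must be unique per container
        c.setMessageListener(listener);
        return c;
    }
}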

WebSphere MQ and Atomikos - Messages Lost on process termination

My app (a Spring message listener) reads from a queue and writes to the database in a single transaction. I use Atomikos to provide the XA transaction behaviour. When the app is abruptly terminated, for example with kill commands, I see that messages are lost. Is there any specific configuration I need to use? Should the queues be persistent? Currently the queues are non-persistent. My MQ version is v7.1.
The Spring config for the listener container looks like this:
<bean id="listenerContainer" class="com.miax.test.TestListenerMDPImpl" autowire="byName">
<property name="connectionFactory" ref="mqConnFactory" />
<property name="destinationName" value="QUEUE" />
<property name="messageListener" ref="listenerAdapter" />
<property name="transactionManager" ref="jtaTransactionManager" />
<property name="sessionTransacted" value="true" />
<property name="concurrentConsumers" value="1" />
<!-- receive time out, should be less than tranaction time out -->
<property name="receiveTimeout" value="3000" />
<!-- retry connection every 1 seconds -->
<property name="recoveryInterval" value="1000" />
<property name="autoStartup" value="true" />
<property name="sessionAcknowledgeMode" value="0" />
</bean>
Any other info will be given as needed.
Thanks.
First, the client you are using must be the Extended Transactional Client if it was downloaded prior to May 2012. Any of the v7.0.1 and higher clients as of May 2012 have the XA capability built in. If in doubt, download a current release of the WMQ client and install it.
Second, the XA transaction manager must have its own connection to the queue manager, independent of the application. This is so that it can connect and reconcile transactions if the application fails to restart. To do this, the transaction manager must be configured with an xa_open string and a switch file, as described in the Infocenter topic "Configuring XA-compliant transaction managers".
For what it's worth, there is no such thing as a persistent queue in WMQ. It is the messages themselves that are persistent (or not). For more on that, please see my blog post on the topic. This is a rather important point, because when people assume that the queue itself is persistent, they tend to devise solutions that produce unexpected results. Please read the blog post!
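For example, a minimal sketch of how persistence is chosen per message by the producer, in plain JMS (names are illustrative, not from the original answer):
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

public class PersistentSendSketch {
    // Sketch only: persistence is a per-message property set by the producer;
    // the WMQ queue itself is neither persistent nor non-persistent.
    static void sendPersistent(ConnectionFactory cf, Queue queue, String body) throws Exception {
        Connection connection = cf.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            producer.setDeliveryMode(DeliveryMode.PERSISTENT); // survives a queue manager restart
            producer.send(session.createTextMessage(body));
        } finally {
            connection.close();
        }
    }
}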
