Spring JMS DMLC: messages not picked up until restart

I am using a DMLC to listen to Tibco EMS queues (running in Tomcat). After some time, messages stop being delivered; after a restart, they are delivered again. I am using SingleConnectionFactory.
Connection Factory:
<bean id="jmsConnectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiTemplate" ref="jndiTemplate" />
<property name="jndiName" value="${connectionQueueFactory}" />
<property name="cache" value="false"/>
<property name="lookupOnStartup" value="false"/>
<property name="proxyInterface" value="javax.jms.ConnectionFactory"/>
</bean>
Authenticated Connection Factory:
<bean id="authenticationConnectionFactory"
class="com.my.service.AuthenticationConnectionFactory"> <-- extends SingleConnectionFactory
<property name="targetConnectionFactory" ref="jmsConnectionFactory" />
<property name="username" value="${userName}" />
<property name="password" value="${password}" />
<property name="sessionCacheSize" value="1"/>
</bean>
Destination Resolver:
<bean id="destinationResolver" class="org.springframework.jms.support.destination.JndiDestinationResolver">
<property name="jndiTemplate" ref="jndiTemplate" />
<property name="cache" value="true"/>
</bean>
Container:
<jms:listener-container concurrency="10-15" container-type="default"
                        connection-factory="simpleAuthenticationConnectionFactory"
                        destination-type="queue"
                        cache="consumer"
                        prefetch="1"
                        destination-resolver="destinationResolver"
                        acknowledge="transacted">
    ..... listeners.....
</jms:listener-container>
Thank you.

acknowledge="transacted" will not confirm the message if an exception is thrown during processing, which can result in this behaviour as the EMS daemon thinks your application is still busy processing the messages that have been delivered but not confirmed.
However transacted is also the only acknowledge mode that guarantees re-delivery in case an exception is thrown.
What this means is that you must not let exceptions be thrown, unless your application is shutting down, which can be a real pain. I've covered off the various options in an article The Chain of Custody Problem. The short version is:
1. Discard the message and ignore the error;
2. Send the message (and/or the error) to an error log or queue, which is monitored by people;
3. Send the message (and/or the error) to an error queue, which is processed by the sending application.
All of these options have problems, which is why fire-and-forget messaging sucks, but in your case you'll probably find option 2 the most pragmatic; a minimal sketch follows.
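A minimal sketch of option 2 in plain JMS, assuming a hypothetical ErrorPublisher helper that writes to a monitored error log or queue (it is not a Spring or EMS class). Swallowing the exception lets the transacted session commit, so EMS neither redelivers the message nor considers it still in flight:

import javax.jms.Message;
import javax.jms.MessageListener;

public class GuardedListener implements MessageListener {
    private final MessageListener delegate;      // your real listener
    private final ErrorPublisher errorPublisher; // hypothetical error channel

    public GuardedListener(MessageListener delegate, ErrorPublisher errorPublisher) {
        this.delegate = delegate;
        this.errorPublisher = errorPublisher;
    }

    @Override
    public void onMessage(Message message) {
        try {
            delegate.onMessage(message);
        } catch (RuntimeException e) {
            // Option 2: hand the message and error to people via a log/queue;
            // returning normally lets the container commit the transaction.
            errorPublisher.publish(message, e);
        }
    }

    // Hypothetical interface for the monitored error channel.
    public interface ErrorPublisher {
        void publish(Message failed, Exception cause);
    }
}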

Related

Solace JMS does not run with parallel threads in Camel

In order to process JMS messages in parallel, I have configured the JmsComponent and connectionFactory as shown below.
After reading some posts and the official tutorial, it seems the configuration below should work for ActiveMQ. However, my testing shows that it does not work on Solace. Can someone give me a hint on this? Thanks.
// Route Definition - Camel Java DSL
from(INBOUND_ENDPOINT)
    .setExchangePattern(ExchangePattern.InOnly)
    .threads(5)
    .bean(ThroughputMeasurer.class);
<!-- JMS Config -->
<bean id="jms" class="org.apache.camel.component.jms.JmsComponent">
<property name="connectionFactory" ref="cachedConnectionFactory" />
<property name="acknowledgementModeName" value="AUTO_ACKNOWLEDGE" />
<property name="deliveryPersistent" value="false" />
<property name="asyncConsumer" value="true" />
<property name="concurrentConsumers" value="5" />
</bean>
<!-- jndiTemplate is omitted here -->
<bean id="connectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiTemplate" ref="jndiTemplate" />
<property name="jndiName" value="ceConnectionFactory" />
</bean>
<bean id="cachedConnectionFactory"
class="org.springframework.jms.connection.CachingConnectionFactory">
<property name="targetConnectionFactory" ref="connectionFactory" />
<property name="sessionCacheSize" value="30" />
</bean>
I believe the problem here is that your consumer is bound to an exclusive queue, where only one consumer is allowed to process messages. Binding to a non-exclusive queue should solve the problem.
Exclusive queues only allow the first consumer to consume from the queue.
All other consumers that are bound to the queue will not receive data. If the first consumer disconnects, the next-oldest consumer will begin receiving data. The other consumers can be thought of as "standby" consumers that take over the moment the first consumer disconnects.
Non-exclusive queues are used for load balancing. Messages that are spooled to the queue will be distributed to all consumers in a round-robin manner.
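A quick way to check which behaviour you are getting is to log the consuming thread per message. This is a diagnostic sketch with an assumed queue name, not your actual route: against a non-exclusive queue with concurrentConsumers=5 you should see messages round-robined across five JmsConsumer threads, while against an exclusive queue one thread does all the work.

import org.apache.camel.ExchangePattern;
import org.apache.camel.builder.RouteBuilder;

public class ThreadCheckRoute extends RouteBuilder {
    @Override
    public void configure() {
        // "demo.queue" is a placeholder; point this at your Solace queue
        from("jms:queue:demo.queue?concurrentConsumers=5")
            .setExchangePattern(ExchangePattern.InOnly)
            .log("received ${body} on ${threadName}");
    }
}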
Check:
Are there any exception messages in the log?
Is the jndiName correct? Perhaps it should be jms/ceConnectionFactory?
Is the URI INBOUND_ENDPOINT correct?
...
Try setting up ActiveMQ first, and then migrate the configuration to Solace.

How to configure a (Spring) JMS connection pool for WMQ

I am trying to configure a JMS connection pool in Spring/Camel for WebSphere MQ. I am seeing a ClassCastException when I try to use Spring's CachingConnectionFactory. I could not find a pool from WMQ; has anybody done connection pooling with WMQ? I didn't find any examples, though there are lots of examples for ActiveMQ.
Here is what I have so far, which is producing the ClassCastException:
<bean id="inCachingConnectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">
<property name="targetConnectionFactory" ref="inboundMqConnectionFactory1" />
<property name="sessionCacheSize" value="5" />
</bean>
<bean id="inboundWebsphereMq1" class="org.apache.camel.component.jms.JmsComponent">
<property name="connectionFactory" ref="inCachingConnectionFactory" />
<property name="destinationResolver" ref="jmsDestinationResolver" />
<property name="transacted" value="true" />
<property name="transactionManager" ref="txManager1" />
</bean>
<bean id="inboundMqConnectionFactory1" class="com.ibm.mq.jms.MQQueueConnectionFactory">
<property name="hostName" value="${isi.inbound.queue.host2}" />
<property name="port" value="${isi.inbound.queue.port}" />
<property name="queueManager" value="${isi.inbound.queue.queuemanager2}" />
<property name="channel" value="${isi.inbound.queue.channel2}" />
<property name="transportType" value="${isi.queue.transportType}" />
</bean>
The exception I see is:
trying to recover. Cause: com.sun.proxy.$Proxy37 cannot be cast to com.ibm.mq.jms.MQQueueSession
In general:
- Do not use QueueConnectionFactory or TopicConnectionFactory; ConnectionFactory (JMS 1.1) is the replacement for both.
- Each ConnectionFactory from the v7 WMQ JMS client jars provides its own caching logic, so in general you don't need CachingConnectionFactory.
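To illustrate the unified-domain point, here is a minimal sketch that codes against the generic JMS 1.1 interfaces only. Spring's CachingConnectionFactory hands out Session proxies that implement javax.jms.Session, not MQQueueSession, which is exactly why the cast in the stack trace above fails:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Session;

public class UnifiedDomainExample {
    public static Session openTransactedSession(ConnectionFactory cf) throws JMSException {
        Connection connection = cf.createConnection(); // generic: works for queues and topics
        connection.start();
        // never cast the result to MQQueueSession; use the interface
        return connection.createSession(true, Session.SESSION_TRANSACTED);
    }
}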
Now try it this way:
<bean id="mqConnectionFactory" class="com.ibm.mq.jms.MQConnectionFactory"
p:queueManager="${QM_NAME}"
p:hostName="${QM_HOST_NAME}"
p:port="${QM_HOST_PORT}"
p:channel="${QM_CHANNEL}"
p:clientID="${QM_CLIENT_ID}">
<property name="transportType">
<util:constant static-field="com.ibm.msg.client.wmq.WMQConstants.WMQ_CM_CLIENT" />
</property>
</bean>
<bean id="userConnectionFactory" class="org.springframework.jms.connection.UserCredentialsConnectionFactoryAdapter"
p:targetConnectionFactory-ref="mqConnectionFactory"
p:username="${QM_USERNAME}"
p:password="${QM_PASSWORD}" />
<!-- this will work -->
<bean id="connectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory"
p:targetConnectionFactory-ref="userConnectionFactory"
p:cacheConsumers="true"
p:reconnectOnException="true" />
Of course you can cache sessions instead of consumers if you want it that way (see the sketch below). In my experience WMQ session caching is a measurable performance improvement, but only if you are limited by CPU power on the WMQ machine or by actual message throughput; both situations are rare in the majority of real-world applications. Caching consumers avoids excessive MQOPEN calls, which are expensive operations on WMQ, so it helps too.
My rule of thumb is that the performance benefit of consumer-plus-session caching is about half the benefit of connection caching, and usually not worth pursuing in your everyday JEE stack unless you are hardware-limited.
Since WMQ v7, asynchronous consumers are really, really fast, with literally no CPU overhead compared to Spring's message listener container, and they are the preferred way of consuming messages if you are hardware-limited. Most days I still use Spring, as I prefer its easy-going nature.
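For completeness, a sketch of the session-caching variant, wrapping the userConnectionFactory bean from the config above; the cache size here is an assumed starting point, not a recommendation:

import javax.jms.ConnectionFactory;
import org.springframework.jms.connection.CachingConnectionFactory;

public class SessionCachingConfig {
    public static CachingConnectionFactory sessionCaching(ConnectionFactory userConnectionFactory) {
        CachingConnectionFactory ccf = new CachingConnectionFactory(userConnectionFactory);
        ccf.setSessionCacheSize(10);   // assumed; roughly one session per concurrent worker
        ccf.setCacheConsumers(false);  // cache sessions, not consumers
        ccf.setReconnectOnException(true);
        return ccf;
    }
}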
Hope it helps.

Huge latency observed while committing transacted persistent AMQ messages

I have the following AMQ consumer configuration, which consumes 'persistent' messages from the queue. The messages are also 'transacted' (as I need to roll back if a message can't be processed in the expected way).
I see a problem with this configuration: whenever the consumer calls session.commit() after consuming a message, the commit call takes ~8 seconds to return. I believe this is not expected.
Can someone point out whether I have any issues with the config below?
<bean id="amqConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL" value="tcp://localhost:61616"/>
</bean>
<bean id="simpleMessageListener" class="listener.StompSpringListener" scope="prototype" />
<bean id="destination" class="org.apache.activemq.command.ActiveMQQueue">
<property name="physicalName" value="JOBS.notifications"/>
</bean>
<bean id="poolMessageListener" class="org.springframework.aop.target.CommonsPoolTargetSource">
<property name="targetBeanName" value="simpleMessageListener"/>
<property name="maxSize" value="200"/>
</bean>
<bean id="messageListenerBean" class="org.springframework.aop.framework.ProxyFactoryBean">
<property name="targetSource" ref="poolMessageListener"/>
</bean>
<jms:listener-container
container-type="default"
connection-factory="amqConnectionFactory"
acknowledge="transacted"
concurrency="200-200"
cache="consumer"
prefetch="10">
<jms:listener destination="JOBS.notifications" ref="messageListenerBean" method="onMessage"/>
</jms:listener-container>
Take a look at the ActiveMQ Spring support page. What I can see immediately is that you should be using a PooledConnectionFactory instead of a plain ActiveMQConnectionFactory. This ensures that a single TCP connection is set up to the broker from your code and shared between polling threads; a sketch follows.
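A minimal sketch of that change, assuming the same broker URL as in the question; maxConnections=1 keeps a single TCP connection that the 200 consumer threads share:

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.pool.PooledConnectionFactory;

public class PooledFactoryConfig {
    public static PooledConnectionFactory pooledFactory() {
        ActiveMQConnectionFactory amq =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        // the pool wraps the real factory; connections and sessions are reused
        PooledConnectionFactory pooled = new PooledConnectionFactory(amq);
        pooled.setMaxConnections(1);
        return pooled;
    }
}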

Failover queue should not deliver messages until the actual queue starts delivering messages

I am using the ActiveMQ implementation for sending messages to a queue. When there is a problem with the queue, I redirect all my messages to another queue using the failover mechanism.
But my requirement is that the failover queue's messages should not be consumed until the messages in the first queue have been consumed.
Can anyone suggest how to implement this scenario? Thanks in advance.
Here is my XML configuration:
<bean id="jmsConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL" value="tcp://172.16.121.146:61617" />
</bean>
<bean id="cscoDest" class="org.apache.activemq.command.ActiveMQQueue">
<constructor-arg value="STOCKS.CSCO" />
</bean>
<!--The message listener-->
<bean id="portfolioListener" class="my.test.jms.Listener"></bean>
<!--Spring DMLC-->
<bean id="cscoConsumer" class="org.springframework.jms.listener.DefaultMessageListenerContainer102">
<property name="connectionFactory" ref="jmsConnectionFactory" />
<property name="destination" ref="cscoDest" />
<property name="messageListener" ref="portfolioListener" />
<property name="sessionAcknowledgeModeName" value="CLIENT_ACKNOWLEDGE"/>
</bean>
<!--Spring JMS Template-->
<bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate">
<property name="connectionFactory" ref="jmsConnectionFactory" />
</bean>
<bean id="stockPublisher" class="my.test.jms.SpringPublisher">
<property name="template" ref="jmsTemplate" />
<property name="destinations">
<list>
<ref local="cscoDest" />
</list>
</property>
</bean>
The ActiveMQ failover mechanism is there to let the client fail over to another ActiveMQ broker if the connection with the current broker goes down.
Your requirement that the second queue should not deliver messages until the first queue is empty is very odd. What happens if someone terminates the ActiveMQ server by pulling the plug? There might still be unprocessed messages on that server, but you cannot process them.
What you want is a master/slave setup that shares a disk area (a network share somewhere). Then a second broker can pick up where the master broker stopped working; a client-side sketch follows.
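On the client side, a shared-storage master/slave pair is addressed through a failover: URI; the hostnames here are placeholders. The slave cannot acquire the lock on the shared store until the master dies, so it only starts serving messages once the master is really gone:

import org.apache.activemq.ActiveMQConnectionFactory;

public class FailoverClientConfig {
    public static ActiveMQConnectionFactory failoverFactory() {
        // placeholder hostnames; list the master and slave brokers
        return new ActiveMQConnectionFactory(
                "failover:(tcp://master-host:61616,tcp://slave-host:61616)");
    }
}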

Spring DefaultMessageListenerContainer - listener not reading messages on Websphere MQ

I am using Spring 3.0's DefaultMessageListenerContainer to connect to a WebSphere 6 MQ. There are some messages already present on the queue. When I run my test, the listener implementing SessionAwareMessageListener is started, but onMessage() does not get invoked. So the problem is that the messages already in the queue are not read.
As per the docs, autoStartup is true by default (and I have not changed this). As per my understanding, on startup the listener should read the queue for any existing messages and onMessage() should get invoked. Please let me know if this understanding is wrong.
Here is the snippet from the config file:
<bean id="jmsContainer"
class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="connectionFactory" ref="jmsQueueConnectionFactory" />
<property name="destinationName">
<value>${queue}</value>
</property>
<property name="messageListener" ref="exampleMessageListener" />
<property name="concurrentConsumers" value="1" />
<property name="maxConcurrentConsumers" value="1" />
<property name="idleTaskExecutionLimit" value="4" />
<property name="maxMessagesPerTask" value="4" />
<property name="receiveTimeout" value="5000" />
<property name="recoveryInterval" value="5000" />
<property name="sessionTransacted" value="true" />
<property name="transactionManager" ref="jmsTransActionManager" />
</bean>
Note: There is no error/exception, the test app starts up just fine.
Any pointers to resolve this will be of great help.
Thanks,
RJ
The issue is resolved. The test class was terminating after the listener got hold of the message but before it could show the message as output, so the first message (the highest-priority one) was getting lost from the queue.
Later, once I had included a transaction manager, the listener was putting the message back on the queue (with the warning "Rejecting received message because of the listener container having been stopped in the meantime"). As this was only a warning and my logger was at DEBUG level, I missed it earlier.
Putting a Thread.sleep in the test class made sure it runs for longer, and the listener could then read all the messages in the queue in order of priority :)
cheers,
RJ
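A sketch of the fix described above, with an assumed config file name: keep the test JVM alive long enough for the container's consumer threads to receive and commit every message, then shut down cleanly so nothing is rejected and rolled back onto the queue.

import org.springframework.context.support.ClassPathXmlApplicationContext;

public class ListenerSmokeTest {
    public static void main(String[] args) throws InterruptedException {
        ClassPathXmlApplicationContext ctx =
                new ClassPathXmlApplicationContext("jms-config.xml"); // assumed file name
        try {
            Thread.sleep(30000); // give the listener time to drain the queue
        } finally {
            ctx.close(); // stops the DMLC cleanly before the JVM exits
        }
    }
}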
