No new consumers on ActiveMQ queue after a while - Spring

For about a month we have had a recurring issue with ActiveMQ and Spring. After some time (between a day and a week) we have no more consumers, no new ones get started, and the queue starts to fill up.
This setup ran for over a year without any issues, and as far as we can see nothing relevant has been changed.
Another queue we use has also started to show the same behavior, but less frequently.
From the ActiveMQ web console (as you can see, lots of pending messages and no consumers):
Name: queue.readresult
Number Of Pending Messages: 19595
Number Of Consumers: 0
Messages Enqueued: 40747
Messages Dequeued: 76651
Contents of our bundle-context.xml:
<!-- JMS -->
<bean id="jmsConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory" destroy-method="stop">
    <property name="maxConnections" value="5" />
    <property name="maximumActive" value="5" />
    <property name="connectionFactory">
        <bean class="org.apache.activemq.ActiveMQConnectionFactory">
            <property name="brokerURL">
                <value>tcp://localhost:61616</value>
            </property>
        </bean>
    </property>
</bean>
<bean id="ResultMessageConverter" class="com.bla.ResultMessageConverter" />
<jms:listener-container connection-factory="jmsConnectionFactory" destination-resolver="jmsDestinationResolver"
        concurrency="2" prefetch="10" acknowledge="auto" cache="none" message-converter="ResultMessageConverter">
    <jms:listener destination="queue.readresult" ref="ReaderRequestManager" method="handleReaderResult" />
</jms:listener-container>
There are no exceptions in any of the logs. Does anyone know of a reason why, after a while, no new consumers get started?
Thanks

I've run into issues before where "consumers stop consuming," but haven't seen consumers stop existing. You may be running out of memory and/or connections in the pool. Do you have to restart ActiveMQ to fix the problem or just your application?
Here are a couple of suggestions:
Set the queue prefetch to 0
Add "useKeepAlive=false" to the connection string (see the URL sketch after this list)
Increase the memory limits for the queues
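As a sketch only (option names as documented for the ActiveMQ client; adjust to your version), the first two suggestions could be applied on the broker URL of the nested ActiveMQConnectionFactory in your config:
<property name="brokerURL">
    <!-- jms.prefetchPolicy.queuePrefetch=0 disables prefetching for queue consumers,
         useKeepAlive=false turns off the inactivity-monitor keep-alive messages -->
    <value>tcp://localhost:61616?useKeepAlive=false&amp;jms.prefetchPolicy.queuePrefetch=0</value>
</property>
The memory limits in the third suggestion are set broker-side, for example via a per-queue policyEntry memoryLimit in activemq.xml.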

I can see no obvious reason in the config provided why it should fail, so you need to resort to classic troubleshooting.
Try setting logging to debug and recreating the issue. Then you should be able to see more, and you might be able to detect the root cause.
Also, check out the JMS ExceptionListener and try to attach your implementation of it to get a grasp of the real problem.
http://docs.oracle.com/javaee/6/api/javax/jms/ExceptionListener.html
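A minimal sketch of such a listener (the class name is made up). You could register it on the listener container, or, if your client version supports it, on the ActiveMQConnectionFactory bean via its exceptionListener property, so connection-level failures no longer pass unnoticed:
import javax.jms.ExceptionListener;
import javax.jms.JMSException;

// Hypothetical listener: logs asynchronous, connection-level JMS failures
// that would otherwise never show up in the application logs.
public class LoggingJmsExceptionListener implements ExceptionListener {
    @Override
    public void onException(JMSException e) {
        // Swap in your logging framework of choice.
        System.err.println("Asynchronous JMS problem: " + e);
        e.printStackTrace();
    }
}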

Related

How do I configure a Spring message listener (MDP) to have one instance across a cluster

I have a Spring message listener configured with:
<bean id="processListenerContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="concurrentConsumers" value="1"/>
<property name="clientId" value="process-execute"/>
<property name="connectionFactory" ref="topicConnectionFactory"/>
<property name="destination" ref="processExecuteQueue"/>
<property name="messageListener" ref="processExecuteListener"/>
</bean>
This is running on a cluster with 2 nodes. I see that it's creating 1 consumer per node rather than 1 per cluster. They're both configured with the above XML, so they have the same clientId. Yet, when 2 notifications are posted to the queue, both of the listeners are running, each gets a notification, and both execute in parallel. This is a problem because the notifications need to be handled sequentially.
I can't seem to find out how to make it have only one message listener per cluster rather than per node.
I solved the problem by having the JMS queue block the next consumer until the previous one returned. This is a feature of the WebLogic server I'm using called Unit of Order. The documentation says you just need to enable it on the queue (I used hash). However, I found that I needed to enable it on the connection factory as well and set a default name. Now I see an MDP per node, but 1 waits for 2 to complete before processing and vice versa. Not the solution I intended, but it's working nonetheless. While Oracle-specific, it's actually slightly better than a single-MDP solution.
Note: I did not set the unit of order name in the Spring JmsTemplate producer, as I do not know if that's possible. I have WebLogic set a default name when none is provided by the producer.

WebSphere MQ and Atomikos - Messages Lost on process termination

My app (a Spring message listener) reads from a queue and writes to the database in a single transaction. I use Atomikos to provide the XA transaction behaviour. When the app is abruptly terminated, with kill commands for example, I see that messages are lost. Is there any specific configuration I need to use? Should the queues be persistent? Currently the queues are non-persistent. My MQ version is v7.1.
Spring config for listener container looks like:
<bean id="listenerContainer" class="com.miax.test.TestListenerMDPImpl" autowire="byName">
<property name="connectionFactory" ref="mqConnFactory" />
<property name="destinationName" value="QUEUE" />
<property name="messageListener" ref="listenerAdapter" />
<property name="transactionManager" ref="jtaTransactionManager" />
<property name="sessionTransacted" value="true" />
<property name="concurrentConsumers" value="1" />
<!-- receive time out, should be less than tranaction time out -->
<property name="receiveTimeout" value="3000" />
<!-- retry connection every 1 seconds -->
<property name="recoveryInterval" value="1000" />
<property name="autoStartup" value="true" />
<property name="sessionAcknowledgeMode" value="0" />
</bean>
Any other info will be given as needed.
Thanks.
The client you are using must be the Extended Transactional Client if it was downloaded prior to May of this year. Any of the V7.0.1 and higher clients as of May 2012 have the XA capability built in. If in doubt, download a current release of the WMQ client and install it.
Second, the XA transaction manager must have its own connection to the queue manager, independent of the application. This is so that it can connect and reconcile the transactions if the application fails to restart. To do this, the transaction manager must be configured with an XA_OPEN string and a switch file, as described in the Infocenter topic Configuring XA-compliant transaction managers.
For what it's worth, there is no such thing as a persistent queue in WMQ. It is the messages themselves that are persistent (or not). For more on that, please see my blog post on the topic. This is a rather important point, because when people assume that the queue itself is persistent, they tend to devise solutions that produce unexpected results. Please read the blog post!
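To illustrate that last point, persistence is a per-message quality of service chosen by the sender. A hedged sketch of the producer-side settings with Spring's JmsTemplate (connection factory, destination name and payload are placeholders):
import javax.jms.DeliveryMode;
import org.springframework.jms.core.JmsTemplate;

// Persistence is requested per message by the producer, not configured on the queue.
JmsTemplate template = new JmsTemplate(connectionFactory);
template.setExplicitQosEnabled(true);               // needed for the QoS settings below to be applied
template.setDeliveryMode(DeliveryMode.PERSISTENT);  // equivalent to setDeliveryPersistent(true)
template.convertAndSend("QUEUE", payload);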

ServiceMix, Apache ActiveMQ, Camel sending "done" for consuming messages

The issue I have is this:
Using ServiceMix and Camel routing, I am sending a JSON message via ActiveMQ to a consumer.
The issue is that the time this consumer takes to process the message is X, so it is possible that the consumer gets stopped or crashes while consuming the message. In that case the message will be half consumed but already deleted from the queue, because, well, it was delivered.
Is it possible to make the queue not remove messages when they are consumed, but instead wait for confirmation from the consumer that processing of the message is done before deleting it?
In a typical "import files from the filesystem" scenario you remove the file, or rename it to done, only once the file is fully processed and the transaction is fully committed. So how, in the ESB world, can we say "keep the message until I finish and tell you to remove it"?
I am currently using Spring jms:listener-container and jms:listener for consuming these messages.
Your problem is what JMS transactions solve every day.
There are some notes from ActiveMQ about it here.
You could easily use local transactions in JMS and set up a listener container like this (note the true on sessionTransacted):
<bean id="myListenerContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="concurrentConsumers" value="1" />
<property name="connectionFactory" ref="jmsConnectionFactory" />
<property name="destination" ref="myQueue" />
<property name="messageListener" ref="myConsumerBean" />
<property name="sessionTransacted" value="true" />
</bean>
Then you have a transacted session. The message will be rolled back to the queue if the message listener fails to consume it. The transaction will not commit (= message removed from the queue) unless the "onMessage" method in the message listener bean returns successfully (no exceptions thrown). I guess this is what you want.
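Since the question uses the jms namespace rather than a raw bean definition, the equivalent there should be acknowledge="transacted"; a sketch with placeholder destination and bean names:
<jms:listener-container connection-factory="jmsConnectionFactory" acknowledge="transacted" concurrency="1">
    <!-- the message stays on the queue until the listener method returns without throwing -->
    <jms:listener destination="queue.myqueue" ref="myConsumerBean" method="onMessage" />
</jms:listener-container>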

Losing messages from a topic with DefaultMessageListenerContainer

I'm using a DefaultMessageListenerContainer to consume messages from a topic (the broker is ActiveMQ). Because the consumers are created at runtime, I'm doing the following:
1) I have a container template configured in Spring:
<bean id="topiccontainertemplate" class="org.springframework.jms.listener.DefaultMessageListenerContainer" scope="prototype" destroy-method="stop">
<property name="connectionFactory" ref="connectionfactory" />
<property name="pubSubDomain" value="true" />
<property name="cacheLevelName" value="CACHE_CONSUMER" />
<property name="destinationName" value="default" />
</bean>
2) When a new consumer is required, I get a new one from the application context and reconfigure the destinationName:
DefaultMessageListenerContainer container = context.getBean("topiccontainertemplate", DefaultMessageListenerContainer.class);
container.setDestinationName(localEntity.getId().getDestination());
container.setMessageListener(getListener());
container.start();
Unfortunately the container misses some messages on the topic.
Does anybody know what I'm doing wrong?
Your subscription does not look durable. If it were, messages would have been saved while your subscription is offline/starting up. Your subscriber will get messages from the point it has started completely; messages sent prior to that will be lost.
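If durability is what you need, a hedged variant of the prototype container from the question might look like this (the clientId and subscription name are made-up placeholders; each durable subscription needs its own unique clientId):
<bean id="topiccontainertemplate" class="org.springframework.jms.listener.DefaultMessageListenerContainer" scope="prototype" destroy-method="stop">
    <property name="connectionFactory" ref="connectionfactory" />
    <property name="pubSubDomain" value="true" />
    <property name="cacheLevelName" value="CACHE_CONSUMER" />
    <!-- durable subscription: the broker stores messages while this subscriber is offline -->
    <property name="subscriptionDurable" value="true" />
    <property name="clientId" value="consumer-1" />
    <property name="durableSubscriptionName" value="consumer-1-subscription" />
    <property name="destinationName" value="default" />
</bean>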
After some more tests we found a race condition during consumer creation. The messages weren't lost; they just weren't distributed properly in our code.

Spring JmsTemplate and Apache ActiveMQ, why so many connections?

I have a web application that runs text processing jobs in the background once a message is received on an ActiveMQ queue, which is listened to by a Spring MessageListener. The problem I'm encountering is that once I process around 30 background jobs, ActiveMQ stops processing any messages, the Spring message listener loses its JMS connection, and sometimes I get an error in the ActiveMQ log saying that there are too many open files.
I ran the 'lsof' (list open files) command on Linux against the ActiveMQ process and noticed that for almost every message queued/published/received by JmsTemplate, a new connection is opened. Is this normal?
Here's my configuration:
<bean id="connectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">
<constructor-arg ref="amqConnectionFactory" />
<property name="exceptionListener" ref="jmsExceptionListener" />
<property name="sessionCacheSize" value="100" />
</bean>
You need to use the PooledConnectionFactory provided by ActiveMQ; you can see the full configuration here. Make sure you read the JmsTemplate Gotchas as well.
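A minimal sketch of what that might look like here (bean names, pool size and broker URL are placeholders); the JmsTemplate then reuses pooled connections instead of opening a new one per send:
<bean id="amqConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
    <property name="brokerURL" value="tcp://localhost:61616" />
</bean>
<!-- pools connections, sessions and producers for the JmsTemplate -->
<bean id="pooledConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory" destroy-method="stop">
    <property name="connectionFactory" ref="amqConnectionFactory" />
    <property name="maxConnections" value="8" />
</bean>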
