I have a web application that runs text processing jobs in the background once a message is received on an ActiveMQ queue that a Spring MessageListener listens to. The problem I'm encountering is that once I process around 30 background jobs, ActiveMQ stops processing any messages, the Spring message listener loses its JMS connection, and sometimes I get an error in the ActiveMQ log saying that there are too many open files.
I ran the lsof (list open files) command on Linux against the ActiveMQ process and noticed that for almost every message queued/published/received by JmsTemplate, a new connection is opened. Is this normal?
Here's my configuration:
<bean id="connectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">
<constructor-arg ref="amqConnectionFactory" />
<property name="exceptionListener" ref="jmsExceptionListener" />
<property name="sessionCacheSize" value="100" />
</bean>
You need to use the PooledConnectionFactory provided by ActiveMQ; you can see a full configuration here. Make sure you read the JmsTemplate Gotchas as well.
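For reference, a minimal sketch of that swap, keeping the amqConnectionFactory target from the configuration above (the maxConnections value is just an example, and PooledConnectionFactory has no exceptionListener property, so that setting would need to move elsewhere):
<bean id="connectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory" destroy-method="stop">
<property name="connectionFactory" ref="amqConnectionFactory" />
<property name="maxConnections" value="8" />
</bean>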
I need to connect from a web application running on a WebSphere AS 6.1 application server to a remote WebSphere MQ on z/OS queue. On WebSphere AS, I configured both a QueueConnectionFactory and a Queue (an object containing part of the remote queue data), with most of the settings left at their default values - I just needed to set the queue name, channel, host, port, and transport type, which is CLIENT. I inject them into the following Spring 3.2 configuration using JNDI lookups:
<jee:jndi-lookup id="destination" jndi-name="MyMQQueue" expected-type="javax.jms.Queue" />
<jee:jndi-lookup id="targetConnectionFactory" jndi-name="MyMQQCF" expected-type="javax.jms.QueueConnectionFactory" />
<bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate"
p:connectionFactory-ref="targetConnectionFactory"
p:defaultDestination-ref="destination" />
<bean id="simpleMessageListener" class="my.own.SimpleMessageListener"/>
<bean id="msgListenerContainer"
class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="connectionFactory" ref="targetConnectionFactory" />
<property name="destination" ref="destination" />
<property name="messageListener" ref="simpleMessageListener" />
<property name="taskExecutor" ref="managedThreadsTaskExecutor" />
<property name="receiveTimeout" value="5000" />
<property name="recoveryInterval" value="5000" />
</bean>
<bean id="managedThreadsTaskExecutor" class="org.springframework.scheduling.commonj.WorkManagerTaskExecutor">
<property name="workManagerName" value="wm/default" />
</bean>
JmsTemplate sends and receives (synchronously) messages correctly. DefaultMessageListenerContainer, an asynchronous message receiver, reads some (previously sent) messages off the MQ queue during WebSphere AS startup, but chokes soon afterwards and begins to repeatedly throw a "connection closed" exception. On each such occasion it notifies me that
DefaultMessag W org.springframework.jms.listener.DefaultMessageListenerContainer handleListenerSetupFailure Setup of JMS message listener invoker failed for destination 'queue://myqueue' - trying to recover. Cause: Connection closed
DefaultMessag I org.springframework.jms.listener.DefaultMessageListenerContainer refreshConnectionUntilSuccessful Successfully refreshed JMS Connection
but stops taking messages off the queue.
Digging a bit into Spring code, I found that setting on DefaultMessageListenerContainer
<property name="cacheLevel" value="0"/>
solves the problem, in the sense that the messages are now read off the queue every time I send them. However, looking at the TCP traffic to WebSphere MQ, I find that MQCLOSE/MQOPEN commands are sent to it repeatedly, as in:
[Wireshark capture: repeated MQCLOSE/MQOPEN commands in the TCP traffic]
which probably means that the connection gets continuously closed and reopened.
Can anybody suggest what might be the cause of caching not working properly, and whether there is a relatively simple way to get it working - for example by modifying Spring code (extending DefaultMessageListenerContainer, say), or by setting some property on the MQ queue connection factory or queue?
EDIT:
Searching the internet further, I found the following link:
http://forum.spring.io/forum/spring-projects/integration/jms/89532-defaultmessagelistenercontainer-cachingconnectionfactory-tomcat-and-websphere-mq
which seems to describe a similar problem occurring on Tomcat. The solution there is to set a certain exceptionListener on the DefaultMessageListenerContainer. However, trying to do this on WebSphere throws the exception "javax.jms.IllegalStateException: Method setExceptionListener not permitted". The underlying cause seems to be that the J2EE 1.4 spec forbids calling setExceptionListener on JMS connections:
https://www.ibm.com/developerworks/library/j-getmess/j-getmess-pdf.pdf
It seems that setting
<property name="cacheLevel" value="0"/>
on DefaultMessageListenerContainer is actually the correct solution.
I misled myself by interpreting the MQCLOSE/MQOPEN I saw in the Wireshark-captured TCP traffic as heavyweight connection opening.
First, a newly created connection factory in the WebSphere AS 6.1 administrative console has, by default, a JMS connection pool (max size 10). By debugging the base class of DefaultMessageListenerContainer, AbstractPollingMessageListenerContainer (specifically the method
protected boolean doReceiveAndExecute(
Object invoker, Session session, MessageConsumer consumer, TransactionStatus status)
), one sees that neither the call to create a connection nor the call to create a session from the connection generates any TCP traffic; traffic is generated only by creating a consumer (considered a "lightweight operation", if I understand correctly), trying to receive a message from the queue, and closing the consumer.
So it seems that the connection is taken from the respective pool, and the session is somehow "cached" as well. In other words, the caching here is done not by Spring but by the application server.
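For completeness, here is the listener container from the question with the fix applied; cacheLevel 0 is CACHE_NONE, which leaves connection and session reuse to the application server's pool described above:
<bean id="msgListenerContainer"
class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="connectionFactory" ref="targetConnectionFactory" />
<property name="destination" ref="destination" />
<property name="messageListener" ref="simpleMessageListener" />
<property name="taskExecutor" ref="managedThreadsTaskExecutor" />
<property name="receiveTimeout" value="5000" />
<property name="recoveryInterval" value="5000" />
<!-- CACHE_NONE: let the server's connection pool do the caching -->
<property name="cacheLevel" value="0" />
</bean>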
A Spring Quartz job runs every 15 minutes in my project, i.e. 96 times a day. It fetches certain records from the database and POSTs them to a REST service (running on JBoss 7). There are generally 50 to 100 records per run.
On the REST service there is a JMS event publisher that publishes each message on a topic. There are two consumers on this topic:
1. One processes the message and sends push notification messages to mobile devices.
2. One talks to a third party (a call that generally takes 4 to 5 seconds to complete).
Since it is a topic, both consumers receive all messages, but they filter them based on some property, so some messages are processed by one consumer and the rest by the other.
My problem, observed for about a week now, is that consumer #1 repeatedly receives an "invalid token" response from APNS (the token is used to send the push notification to the mobile device); after some time this consumer stops and does not respond at all, while the second one keeps running.
Below are the configurations:
<amq:broker id="broker" useJmx="false" persistent="false">
<amq:transportConnectors>
<amq:transportConnector uri="tcp://localhost:0"/>
</amq:transportConnectors>
</amq:broker>
<!-- ActiveMQ Destination -->
<amq:topic id="topicName" physicalName="topicPhysicalName"/>
<!-- JMS ConnectionFactory to use, configuring the embedded broker using XML -->
<amq:connectionFactory id="jmsFactory" brokerURL="vm://localhost"/>
<!-- JMS Producer Configuration -->
<bean id="jmsProducerConnectionFactory"
class="org.springframework.jms.connection.SingleConnectionFactory"
depends-on="broker"
p:targetConnectionFactory-ref="jmsFactory"/>
<!-- JMS Templates-->
<bean id="jmsTemplate"
class="org.springframework.jms.core.JmsTemplate"
p:connectionFactory-ref="jmsProducerConnectionFactory"/>
<!-- Publisher-->
<bean name="jmsEventPublisher"
class="com.jhi.mhm.services.event.jms.publisher.JMSEventPublisher">
<property name="jmsTemplate" ref="jmsTemplate"/>
<property name="topic">
<map>
<entry key="keyname" value-ref="topicName"/>
</map>
</property>
</bean>
<!-- JMS Consumer Configuration -->
<bean name="consumer2" class="Consumer2"/>
<bean name="consumer1" class="Consumer1"/>
<bean id="jmsConsumerConnectionFactory"
class="org.springframework.jms.connection.SingleConnectionFactory"
depends-on="broker"
p:targetConnectionFactory-ref="jmsFactory"/>
<jms:listener-container container-type="default"
connection-factory="jmsConsumerConnectionFactory"
acknowledge="auto"
destination-type="topic">
<jms:listener destination="topicPhysicalName" ref="consumer1"/>
<jms:listener destination="topicPhysicalName" ref="consumer2"/>
</jms:listener-container>
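Incidentally, the property-based filtering could also be expressed as a JMS message selector on each listener, so that the broker delivers each consumer only its own messages. A sketch, assuming a hypothetical string property named consumerType set by the publisher:
<jms:listener-container container-type="default"
connection-factory="jmsConsumerConnectionFactory"
acknowledge="auto"
destination-type="topic">
<jms:listener destination="topicPhysicalName" ref="consumer1" selector="consumerType = 'push'"/>
<jms:listener destination="topicPhysicalName" ref="consumer2" selector="consumerType = 'thirdParty'"/>
</jms:listener-container>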
I searched other posted questions but could not find anything related.
Your thoughts would be really helpful.
shailu - I went through a similar problem. What we did was upgrade the version of MQ. This did not solve the problem completely, as MQ showed random behavior; in the end we simply merged our endpoints and call the destination as per business logic.
I have a production application that seems to be losing messages intermittently. In one example that I have identified, the sending system logs 3 messages that are sent milliseconds apart. The receiving system logs that it got message 1 and message 3... but not message 2. The queue's depth is 0. I see no errors in the logs of the application that reads the messages to indicate something bad happened.
I have verified that there are no other "rogue" clients creating a race condition for reading messages.
Here is my configuration; we have a primary and secondary/failover queue...
<bean id="primaryProficiencyInboundQueue" class="com.ibm.mq.jms.MQQueue">
<property name="baseQueueManagerName" value="${lsm.primary.outbound.manager}"/>
<property name="baseQueueName" value="${lsm.proficiency.inbound.queue}"/>
</bean>
<bean id="secondaryProficiencyInboundQueue" class="com.ibm.mq.jms.MQQueue">
<property name="baseQueueManagerName" value="${lsm.secondary.outbound.manager}"/>
<property name="baseQueueName" value="${lsm.proficiency.inbound.queue}"/>
</bean>
<int:channel id="inboundProficiencyChannel">
<int:interceptors>
<int:wire-tap channel="logger" />
</int:interceptors>
</int:channel>
<!-- adapter that connects the inbound proficiency queues to the inboundProficiencyChannel -->
<jms:message-driven-channel-adapter id="primaryProficiencyInboundAdapter"
connection-factory="primaryConnectionFactory"
destination="primaryProficiencyInboundQueue"
channel="inboundProficiencyChannel" />
<jms:message-driven-channel-adapter id="secondaryProficiencyInboundAdapter"
connection-factory="secondaryConnectionFactory"
destination="secondaryProficiencyInboundQueue"
channel="inboundProficiencyChannel" />
<int:service-activator id="proficiencyInboundActivator"
input-channel="inboundProficiencyChannel">
<bean class="com.myactivator.LSMInboundProficiencyActivator" />
</int:service-activator>
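The "logger" channel that the wire-tap points at is not shown above; it would typically be a Spring Integration logging channel adapter along these lines (a sketch, not the actual production definition):
<int:logging-channel-adapter id="logger" level="DEBUG" log-full-message="true"/>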
Is it possible that something is failing silently?
One other noteworthy thing is that there are two production servers that are both configured as above, meaning that two servers have JMS adapters configured to read the messages. I mentioned above that I see message 1 and message 3 being read. In that case, I see server one process message one and server two process message three, so it seems that having two servers configured like this has no negative impact. Any other thoughts on what could be happening, or on how I can debug the case of the missing/dropped messages?
Version information:
Spring - 3.0.5.RELEASE
Spring Integration - 2.0.3.RELEASE
Nothing springs to mind that would cause lost messages; I don't recall that ever being reported. But 2.0.3 is quite ancient (its 3rd birthday is coming up soon). We don't support 2.0.x any more, so you should at least update to 2.0.6, which is the last release of the 2.0.x line - but preferably move up to the latest 3.0.0 release (or at least 2.2.6).
For about a month we have had a recurring issue with ActiveMQ and Spring: after some time (between a day and a week) we have no more consumers, no new ones get started, and the queue starts to fill up.
This setup ran for over a year without any issues, and as far as we can see nothing relevant has been changed.
Another queue we use has also started to show the same behavior, but less frequently.
From the ActiveMQ web console (as you can see, lots of pending messages and no consumers):
Name              Pending Messages  Consumers  Messages Enqueued  Messages Dequeued
queue.readresult  19595             0          40747              76651
Contents of our bundle-context.xml:
<!-- JMS -->
<bean id="jmsConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory" destroy-method="stop">
<property name="maxConnections" value="5" />
<property name="maximumActive" value="5" />
<property name="connectionFactory">
<bean class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL">
<value>tcp://localhost:61616</value>
</property>
</bean>
</property>
</bean>
<bean id="ResultMessageConverter" class="com.bla.ResultMessageConverter" />
<jms:listener-container connection-factory="jmsConnectionFactory" destination-resolver="jmsDestinationResolver"
concurrency="2" prefetch="10" acknowledge="auto" cache="none" message-converter="ResultMessageConverter">
<jms:listener destination="queue.readresult" ref="ReaderRequestManager" method="handleReaderResult" />
</jms:listener-container>
There are no exceptions in any of the logs. Does anyone know of a reason why, after a while, no new consumers can be started?
Thanks
I've run into issues before where "consumers stop consuming," but haven't seen consumers stop existing. You may be running out of memory and/or connections in the pool. Do you have to restart ActiveMQ to fix the problem or just your application?
Here are a few suggestions (a config sketch applying them follows this list):
Set the queue prefetch to 0
Add "useKeepAlive=false" on the connection string
Increase the memory limits for the queues
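A rough sketch of those three changes against the configuration above; the broker-side policy belongs in the broker's own config (activemq.xml), and the memory limit value is an arbitrary example, not a tested recommendation:
<!-- 1: prefetch lowered from 10 to 0 on the listener container -->
<jms:listener-container connection-factory="jmsConnectionFactory" destination-resolver="jmsDestinationResolver"
concurrency="2" prefetch="0" acknowledge="auto" cache="none" message-converter="ResultMessageConverter">
<jms:listener destination="queue.readresult" ref="ReaderRequestManager" method="handleReaderResult" />
</jms:listener-container>
<!-- 2: keep-alive disabled via an OpenWire transport option on the broker URL -->
<bean class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL">
<value>tcp://localhost:61616?useKeepAlive=false</value>
</property>
</bean>
<!-- 3: per-destination memory limit raised in the broker's destination policy -->
<amq:broker useJmx="false" persistent="false">
<amq:destinationPolicy>
<amq:policyMap>
<amq:policyEntries>
<amq:policyEntry queue="queue.readresult" memoryLimit="64 mb"/>
</amq:policyEntries>
</amq:policyMap>
</amq:destinationPolicy>
</amq:broker>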
I can see no obvious reason in the config provided why it should fail, so you need to resort to classic troubleshooting.
Try setting logging to debug and recreating the issue; you should then be able to see more detail and may be able to detect the root cause.
Also, check out the JMS ExceptionListener and try attaching your own implementation of it to get a grasp of the real problem:
http://docs.oracle.com/javaee/6/api/javax/jms/ExceptionListener.html
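For example (a sketch: LoggingExceptionListener is a hypothetical class implementing javax.jms.ExceptionListener that just logs the JMSException it receives; the container is written as an explicit bean because, as far as I know, the jms namespace does not expose an exception-listener attribute):
<bean id="jmsExceptionListener" class="com.bla.LoggingExceptionListener" />
<!-- explicit-bean equivalent of the <jms:listener> above, with the exception listener wired in -->
<bean id="readResultContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="connectionFactory" ref="jmsConnectionFactory" />
<property name="destinationResolver" ref="jmsDestinationResolver" />
<property name="destinationName" value="queue.readresult" />
<property name="concurrentConsumers" value="2" />
<property name="exceptionListener" ref="jmsExceptionListener" />
<property name="messageListener">
<bean class="org.springframework.jms.listener.adapter.MessageListenerAdapter">
<property name="delegate" ref="ReaderRequestManager" />
<property name="defaultListenerMethod" value="handleReaderResult" />
<property name="messageConverter" ref="ResultMessageConverter" />
</bean>
</property>
</bean>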
The issue I have is this:
Using ServiceMix and Camel routing, I am sending a JSON message via ActiveMQ to a consumer.
The issue is that the time this consumer takes to process the message is X, so it is possible that the consumer gets stopped or crashes while consuming the message. In this case the message will be half consumed, and it will already have been deleted from the queue because, well, it was delivered.
Is it possible to make the queue not remove messages when they are consumed, but instead wait for a confirmation from the consumer that the processing of the message is done before deleting it?
In a typical job importing files from the filesystem, you remove the file, or rename it to "done", only at the end, once the file is fully processed and the transaction is fully committed. So how, in the ESB world, can we say: keep the message until I finish and tell you to remove it?
I am currently using Spring jms:listener-container and jms:listeners for consuming these messages.
Your problem is what JMS transactions solve every day. There are some notes from ActiveMQ about it here.
You can easily use local transactions in JMS and set up a listener container like this (note the true value for sessionTransacted):
<bean id="myListenerContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="concurrentConsumers" value="1" />
<property name="connectionFactory" ref="jmsConnectionFactory" />
<property name="destination" ref="myQueue" />
<property name="messageListener" ref="myConsumerBean" />
<property name="sessionTransacted" value="true" />
</bean>
Then you have a transacted session. The message will be rolled back to the queue if the message listener fails to consume it; the transaction will not commit (= message removed from the queue) unless the onMessage method in the message listener bean returns successfully (no exceptions thrown). I guess this is what you want.
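Since you mention you are using jms:listener-container, the namespace equivalent is acknowledge="transacted", which sets the same sessionTransacted flag - a sketch, with the bean names carried over from the example above (note that the destination attribute takes a queue name, not a bean ref):
<jms:listener-container connection-factory="jmsConnectionFactory" acknowledge="transacted" concurrency="1">
<jms:listener destination="myQueueName" ref="myConsumerBean" />
</jms:listener-container>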