ActiveMQ - How can a subscriber receive topic messages when started after the publisher? (Spring)

In my program, I have two modules: a Publisher and a Subscriber, which communicate via a topic.
I understand that for the subscriber to receive messages, it should be started before the publisher. But there may be a scenario where the subscriber goes down for some reason and needs to be restarted. Is there any way that the Subscriber, if started after the Publisher, can still receive the messages?

Adding a code example using Spring's DefaultMessageListenerContainer (DMLC) and a durable subscription. It's harder to achieve this with a plain JmsTemplate (you tagged it, so I guess you are using JmsTemplate to receive?), since you have to grab the session from the template and create the durable consumer yourself. This is handled for you automatically if you use the DMLC approach.
<bean id="myDurableConsumer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="connectionFactory" ref="myCf" />
<property name="sessionTransacted" value="true" />
<property name="subscriptionDurable" value="true"/>
<property name="durableSubscriberName" value="myDurableNameThatIsUniqueForThisInstance" />
<property name="destinationName" value="someTopic" />
<property name="messageListener" ref="myListener" />
< /bean>
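For comparison, here is a minimal sketch (not the original poster's code) of what the manual approach looks like with the plain JMS API; the broker URL, client ID and subscription name are assumptions. Note that a durable subscription also requires a client ID on the connection (with the DMLC above you would set it on the connection factory or via the container's clientId property).

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicSubscriber;
import org.apache.activemq.ActiveMQConnectionFactory;

public class DurableSubscriberSketch {
    public static void main(String[] args) throws Exception {
        // Assumed broker URL; adjust to your environment.
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = cf.createConnection();
        // A client ID is mandatory for durable subscriptions.
        connection.setClientID("myDurableClient");
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("someTopic");
        // The subscription name must stay the same across restarts so the broker
        // can deliver messages that were published while this subscriber was offline.
        TopicSubscriber subscriber =
                session.createDurableSubscriber(topic, "myDurableNameThatIsUniqueForThisInstance");
        connection.start();
        Message message = subscriber.receive(5000); // wait up to 5 seconds
        System.out.println("Received: " + message);
        connection.close();
    }
}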

If you are only interested in the disconnect-reconnect scenario, I think a durable subscriber is what you are looking for.
http://activemq.apache.org/how-do-durable-queues-and-topics-work.html

In general, if you want to account for a subscriber going offline and returning without missing any messages, you would use JMS durable subscriptions. These allow your subscriber to receive any messages it missed while offline. Note the caveat: the subscriber needs to have subscribed once before the broker will start collecting messages for it while it is offline.
Besides the standard JMS durable consumer model, ActiveMQ also provides the retroactive consumer. Another possibility is virtual destinations.
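For the retroactive consumer, the subscriber opts in through an ActiveMQ destination option appended to the topic name, which asks the broker to replay recently published messages (subject to the broker's configured recovery policy). A minimal sketch, assuming a Session obtained as in the earlier example and the same topic name:

import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.Topic;

// Creates a consumer that asks the broker to replay recently published messages
// (how far back depends on the broker's recovery policy for this topic).
static MessageConsumer createRetroactiveConsumer(Session session) throws JMSException {
    Topic topic = session.createTopic("someTopic?consumer.retroactive=true"); // ActiveMQ destination option
    return session.createConsumer(topic);
}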

Related

One JMS consumer stops listening to an ActiveMQ topic while the second does not

A Spring Quartz process runs every 15 minutes in my project, i.e. 96 times a day. It fetches certain records from the database and POSTs them to a REST service (running on JBoss 7). There are generally 50 to 100 records.
On the REST service there is a JMS event publisher that publishes this message on a topic. There are two consumers on this topic:
1. One processes the message and sends push notifications to mobile devices.
2. The other talks to a third party (the call generally takes 4 to 5 seconds to complete).
Since it is a topic, both consumers receive all messages, but they filter them based on some property, so some messages are processed by one consumer and the rest by the other.
My problem, which has been observed over roughly the last week, is that consumer #1 receives an "invalid token" response from APNS multiple times (the token is used to send the push notification to the mobile device); after some time this consumer stops and does not respond at all, while the second one keeps running.
Below are configurations:
<amq:broker id="broker" useJmx="false" persistent="false">
    <amq:transportConnectors>
        <amq:transportConnector uri="tcp://localhost:0"/>
    </amq:transportConnectors>
</amq:broker>

<!-- ActiveMQ Destination -->
<amq:topic id="topicName" physicalName="topicPhysicalName"/>

<!-- JMS ConnectionFactory to use, configuring the embedded broker using XML -->
<amq:connectionFactory id="jmsFactory" brokerURL="vm://localhost"/>

<!-- JMS Producer Configuration -->
<bean id="jmsProducerConnectionFactory"
      class="org.springframework.jms.connection.SingleConnectionFactory"
      depends-on="broker"
      p:targetConnectionFactory-ref="jmsFactory"/>

<!-- JMS Templates -->
<bean id="jmsTemplate"
      class="org.springframework.jms.core.JmsTemplate"
      p:connectionFactory-ref="jmsProducerConnectionFactory"/>

<!-- Publisher -->
<bean name="jmsEventPublisher"
      class="com.jhi.mhm.services.event.jms.publisher.JMSEventPublisher">
    <property name="jmsTemplate" ref="jmsTemplate"/>
    <property name="topic">
        <map>
            <entry key="keyname" value-ref="topicName"/>
        </map>
    </property>
</bean>

<!-- JMS Consumer Configuration -->
<bean name="consumer2" class="Consumer2"/>
<bean name="consumer1" class="Consumer1"/>

<bean id="jmsConsumerConnectionFactory"
      class="org.springframework.jms.connection.SingleConnectionFactory"
      depends-on="broker"
      p:targetConnectionFactory-ref="jmsFactory"/>

<jms:listener-container container-type="default"
                        connection-factory="jmsConsumerConnectionFactory"
                        acknowledge="auto"
                        destination-type="topic">
    <jms:listener destination="topicPhysicalName" ref="consumer1"/>
    <jms:listener destination="topicPhysicalName" ref="consumer2"/>
</jms:listener-container>
I searched other posted questions but could not find anything related.
Your thoughts would be really helpful.
shailu - I went through a similar problem. What we did was upgrade the version of MQ. Although this did not solve the problem completely, as MQ showed random behavior, in the end we simply merged our endpoint and call destination as per the business logic.
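Side note on the filtering described in this question: pushing the property check into a JMS message selector lets the broker deliver to each consumer only the messages it cares about, instead of every listener receiving and discarding the other's messages. A minimal sketch; the property name notificationType is hypothetical:

import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.Topic;

// Given a Session and the Topic from the configuration above, each consumer
// subscribes with a selector instead of filtering inside its listener code.
static void createFilteredConsumers(Session session, Topic topic) throws JMSException {
    MessageConsumer pushConsumer =
            session.createConsumer(topic, "notificationType = 'PUSH'");        // hypothetical property
    MessageConsumer thirdPartyConsumer =
            session.createConsumer(topic, "notificationType = 'THIRD_PARTY'"); // hypothetical property
}

With the Spring jms namespace, the same expression goes in the selector attribute of each jms:listener element.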

Publish subscribe implementation with Spring JMS

I have a JMS queue implementation with JmsTemplate. I want to have more than one listener when a message is put on the queue, i.e. I want to use a topic instead of a queue.
I have the configuration without the JMS namespace. What changes need to be made so that multiple listeners listen on a topic when someone sends a message to it?
I guess you are probably using DefaultMessageListenerContainer. Just to be sure: you want several individual components to receive the same message (i.e. you don't want to process messages in parallel).
Assuming I got this right and component A and component B should receive the same message, you simply create two DefaultMessageListenerContainer instances on the same topic and set the pubSubDomain property to true. Make sure you haven't set any concurrency on the listener container, or better yet, set the concurrency to 1 to make that explicit.
This would give something like:
<bean id="listener1"
class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="pubSubDomain" value="true"/>
<property name="concurrency" value="1"/>
<property name="destinationName=" value="...."/> <!-- topic name -->
<property name="messageListener" ref="...."/>
</bean>
Then you should create a similar bean for the second component.
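For reference, here is a minimal sketch of the same idea in Java configuration; the topic name is an assumption, and the second component gets its own container built the same way:

import javax.jms.ConnectionFactory;
import javax.jms.MessageListener;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

// One container per component; both point at the same topic, so each
// component receives every published message.
static DefaultMessageListenerContainer topicContainer(ConnectionFactory cf, MessageListener listener) {
    DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
    container.setConnectionFactory(cf);
    container.setPubSubDomain(true);           // topic semantics instead of queue
    container.setConcurrency("1");             // be explicit: no parallel consumption
    container.setDestinationName("someTopic"); // assumed topic name
    container.setMessageListener(listener);
    return container;
}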

Direct exchange doesn't work with default routingKey

I don't understand something.
I'm using Spring Integration to send and receive messages from RabbitMQ.
My topology is pretty simple:
One JVM produces messages using Spring's RabbitTemplate:
<rabbit:template id="rabbitTemplate" connection-factory="rabbitConnectionFactory" />
<bean id="amqpTemplate" parent="rabbitTemplate">
<property name="queue" value="${queue.name}" />
<property name="routingKey" value="${queue.name}" />
</bean>
A RabbitMQ queue receives the message:
<rabbit:queue name="${queue.name}" durable="true" />
Another JVM consumes the message (to launch a Spring Batch job, but that's not the point):
<int-amqp:inbound-channel-adapter
queue-names="${queue.name}"
channel="amqp-requests"
connection-factory="rabbitConnectionFactory" />
The send method used is:
/**
 * Convert a Java object to an Amqp {@link Message} and send it to a default exchange with a default routing key.
 *
 * @param message a message to send
 * @throws AmqpException if there is a problem
 */
void convertAndSend(Object message) throws AmqpException;
It works fine, but according to the documentation I don't think the routingKey is mandatory in my use case. I don't know why someone put a routingKey there.
So I tried to remove the routingKey:
<bean id="amqpTemplate" parent="rabbitTemplate">
<property name="queue" value="${queue.name}" />
</bean>
Then I can still send messages to the queue, but they are never consumed anymore!
Can someone explain to me what is going on?
Can't I send messages from one JVM to another without a routingKey?
...but according to the documentation, I don't think the routingKey is mandatory...
Which "documentation" are you referring to?
With AMQP, producers do not know about queues; they send messages to various types of exchanges which have bindings for routing to queues.
Maybe you are misunderstanding the notion of the default exchange, to which every queue is bound with a routing key equal to its queue name.
This allows simple routing to specific queues (by way of their names). The default exchange is a convenience that provides a quick on-ramp to AMQP messaging. It works fine, but you might want to consider using explicitly declared exchanges instead, because that further decouples the producer from the consumer. With the default exchange, the producer has to know the name of the queue that the consumer is listening on.
Further, on the RabbitTemplate, the queue property is only used for receiving (consuming) messages; it has no bearing on sending messages. As I said, producers don't know about queues.
You should use the following...
<bean id="amqpTemplate" parent="rabbitTemplate">
<property name="routing-key" value="${queue.name}" />
</bean>
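If you want to go the explicitly-declared-exchange route mentioned above, a minimal Spring AMQP sketch could look like the following; the exchange, queue and routing key names are assumptions:

import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.DirectExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitAdmin;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class ExplicitExchangeSketch {
    public static void main(String[] args) {
        CachingConnectionFactory cf = new CachingConnectionFactory("localhost");

        // Declare a named exchange and bind the queue to it with a routing key,
        // so the producer only needs to know the exchange and key, not the queue.
        RabbitAdmin admin = new RabbitAdmin(cf);
        Queue queue = new Queue("work.queue", true); // durable queue
        DirectExchange exchange = new DirectExchange("work.exchange");
        admin.declareQueue(queue);
        admin.declareExchange(exchange);
        admin.declareBinding(BindingBuilder.bind(queue).to(exchange).with("work"));

        // Sending: exchange name, routing key, payload.
        RabbitTemplate template = new RabbitTemplate(cf);
        template.convertAndSend("work.exchange", "work", "hello");

        cf.destroy();
    }
}

This keeps the producer decoupled from the consumer's queue name, which is the decoupling benefit described in the answer.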

No new consumers on activemq queue after a while

For about a month we have had a recurring issue with ActiveMQ and Spring. After some time (between a day and a week) we have no more consumers, no new ones get started, and the queue starts to fill up.
This setup ran for over a year without any issues, and as far as we can see nothing relevant has been changed.
Another queue we use has also started to show the same behavior, but less frequently.
From the ActiveMQ web console (as you can see, lots of pending messages and no consumers):
    Name:                       queue.readresult
    Number of Pending Messages: 19595
    Number of Consumers:        0
    Messages Enqueued:          40747
    Messages Dequeued:          76651
Contents of our bundle-context.xml:
<!-- JMS -->
<bean id="jmsConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory" destroy-method="stop">
<property name="maxConnections" value="5" />
<property name="maximumActive" value="5" />
<property name="connectionFactory">
<bean class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL">
<value>tcp://localhost:61616</value>
</property>
</bean>
</property>
</bean>
<bean id="ResultMessageConverter" class="com.bla.ResultMessageConverter" />
<jms:listener-container connection-factory="jmsConnectionFactory" destination-resolver="jmsDestinationResolver"
concurrency="2" prefetch="10" acknowledge="auto" cache="none" message-converter="ResultMessageConverter">
<jms:listener destination="queue.readresult" ref="ReaderRequestManager" method="handleReaderResult" />
</jms:listener-container>
There are no exceptions in any of the logs. Does anyone know of a reason why, after a while, no new consumers can be started?
Thanks
I've run into issues before where "consumers stop consuming," but haven't seen consumers stop existing. You may be running out of memory and/or connections in the pool. Do you have to restart ActiveMQ to fix the problem or just your application?
Here are a couple suggestions:
Set the queue prefetch to 0
Add "useKeepAlive=false" on the connection string
Increase the memory limits for the queues
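The first two suggestions can be expressed on the client's broker URL (alternatively, the prefetch attribute already present on the jms:listener-container above can be set to 0). A sketch with ActiveMQ URI options; verify the exact option names against your client version:

import org.apache.activemq.ActiveMQConnectionFactory;

// jms.prefetchPolicy.queuePrefetch=0 disables client-side prefetching so a stalled
// consumer cannot sit on undelivered messages; useKeepAlive=false matches suggestion #2.
static ActiveMQConnectionFactory tunedConnectionFactory() {
    return new ActiveMQConnectionFactory(
            "tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=0&useKeepAlive=false");
}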
I can see no obvious reason in the config provided why it should fail, so you need to resort to classic troubleshooting.
Try setting logging to debug and recreating the issue. Then you should be able to see more, and you might be able to detect the root cause.
Also, check out the JMS ExceptionListener: attach your own implementation of it to get a grasp of the real problem.
http://docs.oracle.com/javaee/6/api/javax/jms/ExceptionListener.html
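A minimal sketch of such an ExceptionListener; it can be wired in with setExceptionListener on a DefaultMessageListenerContainer bean or on the ActiveMQConnectionFactory (the logging here is just a placeholder):

import javax.jms.ExceptionListener;
import javax.jms.JMSException;

// Logs connection-level JMS failures that would otherwise go unnoticed,
// which is often the first clue when consumers silently disappear.
public class LoggingExceptionListener implements ExceptionListener {
    @Override
    public void onException(JMSException exception) {
        System.err.println("JMS connection problem: " + exception.getMessage());
        exception.printStackTrace();
    }
}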

ServiceMix, Apache ActiveMQ, Camel sending "done" for consuming messages

The issue I have is this:
Using ServiceMix and Camel routing, I am sending a JSON message via ActiveMQ to a consumer.
The issue is that this consumer takes some time X to process the message, so it is possible that the consumer gets stopped or crashes while consuming it. In that case the message will be half consumed and will already have been deleted from the queue, because, well, it was delivered.
Is it possible to make the queue not remove messages when they are consumed, but instead wait for a confirmation from the consumer that processing of the message is done before deleting it?
In a typical case of importing files from a filesystem, you only remove the file or rename it to "done" at the end, once the file is fully processed and a transaction is fully committed. So how, in the ESB world, can we say "keep the message until I finish and tell you to remove it"?
I am currently using Spring's jms:listener-container and jms:listeners for consuming these messages.
Your problem is what JMS transactions solve every day.
There are some notes from ActiveMQ about it here.
You could easily use local transactions in JMS and set up a listener container like this (note the true value for sessionTransacted):
<bean id="myListenerContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="concurrentConsumers" value="1" />
<property name="connectionFactory" ref="jmsConnectionFactory" />
<property name="destination" ref="myQueue" />
<property name="messageListener" ref="myConsumerBean" />
<property name="sessionTransacted" value="true" />
</bean>
Then you have a transacted session. The message will be rolled back to the queue if the message listener fails to consume it. The transaction will not commit (= the message is removed from the queue) unless the onMessage method in the message listener bean returns successfully (no exceptions thrown). I guess this is what you want.
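A minimal sketch of what the listener bean referenced above might look like; the key point is that throwing from onMessage prevents the commit, so the broker redelivers the message (the business logic is a placeholder):

import javax.jms.Message;
import javax.jms.MessageListener;

public class MyConsumerBean implements MessageListener {
    @Override
    public void onMessage(Message message) {
        try {
            process(message); // placeholder for the real work
        } catch (Exception e) {
            // With sessionTransacted=true, an exception here rolls the session back
            // and the message goes back to the queue (subject to the redelivery policy).
            throw new RuntimeException("Processing failed, message will be redelivered", e);
        }
    }

    private void process(Message message) {
        // ... business logic ...
    }
}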
