I have a JMS queue implementation that uses JmsTemplate. I want more than one listener to receive a message when it is put on the destination, i.e. I want to use a topic instead of a queue.
My configuration does not use the JMS namespace. What changes are needed so that multiple listeners each receive a message when it is sent to a topic?
I guess you are probably using DefaultMessageListenerContainer. Just to be sure: you want several individual components to receive the same message (i.e. you don't want to process messages in parallel).
Assuming I got this right and component A and component B should both receive the same message, you simply create two DefaultMessageListenerContainer instances on the same topic and set the pubSubDomain property to true. Make sure you haven't set any concurrency on the listener container, or better yet, set the concurrency to 1 to make that explicit.
This would give something like
<bean id="listener1"
class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="pubSubDomain" value="true"/>
<property name="concurrency" value="1"/>
<property name="destinationName=" value="...."/> <!-- topic name -->
<property name="messageListener" ref="...."/>
</bean>
Then you should create a similar bean for the second component.
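For instance, a sketch of the second container (the bean id is arbitrary, and the elided values stand for the same topic name and component B's listener) could be:
<bean id="listener2"
      class="org.springframework.jms.listener.DefaultMessageListenerContainer">
    <property name="pubSubDomain" value="true"/>
    <property name="concurrency" value="1"/>
    <property name="destinationName" value="...."/> <!-- same topic name -->
    <property name="messageListener" ref="...."/> <!-- component B's listener -->
</bean>
Because each container opens its own subscription on the topic, every container receives its own copy of each published message.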
I have a Spring message listener configured with
<bean id="processListenerContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="concurrentConsumers" value="1"/>
<property name="clientId" value="process-execute"/>
<property name="connectionFactory" ref="topicConnectionFactory"/>
<property name="destination" ref="processExecuteQueue"/>
<property name="messageListener" ref="processExecuteListener"/>
</bean>
This is running on a cluster with 2 nodes. I see that it's creating 1 consumer per node rather than 1 per cluster. They're both configured with the above xml so they have the same clientId. Yet, when 2 notifications are posted to the queue, both of the listeners are running, each gets a notification, and both execute in parallel. This is a problem because the notifications need to be handled sequentially.
I can't seem to find out how to make it have only one message listener per cluster rather than per node.
I solved the problem by having the JMS queue block the next consumer until the previous one returned. This is a feature of the WebLogic server I'm using called Unit of Order. The documentation says you just need to enable it on the queue (I used the hash routing policy). However, I found that I needed to enable it on the connection factory as well and set a default name. Now I still see one MDP per node, but node 1 waits for node 2 to complete before processing and vice versa. Not the solution I intended, but it's working nonetheless. While Oracle-specific, it's actually slightly better than a single-MDP solution.
Note: I did not set the unit-of-order name in the Spring JmsTemplate producer, as I do not know if that's possible. I have WebLogic set a default name when none is provided by the producer.
I don't understand something.
I'm using Spring Integration to send and receive messages from RabbitMQ.
My topology is pretty simple:
One JVM produces messages using Spring's RabbitTemplate:
<rabbit:template id="rabbitTemplate" connection-factory="rabbitConnectionFactory" />
<bean id="amqpTemplate" parent="rabbitTemplate">
<property name="queue" value="${queue.name}" />
<property name="routingKey" value="${queue.name}" />
</bean>
A RabbitMQ queue receives the message:
<rabbit:queue name="${queue.name}" durable="true" />
Another JVM consumes the message (to launch a Spring Batch job, but that's not the point):
<int-amqp:inbound-channel-adapter
queue-names="${queue.name}"
channel="amqp-requests"
connection-factory="rabbitConnectionFactory" />
The send method used is:
/**
 * Convert a Java object to an Amqp {@link Message} and send it to a default exchange with a default routing key.
 *
 * @param message a message to send
 * @throws AmqpException if there is a problem
 */
void convertAndSend(Object message) throws AmqpException;
It works fine, but according to the documentation I don't think the routingKey is mandatory in my use case. I don't know why someone put a routingKey there.
So I tried to remove the routingKey:
<bean id="amqpTemplate" parent="rabbitTemplate">
<property name="queue" value="${queue.name}" />
</bean>
Then I can still send the messages to the queue, but they are never consumed anymore!
Can someone explain to me what is going on?
Can't I send messages from one JVM to another without a routingKey?
...but according to the documentation, I don't think the routingKey is mandatory...
Which "documentation" are you referring to?
With AMQP, producers do not know about queues; they send messages to various types of exchanges which have bindings for routing to queues.
Maybe you are misunderstanding the notion of the default exchange, to which every queue is bound with a routing key equal to its queue name.
This allows simple routing to specific queues (by way of their names). The default exchange is a convenience that provides a quick on-ramp to AMQP messaging. This works fine, but you might want to consider using explicitly declared exchanges instead, because that further decouples the producer from the consumer. With the default exchange, the producer has to know the name of the queue the consumer is listening on.
Further, on the RabbitTemplate the queue property is only used for receiving (consuming) messages; it has no bearing on sending messages. As I said, producers don't know about queues.
You should use the following...
<bean id="amqpTemplate" parent="rabbitTemplate">
<property name="routing-key" value="${queue.name}" />
</bean>
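If you later want the extra decoupling of an explicitly declared exchange, a minimal sketch (reusing the rabbit namespace from your configuration; the exchange name my.exchange is a placeholder) could look like:
<rabbit:direct-exchange name="my.exchange">
    <rabbit:bindings>
        <rabbit:binding queue="${queue.name}" key="${queue.name}" />
    </rabbit:bindings>
</rabbit:direct-exchange>

<bean id="amqpTemplate" parent="rabbitTemplate">
    <property name="exchange" value="my.exchange" />
    <property name="routingKey" value="${queue.name}" />
</bean>
With this wiring the producer only knows the exchange and routing key; the binding decides which queue the message ends up in.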
In my program, I have two modules, Publisher and Subscriber, which communicate via a Topic.
I understand that in order for the subscriber to receive messages, it should be started before the publisher. But there may be a scenario where the subscriber goes down for some reason and needs to be restarted. Is there any way that, if I start the Subscriber after the Publisher, it can still receive the messages?
Adding a code example using Spring's DMLC and a durable subscriber. It's harder to achieve this with a plain JmsTemplate (you tagged this, so I guess you are using JmsTemplate to receive?), since you would have to grab the session from the template and create the durable consumer yourself. This is handled automatically for you if you use the DMLC approach.
<bean id="myDurableConsumer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="connectionFactory" ref="myCf" />
<property name="sessionTransacted" value="true" />
<property name="subscriptionDurable" value="true"/>
<property name="durableSubscriberName" value="myDurableNameThatIsUniqueForThisInstance" />
<property name="destinationName" value="someTopic" />
<property name="messageListener" ref="myListener" />
</bean>
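Note that a durable subscription is identified by the combination of the connection's client ID and the subscription name, so the connection also needs a client ID. A minimal sketch, assuming ActiveMQ as the broker (the URL and client ID are placeholders; alternatively the container's clientId property can be used):
<bean id="myCf" class="org.apache.activemq.ActiveMQConnectionFactory">
    <property name="brokerURL" value="tcp://localhost:61616" />
    <property name="clientID" value="myUniqueClientIdForThisInstance" />
</bean>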
If you are only interested in the disconnect-reconnect scenario, I think a durable subscriber is what you are looking for.
http://activemq.apache.org/how-do-durable-queues-and-topics-work.html
In general, if you want to account for a subscriber going offline and returning without missing any messages, you would use JMS durable subscriptions. These allow your subscriber to receive any messages it missed while offline. Note the caveat here: it needs to have subscribed once first before the broker will start to collect messages for it while it is offline.
Besides the standard JMS durable consumer model, ActiveMQ also provides retroactive consumers. Another possibility is virtual destinations.
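For the virtual destination option, ActiveMQ's virtual topic convention lets the publisher keep sending to a topic while each subscriber consumes from its own backing queue, which keeps accumulating messages while the subscriber is offline. A rough sketch, assuming the default VirtualTopic naming convention and placeholder destination names:
<!-- the publisher sends to the topic VirtualTopic.Orders as usual -->
<!-- the subscriber consumes from the per-subscriber queue that ActiveMQ maintains -->
<bean class="org.springframework.jms.listener.DefaultMessageListenerContainer">
    <property name="connectionFactory" ref="myCf" />
    <property name="destinationName" value="Consumer.A.VirtualTopic.Orders" />
    <property name="messageListener" ref="myListener" />
</bean>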
The issue I have is this:
Using ServiceMix and Camel routing, I am sending a JSON message via ActiveMQ to a consumer.
The issue is that the consumer takes some time X to process the message, so it is possible that the consumer gets stopped or crashes while consuming it. In that case the message will be half processed and will already have been deleted from the queue, because, well, it was delivered.
Is it possible to make the queue not remove messages when they are consumed, but instead wait for confirmation from the consumer that the processing of the message is done before deleting it?
When importing files from a filesystem, you typically remove the file, or rename it to "done", only once the file is fully processed and the transaction is fully committed. So how, in the ESB world, can we tell the broker to keep the message until I finish and tell you to remove it?
I am currently using Spring's jms:listener-container and jms:listener for consuming these messages.
Your problem is exactly what JMS transactions solve every day.
Some notes from ActiveMQ about it here
You can easily use local transactions in JMS and set up a listener container like this (note the true value for sessionTransacted):
<bean id="myListenerContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="concurrentConsumers" value="1" />
<property name="connectionFactory" ref="jmsConnectionFactory" />
<property name="destination" ref="myQueue" />
<property name="messageListener" ref="myConsumerBean" />
<property name="sessionTransacted" value="true" />
</bean>
Then you have a transacted session. The message will be rolled back to the queue if the message listener fails to consume it. The transaction will not commit (= message removed from the queue) unless the onMessage method in the message listener bean returns successfully (no exceptions thrown). I guess this is what you want.
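As a side note, with ActiveMQ a message that keeps being rolled back is redelivered a limited number of times and then moved to a dead-letter queue; this can be tuned on the connection factory. A hedged sketch, assuming ActiveMQ and a placeholder broker URL:
<bean id="jmsConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
    <property name="brokerURL" value="tcp://localhost:61616" />
    <property name="redeliveryPolicy">
        <!-- retry a poison message a few times before it goes to the DLQ -->
        <bean class="org.apache.activemq.RedeliveryPolicy">
            <property name="maximumRedeliveries" value="3" />
        </bean>
    </property>
</bean>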
I need to send/receive messages to/from different topics hosted on a single JMS server.
I would like to use JmsTemplate for sending and MessageListenerContainer for registering asynchronous listeners.
My configuration looks like this:
<bean id="jndiTemplate" class="org.springframework.jndi.JndiTemplate">
<property name="environment">
<props>
<prop key="java.naming.factory.initial">xxx</prop>
<prop key="java.naming.provider.url">yyy</prop>
</props>
</property>
</bean>
<bean id="connectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiTemplate" ref ="jndiTemplate"/>
<property name="jndiName" value="TopicConnectionFactory"/>
</bean>
<bean id="singleConnectionFactory" class="org.springframework.jms.connection.SingleConnectionFactory">
<constructor-arg ref="connectionFactory"/>
</bean>
<bean id="tosJmsTemplate" class="org.springframework.jms.core.JmsTemplate">
<property name="connectionFactory" ref="singleConnectionFactory"/>
<property name="destinationResolver" ref="destinationResolver"/>
<property name="pubSubDomain" value="true"/>
</bean>
As far as I understood, the singleConnectionFactory, by always returning the same Connection instance, helps reduce the overhead of creating and closing a connection each time a JmsTemplate needs, for example, to send or receive a message (as would happen when using a plain ConnectionFactory).
My first question is: if I create multiple JmsTemplates, can they all share a ref to a singleConnectionFactory, or does each have to receive its own distinct instance (singleConnectionFactory1, singleConnectionFactory2, etc.)?
Reading the API for SingleConnectionFactory, I found this:
Note that Spring's message listener containers support the use of a shared Connection
within each listener container instance. Using SingleConnectionFactory in combination only really makes sense for sharing a single JMS Connection across multiple listener containers.
This sounds a bit cryptic to me. As far as I know, it is possible to register only one listener per MessageListenerContainer, so I don't understand to what extent a connection is shared.
Suppose I want to register N Listeners: I will need to repeat N times something like this:
<bean
class="org.springframework.jms.listener.SimpleMessageListenerContainer">
<property name="connectionFactory" ref="connectionFactory" />
<property name="destinationName" value="destX" />
<property name="messageListener" ref="listener1outOfN" />
</bean>
How many Connections are created in such a case from connectionFactory? One for each listener container, or just a pool of Connections? And what if I give the SimpleMessageListenerContainers a ref to singleConnectionFactory instead?
What is the best approach (from the point of view of performance, of course) in this case?
if I create multiple JmsTemplates, can they all share a ref to a singleConnectionFactory?
Yes, this is fine. The javadoc for SingleConnectionFactory says:
According to the JMS Connection model, this is perfectly thread-safe (in contrast to e.g. JDBC).
JMS Connection objects are thread-safe and can be used by multiple threads concurrently, so there's no need for multiple SingleConnectionFactory beans.
As far as I know, it is possible to register only one listener per MessageListenerContainer, so I don't understand to what extent a connection is shared.
This is true; however, each MessageListenerContainer can have multiple threads processing messages concurrently, all using the same MessageListener object. The MessageListenerContainer will use a single, shared Connection for all of these threads (unless configured to do otherwise).
Note that Spring's message listener containers support the use of a shared Connection within each listener container instance. Using SingleConnectionFactory in combination only really makes sense for sharing a single JMS Connection across multiple listener containers.
In other words, if all you have is a single MessageListenerContainer, then SingleConnectionFactory is unnecessary, since the single connection is managed internally by the MessageListenerContainer. If you have multiple listener containers and want them all to share a connection, then SingleConnectionFactory is required. Also, if you want to share a connection between listening and sending, as you do, then SingleConnectionFactory is necessary as well.
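Putting that together, a sketch of the wiring might look like the following, where every template and listener container references the same singleConnectionFactory so that one physical Connection is shared for sending and for all listeners (the container beans mirror the ones from your question):
<!-- sending: the template reuses the shared Connection -->
<bean id="tosJmsTemplate" class="org.springframework.jms.core.JmsTemplate">
    <property name="connectionFactory" ref="singleConnectionFactory"/>
    <property name="destinationResolver" ref="destinationResolver"/>
    <property name="pubSubDomain" value="true"/>
</bean>

<!-- receiving: each of the N containers also points at the shared factory -->
<bean class="org.springframework.jms.listener.SimpleMessageListenerContainer">
    <property name="connectionFactory" ref="singleConnectionFactory"/>
    <property name="pubSubDomain" value="true"/>
    <property name="destinationName" value="destX"/>
    <property name="messageListener" ref="listener1outOfN"/>
</bean>
<!-- ...repeated for the other listeners, each with its own destination and listener ref -->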