RabbitMQ + Spring Cloud Stream: usage of groups

When using RabbitMQ with Spring Cloud Stream you can define the following properties in the application.properties file:
spring.cloud.stream.bindings.input1.destination=someDest
spring.cloud.stream.bindings.input1.group=someGroup
I guess that "destination" means the RabbitMQ queue, but what does "group" mean here?
Thanks!

The destination means a topic exchange. The group means a queue bound to that exchange. So, several apps may subscribe to the same destination and each get the same message if they use different groups. If the group is the same, only one consumer instance is going to get each message.
See documentation for more info: http://cloud.spring.io/spring-cloud-static/spring-cloud-stream-binder-rabbit/2.1.0.RC4/single/spring-cloud-stream-binder-rabbit.html#_rabbitmq_binder_overview

Actually, the destination is the exchange name; the queue someDest.someGroup will be bound to the exchange someDest.
When a group is provided, multiple instances of the app will compete for messages.
If there is no group, the queue will be an anonymous auto-delete queue.
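To make the distinction concrete, here is a minimal sketch (the group names billing and audit are made up): two different applications bind to the same destination with different groups, so each group gets its own queue and its own copy of every message, while several instances of one application sharing a group compete for the messages on that group's single queue.

# app A  ->  queue someDest.billing
spring.cloud.stream.bindings.input1.destination=someDest
spring.cloud.stream.bindings.input1.group=billing

# app B  ->  queue someDest.audit
spring.cloud.stream.bindings.input1.destination=someDest
spring.cloud.stream.bindings.input1.group=audit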

Related

Configuring multiple queues with a topic exchange and using routing key to direct the message specific queue with spring cloud streams
My requirement is as follows: for example, I have the queues and exchange defined as below on the consumer end:
spring.cloud.stream.bindings.inputA.destination=Common-Exchange
spring.cloud.stream.bindings.inputA.group=A-Queue
spring.cloud.stream.bindings.inputB.destination=Common-Exchange
spring.cloud.stream.bindings.inputB.group=B-Queue
I should be able to specify the routing key in the consumer, just like we do in AMQP where we can pass the exchange, queue, and routing key to create the binding.
I should be able to set the routing key when sending the message on the producer end using MessageBuilder:
channel.send(MessageBuilder.withPayload(message).build())
Of course we can use one queue and use headers to direct different types of messages, but I need to know how multiple queues connected to a single exchange work with streams.
See the Rabbit binder documentation.
On the consumer side, set the bindingRoutingKey consumer binding property.
On the producer side, set the routingKeyExpression producer binding property (e.g. headers['routingKey']) and set that header as needed.
Also see Using Existing Queues/Exchanges.
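Putting those two suggestions together, a rough sketch (the output binding name, the keyA/keyB routing keys, and the routingKey header are assumptions, not something from the question):

spring.cloud.stream.rabbit.bindings.inputA.consumer.bindingRoutingKey=keyA
spring.cloud.stream.rabbit.bindings.inputB.consumer.bindingRoutingKey=keyB
spring.cloud.stream.rabbit.bindings.output.producer.routingKeyExpression=headers['routingKey']

The producer then sets that header when building the message, for example:
channel.send(MessageBuilder.withPayload(message).setHeader("routingKey", "keyA").build())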

RabbitMQ multiple listeners for same message prevent duplicate listening

I am using RabbitMQ in a Spring Boot application, deployed with AWS ECS. Suppose multiple instances of my service are running, and the RabbitMQ listener for order creation is registered with a direct exchange. What happens when an order is placed? Will both instances of the service get the same message? If yes, how do I prevent duplicate messages on those 2 listeners?
If the service creates multiple listeners/consumers for the same queue on a direct exchange, the following mechanism applies:
By default, RabbitMQ will send each message to the next consumer, in sequence. On average every consumer will get the same number of messages. This way of distributing messages is called round-robin.
Best Tutorial for this topic: https://www.rabbitmq.com/tutorials/tutorial-two-java.html
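For example, assuming Spring AMQP's @RabbitListener is used (the queue name below is made up), each service instance simply declares a consumer on the same shared, durable queue and RabbitMQ takes care of the distribution:

import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.stereotype.Component;

@Component
public class OrderCreatedListener {

    // Both service instances consume from the same queue, so RabbitMQ delivers
    // each "order created" message to only one of them (round-robin), not both.
    @RabbitListener(queues = "order.created.queue")
    public void onOrderCreated(String payload) {
        // handle the order event here
    }
}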

Messages in Virtual Topic not consumed by consumer queue

I am trying to use a queue in ActiveMQ to dequeue messages from a virtual topic. I tried sending some messages; they show up in the topic under "messages enqueued", but they cannot be consumed.
The virtual topic that I created is named VirtualTopic.AA and the consumer queue is called Consumer.client1.VirtualTopic.AA.
On Consumer.client1.VirtualTopic.AA, I can see that there is a consumer, but it is just not able to dequeue messages from the virtual topic.
Does anyone know why this is happening? Do I need to change some settings in the XML configuration file?
When you publish to a virtual topic using Spring's JmsTemplate, you need to configure it for a topic by setting the pubSubDomain property to "true".
From the JmsTemplate documentation:
If you want to use dynamic destination creation, you must specify the type of JMS destination to create, using the "pubSubDomain" property. For other operations, this is not necessary. Point-to-Point (Queues) is the default domain.
And in JmsDestinationAccessor#setPubSubDomain:
pubSubDomain - "true" for the Publish/Subscribe domain (Topics), "false" for the Point-to-Point domain (Queues)
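A minimal sketch of the producer side under that advice, assuming a local broker at tcp://localhost:61616:

import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.core.JmsTemplate;

public class VirtualTopicPublisher {

    public static void main(String[] args) {
        ActiveMQConnectionFactory connectionFactory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        JmsTemplate jmsTemplate = new JmsTemplate(connectionFactory);
        // Without this the template resolves "VirtualTopic.AA" as a queue,
        // so the Consumer.client1.VirtualTopic.AA queue never receives anything.
        jmsTemplate.setPubSubDomain(true);
        jmsTemplate.convertAndSend("VirtualTopic.AA", "test message");
    }
}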

Messages published to all consumers with the same consumer group in a spring-cloud-stream project

I have ZooKeeper and 3 Kafka brokers running locally.
I started one producer and one consumer, and I can see the consumer consuming messages.
I then started three consumers with the same consumer group name (on different ports, since it is a Spring Boot project), but what I found is that all the consumers are now receiving every message. I expected the messages to be load-balanced, i.e. not repeated across the consumers. I don't know what the problem is.
Here is my property file
spring.cloud.stream.bindings.input.destination=timerTopicLocal
spring.cloud.stream.kafka.binder.zkNodes=localhost
spring.cloud.stream.kafka.binder.brokers=localhost
spring.cloud.stream.bindings.input.group=timerGroup
Here the group is timerGroup.
consumer code : https://github.com/codecentric/edmp-sample-stream-sink
producer code : https://github.com/codecentric/edmp-sample-stream-source
Can you please update the dependencies to Camden.RELEASE (and start using Kafka 0.9+)? In Brixton.RELEASE, the Kafka consumers were 0.8-based and required passing instanceIndex/instanceCount as properties in order to distribute partitions correctly.
In Camden.RELEASE we are using the Kafka 0.9+ consumer client, which does load-balancing in the way you are expecting (we also support static partition allocation via instanceIndex/instanceCount, but I suspect this is not what you want). I can enter into more details on how to configure this with Brixton, but I guess an upgrade should be a much easier path.
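For reference, if you do stay on Brixton, the per-instance settings the answer refers to look something like the following (the values are illustrative; each of the three instances would get its own index, 0, 1 and 2 respectively):

spring.cloud.stream.instanceCount=3
spring.cloud.stream.instanceIndex=0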

Notify ActiveMQ producer if consumer on the destination is down

I am using the ActiveMQ messaging broker, and I have a requirement where the producer application wants to know whether the consumer application consuming on a particular destination is up or not.
How can I achieve this?
Thanks!
You should check out Advisory Messages. It's a topic you can subscribe to if you want updates on such events.
Specifically the topic: ActiveMQ.Advisory.NoConsumer.Queue should be of interest. You need to enable it broker side though using the destination policy property: sendAdvisoryIfNoConsumers.
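As a rough sketch, the producer (or a monitoring component) could subscribe to that advisory topic with plain JMS. The broker URL and the someDest queue name are assumptions, and the per-queue advisory topic name is assumed here to be formed by appending the queue name:

import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.Topic;
import org.apache.activemq.ActiveMQConnectionFactory;

public class NoConsumerAdvisoryListener {

    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory connectionFactory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = connectionFactory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // Fired when a message is sent to the someDest queue and no consumer is attached
        // (requires sendAdvisoryIfNoConsumers="true" in the broker's destination policy).
        Topic advisoryTopic = session.createTopic("ActiveMQ.Advisory.NoConsumer.Queue.someDest");
        MessageConsumer consumer = session.createConsumer(advisoryTopic);
        consumer.setMessageListener(message ->
                System.out.println("No consumer was listening on someDest: " + message));
    }
}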
You can also do this with Java code: the Destination class has a getConsumers() method that returns the list of Subscriptions to that destination, and each subscription in turn gives you consumer information; with this you can check whether your required consumer is active or not.
Good luck!
