Need Spring RabbitMQ to send a message to all consumers - disable round robin for one queue - Spring

I have a couple of queues and I need to do the following with ONE of them:
A producer should send a message to this queue, but ALL consumers should receive it. So, if I have 5 Spring listeners on this queue, each of them should receive the message, but not the producer. I am doing this because I have a Tomcat cluster and asynchronous RabbitMQ messages, and when I get a response from the worker, I don't know how to dispatch it to the correct Tomcat node. So I decided to broadcast all worker replies to all Tomcat nodes. Each Tomcat cluster node listens on the same output queue. Then, if it is the correct Tomcat instance, the reply is processed; all other copies are discarded, which is fine. How do I implement this? How do I make the consumers on the Tomcat side receive the same message at the same time?

Ok, found the solution here:
RabbitMQ / AMQP: single queue, multiple consumers for same message?
It's impossible to do with a single queue in RabbitMQ; you need a separate queue per consumer, bound to a fanout exchange, so that each consumer gets its own copy of the message.
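
As a minimal sketch of that setup with Spring AMQP (the exchange name "worker.replies" and the bean names are illustrative, not from the question), each Tomcat node can declare its own anonymous queue bound to a shared fanout exchange:

import org.springframework.amqp.core.AnonymousQueue;
import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.FanoutExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Component;

@Configuration
class FanoutReplyConfig {

    @Bean
    FanoutExchange repliesExchange() {
        return new FanoutExchange("worker.replies");
    }

    @Bean
    Queue nodeReplyQueue() {
        // server-named, exclusive, auto-delete queue: one per Tomcat node
        return new AnonymousQueue();
    }

    @Bean
    Binding replyBinding(FanoutExchange repliesExchange, Queue nodeReplyQueue) {
        // a fanout exchange ignores routing keys: every bound queue gets every message
        return BindingBuilder.bind(nodeReplyQueue).to(repliesExchange);
    }
}

@Component
class ReplyListener {

    @RabbitListener(queues = "#{nodeReplyQueue.name}")
    public void onReply(String reply) {
        // every node receives the reply; ignore it if it belongs to another node
    }
}

The worker then publishes its replies to the fanout exchange instead of directly to a queue, and RabbitMQ copies each message into every node's queue.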

Related

Spring RabbitMQ listener completes processing even if it is killed

I have integrated RabbitMQ into a Spring application. In my application I am indexing into Solr via RabbitMQ.
On every queue I have set only one listener.
I want to stop the listener while a message is in progress. The problem is that when I stop the listener via registry.stop, the RabbitMQ UI and logs show that the listener is stopped, but the message it was working on is still successfully indexed into Solr.
As far as I know, after killing the listener, the message should not be processed any further.
That's not correct. Stopping the listener just means it stops consuming more messages from the queue; messages currently in flight are still processed gracefully. Why would you want anything else? You would lose data that had already been consumed and acknowledged on the broker.
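
A minimal sketch of that behavior (the listener id is illustrative): stopping a container through the RabbitListenerEndpointRegistry only prevents new deliveries; whatever the listener is processing at that moment runs to completion and is acknowledged.

import org.springframework.amqp.rabbit.listener.MessageListenerContainer;
import org.springframework.amqp.rabbit.listener.RabbitListenerEndpointRegistry;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
class ListenerControl {

    @Autowired
    private RabbitListenerEndpointRegistry registry;

    public void stopListener(String listenerId) {
        // e.g. the id given in @RabbitListener(id = "solrIndexer", ...)
        MessageListenerContainer container = registry.getListenerContainer(listenerId);
        if (container != null) {
            container.stop(); // graceful: the in-flight message finishes first
        }
    }
}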

RabbitMQ: multiple listeners for the same message - prevent duplicate listening

I am using RabbitMQ in a Spring Boot application, deployed on AWS ECS. Now suppose multiple instances of my service are running, and the listener for order creation is registered with a direct exchange. What happens when an order is placed? Will both instances of my service get the same message? If yes, how do I prevent duplicate messages across those two listeners?
If the service creates multiple listeners/consumers for the same queue on a direct exchange, the following mechanism applies:
By default, RabbitMQ sends each message to the next consumer in sequence, so on average every consumer gets the same number of messages. This way of distributing messages is called round-robin, and it means each message is delivered to only one of your instances, so no deduplication is needed.
Best tutorial for this topic: https://www.rabbitmq.com/tutorials/tutorial-two-java.html
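
For illustration (the queue name "order.created" is assumed, not from the question), two instances each running the same listener on the same queue act as competing consumers, and RabbitMQ delivers any particular message to exactly one of them:

import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.stereotype.Component;

@Component
class OrderCreatedListener {

    // Both ECS instances run this listener against the SAME queue.
    // Each message is delivered to only one instance (round-robin),
    // so no duplicate handling is required in the application.
    @RabbitListener(queues = "order.created")
    public void handleOrderCreated(String orderJson) {
        // process the order
    }
}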

Messages published to all consumers with the same consumer group in a spring-cloud-stream project

I have ZooKeeper and 3 Kafka brokers running locally.
I started one producer and one consumer, and I can see the consumer consuming messages.
I then started three consumers with the same consumer group name (on different ports, since it's a Spring Boot project). But what I found is that all the consumers are now consuming (receiving) the messages, whereas I expected them to be load-balanced so that messages are not repeated across the consumers. I don't know what the problem is.
Here is my property file:
spring.cloud.stream.bindings.input.destination=timerTopicLocal
spring.cloud.stream.kafka.binder.zkNodes=localhost
spring.cloud.stream.kafka.binder.brokers=localhost
spring.cloud.stream.bindings.input.group=timerGroup
Here the group is timerGroup.
consumer code : https://github.com/codecentric/edmp-sample-stream-sink
producer code : https://github.com/codecentric/edmp-sample-stream-source
Can you please update your dependencies to Camden.RELEASE (and start using Kafka 0.9+)? In Brixton.RELEASE, the Kafka consumers were 0.8-based and required passing instanceIndex/instanceCount as properties in order to distribute partitions correctly.
In Camden.RELEASE we use the Kafka 0.9+ consumer client, which does the load-balancing you are expecting (we also support static partition allocation via instanceIndex/instanceCount, but I suspect that is not what you want). I can go into more detail on how to configure this with Brixton, but I think an upgrade would be a much easier path.
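
If staying on Brixton, the workaround the answer refers to is a sketch along these lines (values are illustrative; each of the three consumer instances would get its own instanceIndex):

spring.cloud.stream.instanceCount=3
spring.cloud.stream.instanceIndex=0

On Camden with the 0.9+ consumer client, the existing spring.cloud.stream.bindings.input.group=timerGroup setting is enough for the partitions to be balanced across the group automatically.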

Queue consumer clusters with ActiveMQ

How do I configure a cluster of consumers in ActiveMQ?
I created a simple embedded ActiveMQ application with two consumers of one queue; the consumers work in separate threads. But when I send a message to the queue, JMS delivers it to the first consumer, no matter how long it sleeps after receiving.
I think you're trying to say that the first consumer is receiving all the messages. There is a FAQ entry for this type of problem available here:
http://activemq.apache.org/i-do-not-receive-messages-in-my-second-consumer.html
Bruce
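
The linked FAQ usually comes down to the consumer prefetch limit: with the default prefetch, the first consumer buffers all pending messages before the second one gets any. A minimal sketch of lowering it on the connection URL (broker URL is illustrative):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import org.apache.activemq.ActiveMQConnectionFactory;

public class LowPrefetchExample {
    public static void main(String[] args) throws Exception {
        // queuePrefetch=1 makes the broker hand out one message at a time,
        // so pending messages are spread across both consumers
        ConnectionFactory factory = new ActiveMQConnectionFactory(
                "tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=1");
        Connection connection = factory.createConnection();
        connection.start();
        // create sessions and consumers on the queue as in the original application
    }
}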

JBoss doesn't process JMS messages

Although JBoss seems to receive the JMS messages (I can list them through the jmx-console), it doesn't process them. They stay queued forever. What might be the reason for that?
Do you have a message consumer running to process the queue?
This could be something like a message-driven bean, or another JMS client connected to the queue.
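
For illustration (the queue JNDI name "queue/ExampleQueue" is assumed), a message-driven bean that would drain the queue looks roughly like this; without some consumer of this kind, the messages simply sit in the queue:

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "queue/ExampleQueue")
})
public class ExampleQueueConsumer implements MessageListener {

    @Override
    public void onMessage(Message message) {
        // process the message; once it is acknowledged, JBoss removes it from the queue
    }
}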
