How to have a dynamic selector with DefaultMessageListenerContainer (in Spring Boot)?

I have a Spring Boot application with ActiveMQ JMS. The application has a queue that receives messages carrying a string property, say color, whose value can be red, green or blue. The application also has a REST service through which it receives the list of color(s) (one or more) to be used as a SELECTOR when listening for messages on the queue. Over the lifetime of the application this list can change, so the value of the SELECTOR can look like
"color='red'", "color='blue' OR color='red'" or "color='green'".
@Bean
MessageListenerAdapter adapter() {
    return new MessageListenerAdapter(new Object() {
        // message handler
    });
}

@Bean
DefaultMessageListenerContainer container(ConnectionFactory cf) throws Exception {
    DefaultMessageListenerContainer c = new DefaultMessageListenerContainer();
    c.setMessageListener(adapter());
    c.setConcurrency(this.concurrency);
    c.setMessageSelector(this.selector);
    c.setConnectionFactory(cf);
    c.setDestinationName(this.q);
    return c;
}
I was planning to use the above code to achieve this; the code works fine with the initial selector, but when the selector needs to change, the following code does not work.
c.stop();
// modify value of selector
c.setMessageSelector(this.selector);
c.start();
It looks like I have a working solution. I put @Scope("prototype") on top of the container() method and have a method which instantiates a new DefaultMessageListenerContainer whenever the selector changes.
public void xx(String selector) {
    this.selector = selector;
    DefaultMessageListenerContainer c =
            context.getBean("container", DefaultMessageListenerContainer.class);
    c.start();
}
Is this the right way to go about this? Also, when the selector changes and I instantiate a new DefaultMessageListenerContainer, what's the correct way to shut down/stop the existing DefaultMessageListenerContainer?
regards,
Yogi

The prototype approach looks like a very bad idea to me.
MessageListenerContainer is meant to be a singleton that is responsible for handling listeners for a particular queue or topic configuration. The selector is part of the JMS spec, so you basically need to reconfigure the listener container at runtime, which requires you to fully stop the listener, change its configuration and restart it. I haven't run your code, but I don't see a reason why it wouldn't work.
Having said that, why are you using a selector for this? If the selector changes during the lifetime of your application, wouldn't it be better to perform that selection in your own logic? A selector at the JMS level is interesting if you have several message types on the same queue and you want different thread pools for them (i.e. you want 5 concurrent listeners for "red" and only 2 for "green", for instance). If you don't have that requirement, a generic route that filters the incoming messages is probably a better idea.
If you do have that requirement, stopping the container, changing its configuration and restarting it should work. Unfortunately it doesn't, so I've created SPR-14604 to track this issue.
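For reference, a minimal sketch of the stop-reconfigure-restart sequence discussed above, assuming a reference to the singleton container (updateSelector is a hypothetical method name). Because DefaultMessageListenerContainer caches its consumers, a full shutdown() followed by initialize() may be needed rather than a plain stop()/start() for the new selector to take effect:

public void updateSelector(String selector) {
    container.shutdown();                   // fully stop and close cached consumers
    container.setMessageSelector(selector); // apply the new selector
    container.initialize();                 // re-create the consumers
    container.start();                      // resume listening
}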

Related

JmsListener called again and again when an error happens in the method

In a Spring Boot application, I have a class with a JMS listener.
public class PaymentNotification {

    @JmsListener(destination = "payment")
    public void receive(String payload) throws Exception {
        // map/string conversion
        ....
        paymentEvent = billingService.insert(paymentEvent); // transactional method
        // call rest...
        billingService.save(paymentEvent);
        // send info to jms
    }
}
I noticed that when an error happens, data is inserted in the database (that is OK), but it's like the receive method is called again and again, even though the queue is empty when I check on the server.
If there is an error, I don't want the method to be called again. Is there something for that?
The JMS Message Headers might contain additional information to help with your processing. In particular JMSRedelivered could be of some value. The Oracle doc states that "If a client receives a message with the JMSRedelivered field set, it is likely, but not guaranteed, that this message was delivered earlier but that its receipt was not acknowledged at that time."
I ran the following code to explore what was available in my configuration (Spring Boot with IBM MQ).
@JmsListener(destination = "DEV.QUEUE.1")
public void receive(Message message) throws Exception {
    for (Enumeration<String> e = message.getPropertyNames(); e.hasMoreElements();) {
        System.out.println(e.nextElement());
    }
}
From here I could find that JMSXDeliveryCount is available in JMS 2.0. If that property is not available, you may well find something similar in your own configuration.
One strategy would be to use JMSXDeliveryCount, a vendor-specific property, or maybe JMSRedelivered (if suitable for your needs) as a check before you process the message. Typically, the message would be sent to a specific backout queue when the redelivery count exceeds a set threshold.
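As an illustration, here is a minimal sketch of that strategy; the threshold, the backout queue name and the jmsTemplate reference are assumptions, not part of the original code:

private static final int MAX_DELIVERIES = 3; // assumed threshold

@JmsListener(destination = "payment")
public void receive(javax.jms.Message message) throws javax.jms.JMSException {
    int deliveryCount = message.propertyExists("JMSXDeliveryCount")
            ? message.getIntProperty("JMSXDeliveryCount")
            : (message.getJMSRedelivered() ? 2 : 1); // fall back to JMSRedelivered
    if (deliveryCount > MAX_DELIVERIES) {
        // Forward the poison message to a backout queue instead of failing again
        // (forwarding the received Message this way is provider-dependent)
        jmsTemplate.send("payment.backout", session -> message);
        return;
    }
    process(message); // hypothetical business logic
}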
Depending on the messaging provider you are using, it might also be possible to configure backout queue processing as a property of the queue.

How to better correlate Spring Integration TCP Inbound and Outbound Adapters within the same application?

I currently have a Spring Integration application which is utilizing a number of TCP inbound and outbound adapter combinations for message handling. All of these adapter combinations utilize the same single MessageEndpoint for request processing and the same single MessagingGateway for response sending.
The MessageEndpoint’s final output channel is a DirectChannel that is also the DefaultRequestChannel of the MessageGateway. This DirectChannel utilizes the default RoundRobinLoadBalancingStrategy which is doing a Round Robin search for the correct Outbound Adapter to send the given response through. Of course, this round robin search does not always find the appropriate Outbound Adapter on first search and when it doesn’t it logs accordingly. Not only is this producing a large amount of unwanted logging but it also raises some performance concerns as I anticipate several hundred inbound/outbound adapter combinations existing at any given time.
I am wondering if there is a way in which I can more closely correlate the inbound and outbound adapters in a way that there is no need for the round robin processing and each response can be sent directly to the corresponding outbound adapter? Ideally, I would like this to be implemented in a way that the use of a single MessageEndpoint and single MessageGateway can be maintained.
Note: Please limit solutions to those which use the Inbound/Outbound Adapter combinations. The use of TcpInbound/TcpOutboundGateways is not possible for my implementation as I need to send multiple responses to a single request and, to my knowledge, this can only be done with the use of inbound/outbound adapters.
To add some clarity, below is a condensed version of the current implementation described. I have tried to clear out any unrelated code just to make things easier to read...
// Inbound/Outbound Adapter creation (part of a service that is used to dynamically
// create a varying number of inbound/outbound adapter combinations)
public void configureAdapterCombination(int port) {
    TcpNioServerConnectionFactory connectionFactory = new TcpNioServerConnectionFactory(port);
    // Connection Factory registered with Application Context bean factory (removed for readability)...

    TcpReceivingChannelAdapter inboundAdapter = new TcpReceivingChannelAdapter();
    inboundAdapter.setConnectionFactory(connectionFactory);
    inboundAdapter.setOutputChannel(context.getBean("sendFirstResponse", DirectChannel.class));
    // Inbound Adapter registered with Application Context bean factory (removed for readability)...

    TcpSendingMessageHandler outboundAdapter = new TcpSendingMessageHandler();
    outboundAdapter.setConnectionFactory(connectionFactory);
    // Outbound Adapter registered with Application Context bean factory (removed for readability)...

    context.getBean("outboundResponse", DirectChannel.class).subscribe(outboundAdapter);
}
// Message Endpoint for processing requests
@MessageEndpoint
public class RequestProcessor {

    @Autowired
    private OutboundResponseGateway outboundResponseGateway;

    // Direct Channel which is using Round Robin lookup
    @Bean
    public DirectChannel outboundResponse() {
        return new DirectChannel();
    }

    // Removed additional, unrelated, endpoints for readability...

    @ServiceActivator(inputChannel = "sendFirstResponse", outputChannel = "sendSecondResponse")
    public Message<String> sendFirstResponse(Message<String> message) {
        // Unrelated message processing/response generation excluded...
        outboundResponseGateway.sendOutboundResponse("First Response",
                message.getHeaders().get(IpHeaders.CONNECTION_ID, String.class));
        return message;
    }

    // Service Activator that puts the second response on the request channel of the Messaging Gateway
    @ServiceActivator(inputChannel = "sendSecondResponse", outputChannel = "outboundResponse")
    public Message<String> processQuery(Message<String> message) {
        // Unrelated message processing/response generation excluded...
        return MessageBuilder.withPayload("Second Response").copyHeaders(message.getHeaders()).build();
    }
}
// Messaging Gateway for sending responses
@MessagingGateway(defaultRequestChannel = "outboundResponse")
public interface OutboundResponseGateway {
    void sendOutboundResponse(@Payload String payload, @Header(IpHeaders.CONNECTION_ID) String connectionId);
}
SOLUTION:
@Artem's suggestions in the comments/answers below seem to do the trick. Just wanted to make a quick note about how I was able to add a replyChannel to each Outbound Adapter on creation.
What I did was create two maps that are being maintained by the application. The first map is populated whenever a new Inbound/Outbound adapter combination is created and it is a mapping of ConnectionFactory name to replyChannel name. The second map is a map of ConnectionId to replyChannel name and this is populated on any new TcpConnectionOpenEvent via an EventListener.
Note that every TcpConnectionOpenEvent will have a ConnectionFactoryName and ConnectionId property defined based on where/how the connection is established.
From there, whenever a new request is received, I use these maps and the 'ip_connectionId' header on the Message to add a replyChannel header to the Message. The first response is sent by manually grabbing the corresponding replyChannel (based on the value of the replyChannel header) from the application's context and sending the response on that channel. The second response is sent via Spring Integration using the replyChannel header on the message, as Artem describes in his responses.
This solution was implemented as a quick proof of concept and is just something that worked for my current implementation. Including this to hopefully jumpstart other viewer's own implementations/solutions.
Well, I see your point now about round-robin. You create many similar TCP channel adapters against the same channels. In this case it is indeed hard to distinguish one flow from another, because you have little control over those channels and their subscribers.
One solution would work great with the Spring Integration Java DSL and its dynamic flows: https://docs.spring.io/spring-integration/reference/html/dsl.html#java-dsl-runtime-flows
That way you would concentrate only on the flows and wouldn't need to worry about runtime registration. But since you are not there and you deal with plain Java & annotation configuration, it is much harder to achieve the goal. But still...
You may know that there is something like a replyChannel header. It is taken into account when there is no outputChannel configured. This way you would be able to have an isolated channel for each flow, and the configuration would be really the same for all the flows.
So:
1. I would create a new channel for each configureAdapterCombination() call.
2. Propagate this channel into that method and use it for replyChannel.subscribe(outboundAdapter);
3. Use this channel at the beginning of your particular flow to populate it into a replyChannel header.
This way your processQuery() service activator should go without an outputChannel: the proper outbound channel adapter is selected from the replyChannel header, giving you the correlation you need.
You don't need a @MessagingGateway for such a scenario, since we no longer have a fixed defaultRequestChannel. In the sendFirstResponse() service method you just take the replyChannel header and send a newly created message manually. Technically it is exactly the same as what you are trying to do with the mentioned @MessagingGateway.
For the Java DSL variant I would go with a filter on a PublishSubscribeChannel to discard those messages which don't belong to the current flow. Anyway, that is a different story.
Try to figure out how you can have a reply channel per flow when you configure a particular configureAdapterCombination(); see the sketch below.
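A minimal sketch of that idea, building on the configureAdapterCombination() method from the question; the per-port channel name, the registerSingleton() registration (assuming context is a ConfigurableApplicationContext) and the resolvePort() lookup are illustrative assumptions, not part of the original code:

public void configureAdapterCombination(int port) {
    TcpNioServerConnectionFactory connectionFactory = new TcpNioServerConnectionFactory(port);

    // One dedicated reply channel per inbound/outbound pair
    DirectChannel replyChannel = new DirectChannel();
    context.getBeanFactory().registerSingleton("replyChannel-" + port, replyChannel);

    TcpReceivingChannelAdapter inboundAdapter = new TcpReceivingChannelAdapter();
    inboundAdapter.setConnectionFactory(connectionFactory);
    inboundAdapter.setOutputChannel(context.getBean("sendFirstResponse", DirectChannel.class));

    TcpSendingMessageHandler outboundAdapter = new TcpSendingMessageHandler();
    outboundAdapter.setConnectionFactory(connectionFactory);

    // The outbound adapter subscribes to its own reply channel instead of the shared one
    replyChannel.subscribe(outboundAdapter);
}

// No outputChannel: the framework falls back to the replyChannel header for routing
@ServiceActivator(inputChannel = "sendSecondResponse")
public Message<String> processQuery(Message<String> message) {
    return MessageBuilder.withPayload("Second Response")
            .copyHeaders(message.getHeaders())
            .setReplyChannelName("replyChannel-" + resolvePort(message)) // hypothetical lookup
            .build();
}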

Spring-Kafka Concurrency Property

I am making progress on writing my first Kafka consumer using Spring-Kafka. I have had a look at the different options provided by the framework and have a few doubts. Can someone please clarify the points below if you have already worked with it?
Question - 1 : As per the Spring-Kafka documentation, there are 2 ways to implement a Kafka consumer: "You can receive messages by configuring a MessageListenerContainer and providing a message listener or by using the @KafkaListener annotation". Can someone tell me when I should choose one option over the other?
Question - 2 : I have chosen the @KafkaListener approach for writing my application. For this I need to initialize a container factory instance, and the container factory has an option to control concurrency. I just want to double-check whether my understanding of concurrency is correct.
Suppose I have a topic named MyTopic which has 4 partitions. To consume messages from MyTopic, I've started 2 instances of my application, each with concurrency set to 2. So, ideally, as per the Kafka assignment strategy, 2 partitions should go to consumer1 and the other 2 partitions to consumer2. Since concurrency is set to 2, will each consumer start 2 threads and consume data from the topic in parallel? Also, is there anything to consider when consuming in parallel?
Question - 3 : I have chosen manual ack mode and am not managing the offsets externally (not persisting them to any database/filesystem). So do I need to write custom code to handle rebalancing, or will the framework manage it automatically? I think not, as I am acknowledging only after processing all the records.
Question - 4 : Also, with manual ack mode, which listener will give better performance: the batch message listener or the normal message listener? I guess if I use the normal message listener, the offsets will be committed after each message is processed.
Pasted the code below for your reference.
Batch Acknowledgement Consumer:
public void onMessage(List<ConsumerRecord<String, String>> records, Acknowledgment acknowledgment,
        Consumer<?, ?> consumer) {
    for (ConsumerRecord<String, String> record : records) {
        System.out.println("Record : " + record.value());
        // Process the message here..
        listener.addOffset(record.topic(), record.partition(), record.offset());
    }
    acknowledgment.acknowledge();
}
Initialising container factory:
@Bean
public ConsumerFactory<String, String> consumerFactory() {
    return new DefaultKafkaConsumerFactory<String, String>(consumerConfigs());
}

@Bean
public Map<String, Object> consumerConfigs() {
    Map<String, Object> configs = new HashMap<String, Object>();
    configs.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootStrapServer);
    configs.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
    configs.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, enablAutoCommit);
    configs.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, maxPolInterval);
    configs.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, autoOffsetReset);
    configs.put(ConsumerConfig.CLIENT_ID_CONFIG, clientId);
    configs.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    configs.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    return configs;
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<String, String>();
    // Not sure about the impact of this property, so going with 1
    factory.setConcurrency(2);
    factory.setBatchListener(true);
    factory.getContainerProperties().setAckMode(AckMode.MANUAL);
    factory.getContainerProperties().setConsumerRebalanceListener(RebalanceListener.getInstance());
    factory.setConsumerFactory(consumerFactory());
    factory.getContainerProperties().setMessageListener(new BatchAckConsumer());
    return factory;
}
@KafkaListener is a message-driven "POJO"; it adds features such as payload conversion, argument matching, etc. If you implement MessageListener you can only get the raw ConsumerRecord from Kafka. See @KafkaListener Annotation.
Yes, the concurrency represents the number of threads; each thread creates a Consumer; they run in parallel; in your example, each would get 2 partitions.
Also should we consider anything if we are consuming in parallel.
Your listener must be thread-safe (no shared state, or any such state needs to be protected by locks).
It's not clear what you mean by "handle rebalance events". When a rebalance occurs, the framework will commit any pending offsets.
It doesn't make a difference; message listener vs. batch listener is just a preference. Even with a message listener, with MANUAL ack mode the offsets are committed when all the results from the poll have been processed. With MANUAL_IMMEDIATE mode, the offsets are committed one-by-one.
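For illustration, a minimal record-listener sketch of the MANUAL_IMMEDIATE variant described above; the topic name and the process() call are placeholders, and the container factory is assumed to have AckMode.MANUAL_IMMEDIATE set:

@KafkaListener(topics = "MyTopic")
public void listen(ConsumerRecord<String, String> record, Acknowledgment ack) {
    process(record.value()); // hypothetical business logic
    ack.acknowledge();       // with MANUAL_IMMEDIATE this commits this record's offset right away
}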
Q1:
From the documentation:
"The @KafkaListener annotation is used to designate a bean method as a listener for a listener container. The bean is wrapped in a MessagingMessageListenerAdapter configured with various features, such as converters to convert the data, if necessary, to match the method parameters. You can configure most attributes on the annotation with SpEL by using #{…} or property placeholders (${…}). See the Javadoc for more information."
This approach can be useful for simple POJO listeners, and you do not need to implement any interfaces. It also lets you listen on any topics and partitions in a declarative way using the annotations. You can also potentially return the value you received, whereas with a MessageListener you are bound by the signature of the interface.
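For example, a minimal sketch of such a declarative POJO listener; the topic property, group id and header usage are illustrative assumptions:

@KafkaListener(topics = "${app.topic}", groupId = "my-group")
public void consume(String payload,
        @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition) {
    // The payload has already been converted to String by the configured converter
    System.out.println("Received '" + payload + "' from partition " + partition);
}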
Q2:
Ideally, yes. If you have multiple topics to consume from, it gets more complicated, though. Kafka by default uses the RangeAssignor, which has its own behaviour (you can change this; see more details below).
Q3:
If your consumer dies, there will be rebalancing. If you acknowledge manually and your consumer dies before committing offsets, you do not need to do anything; Kafka handles that. But you could end up with some duplicate messages (at-least-once delivery).
Q4:
It depends on what you mean by "performance". If you mean latency, then consuming each record as fast as possible is the way to go. If you want to achieve high throughput, then batch consumption is more efficient.
I had written some samples using Spring Kafka and various listeners; check out this repo.

Workaround to fix StreamListener constant Channel Name

I am using Spring Cloud Stream to consume messages, with something like
@StreamListener(target = "CONSTANT_CHANNEL_NAME")
public void readingData(String input) {
    System.out.println("consumed info is " + input);
}
But I want the channel name to vary per environment and be picked from a property file, while as per Spring the channel name must be a constant.
Is there any work around to fix this problem?
Edit 1:
Let's look at the actual situation:
I am using multiple queues and DLQ queues, and their binding is done with RabbitMQ.
I want to change my channel names and queue names per environment.
I want to do it all on the same AMQP host.
My Sink Code
public interface ProcessorSink extends Sink {

    @Input(CONSTANT_CHANNEL_NAME)
    SubscribableChannel channel();

    @Input(CONSTANT_CHANNEL_NAME_1)
    SubscribableChannel channel2();

    @Input(CONSTANT_CHANNEL_NAME_2)
    SubscribableChannel channel3();
}
You can pick the target value from a property file as below:
@StreamListener(target = "${streamListener.target}")
public void readingData(String input) {
    System.out.println("consumed info is " + input);
}
application.yml
streamListener:
  target: CONSTANT_CHANNEL_NAME
While there are many ways to do that, I wonder why you even care. If anything, you do want to keep the channel name constant so it is always the same, and instead map it to different remote destinations (e.g., Kafka, Rabbit etc.) through configuration properties. For example, spring.cloud.stream.bindings.input.destination=myKafkaTopic states that the channel named input will be mapped to (bridged with) the Kafka topic named myKafkaTopic.
In fact, to further prove my point, we completely abstracted away channels altogether for users who use the spring-cloud-function programming model, but that is a whole different discussion.
My point is that I believe you are actually creating a problem rather than solving it, since by externalising the channel name you create the possibility that, due to misconfiguration, your actual bound channel and the channel you mention in your properties will not be the same.
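To make that suggestion concrete, a small configuration sketch: the channel name stays constant in code, and only the bound destination varies per environment (the destination and group names here are made up):

spring.cloud.stream.bindings.input.destination=user.queue.dev
spring.cloud.stream.bindings.input.group=my-app

In another environment the same input channel could be bound to, say, user.queue.prod without touching the code.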

Configuring a Dedicated Listener Container for each Queue using Spring AMQP Java Configuration

I have listeners configured in XML like this
<rabbit:listener-container connection-factory="connectionFactory" concurrency="1" acknowledge="manual">
    <rabbit:listener ref="messageListener" queue-names="${address.queue.s1}" exclusive="true"/>
    <rabbit:listener ref="messageListener" queue-names="${address.queue.s2}" exclusive="true"/>
    <rabbit:listener ref="messageListener" queue-names="${address.queue.s3}" exclusive="true"/>
    <rabbit:listener ref="messageListener" queue-names="${address.queue.s4}" exclusive="true"/>
    <rabbit:listener ref="messageListener" queue-names="${address.queue.s5}" exclusive="true"/>
    <rabbit:listener ref="messageListener" queue-names="${address.queue.s6}" exclusive="true"/>
</rabbit:listener-container>
I am trying to move that to Java configuration, and I don't see a way to add more than one MessageListener to a listener container. Creating multiple listener-container beans is not an option in my case, because I will not know the number of queues to consume from until runtime. The queue names will come from a configuration file.
I did the following
@PostConstruct
public void init() {
    for (String queue : queues.split(",")) {
        // The consumers would not connect if I don't call the 'start()' method.
        messageListenerContainer(queue).start();
    }
}

@Bean
public SimpleMessageListenerContainer messageListenerContainer(String queue) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(consumerConnectionFactory);
    container.setQueueNames(queue);
    container.setMessageListener(messageListener());
    // Set Exclusive Consumer 'ON'
    container.setExclusive(true);
    // Should be restricted to '1' to maintain data consistency.
    container.setConcurrentConsumers(1);
    container.setAcknowledgeMode(AcknowledgeMode.MANUAL);
    return container;
}
It "sort" of works BUT I see some weird behavior with lots of ghost channels getting opened which never used to happen with the XML configuration. So it makes me suspicious that I am doing something wrong. I would like to know the correct way of creating MessageListenerContainers in Java configuration? Simply put, "How does Spring convert 'rabbit:listener-container' with multiple 'rabbit:listener' to java objects properly?" Any help/insight into this would be greatly appreciated.
Business Case
We have a Publisher that publishes user profile updates. The publisher can dispatch multiple updates for the same user, and we have to process them in the correct order to maintain data integrity in the data store.
Example: User ABC. Publish -> {UsrA:Change1, ..., UsrA:Change2, ..., UsrA:Change3} -> the consumer HAS to process {UsrA:Change1, ..., UsrA:Change2, ..., UsrA:Change3} in that order.
In our previous setup, we had 1 queue that got all the user updates and a consumer app with concurrency = 5. There were multiple app servers running the consumer app. That resulted in 5 * (number of instances of the consumer app) channels/threads that could process the incoming messages. The speed was GREAT! But we were getting out-of-order processing quite often, resulting in data corruption.
To maintain strict FIFO order and still process messages in parallel as much as possible, we implemented queue sharding. We have an "x-consistent-hash" exchange with a hash-header on employee-id. Our publisher publishes messages to the hash exchange, and we have multiple sharded queues bound to the hash exchange. The idea is that all changes for a given user (User A, for example) are queued up in the same shard. We then have our consumers connect to the sharded queues in 'Exclusive' mode with 'ConcurrentConsumers = 1' and process the messages. That way we are sure to process messages in the correct order while still processing them in parallel. We could increase parallelism by increasing the number of shards.
Now on to the consumer configuration
We have the consumer app deployed on multiple app servers.
Original Approach:
I simply added multiple 'rabbit:listener' elements to my 'rabbit:listener-container' in my consumer app, as you can see above, and it works great, except that the server that starts first gets an exclusive lock on all the sharded queues and the other servers just sit there doing no work.
New Approach:
We moved the sharded queue names to the application configuration file. Like so
Consumer Instance 1 : Properties
queues=user.queue.s1,user.queue.s2,user.queue.s3
Consumer Instance 2 : Properties
queues=user.queue.s4,user.queue.s5,user.queue.s6
Also worth noting: we could have any number of consumer instances, and the shards could be distributed unevenly between instances depending on resource availability.
With the queue names moved to the configuration file, the XML configuration will no longer work, because we cannot dynamically add 'rabbit:listener' elements to the 'rabbit:listener-container' like we did before.
Then we decided to switch over to Java configuration. That is where we are STUCK!
We did this initially
@Bean
public SimpleMessageListenerContainer messageListenerContainer() {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(consumerConnectionFactory);
    container.setQueueNames(queues.split(","));
    container.setMessageListener(messageListener());
    container.setMissingQueuesFatal(false);
    // Set Exclusive Consumer 'ON'
    container.setExclusive(true);
    // Should be restricted to '1' to maintain data consistency.
    container.setConcurrentConsumers(1);
    container.setAcknowledgeMode(AcknowledgeMode.MANUAL);
    container.start();
    return container;
}
and it works, BUT all our queues are on one connection sharing 1 channel. That is NOT good for speed. What we want is one connection with every queue getting its own channel.
Next Step
No success here YET! The Java configuration in my original question is where we are now.
I am baffled why this is so HARD to do. Clearly the XML configuration does something that is not easily doable in Java configuration (or at least it feels that way to me). I see this as a gap that needs to be filled, unless I am completely missing something. Please correct me if I am wrong. This is a genuine business case, NOT some fictitious edge case. Please feel free to comment if you think otherwise.
and it works, BUT all our queues are on one connection sharing 1 channel. That is NOT good for speed. What we want is one connection with every queue getting its own channel.
If you switch to the DirectMessageListenerContainer, each queue in that configuration gets its own Channel.
See the documentation.
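For illustration, a hedged sketch of the same bean switched to a DirectMessageListenerContainer; the field names are the ones from the question, and consumersPerQueue = 1 preserves the strict ordering requirement:

@Bean
public DirectMessageListenerContainer messageListenerContainer() {
    DirectMessageListenerContainer container = new DirectMessageListenerContainer(consumerConnectionFactory);
    container.setQueueNames(queues.split(","));
    container.setMessageListener(messageListener());
    container.setExclusive(true);
    // One consumer per queue, each on its own Channel
    container.setConsumersPerQueue(1);
    container.setAcknowledgeMode(AcknowledgeMode.MANUAL);
    return container;
}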
To answer your original question (pre-edit):
@Bean
public SimpleMessageListenerContainer messageListenerContainer1(@Value("${address.queue.s1}") String queue) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(consumerConnectionFactory);
    container.setQueueNames(queue);
    container.setMessageListener(messageListener());
    // Set Exclusive Consumer 'ON'
    container.setExclusive(true);
    // Should be restricted to '1' to maintain data consistency.
    container.setConcurrentConsumers(1);
    container.setAcknowledgeMode(AcknowledgeMode.MANUAL);
    return container;
}

...

@Bean
public SimpleMessageListenerContainer messageListenerContainer6(@Value("${address.queue.s6}") String queue) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(consumerConnectionFactory);
    container.setQueueNames(queue);
    container.setMessageListener(messageListener());
    // Set Exclusive Consumer 'ON'
    container.setExclusive(true);
    // Should be restricted to '1' to maintain data consistency.
    container.setConcurrentConsumers(1);
    container.setAcknowledgeMode(AcknowledgeMode.MANUAL);
    return container;
}
Here is the Java configuration for creating a SimpleMessageListenerContainer:

@Value("#{'${queue.names}'.split(',')}")
private String[] queueNames;

@Bean
public SimpleMessageListenerContainer listenerContainer(final ConnectionFactory connectionFactory) {
    final SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
    container.setConnectionFactory(connectionFactory);
    container.setQueueNames(queueNames);
    container.setMessageListener(vehiclesReceiver());
    setCommonQueueProperties(container);
    return container;
}
Each <rabbit:listener> creates its own SimpleMessageListenerContainer bean with the same ConnectionFactory. To do the same in Java config, you have to declare as many SimpleMessageListenerContainer beans as you have queues: one for each of them.
You may also consider using the @RabbitListener approach instead: https://docs.spring.io/spring-amqp/docs/2.0.4.RELEASE/reference/html/_reference.html#async-annotation-driven
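A hedged sketch of that @RabbitListener alternative, reusing the comma-separated queues property from the question; the manual-ack handling assumes the container factory is configured with AcknowledgeMode.MANUAL, and process() is a hypothetical business method:

// One listener bound to all queues from the per-instance property,
// e.g. queues=user.queue.s1,user.queue.s2,user.queue.s3
@RabbitListener(queues = "#{'${queues}'.split(',')}", exclusive = true)
public void handle(Message message, Channel channel,
        @Header(AmqpHeaders.DELIVERY_TAG) long tag) throws IOException {
    process(message);             // hypothetical business logic
    channel.basicAck(tag, false); // manual ack, matching acknowledge="manual"
}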
