How to consume only one RabbitMQ queue in an application that has 3 queues? - spring-boot

I have a Spring Boot application that uses RabbitMQ and has 3 queues (queue1, queue2 and queue3).
In this application I have one listener that should only listen for messages on the queue named queue1 and ignore the other 2 queues, but it is getting messages from all queues.
This is my RabbitMQ config:
@Bean
public ConnectionFactory connectionFactory() {
    CachingConnectionFactory connectionFactory = new CachingConnectionFactory(this.host);
    connectionFactory.setPort(this.port);
    connectionFactory.setUsername(this.user);
    connectionFactory.setPassword(this.password);
    return connectionFactory;
}

@Bean
SimpleMessageListenerContainer container(MessageListenerAdapter listenerAdapter) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
    container.setConnectionFactory(this.connectionFactory());
    container.setQueueNames(this.startQueueQueueName, this.printQueueQueueName, this.errorQueueQueueName);
    container.setMessageListener(listenerAdapter);
    return container;
}

@Bean
MessageListenerAdapter listenerAdapter(RabbitDocumentToPrintListener receiver) {
    return new MessageListenerAdapter(receiver, "receiveMessage");
}
and this is my listener
public void receiveMessage(String message) throws Exception {
    this.logger.debug("Received message from Rabbit");
}
I've tried adding @RabbitListener(queues = "queue1", exclusive = true) to the listener, but it didn't work.
If someone could help me make this app consume only queue1, I'd appreciate it. Thanks!
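Since the manually defined SimpleMessageListenerContainer is what delivers the messages (so the @RabbitListener annotation has no effect here), a minimal sketch of the likely fix, assuming the listener should only receive from queue1, is to pass only that queue to setQueueNames:

// Sketch, not the original code: the container consumes only the queues named here,
// so listing just "queue1" leaves queue2 and queue3 untouched.
@Bean
SimpleMessageListenerContainer container(MessageListenerAdapter listenerAdapter) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
    container.setConnectionFactory(this.connectionFactory());
    container.setQueueNames("queue1"); // only queue1, instead of all three queue name fields
    container.setMessageListener(listenerAdapter);
    return container;
}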

Related

Configure dead letter queue (DLQ) for a JMS consumer

It is a rather simple question... I have a Spring project where I consume queues (CONSUMER).
Now I want to configure an individual dead letter queue for each queue I am consuming.
However, in my mind, the individual dead letter queue configuration must be done in the broker service (SERVER), not in the CONSUMER. Is that really so?
My code below WILL NOT work, correct?
@Bean
public DeadLetterStrategy deadLetterStrategy() {
    IndividualDeadLetterStrategy dlq = new IndividualDeadLetterStrategy();
    dlq.setQueueSuffix(".DLQ");
    dlq.setUseQueueForQueueMessages(true);
    return dlq;
}

@Bean
public ActiveMQConnectionFactory consumerActiveMQConnectionFactory() {
    var activeMQConnectionFactory = new ActiveMQConnectionFactory();
    activeMQConnectionFactory.setBrokerURL(brokerUrl);
    RedeliveryPolicy policy = activeMQConnectionFactory.getRedeliveryPolicy();
    policy.setMaximumRedeliveries(maximumRedeliveries);
    policy.setInitialRedeliveryDelay(0);
    policy.setBackOffMultiplier(3);
    policy.setUseExponentialBackOff(true);
    return activeMQConnectionFactory;
}

@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
    var factory = new DefaultJmsListenerContainerFactory();
    factory.setSessionAcknowledgeMode(JmsProperties.AcknowledgeMode.CLIENT.getMode());
    factory.setConcurrency(factoryConcurrency);
    factory.setConnectionFactory(consumerActiveMQConnectionFactory());
    return factory;
}

@Bean
public BrokerService broker() throws Exception {
    final BrokerService broker = new BrokerService();
    broker.addConnector(brokerUrl);
    broker.setPersistent(false);
    broker.setDestinationPolicy(policyMap());
    return broker;
}

@Bean
public PolicyMap policyMap() {
    PolicyMap destinationPolicy = new PolicyMap();
    List<PolicyEntry> entries = new ArrayList<PolicyEntry>();
    PolicyEntry queueEntry = new PolicyEntry();
    queueEntry.setQueue(">"); // In ActiveMQ '>' does the same thing as '*' does in other systems
    queueEntry.setDeadLetterStrategy(deadLetterStrategy());
    entries.add(queueEntry);
    destinationPolicy.setPolicyEntries(entries);
    return destinationPolicy;
}
@JmsListener(destination = "myqueue")
public void onMessage(Message message, Session session) throws JMSException {
    try {
        stuff();
        message.acknowledge();
    } catch (Exception ex) {
        session.recover();
    }
}
A JMS consumer in ActiveMQ 5.x cannot configure the broker-side dead letter strategy; this must be done at the broker, either in the configuration XML or via programmatic broker configuration. You could configure it in Spring as you've done if your broker is simply an in-memory broker, but that is of little use for most applications.
Refer to the broker documentation for more help on configuring the broker.
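What the consumer side can still do, sketched below under the assumption that the broker has already been configured with an IndividualDeadLetterStrategy, is subscribe to the resulting per-destination dead letter queue like any other destination. The destination name here is hypothetical and depends on the configured prefix/suffix (for example "DLQ.myqueue" with the default prefix):

// Sketch only: the broker routes poisoned messages from "myqueue" to its individual
// DLQ; the consumer just listens on that queue. The destination name is an assumption.
@JmsListener(destination = "DLQ.myqueue")
public void onDeadLetter(Message message) throws JMSException {
    // inspect, log, or re-route the dead-lettered message here
}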

Spring Boot + RabbitMQ: process messages in parallel

I have observed that I have multiple messages in the queue, but my worker is picking up messages one by one; it is not processing messages in parallel. What am I doing wrong here? How can I process multiple messages in parallel to make the most of my worker's processing capacity? Any best practices?
I am not sure what the concurrency value of 8 is doing here.
application.yml
spring:
  rabbitmq:
    host:
    port:
    username:
    virtual-host:
    password:
    listener:
      simple:
        concurrency: 8
        prefetch: 8
Bean config:
@Bean
Queue queue() {
    return new Queue("testQ", true);
}

@Bean
TopicExchange exchange() {
    return new TopicExchange("testE");
}

@Bean
Binding binding(Queue queue, TopicExchange exchange) {
    return BindingBuilder.bind(queue).to(exchange).with("a.b.c");
}

@Bean
SimpleMessageListenerContainer container(ConnectionFactory connectionFactory,
        MessageListenerAdapter listenerAdapter) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
    container.setConnectionFactory(connectionFactory);
    container.setQueueNames("testQ");
    container.setMessageListener(listenerAdapter);
    return container;
}

@Bean
MessageListenerAdapter listenerAdapter(Receiver receiver) {
    return new MessageListenerAdapter(receiver, "handleMessage");
}
I think if you create the SimpleMessageListenerContainer in your code yourself, you can specify the "concurrency" parameter right in your code. For example:
@Bean
SimpleMessageListenerContainer container(ConnectionFactory connectionFactory,
        MessageListenerAdapter listenerAdapter) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
    container.setConnectionFactory(connectionFactory);
    container.setQueueNames("testQ");
    container.setConcurrency("8"); // set the 'concurrency' property for your container
    container.setMessageListener(listenerAdapter);
    return container;
}
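An alternative sketch, assuming Spring Boot auto-configuration is in use: if the manual container bean is dropped and the listener is declared with @RabbitListener, Boot builds the listener container itself, so the spring.rabbitmq.listener.simple.concurrency and prefetch values from application.yml are actually applied (the class and method names below follow the Receiver/handleMessage wiring from the question, but the exact shape is illustrative):

@Component
public class Receiver {

    // Boot's auto-configured listener container factory applies the
    // listener.simple.* properties (concurrency: 8, prefetch: 8) to this listener.
    @RabbitListener(queues = "testQ")
    public void handleMessage(String message) {
        // process the message
    }
}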

Spring AMQP stops sending or consuming messages

I have an application which receives a message from one queue, processes it and sends it to another queue. When it's receiving a lot of messages (20 thousand or more), Spring shows me this message when it tries to send the message to another queue:
connection error; protocol method: #method<connection.close>(reply-code=504 reply-text=CHANNEL_ERROR - second 'channel.open' seen class-id=20 method-id=10)
So I raised the channel cache size and created two CachingConnectionFactory instances, one for the consumer and another for the producer. For this configuration I followed a note from the Spring documentation:
When the application is configured with a single CachingConnectionFactory, as it is by default with Spring Boot auto-configuration, the application will stop working when the connection is blocked by the Broker, and all of its clients stop working as well. If we have producers and consumers in the same application, we may end up with a deadlock when producers are blocking the connection because there are no resources on the Broker anymore and consumers can’t free them because the connection is blocked. To mitigate the problem, it is enough to have one more separate CachingConnectionFactory instance with the same options - one for producers and one for consumers. A separate CachingConnectionFactory isn’t recommended for transactional producers, since they should reuse a Channel associated with the consumer transactions.
Following these recommendations, the error message disappeared, but now the application suddenly stops: it's not sending or receiving new messages and all queues are idle. It's kind of strange because the listener has a low concurrency number. What am I missing?
Configuration:
Spring Boot: 2.0.8.RELEASE
Spring AMQP: 2.0.11.RELEASE
RabbitMQ: 3.8.8
spring:
  rabbitmq:
    listener:
      simple:
        default-requeue-rejected: false
        concurrency: 5
        max-concurrency: 8
    cache:
      channel:
        size: 1000
@Bean
public ConnectionFactory consumerConnectionFactory() {
    CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
    connectionFactory.setHost(properties.getHost());
    connectionFactory.setPort(properties.getPort());
    connectionFactory.setUsername(properties.getUsername());
    connectionFactory.setPassword(properties.getPassword());
    connectionFactory.setChannelCacheSize(properties.getCache().getChannel().getSize());
    connectionFactory.setConnectionNameStrategy(cns());
    return connectionFactory;
}

@Bean
public ConnectionFactory producerConnectionFactory() {
    CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
    connectionFactory.setHost(properties.getHost());
    connectionFactory.setPort(properties.getPort());
    connectionFactory.setUsername(properties.getUsername());
    connectionFactory.setPassword(properties.getPassword());
    connectionFactory.setChannelCacheSize(properties.getCache().getChannel().getSize());
    connectionFactory.setConnectionNameStrategy(cns());
    return connectionFactory;
}
@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(
        @Qualifier("consumerConnectionFactory") ConnectionFactory consumerConnectionFactory,
        SimpleRabbitListenerContainerFactoryConfigurer configurer,
        RabbitProperties properties) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setErrorHandler(errorHandler());
    factory.setConcurrentConsumers(properties.getListener().getSimple().getConcurrency());
    factory.setMaxConcurrentConsumers(properties.getListener().getSimple().getMaxConcurrency());
    configurer.configure(factory, consumerConnectionFactory);
    return factory;
}
@Bean
@Primary
public RabbitAdmin producerRabbitAdmin() {
    return new RabbitAdmin(producerConnectionFactory());
}

@Bean
public RabbitAdmin consumerRabbitAdmin() {
    return new RabbitAdmin(consumerConnectionFactory());
}

@Bean
@Primary
public RabbitTemplate producerRabbitTemplate() {
    RabbitTemplate rabbitTemplate = new RabbitTemplate(producerConnectionFactory());
    rabbitTemplate.setMessageConverter(jackson2JsonMessageConverter());
    return rabbitTemplate;
}

@Bean
public RabbitTemplate consumerRabbitTemplate() {
    RabbitTemplate rabbitTemplate = new RabbitTemplate(consumerConnectionFactory());
    rabbitTemplate.setMessageConverter(jackson2JsonMessageConverter());
    return rabbitTemplate;
}
After analyzing the issue, the problem turned out to be the Java memory heap limit. I also updated my configuration: I removed the ConnectionFactory beans and set the RabbitTemplate to use the publisher connection.
So I ended up with this:
@Bean
@Primary
public RabbitTemplate producerRabbitTemplate(ConnectionFactory connectionFactory) {
    RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
    rabbitTemplate.setMessageConverter(jackson2JsonMessageConverter());
    rabbitTemplate.setUsePublisherConnection(true);
    return rabbitTemplate;
}

@Bean
public RabbitTemplate consumerRabbitTemplate(ConnectionFactory connectionFactory) {
    RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
    rabbitTemplate.setMessageConverter(jackson2JsonMessageConverter());
    return rabbitTemplate;
}

@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(
        ConnectionFactory connectionFactory,
        SimpleRabbitListenerContainerFactoryConfigurer configurer,
        RabbitProperties properties) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setErrorHandler(errorHandler());
    factory.setConcurrentConsumers(properties.getListener().getSimple().getConcurrency());
    factory.setMaxConcurrentConsumers(properties.getListener().getSimple().getMaxConcurrency());
    configurer.configure(factory, connectionFactory);
    return factory;
}
With this configuration, memory consumption was reduced and I was able to raise the consumer concurrency numbers:
spring:
  rabbitmq:
    listener:
      simple:
        default-requeue-rejected: false
        concurrency: 10
        max-concurrency: 15
    cache:
      channel:
        size: 1000
I'm now looking for the right channel cache size and trying to raise the concurrency numbers even further.

RabbitMQ Spring Boot AMQP - consume with concurrent threads

I want my app to handle multiple messages received from RabbitMQ concurrently.
I have tried probably all the google-page-1 solutions but it won't work.
Here is my setup:
POM.xml
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>1.5.10.RELEASE</version>
</parent>
.
.
.
<dependency>
<groupId>org.springframework.amqp</groupId>
<artifactId>spring-rabbit</artifactId>
</dependency>
application.properties:
#############################
# RabbitMQ #
#############################
#AMQP RabbitMQ configuration
spring.rabbitmq.host=zzzzzzzz
spring.rabbitmq.port=5672
spring.rabbitmq.username=zzzzzzz
spring.rabbitmq.password=zzzzzzz
#Rabbit component names
com.cp.neworder.queue.name = new-order-queue-stg
com.cp.neworder.queue.exchange = new-order-exchange-stg
com.cp.completedorder.queue.name = completed-order-queue
com.cp.completedorder.queue.exchange = completed-order-exchange
#RabbitMQ concurrent consumers config
spring.rabbitmq.listener.simple.concurrency=3
spring.rabbitmq.listener.simple.retry.initial-interval=3000
Configuration file:
@Configuration
public class RabbitMQConfig {

    @Value("${com.cp.neworder.queue.name}")
    private String newOrderQueueName;

    @Value("${com.cp.neworder.queue.exchange}")
    private String newOrderExchangeName;

    @Bean
    Queue queue() {
        return new Queue(newOrderQueueName, true);
    }

    @Bean
    TopicExchange exchange() {
        return new TopicExchange(newOrderExchangeName);
    }

    @Bean
    Binding binding(Queue queue, TopicExchange exchange) {
        return BindingBuilder.bind(queue).to(exchange).with(newOrderQueueName);
    }

    @Bean
    SimpleMessageListenerContainer container(ConnectionFactory connectionFactory, MessageListenerAdapter listenerAdapter) {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setQueueNames(newOrderQueueName);
        container.setMessageListener(listenerAdapter);
        return container;
    }

    @Bean
    MessageListenerAdapter listenerAdapter(OrderMessageListener receiver) {
        return new MessageListenerAdapter(receiver, "receiveOrder");
    }
}
My consumer class works as intended, but it processes only one request at a time. How do I know?
I save the progress of my async requests in the DB, so I can query how many are processing at any moment, and it's always just 1.
I can also look at the RabbitMQ Management UI and see that messages are being dequeued one by one.
What are the mistakes in my setup? How do I get it to work?
Thanks.
SimpleMessageListenerContainer has a way to set the concurrent consumers: its setConcurrentConsumers method lets you set the number of consumers.
@Bean
SimpleMessageListenerContainer container(ConnectionFactory connectionFactory, MessageListenerAdapter listenerAdapter) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
    container.setConnectionFactory(connectionFactory);
    container.setQueueNames(newOrderQueueName);
    container.setMessageListener(listenerAdapter);
    container.setConcurrentConsumers(10);
    return container;
}
With this configuration, when you start the application, you will be able to see multiple consumers in the RabbitMQ admin UI.
You are not using Boot to create the container, so the Boot properties aren't applied.
Try
@Bean
SimpleMessageListenerContainer container(ConnectionFactory connectionFactory, MessageListenerAdapter listenerAdapter,
        RabbitProperties properties) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
    container.setConnectionFactory(connectionFactory);
    container.setQueueNames(newOrderQueueName);
    container.setMessageListener(listenerAdapter);
    container.setConcurrentConsumers(properties.getListener().getSimple().getConcurrency());
    return container;
}

Spring AMQP Transactional Processing and Retry

I am trying out the Spring AMQP features regarding transactional message processing.
I have the following setup - I have a message consumer that is annotated as @Transactional
@Transactional
public void handleMessage(EventPayload event) {
    Shop shop = new Shop();
    shop.setName(event.getName());
    Shop savedShop = shopService.create(shop);
    log.info("Created shop {} from event {}", shop, event);
}
In shopService.create I save the shop and send another message about the creation:
@Transactional(propagation = REQUIRED)
@Component
public class ShopService {
    ...
    public Shop create(Shop shop) {
        eventPublisher.publish(new EventPayload(shop.getName()));
        return shopRepository.save(shop);
    }
}
I want to achieve the following - the message sent in the create method should only go to the broker if the database action succeeds. If it fails, the message is not sent and the received message is rolled back.
I also have retry configured - so I expect each message to be retried 3 times before it is rejected:
@Bean
public RetryOperationsInterceptor retryOperationsInterceptor() {
    return RetryInterceptorBuilder.stateless()
            .maxAttempts(3)
            .backOffOptions(1000, 2.0, 10000)
            .build();
}
I am observing the following behaviour:
When I configure the container as follows, the message is retried 3 times, but every time the message from shopService.create is sent to the broker:
@Bean
SimpleMessageListenerContainer messageListenerContainer(ConnectionFactory connectionFactory,
        MessageListenerAdapter listenerAdapter) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
    container.setConnectionFactory(connectionFactory);
    container.setQueueNames(testEventSubscriberQueue().getName());
    container.setMessageListener(listenerAdapter);
    container.setChannelTransacted(true);
    container.setAdviceChain(new Advice[]{retryOperationsInterceptor()});
    return container;
}
So I tried passing the PlatformTransactionManager to the container -
@Bean
SimpleMessageListenerContainer messageListenerContainer(ConnectionFactory connectionFactory,
        MessageListenerAdapter listenerAdapter,
        PlatformTransactionManager transactionManager) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
    container.setConnectionFactory(connectionFactory);
    container.setQueueNames(testEventSubscriberQueue().getName());
    container.setMessageListener(listenerAdapter);
    container.setChannelTransacted(true);
    container.setTransactionManager(transactionManager);
    container.setAdviceChain(new Advice[]{retryOperationsInterceptor()});
    return container;
}
Now the message sent in shopService.create is only sent to the broker if the database transaction succeeded - which is what I want - but the message is now retried indefinitely, not discarded after 3 retries as configured. It seems that the backOff settings are still applied, though - so there is some time between the retries.
The setup described does not really make sense from a business point of view - I am trying to understand and evaluate the transaction capabilities.
I am using spring-amqp 1.5.1.RELEASE.
Thanks for any hints.
I had the same requirements: an @RabbitListener annotated with @Transactional, and I wanted retry with backoff. It works, even stateless, with the following config:
@Bean
public RetryOperationsInterceptor retryOperationsInterceptor() {
    return RetryInterceptorBuilder.stateless()
            .maxAttempts(3)
            .recoverer(new RejectAndDontRequeueRecoverer())
            .backOffOptions(1000, 2, 10000)
            .build();
}

@Bean
public Jackson2JsonMessageConverter producerJackson2MessageConverter(ObjectMapper objectMapper) {
    Jackson2JsonMessageConverter jackson2JsonMessageConverter = new Jackson2JsonMessageConverter(objectMapper);
    jackson2JsonMessageConverter.setCreateMessageIds(true);
    return jackson2JsonMessageConverter;
}

@Bean
SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory,
        PlatformTransactionManager transactionManager,
        Jackson2JsonMessageConverter converter) {
    SimpleRabbitListenerContainerFactory container = new SimpleRabbitListenerContainerFactory();
    container.setConnectionFactory(connectionFactory);
    container.setChannelTransacted(true);
    container.setTransactionManager(transactionManager);
    container.setAdviceChain(retryOperationsInterceptor());
    container.setMessageConverter(converter);
    return container;
}
When using stateless(), the RejectAndDontRequeueRecoverer was important because otherwise the retries work, but the consumer then puts the message back on the queue by default. The consumer retrieves it again, applies the retry policy, and puts it back on the queue, infinitely.
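For completeness, a hedged sketch of the listener side that this factory would serve, based on the setup described in the question (the queue name and class name are illustrative, not taken from the original code):

@Component
public class ShopEventListener {

    // Retries run inside the transactional channel; after 3 failed attempts the
    // RejectAndDontRequeueRecoverer rejects the message instead of requeueing it.
    @RabbitListener(queues = "shop-events", containerFactory = "rabbitListenerContainerFactory")
    @Transactional
    public void handleMessage(EventPayload event) {
        // database work and outgoing publishes here roll back together on failure
    }
}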
