I'm trying to use RabbitMQ streams (available as of RabbitMQ 3.9) with Spring Boot's org.springframework.boot:spring-boot-starter-amqp starter.
First I started RabbitMQ via Docker Compose:

rabbitmq:
  image: rabbitmq:management
  ports:
    - "5672:5672"
    - "15672:15672"

and then ran this command to enable the stream plugin:

docker exec docker_rabbitmq_1 rabbitmq-plugins enable rabbitmq_stream
By declaring the queue

@Bean
Queue queue2() {
    Map<String, Object> args = new HashMap<>();
    args.put("x-queue-type", "stream");
    return new Queue("test3-queue", true, false, false, args);
}
I configure the queue as a stream, and the broker says in its logs:

2021-09-22 21:06:46.400520+00:00 [warn] <0.1090.0> rabbit_stream_coordinator: started writer __test3-queue_1632344806396284388 on rabbit@1115e1f022e2 in 1

This should be enough configuration to start using RabbitMQ's new Streams feature with the given queue.
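As an aside, recent Spring AMQP versions also expose a QueueBuilder shortcut that should be equivalent to the manual arguments map above (which version introduced it is an assumption on my part, so verify before relying on it):

@Bean
Queue streamQueue() {
    // stream() sets the x-queue-type=stream argument on the declared queue
    return QueueBuilder.durable("test3-queue").stream().build();
}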
When I send a message through the RabbitTemplate

template.convertAndSend("test3-queue", request.getMessage());
all my listeners
@RabbitListener(id = "listener1", queues = "#{queue2}")
public void listen1(String in) {
    log.info("AMQP listener 1: {}", in);
}

@RabbitListener(id = "listener2", queues = "#{queue2}")
public void listen2(String in) {
    log.info("AMQP listener 2: {}", in);
}

@RabbitListener(id = "listener3", queues = "#{queue2}")
public void listen3(String in) {
    log.info("AMQP listener 3: {}", in);
}
receive the message and log it. According to the docs, referencing the queue by SpEL resolves to the queue configured with x-queue-type: stream.
Since Spring Boot's AMQP starter talks to the broker over the AMQP 0.9.1 protocol, this should work, but it does not: when I add a new listener, old messages are not processed by it.
Is it even possible to use @RabbitListener with the new Streams feature, or am I too early in adopting the append-only log with the Rabbit broker?
Should I use Kafka instead just because Spring has not yet implemented support for RabbitMQ streams?
RabbitMQ comes with its own Java library for handling stream events, which works well, but it is missing the simplicity of Spring's underlying heavy lifting.
EDIT:
I've got a bit further by configuring the RabbitListenerContainerFactory with a customizer:
@Bean
RabbitListenerContainerFactory rabbitListenerContainerFactory(
        SimpleRabbitListenerContainerFactoryConfigurer configurer, ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    configurer.configure(factory, connectionFactory);
    factory.setContainerCustomizer(c -> {
        Map<String, Object> args = new HashMap<>(c.getConsumerArguments());
        args.put("x-stream-offset", "first");
        c.setConsumerArguments(args);
    });
    return factory;
}
This causes the new listener to read messages from the beginning (offset 0). The odd thing is that ALL listeners now read from offset 0.
I guess it's because the consumers (listeners) do not have their names set correctly?
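A plausible explanation (an assumption on my part, not confirmed above): the customizer is applied to every container the factory builds, so every listener gets x-stream-offset=first, and over plain AMQP 0.9.1 the broker does no server-side offset tracking per named consumer. A minimal sketch of a workaround is a second, dedicated factory so that only a catch-up listener replays the stream (the bean and listener names here are hypothetical):

@Bean
SimpleRabbitListenerContainerFactory replayContainerFactory(
        SimpleRabbitListenerContainerFactoryConfigurer configurer, ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    configurer.configure(factory, connectionFactory);
    // only containers created by THIS factory start reading at the first offset
    factory.setContainerCustomizer(c -> c.setConsumerArguments(Map.of("x-stream-offset", "first")));
    return factory;
}

@RabbitListener(id = "replayListener", queues = "#{queue2}", containerFactory = "replayContainerFactory")
public void replay(String in) {
    log.info("Replay listener: {}", in);
}

Listeners that keep the default factory then consume with the stream's default offset ("next"), i.e. new messages only, instead of replaying everything.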
Related
I am designing a simple system where the flow is going to be like this:
Message Producer Microservice --> Active MQ --> Message Consumer Microservice --> Mongo DB
I need to design a queuing strategy so that if MongoDB is down I do not lose messages (because the Message Consumer will already have dequeued them).
My consumer is written like this:
@JmsListener(destination = "Consumer.myconsumer.VirtualTopic.Tracking")
public void onReceiveFromQueueConsumer2(TrackingRequest trackingRequest) {
    log.debug("Received tracking request from the queue by consumer 2");
    log.debug(trackingRequest.toString());
}
How do you provide client acknowledgement?
You can use client acknowledge mode from your "Message Consumer Microservice." Since you're using a Spring JmsListener you can define the listener container using the containerFactory and then you can set the mode you want on your listener container using sessionAcknowledgeMode. See the Spring documentation for more details on what ack mode you might want to use here.
From the perspective of the ActiveMQ client you can configure redelivery semantics however you like in case of a failure. See the ActiveMQ documentation for more about that.
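For illustration, a minimal sketch of a container factory using client acknowledgement (the bean name is a placeholder; reference it from @JmsListener via containerFactory = "clientAckFactory"):

@Bean
public DefaultJmsListenerContainerFactory clientAckFactory(ConnectionFactory connectionFactory) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    // CLIENT_ACKNOWLEDGE: Spring's container acknowledges only after the listener
    // completes normally; on failure the broker can redeliver the message
    factory.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
    return factory;
}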
Alright, so I was able to solve this dilemma; here is what your config should look like (thanks to Justin for his valuable inputs):
@Bean
public ActiveMQConnectionFactory connectionFactory() {
    ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory();
    connectionFactory.setBrokerURL(brokerUrl);
    connectionFactory.setUserName(userName);
    connectionFactory.setPassword(password);
    connectionFactory.setTrustAllPackages(true);
    connectionFactory.setRedeliveryPolicy(redeliveryPolicy());
    return connectionFactory;
}

@Bean
public JmsTemplate jmsTemplate() {
    JmsTemplate template = new JmsTemplate();
    template.setConnectionFactory(connectionFactory());
    return template;
}

@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
    DefaultJmsListenerContainerFactory listenerCF = new DefaultJmsListenerContainerFactory();
    listenerCF.setConnectionFactory(connectionFactory());
    // the transacted session takes precedence over the acknowledge mode; a listener
    // exception rolls the session back and triggers redelivery
    listenerCF.setSessionAcknowledgeMode(Session.AUTO_ACKNOWLEDGE);
    listenerCF.setSessionTransacted(true);
    return listenerCF;
}

@Bean
public RedeliveryPolicy redeliveryPolicy() {
    RedeliveryPolicy redeliveryPolicy = new RedeliveryPolicy();
    redeliveryPolicy.setRedeliveryDelay(600000L); // keep trying every 10 minutes
    redeliveryPolicy.setMaximumRedeliveries(-1);  // keep trying until it is successfully inserted
    return redeliveryPolicy;
}
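With this configuration in place, the consumer only needs to throw when the MongoDB write fails, so the transacted session rolls back and the broker redelivers the message. A minimal sketch (trackingRepository is a hypothetical MongoDB repository, not part of the original code):

@JmsListener(destination = "Consumer.myconsumer.VirtualTopic.Tracking")
public void onReceiveFromQueueConsumer2(TrackingRequest trackingRequest) {
    log.debug("Received tracking request from the queue by consumer 2");
    // any RuntimeException rolls back the transacted session, so the message
    // is redelivered according to the RedeliveryPolicy defined above
    trackingRepository.save(trackingRequest);
}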
I am migrating a Spring Boot microservice that consumes data from 3 RabbitMQ queues on server A, saves it into Redis, and finally produces messages into an exchange on a different RabbitMQ on server B so these messages can be consumed by another microservice. This flow works fine, but I would like to migrate it to Spring Cloud Stream using the RabbitMQ binder. All Spring AMQP configuration is customised in the properties file, and no Spring property is used to create connections, queues, bindings, etc...
My first idea was setting up two bindings in Spring Cloud Stream, one connected to server A (consumer) and the other connected to server B (producer), and migrating the existing code to a Processor, but I discarded it because it seems connection names cannot be set yet if multiple binders are used, I need to add several bindings to consume from server A's queues, and the bindingRoutingKey property does not support a list of values (I know it can be done programmatically, as explained here).
So I decided to refactor only the producer-related part of the code to use Spring Cloud Stream over RabbitMQ, so the same microservice consumes from server A via Spring AMQP (original code) and produces into server B via Spring Cloud Stream.
The first issue I found was a NoUniqueBeanDefinitionException in Spring Cloud Stream, because a org.springframework.messaging.handler.annotation.support.MessageHandlerMethodFactory bean was defined twice, under the handlerMethodFactory and integrationMessageHandlerMethodFactory names.
org.springframework.beans.factory.NoUniqueBeanDefinitionException: No qualifying bean of type 'org.springframework.messaging.handler.annotation.support.MessageHandlerMethodFactory' available: expected single matching bean but found 2: handlerMethodFactory,integrationMessageHandlerMethodFactory
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveNamedBean(DefaultListableBeanFactory.java:1144)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveBean(DefaultListableBeanFactory.java:411)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBean(DefaultListableBeanFactory.java:344)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBean(DefaultListableBeanFactory.java:337)
at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1123)
at org.springframework.cloud.stream.binding.StreamListenerAnnotationBeanPostProcessor.injectAndPostProcessDependencies(StreamListenerAnnotationBeanPostProcessor.java:317)
at org.springframework.cloud.stream.binding.StreamListenerAnnotationBeanPostProcessor.afterSingletonsInstantiated(StreamListenerAnnotationBeanPostProcessor.java:113)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:862)
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:877)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:549)
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:141)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:743)
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:390)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:312)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1214)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1203)
It seems the former bean is created by Spring AMQP and the latter by Spring Cloud Stream, so I created my own primary bean:

@Bean
@Primary
public MessageHandlerMethodFactory messageHandlerMethodFactory() {
    return new DefaultMessageHandlerMethodFactory();
}
Now the application is able to start, but the output channel is created by Spring Cloud Stream on server A instead of server B. It seems the Spring Cloud Stream configuration is using the connection created by Spring AMQP instead of its own configuration.
The configuration of Spring AMQP is this:
@Bean
public SimpleRabbitListenerContainerFactory priceRabbitListenerContainerFactory(
        ConnectionFactory consumerConnectionFactory) {
    return
        getSimpleRabbitListenerContainerFactory(
            consumerConnectionFactory,
            rabbitProperties.getConsumer().getListeners().get(LISTENER_A));
}

@Bean
public SimpleRabbitListenerContainerFactory maxbetRabbitListenerContainerFactory(
        ConnectionFactory consumerConnectionFactory) {
    return
        getSimpleRabbitListenerContainerFactory(
            consumerConnectionFactory,
            rabbitProperties.getConsumer().getListeners().get(LISTENER_B));
}

@Bean
public ConnectionFactory consumerConnectionFactory() throws Exception {
    return
        new CachingConnectionFactory(
            getRabbitConnectionFactoryBean(
                rabbitProperties.getConsumer()
            ).getObject()
        );
}

private SimpleRabbitListenerContainerFactory getSimpleRabbitListenerContainerFactory(
        ConnectionFactory connectionFactory,
        RabbitProperties.ListenerProperties listenerProperties) {
    // return a SimpleRabbitListenerContainerFactory set up from external properties
}
/**
 * Create the AMQP admin.
 */
@Bean
public AmqpAdmin consumerAmqpAdmin(ConnectionFactory consumerConnectionFactory) {
    return new RabbitAdmin(consumerConnectionFactory);
}

/**
 * Create the map of available queues and declare them in the admin.
 */
@Bean
public Map<String, Queue> queues(AmqpAdmin consumerAmqpAdmin) {
    return
        rabbitProperties.getConsumer().getListeners().entrySet().stream()
            .map(listenerEntry -> {
                Queue queue =
                    QueueBuilder
                        .nonDurable(listenerEntry.getValue().getQueueName())
                        .autoDelete()
                        .build();
                consumerAmqpAdmin.declareQueue(queue);
                return new AbstractMap.SimpleEntry<>(listenerEntry.getKey(), queue);
            }).collect(
                Collectors.toMap(
                    AbstractMap.SimpleEntry::getKey,
                    AbstractMap.SimpleEntry::getValue
                )
            );
}

/**
 * Create the map of available exchanges and declare them in the admin.
 */
@Bean
public Map<String, TopicExchange> exchanges(AmqpAdmin consumerAmqpAdmin) {
    return
        rabbitProperties.getConsumer().getListeners().entrySet().stream()
            .map(listenerEntry -> {
                TopicExchange exchange =
                    new TopicExchange(listenerEntry.getValue().getExchangeName());
                consumerAmqpAdmin.declareExchange(exchange);
                return new AbstractMap.SimpleEntry<>(listenerEntry.getKey(), exchange);
            }).collect(
                Collectors.toMap(
                    AbstractMap.SimpleEntry::getKey,
                    AbstractMap.SimpleEntry::getValue
                )
            );
}

/**
 * Create the list of bindings and declare them in the admin.
 */
@Bean
public List<Binding> bindings(Map<String, Queue> queues, Map<String, TopicExchange> exchanges, AmqpAdmin consumerAmqpAdmin) {
    return
        rabbitProperties.getConsumer().getListeners().keySet().stream()
            .map(listenerName -> {
                Queue queue = queues.get(listenerName);
                TopicExchange exchange = exchanges.get(listenerName);
                return
                    rabbitProperties.getConsumer().getListeners().get(listenerName).getKeys().stream()
                        .map(bindingKey -> {
                            Binding binding = BindingBuilder.bind(queue).to(exchange).with(bindingKey);
                            consumerAmqpAdmin.declareBinding(binding);
                            return binding;
                        }).collect(Collectors.toList());
            }).flatMap(Collection::stream)
            .collect(Collectors.toList());
}
Message listeners are:
@RabbitListener(
    queues = "${consumer.listeners.LISTENER_A.queue-name}",
    containerFactory = "priceRabbitListenerContainerFactory"
)
public void handleMessage(Message rawMessage, org.springframework.messaging.Message<ModelPayload> message) {
    // call a service to process the message payload
}

@RabbitListener(
    queues = "${consumer.listeners.LISTENER_B.queue-name}",
    containerFactory = "maxbetRabbitListenerContainerFactory"
)
public void handleMessage(Message rawMessage, org.springframework.messaging.Message<ModelPayload> message) {
    // call a service to process the message payload
}
Properties:
#
# Server A config (Spring AMQP)
#
consumer.host=server-a
consumer.username=
consumer.password=
consumer.port=5671
consumer.ssl.enabled=true
consumer.ssl.algorithm=TLSv1.2
consumer.ssl.validate-server-certificate=false
consumer.connection-name=local:microservice-1
consumer.thread-factory.thread-group-name=server-a-consumer
consumer.thread-factory.thread-name-prefix=server-a-consumer-
# LISTENER_A configuration
consumer.listeners.LISTENER_A.queue-name=local.listenerA
consumer.listeners.LISTENER_A.exchange-name=exchangeA
consumer.listeners.LISTENER_A.keys[0]=*.1.*.*
consumer.listeners.LISTENER_A.keys[1]=*.3.*.*
consumer.listeners.LISTENER_A.keys[2]=*.6.*.*
consumer.listeners.LISTENER_A.keys[3]=*.8.*.*
consumer.listeners.LISTENER_A.keys[4]=*.9.*.*
consumer.listeners.LISTENER_A.initial-concurrency=5
consumer.listeners.LISTENER_A.maximum-concurrency=20
consumer.listeners.LISTENER_A.thread-name-prefix=listenerA-consumer-
# LISTENER_B configuration
consumer.listeners.LISTENER_B.queue-name=local.listenerB
consumer.listeners.LISTENER_B.exchange-name=exchangeB
consumer.listeners.LISTENER_B.keys[0]=*.1.*
consumer.listeners.LISTENER_B.keys[1]=*.3.*
consumer.listeners.LISTENER_B.keys[2]=*.6.*
consumer.listeners.LISTENER_B.initial-concurrency=5
consumer.listeners.LISTENER_B.maximum-concurrency=20
consumer.listeners.LISTENER_B.thread-name-prefix=listenerB-consumer-
#
# Server B config (Spring Cloud Stream)
#
spring.rabbitmq.host=server-b
spring.rabbitmq.port=5672
spring.rabbitmq.username=
spring.rabbitmq.password=
spring.cloud.stream.bindings.outbound.destination=microservice-out
spring.cloud.stream.bindings.outbound.group=default
spring.cloud.stream.rabbit.binder.connection-name-prefix=local:microservice
So my question is: is it possible, in the same Spring Boot application, to have code that consumes data from RabbitMQ via Spring AMQP and produces messages to a different server via Spring Cloud Stream RabbitMQ? If it is, could somebody tell me what I am doing wrong, please?
Spring AMQP version is the one provided by Boot version 2.1.7 (2.1.8-RELEASE) and Spring Cloud Stream version is the one provided by Spring Cloud train Greenwich.SR2 (2.1.3.RELEASE).
EDIT
I was able to make it work by configuring the binder via the multiple-binder configuration properties instead of the default single-binder ones. So with this configuration it works:
#
# Server B config (Spring Cloud Stream)
#
spring.cloud.stream.binders.transport-layer.type=rabbit
spring.cloud.stream.binders.transport-layer.environment.spring.rabbitmq.host=server-b
spring.cloud.stream.binders.transport-layer.environment.spring.rabbitmq.port=5672
spring.cloud.stream.binders.transport-layer.environment.spring.rabbitmq.username=
spring.cloud.stream.binders.transport-layer.environment.spring.rabbitmq.password=
spring.cloud.stream.bindings.stream-output.destination=microservice-out
spring.cloud.stream.bindings.stream-output.group=default
Unfortunately, it is not possible to set the connection name yet in a multiple-binder configuration: a custom ConnectionNameStrategy is ignored if there is a custom binder configuration.
Anyway, I still do not understand why the contexts seem to be "mixed" when using Spring AMQP and Spring Cloud Stream RabbitMQ. It is still necessary to set a primary MessageHandlerMethodFactory bean for the implementation to work.
EDIT
I found out that the NoUniqueBeanDefinitionException was caused by the microservice itself, which was creating a ConditionalGenericConverter used by the Spring AMQP part to deserialize messages from server A.
I removed it and added some MessageConverters instead. Now the problem is solved and the @Primary bean is no longer necessary.
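For reference, a typical replacement is a plain message-converter bean; this sketch assumes Jackson is on the classpath and JSON payloads are used:

@Bean
public MessageConverter jacksonMessageConverter() {
    // picked up by Spring AMQP listener containers and templates for (de)serializing JSON
    return new Jackson2JsonMessageConverter();
}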
Unrelated, but
consumerAmqpAdmin.declareQueue(queue);
You should never communicate with the broker within a @Bean definition; it is too early in the application context's lifecycle. It might work, but YMMV; also, if the broker is not available, it will prevent your app from starting.
It's better to define beans of type Declarables containing the lists of queues, exchanges, and bindings; the admin will automatically declare them when the connection is first opened successfully. See the reference manual.
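A minimal sketch of that approach (reusing the queue, exchange, and routing-key names from the properties above):

@Bean
public Declarables declarables() {
    Queue queue = QueueBuilder.nonDurable("local.listenerA").autoDelete().build();
    TopicExchange exchange = new TopicExchange("exchangeA");
    // the RabbitAdmin declares everything in this container when the
    // connection is first opened successfully
    return new Declarables(
        queue,
        exchange,
        BindingBuilder.bind(queue).to(exchange).with("*.1.*.*"));
}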
I have never seen the MessageHandlerMethodFactory problem; Spring AMQP declares no such bean. If you can provide a small sample app that exhibits the behavior, that would be useful.
I'll see if I can find a work around to the connection name issue.
EDIT
I found a workaround for the connection name issue; it involves a bit of reflection, but it works. I suggest you open a new feature request against the binder to request a mechanism to set the connection name strategy when using multiple binders.
Anyway, here's the workaround...
@SpringBootApplication
@EnableBinding(Processor.class)
public class So57725710Application {

    public static void main(String[] args) {
        SpringApplication.run(So57725710Application.class, args);
    }

    @Bean
    public Object connectionNameConfigurer(BinderFactory binderFactory) throws Exception {
        setConnectionName(binderFactory, "rabbit1", "myAppProducerSide");
        setConnectionName(binderFactory, "rabbit2", "myAppConsumerSide");
        return null;
    }

    private void setConnectionName(BinderFactory binderFactory, String binderName,
            String conName) throws Exception {
        binderFactory.getBinder(binderName, MessageChannel.class); // force creation
        @SuppressWarnings("unchecked")
        Map<String, Map.Entry<Binder<?, ?, ?>, ApplicationContext>> binders =
                (Map<String, Entry<Binder<?, ?, ?>, ApplicationContext>>) new DirectFieldAccessor(binderFactory)
                        .getPropertyValue("binderInstanceCache");
        binders.get(binderName)
                .getValue()
                .getBean(CachingConnectionFactory.class).setConnectionNameStrategy(queue -> conName);
    }

    @StreamListener(Processor.INPUT)
    @SendTo(Processor.OUTPUT)
    public String listen(String in) {
        System.out.println(in);
        return in.toUpperCase();
    }

}
and
spring.cloud.stream.binders.rabbit1.type=rabbit
spring.cloud.stream.binders.rabbit1.environment.spring.rabbitmq.host=localhost
spring.cloud.stream.binders.rabbit1.environment.spring.rabbitmq.port=5672
spring.cloud.stream.binders.rabbit1.environment.spring.rabbitmq.username=guest
spring.cloud.stream.binders.rabbit1.environment.spring.rabbitmq.password=guest
spring.cloud.stream.bindings.output.destination=outDest
spring.cloud.stream.bindings.output.producer.required-groups=outQueue
spring.cloud.stream.bindings.output.binder=rabbit1
spring.cloud.stream.binders.rabbit2.type=rabbit
spring.cloud.stream.binders.rabbit2.environment.spring.rabbitmq.host=localhost
spring.cloud.stream.binders.rabbit2.environment.spring.rabbitmq.port=5672
spring.cloud.stream.binders.rabbit2.environment.spring.rabbitmq.username=guest
spring.cloud.stream.binders.rabbit2.environment.spring.rabbitmq.password=guest
spring.cloud.stream.bindings.input.destination=inDest
spring.cloud.stream.bindings.input.group=default
spring.cloud.stream.bindings.input.binder=rabbit2
Related
How can I configure my Spring listener to receive multiple JMS messages per session from Solace? My current JMS configuration is as follows:
@Bean
public DefaultJmsListenerContainerFactory listenerContainerFactory(ConnectionFactory connectionFactory,
        @Value("${jms.max-listeners}") Integer maxListeners) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setConcurrency("1-" + maxListeners);
    factory.setSessionAcknowledgeMode(Session.SESSION_TRANSACTED);
    factory.setSessionTransacted(true);
    factory.setCacheLevel(DefaultMessageListenerContainer.CACHE_CONSUMER);
    return factory;
}
@JmsListener(containerFactory = "listenerContainerFactory", destination = "${jms.my-queue}")
public void receive(String messages) {
    try {
        ...
    } catch (Exception e) {
        ...
    }
}
The payloads need to be single messages; however, I'd like to consume many of them in each transaction so that I can batch-persist them to my persistent store.
Typically this is done with the native Solace API, but I'd like to abstract our JMS implementation with Spring because it alleviates the nasty boilerplate that comes with that API.
How can I change the redelivery policy for the embedded ActiveMQ broker when using it with Spring Boot? I tried specifying a FixedBackOff on the DefaultJmsListenerContainerFactory, but it didn't help. Below is the code I am using to initialize the JMS factory bean. I have a message consumer processing incoming messages on a queue. During processing, because of an unavailable resource, I throw an exception; I am hoping to have the message redelivered for processing after a fixed interval.
Spring Boot : 1.5.7.Release
Java : 1.7
@Bean
public JmsListenerContainerFactory<?> publishFactory(ConnectionFactory connectionFactory,
        DefaultJmsListenerContainerFactoryConfigurer configurer) {
    DefaultJmsListenerContainerFactory factory =
            new DefaultJmsListenerContainerFactory();
    factory.setBackOff(new FixedBackOff(5000, 5));
    // This provides all of Boot's defaults to this factory, including the message converter
    configurer.configure(factory, connectionFactory);
    // You could still override some of Boot's defaults if necessary.
    factory.setErrorHandler(new ErrorHandler() {
        @Override
        public void handleError(Throwable t) {
            LOG.error("Error occurred in JMS transaction.", t);
        }
    });
    return factory;
}
Consumer Code:
@JmsListener(destination = "PublishQueue", containerFactory = "publishFactory")
@Transactional
public void receiveMessage(PublishData publishData) {
    LOG.info("Processing incoming message on publish queue with transaction id: " + publishData.getTransactionId());
    PublishUser user = new PublishUser();
    user.setPriority(1);
    user.setUserId(publishData.getUserId());
    LOG.trace("Trying to enroll in the publish lock queue for user: " + user);
    PublishLockQueue lockQueue = publishLockQueueService.createLock(user);
    if (lockQueue == null)
        throw new RuntimeException("Unable to create lock for publish");
    LOG.trace("Publish lock queue obtained with lock queue id: " + lockQueue.getId());
    try {
        publishLockQueueService.acquireLock(lockQueue.getId());
        LOG.trace("Acquired publish lock.");
    }
    catch (PublishLockQueueServiceException pex) {
        throw new RuntimeException(pex);
    }
    try {
        publishService.publish(publishData, lockQueue.getId());
        LOG.trace("Completed publish of changes.");
        sendPublishSuccessNotification(publishData);
    }
    finally {
        LOG.trace("Trying to release lock to publish.");
        publishLockQueueService.releaseLock(lockQueue.getId());
    }
    LOG.info("Publish has been completed for transaction id: " + publishData.getTransactionId());
}
@claus answered (I tested it and it works):
It's the consumer: you need to use transacted acknowledge mode so the consumer rolls back on exception and ActiveMQ can redeliver the message to the same consumer, or to another consumer if you have multiple consumers running. You can, however, configure redelivery options on the ActiveMQ side, such as backoff, etc. The error handler above is just a no-op listener which cannot do much other than logging.
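Concretely, that means enabling transacted sessions on the listener container factory; a minimal sketch of the relevant change to the publishFactory bean above:

@Bean
public JmsListenerContainerFactory<?> publishFactory(ConnectionFactory connectionFactory,
        DefaultJmsListenerContainerFactoryConfigurer configurer) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    configurer.configure(factory, connectionFactory);
    // with a transacted session, a RuntimeException thrown by the listener rolls the
    // message back to the broker, whose RedeliveryPolicy then controls retry timing
    factory.setSessionTransacted(true);
    return factory;
}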
We're using spring-amqp 1.5.2 with RabbitMQ 3.5.3. All queues work fine and we have consumers listening on them with no issues, except one consumer which keeps dropping connections mysteriously. spring-amqp auto-recovers, but after a few hours the consumers are disconnected and never come back up.
The queue is declared as
@Bean
public Queue analyzeTransactionsQueue() {
    Map<String, Object> args = new HashMap<>();
    args.put("x-message-ttl", 60000);
    return new Queue("analyze.txns", true, false, false, args);
}
Other queues are declared in a similar fashion, and have no issues.
The consumer (listener) is declared as
@Bean
public SimpleRabbitListenerContainerFactory analyzeTransactionListenerContainerFactory(
        ConnectionFactory connectionFactory, AsyncTaskExecutor asyncTaskExecutor) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setConcurrentConsumers(2);
    factory.setMaxConcurrentConsumers(4);
    factory.setTaskExecutor(asyncTaskExecutor);
    ConsumerTagStrategy consumerTagStrategy = new ConsumerTagStrategy() {
        @Override
        public String createConsumerTag(String queue) {
            return queue;
        }
    };
    factory.setConsumerTagStrategy(consumerTagStrategy);
    return factory;
}
Again, other consumers having no issues are declared in a similar fashion.
The code that runs after the message is received throws no exceptions. Even after turning on DEBUG logging for SimpleMessageListenerContainer, there are no errors in the logs.
LogLevel=DEBUG; category=org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer; msg=Cancelling Consumer: tags=[{}], channel=Cached Rabbit Channel: AMQChannel(amqp://guest@10.17.1.13:5672/,47), acknowledgeMode=AUTO local queue size=0;
LogLevel=DEBUG; category=org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer; msg=Idle consumer terminating: Consumer: tags=[{}], channel=Cached Rabbit Channel: AMQChannel(amqp://guest@10.17.1.13:5672/,47), acknowledgeMode=AUTO local queue size=0;
Any ideas on why this would be happening? I have tried DEBUG logging, but to no avail.
One thing I have observed is that a consumer will disconnect if there's an exception during parsing, and it doesn't always log the problem, depending on your logging config...
Since then, I always wrap the handleDelivery method body in a try/catch to get better logging and avoid the connection drop:
consumer = new DefaultConsumer(channel) {
    @Override
    public void handleDelivery(String consumerTag,
                               Envelope envelope,
                               AMQP.BasicProperties properties,
                               byte[] body) throws IOException {
        log.info("processing message - content : " + new String(body, "UTF-8"));
        try {
            MyEvent myEvent = objectMapper.readValue(new String(body, "UTF-8"), MyEvent.class);
            processMyEvent(myEvent);
        } catch (Exception exp) {
            log.error("couldn't process " + MyEvent.class + " message : ", exp);
        }
    }
};
Looking at the way you have configured things, it is pretty obvious that you have enabled dynamic scaling of consumers.
factory.setConcurrentConsumers(2);
factory.setMaxConcurrentConsumers(4);
There was a threading issue, for which I submitted a fix, that caused the number of consumers to drop to zero. This was happening while consumers were scaling down.
By the looks of it, you have been a victim of that problem. I believe the fix has been back-ported; it can be seen here.
Try using the latest version and see whether you get the same problem.