I've configured a CachingConnectionFactory with automatic recovery disabled and assigned it to a RabbitTemplate, having first set a back-off policy on the template. However, when I break the Rabbit connection before publishing a message, the back-off policy doesn't seem to apply and the exception is thrown immediately. Why isn't it working as expected, and why is the exception thrown without the 5s delay?
One more thing: is it possible to apply a back-off policy to the connection factory itself, in order to enable automatic recovery without the logs filling up with errors every 5 seconds?
Caused by: java.net.SocketException: Broken pipe (Write failed)
@Bean
public CachingConnectionFactory connectionFactory() {
    ConnectionFactory factory = new ConnectionFactory();
    factory.setPassword(password);
    factory.setRequestedHeartbeat(requestedHeartbeat);
    factory.setAutomaticRecoveryEnabled(false);
    CachingConnectionFactory cachingConnectionFactory = new CachingConnectionFactory(factory);
    cachingConnectionFactory.setAddresses(address1 + "," + address2);
    return cachingConnectionFactory;
}
@Bean
public RabbitTemplate rabbitTemplate() {
    ExponentialBackOffPolicy backOffPolicy = new ExponentialBackOffPolicy();
    backOffPolicy.setInitialInterval(500);
    backOffPolicy.setMultiplier(10);
    backOffPolicy.setMaxInterval(10000);
    RetryTemplate retryTemplate = new RetryTemplate();
    retryTemplate.setBackOffPolicy(backOffPolicy);
    RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory());
    rabbitTemplate.setRetryTemplate(retryTemplate);
    return rabbitTemplate;
}
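For what it's worth, a RetryTemplate that only has a back-off policy set still falls back to the default SimpleRetryPolicy (3 attempts). A minimal sketch that makes the retry policy explicit, so the retry behaviour is visible in the configuration (the maxAttempts value of 5 is an assumption for illustration):

@Bean
public RabbitTemplate rabbitTemplateWithExplicitPolicy() {
    // Make the retry policy explicit instead of relying on the default
    // SimpleRetryPolicy; 5 attempts is a hypothetical value.
    SimpleRetryPolicy retryPolicy = new SimpleRetryPolicy();
    retryPolicy.setMaxAttempts(5);

    ExponentialBackOffPolicy backOffPolicy = new ExponentialBackOffPolicy();
    backOffPolicy.setInitialInterval(500);
    backOffPolicy.setMultiplier(10);
    backOffPolicy.setMaxInterval(10000);

    RetryTemplate retryTemplate = new RetryTemplate();
    retryTemplate.setRetryPolicy(retryPolicy);
    retryTemplate.setBackOffPolicy(backOffPolicy);

    RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory());
    rabbitTemplate.setRetryTemplate(retryTemplate);
    return rabbitTemplate;
}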
I have a Spring Boot application that uses RabbitMQ and has 3 queues (queue1, queue2 and queue3).
In this application I have one listener that should only listen for messages on the queue named queue1 and ignore the other two queues, but it is receiving messages from all of them.
This is my RabbitMQ config:
@Bean
public ConnectionFactory connectionFactory() {
    CachingConnectionFactory connectionFactory = new CachingConnectionFactory(this.host);
    connectionFactory.setPort(this.port);
    connectionFactory.setUsername(this.user);
    connectionFactory.setPassword(this.password);
    return connectionFactory;
}

@Bean
SimpleMessageListenerContainer container(MessageListenerAdapter listenerAdapter) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
    container.setConnectionFactory(this.connectionFactory());
    container.setQueueNames(this.startQueueQueueName, this.printQueueQueueName, this.errorQueueQueueName);
    container.setMessageListener(listenerAdapter);
    return container;
}

@Bean
MessageListenerAdapter listenerAdapter(RabbitDocumentToPrintListener receiver) {
    return new MessageListenerAdapter(receiver, "receiveMessage");
}
And this is my listener:
public void receiveMessage(String message) throws Exception {
    this.logger.debug("Received message from Rabbit");
}
I've tried adding @RabbitListener(queues = "queue1", exclusive = true) to the listener, but it didn't work.
If someone could help me make this app consume only queue1, I'd appreciate it. Thanks!
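Since the container above registers all three queue names, the single listener receives from every queue. A minimal sketch of a container restricted to one queue (assuming startQueueQueueName holds "queue1"; the other queues would need their own containers and listeners):

@Bean
SimpleMessageListenerContainer container(MessageListenerAdapter listenerAdapter) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
    container.setConnectionFactory(this.connectionFactory());
    // Subscribe to queue1 only; messages on queue2 and queue3 are left untouched.
    container.setQueueNames(this.startQueueQueueName);
    container.setMessageListener(listenerAdapter);
    return container;
}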
I am using a Spring Boot JmsListener to dequeue messages from IBM MQ in a transacted session.
The issue arose when throughput was restricted to around 50 transactions per second (dequeue from MQ, process, and save to the database).
To improve throughput we increased the concurrency, and it really helped. But now we can see that, at application restart, the same message is consumed by multiple concurrent listeners (I concluded this by logging thread names).
This does not happen under heavy load, only at application restart while messages are still pending to be dequeued.
Is there any other way of achieving high throughput while maintaining at most one complete consumption with a transactional JmsListener?
@Bean
public PlatformTransactionManager transactionManager() {
    JmsTransactionManager transactionManager = new JmsTransactionManager();
    transactionManager.setConnectionFactory(jmsConnectionFactory());
    return transactionManager;
}

@Bean
public ConnectionFactory jmsConnectionFactory() {
    MQConnectionFactory cf = new MQConnectionFactory();
    try {
        cf.setStringProperty(WMQConstants.WMQ_HOST_NAME, host);
        cf.setIntProperty(WMQConstants.WMQ_PORT, Integer.parseInt(port));
        cf.setStringProperty(WMQConstants.WMQ_CHANNEL, channel);
        cf.setIntProperty(WMQConstants.WMQ_CONNECTION_MODE, WMQConstants.WMQ_CM_CLIENT);
        cf.setStringProperty(WMQConstants.WMQ_QUEUE_MANAGER, queueManager);
        cf.setStringProperty(WMQConstants.WMQ_APPLICATIONNAME, "JmsGet (JMS)");
        cf.setBooleanProperty(WMQConstants.USER_AUTHENTICATION_MQCSP, true);
        cf.setStringProperty(WMQConstants.USERID, user);
        cf.setStringProperty(WMQConstants.PASSWORD, password);
        cf.setStringProperty(WMQConstants.WMQ_SSL_CIPHER_SUITE, null);
    } catch (JMSException jmsException) {
        log.error(jmsException.toString());
    }
    return cf;
}

@Bean
public JmsListenerContainerFactory<?> listenerContainerFactory(ConnectionFactory connectionFactory,
        DefaultJmsListenerContainerFactoryConfigurer configurer) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConcurrency("2"); // added to increase throughput
    factory.setTransactionManager(transactionManager());
    factory.setSessionTransacted(true);
    configurer.configure(factory, connectionFactory);
    return factory;
}

@JmsListener(containerFactory = "listenerContainerFactory", destination = "${input_qname}")
public void onMessage(String message) throws Exception {
    // if exception -> throw e;
}
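Redelivery after a restart is expected with transacted sessions: deliveries that were in flight when the application stopped are rolled back and redelivered. One common mitigation (a sketch under that assumption, not something from the original post) is to make the listener idempotent, for example by de-duplicating on the JMS message ID:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.TextMessage;
import org.springframework.jms.annotation.JmsListener;
import org.springframework.stereotype.Component;

@Component
public class IdempotentMqListener {

    // In-memory de-duplication keyed on JMSMessageID, for illustration only.
    // A real deployment needs a shared store (e.g. a database unique constraint)
    // so the check survives restarts and works across concurrent instances.
    private final Map<String, Boolean> processedIds = new ConcurrentHashMap<>();

    @JmsListener(containerFactory = "listenerContainerFactory", destination = "${input_qname}")
    public void onMessage(Message message) throws JMSException {
        if (processedIds.putIfAbsent(message.getJMSMessageID(), Boolean.TRUE) != null) {
            return; // duplicate redelivery: skip processing and let the transaction commit
        }
        String body = ((TextMessage) message).getText();
        // process `body` and save to the database ...
    }
}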
I am using Spring Boot with Spring AMQP (RabbitMQ) and am trying to close channels that are no longer in use, but I can't work out how to achieve it. Any help is appreciated.
ConnectionFactory Declaration:
public ConnectionFactory connectionFactory() {
    final CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
    connectionFactory.setUsername(userName);
    connectionFactory.setPassword(password);
    connectionFactory.setVirtualHost(centralHost);
    connectionFactory.setHost(rabbitMqHost);
    connectionFactory.setConnectionTimeout(connectionTimeout);
    connectionFactory.setChannelCacheSize(4);
    connectionFactory.setExecutor(Executors.newFixedThreadPool(rabbitmqThreads));
    return connectionFactory;
}
ContainerFactory Declaration:
public DirectRabbitListenerContainerFactory rabbitDirectListenerContainerFactory(ConnectionFactory connectionFactory) {
    DirectRabbitListenerContainerFactory factory = new DirectRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setAcknowledgeMode(AcknowledgeMode.NONE);
    factory.setAfterReceivePostProcessors(m -> {
        m.getMessageProperties().setContentType("text/plain");
        return m;
    });
    return factory;
}
Why do you want to close it? That's the whole point of caching the channel(s); so we don't have to create a new one each time we publish a message.
The minimum cache size is 1.
You can call resetConnection() on the CachingConnectionFactory to close the connection and all cached channels.
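For completeness, a minimal sketch of that call (the surrounding method name is hypothetical; resetConnection() is the actual CachingConnectionFactory API):

@Autowired
private CachingConnectionFactory connectionFactory; // the bean declared above

public void closeCachedChannels() {
    // Closes the underlying connection and discards all cached channels;
    // the factory opens a fresh connection lazily on the next operation.
    connectionFactory.resetConnection();
}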
I have an application which receives a message from one queue, processes it, and sends it to another queue. When it's receiving a lot of messages (20 thousand or more), Spring shows me this message when it tries to send a message to the other queue:
connection error; protocol method: #method<connection.close>(reply-code=504 reply-text=CHANNEL_ERROR - second 'channel.open' seen class-id=20 method-id=10)
So I raised the channel cache size and created two CachingConnectionFactory instances, one for the consumer and another for the producer; for this configuration I followed a note from the Spring documentation:
When the application is configured with a single CachingConnectionFactory, as it is by default with Spring Boot auto-configuration, the application will stop working when the connection is blocked by the broker, and any of its clients stop working as well. If we have producers and consumers in the same application, we may end up with a deadlock: producers block the connection because there are no resources on the broker anymore, and consumers can't free them because the connection is blocked. To mitigate the problem, it is enough to have one more separate CachingConnectionFactory instance with the same options, one for producers and one for consumers. A separate CachingConnectionFactory isn't recommended for transactional producers, since they should reuse a Channel associated with the consumer transactions.
Following these recommendations the error message disappeared, but now the application suddenly stops: it's not sending or receiving new messages and all queues are idle. It's kind of strange because the listener has a low concurrency setting. What am I missing?
Configuration:
Spring Boot: 2.0.8.RELEASE
Spring AMQP: 2.0.11.RELEASE
RabbitMQ: 3.8.8
spring:
  rabbitmq:
    listener:
      simple:
        default-requeue-rejected: false
        concurrency: 5
        max-concurrency: 8
    cache:
      channel:
        size: 1000
@Bean
public ConnectionFactory consumerConnectionFactory() {
    CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
    connectionFactory.setHost(properties.getHost());
    connectionFactory.setPort(properties.getPort());
    connectionFactory.setUsername(properties.getUsername());
    connectionFactory.setPassword(properties.getPassword());
    connectionFactory.setChannelCacheSize(properties.getCache().getChannel().getSize());
    connectionFactory.setConnectionNameStrategy(cns());
    return connectionFactory;
}

@Bean
public ConnectionFactory producerConnectionFactory() {
    CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
    connectionFactory.setHost(properties.getHost());
    connectionFactory.setPort(properties.getPort());
    connectionFactory.setUsername(properties.getUsername());
    connectionFactory.setPassword(properties.getPassword());
    connectionFactory.setChannelCacheSize(properties.getCache().getChannel().getSize());
    connectionFactory.setConnectionNameStrategy(cns());
    return connectionFactory;
}

@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(
        @Qualifier("consumerConnectionFactory") ConnectionFactory consumerConnectionFactory,
        SimpleRabbitListenerContainerFactoryConfigurer configurer,
        RabbitProperties properties) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setErrorHandler(errorHandler());
    factory.setConcurrentConsumers(properties.getListener().getSimple().getConcurrency());
    factory.setMaxConcurrentConsumers(properties.getListener().getSimple().getMaxConcurrency());
    configurer.configure(factory, consumerConnectionFactory);
    return factory;
}

@Bean
@Primary
public RabbitAdmin producerRabbitAdmin() {
    return new RabbitAdmin(producerConnectionFactory());
}

@Bean
public RabbitAdmin consumerRabbitAdmin() {
    return new RabbitAdmin(consumerConnectionFactory());
}

@Bean
@Primary
public RabbitTemplate producerRabbitTemplate() {
    RabbitTemplate rabbitTemplate = new RabbitTemplate(producerConnectionFactory());
    rabbitTemplate.setMessageConverter(jackson2JsonMessageConverter());
    return rabbitTemplate;
}

@Bean
public RabbitTemplate consumerRabbitTemplate() {
    RabbitTemplate rabbitTemplate = new RabbitTemplate(consumerConnectionFactory());
    rabbitTemplate.setMessageConverter(jackson2JsonMessageConverter());
    return rabbitTemplate;
}
After analyzing, the problem was due to the Java memory heap limit. In addition, I updated my configuration: I removed the custom ConnectionFactory beans and set the producer RabbitTemplate to use the publisher connection.
So I ended up with this:
@Bean
@Primary
public RabbitTemplate producerRabbitTemplate(ConnectionFactory connectionFactory) {
    RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
    rabbitTemplate.setMessageConverter(jackson2JsonMessageConverter());
    rabbitTemplate.setUsePublisherConnection(true);
    return rabbitTemplate;
}

@Bean
public RabbitTemplate consumerRabbitTemplate(ConnectionFactory connectionFactory) {
    RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
    rabbitTemplate.setMessageConverter(jackson2JsonMessageConverter());
    return rabbitTemplate;
}

@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory,
        SimpleRabbitListenerContainerFactoryConfigurer configurer,
        RabbitProperties properties) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setErrorHandler(errorHandler());
    factory.setConcurrentConsumers(properties.getListener().getSimple().getConcurrency());
    factory.setMaxConcurrentConsumers(properties.getListener().getSimple().getMaxConcurrency());
    configurer.configure(factory, connectionFactory);
    return factory;
}
With this configuration, memory consumption was reduced and I was able to raise the consumer concurrency numbers:
spring:
  rabbitmq:
    listener:
      simple:
        default-requeue-rejected: false
        concurrency: 10
        max-concurrency: 15
    cache:
      channel:
        size: 1000
I'm now looking for the right channel cache size, and to raise the concurrency numbers even further.
I am trying out the Spring AMQP features for transactional message processing.
I have the following setup: a message consumer that is annotated with @Transactional.
@Transactional
public void handleMessage(EventPayload event) {
    Shop shop = new Shop();
    shop.setName(event.getName());
    Shop savedShop = shopService.create(shop);
    log.info("Created shop {} from event {}", shop, event);
}
In shopService.create I save the shop and send another message about the creation:
@Transactional(propagation = REQUIRED)
@Component
public class ShopService {
    ...
    public Shop create(Shop shop) {
        eventPublisher.publish(new EventPayload(shop.getName()));
        return shopRepository.save(shop);
    }
}
I want to achieve the following: the message sent in the create method should only go to the broker if the database action succeeds. If it fails, the message is not sent and the received message is rolled back.
I also have retry configured, so I expect each message to be retried 3 times before it is rejected:
@Bean
public RetryOperationsInterceptor retryOperationsInterceptor() {
    return RetryInterceptorBuilder.stateless()
            .maxAttempts(3)
            .backOffOptions(1000, 2.0, 10000)
            .build();
}
I am observing the following behaviour:
When I configure the container as follows, the message is retried 3 times, but each time the message from shopService.create is sent to the broker:
@Bean
SimpleMessageListenerContainer messageListenerContainer(ConnectionFactory connectionFactory,
        MessageListenerAdapter listenerAdapter) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
    container.setConnectionFactory(connectionFactory);
    container.setQueueNames(testEventSubscriberQueue().getName());
    container.setMessageListener(listenerAdapter);
    container.setChannelTransacted(true);
    container.setAdviceChain(new Advice[] { retryOperationsInterceptor() });
    return container;
}
So I tried passing the PlatformTransactionManager to the container:
@Bean
SimpleMessageListenerContainer messageListenerContainer(ConnectionFactory connectionFactory,
        MessageListenerAdapter listenerAdapter,
        PlatformTransactionManager transactionManager) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
    container.setConnectionFactory(connectionFactory);
    container.setQueueNames(testEventSubscriberQueue().getName());
    container.setMessageListener(listenerAdapter);
    container.setChannelTransacted(true);
    container.setTransactionManager(transactionManager);
    container.setAdviceChain(new Advice[] { retryOperationsInterceptor() });
    return container;
}
Now the message sent in shopService.create is only sent to the broker if the database transaction succeeds, which is what I want. But the message is now retried indefinitely rather than being discarded after 3 retries as configured. The backOff settings do seem to be applied, though, since there is some time between the retries.
The setup described does not really make sense from a business point of view; I am trying to understand and evaluate the transaction capabilities.
I am using spring-amqp 1.5.1.RELEASE.
Thanks for any hints.
I had the same requirements: an @RabbitListener annotated with @Transactional, and I wanted retry with back-off. It works even stateless with the following config:
@Bean
public RetryOperationsInterceptor retryOperationsInterceptor() {
    return RetryInterceptorBuilder.stateless()
            .maxAttempts(3)
            .recoverer(new RejectAndDontRequeueRecoverer())
            .backOffOptions(1000, 2, 10000)
            .build();
}

@Bean
public Jackson2JsonMessageConverter producerJackson2MessageConverter(ObjectMapper objectMapper) {
    Jackson2JsonMessageConverter jackson2JsonMessageConverter = new Jackson2JsonMessageConverter(objectMapper);
    jackson2JsonMessageConverter.setCreateMessageIds(true);
    return jackson2JsonMessageConverter;
}

@Bean
SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory,
        PlatformTransactionManager transactionManager,
        Jackson2JsonMessageConverter converter) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setChannelTransacted(true);
    factory.setTransactionManager(transactionManager);
    factory.setAdviceChain(retryOperationsInterceptor());
    factory.setMessageConverter(converter);
    return factory;
}
With stateless(), using RejectAndDontRequeueRecoverer was important: otherwise the retry works, but after the attempts are exhausted the consumer puts the message back on the queue by default. The consumer then retrieves it again, applies the retry policy again, and requeues it again, infinitely.
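Note that a message rejected by RejectAndDontRequeueRecoverer is discarded unless the queue is configured with a dead-letter exchange. A minimal sketch of such a declaration (queue, exchange, and routing-key names are hypothetical, and QueueBuilder assumes Spring AMQP 1.6+):

@Bean
public Queue workQueue() {
    // Rejected, non-requeued messages are routed to the dead-letter exchange
    // instead of being dropped.
    return QueueBuilder.durable("work.queue")
            .withArgument("x-dead-letter-exchange", "dlx.exchange")
            .withArgument("x-dead-letter-routing-key", "work.queue.dlq")
            .build();
}

@Bean
public Queue deadLetterQueue() {
    return QueueBuilder.durable("work.queue.dlq").build();
}

@Bean
public DirectExchange deadLetterExchange() {
    return new DirectExchange("dlx.exchange");
}

@Bean
public Binding deadLetterBinding() {
    return BindingBuilder.bind(deadLetterQueue()).to(deadLetterExchange()).with("work.queue.dlq");
}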