In Spring RabbitMQ I throw AmqpRejectAndDontRequeueException but the message is still requeued

My service listens to a RabbitMQ queue. I configured a retry policy on the consumer side. When I throw an exception, the dead-lettered messages are requeued. But depending on my business logic, after throwing a StopRequeueException (any exception except SmsException) I want to stop retrying the message. The message is still requeued.
Here is my configuration:
spring:
  rabbitmq:
    listener:
      simple:
        retry:
          enabled: true
          initial-interval: 3s
          max-attempts: 10
          max-interval: 12s
          multiplier: 2
        missing-queues-fatal: false
if (!checkMobileService.isMobileNumberAdmitted(mobileNumber())) {
    throw new StopRequeueException("SMS_BIMTEK.MOBILE_NUMBER_IS_NOT_ADMITTED");
}
My error handler:
public class CustomErrorHandler implements ErrorHandler {

    @Override
    public void handleError(Throwable t) {
        if (!(t.getCause() instanceof SmsException)) {
            throw new AmqpRejectAndDontRequeueException("Error Handler converted exception to fatal", t);
        }
    }

}

Calling the error handler is outside the scope of retry; it is called after retries are exhausted.
You need to classify which exceptions are retryable at the retry level and do the conversion in the recoverer.
Here is an example:
@SpringBootApplication
public class So67406799Application {

    public static void main(String[] args) {
        SpringApplication.run(So67406799Application.class, args);
    }

    @Bean
    public RabbitRetryTemplateCustomizer customizer(
            @Value("${spring.rabbitmq.listener.simple.retry.max-attempts}") int attempts) {
        return (target, template) -> template.setRetryPolicy(new SimpleRetryPolicy(attempts,
                Map.of(StopRequeueException.class, false), true, true));
    }

    @Bean
    MessageRecoverer recoverer() {
        return (msg, cause) -> {
            throw new AmqpRejectAndDontRequeueException("Stop requeue after " +
                    RetrySynchronizationManager.getContext().getRetryCount() + " attempts");
        };
    }

    @RabbitListener(queues = "so67406799")
    void listen(String in) {
        System.out.println(in);
        if (in.equals("dontRetry")) {
            throw new StopRequeueException("test");
        }
        throw new RuntimeException("test");
    }

    @Bean
    Queue queue() {
        return new Queue("so67406799");
    }

}
@SuppressWarnings("serial")
class StopRequeueException extends NestedRuntimeException {

    public StopRequeueException(String msg) {
        super(msg);
    }

}
EDIT
The customizer is called once by Spring Boot; it is called after the retry policy and back off policy have been set up. See RetryTemplateFactory.
In this case, the customizer replaces the retry policy with a new one with an exception classifier (that's why we need the max attempts injected here).
See the SimpleRetryPolicy constructor.
/**
 * Create a {@link SimpleRetryPolicy} with the specified number of retry attempts. If
 * traverseCauses is true, the exception causes will be traversed until a match is
 * found. The default value indicates whether to retry or not for exceptions (or super
 * classes) that are not found in the map.
 * @param maxAttempts the maximum number of attempts
 * @param retryableExceptions the map of exceptions that are retryable based on the
 * map value (true/false).
 * @param traverseCauses true to traverse the exception cause chain until a classified
 * exception is found or the root cause is reached.
 * @param defaultValue the default action.
 */
public SimpleRetryPolicy(int maxAttempts, Map<Class<? extends Throwable>, Boolean> retryableExceptions,
        boolean traverseCauses, boolean defaultValue) {
The last boolean in the config above (true) is the default behavior (retry exceptions that are not in the map); the third (true) tells the policy to traverse the cause chain to look for the exception (like your getCause() in the error handler). The map entry <StopRequeueException, false> says don't retry that one.
You can also configure it the other way around (default false and true in the map values), explicitly stating which exceptions you want to retry and retrying nothing else.
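For example, a minimal sketch of that inverted classification (assuming, per your description, that SmsException is the only exception you want retried):
@Bean
public RabbitRetryTemplateCustomizer retryOnlySmsException(
        @Value("${spring.rabbitmq.listener.simple.retry.max-attempts}") int attempts) {
    // default false: anything not in the map is not retried;
    // SmsException (found anywhere in the cause chain) is retried
    return (target, template) -> template.setRetryPolicy(new SimpleRetryPolicy(attempts,
            Map.of(SmsException.class, true), true, false));
}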
The MessageRecoverer is called for all exceptions, either immediately for the classified exception or when retries are exhausted for the others.

Related

How to know which exception is thrown from errorhandler in dead letter queue listener?

I have a quorum queue (myQueue) and its dead letter queue (myDLQueue). We have several exceptions which we separated as retryable or fatal. But sometimes the listener below makes an API call that throws a RateLimitException. In this case the application should increase both the retry count and the retry delay.
@RabbitListener(queues = "#{myQueue.getName()}", errorHandler = "myErrorHandler")
@SendTo("#{myStatusQueue.getName()}")
public Status process(@Payload MyMessage message, @Headers MessageHeaders headers) {
    int retries = headerProcessor.getRetries(headers);
    if (retries > properties.getMyQueueMaxRetries()) {
        throw new RetriesExceededException(retries);
    }
    if (retries > 0) {
        logger.info("Message {} has been retried {} times. Process it again anyway", kv("task_id", message.getTaskId()), retries);
    }
    // here we send a request to an api. but sometimes the api returns a rate limit error in case we send too many requests.
    // In that case makeApiCall throws RateLimitException which extends RetryableException
    makeApiCall(); // --> it will throw RateLimitException
    if (/* a condition that needs to retry sending the message */) {
        throw new RetryableException();
    }
    if (/* a condition that should not retry */) {
        throw new FatalException();
    }
    return new Status("Step 1 Success!");
}
I also have an error handler (myErrorHandler) that catches exceptions thrown from the rabbit listener above and manages the retry process according to the type of the exception.
public class MyErrorHandler implements RabbitListenerErrorHandler {

    @Override
    public Object handleError(Message amqpMessage,
            org.springframework.messaging.Message<?> message,
            ListenerExecutionFailedException exception) {
        // Check if error is fatal or retryable
        if (exception.getCause() /* ..is fatal? */) {
            return new Status("FAIL!");
        }
        // Retryable exception, rethrow it and let the message be NACKed and retried via DLQ
        throw exception;
    }

}
The last part I have is a DLQHandler that listens to the dead letter queue and re-sends its messages to the original queue (myQueue).
@Service
public class MyDLQueueHandler {

    private final MyAppProperties properties;
    private final MessageHeaderProcessor headerProcessor;
    private final RabbitProducerService rabbitProducerService;

    public MyDLQueueHandler(MyAppProperties properties, MessageHeaderProcessor headerProcessor, RabbitProducerService rabbitProducerService) {
        this.properties = properties;
        this.headerProcessor = headerProcessor;
        this.rabbitProducerService = rabbitProducerService;
    }

    /**
     * Since message TTL is not available with quorum queues, manually listen to the DL queue and re-send the message with a delay.
     * This allows messages to be processed again.
     */
    @RabbitListener(queues = "#{myDLQueue.getName()}")
    public void handleError(@Payload Object message, @Headers MessageHeaders headers) {
        String routingKey = headerProcessor.getRoutingKey(headers);
        Map<String, Object> newHeaders = Map.of(
                MessageHeaderProcessor.DELAY, properties.getRetryDelay(), // I need to send an increased delay in case of RateLimitException.
                MessageHeaderProcessor.RETRIES_HEADER, headerProcessor.getRetries(headers) + 1
        );
        rabbitProducerService.sendMessageDelayed(message, routingKey, newHeaders);
    }

}
Among the inputs to the handleError method above there is no information about the exception instance thrown from MyErrorHandler or the MyQueue listener. Currently I have to pass the retry delay by reading it from app.properties, but I need to increase this delay if a RateLimitException was thrown. So my question is: how do I know which error was thrown from MyErrorHandler while in MyDLQueueHandler?
When you use the normal dead letter mechanism in RabbitMQ, there is no exception information provided - the message is the original rejected message. However, Spring AMQP provides a RepublishMessageRecoverer which can be used in conjunction with a retry interceptor. In that case, exception information is published in headers.
See https://docs.spring.io/spring-amqp/docs/current/reference/html/#async-listeners
The RepublishMessageRecoverer publishes the message with additional information in message headers, such as the exception message, stack trace, original exchange, and routing key. Additional headers can be added by creating a subclass and overriding additionalHeaders().
@Bean
RetryOperationsInterceptor interceptor() {
    return RetryInterceptorBuilder.stateless()
            .maxAttempts(5)
            .recoverer(new RepublishMessageRecoverer(amqpTemplate(), "something", "somethingelse"))
            .build();
}
The interceptor is added to the container's advice chain.
https://github.com/spring-projects/spring-amqp/blob/57596c6a26be2697273cd97912049b92e81d3f1a/spring-rabbit/src/main/java/org/springframework/amqp/rabbit/retry/RepublishMessageRecoverer.java#L55-L61
public static final String X_EXCEPTION_STACKTRACE = "x-exception-stacktrace";
public static final String X_EXCEPTION_MESSAGE = "x-exception-message";
public static final String X_ORIGINAL_EXCHANGE = "x-original-exchange";
public static final String X_ORIGINAL_ROUTING_KEY = "x-original-routingKey";
The exception type can be found in the stack trace header.
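As a rough sketch of the subclass approach (the class name and the x-exception-type header here are made up, not Spring AMQP constants), you could also publish the exception's class name as its own header so the DLQ listener doesn't have to parse the stack trace:
public class TypeHeaderRepublishMessageRecoverer extends RepublishMessageRecoverer {

    public TypeHeaderRepublishMessageRecoverer(AmqpTemplate errorTemplate, String errorExchange, String errorRoutingKey) {
        super(errorTemplate, errorExchange, errorRoutingKey);
    }

    @Override
    protected Map<String, Object> additionalHeaders(Message message, Throwable cause) {
        // the listener exception is usually a wrapper, so prefer the cause's type
        Throwable root = cause.getCause() != null ? cause.getCause() : cause;
        return Map.of("x-exception-type", root.getClass().getName());
    }

}
In MyDLQueueHandler you could then read that header (together with the retry count) and pick a longer delay when it names RateLimitException.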

How to wrap exception on exhausted retries with Spring Retry

Context:
I'm using spring-retry to retry restTemplate calls.
The restTemplate calls are made from a Kafka listener.
The Kafka listener is also configured to retry on error (for any exception thrown during processing, not only the restTemplate call).
Goal:
I'd like to prevent Kafka from retrying when the error comes from a retry template that has been exhausted.
Actual behavior:
When the retryTemplate exhausts all retries, the original exception is thrown, which prevents me from identifying whether the error was already retried by the retryTemplate.
Desired behavior:
When the retryTemplate exhausts all retries, wrap the original exception in a RetryExhaustedException, which will allow me to blacklist it from Kafka retries.
Question:
How can I do something like this?
Thanks
Edit
RetryTemplate configuration:
RetryTemplate retryTemplate = new RetryTemplate();
FixedBackOffPolicy backOffPolicy = new FixedBackOffPolicy();
backOffPolicy.setBackOffPeriod(1000);
retryTemplate.setBackOffPolicy(backOffPolicy);
Map<Class<? extends Throwable>, Boolean> retryableExceptions = new HashMap<>();
retryableExceptions.put(FunctionalException.class, false);
SimpleRetryPolicy retryPolicy = new SimpleRetryPolicy(3, retryableExceptions, true, true);
retryTemplate.setRetryPolicy(retryPolicy);
retryTemplate.setThrowLastExceptionOnExhausted(false);
Kafka ErrorHandler
public class DefaultErrorHandler implements ErrorHandler {

    @Override
    public void handle(Exception thrownException, ConsumerRecord<?, ?> data) {
        Throwable exception = Optional.ofNullable(thrownException.getCause()).orElse(thrownException);
        // TODO if exception has been retried in a RetryTemplate, stop it to prevent rollback and send it to a DLQ
        // else rethrow exception, it will be rolled back and handled by AfterRollbackProcessor to be retried
        throw new KafkaException("Could not handle exception", thrownException);
    }

}
Kafka listener:
@KafkaListener
public void onMessage(ConsumerRecord<String, String> record) {
    retryTemplate.execute((args) -> {
        throw new RuntimeException("Should be caught by ErrorHandler to prevent rollback");
    });
    throw new RuntimeException("Should be retried by afterRollbackProcessor");
}
Simply configure the listener retry template with a SimpleRetryPolicy that is configured to classify RetryExhaustedException as not retryable.
Be sure to set the traverseCauses property to true since the container wraps all listener exceptions in ListenerExecutionFailedException.
/**
 * Create a {@link SimpleRetryPolicy} with the specified number of retry
 * attempts. If traverseCauses is true, the exception causes will be traversed until
 * a match is found. The default value indicates whether to retry or not for exceptions
 * (or super classes) that are not found in the map.
 *
 * @param maxAttempts the maximum number of attempts
 * @param retryableExceptions the map of exceptions that are retryable based on the
 * map value (true/false).
 * @param traverseCauses true to traverse the exception cause chain until a classified
 * exception is found or the root cause is reached.
 * @param defaultValue the default action.
 */
public SimpleRetryPolicy(int maxAttempts, Map<Class<? extends Throwable>, Boolean> retryableExceptions,
        boolean traverseCauses, boolean defaultValue) {
EDIT
Use
template.execute((args) -> {...}, (context) -> { throw new Blah(context.getLastThrowable()); });
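Putting that together, a minimal sketch (RetryExhaustedException, the URL, and the restTemplate field are placeholders of yours, not library types):
// your own wrapper exception, thrown when the inner RetryTemplate gives up
public class RetryExhaustedException extends RuntimeException {

    public RetryExhaustedException(Throwable cause) {
        super(cause);
    }

}

// in the listener: recover by wrapping the last failure
retryTemplate.execute(context -> {
    return restTemplate.getForObject("http://example.com/api", String.class); // illustrative call
}, context -> {
    throw new RetryExhaustedException(context.getLastThrowable());
});

// in the listener-level retry configuration: classify the wrapper as not
// retryable; traverseCauses = true because the container wraps listener
// exceptions in ListenerExecutionFailedException
SimpleRetryPolicy listenerPolicy = new SimpleRetryPolicy(3,
        Map.of(RetryExhaustedException.class, false), true, true);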

How to dead letter a RabbitMQ messages when an exceptions happens in a service after an aggregator's forceRelease

I am trying to figure out the best way to handle errors that might occur in a service that is called after an aggregator's group timeout has fired, mimicking the same flow as if the releaseExpression had been met.
Here is my setup:
I have a AmqpInboundChannelAdapter that takes in messages and send them to my aggregator.
When the releaseExpression has been met before the groupTimeout expires, and an exception gets thrown in my ServiceActivator, all the messages in that MessageGroup get sent to my dead letter queue (10 messages in my example below, which is only used for illustrative purposes). This is what I would expect.
If my releaseExpression hasn't been met but the groupTimeout has elapsed and the group times out, and an exception gets thrown in my ServiceActivator, then the messages do not get sent to my dead letter queue and are acked.
After reading another blog post, link1, it seems this happens because the processing happens on another thread, the MessageGroupStoreReaper's, and not the one the SimpleMessageListenerContainer was on. Once processing moves away from the listener's thread, the messages will be auto-acked.
I added the configuration mentioned in the link above and see the error messages getting sent to my error handler. My main question is: what is considered the best way to handle this scenario so as to minimize lost messages?
Here are the options I was exploring:
Use a BatchingRabbitTemplate in my custom error handler to publish the failed messages to the same dead letter queue they would have gone to if the releaseExpression had been met. (This is the approach I outlined below, but I am worried about messages getting lost if an error happens during publishing.)
Investigate whether there is a way I could let the SimpleMessageListenerContainer know about the error that occurred and have it send the batch of failed messages to a dead letter queue. (I doubt this is possible since it seems the messages are already acked.)
Don't set the SimpleMessageListenerContainer to AcknowledgeMode.AUTO and manually ack the messages when they get processed by the service, either when the releaseExpression is met or when the groupTimeout fires. (This seems kind of messy, since there can be 1..N messages in the MessageGroup, but I wanted to see what others have done.)
Ideally, I want a flow that mimics the flow when the releaseExpression has been met, so that the messages don't get lost.
Does anyone have recommendation on the best way to handle this scenario they have used in the past?
Thanks for any help and/or advice!
Here is my current configuration using Spring Integration DSL
@Bean
public SimpleMessageListenerContainer workListenerContainer() {
    SimpleMessageListenerContainer container =
            new SimpleMessageListenerContainer(rabbitConnectionFactory);
    container.setQueues(worksQueue());
    container.setConcurrentConsumers(4);
    container.setDefaultRequeueRejected(false);
    container.setTransactionManager(transactionManager);
    container.setChannelTransacted(true);
    container.setTxSize(10);
    container.setAcknowledgeMode(AcknowledgeMode.AUTO);
    return container;
}
@Bean
public AmqpInboundChannelAdapter inboundRabbitMessages() {
    AmqpInboundChannelAdapter adapter = new AmqpInboundChannelAdapter(workListenerContainer());
    return adapter;
}
I have defined an error channel and my own taskScheduler to use for the MessageGroupStoreReaper.
@Bean
public ThreadPoolTaskScheduler taskScheduler() {
    ThreadPoolTaskScheduler ts = new ThreadPoolTaskScheduler();
    MessagePublishingErrorHandler mpe = new MessagePublishingErrorHandler();
    mpe.setDefaultErrorChannel(myErrorChannel());
    ts.setErrorHandler(mpe);
    return ts;
}

@Bean
public PollableChannel myErrorChannel() {
    return new QueueChannel();
}
public IntegrationFlow aggregationFlow() {
    return IntegrationFlows.from(inboundRabbitMessages())
            .transform(Transformers.fromJson(SomeObject.class))
            .aggregate(a -> {
                a.sendPartialResultOnExpiry(true);
                a.groupTimeout(3000);
                a.expireGroupsUponCompletion(true);
                a.expireGroupsUponTimeout(true);
                a.correlationExpression("T(Thread).currentThread().id");
                a.releaseExpression("size() == 10");
                a.transactional(true);
            })
            .handle("someService", "processMessages")
            .get();
}
Here is my custom error flow
@Bean
public IntegrationFlow errorResponse() {
    return IntegrationFlows.from("myErrorChannel")
            .<MessagingException, Message<?>>transform(MessagingException::getFailedMessage,
                    e -> e.poller(p -> p.fixedDelay(100)))
            .channel("myErrorChannelHandler")
            .handle("myErrorHandler", "handleFailedMessage")
            .log()
            .get();
}
Here is the custom error handler
@Component
public class MyErrorHandler {

    @Autowired
    BatchingRabbitTemplate batchingRabbitTemplate;

    @ServiceActivator(inputChannel = "myErrorChannelHandler")
    public void handleFailedMessage(Message<?> message) {
        ArrayList<SomeObject> payload = (ArrayList<SomeObject>) message.getPayload();
        payload.forEach(m -> batchingRabbitTemplate.convertAndSend("some.dlq", "#", m));
    }

}
Here is the BatchingRabbitTemplate bean
@Bean
public BatchingRabbitTemplate batchingRabbitTemplate() {
    ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
    scheduler.setPoolSize(5);
    scheduler.initialize();
    BatchingStrategy batchingStrategy = new SimpleBatchingStrategy(10, Integer.MAX_VALUE, 30000);
    BatchingRabbitTemplate batchingRabbitTemplate = new BatchingRabbitTemplate(batchingStrategy, scheduler);
    batchingRabbitTemplate.setConnectionFactory(rabbitConnectionFactory);
    return batchingRabbitTemplate;
}
Update 1) to show custom MessageGroupProcessor:
public class CustomAggregatingMessageGroupProcessor extends AbstractAggregatingMessageGroupProcessor {

    @Override
    protected final Object aggregatePayloads(MessageGroup group, Map<String, Object> headers) {
        return group;
    }

}
Example Service:
@Slf4j
public class SomeService {

    @ServiceActivator
    public void processMessages(MessageGroup messageGroup) throws IOException {
        Collection<Message<?>> messages = messageGroup.getMessages();
        // Do business logic
        // ack messages in the group
        for (Message<?> m : messages) {
            com.rabbitmq.client.Channel channel = (com.rabbitmq.client.Channel)
                    m.getHeaders().get("amqp_channel");
            long deliveryTag = (long) m.getHeaders().get("amqp_deliveryTag");
            log.debug("deliveryTag = {}", deliveryTag);
            log.debug("Channel = {}", channel);
            channel.basicAck(deliveryTag, false);
        }
    }

}
Updated integrationFlow
public IntegrationFlow aggregationFlowWithCustomMessageProcessor() {
    return IntegrationFlows.from(inboundRabbitMessages()).transform(Transformers.fromJson(SomeObject.class))
            .aggregate(a -> {
                a.sendPartialResultOnExpiry(true);
                a.groupTimeout(3000);
                a.expireGroupsUponCompletion(true);
                a.expireGroupsUponTimeout(true);
                a.correlationExpression("T(Thread).currentThread().id");
                a.releaseExpression("size() == 10");
                a.transactional(true);
                a.outputProcessor(new CustomAggregatingMessageGroupProcessor());
            }).handle("someService", "processMessages").get();
}
New ErrorHandler to do the nacks:
@Slf4j
public class MyErrorHandler {

    @ServiceActivator(inputChannel = "myErrorChannelHandler")
    public void handleFailedMessage(MessageGroup messageGroup) throws IOException {
        if (messageGroup != null) {
            log.debug("Nack messages size = {}", messageGroup.getMessages().size());
            Collection<Message<?>> messages = messageGroup.getMessages();
            for (Message<?> m : messages) {
                com.rabbitmq.client.Channel channel = (com.rabbitmq.client.Channel)
                        m.getHeaders().get("amqp_channel");
                long deliveryTag = (long) m.getHeaders().get("amqp_deliveryTag");
                log.debug("deliveryTag = {}", deliveryTag);
                log.debug("channel = {}", channel);
                channel.basicNack(deliveryTag, false, false);
            }
        }
    }

}
Update 2) Added a custom ReleaseStrategy and changed the aggregator:
public class CustomMeasureGroupReleaseStrategy implements ReleaseStrategy {

    private static final int MAX_MESSAGE_COUNT = 10;

    @Override
    public boolean canRelease(MessageGroup messageGroup) {
        return messageGroup.getMessages().size() >= MAX_MESSAGE_COUNT;
    }

}

public IntegrationFlow aggregationFlowWithCustomMessageProcessorAndReleaseStrategy() {
    return IntegrationFlows.from(inboundRabbitMessages()).transform(Transformers.fromJson(SomeObject.class))
            .aggregate(a -> {
                a.sendPartialResultOnExpiry(true);
                a.groupTimeout(3000);
                a.expireGroupsUponCompletion(true);
                a.expireGroupsUponTimeout(true);
                a.correlationExpression("T(Thread).currentThread().id");
                a.transactional(true);
                a.releaseStrategy(new CustomMeasureGroupReleaseStrategy());
                a.outputProcessor(new CustomAggregatingMessageGroupProcessor());
            }).handle("someService", "processMessages").get();
}
There are some flaws in your understanding. If you use AUTO, only the last message will be dead-lettered when an exception occurs; messages successfully deposited in the group before the release will be ack'd immediately.
The only way to achieve what you want is to use MANUAL acks, as sketched below.
There is no way to "tell the listener container to send messages to the DLQ". The container never sends messages to the DLQ; it rejects a message and the broker sends it to the DLX/DLQ.
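A minimal sketch of that change, reusing the workListenerContainer bean from the question (the transactional settings are omitted here for brevity):
@Bean
public SimpleMessageListenerContainer workListenerContainer() {
    SimpleMessageListenerContainer container =
            new SimpleMessageListenerContainer(rabbitConnectionFactory);
    container.setQueues(worksQueue());
    container.setConcurrentConsumers(4);
    container.setDefaultRequeueRejected(false);
    // MANUAL: nothing is acked automatically; SomeService.processMessages acks and
    // MyErrorHandler nacks via the channel carried in the amqp_channel header
    container.setAcknowledgeMode(AcknowledgeMode.MANUAL);
    return container;
}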

Overriding errorChannel configured in #MessagingGateway

I have configured @MessagingGateway as below to use an error channel, which works as expected.
@MessagingGateway(errorChannel = "defaultInboundErrorHandlerChannel")
public interface InboundMessagingGateway {

    @Gateway(requestChannel = "InboundEntryChannel")
    void receive(XferRes response);

}
Within the flow I am passing the object to a transformer as below:
Step 1:
@Transformer(inputChannel = "InboundEntryChannel", outputChannel = "TransmissionLogChannel")
public CassandraEntity createEntity(
        org.springframework.messaging.Message<XferRes> message) throws ParseException {
    XferRes response = message.getPayload();
    CassandraEntity entity = new CassandraEntity();
    // ... getters & setters omitted for brevity
    return entity;
}
Next, I update the entity as below:
Step 2:
@ServiceActivator(inputChannel = "TransmissionLogChannel", outputChannel = "PublishChannel")
public XferRes updateCassandraEntity(
        org.springframework.messaging.Message<XferRes> message) {
    XferRes response = message.getPayload();
    this.cassandraServiceImpl.update(response);
    return response;
}
And last, I post to a Kafka topic as below:
Step 3:
@ServiceActivator(inputChannel = "PublishChannel")
public void publish(org.springframework.messaging.Message<XferRes> message) {
    XferRes response = message.getPayload();
    publisher.post(response);
}
In case of an error I post the message to a service which publishes the error object to log ingestion:
@ServiceActivator(inputChannel = "defaultInboundErrorHandlerChannel")
public void handleInvalidRequest(org.springframework.messaging.Message<MessageHandlingException> message) throws ParseException {
    XferRes originalRequest = (XferRes) message.getPayload().getFailedMessage().getPayload();
    this.postToErrorBoard(originalRequest);
}
If an error occurs at Step 2, in updating the DB, then I also want to invoke Step 3. A trivial way is to remove Step 2 and make the database update call from Step 1.
Is there any other way in Spring Integration to invoke Step 3 regardless of whether an error occurs?
This technique is called PublishSubscribeChannel. Since you reuse the payload from the second step to send to the third step, it is definitely a use case for a PublishSubscribeChannel and two sequential subscribers to it.
I mean you create a PublishSubscribeChannel @Bean and those @ServiceActivators use the name of this channel; see the sketch after the javadoc below.
More info is in the Reference Manual. Pay attention to the ignoreFailures property:
/**
 * Specify whether failures for one or more of the handlers should be
 * ignored. By default this is <code>false</code> meaning that an Exception
 * will be thrown whenever a handler fails. To override this and suppress
 * Exceptions, set the value to <code>true</code>.
 * @param ignoreFailures true if failures should be ignored.
 */
public void setIgnoreFailures(boolean ignoreFailures) {
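A minimal sketch of that wiring (the bean method name is adapted from the question's TransmissionLogChannel, and the @Order values assume annotation-driven ordering of the subscribers); with ignoreFailures(true), the publish step still runs even if the database update fails:
@Bean
public PublishSubscribeChannel transmissionLogChannel() {
    PublishSubscribeChannel channel = new PublishSubscribeChannel();
    // keep calling the remaining subscribers even if one of them throws
    channel.setIgnoreFailures(true);
    return channel;
}

// Subscriber 1: update the DB
@ServiceActivator(inputChannel = "transmissionLogChannel")
@Order(1)
public void updateCassandraEntity(org.springframework.messaging.Message<XferRes> message) {
    this.cassandraServiceImpl.update(message.getPayload());
}

// Subscriber 2: post to Kafka, invoked whether or not subscriber 1 failed
@ServiceActivator(inputChannel = "transmissionLogChannel")
@Order(2)
public void publish(org.springframework.messaging.Message<XferRes> message) {
    publisher.post(message.getPayload());
}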

Spring rabbit retries to deliver rejected message... is it OK?

I have the following configuration
spring.rabbitmq.listener.prefetch=1
spring.rabbitmq.listener.concurrency=1
spring.rabbitmq.listener.retry.enabled=true
spring.rabbitmq.listener.retry.max-attempts=3
spring.rabbitmq.listener.retry.max-interval=1000
spring.rabbitmq.listener.default-requeue-rejected=false //I have also changed it to true but the same behavior still happens
and in my listener I throw AmqpRejectAndDontRequeueException to reject the message and force rabbit not to redeliver it... But rabbit redelivers it 3 times and then finally routes it to the dead letter queue.
Is that the standard behavior according to my provided configuration or do I miss something?
You have to configure the retry policy to not retry for that exception.
You can't do that with properties; you have to configure the retry advice yourself.
I'll post an example later if you need help with that.
requeue-rejected is at the container level (below retry on the stack).
EDIT
@SpringBootApplication
public class So39853762Application {

    public static void main(String[] args) throws Exception {
        ConfigurableApplicationContext context = SpringApplication.run(So39853762Application.class, args);
        Thread.sleep(60000);
        context.close();
    }

    @RabbitListener(queues = "foo")
    public void foo(String foo) {
        System.out.println(foo);
        if ("foo".equals(foo)) {
            throw new AmqpRejectAndDontRequeueException("foo"); // won't be retried.
        }
        else {
            throw new IllegalStateException("bar"); // will be retried
        }
    }

    @Bean
    public ListenerRetryAdviceCustomizer retryCustomizer(SimpleRabbitListenerContainerFactory containerFactory,
            RabbitProperties rabbitProperties) {
        return new ListenerRetryAdviceCustomizer(containerFactory, rabbitProperties);
    }

    public static class ListenerRetryAdviceCustomizer implements InitializingBean {

        private final SimpleRabbitListenerContainerFactory containerFactory;

        private final RabbitProperties rabbitProperties;

        public ListenerRetryAdviceCustomizer(SimpleRabbitListenerContainerFactory containerFactory,
                RabbitProperties rabbitProperties) {
            this.containerFactory = containerFactory;
            this.rabbitProperties = rabbitProperties;
        }

        @Override
        public void afterPropertiesSet() throws Exception {
            ListenerRetry retryConfig = this.rabbitProperties.getListener().getRetry();
            if (retryConfig.isEnabled()) {
                RetryInterceptorBuilder<?> builder = (retryConfig.isStateless()
                        ? RetryInterceptorBuilder.stateless()
                        : RetryInterceptorBuilder.stateful());
                Map<Class<? extends Throwable>, Boolean> retryableExceptions = new HashMap<>();
                retryableExceptions.put(AmqpRejectAndDontRequeueException.class, false);
                retryableExceptions.put(IllegalStateException.class, true);
                SimpleRetryPolicy policy =
                        new SimpleRetryPolicy(retryConfig.getMaxAttempts(), retryableExceptions, true);
                ExponentialBackOffPolicy backOff = new ExponentialBackOffPolicy();
                backOff.setInitialInterval(retryConfig.getInitialInterval());
                backOff.setMultiplier(retryConfig.getMultiplier());
                backOff.setMaxInterval(retryConfig.getMaxInterval());
                builder.retryPolicy(policy)
                        .backOffPolicy(backOff)
                        .recoverer(new RejectAndDontRequeueRecoverer());
                this.containerFactory.setAdviceChain(builder.build());
            }
        }

    }

}
NOTE: You cannot currently configure the policy to retry all exceptions, "except" this one - you have to classify all exceptions you want retried (and they can't be a superclass of AmqpRejectAndDontRequeueException). I have opened an issue to support this.
The other answers posted here didn't work for me when using Spring Boot 2.3.5 and Spring AMQP Starter 2.2.12, but for these versions I was able to customize the retry policy to not retry AmqpRejectAndDontRequeueException exceptions:
@Configuration
public class RabbitConfiguration {

    @Bean
    public RabbitRetryTemplateCustomizer customizeRetryPolicy(
            @Value("${spring.rabbitmq.listener.simple.retry.max-attempts}") int maxAttempts) {
        SimpleRetryPolicy policy = new SimpleRetryPolicy(maxAttempts, Map.of(AmqpRejectAndDontRequeueException.class, false), true, true);
        return (target, retryTemplate) -> retryTemplate.setRetryPolicy(policy);
    }

}
This lets the retry policy skip retries for AmqpRejectAndDontRequeueException but retry all other exceptions as usual.
Configured this way, it traverses the causes of an exception and skips retries if it finds an AmqpRejectAndDontRequeueException anywhere in the chain.
Traversing the causes is needed because org.springframework.amqp.rabbit.listener.adapter.MessagingMessageListenerAdapter#invokeHandler wraps all exceptions in a ListenerExecutionFailedException, as the sketch below illustrates.
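A small sketch of why traverseCauses matters here, exercising the policy directly:
SimpleRetryPolicy policy = new SimpleRetryPolicy(3,
        Map.of(AmqpRejectAndDontRequeueException.class, false), true, true);

// the adapter surfaces listener failures wrapped like this
Throwable wrapped = new ListenerExecutionFailedException("Listener threw exception",
        new AmqpRejectAndDontRequeueException("stop"));

RetryContext context = policy.open(null);
policy.registerThrowable(context, wrapped);
// prints false: the classifier walked the cause chain and found the
// non-retryable AmqpRejectAndDontRequeueException
System.out.println(policy.canRetry(context));
With traverseCauses set to false, the same check would only see the ListenerExecutionFailedException and would keep retrying.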
