How to wrap an exception on exhausted retries with Spring Retry

Context:
I'm using spring-retry to retry restTemplate calls.
The restTemplate calls are called from a kafka listener.
The Kafka listener is also configured to retry on error (if any exception is thrown during processing, not only from the restTemplate call).
Goal:
I'd like to prevent Kafka from retrying when the error comes from a retry template that has exhausted its retries.
Actual behavior:
When the retryTemplate exhausts all retries, the original exception is thrown. This prevents me from identifying whether the error was already retried by the retryTemplate.
Desired behavior:
When the retryTemplate exhausts all retries, wrap the original exception in a RetryExhaustedException, which will allow me to blacklist it from Kafka retries.
Question:
How can I do something like this?
Thanks
Edit
RetryTemplate configuration:
RetryTemplate retryTemplate = new RetryTemplate();
FixedBackOffPolicy backOffPolicy = new FixedBackOffPolicy();
backOffPolicy.setBackOffPeriod(1000);
retryTemplate.setBackOffPolicy(backOffPolicy);
Map<Class<? extends Throwable>, Boolean> retryableExceptions = new HashMap<>();
retryableExceptions.put(FunctionalException.class, false);
SimpleRetryPolicy retryPolicy = new SimpleRetryPolicy(3, retryableExceptions, true, true);
retryTemplate.setRetryPolicy(retryPolicy);
retryTemplate.setThrowLastExceptionOnExhausted(false);
Kafka ErrorHandler:
public class DefaultErrorHandler implements ErrorHandler {

    @Override
    public void handle(Exception thrownException, ConsumerRecord<?, ?> data) {
        Throwable exception = Optional.ofNullable(thrownException.getCause()).orElse(thrownException);
        // TODO if the exception has been retried in a RetryTemplate, stop it to prevent rollback and send it to a DLQ
        // else rethrow the exception; it will be rolled back and handled by the AfterRollbackProcessor to be retried
        throw new KafkaException("Could not handle exception", thrownException);
    }
}
Kafka listener:
@KafkaListener
public void onMessage(ConsumerRecord<String, String> record) {
    retryTemplate.execute((args) -> {
        throw new RuntimeException("Should be caught by the ErrorHandler to prevent rollback");
    });
    throw new RuntimeException("Should be retried by the afterRollbackProcessor");
}

Simply configure the listener retry template with a SimpleRetryPolicy that is configured to classify RetryExhaustedException as not retryable.
Be sure to set the traverseCauses property to true since the container wraps all listener exceptions in ListenerExecutionFailedException.
/**
 * Create a {@link SimpleRetryPolicy} with the specified number of retry
 * attempts. If traverseCauses is true, the exception causes will be traversed until
 * a match is found. The default value indicates whether to retry or not for exceptions
 * (or super classes) that are not found in the map.
 *
 * @param maxAttempts the maximum number of attempts
 * @param retryableExceptions the map of exceptions that are retryable based on the
 * map value (true/false).
 * @param traverseCauses true to traverse the exception cause chain until a classified
 * exception is found or the root cause is reached.
 * @param defaultValue the default action.
 */
public SimpleRetryPolicy(int maxAttempts, Map<Class<? extends Throwable>, Boolean> retryableExceptions,
        boolean traverseCauses, boolean defaultValue) {
EDIT
Use
template.execute((args) -> {...}, (context) -> {
    throw new Blah(context.getLastThrowable());
});
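A fuller sketch of the pattern, assuming a RetryExhaustedException of your own (it is not a framework class) and a hypothetical url for the restTemplate call:

// Hypothetical wrapper; blacklist this type in the listener-level retry policy.
public class RetryExhaustedException extends RuntimeException {

    public RetryExhaustedException(Throwable cause) {
        super(cause);
    }
}

// In the listener, the recovery callback wraps whatever the retryTemplate gave up on:
retryTemplate.execute(context -> restTemplate.getForObject(url, String.class),
        context -> {
            throw new RetryExhaustedException(context.getLastThrowable());
        });

// Listener-level policy: never retry RetryExhaustedException; traverseCauses = true
// because the container wraps listener exceptions in ListenerExecutionFailedException.
SimpleRetryPolicy listenerPolicy = new SimpleRetryPolicy(3,
        Map.of(RetryExhaustedException.class, false), true, true);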

Related

Can we add a specific condition in @Retryable for any exception in spring-retry?

I have a class with the @Retryable annotation added to a method, with value set to a custom exception and maxAttempts = 2.
@Override
@Retryable(value = CustomException.class, maxAttempts = 2)
public void process(String input) {
    //code logic
}
Currently this code is retried every time a CustomException is thrown in the application, but my code throws this CustomException in different ways, like:
throw new CustomException(CustomErrorCode.RETRY);
throw new CustomException(CustomErrorCode.DONOTRETRY);
I want to retry only the CustomException that has error code RETRY.
Can anybody help?
You cannot add conditions based on the exception properties; however, in the retryable method, you can do this:
RetrySynchronizationManager.getContext().setExhaustedOnly();
This prevents any retries.
/**
 * Signal to the framework that no more attempts should be made to try or retry the
 * current {@link RetryCallback}.
 */
void setExhaustedOnly();
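A minimal sketch of using it inside the retryable method; getErrorCode() is assumed here, adapt to however CustomException exposes its code:

@Retryable(value = CustomException.class, maxAttempts = 2)
public void process(String input) {
    try {
        // code logic
    }
    catch (CustomException e) {
        if (e.getErrorCode() == CustomErrorCode.DONOTRETRY) {
            // tell Spring Retry to stop here instead of retrying
            RetrySynchronizationManager.getContext().setExhaustedOnly();
        }
        throw e;
    }
}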

How to know which exception was thrown from the error handler in a dead-letter queue listener?

I have a quorum queue (myQueue) and its dead-letter queue (myDLQueue). We have several exceptions which we have classified as retryable or fatal. But sometimes the listener below makes an API call that throws a RateLimitException. In this case the application should increase both the retry count and the retry delay.
@RabbitListener(queues = "#{myQueue.getName()}", errorHandler = "myErrorHandler")
@SendTo("#{myStatusQueue.getName()}")
public Status process(@Payload MyMessage message, @Headers MessageHeaders headers) {
    int retries = headerProcessor.getRetries(headers);
    if (retries > properties.getMyQueueMaxRetries()) {
        throw new RetriesExceededException(retries);
    }
    if (retries > 0) {
        logger.info("Message {} has been retried {} times. Process it again anyway", kv("task_id", message.getTaskId()), retries);
    }
    // here we send a request to an api, but sometimes the api returns a rate limit error when we send too many requests.
    // In that case makeApiCall throws RateLimitException, which extends RetryableException
    makeApiCall(); // --> it will throw RateLimitException
    if (/* a condition that needs to retry sending the message */) {
        throw new RetryableException();
    }
    if (/* a condition that should not retry */) {
        throw new FatalException();
    }
    return new Status("Step 1 Success!");
}
I also have an error handler (myErrorHandler) that catches exceptions thrown from the above rabbit listener and manages the retry process according to the type of the exception.
public class MyErrorHandler implements RabbitListenerErrorHandler {

    @Override
    public Object handleError(Message amqpMessage,
            org.springframework.messaging.Message<?> message,
            ListenerExecutionFailedException exception) {
        // Check if error is fatal or retryable
        if (exception.getCause() /* ..is fatal? */) {
            return new Status("FAIL!");
        }
        // Retryable exception; rethrow it and let the message be NACKed and retried via the DLQ
        throw exception;
    }
}
The last part I have is a DLQHandler that listens to dead-letter queue messages and re-sends them to the original queue (myQueue).
@Service
public class MyDLQueueHandler {

    private final MyAppProperties properties;
    private final MessageHeaderProcessor headerProcessor;
    private final RabbitProducerService rabbitProducerService;

    public MyDLQueueHandler(MyAppProperties properties, MessageHeaderProcessor headerProcessor, RabbitProducerService rabbitProducerService) {
        this.properties = properties;
        this.headerProcessor = headerProcessor;
        this.rabbitProducerService = rabbitProducerService;
    }

    /**
     * Since message TTL is not available with quorum queues, manually listen to the DL queue and re-send the message with a delay.
     * This allows messages to be processed again.
     */
    @RabbitListener(queues = "#{myDLQueue.getName()}")
    public void handleError(@Payload Object message, @Headers MessageHeaders headers) {
        String routingKey = headerProcessor.getRoutingKey(headers);
        Map<String, Object> newHeaders = Map.of(
                MessageHeaderProcessor.DELAY, properties.getRetryDelay(), // I need to send an increased delay in case of RateLimitException.
                MessageHeaderProcessor.RETRIES_HEADER, headerProcessor.getRetries(headers) + 1
        );
        rabbitProducerService.sendMessageDelayed(message, routingKey, newHeaders);
    }
}
In the handleError method above, there is no information about the exception thrown from MyErrorHandler or from the myQueue listener. Currently I have to pass the retry delay by reading it from app.properties, but I need to increase this delay if a RateLimitException was thrown. So my question is: how do I know which error was thrown from MyErrorHandler while in MyDLQueueHandler?
When you use the normal dead-letter mechanism in RabbitMQ, no exception information is provided; the message is the original rejected message. However, Spring AMQP provides a RepublishMessageRecoverer which can be used in conjunction with a retry interceptor. In that case, exception information is published in headers.
See https://docs.spring.io/spring-amqp/docs/current/reference/html/#async-listeners
The RepublishMessageRecoverer publishes the message with additional information in message headers, such as the exception message, stack trace, original exchange, and routing key. Additional headers can be added by creating a subclass and overriding additionalHeaders().
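For example, a subclass could republish the unwrapped exception's class name in a header of its own (a sketch; the x-exception-type header name is made up here):

public class TypedRepublishMessageRecoverer extends RepublishMessageRecoverer {

    public TypedRepublishMessageRecoverer(AmqpTemplate errorTemplate, String errorExchange, String errorRoutingKey) {
        super(errorTemplate, errorExchange, errorRoutingKey);
    }

    @Override
    protected Map<String, Object> additionalHeaders(Message message, Throwable cause) {
        // unwrap the ListenerExecutionFailedException to reach the listener's own exception
        Throwable root = cause.getCause() != null ? cause.getCause() : cause;
        return Map.of("x-exception-type", root.getClass().getName());
    }
}

The recoverer is wired into a retry interceptor: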
@Bean
RetryOperationsInterceptor interceptor() {
    return RetryInterceptorBuilder.stateless()
            .maxAttempts(5)
            .recoverer(new RepublishMessageRecoverer(amqpTemplate(), "something", "somethingelse"))
            .build();
}
The interceptor is added to the container's advice chain.
https://github.com/spring-projects/spring-amqp/blob/57596c6a26be2697273cd97912049b92e81d3f1a/spring-rabbit/src/main/java/org/springframework/amqp/rabbit/retry/RepublishMessageRecoverer.java#L55-L61
public static final String X_EXCEPTION_STACKTRACE = "x-exception-stacktrace";
public static final String X_EXCEPTION_MESSAGE = "x-exception-message";
public static final String X_ORIGINAL_EXCHANGE = "x-original-exchange";
public static final String X_ORIGINAL_ROUTING_KEY = "x-original-routingKey";
The exception type can be found in the stack trace header.
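So, in the DLQ handler, you can branch on those headers; a sketch, where getRateLimitRetryDelay() is a hypothetical accessor on your properties class:

@RabbitListener(queues = "#{myDLQueue.getName()}")
public void handleError(@Payload Object message, @Headers MessageHeaders headers) {
    String stackTrace = (String) headers.get(RepublishMessageRecoverer.X_EXCEPTION_STACKTRACE);
    // crude but effective: the exception type appears at the top of the republished stack trace
    long delay = stackTrace != null && stackTrace.contains("RateLimitException")
            ? properties.getRateLimitRetryDelay()
            : properties.getRetryDelay();
    // re-send with the chosen delay, as in the original handler
    rabbitProducerService.sendMessageDelayed(message, headerProcessor.getRoutingKey(headers),
            Map.of(MessageHeaderProcessor.DELAY, delay,
                    MessageHeaderProcessor.RETRIES_HEADER, headerProcessor.getRetries(headers) + 1));
}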

In Spring RabbitMQ I throw AmqpRejectAndDontRequeueException but the message is still requeued

My service listens to a RabbitMQ queue. I configure the retry policy on the consumer side. When I throw an exception, all dead-letter messages are requeued. But depending on my business logic, after throwing a StopRequeueException (every exception except SmsException) I want to stop retries for this message. But the message is still requeued.
Here is my configuration:
spring:
  rabbitmq:
    listener:
      simple:
        retry:
          enabled: true
          initial-interval: 3s
          max-attempts: 10
          max-interval: 12s
          multiplier: 2
        missing-queues-fatal: false
if (!checkMobileService.isMobileNumberAdmitted(mobileNumber())) {
    throw new StopRequeueException("SMS_BIMTEK.MOBILE_NUMBER_IS_NOT_ADMITTED");
}
My error handler:
public class CustomErrorHandler implements ErrorHandler {

    @Override
    public void handleError(Throwable t) {
        if (!(t.getCause() instanceof SmsException)) {
            throw new AmqpRejectAndDontRequeueException("Error Handler converted exception to fatal", t);
        }
    }
}
Calling the error handler is outside the scope of retry; it is called after retries are exhausted.
You need to classify which exceptions are retryable at the retry level and do the conversion in the recoverer.
Here is an example:
@SpringBootApplication
public class So67406799Application {

    public static void main(String[] args) {
        SpringApplication.run(So67406799Application.class, args);
    }

    @Bean
    public RabbitRetryTemplateCustomizer customizer(
            @Value("${spring.rabbitmq.listener.simple.retry.max-attempts}") int attempts) {
        return (target, template) -> template.setRetryPolicy(new SimpleRetryPolicy(attempts,
                Map.of(StopRequeueException.class, false), true, true));
    }

    @Bean
    MessageRecoverer recoverer() {
        return (msg, cause) -> {
            throw new AmqpRejectAndDontRequeueException("Stop requeue after " +
                    RetrySynchronizationManager.getContext().getRetryCount() + " attempts");
        };
    }

    @RabbitListener(queues = "so67406799")
    void listen(String in) {
        System.out.println(in);
        if (in.equals("dontRetry")) {
            throw new StopRequeueException("test");
        }
        throw new RuntimeException("test");
    }

    @Bean
    Queue queue() {
        return new Queue("so67406799");
    }
}

@SuppressWarnings("serial")
class StopRequeueException extends NestedRuntimeException {

    public StopRequeueException(String msg) {
        super(msg);
    }
}
EDIT
The customizer is called once by Spring Boot, after the retry policy and back-off policy have been set up; see RetryTemplateFactory.
In this case, the customizer replaces the retry policy with a new one with an exception classifier (that's why we need the max attempts injected here).
See the SimpleRetryPolicy constructor.
/**
 * Create a {@link SimpleRetryPolicy} with the specified number of retry attempts. If
 * traverseCauses is true, the exception causes will be traversed until a match is
 * found. The default value indicates whether to retry or not for exceptions (or super
 * classes) that are not found in the map.
 * @param maxAttempts the maximum number of attempts
 * @param retryableExceptions the map of exceptions that are retryable based on the
 * map value (true/false).
 * @param traverseCauses true to traverse the exception cause chain until a classified
 * exception is found or the root cause is reached.
 * @param defaultValue the default action.
 */
public SimpleRetryPolicy(int maxAttempts, Map<Class<? extends Throwable>, Boolean> retryableExceptions,
        boolean traverseCauses, boolean defaultValue) {
The last boolean in the config above (true) is the default behavior: retry exceptions that are not in the map. The third (true) tells the policy to follow the cause chain to look for the exception (like your getCause() in the error handler). The map entry <StopRequeueException, false> says: don't retry this one.
You can also configure it the other way around (default false and true in the map values), explicitly stating which exceptions you want to retry and not retrying all others, as sketched below.
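A sketch of that inverted configuration, with a hypothetical TransientException being the only thing retried:

template.setRetryPolicy(new SimpleRetryPolicy(attempts,
        Map.of(TransientException.class, true), // only this (and subclasses) is retried
        true,    // traverse the cause chain
        false)); // default: do not retry anything else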
The MessageRecoverer is called for all exceptions, either immediately for the classified exception or when retries are exhausted for the others.

How to implement ErrorHandlingDeserializer with SeekToCurrentErrorHandler and publish error records to a custom topic

I am trying to write a Kafka consumer application in spring-kafka. I can think of 2 scenarios in which an error can occur:
While processing records, an exception can occur in the service layer (while updating records through an API in a table)
Deserialization error
I have already explored an option to handle scenario 1: I can just throw an exception in my code and handle it using SeekToCurrentErrorHandler.
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.getContainerProperties().setAckOnError(false);
    factory.getContainerProperties().setAckMode(AckMode.RECORD);
    factory.setErrorHandler(new SeekToCurrentErrorHandler(new FixedBackOff(1000L, 2L)));
    return factory;
}
For scenario 2, I have found the ErrorHandlingDeserializer, but I am not sure how to implement it alongside SeekToCurrentErrorHandler. Is there a way to handle both scenarios using SeekToCurrentErrorHandler?
My properties class is as below:
@Bean
public ConsumerFactory<String, String> consumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, KAFKA_BROKERS);
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, OFFSET_RESET);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class.getName());
    props.put(ConsumerConfig.GROUP_ID_CONFIG, GROUP_ID_CONFIG);
    props.put(KafkaAvroDeserializerConfig.SCHEMA_REGISTRY_URL_CONFIG, SCHEMA_REGISTRY_URL);
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
    props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, SSL_PROTOCOL);
    props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, SSL_TRUSTSTORE_LOCATION_FILE_NAME);
    props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, SSL_TRUSTSTORE_SECURE);
    props.put(SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG, SSL_KEYSTORE_LOCATION_FILE_NAME);
    props.put(SslConfigs.SSL_KEYSTORE_PASSWORD_CONFIG, SSL_KEYSTORE_SECURE);
    props.put(SslConfigs.SSL_KEY_PASSWORD_CONFIG, SSL_KEY_SECURE);
    return new DefaultKafkaConsumerFactory<>(props);
}
Publishing error records:
I am also thinking of publishing error records to a dead letter queue. For scenario 1, it should retry and then publish to the dead letter queue; for scenario 2, it should publish directly, as there is no benefit in retrying. I may not have access to create a topic on my own and would need to ask my producers to create one topic for error records as well. How can I implement logic to publish records to a custom error topic?
I have no control over the name if I use DeadLetterPublishingRecoverer; based on my understanding, it creates a topic named <original_topic_name>.DLT.
The SeekToCurrentErrorHandler treats certain exceptions (such as DeserializationException) as fatal; they are not retried, and the failed record is immediately sent to the recoverer.
For retryable exceptions, the recoverer is called after retries are exhausted.
/**
 * Add exception types to the default list. By default, the following exceptions will
 * not be retried:
 * <ul>
 * <li>{@link DeserializationException}</li>
 * <li>{@link MessageConversionException}</li>
 * <li>{@link ConversionException}</li>
 * <li>{@link MethodArgumentResolutionException}</li>
 * <li>{@link NoSuchMethodException}</li>
 * <li>{@link ClassCastException}</li>
 * </ul>
 * All others will be retried.
 * @param exceptionTypes the exception types.
 * @see #removeNotRetryableException(Class)
 * @see #setClassifications(Map, boolean)
 */
public final void addNotRetryableExceptions(Class<? extends Exception>... exceptionTypes) {
Based on my understanding, it creates topic with <original_topic_name>.DLT.
That is the default behavior; you can provide your own DLT topic name strategy (destination resolver).
See the documentation.
The following example shows how to wire a custom destination resolver:
DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(template,
        (r, e) -> {
            if (e instanceof FooException) {
                return new TopicPartition(r.topic() + ".Foo.failures", r.partition());
            }
            else {
                return new TopicPartition(r.topic() + ".other.failures", r.partition());
            }
        });
ErrorHandler errorHandler = new SeekToCurrentErrorHandler(recoverer, new FixedBackOff(0L, 2L));
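For scenario 2, the value deserializer is wrapped in an ErrorHandlingDeserializer that delegates to the real one; a sketch against the consumer properties from the question:

props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer.class);
// the delegate does the real work; a failure there reaches the SeekToCurrentErrorHandler
// as a DeserializationException, which is classified as fatal and goes straight to the recoverer
props.put(ErrorHandlingDeserializer.VALUE_DESERIALIZER_CLASS, KafkaAvroDeserializer.class);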

Spring Kafka consumer - manual commit with recovery callback mechanism

I am building a Kafka consumer. I have set a recovery callback similar to the one below, and I have enabled manual commit. How can I acknowledge the message in the recovery callback method so that there is no lag?
@Bean
public ConcurrentKafkaListenerContainerFactory<String, Map<String, Object>> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, Map<String, Object>> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConcurrency(conncurrency);
    factory.setConsumerFactory(consumerFactory());
    factory.setRetryTemplate(retryTemplate());
    factory.setRecoveryCallback(new RecoveryCallback<Object>() {

        @Override
        public Object recover(RetryContext context) throws Exception {
            // TODO Auto-generated method stub
            logger.debug(" In recovery callback method !!");
            return null;
        }
    });
    factory.getContainerProperties().setAckMode(AckMode.MANUAL);
    return factory;
}
/*
 * Retry template.
 */
protected RetryPolicy retryPolicy() {
    SimpleRetryPolicy policy = new SimpleRetryPolicy(maxRetryAttempts, retryableExceptions);
    return policy;
}

protected BackOffPolicy backOffPolicy() {
    ExponentialBackOffPolicy policy = new ExponentialBackOffPolicy();
    policy.setInitialInterval(initialRetryInterval);
    policy.setMultiplier(retryMultiplier);
    return policy;
}

protected RetryTemplate retryTemplate() {
    RetryTemplate template = new RetryTemplate();
    template.setRetryPolicy(retryPolicy());
    template.setBackOffPolicy(backOffPolicy());
    return template;
}
}
Your question is too broad; you need to be more specific.
The framework makes no assumptions about what you might want to do when retries are exhausted during consumption errors.
I think you should start with the Spring Retry project to understand what the RecoveryCallback is and how it works:
If the business logic does not succeed before the template decides to abort, then the client is given the chance to do some alternate processing through the recovery callback.
A RetryContext has:
/**
 * Accessor for the exception object that caused the current retry.
 *
 * @return the last exception that caused a retry, or possibly null. It will be null
 * if this is the first attempt, but also if the enclosing policy decides not to
 * provide it (e.g. because of concerns about memory usage).
 */
Throwable getLastThrowable();
Spring Kafka also populates additional attributes in that RetryContext for use in the RecoveryCallback: https://docs.spring.io/spring-kafka/docs/2.0.0.RELEASE/reference/html/_reference.html#_retrying_deliveries
The contents of the RetryContext passed into the RecoveryCallback will depend on the type of listener. The context will always have an attribute record which is the record for which the failure occurred. If your listener is acknowledging and/or consumer aware, additional attributes acknowledgment and/or consumer will be available. For convenience, the RetryingAcknowledgingMessageListenerAdapter provides static constants for these keys. See its javadocs for more information.
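Putting that together, an acknowledging recovery callback could look like this; a sketch, assuming an acknowledging listener so that the acknowledgment attribute is populated:

factory.setRecoveryCallback(context -> {
    // attribute keys "record" and "acknowledgment" are the constants documented on
    // RetryingAcknowledgingMessageListenerAdapter
    ConsumerRecord<?, ?> record = (ConsumerRecord<?, ?>) context.getAttribute("record");
    Acknowledgment ack = (Acknowledgment) context.getAttribute("acknowledgment");
    logger.error("Retries exhausted for {}", record, context.getLastThrowable());
    if (ack != null) {
        ack.acknowledge(); // commit the offset so the consumer does not lag
    }
    return null;
});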
