Can someone please help me understand why a message offset that is manually and immediately committed is re-processed by the KafkaListener when an exception occurs?
So I'm expecting the following behaviour:
I receive an event in Kafka Listener
I commit the offset
An exception occurs
I'm expecting that message not to be reprocessed because the offset was committed.
Is my understanding correct? Or does Spring roll back the manual Acknowledgment when an exception occurs?
I have the following Listener code:
@KafkaListener(topics = {"${acknowledgement.topic}"}, containerFactory = "concurrentKafkaListenerContainerFactory")
public void onMessage(String message, Acknowledgment acknowledgment) throws InterruptedException {
acknowledgment.acknowledge();
throw new Exception1();
}
And the concurrentKafkaListenerContainerFactory code is:
@Bean
public ConsumerFactory<String, String> consumerFactory() {
kafkaProperties.getConsumer().setEnableAutoCommit(false);
return new DefaultKafkaConsumerFactory<>(kafkaProperties.buildConsumerProperties());
}
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> concurrentKafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<String, String> concurrentKafkaListenerContainerFactory = new ConcurrentKafkaListenerContainerFactory<>();
concurrentKafkaListenerContainerFactory.setConsumerFactory(consumerFactory());
concurrentKafkaListenerContainerFactory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
return concurrentKafkaListenerContainerFactory;
}
Yes; by default, the DefaultErrorHandler treats any exception as retryable, regardless of whether its offset has already been committed.
You should either not throw an exception, or tell the DefaultErrorHandler which exception(s) should not be retried.
/**
* Add exception types to the default list. By default, the following exceptions will
* not be retried:
* <ul>
* <li>{@link DeserializationException}</li>
* <li>{@link MessageConversionException}</li>
* <li>{@link ConversionException}</li>
* <li>{@link MethodArgumentResolutionException}</li>
* <li>{@link NoSuchMethodException}</li>
* <li>{@link ClassCastException}</li>
* </ul>
* All others will be retried, unless {@link #defaultFalse()} has been called.
* @param exceptionTypes the exception types.
* @see #removeClassification(Class)
* @see #setClassifications(Map, boolean)
*/
public final void addNotRetryableExceptions(Class<? extends Exception>... exceptionTypes) {
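For example, here is a minimal sketch of the factory from the question (assuming spring-kafka 2.8+, where DefaultErrorHandler is the container error handler; Exception1 is the listener's own exception) that marks that exception as not retryable, so the record is not re-sought after the offset has been committed:

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> concurrentKafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
    // Do not re-seek/retry when the listener throws Exception1; the committed offset stands.
    DefaultErrorHandler errorHandler = new DefaultErrorHandler();
    errorHandler.addNotRetryableExceptions(Exception1.class);
    factory.setCommonErrorHandler(errorHandler);
    return factory;
}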
In a test, I'm trying to verify whether a message passed through a specific channel (or to capture the message from that channel).
So my flow is: controller -> gateway -> ServiceActivator
private final Gateway gateway;
public ResponseEntity<Map<String,String>> submit(String applicationId, ApplicationDto applicationDto) {
applicationDto.setApplicationId(applicationId);
gateway.submitApplication(applicationDto);
return ResponseEntity.ok(Map.of(MESSAGE, "Accepted submit"));
}
the gateway
@Gateway(requestChannel = "submitApplicationChannel", replyChannel = "replySubmitApplicationChannel")
WorkflowPayload submitApplication(ApplicationDto applicationDto);
pipeline
@Bean
MessageChannel submitApplicationChannel() {
return new DirectChannel();
}
So my test is sending a request to start the flow
@Test
@DisplayName("Application Submission")
void submissionTest() throws Exception {
mockMvc.perform(MockMvcRequestBuilders
.post("/api/v1/applications/contract-validation/" + APPLICATION_ID)
.contentType(MediaType.APPLICATION_JSON)
.content(objectMapper.writeValueAsString(payload)))
.andExpect(status().isAccepted())
.andReturn();
//Check HERE if the message passed through the channel
}
Can you give me a hand?
In your test, add a ChannelInterceptor to the submitApplicationChannel before calling the gateway.
public interface ChannelInterceptor {
/**
* Invoked before the Message is actually sent to the channel.
* This allows for modification of the Message if necessary.
* If this method returns {@code null} then the actual
* send invocation will not occur.
*/
@Nullable
default Message<?> preSend(Message<?> message, MessageChannel channel) {
return message;
}
/**
* Invoked immediately after the send invocation. The boolean
* value argument represents the return value of that invocation.
*/
default void postSend(Message<?> message, MessageChannel channel, boolean sent) {
}
/**
* Invoked after the completion of a send regardless of any exception that
* have been raised thus allowing for proper resource cleanup.
* <p>Note that this will be invoked only if {@link #preSend} successfully
* completed and returned a Message, i.e. it did not return {@code null}.
* @since 4.1
*/
default void afterSendCompletion(
Message<?> message, MessageChannel channel, boolean sent, @Nullable Exception ex) {
}
/**
* Invoked as soon as receive is called and before a Message is
* actually retrieved. If the return value is 'false', then no
* Message will be retrieved. This only applies to PollableChannels.
*/
default boolean preReceive(MessageChannel channel) {
return true;
}
/**
* Invoked immediately after a Message has been retrieved but before
* it is returned to the caller. The Message may be modified if
* necessary; {@code null} aborts further interceptor invocations.
* This only applies to PollableChannels.
*/
@Nullable
default Message<?> postReceive(Message<?> message, MessageChannel channel) {
return message;
}
/**
* Invoked after the completion of a receive regardless of any exception that
* have been raised thus allowing for proper resource cleanup.
* <p>Note that this will be invoked only if {@link #preReceive} successfully
* completed and returned {@code true}.
* @since 4.1
*/
default void afterReceiveCompletion(@Nullable Message<?> message, MessageChannel channel,
@Nullable Exception ex) {
}
}
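For example, a rough sketch of such a test (assuming the submitApplicationChannel bean can be autowired into the test as an AbstractMessageChannel, and reusing the APPLICATION_ID and payload from the question):

@Autowired
private AbstractMessageChannel submitApplicationChannel;

@Test
@DisplayName("Application Submission")
void submissionTest() throws Exception {
    // Capture whatever is sent to the channel during this test.
    AtomicReference<Message<?>> intercepted = new AtomicReference<>();
    submitApplicationChannel.addInterceptor(new ChannelInterceptor() {

        @Override
        public Message<?> preSend(Message<?> message, MessageChannel channel) {
            intercepted.set(message);
            return message;
        }
    });

    mockMvc.perform(MockMvcRequestBuilders
            .post("/api/v1/applications/contract-validation/" + APPLICATION_ID)
            .contentType(MediaType.APPLICATION_JSON)
            .content(objectMapper.writeValueAsString(payload)))
        .andExpect(status().isAccepted())
        .andReturn();

    // The message did pass through submitApplicationChannel.
    assertNotNull(intercepted.get());
}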
I am trying to write a Kafka consumer application in spring-kafka. I can think of two scenarios in which an error can occur:
While processing records, an exception can occur in Service layer ( while updating records through API in a table)
Deserialization error
I have already explored an option to handle scenario 1: I can just throw an exception in my code and handle it using SeekToCurrentErrorHandler.
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory();
factory.setConsumerFactory(consumerFactory());
factory.getContainerProperties().setAckOnError(false);
factory.getContainerProperties().setAckMode(AckMode.RECORD);
factory.setErrorHandler(new SeekToCurrentErrorHandler(new FixedBackOff(1000L, 2L)));
return factory;
}
For scenario 2, I have the option of ErrorHandlingDeserializer, but I am not sure how to use it together with SeekToCurrentErrorHandler. Is there a way to cover both scenarios using SeekToCurrentErrorHandler?
My consumer configuration is as below:
@Bean
public ConsumerFactory<String, String> consumerFactory(){
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,KAFKA_BROKERS);
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, OFFSET_RESET);
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class.getName());
props.put(ConsumerConfig.GROUP_ID_CONFIG, GROUP_ID_CONFIG);
props.put(KafkaAvroDeserializerConfig.SCHEMA_REGISTRY_URL_CONFIG, SCHEMA_REGISTRY_URL);
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, SSL_PROTOCOL);
props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG,SSL_TRUSTSTORE_LOCATION_FILE_NAME);
props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, SSL_TRUSTSTORE_SECURE);
props.put(SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG,SSL_KEYSTORE_LOCATION_FILE_NAME);
props.put(SslConfigs.SSL_KEYSTORE_PASSWORD_CONFIG, SSL_KEYSTORE_SECURE);
props.put(SslConfigs.SSL_KEY_PASSWORD_CONFIG, SSL_KEY_SECURE);
return new DefaultKafkaConsumerFactory<>(props);
}
Publishing error records:
I am also thinking of publishing error records to a dead letter queue. For scenario 1 it should retry and then publish to the dead letter queue; for scenario 2 it should publish directly, since there is no benefit in retrying. I may not have access to create a topic on my own and would need to ask my producers to create a topic for error records as well. How can I implement logic to publish records to a custom error topic, given that I have no control over the name if I use DeadLetterPublishingRecoverer? Based on my understanding, it uses a topic named <original_topic_name>.DLT.
The SeekToCurrentErrorHandler treats certain exceptions (such as DeserializationException) as fatal; they are not retried, and the failed record is immediately sent to the recoverer.
For retryable exceptions, the recoverer is called after retries are exhausted.
/**
* Add exception types to the default list. By default, the following exceptions will
* not be retried:
* <ul>
* <li>{@link DeserializationException}</li>
* <li>{@link MessageConversionException}</li>
* <li>{@link ConversionException}</li>
* <li>{@link MethodArgumentResolutionException}</li>
* <li>{@link NoSuchMethodException}</li>
* <li>{@link ClassCastException}</li>
* </ul>
* All others will be retried.
* @param exceptionTypes the exception types.
* @see #removeNotRetryableException(Class)
* @see #setClassifications(Map, boolean)
*/
public final void addNotRetryableExceptions(Class<? extends Exception>... exceptionTypes) {
Based on my understanding, it creates topic with <original_topic_name>.DLT.
That is the default behavior; you can provide your own DLT topic name strategy (destination resolver).
See the documentation.
The following example shows how to wire a custom destination resolver:
DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(template,
(r, e) -> {
if (e instanceof FooException) {
return new TopicPartition(r.topic() + ".Foo.failures", r.partition());
}
else {
return new TopicPartition(r.topic() + ".other.failures", r.partition());
}
});
ErrorHandler errorHandler = new SeekToCurrentErrorHandler(recoverer, new FixedBackOff(0L, 2L));
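For scenario 2, a minimal sketch of the ErrorHandlingDeserializer wiring (assuming spring-kafka 2.5+; in 2.2-2.4 the class is named ErrorHandlingDeserializer2) is to wrap the Avro deserializer in the consumer properties, so a deserialization failure reaches the SeekToCurrentErrorHandler as a DeserializationException and goes straight to the recoverer without retries. These two lines would replace the value deserializer entry in the consumerFactory() shown in the question:

// Delegate to KafkaAvroDeserializer; on failure, the exception is carried to the
// container's error handler as a DeserializationException instead of failing the poll loop.
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer.class.getName());
props.put(ErrorHandlingDeserializer.VALUE_DESERIALIZER_CLASS, KafkaAvroDeserializer.class.getName());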
public static void main(String[] args) throws InterruptedException {
AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext();
ctx.register(Main.class);
ctx.refresh();
DirectChannel channel1 = ctx.getBean("channel1", DirectChannel.class);
ctx.getBean("channel2", PublishSubscribeChannel.class).subscribe(message ->
System.out.println("Output: " + message));
channel1.send(MessageBuilder.withPayload("p1")
.setHeader(CORRELATION_ID, 1)
.setHeader(SEQUENCE_SIZE,2)
.setHeader(SEQUENCE_NUMBER,1)
.setHeader("a", 1)
.build());
channel1.send(MessageBuilder.withPayload("p2")
.setHeader(CORRELATION_ID, 1)
.setHeader(SEQUENCE_SIZE,2)
.setHeader(SEQUENCE_NUMBER,2)
.setHeader("a", 2)
.build());
}
@Bean
public MessageChannel channel1() {
return MessageChannels.direct().get();
}
@Bean
public MessageChannel channel2() {
return MessageChannels.publishSubscribe().get();
}
@Bean
public IntegrationFlow flow1() {
return IntegrationFlows
.from("channel1")
.aggregate(a -> a
.releaseStrategy(new SequenceSizeReleaseStrategy())
.expireGroupsUponCompletion(true)
.sendPartialResultOnExpiry(true))
.channel("channel2")
.get();
}
Output: GenericMessage [payload=[p1, p2], headers={sequenceNumber=2, a=2, correlationId=1, id=b5e51041-c967-1bb4-1601-7e468ae28527, sequenceSize=2, timestamp=1580475773518}]
Headers "a" and "sequenceNumber" were overwritten.
How can I aggregate messages while keeping these headers? I want the output to be:
Output: GenericMessage [payload=[p1, p2], headers={sequenceNumber=[1,2], a=[1, 2], correlationId=1, id=b5e51041-c967-1bb4-1601-7e468ae28527, sequenceSize=2, timestamp=1580475773518}]
See AbstractAggregatingMessageGroupProcessor:
/**
* Specify a {@link Function} to map {@link MessageGroup} into composed headers for output message.
* @param headersFunction the {@link Function} to use.
* @since 5.2
*/
public void setHeadersFunction(Function<MessageGroup, Map<String, Object>> headersFunction) {
and also:
/**
* The {@link Function} implementation for a default headers merging in the aggregator
* component. It takes all the unique headers from all the messages in group and removes
* those which are conflicted: have different values from different messages.
*
* @author Artem Bilan
*
* @since 5.2
*
* @see AbstractAggregatingMessageGroupProcessor
*/
public class DefaultAggregateHeadersFunction implements Function<MessageGroup, Map<String, Object>> {
Or just long existing:
/**
* This default implementation simply returns all headers that have no conflicts among the group. An absent header
* on one or more Messages within the group is not considered a conflict. Subclasses may override this method with
* more advanced conflict-resolution strategies if necessary.
* @param group The message group.
* @return The aggregated headers.
*/
protected Map<String, Object> aggregateHeaders(MessageGroup group) {
So, what you need in your aggregate() configuration is an outputProcessor(MessageGroupProcessor outputProcessor) option.
See docs for more info: https://docs.spring.io/spring-integration/docs/5.2.3.RELEASE/reference/html/message-routing.html#aggregatingmessagehandler
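For example, a minimal sketch of the flow from the question (setHeadersFunction() requires Spring Integration 5.2, per the javadoc above; the headers function here is purely illustrative and only collects the conflicting "a" header into a list):

@Bean
public IntegrationFlow flow1() {
    DefaultAggregatingMessageGroupProcessor processor = new DefaultAggregatingMessageGroupProcessor();
    // Collect the values of header "a" from all messages in the group instead of dropping the conflict.
    processor.setHeadersFunction(group -> {
        Map<String, Object> headers = new HashMap<>();
        headers.put("a", group.getMessages()
                .stream()
                .map(m -> m.getHeaders().get("a"))
                .collect(Collectors.toList()));
        return headers;
    });
    return IntegrationFlows
            .from("channel1")
            .aggregate(a -> a
                    .outputProcessor(processor)
                    .releaseStrategy(new SequenceSizeReleaseStrategy())
                    .expireGroupsUponCompletion(true)
                    .sendPartialResultOnExpiry(true))
            .channel("channel2")
            .get();
}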
Context:
I'm using spring-retry to retry restTemplate calls.
The restTemplate calls are made from a Kafka listener.
The Kafka listener is also configured to retry on error (if any exception is thrown during processing, not only by the restTemplate call).
Goal:
I'd like to prevent Kafka from retrying when the error comes from a retry template whose retries have been exhausted.
Actual behavior:
When the retryTemplate exhausts all retries, the original exception is thrown, which prevents me from identifying whether the error was already retried by the retryTemplate.
Desired behavior:
When the retryTemplate exhausts all retries, wrap the original exception in a RetryExhaustedException, which will allow me to blacklist it from Kafka retries.
Question:
How can I do something like this?
Thanks
Edit
RetryTemplate configuration :
RetryTemplate retryTemplate = new RetryTemplate();
FixedBackOffPolicy backOffPolicy = new FixedBackOffPolicy();
backOffPolicy.setBackOffPeriod(1000);
retryTemplate.setBackOffPolicy(backOffPolicy);
Map<Class<? extends Throwable>, Boolean> retryableExceptions = new HashMap<>();
retryableExceptions.put(FunctionalException.class, false);
SimpleRetryPolicy retryPolicy = new SimpleRetryPolicy(3, retryableExceptions, true, true);
retryTemplate.setRetryPolicy(retryPolicy);
retryTemplate.setThrowLastExceptionOnExhausted(false);
Kafka ErrorHandler
public class DefaultErrorHandler implements ErrorHandler {
@Override
public void handle(Exception thrownException, ConsumerRecord<?, ?> data) {
Throwable exception = Optional.ofNullable(thrownException.getCause()).orElse(thrownException);
// TODO if exception as been retried in a RetryTemplate, stop it to prevent rollback and send it to a DLQ
// else rethrow exception, it will be rollback and handled by AfterRollbackProcessor to be retried
throw new KafkaException("Could not handle exception", thrownException);
}
}
Listener kafka :
@KafkaListener
public void onMessage(ConsumerRecord<String, String> record) {
retryTemplate.execute((args) -> {
throw new RuntimeException("Should be caught by ErrorHandler to prevent rollback");
});
throw new RuntimeException("Should be retried by afterRollbackProcessor");
}
Simply configure the listener retry template with a SimpleRetryPolicy that is configured to classify RetryExhaustedException as not retryable.
Be sure to set the traverseCauses property to true since the container wraps all listener exceptions in ListenerExecutionFailedException.
/**
* Create a {@link SimpleRetryPolicy} with the specified number of retry
* attempts. If traverseCauses is true, the exception causes will be traversed until
* a match is found. The default value indicates whether to retry or not for exceptions
* (or super classes) are not found in the map.
*
* @param maxAttempts the maximum number of attempts
* @param retryableExceptions the map of exceptions that are retryable based on the
* map value (true/false).
* @param traverseCauses is this clause traversable
* @param defaultValue the default action.
*/
public SimpleRetryPolicy(int maxAttempts, Map<Class<? extends Throwable>, Boolean> retryableExceptions,
boolean traverseCauses, boolean defaultValue) {
EDIT
Use
template.execute((args) -> {...}, (context) -> { throw new Blah(context.getLastThrowable()); });
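Putting it together, a rough sketch of the idea (RetryExhaustedException is an application-defined unchecked exception here, not something provided by Spring; restTemplate and url are assumed fields from the question's setup):

// Application-defined wrapper; unchecked so it propagates from the listener without a throws clause.
public class RetryExhaustedException extends RuntimeException {
    public RetryExhaustedException(Throwable cause) {
        super(cause);
    }
}

@KafkaListener
public void onMessage(ConsumerRecord<String, String> record) {
    retryTemplate.execute(
            context -> {
                // the call being retried by spring-retry
                return restTemplate.postForEntity(url, record.value(), Void.class);
            },
            context -> {
                // retries exhausted: wrap the original failure so the container can classify it
                throw new RetryExhaustedException(context.getLastThrowable());
            });
}

// Container-level retry policy: never retry RetryExhaustedException; traverseCauses = true
// because the container wraps listener exceptions in ListenerExecutionFailedException.
Map<Class<? extends Throwable>, Boolean> retryable = new HashMap<>();
retryable.put(RetryExhaustedException.class, false);
SimpleRetryPolicy containerRetryPolicy = new SimpleRetryPolicy(3, retryable, true, true);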
I am building a Kafka consumer. I have set the recovery callback as shown below and enabled manual commit. How can I acknowledge the message in the recovery callback method so that there is no lag?
@Bean
public ConcurrentKafkaListenerContainerFactory<String, Map<String, Object>> kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<String, Map<String, Object>> factory = new ConcurrentKafkaListenerContainerFactory<>();
factory.setConcurrency(conncurrency);
factory.setConsumerFactory(consumerFactory());
factory.setRetryTemplate(retryTemplate());
factory.setRecoveryCallback(new RecoveryCallback<Object>() {
@Override
public Object recover(RetryContext context) throws Exception {
// TODO Auto-generated method stub
logger.debug(" In recovery callback method !!");
return null;
}
});
factory.getContainerProperties().setAckMode(AckMode.MANUAL);
return factory;
}
/*
* Retry template.
*/
protected RetryPolicy retryPolicy() {
SimpleRetryPolicy policy = new SimpleRetryPolicy(maxRetryAttempts, retryableExceptions);
return policy;
}
protected BackOffPolicy backOffPolicy() {
ExponentialBackOffPolicy policy = new ExponentialBackOffPolicy();
policy.setInitialInterval(initialRetryInterval);
policy.setMultiplier(retryMultiplier);
return policy;
}
protected RetryTemplate retryTemplate() {
RetryTemplate template = new RetryTemplate();
template.setRetryPolicy(retryPolicy());
template.setBackOffPolicy(backOffPolicy());
return template;
}
}
Your question is too broad; you need to be more specific.
The framework makes no assumptions about what you should do when retries are exhausted during consumption errors.
I think you should start with the Spring Retry project to understand what the RecoveryCallback is and how it works:
If the business logic does not succeed before the template decides to abort, then the client is given the chance to do some alternate processing through the recovery callback.
A RetryContext has:
/**
* Accessor for the exception object that caused the current retry.
*
* @return the last exception that caused a retry, or possibly null. It will be null
* if this is the first attempt, but also if the enclosing policy decides not to
* provide it (e.g. because of concerns about memory usage).
*/
Throwable getLastThrowable();
Spring Kafka also populates additional attributes into that RetryContext for use in the RecoveryCallback: https://docs.spring.io/spring-kafka/docs/2.0.0.RELEASE/reference/html/_reference.html#_retrying_deliveries
The contents of the RetryContext passed into the RecoveryCallback will depend on the type of listener. The context will always have an attribute record which is the record for which the failure occurred. If your listener is acknowledging and/or consumer aware, additional attributes acknowledgment and/or consumer will be available. For convenience, the RetryingAcknowledgingMessageListenerAdapter provides static constants for these keys. See its javadocs for more information.
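For example, a minimal sketch of the recovery callback from the question (the "acknowledgment" attribute is only populated when the listener method takes an Acknowledgment, as described in the quoted documentation; the attribute key is written out literally here rather than via the adapter's constants):

factory.setRecoveryCallback(new RecoveryCallback<Object>() {

    @Override
    public Object recover(RetryContext context) throws Exception {
        logger.debug(" In recovery callback method !!");
        // Populated by the retrying adapter when the listener is an acknowledging listener.
        Acknowledgment acknowledgment = (Acknowledgment) context.getAttribute("acknowledgment");
        if (acknowledgment != null) {
            acknowledgment.acknowledge(); // commit the offset so the recovered record leaves no lag
        }
        return null;
    }
});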