I am trying to implement a DLQ using Spring Cloud Stream with batch mode enabled:
@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> customizer(BatchErrorHandler handler) {
    return (container, destinationName, group) -> {
        if (dlqEnabledTopic.contains(destinationName)) {
            container.setBatchErrorHandler(handler);
        }
    };
}
@Bean
public BatchErrorHandler batchErrorHandler(KafkaOperations<String, byte[]> kafkaOperations) {
    CustomDeadLetterPublishingRecoverer recoverer = new CustomDeadLetterPublishingRecoverer(kafkaOperations,
            (cr, e) -> new TopicPartition(cr.topic() + "_dlq", cr.partition()));
    return new RecoveringBatchErrorHandler(recoverer, new FixedBackOff(1000L, 1));
}
but I have a few queries:
1. How do I configure the key/value serializer using properties? My message is a String, but the KafkaOperations is using a ByteArraySerializer.
2. There are multiple messages in the batch, but if the first message fails it goes to the DLQ and I don't see the next messages being processed. The requirement: if the batch fails at any index, only that message should be sent to the DLQ and the rest of the messages should be processed again.
3. Is DLQ supported with batch mode now, just as it can be enabled using properties in record mode?
1. Use spring.kafka.producer.* properties. However, the DLT publishing should use the same serializers as the main stream app; ByteArraySerializer is generally correct, because the consumed record's value is already a byte[] at that point.
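For example (a sketch using standard Boot auto-configuration properties, matching the KafkaOperations<String, byte[]> above):

spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.ByteArraySerializer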
2. The recovering batch error handler will perform seeks for the unprocessed records and they will be redelivered. Debug logging should help you figure out what's wrong; if you can't figure it out, provide an MCRE (minimal, complete, reproducible example) that exhibits the behavior you are seeing.
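One common cause, as a minimal sketch (assuming a functional batch listener; process() is a placeholder for your business logic): with RecoveringBatchErrorHandler the listener needs to throw a BatchListenerFailedException carrying the index of the failed record, so the handler knows which record to send to the DLQ and which records to seek and redeliver.

@Bean
public Consumer<List<String>> consume() {
    return messages -> {
        for (int i = 0; i < messages.size(); i++) {
            try {
                process(messages.get(i)); // placeholder for your business logic
            }
            catch (Exception e) {
                // the index tells the error handler that only this record should be
                // recovered (sent to the DLQ); the records after it are redelivered
                throw new BatchListenerFailedException("Failed at index " + i, e, i);
            }
        }
    };
}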
3. No; the binder does not support DLQ for batch mode; configuring the error handler is the correct approach.
I am using Spring Boot 2.7.1 with the Spring Cloud Stream Kafka binder 2.8.5 for processing Kafka messages. I have a functional-style consumer that consumes messages in batches. Right now it retries 10 times and then commits the offsets of the errored records.
I now want to introduce a mechanism that retries a certain number of times (which works using the error handler below), then stops processing messages and fails the entire batch without auto-committing the offsets. I read through the documentation and understand that CommonContainerStoppingErrorHandler can be used to stop the container from consuming messages.
My handler looks like this now, and it retries exponentially:
@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer<String, Message>> errorHandler() {
    return (container, destinationName, group) -> {
        container.getContainerProperties().setAckMode(ContainerProperties.AckMode.BATCH);
        ExponentialBackOffWithMaxRetries backOffWithMaxRetries = new ExponentialBackOffWithMaxRetries(2);
        backOffWithMaxRetries.setInitialInterval(1);
        backOffWithMaxRetries.setMultiplier(2.0);
        backOffWithMaxRetries.setMaxInterval(5);
        container.setCommonErrorHandler(new DefaultErrorHandler(backOffWithMaxRetries));
    };
}
How do I chain a CommonContainerStoppingErrorHandler with the error handler above, so that the failed batch is not committed and is replayed upon restart?
And with a BatchListenerFailedException thrown from the consumer, is it possible to fail the entire batch (including valid records that come before the problematic record in that batch)?
Add a custom recoverer to the error handler - see this answer for an example: How do you exit spring boot application programmatically when retries are exhausted, to prevent kafka offset commit
No; records before the failed one will have their offsets committed.
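A minimal sketch of the recoverer approach, building on your customizer (the recoverer body is illustrative: stopping on a separate thread is an assumption, since calling stop() from the consumer thread itself would block, and the rethrow prevents the offsets of the failed batch from being committed):

@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer<String, Message>> errorHandler() {
    return (container, destinationName, group) -> {
        container.getContainerProperties().setAckMode(ContainerProperties.AckMode.BATCH);
        ExponentialBackOffWithMaxRetries backOffWithMaxRetries = new ExponentialBackOffWithMaxRetries(2);
        backOffWithMaxRetries.setInitialInterval(1);
        backOffWithMaxRetries.setMultiplier(2.0);
        backOffWithMaxRetries.setMaxInterval(5);
        container.setCommonErrorHandler(new DefaultErrorHandler((record, ex) -> {
            // retries are exhausted: stop the container on another thread and rethrow,
            // so nothing is committed and the batch is replayed after a restart
            new Thread(container::stop).start();
            throw new IllegalStateException("Stopping container; batch will be replayed", ex);
        }, backOffWithMaxRetries));
    };
}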
I've been trying to implement retry logic for Spring Cloud Stream Kafka such that if an exception is thrown while processing an event from the topic sample-topic, it retries two more times.
I added the following configuration to the application.properties file:
spring.cloud.stream.bindings.processSampleEvent.destination=sample-topic
spring.cloud.stream.bindings.processSampleEvent.content-type=application/json
spring.cloud.stream.bindings.processSampleEvent.consumer.maxAttempts=2
I've written the listener code in a way that it simply logs the received message and throws a NullPointerException, so that I can test the retry.
@StreamListener(ListenerBind.SAMPLE_CHANNEL)
public void processSampleEvent(String productEventDto) {
    System.out.println("Entering listener: " + productEventDto);
    throw new NullPointerException();
}
But when testing by producing an event to sample-topic, I see in the logs that the event has been retried 20 times, even though I specified in the properties to try only two times; and a weird thing happens when I change it to 3: it retries 30 times.
I'm pretty new to Spring Cloud Stream and any help on this would be really helpful.
Thanks in advance 😊
The default error handler in the listener container is now a SeekToCurrentErrorHandler with 10 delivery attempts. The binder's retry template multiplies with the container's retries, which is why you see maxAttempts × 10 deliveries (2 × 10 = 20, 3 × 10 = 30).
You can either disable the retries in the binder and configure a STCEH with the retry semantics you want, or use retries in the binder and replace the default error handler with a simple LoggingErrorHandler.
To configure the container's error handler, add a ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> @Bean.
I faced the same problem.
My working solution was to create a ListenerContainerCustomizer bean, give it the desired number of max attempts, and set the consumer binding's maxAttempts to 1:
@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> listenerContainerCustomizer() {
    return (container, dest, group) ->
            container.setErrorHandler(containerAwareErrorHandler());
}

public SeekToCurrentErrorHandler containerAwareErrorHandler() {
    return new SeekToCurrentErrorHandler(new FixedBackOff(0L, maxAttempts - 1));
}
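For completeness, the matching binder setting so the binder's retry template does not multiply with the container's retries (the binding name is a placeholder):

spring.cloud.stream.bindings.<binding-name>.consumer.maxAttempts=1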
The snippet below shows the function; please suggest how to send the record to different topics based on whether it errored or not.
public Function<KStream<String, ?>, KStream<String, ?>> process() {
    return input -> input.map((key, value) -> {
        try {
            // logic of function here
        }
        catch (Exception e) {
            // How do I send to a different topic from here??
        }
        return new KeyValue<>(key, value);
    });
}
Set the Kafka consumer binding's enableDlq option to true; when the listener throws an exception, the record is sent to the dead letter topic after retries are exhausted. If you want to fail immediately instead, set the consumer binding's maxAttempts property to 1 (the default is 3).
See the documentation.
enableDlq
When set to true, it enables DLQ behavior for the consumer. By default, messages that result in errors are forwarded to a topic named error.<destination>.<group>. The DLQ topic name can be configured by setting the dlqName property or by defining a @Bean of type DlqDestinationResolver. This provides an alternative option to the more common Kafka replay scenario for the case when the number of errors is relatively small and replaying the entire original topic may be too cumbersome. See Dead-Letter Topic Processing for more information. Starting with version 2.0, messages sent to the DLQ topic are enhanced with the following headers: x-original-topic, x-exception-message, and x-exception-stacktrace as byte[]. By default, a failed record is sent to the same partition number in the DLQ topic as the original record. See Dead-Letter Topic Partition Selection for how to change that behavior. Not allowed when destinationIsPattern is true.
Default: false.
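If you would rather route records yourself from inside the function, one option (a sketch: transform() and the branch names are placeholders, and it assumes the Kafka Streams split()/Branched API with two output bindings) is to return an array of KStreams and let the binder send each branch to its own destination:

public Function<KStream<String, String>, KStream<String, String>[]> process() {
    return input -> {
        Map<String, KStream<String, String>> branches = input
                .mapValues(value -> {
                    try {
                        return transform(value); // placeholder for the function's logic
                    }
                    catch (Exception e) {
                        return null; // mark this record as failed
                    }
                })
                .split(Named.as("route-"))
                .branch((key, value) -> value != null, Branched.as("ok"))
                .defaultBranch(Branched.as("error"));
        // with two output bindings (process-out-0, process-out-1), each array
        // element is sent to its own destination topic
        return new KStream[] { branches.get("route-ok"), branches.get("route-error") };
    };
}

Note that in this sketch failed records end up with a null value on the error branch; in practice you would wrap the original value and the error in an envelope instead.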
I'm trying to implement a CloudEvents demo.
I have a few Spring Boot services with RabbitMQ as a message bus; they all send messages to a queue and one listens for the queue's messages.
I am trying to wrap my messages as CloudEvents to make them more standard.
I use the following code to wrap the message (data) as a CloudEvent:
try {
    inputEvent = CloudEventBuilder.v1()
            .withSource(new URI("app://" + messageData.getChangeRequestId().toString()))
            .withDataContentType("application/json")
            .withId(messageData.myId().toString())
            .withType("com.data.BaseMessageData")
            .withData(objMapper.writeValueAsBytes(eventData))
            .build();
}
catch (Exception e) {
    throw new MyMessagingException("Failed to convert the message into json. (See inner exception for further details)", e);
}
The data is converted to bytes, since CloudEventData is byte-based.
Of course, in my listener method I get an exception, since SimpleMessageConverter can't handle a byte array.
Now, I could try to implement some custom message converter, or check out the suggested CloudEvents AMQP binding solution, but I'm not keen on the amount of code it involves and I don't want to involve more technologies if not absolutely necessary.
Should I go down this path and implement a custom message converter?
Is there any other standard solution for standardizing service messaging over queues?
You will need a custom message converter; but see this blog post:
https://spring.io/blog/2020/12/10/cloud-events-and-spring-part-1
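For reference, a minimal sketch of such a converter, assuming the CloudEvents Java SDK's JSON event format (the cloudevents-json-jackson module); the class name and wiring are illustrative:

import io.cloudevents.CloudEvent;
import io.cloudevents.core.format.EventFormat;
import io.cloudevents.core.provider.EventFormatProvider;
import io.cloudevents.jackson.JsonFormat;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.MessageProperties;
import org.springframework.amqp.support.converter.MessageConverter;

public class CloudEventMessageConverter implements MessageConverter {

    private final EventFormat format =
            EventFormatProvider.getInstance().resolveFormat(JsonFormat.CONTENT_TYPE);

    @Override
    public Message toMessage(Object object, MessageProperties properties) {
        // serialize the whole CloudEvent (attributes + data) into the message body
        properties.setContentType(JsonFormat.CONTENT_TYPE);
        return new Message(format.serialize((CloudEvent) object), properties);
    }

    @Override
    public Object fromMessage(Message message) {
        // rebuild the CloudEvent from the body on the listener side
        return format.deserialize(message.getBody());
    }
}

Register an instance on both the RabbitTemplate and the listener container factory so both sides agree on the wire format.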
I am using reactor-kafka to send Kafka messages and to receive and process them.
While receiving the Kafka payload, I do some deserialization, and if there is an exception, I want to just log that payload (by saving it to Mongo) and then continue receiving other payloads.
For this I am using the approach below:
@EventListener(ApplicationStartedEvent.class)
public void kafkaReceiving() {
    for (Flux<ReceiverRecord<String, Object>> flux : kafkaService.getFluxReceives()) {
        flux.delayUntil(r -> process(r)) // some function to do something
            .doOnNext(r -> r.receiverOffset().acknowledge())
            .onErrorResume(this::handleException) // here I'll just save to mongo
            .subscribe();
    }
}

private Publisher<? extends ReceiverRecord<String, Object>> handleException(Throwable ex) {
    // save to mongo
    return Flux.empty();
}
Here I expect that whenever I encounter an exception while receiving a payload, onErrorResume should catch it and log to Mongo, and then I should be able to continue receiving more messages when I send to the Kafka topic. However, I see that after the exception, even though the onErrorResume method gets invoked, I am not able to process any more messages sent to the Kafka topic.
Is there anything I might be missing here?
If you need to handle the error gracefully, you can add onErrorResume inside delayUntil:
flux
    .delayUntil(r -> {
        return process(r)
                .onErrorResume(e -> saveToMongo(r)); // resume with a fallback publisher
    })
    .doOnNext(r -> r.receiverOffset().acknowledge())
    .subscribe();
Reactive operators treat an error as a terminal signal, and, if your inner logic (inside delayUntil) throws an error, delayUntil will terminate the sequence; an onErrorResume placed after delayUntil will not make it continue processing events from Kafka.
As mentioned by @bsideup too, I ultimately went ahead with not throwing an exception from the deserializer. Kafka is not able to commit the offset for that record, and there is no clean way of ignoring the record and proceeding with further consumption, since we don't have the offset information of the malformed record. So even if I try to ignore the record using reactive error operators, the poll fetches the same record again and the consumer gets stuck.
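For anyone hitting the same wall, the "don't throw from the deserializer" idea can look like the sketch below (the delegate and the null marker are illustrative; spring-kafka's ErrorHandlingDeserializer is a ready-made variant of the same idea):

import java.util.Map;
import org.apache.kafka.common.serialization.Deserializer;

public class SafeDeserializer<T> implements Deserializer<T> {

    private final Deserializer<T> delegate;

    public SafeDeserializer(Deserializer<T> delegate) {
        this.delegate = delegate;
    }

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
        delegate.configure(configs, isKey);
    }

    @Override
    public T deserialize(String topic, byte[] data) {
        try {
            return delegate.deserialize(topic, data);
        }
        catch (Exception e) {
            // swallow the error so the record still reaches the consumer with an
            // offset; the null value can then be logged/saved and acknowledged
            return null;
        }
    }
}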