In the functional model of Spring Cloud Stream and Kafka, how can I send to another topic (an error topic) in case an exception occurs?

The snippet below shows the function; please suggest how to send data to different topics based on whether an error occurred.
public Function<KStream<String, ?>, KStream<String, ?>> process() {
    return input -> input.map((key, value) -> {
        try {
            // logic of the function here
        } catch (Exception e) {
            // How do I send to a different topic from here??
        }
        return new KeyValue<>(key, value);
    });
}

Set the Kafka consumer binding's enableDlq option to true; when the listener throws an exception, the record is sent to the dead-letter topic after retries are exhausted. If you want to fail immediately, set the consumer binding's maxAttempts property to 1 (default is 3).
See the documentation.
enableDlq
When set to true, it enables DLQ behavior for the consumer. By default, messages that result in errors are forwarded to a topic named error.<destination>.<group>. The DLQ topic name can be configured by setting the dlqName property or by defining a @Bean of type DlqDestinationResolver. This provides an alternative option to the more common Kafka replay scenario for the case when the number of errors is relatively small and replaying the entire original topic may be too cumbersome. See Dead-Letter Topic Processing for more information. Starting with version 2.0, messages sent to the DLQ topic are enhanced with the following headers: x-original-topic, x-exception-message, and x-exception-stacktrace as byte[]. By default, a failed record is sent to the same partition number in the DLQ topic as the original record. See Dead-Letter Topic Partition Selection for how to change that behavior. Not allowed when destinationIsPattern is true.
Default: false.
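For the functional model in the question, the binding name is derived from the bean name (process-in-0 for the function's input). A minimal sketch of the suggested configuration, assuming the regular message-channel Kafka binder (not the Kafka Streams binder) and a made-up DLQ topic name:

spring.cloud.stream.kafka.bindings.process-in-0.consumer.enableDlq=true
# optional: override the default error.<destination>.<group> DLQ topic name
spring.cloud.stream.kafka.bindings.process-in-0.consumer.dlqName=process-errors
# fail immediately instead of retrying three times
spring.cloud.stream.bindings.process-in-0.consumer.maxAttempts=1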

Related

Implementing DLQ in Kafka using Spring Cloud Stream with Batch mode enabled

I am trying to implement a DLQ using Spring Cloud Stream with batch mode enabled:
@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> customizer(BatchErrorHandler handler) {
    return (container, destinationName, group) -> {
        if (dlqEnabledTopic.contains(destinationName)) {
            container.setBatchErrorHandler(handler);
        }
    };
}

@Bean
public BatchErrorHandler batchErrorHandler(KafkaOperations<String, byte[]> kafkaOperations) {
    CustomDeadLetterPublishingRecoverer recoverer = new CustomDeadLetterPublishingRecoverer(kafkaOperations,
            (cr, e) -> new TopicPartition(cr.topic() + "_dlq", cr.partition()));
    return new RecoveringBatchErrorHandler(recoverer, new FixedBackOff(1000, 1));
}
but I have a few queries:
1. How do I configure the key/value serializer using properties? My message is of type String, but KafkaOperations is using ByteArraySerializer.
2. There are multiple messages in the batch, but if the first message fails it goes to the DLQ and I don't see the next messages being processed. Requirement: if the batch fails at any index, only that message should be sent to the DLQ and the rest of the messages should be processed again.
3. Is DLQ supported with batch mode now, just as it can be enabled with properties in record mode?
1. Use the spring.kafka.producer.* properties; however, the DLT publishing should use the same serializers as the main stream app, and ByteArraySerializer is generally correct.
2. The recovering batch error handler will perform seeks for the unprocessed records and they will be redelivered on the next poll. Debug logging should help you figure out what's wrong. If you can't figure it out, provide an MCRE that exhibits the behavior you are seeing.
3. No; the binder does not support DLQ for batch mode. Configuring the error handler is the correct approach.
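For the serializer question, these are the standard Boot properties that feed the auto-configured KafkaOperations (keeping ByteArraySerializer, as the answer suggests):

spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.ByteArraySerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.ByteArraySerializer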

What is the best way to handle @SqsListener processing failure in Spring Boot?

We have implemented an SqsListener as the documentation suggests; the Spring Cloud AWS docs describe the best way to receive AWS SQS messages:
"There are two ways for receiving SQS messages: either use the receive methods of the QueueMessagingTemplate, or use annotation-driven listener endpoints. The latter is by far the more convenient way to receive messages."
Everything is working as expected. If the business process fails, we throw a runtime exception and the particular message is sent back to the SQS queue for retry. When the visibility timeout has passed, the message reappears for the worker to process.
Sample Code is here:
@SqsListener(value = "sample-standard-queue", deletionPolicy = SqsMessageDeletionPolicy.ON_SUCCESS)
public void receiveMessage(String message) {
    log.info("Message Received **************************** " + message);
    log.info("After Conversion" + new JSONObject(message).getString("payload"));
    throw new RuntimeException("An exception was thrown during the execution of the SQS listener method and Message will be still available in Queue");
}
But there are some examples where Acknowledgment is used instead of throwing a runtime exception, which the documentation doesn't suggest.
Which is the best way to deal with a business-logic failure scenario? Is Acknowledgment necessary?
Thanks in advance.
One way is to keep track of the messages being processed in some RDS table. If a message gets retried, increase its retry count in the table.
Configure the number of times you want to retry a particular message; once that is exceeded, you may want to move it to a dead-letter queue, or log it and simply discard it.
There can be multiple ways of handling it. One way can be:
@SqsListener(value = "sample-standard-queue", deletionPolicy = SqsMessageDeletionPolicy.ON_SUCCESS)
public void receiveMessage(String message) {
    try {
        log.info("Message Received **************************** " + message);
        log.info("After Conversion" + new JSONObject(message).getString("payload"));
    } catch (Exception e) {
        // check whether its retry count has been exhausted
        // if exhausted - acknowledge it (push it into the dead-letter queue) and don't throw the exception
        // if not exhausted - increase the retry count in the table before throwing the exception
        throw new RuntimeException("An exception was thrown during the execution of the SQS listener method and Message will be still available in Queue");
    }
}
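A rough sketch of the bookkeeping those comments describe; retryStore, MAX_RETRIES, the payload's id field, and queueMessagingTemplate are hypothetical placeholders rather than any library contract:

@SqsListener(value = "sample-standard-queue", deletionPolicy = SqsMessageDeletionPolicy.ON_SUCCESS)
public void receiveMessage(String message) {
    String messageId = new JSONObject(message).getString("id"); // assumes the payload carries an id
    try {
        process(message); // business logic
    } catch (Exception e) {
        int attempts = retryStore.incrementAndGet(messageId); // e.g. backed by the RDS table
        if (attempts >= MAX_RETRIES) {
            // exhausted: hand off to a DLQ and return normally so ON_SUCCESS deletes the message
            queueMessagingTemplate.convertAndSend("sample-standard-queue-dlq", message);
            return;
        }
        // not exhausted: rethrow so the message reappears after the visibility timeout
        throw new RuntimeException("Attempt " + attempts + " failed; message stays on the queue", e);
    }
}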

JmsListener called again and again when an error happens in the method

In a Spring Boot application, I have a class with a JMS listener.
public class PaymentNotification {

    @JmsListener(destination = "payment")
    public void receive(String payload) throws Exception {
        // map/string conversion
        ....
        paymentEvent = billingService.insert(paymentEvent); // transactional method
        // call rest...
        billingService.save(paymentEvent);
        // send info to jms
    }
}
I saw that when an error happens, the data is inserted in the database; that's fine, but it's as if the receive method is called again and again, yet the queue is empty when I check on the server.
If there is an error, I don't want the method to be called again. Is there something for that?
The JMS Message Headers might contain additional information to help with your processing. In particular JMSRedelivered could be of some value. The Oracle doc states that "If a client receives a message with the JMSRedelivered field set, it is likely, but not guaranteed, that this message was delivered earlier but that its receipt was not acknowledged at that time."
I ran the following code to explore what was available in my configuration (Spring Boot with IBM MQ).
@JmsListener(destination = "DEV.QUEUE.1")
public void receive(Message message) throws Exception {
    for (Enumeration<String> e = message.getPropertyNames(); e.hasMoreElements();) {
        System.out.println(e.nextElement());
    }
}
From this I could see that JMSXDeliveryCount is available; it is mandatory in JMS 2.0. If that property is not available, you may well find something similar in your own configuration.
One strategy would be to use JMSXDeliveryCount, a vendor-specific property, or maybe JMSRedelivered (if suitable for your needs) as a check before you process the message. Typically, the message would be sent to a specific backout queue when the redelivery count exceeds a set threshold.
Depending on the messaging provider you are using, it might also be possible to configure backout-queue processing as properties of the queue.
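Putting that together, a minimal sketch of the delivery-count check; the threshold, the backout destination name, and the injected jmsTemplate are assumptions:

@JmsListener(destination = "payment")
public void receive(Message message) throws JMSException {
    // JMSXDeliveryCount is 1 on the first delivery and increments on each redelivery
    int deliveryCount = message.getIntProperty("JMSXDeliveryCount");
    if (deliveryCount > 3) {
        // give up: park the message on a backout queue for later inspection
        jmsTemplate.convertAndSend("payment.backout", message);
        return;
    }
    // ...normal processing; throwing here triggers another redelivery
}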

How to do event-driven microservices with Quarkus and SmallRye correctly

Dears,
I am trying to do some kind of event-driven microservices. Currently, I am able to consume a message from Kafka and update a database record when a message is received, using Quarkus and the SmallRye Reactive Messaging extension. What I want to achieve further is to send a message to another topic on success and a message to an error topic otherwise. I know that we can use a return value and the @Outgoing annotation to emit a new message, but I don't think it fits my use case. I need some guidance here: if an error happens while consuming a message, should I return the message to the original topic (by not acknowledging it), or should I consume it and produce an error message to a different topic to roll back the original transaction?
Here is my code :
#Incoming("new-payment")
public void newMessage(String msg) {
LOG.info("New payment has been received.");
LOG.info("Payload is {}", msg);
PaymentEvent pe = jsob.fromJson(msg, PaymentEvent.class);
mysqlPool.preparedQuery("select totalBuyers from Book where isbn = ? ",
Tuple.of(pe.getIsbn()))
.thenApply(rs -> {
RowIterator<Row> iterator = rs.iterator();
if (iterator.hasNext()) {
return iterator.next().getInteger(0) + 1;
} else {
return Integer.valueOf(0);
}
})
.thenApply(totalCount -> {
return mysqlPool.preparedQuery("update Book set totalBuyers = ?",
Tuple.of(totalCount));
})
.whenComplete((rs, err) -> {
if (err != null) {
//Emit an error to error topic.
} else {
//Emit a msg to other service.
}
});
}
Also, if you have better code, please share it; I am still a newbie in reactive programming :).
I've been doing enterprise integration for years and I think that you would want to do both.
"Should I return the message to the original topic (by not acknowledging the message) or should I consume it and produce an error message to a different topic to roll back the original transaction?"
The event should remain on the topic for another instance to potentially pick up and process, and an error message should be logged as an event. Perhaps the same consumer could pick up and reprocess the event successfully.
An EDA (Event-Driven Architecture) may offer different ways to handle this, but on an ESB the message would be marked as tried; generally, three attempts would send it to a dead-letter queue so that it can be corrected and reprocessed later.
Our enterprise is also starting to design and build applications using EDA, so I am interested to read what others have to say on this question. And KUDOS to you for focusing on Quarkus. I believe this is one of the best technologies to come from Red Hat that I have seen yet!
Another problem with this approach is that you are doing “2 writes in 1 service”, e.g. one call to the db and another to a topic, and this can become problematic when one of the 2 writes fails.
If you want to avoid this and use a pure event driven approach, then you need to reorder your events in such a way that writing to a db is the last event in the whole flow so that you can prevent 2 writes from 1 service.
Thus in your case: change the 2nd thenApply(..) method from updating the db into firing a new event to another topic. And the consumer of this new topic should do the db update. Thus the flow becomes like this:
Producer -> topic1 -> consumer (select from ...) & fire event to another topic -> topic2 -> consumer (update table).
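If you go the explicit route instead, SmallRye Reactive Messaging lets you inject an Emitter for an outgoing channel and call it from the whenComplete callback. A minimal sketch, where the payment-errors and payment-processed channel names are assumptions that would need to be mapped to Kafka topics in application.properties:

@Inject
@Channel("payment-errors")
Emitter<String> errorEmitter;

@Inject
@Channel("payment-processed")
Emitter<String> successEmitter;

// then, in the pipeline above, replace the whenComplete block with:
.whenComplete((rs, err) -> {
    if (err != null) {
        errorEmitter.send(msg);   // emit an error event to the error topic
    } else {
        successEmitter.send(msg); // notify the downstream service
    }
});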

Spring Kafka discard message by condition in listener

In my Spring Boot/Kafka project I have the following listener:
@KafkaListener(topics = "${kafka.topic.update}", containerFactory = "updateKafkaListenerContainerFactory")
public void onUpdateReceived(ConsumerRecord<String, Update> consumerRecord, Acknowledgment ack) {
    // do some logic
    ack.acknowledge();
}
Inside the listener I need to check a condition from my business logic, and if it is not met, skip processing of that particular message and let Kafka know to redeliver it one more time.
The reason I need this: according to the business logic of my application, I must avoid sending more than one post per second to a particular Telegram chat. That is why I'd like to check the chatLastSent time in the Kafka listener and postpone sending the message if needed (via redelivery to this Kafka topic).
How do I properly do this? Is it enough not to call ack.acknowledge() this time, or is there another, more proper way to achieve it?
Use the SeekToCurrentErrorHandler.
When you throw an exception, the container will invoke the error handler, which will re-seek the unprocessed messages so they will be fetched again on the next poll.
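A minimal sketch of wiring it into the container factory referenced by the listener above (the value type and consumer factory are assumptions; note that in spring-kafka 2.8+ the equivalent is DefaultErrorHandler, set via setCommonErrorHandler):

@Bean
public ConcurrentKafkaListenerContainerFactory<String, Update> updateKafkaListenerContainerFactory(
        ConsumerFactory<String, Update> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, Update> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // on an exception, re-seek unprocessed records so the next poll fetches them again
    factory.setErrorHandler(new SeekToCurrentErrorHandler());
    return factory;
}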
You can use a RecordFilterStrategy.
See the doc here: https://docs.spring.io/spring-kafka/docs/2.0.5.RELEASE/reference/html/_reference.html#_filtering_messages
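A sketch of the filter approach on the same factory as above; note that filtered records are discarded and acknowledged rather than redelivered, so it suits permanent skips better than the postpone-and-retry requirement in the question (shouldSkip is a hypothetical business check):

factory.setRecordFilterStrategy(record -> shouldSkip(record.value())); // return true to discard the record before it reaches the listener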
