How do I trigger doOnError() in a Reactive Kafka consumer?

private Flux<Record> consumeRecord() {
    return reactiveKafkaConsumerTemplate
            .receive()
            .doOnNext(consumerRecord -> {
                Record record = consumerRecord.value();
                recordWorkflowService.handleRecord(record);
            })
            .map(ConsumerRecord::value)
            .doOnError(throwable -> {
                log.error("something bad happened while consuming : {}", throwable.getMessage());
            });
}
Currently this is the code I have in my consumer. When a record comes in, I do see that my recordWorkflowService.handleRecord() is called and the record is processed successfully; however, I cannot get the error case to trigger.
I have a use case where I consume records from a Kafka topic and do some processing on them. If any part of that processing fails, I do not want the Kafka record's offset to be committed, so that the record can be reprocessed. In other words, if any error occurs in the recordWorkflowService, I want .doOnError() to be triggered and the offset left uncommitted (so the record can be reprocessed).
Am I on the right path here? I have tried manually throwing an exception within handleRecord(), but .doOnError() never seems to get triggered.
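For reference, one common pattern looks roughly like the sketch below. It assumes handleRecord() is a blocking call that throws on failure and that the template is reactor-kafka's ReactiveKafkaConsumerTemplate, which emits ReceiverRecords exposing receiverOffset(); it is not the asker's confirmed setup. The idea is to run the processing inside the main pipeline, so a failure surfaces as an onError signal instead of being lost in a side effect, and to acknowledge the offset only on success:
private Flux<Record> consumeRecord() {
    return reactiveKafkaConsumerTemplate
            .receive()
            .concatMap(receiverRecord ->
                    Mono.fromCallable(() -> {
                                // Hypothetical blocking processing; an exception here becomes an onError signal.
                                Record record = receiverRecord.value();
                                recordWorkflowService.handleRecord(record);
                                return record;
                            })
                            // Commit the offset only after successful processing.
                            .doOnSuccess(r -> receiverRecord.receiverOffset().acknowledge())
                            // Log the failure; the offset stays uncommitted so the record can be reprocessed.
                            .doOnError(e -> log.error("something bad happened while consuming : {}", e.getMessage()))
                            // Swallow the error so the consumer keeps receiving subsequent records.
                            .onErrorResume(e -> Mono.empty()));
}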

Related

Immediately return first emitted value from two Monos while continuing to process the other asynchronously

I have two data sources, each returning a Mono:
class CacheCustomerClient {
    Mono<Entity> createCustomer(Customer customer)
}

class MasterCustomerClient {
    Mono<Entity> createCustomer(Customer customer)
}
Callers to my application are hitting a Spring WebFlux controller:
@PostMapping
@ResponseStatus(HttpStatus.CREATED)
public Flux<Entity> createCustomer(@RequestBody Customer customer) {
    return customerService.createNewCustomer(customer);
}
As long as either data source successfully completes its create operation, I want to immediately return a success response to the caller; however, I still want my service to continue processing the result of the other Mono so that, if an error is encountered, it can be logged.
The problem seems to be that as soon as a value is returned to the controller, a cancel signal is propagated back through the stream by Spring WebFlux and, thus, no information is logged about a failure.
Here's one attempt:
public Flux<Entity> createCustomer(final Customer customer) {
    var cacheCreate = cacheClient
            .createCustomer(customer)
            .doOnError(WebClientResponseException.class,
                    err -> log.error("Customer creation failed in cache"));
    var masterCreate = masterClient
            .createCustomer(customer)
            .doOnError(WebClientResponseException.class,
                    err -> log.error("Customer creation failed in master"));
    return Flux.firstWithValue(cacheCreate, masterCreate)
            .onErrorMap(err -> new Exception("Customer creation failed in cache and master"));
}
Flux.firstWithValue() is great for emitting the first non-error value, but then whichever source is lagging behind is cancelled, meaning that any error is never logged. I've also tried scheduling these two sources on their own Schedulers, and that didn't seem to help either.
How can I perform these two calls asynchronously, and emit the first value to the caller, while continuing to listen for emissions on the slower source?
You can achieve that by transforming your operators into "hot" publishers using the share() operator.
The first subscriber launches the upstream operator, and additional subscribers get back the result cached from that first subscription:
Further Subscriber will share [...] the same result.
Once a second subscription has been made, the publisher is no longer cancellable:
It's worth noting this is an un-cancellable Subscription.
So, to achieve your requirement:
Apply share() to each of your operators
Launch a subscription on the shared publishers to trigger processing
Use the shared publishers in your pipeline (here firstWithValue)
Sample:
import java.time.Duration;

import reactor.core.publisher.Mono;

public class TestUncancellableMono {

    // Mock a mono succeeding quickly.
    static Mono<String> quickSuccess() {
        return Mono.delay(Duration.ofMillis(200)).thenReturn("SUCCESS !");
    }

    // Mock a mono taking more time and ending in error.
    static Mono<String> longError() {
        return Mono.delay(Duration.ofSeconds(1))
                .<String>then(Mono.error(new Exception("ERROR !")))
                .doOnCancel(() -> System.out.println("CANCELLED"))
                .doOnError(err -> System.out.println(err.getMessage()));
    }

    public static void main(String[] args) throws Exception {
        // Transform to hot publishers.
        var sharedQuick = quickSuccess().share();
        var sharedLong = longError().share();

        // Trigger launch.
        sharedQuick.subscribe();
        sharedLong.subscribe();

        // Subscribe back to get the cached result.
        Mono.firstWithValue(sharedQuick, sharedLong)
                .subscribe(System.out::println, err -> System.out.println(err.getMessage()));

        // Wait for subscriptions to end.
        Thread.sleep(2000);
    }
}
The output of the sample is:
SUCCESS !
ERROR !
We can see that the error message has been propagated properly, and that the upstream publisher has not been cancelled.
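Applied to the controller method from the question, the same idea might look like the following. This is a sketch under the question's own assumptions (cacheClient, masterClient, and log exist as shown there), not a verified drop-in:
public Flux<Entity> createCustomer(final Customer customer) {
    // share() turns each call into a hot publisher, so the slower source keeps
    // running (and logging its error) even after firstWithValue() picks a winner.
    var cacheCreate = cacheClient
            .createCustomer(customer)
            .doOnError(WebClientResponseException.class,
                    err -> log.error("Customer creation failed in cache"))
            .share();
    var masterCreate = masterClient
            .createCustomer(customer)
            .doOnError(WebClientResponseException.class,
                    err -> log.error("Customer creation failed in master"))
            .share();

    // Eagerly subscribe so both creates start and run to completion.
    cacheCreate.subscribe(v -> { }, err -> { });
    masterCreate.subscribe(v -> { }, err -> { });

    return Flux.firstWithValue(cacheCreate, masterCreate)
            .onErrorMap(err -> new Exception("Customer creation failed in cache and master"));
}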

What does "Too Many Messages without acknowledgement in topic" meaning for Quarkus consummer?

I see this message in my logs: Too Many Messages without acknowledgement in topic my-topic-retry ... amount 1 ... The connector cannot commit as a record processing has not completed.
Does this affect only that specific topic/connector, or does it affect all the topics/connectors registered in my Quarkus application? I have 3 topics configured with mp.messaging.incoming.[some-name-here].topic=some-topic-here. They are not connected to each other.
Can my code below somehow cause this issue?
How is one unacknowledged message considered "too many" (see amount 1 above)?
#Incoming("my-topic-retry")
#Outgoing("my-topic-back")
public Uni<Message<MyRequest>> retry(Message<MyRequest> in) {
try {
// Check if the message should be reprocessed immediately or delay it.
if (delayTimeSecs == 0)
return Uni
.createFrom()
.item(in.addMetadata(metadataOut)
.withPayload(in.getPayload()));
else
return Uni
.createFrom()
.item(in.addMetadata(metadataOut)
.withPayload(in.getPayload()))
.onItem().delayIt()
.by(Duration.ofSeconds(delayTimeSecs)); // Setting is 300 seconds.
} catch(Exception ex) {
in.nack(new IllegalStateException("An error occurred while trying to process the retry.", ex));
return Uni.createFrom().nullItem();
}
}

How to throw multiple record failures via Spring Kafka's BatchListenerFailedException?

I am collecting the results of records processed in parallel and throwing a BatchListenerFailedException with the failed record; my intention is to commit the offsets of the successful records and hand the failed record to the RecoveringBatchErrorHandler for retry.
But what's happening is that once BatchListenerFailedException is thrown, it exits the loop and doesn't acknowledge the remaining records. So I tried placing ack.acknowledge() in my called @Async service, but then when a BatchListenerFailedException is thrown on failure, everything in that batch is thrown and nothing is acked. Any help is appreciated.
for (String futureIndex : resultset) {
    logger.info("The Records Results are " + futureIndex);
    if (futureIndex.contains("SUCCESS")) {
        logger.info("***Acknowledging -->" + futureIndex.split("~")[2]);
        ack.acknowledge();
    } else {
        String errorindex = futureIndex.split("~")[2];
        throw new KafkaConsumerException("Exception occurred in sending json via HTTPS",
                records.get(Integer.parseInt(errorindex)));
    }
}
With a batch listener, the Acknowledgment is for the entire batch, not one record at a time. You should only call acknowledge() if the whole batch is successful.
Throwing a BatchListenerFailedException with the RecoveringBatchErrorHandler will commit the offsets prior to the failed record and re-seek the partitions so the remaining records are redelivered.
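For illustration, a batch listener following that pattern might look roughly like this. It is a sketch assuming Spring for Apache Kafka with a container factory configured for batch listening and a RecoveringBatchErrorHandler; process() is a hypothetical per-record method, not from the question:
@KafkaListener(topics = "my-topic", containerFactory = "batchFactory")
public void listen(List<ConsumerRecord<String, String>> records) {
    for (int i = 0; i < records.size(); i++) {
        try {
            process(records.get(i)); // hypothetical per-record processing
        } catch (Exception ex) {
            // The error handler commits offsets before index i;
            // the failed record and those after it are redelivered.
            throw new BatchListenerFailedException("Record processing failed", ex, i);
        }
    }
}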

Stop consumption of message if it cannot be completed

I'm new to MassTransit and have a question regarding how I should handle a failure to consume a message. Given the code below, I am consuming INotificationRequestContract messages. As you can see, the code will break and not complete.
public class NotificationConsumerWorker : IConsumer<INotificationRequestContract>
{
    private readonly ILogger<NotificationConsumerWorker> _logger;
    private readonly INotificationCreator _notificationCreator;

    public NotificationConsumerWorker(ILogger<NotificationConsumerWorker> logger, INotificationCreator notificationCreator)
    {
        _logger = logger;
        _notificationCreator = notificationCreator;
    }

    public Task Consume(ConsumeContext<INotificationRequestContract> context)
    {
        try
        {
            throw new Exception("Horrible error");
        }
        catch (Exception e)
        {
            // >>>>> insert code here to put message back for later consumption. <<<<<
            _logger.LogError(e, "Failed to consume message");
            throw;
        }
    }
}
How do I best handle a scenario such as this, where consumption fails? In my specific case this is likely to occur when a required external service is unavailable.
I can see two solutions:
Put the message back, or cancel the consumption, so that it will be tried again.
Store it locally in a database and create my own retry method to wrap this (but I would prefer not to, for the sake of simplicity).
The exceptions section of the documentation provides sufficient guidance for dealing with consumer exceptions.
There are two retry approaches, which can be used in combination:
Message Retry, which waits while the message is locked, in-process, for the next retry. Therefore, these should be short, to deal with transient issues.
Message Redelivery, which delays the message using either the broker delayed delivery, or a message scheduler, so that it is redelivered to the receive endpoint at some point in the future.
Once all retry/redelivery attempts are exhausted, the message is moved to the _error queue.

Remove in-memory Kafka records received from a fetch in a Spring Boot project

I have a requirement to remove in-memory Kafka messages that were already fetched, as I have max-poll-records: 10. The scenario is: while processing the records one by one, if my program encounters an error, I don't need to process any of the leftover records still held in memory.
Example: I fetch 10 records at once, since my max-poll-records is 10. I process 5 records successfully (committing manually), but on the 6th record I encounter an error; now I want to remove the 5 leftover records from memory. Below is my listener code:
@KafkaListener(topics = "#{'${kafka.consumer.allTopicList}'.split(',')}", groupId = Constant.GROUP_ID)
public void consumeAllTopics(@Header(KafkaHeaders.RECEIVED_TOPIC) String topic, String message, Acknowledgment acknowledgment) {
    switch (topic) {
        case Constant.TOPIC1:
            if (!StringUtils.isEmpty(message)) {
                try {
                    // processing logic
                    acknowledgment.acknowledge();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
            // ...
    }
}
I want to remove the records through code. Please help me understand whether this is possible and, if so, how I can achieve it.
It's not clear what you mean by "remove". If you mean "ignore" or "skip", you would need to throw an exception and configure a custom error handler.
See the documentation.
If an ErrorHandler implements RemainingRecordsErrorHandler, the error handler is provided with the failed record and any unprocessed records retrieved by the previous poll(). Those records are not passed to the listener after the handler exits.
There is no standard error handler to "skip" the remaining records; it's an unusual requirement.
Most people would use a SeekToCurrentErrorHandler (which is now the default in the upcoming 2.5 release).
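To illustrate that last suggestion, configuring a SeekToCurrentErrorHandler might look roughly like the sketch below (assuming Spring for Apache Kafka 2.3+; the bean name and back-off values are illustrative, not from the question). Note that this re-seeks and redelivers the remaining fetched records rather than skipping them: you throw from the listener on failure, the handler re-seeks, and the unprocessed records are not passed to the listener but are re-polled from the broker.
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory) {
    var factory = new ConcurrentKafkaListenerContainerFactory<String, String>();
    factory.setConsumerFactory(consumerFactory);
    // FixedBackOff(1000L, 2L): the failed delivery is retried twice, one second apart,
    // before the error handler gives up on that record.
    factory.setErrorHandler(new SeekToCurrentErrorHandler(new FixedBackOff(1000L, 2L)));
    return factory;
}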
