How to retry consuming a message, then stop consuming, when an error occurs in the listener - Spring

I have a Kafka listener writing data to a database. In case of a database timeout (JdbcException), I'd like to retry, and if the timeouts persist, stop consuming Kafka messages.
As far as I understand, Spring Kafka 2.9 has two CommonErrorHandler implementations:
DefaultErrorHandler tries to redeliver messages several times and then sends failed messages to the logs or a DLQ.
CommonContainerStoppingErrorHandler stops the listener container and message consumption.
I would like to chain the two: first try to redeliver messages several times, then stop the container when delivery doesn't succeed.
How can I do that?

Use a DefaultErrorHandler with a custom recoverer that calls a CommonContainerStoppingErrorHandler after the retries are exhausted.
See this answer for an example:
How do you exit spring boot application programmatically when retries are exhausted, to prevent kafka offset commit
(It uses the older SeekToCurrentErrorHandler, but the same concept applies.)
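
Below is a minimal sketch of that wiring, assuming Spring Kafka 2.9. The backoff values (three retries, five seconds apart) are placeholders, and capturing the consumer and container in the error handler is just one way to make them available to the recoverer:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.listener.CommonContainerStoppingErrorHandler;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.kafka.listener.MessageListenerContainer;
import org.springframework.util.backoff.FixedBackOff;

@Bean
public DefaultErrorHandler errorHandler() {
    CommonContainerStoppingErrorHandler stopper = new CommonContainerStoppingErrorHandler();
    AtomicReference<Consumer<?, ?>> consumer = new AtomicReference<>();
    AtomicReference<MessageListenerContainer> container = new AtomicReference<>();
    // Retry each failed record 3 times, 5 seconds apart; when retries are
    // exhausted, the recoverer delegates to the stopping handler, which
    // stops the container instead of committing the failed offset.
    return new DefaultErrorHandler((record, ex) ->
            stopper.handleRemaining(ex, List.of(record), consumer.get(), container.get()),
            new FixedBackOff(5000L, 3L)) {

        @Override
        public void handleRemaining(Exception ex, List<ConsumerRecord<?, ?>> records,
                Consumer<?, ?> kafkaConsumer, MessageListenerContainer listenerContainer) {
            // Capture the consumer and container so the recoverer above can use them.
            consumer.set(kafkaConsumer);
            container.set(listenerContainer);
            super.handleRemaining(ex, records, kafkaConsumer, listenerContainer);
        }
    };
}
```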

Related

Spring RabbitMQ listener completes processing even if it is killed

I have integrated RabbitMQ into a Spring application. In my application I am indexing into Solr via RabbitMQ.
On every queue I have set only one listener.
I want to stop the listener while a message is in progress. The problem is that when I stop the listener via registry.stop(), the RabbitMQ UI and logs show the listener as stopped, but the message it was working on is still successfully indexed into Solr.
To my knowledge, after killing the listener, the message should not be processed any further.
That's not correct. Stopping the container just stops it from consuming more messages from the queue; messages currently in flight are processed gracefully. Why would you not want that? Otherwise you would lose data that had been consumed from and acknowledged on the broker.
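
For reference, stopping a container through the registry looks roughly like this (the listener id "solrIndexer" is a made-up placeholder):

```java
import org.springframework.amqp.rabbit.listener.RabbitListenerEndpointRegistry;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
public class ListenerControl {

    @Autowired
    private RabbitListenerEndpointRegistry registry;

    public void stopIndexing() {
        // Stops the container from fetching new messages; the message
        // currently in the handler still runs to completion and is ack'd.
        registry.getListenerContainer("solrIndexer").stop();
    }
}
```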

Method annotated with Spring Kafka listener is not receiving message if previous message processing is blocked

In my project, I am using a Spring Kafka listener to consume messages from Kafka. My doubt is: if the consume method blocks for some reason and never returns, will this listener still be able to receive new messages and proceed, or will it hang? In my case, it looks like the Kafka listener is blocked and not processing further messages; even another consumer in the same group is not receiving messages.
No; you will not get more records while a thread is blocked, unless the concurrency is > 1 and there are at least that many partitions. Even then, you will receive no more messages for the partition(s) assigned to the blocked consumer.
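
To illustrate, a sketch assuming the topic has at least three partitions (the id, topic, and concurrency values are made up):

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class OrderConsumer {

    // Three consumer threads share the group; if one blocks, the other two
    // keep receiving records, but the partitions assigned to the blocked
    // consumer make no further progress.
    @KafkaListener(id = "orders", topics = "orders", concurrency = "3")
    public void listen(String message) {
        // process the record
    }
}
```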

Spring Boot retry with RabbitMQ

I want to use Spring's retry for failed message processing when receiving messages from RabbitMQ.
Do I need to set requeue=true in order for the message, after several retries, to end up in the dead letter exchange?
Does retry mean that the message is sent back to the queue each time processing fails?
There are a number of properties related to retry; see the documentation.
You can add a RejectAndDontRequeueRecoverer bean to cause the message to be routed to the DLQ when retries are exhausted.
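
A minimal sketch of that setup, assuming Spring Boot's auto-configured listener retry; the property values are placeholders:

```java
import org.springframework.amqp.rabbit.retry.MessageRecoverer;
import org.springframework.amqp.rabbit.retry.RejectAndDontRequeueRecoverer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitRetryConfig {

    // Enable listener retry in application.properties, e.g.:
    //   spring.rabbitmq.listener.simple.retry.enabled=true
    //   spring.rabbitmq.listener.simple.retry.max-attempts=5
    //   spring.rabbitmq.listener.simple.retry.initial-interval=2s

    // When the retries above are exhausted, reject without requeueing so the
    // broker routes the message to the configured dead letter exchange.
    @Bean
    public MessageRecoverer messageRecoverer() {
        return new RejectAndDontRequeueRecoverer();
    }
}
```

Note that this listener-level retry is in-process: the message is not requeued between attempts, and requeueing only becomes relevant once retries are exhausted.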

Running RabbitMQ consumer in different thread after consuming message

I need to understand the processing (thread flow) of consumed messages via Spring's SimpleMessageListenerContainer.
I have the following understanding:
1) Messages are consumed by consumer threads (you can define the consumer thread pool via a task executor).
2) The same consumer thread that receives a message processes it, and is blocked until the handler method finishes executing.
3) Meanwhile, other consumer threads are created to consume and process other messages. The interval at which those consumer threads are created is governed by the setStartConsumerMinInterval setting.
Please let me know if this is correct.
The next part is:
I want to separate the consuming and processing of messages into different threads (different pools for consuming and processing). How can I do that?
I have tried this: I made the handler's message-handling method @Async so it runs in different threads, as sketched below. Is that a correct way, or is a better way available?
The last part is:
In my Spring Boot application I am both publishing and consuming messages, and I am using a single connection factory (CachingConnectionFactory). Should I use two connection factories, one for publishing and the other for consuming, and pass the respective connection factory to the publishing and consuming beans?
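
A rough sketch of the @Async approach mentioned above, with made-up queue and bean names; @Async only takes effect when the annotated method is invoked through its Spring proxy, hence the separate bean:

```java
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.stereotype.Component;

@Configuration
@EnableAsync
class AsyncConfig {
}

@Component
class IndexingListener {

    private final IndexingService service;

    IndexingListener(IndexingService service) {
        this.service = service;
    }

    @RabbitListener(queues = "indexing.queue") // runs on the consumer pool
    public void onMessage(String payload) {
        service.process(payload); // returns immediately; work moves to the async pool
    }
}

@Component
class IndexingService {

    @Async // executes on the async executor, freeing the consumer thread
    public void process(String payload) {
        // heavy processing here
    }
}
```

One caveat: the container acknowledges the message as soon as onMessage() returns, so a failure inside the async method can no longer cause a requeue or retry.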

What is the ideal way to store the consumer offset using spring boot kafka consumer client?

I have a Spring Kafka consumer application. The application acts as a pass-through which polls messages from the Kafka broker and sends them to IBM MQ. What would be the best/simplest approach to store the offset in case of failure?
The simplest approach is to use the default mechanism of storing the offsets in Kafka itself.
If you add a SeekToCurrentErrorHandler, the container will keep redelivering records that fail in the listener, up to 10 times by default, but it can be configured for infinite retries.
If you add stateful retry, the listener adapter can add a delay between each delivery attempt.
See Stateful Retry.
ackOnError should be set to false.
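
Putting those pieces together, a minimal sketch assuming a Spring Kafka version (roughly 2.3 to 2.7) where SeekToCurrentErrorHandler and ackOnError still exist; the backoff values are placeholders:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // 9 retries 1 second apart = 10 delivery attempts in total;
    // FixedBackOff.UNLIMITED_ATTEMPTS would retry indefinitely. The failed
    // offset is never committed, so the record is re-polled after a restart.
    factory.setErrorHandler(new SeekToCurrentErrorHandler(new FixedBackOff(1000L, 9L)));
    // Do not commit the offset of a record whose listener threw an exception.
    factory.getContainerProperties().setAckOnError(false);
    return factory;
}
```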
