Spring Kafka consumer stops receiving messages

I have a Spring microservice using Kafka.
Here are 5 of the consumer config properties:
BOOTSTRAP_SERVERS_CONFIG -> <ip>:9092
KEY_DESERIALIZER_CLASS_CONFIG -> StringDeserializer.class
VALUE_DESERIALIZER_CLASS_CONFIG -> StringDeserializer.class
GROUP_ID_CONFIG -> "Group1"
MAX_POLL_INTERVAL_MS_CONFIG -> Integer.MAX_VALUE
It has been observed that when the microservice is restarted, the Kafka consumer stops receiving messages. Please help me with this.

I believe your max.poll.interval.ms is the issue. It is set to roughly 24 days! This is the time the consumer is given to process messages between poll() calls, so the broker will wait that long before evicting the consumer when the processing thread dies. Try setting it to a value much smaller than Integer.MAX_VALUE, for example 30 seconds (30000 ms).
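For reference, a minimal sketch of that consumer configuration with a 30-second max.poll.interval.ms (the bootstrap address is a placeholder and the class/method names are illustrative, not from the question):

import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ConsumerProps {

    // Same properties as in the question, but with a sane max.poll.interval.ms.
    static Properties consumerProps(String bootstrapServers) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers); // e.g. "<ip>:9092"
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "Group1");
        // If poll() is not called within this window, the group coordinator
        // evicts the consumer and rebalances instead of waiting ~24 days.
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 30_000);
        return props;
    }
}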

Related

How to limit Message consumption rate of Kafka Consumer in SpringBoot? (Kafka Stream)

I want to limit my Kafka consumer's message consumption rate to 1 message per 10 seconds. I'm using Kafka Streams in Spring Boot.
Following are the properties I tried to make this work, but it didn't work as expected (it consumed many messages at once).
config.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, brokersUrl);
config.put(StreamsConfig.APPLICATION_ID_CONFIG, applicationId);
config.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, autoOffsetReset);
//
config.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG,1);
config.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 10000);
Is there any way to manually ACK (manual offset commits) in Kafka Streams? That would be useful to control the message consumption rate.
Please note that I'm using KStreams (Kafka Streams).
Any help is really appreciated. :)
I think you misunderstand what MAX_POLL_INTERVAL_MS_CONFIG actually does.
That is the max allowed time for the client to read an event.
From the docs:
controls the maximum time between poll invocations before the consumer will proactively leave the group (5 minutes by default). The value of the configuration request.timeout.ms (default to 30 seconds) must always be smaller than max.poll.interval.ms (default to 5 minutes), since that is the maximum time that a JoinGroup request can block on the server while the consumer is rebalancing
"maximum time" not saying any "delay" between poll invocations.
Kafka Streams will constantly poll; you cannot easily pause/start it and delay record polling.
To read an event every 10 seconds without losing consumers in the group due to missed heartbeats, you should use the Consumer API: call pause(), sleep for 10 seconds (e.g. Thread.sleep(Duration.ofSeconds(10).toMillis())), then resume() and poll() again, while setting max.poll.records=1.
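A minimal sketch of that approach with the plain Consumer API (bootstrap address, group id, and topic name are placeholders, not from the question):

import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ThrottledConsumer {

    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "throttled-group");         // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 1); // at most one record per poll

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("input-topic")); // placeholder topic name
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.value()); // process the single record
                }
                if (!records.isEmpty()) {
                    consumer.pause(consumer.assignment());           // stop fetching
                    Thread.sleep(Duration.ofSeconds(10).toMillis()); // ~1 message per 10 seconds
                    consumer.resume(consumer.assignment());          // fetch again on next poll()
                }
            }
        }
    }
}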
Finally, I achieved the desired message consumption limit using Thread.sleep().
Since there is no way to control the message consumption rate using Kafka config properties alone, I had to use my application code to control the rate of consumption.
Example: if I want to limit the record consumption rate to, say, 4 messages per 10 seconds, then I just consume 4 messages (keeping a count in parallel); once 4 records are consumed, I make the thread sleep for 10 seconds and then repeat the same process over again.
I know it's not a good solution, but there was no other way.
thank you OneCricketeer
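A rough sketch of that counting approach inside a Kafka Streams topology (the topic name is a placeholder, and blocking the stream thread like this assumes max.poll.interval.ms is comfortably larger than the sleep):

import java.time.Duration;
import java.util.concurrent.atomic.AtomicInteger;

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;

public class ThrottledTopology {

    // Builds a topology that sleeps for 10 seconds after every 4th record,
    // throttling consumption to roughly 4 messages per 10 seconds.
    static Topology build() {
        StreamsBuilder builder = new StreamsBuilder();
        AtomicInteger counter = new AtomicInteger();
        builder.<String, String>stream("input-topic") // placeholder topic name
                .foreach((key, value) -> {
                    // ... process the record ...
                    if (counter.incrementAndGet() % 4 == 0) {
                        try {
                            // Blocks the stream thread until the next batch is due.
                            Thread.sleep(Duration.ofSeconds(10).toMillis());
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                    }
                });
        return builder.build();
    }
}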

Kafka Streams keeps logging 'Discovered transaction coordinator' after a node crash (with config StreamsConfig.EXACTLY_ONCE_V2)

I have a Kafka (kafka_2.13-2.8.0) cluster with 3 partitions and a replication factor of 3, distributed across 3 nodes.
A producer cluster is sending messages to the topic.
I also have a consumer cluster using Kafka Streams to consume messages from the topic.
To test fault tolerance, I killed a node. Then all consumers get stuck and keep printing the info below:
[read-1-producer] o.a.k.c.p.internals.TransactionManager : [Producer clientId=streams-app-3-0451a24c-7e5c-498c-98d4-d30a6f5ecfdb-StreamThread-1-producer, transactionalId=streams-app-3-0451a24c-7e5c-498c-98d4-d30a6f5ecfdb-1] Discovered transaction coordinator myhost:9092 (id: 3 rack: null)
What I have found out so far is that it is related to the StreamsConfig.EXACTLY_ONCE_V2 configuration, because if I change it to StreamsConfig.AT_LEAST_ONCE the consumer works as expected.
To keep exactly-once (EOS) consumption, did I miss any configuration for the producer/cluster/consumer?
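For reference, a minimal sketch of the processing-guarantee setting the question refers to (the application id and bootstrap server are placeholders loosely based on the log line above; the rest of the Streams configuration is omitted):

import java.util.Properties;

import org.apache.kafka.streams.StreamsConfig;

public class EosStreamsProps {

    // Hypothetical Streams configuration fragment.
    static Properties streamsProps() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-app-3");  // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "myhost:9092"); // placeholder
        // Switching this to StreamsConfig.AT_LEAST_ONCE is what made the
        // application keep consuming after a broker node was killed.
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);
        return props;
    }
}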

ConcurrentMessageListenerContainer in Spring Kafka is not working

Topic A was created with 12 partitions.
In Spring Kafka, concurrency is set to 4, and I can see 4 client ids assigned to the 12 partitions (3 each).
Containers are also created for the 4 concurrent consumers, but while consuming data from the topic in the listeners, they consume sequentially rather than in parallel.
Example:
Consumer 1-C completes processing the data, then
Consumer 2-C starts and completes, then
Consumer 3-C starts and completes, then
Consumer 4-C ....
Consumer 1-C
But instead I want
Consumer 1-C, Consumer 2-C, Consumer 3-C and Consumer 4-C to consume data in parallel.
Check the following GitHub issue and compare it with your code:
https://github.com/spring-projects/spring-kafka/issues/247
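For comparison, a minimal sketch of a listener container factory with concurrency 4 (the bean wiring, topic, and group names are assumptions, not taken from the question):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.stereotype.Component;

@Configuration
class KafkaConsumerConfig {

    @Bean
    ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        // 4 consumer threads; with 12 partitions each thread gets 3 partitions.
        factory.setConcurrency(4);
        return factory;
    }
}

@Component
class TopicAListener {

    // Each of the 4 container threads invokes this method independently,
    // so records from different partitions can be processed in parallel.
    @KafkaListener(topics = "topicA", groupId = "groupA") // placeholder names
    public void listen(String message) {
        // ... process the message ...
    }
}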

Spring Kafka - increase in concurrency produces more duplicates

Topic A is created with 12 partitions,
and a Spring Kafka consumer was started with concurrency 10 as a Spring Boot application. When putting 1,000 messages into the topic there were no issues with duplicates, as all 1,000 messages got consumed. But on pushing more load, 10K messages at 100 TPS, the consumer end received the 10K messages plus about 8.5K duplicates with concurrency 10; with concurrency 1 it works fine (no duplicates found).
Enable auto commit is false and I do a manual ack after processing each message.
Processing time for one message is 300 milliseconds.
Consumer rebalances are occurring, and because of that duplicates are produced.
How can I overcome this situation when handling more messages in Kafka?
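A minimal sketch of the manual-ack listener described above (topic, group, and concurrency wiring are assumptions; the container's AckMode must be MANUAL or MANUAL_IMMEDIATE for the Acknowledgment parameter to be injected):

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Component;

@Component
class TopicAManualAckListener {

    // The offset is committed only after acknowledge() is called, so a
    // rebalance before that point means the record can be redelivered,
    // which is where the duplicates come from.
    @KafkaListener(topics = "topicA", groupId = "groupA", concurrency = "10") // placeholder names
    public void listen(String message, Acknowledgment ack) {
        // ... ~300 ms of processing ...
        ack.acknowledge();
    }
}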

Why did the Kafka consumer re-process all the records from the last 2 months?

In one instance, when the consumer service was restarted, it led to re-processing of all the records that had been sent to Kafka.
Kafka Broker: 0.10.0.1
Kafka producer Service: Springboot version 1.4.3.Release
Kafka Consumer Springboot Service: Springboot version 2.2.0.Release
Now, to investigate this issue, I want to recreate this scenario in the dev/local environment, but it is not happening!
What can be the probable cause?
How can I check whether a record, once processed on the consumer side, is committed when we call Acknowledgment.acknowledge()?
Consumer - properties
Enable Auto commit = false;
Auto offset Reset = earliest;
max poll records = 1;
max poll interval ms config = I am calculating the value of this parameter at runtime from the formula (number_of_retries * x * 2), keeping it <= Integer.MAX_VALUE
Retry Policy - Simple
number of retries = 3;
interval between retries = x (millis)
I am creating topics at runtime on the consumer side via beans: NewTopic(topic_name, 1, (short) 1)
There are 2 Kafka clusters and 1 ZooKeeper instance running.
Any help would be appreciated
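A sketch of the consumer setup as described, assuming the properties are built in code (the helper names, topic name, and the retry interval x are placeholders):

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.consumer.ConsumerConfig;

public class ConsumerSetup {

    // Hypothetical helper mirroring the properties listed in the question.
    static Map<String, Object> consumerProps(int numberOfRetries, long retryIntervalMs) {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 1);
        // (number_of_retries * x * 2), capped so it stays <= Integer.MAX_VALUE
        long maxPollInterval = Math.min(numberOfRetries * retryIntervalMs * 2, Integer.MAX_VALUE);
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, (int) maxPollInterval);
        return props;
    }

    // Topic created at runtime: 1 partition, replication factor 1.
    static NewTopic topic(String topicName) {
        return new NewTopic(topicName, 1, (short) 1);
    }
}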
That broker is very old; if a consumer received no records for 24 hours, its offsets were removed, and restarting the consumer would cause it to reprocess all the records.
With newer brokers, the offset retention was changed to 7 days, and the consumer has to be stopped for 7 days for the offsets to be removed.
Spring Boot 1.4.x (and even 1.5.x, 2.0.x) is no longer supported; the current version is 2.3.1.
You should upgrade to a newer broker and a more recent Spring Boot version.
