Spring Kafka - Batch processing not working

I have a Spring Kafka consumer and I want to consume 50 records every 60 seconds. I referred to a few documents and configured my application like this:
Consumer Configurations
@Bean
public ConsumerFactory<String, DeviceInfo> consumerFactory() {
    Map<String, Object> config = new HashMap<>();
    config.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaConfig.getConsumerBootstrapServers());
    config.put(ConsumerConfig.GROUP_ID_CONFIG, "fixit-airwatch-etl");
    config.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    config.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
    config.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, AUTO_OFFSET_RESET_CONFIG);
    config.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "50");
    config.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "60000");
    return new DefaultKafkaConsumerFactory<>(config, new StringDeserializer(),
            new JsonDeserializer<>(DeviceInfo.class));
}
@Bean
public ConcurrentKafkaListenerContainerFactory<String, DeviceInfo> kafkaListenerFactory(ConsumerFactory<String, DeviceInfo> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, DeviceInfo> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    factory.setBatchListener(true);
    factory.setBatchErrorHandler(new BatchLoggingErrorHandler());
    return factory;
}
Kafka listener
@KafkaListener(topics = "${app.kafka.topic}", groupId = "etl-group", containerFactory = "kafkaListenerFactory")
public void receive(@Payload List<DeviceInfo> messages) {
    log.info("Got these many records from the topic {}", messages.size());
}
application.properties
spring.kafka.listener.type=batch
In spite of all these configurations, it looks like I'm not seeing the expected behavior. The log statements are below.
2022-07-04 12:07:22.533 INFO 89732 --- [ntainer#0-0-C-1] c.w.g.g.f.consumer.KafkaConsumer : Got these many records from the topic 9
2022-07-04 12:07:22.533 INFO 89732 --- [ntainer#0-0-C-1] c.w.g.g.f.consumer.KafkaConsumer : Got these many records from the topic 4
2022-07-04 12:07:22.534 INFO 89732 --- [ntainer#0-0-C-1] c.w.g.g.f.consumer.KafkaConsumer : Got these many records from the topic 6
2022-07-04 12:07:22.534 INFO 89732 --- [ntainer#0-0-C-1] c.w.g.g.f.consumer.KafkaConsumer : Got these many records from the topic 8
2022-07-04 12:07:22.535 INFO 89732 --- [ntainer#0-0-C-1] c.w.g.g.f.consumer.KafkaConsumer : Got these many records from the topic 8
2022-07-04 12:07:22.535 INFO 89732 --- [ntainer#0-0-C-1] c.w.g.g.f.consumer.KafkaConsumer : Got these many records from the topic 6
2022-07-04 12:07:22.536 INFO 89732 --- [ntainer#0-0-C-1] c.w.g.g.f.consumer.KafkaConsumer : Got these many records from the topic 11
Even though I set the batch size to 50, it fetches a random number of records, and the delay between batches is not what I configured. Did I miss anything here? Please share your thoughts. TIA.

Your configuration looks fine, but you need to keep one thing in mind: for this consumer to work as expected, 50 records must be available in the topic every 60 seconds. Otherwise, you need to adjust the values of the following properties.
fetch.min.bytes
This property allows a consumer to specify the minimum amount of data that it wants to receive from the broker when fetching records. If a broker receives a request for records from a consumer but the new records amount to fewer bytes than fetch.min.bytes, the broker will wait until more messages are available before sending the records back to the consumer. This reduces the load on both the consumer and the broker, as they have to handle fewer back-and-forth messages in cases where the topics don't have much new activity (or during lower-activity hours of the day). You will want to set this parameter higher than the default if the consumer is using too much CPU when there isn't much data available, or to reduce the load on the brokers when you have a large number of consumers.
fetch.max.wait.ms
By setting fetch.min.bytes, you tell Kafka to wait until it has enough data to send before responding to the consumer. fetch.max.wait.ms lets you control how long to wait. By default, Kafka will wait up to 500 ms. This results in up to 500 ms of extra latency in case there is not enough data flowing to the Kafka topic to satisfy the minimum amount of data to return. If you want to limit the potential latency (usually due to SLAs controlling the maximum latency of the application), you can set fetch.max.wait.ms to a lower value. If you set fetch.max.wait.ms to 100 ms and fetch.min.bytes to 1 MB, Kafka will receive a fetch request from the consumer and will respond with data either when it has 1 MB of data to return or after 100 ms, whichever happens first.
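To put that in terms of this question's configuration, here is a minimal sketch of adding both properties to the consumerFactory() above. The byte value is an assumption (roughly 50 records at ~1 KB each); tune it to your actual record size:

config.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, "51200");   // ~50 records at ~1 KB each (assumed size)
config.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, "60000"); // wait at most 60 s for that much data

Keep in mind that fetch.max.wait.ms should stay below the consumer's request.timeout.ms, so a 60-second wait also means raising that timeout accordingly.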

Related

Error Handling in Transactional Kafka Listeners

This question follows my post on Request/Reply and Retry Policy for Kafka Listeners, but in the context of transactional Kafka listeners (the current implementation is therefore similar to the proposed solution).
Basically, the idea is to support complete error management which, based on the type of exception, either retries the record X times or sends it to a dead letter topic, for exceptions raised inside a Kafka listener annotated with @Transactional.
When I specify the errorHandler parameter on my @KafkaListener, I can see that it goes through my logic the first time, but then, after sending to the dead letter topic (and returning my custom response in the case of @SendTo), it rolls back the transaction and retries processing my record as defined by the BackOff period of the DefaultAfterRollbackProcessor.
Is there any way to prevent these retries when the exception has been properly handled, and just carry on with the next transaction?
Here are my various handlers, defined as suggested by the solution in the above link:
@Bean
public ErrorHandler errorHandler(MyDeadLetterQueueHandler deadLetterQueueHandler) {
    // set with retry policy higher than KafkaListenerErrorHandler
    return new SeekToCurrentErrorHandler((data, thrownException) -> {
        deadLetterQueueHandler.send(data, thrownException);
    }, new FixedBackOff(15000, 20));
}
@Bean
public AfterRollbackProcessor<?, ?> afterRollbackProcessor(MyDeadLetterQueueHandler deadLetterQueueHandler) {
    // set with retry policy higher than KafkaListenerErrorHandler
    final var afterRollbackProcessor = new DefaultAfterRollbackProcessor<Object, Object>((data, thrownException) -> {
        deadLetterQueueHandler.send(data, thrownException);
    }, new FixedBackOff(15000, 20));
    afterRollbackProcessor.setCommitRecovered(true);
    return afterRollbackProcessor;
}
@Primary
KafkaListenerErrorHandler kafkaListenerErrorHandler(MyDeadLetterQueueHandler deadLetterQueueHandler,
        MyExceptionHandler exceptionHandler) {
    return (message, exception) -> {
        final var cause = (Exception) exception.getCause();
        final var consumerRecord = message.getHeaders().get(KafkaHeaders.RAW_DATA, ConsumerRecord.class);
        if (shouldGoToDLT(cause)) {
            sendToDeadLetterTopic(deadLetterQueueHandler, consumerRecord, cause);
            return new CustomResponse(cause.getMessage());
            // should end transaction rollback and go to next transaction
        } else {
            // retry 10 times before killing the app
            var deliveryAttempt = message.getHeaders().get(KafkaHeaders.DELIVERY_ATTEMPT, Integer.class);
            if (deliveryAttempt > 10) {
                exceptionHandler.handle(cause);
            }
        }
        throw exception;
    };
}
and the logs that I get from my test using EmbeddedKafkaBroker and throwing an exception in a @Transactional Kafka listener:
2021-07-01 17:12:34.791 INFO [,0beec62e5e3dbb97,0beec62e5e3dbb97] 19210 --- [ntainer#1-0-C-1] o.a.k.clients.producer.KafkaProducer : [Producer clientId=producer-consumer-group.rollback-db-employee-topic.0, transactionalId=consumer-group.rollback-db-employee-topic.0] Aborting incomplete transaction
2021-07-01 17:12:34.812 ERROR [,0beec62e5e3dbb97,0beec62e5e3dbb97] 19210 --- [ntainer#1-0-C-1] essageListenerContainer$ListenerConsumer : Transaction rolled back
org.springframework.transaction.HeuristicCompletionException: Heuristic completion: outcome state is rolled back; nested exception is org.springframework.transaction.UnexpectedRollbackException: Transaction silently rolled back because it has been marked as rollback-only
at org.springframework.data.transaction.ChainedTransactionManager.commit(ChainedTransactionManager.java:195)
at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:152)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeInTransaction(KafkaMessageListenerContainer.java:2072)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListenerInTx(KafkaMessageListenerContainer.java:2041)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:2017)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:1702)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeIfHaveRecords(KafkaMessageListenerContainer.java:1272)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:1264)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:1161)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.springframework.transaction.UnexpectedRollbackException: Transaction silently rolled back because it has been marked as rollback-only
at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:752)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:711)
at org.springframework.data.transaction.MultiTransactionStatus.commit(MultiTransactionStatus.java:74)
at org.springframework.data.transaction.ChainedTransactionManager.commit(ChainedTransactionManager.java:168)
... 11 common frames omitted
2021-07-01 17:12:34.918 INFO [,0beec62e5e3dbb97,0beec62e5e3dbb97] 19210 --- [ntainer#1-0-C-1] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-consumer-group-3, groupId=consumer-group] Seeking to offset 0 for partition rollback-db-employee-topic-0
2021-07-01 17:12:34.929 INFO [,0beec62e5e3dbb97,2d5e98ce0b91d04a] 19210 --- [ntainer#1-0-C-1] o.a.k.clients.producer.KafkaProducer : [Producer clientId=producer-consumer-group.rollback-db-employee-topic.0, transactionalId=consumer-group.rollback-db-employee-topic.0] Aborting incomplete transaction
2021-07-01 17:12:34.931 ERROR [,0beec62e5e3dbb97,2d5e98ce0b91d04a] 19210 --- [ntainer#1-0-C-1] essageListenerContainer$ListenerConsumer : Transaction rolled back
org.springframework.transaction.HeuristicCompletionException: Heuristic completion: outcome state is rolled back; nested exception is org.springframework.transaction.UnexpectedRollbackException: Transaction silently rolled back because it has been marked as rollback-only
at org.springframework.data.transaction.ChainedTransactionManager.commit(ChainedTransactionManager.java:195)
at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:152)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeInTransaction(KafkaMessageListenerContainer.java:2072)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListenerInTx(KafkaMessageListenerContainer.java:2041)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:2017)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:1702)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeIfHaveRecords(KafkaMessageListenerContainer.java:1272)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:1264)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:1161)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.springframework.transaction.UnexpectedRollbackException: Transaction silently rolled back because it has been marked as rollback-only
at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:752)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:711)
at org.springframework.data.transaction.MultiTransactionStatus.commit(MultiTransactionStatus.java:74)
at org.springframework.data.transaction.ChainedTransactionManager.commit(ChainedTransactionManager.java:168)
... 11 common frames omitted
2021-07-01 17:12:35.034 INFO [,0beec62e5e3dbb97,2d5e98ce0b91d04a] 19210 --- [ntainer#1-0-C-1] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-consumer-group-3, groupId=consumer-group] Seeking to offset 0 for partition rollback-db-employee-topic-0
2021-07-01 17:12:35.445 INFO [,0beec62e5e3dbb97,4ef5c58e90699a09] 19210 --- [ntainer#1-0-C-1] o.a.k.clients.producer.KafkaProducer : [Producer clientId=producer-consumer-group.rollback-db-employee-topic.0, transactionalId=consumer-group.rollback-db-employee-topic.0] Aborting incomplete transaction
2021-07-01 17:12:35.448 ERROR [,0beec62e5e3dbb97,4ef5c58e90699a09] 19210 --- [ntainer#1-0-C-1] essageListenerContainer$ListenerConsumer : Transaction rolled back
org.springframework.transaction.HeuristicCompletionException: Heuristic completion: outcome state is rolled back; nested exception is org.springframework.transaction.UnexpectedRollbackException: Transaction silently rolled back because it has been marked as rollback-only
at org.springframework.data.transaction.ChainedTransactionManager.commit(ChainedTransactionManager.java:195)
at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:152)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeInTransaction(KafkaMessageListenerContainer.java:2072)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListenerInTx(KafkaMessageListenerContainer.java:2041)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:2017)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:1702)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeIfHaveRecords(KafkaMessageListenerContainer.java:1272)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:1264)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:1161)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.springframework.transaction.UnexpectedRollbackException: Transaction silently rolled back because it has been marked as rollback-only
at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:752)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:711)
at org.springframework.data.transaction.MultiTransactionStatus.commit(MultiTransactionStatus.java:74)
at org.springframework.data.transaction.ChainedTransactionManager.commit(ChainedTransactionManager.java:168)
... 11 common frames omitted
…
Thanks for your help.
EDIT: Here's my listener:
@KafkaListener(topics = "user-topic", groupId = "consumer-group-1", errorHandler = "errorHandler")
@Transactional
public void onReceive(User command) {
    // update database
    userRepository.save(command);
    switch (command.getName()) {
        case "GOTODLT":
            var volunteerArithmeticException = 7 / 0;
            break;
        case "SHOULDRETRY":
            throw new IllegalStateException("Should be retried 10 times");
    }
}
It is not clear why you are using @Transactional, since you are using a ChainedKafkaTransactionManager (which is deprecated, by the way; see the parent class javadocs). It is OK to use it, as long as you are aware of the limitations.
The transactions have already been started by the transaction managers, so the annotation is not needed.
Since your listener method is wrapped in a transaction interceptor, the transaction is rolled back before your listener error handler is invoked.
Hence the "Transaction silently rolled back because it has been marked as rollback-only" message.
Remove the annotation and it should work as you expect.
Also, you should not configure both a SeekToCurrentErrorHandler (STCEH) and an AfterRollbackProcessor (ARP): the former runs inside the transaction, the latter after a rollback.
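For reference, a minimal sketch of the listener with the annotation removed (assuming the container factory is already wired with the Kafka and DB transaction managers, as in your setup):

@KafkaListener(topics = "user-topic", groupId = "consumer-group-1", errorHandler = "errorHandler")
public void onReceive(User command) {
    // Still executes inside the container-managed transaction; the listener
    // error handler now runs before any rollback decision is made.
    userRepository.save(command);
}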

Spring kafka application with multiple consumer groups stops consuming messages

Kafka version: 2.3.1
Spring Boot version: 2.2.5.RELEASE
I have a Spring Boot Kafka application with 3 consumer groups. It stops consuming messages because of a failing heartbeat. I tried updating the consumer configuration as suggested by multiple Stack Overflow threads; even after that, I am facing the issue.
As per the logs, the consumers take less than one second to consume each message, right up to the point where they suddenly stop. Also, some of the processing in the consumer happens in an asynchronous thread.
Below is the configuration for one of the consumer factories.
I allowed 10 seconds of buffer time for each record and configured MAX_POLL_INTERVAL_MS_CONFIG based on that.
@Bean
public ConsumerFactory<Object, Object> reqConsumerFactory()
{
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "req-event-group");
    props.put(ConsumerConfig.CLIENT_ID_CONFIG, "req-event-group");
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, brokerConfig.getBootstrapAddress());
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    props.put(ConsumerConfig.RETRY_BACKOFF_MS_CONFIG, 1000);
    props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 50 * 15 * 1000);
    props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 5000);
    props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 50 * 10 * 1000);
    props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 50);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    return new DefaultKafkaConsumerFactory<>(props);
}
@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<Object, Object>> reqKafkaListenerContainerFactory()
{
    ConcurrentKafkaListenerContainerFactory<Object, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(reqConsumerFactory());
    factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
    factory.setErrorHandler(new SeekToCurrentErrorHandler(2));
    factory.setConcurrency(2);
    return factory;
}
One of the consumer methods:
@KafkaListener(topicPattern = "${process.update.requirement.topic.name}", containerFactory = "reqKafkaListenerContainerFactory", groupId = "req-event-group")
public void handleCompleteAndErrorRequirement(ConsumerRecord<String, Object> consumerRecord, Acknowledgment acknowledgment)
{
    RequirementEventMsg requirementEventMsg = (RequirementEventMsg) consumerRecord;
    acknowledgment.acknowledge();
    // asynchronous method call here
}
I don't see any error other than this
2020-06-23 13:28:06.815 [INFO ] AbstractCoordinator:855 - [Consumer clientId=consumer-6, groupId=process-consumer-group] Attempt to heartbeat failed since group is rebalancing
2020-06-23 13:28:06.835 [INFO ] AbstractCoordinator:855 - [Consumer clientId=consumer-4, groupId=process-consumer-group] Attempt to heartbeat failed since group is rebalancing
2020-06-23 13:28:07.175 [INFO ] ConsumerCoordinator:472 - [Consumer clientId=consumer-4, groupId=process-consumer-group] Revoking previously assigned partitions [UPDATE_REQUIREMENT_TOPIC-1, UPDATE_REQUIREMENT_TOPIC-0]
2020-06-23 13:28:07.176 [INFO ] KafkaMessageListenerContainer:394 - partitions revoked: [UPDATE_REQUIREMENT_TOPIC-1, UPDATE_REQUIREMENT_TOPIC-0]
2020-06-23 13:28:07.177 [INFO ] AbstractCoordinator:509 - [Consumer clientId=consumer-4, groupId=process-consumer-group] (Re-)joining group
2020-06-23 13:28:07.233 [INFO ] ConsumerCoordinator:472 - [Consumer clientId=consumer-6, groupId=process-consumer-group] Revoking previously assigned partitions [PROCESS_EVENT_TOPIC-0, PROCESS_EVENT_TOPIC-1]
2020-06-23 13:28:07.233 [INFO ] KafkaMessageListenerContainer:394 - partitions revoked: [PROCESS_EVENT_TOPIC-0, PROCESS_EVENT_TOPIC-1]

How to stop micro service with Spring Kafka Listener, when connection to Apache Kafka Server is lost?

I am currently implementing a microservice which reads data from an Apache Kafka topic. I am using "spring-boot, version: 1.5.6.RELEASE" for the microservice and "spring-kafka, version: 1.2.2.RELEASE" for the listener in the same microservice. This is my Kafka configuration:
@Bean
public Map<String, Object> consumerConfigs() {
    return new HashMap<String, Object>() {{
        put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, servers);
        put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        put(ConsumerConfig.GROUP_ID_CONFIG, groupIdConfig);
        put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, autoOffsetResetConfig);
    }};
}
@Bean
public ConsumerFactory<String, String> consumerFactory() {
    return new DefaultKafkaConsumerFactory<>(consumerConfigs());
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    return factory;
}
I have implemented the listener via the @KafkaListener annotation:
@KafkaListener(topics = "${kafka.dataSampleTopic}")
public void receive(ConsumerRecord<String, String> payload) {
    // business logic
    latch.countDown();
}
I need to be able to shut down the microservice when the listener loses connection to the Apache Kafka server.
When I kill the kafka server I get the following message in the spring boot log:
2017-11-01 19:58:15.721 INFO 16800 --- [ 0-C-1] o.a.k.c.c.internals.AbstractCoordinator : Marking the coordinator 192.168.0.4:9092 (id: 2145482646 rack: null) dead for group TestGroup
When I start the Kafka server, I get:
2017-11-01 20:01:37.748 INFO 16800 --- [ 0-C-1] o.a.k.c.c.internals.AbstractCoordinator : Discovered coordinator 192.168.0.4:9092 (id: 2145482646 rack: null) for group TestGroup.
So clearly the Spring Kafka listener in my microservice is able to detect when the Kafka server is up and running and when it's not. In the Confluent book "Kafka: The Definitive Guide", in the chapter "But How Do We Exit?", it is said that the wakeup() method needs to be called on the Consumer so that a WakeupException is thrown. So I tried to capture the two events (Kafka server down and Kafka server up) with the @EventListener annotation, as described in the Spring for Apache Kafka documentation, and then call wakeup(). But the example in the documentation shows how to detect an idle consumer, which is not my case. Could someone please help me with this? Thanks in advance.
I don't know how to get a notification of the server down condition (in my experience, the consumer goes into a tight loop within the poll()).
However, if you figure that out, you can stop the listener container(s) which will wake up the consumer and exit the tight loop...
@Autowired
private KafkaListenerEndpointRegistry registry;

...

this.registry.stop();
2017-11-01 16:29:54.290 INFO 21217 --- [ad | so47062346] o.a.k.c.c.internals.AbstractCoordinator : Marking the coordinator localhost:9092 (id: 2147483647 rack: null) dead for group so47062346
2017-11-01 16:29:54.346 WARN 21217 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : Connection to node 0 could not be established. Broker may not be available.
...
2017-11-01 16:30:00.643 WARN 21217 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : Connection to node 0 could not be established. Broker may not be available.
2017-11-01 16:30:00.680 INFO 21217 --- [ntainer#0-0-C-1] essageListenerContainer$ListenerConsumer : Consumer stopped
You can improve the tight loop by adding reconnect.backoff.ms, but the poll() never exits so we can't emit an idle event.
spring:
  kafka:
    consumer:
      enable-auto-commit: false
      group-id: so47062346
      properties:
        reconnect.backoff.ms: 1000
I suppose you could enable idle events and use a timer to detect if you've received no data (or idle events) for some period of time, and then stop the container(s).
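A minimal sketch of that idea, assuming idle events are enabled via the container property idleEventInterval (the threshold of 5 idle events is an arbitrary illustration; ListenerContainerIdleEvent lives in org.springframework.kafka.event, AtomicInteger in java.util.concurrent.atomic):

private final AtomicInteger idleEvents = new AtomicInteger();

// Fired by the container whenever no records arrive within idleEventInterval.
// After repeated idle events we assume the broker is unreachable and stop all
// containers, which wakes the consumer out of its tight poll() loop.
@EventListener
public void onIdle(ListenerContainerIdleEvent event) {
    if (idleEvents.incrementAndGet() >= 5) {
        this.registry.stop();
    }
}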

reconsume Kafka 0.10.1 topic in Spring

In Spring-Kafka I want to reconsume a Kafka topic from the beginning. Doing this by changing the group.id to something unknown to Kafka of course works:
@KafkaListener(topics = "sensordata.t")
public void receiveMessage(String message) {
    ...
}
@Bean
public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "NewGroupID");
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false"); // it still commits though...
    return props;
}
However, starting over by setting the offset to 0 fails.
@KafkaListener(topicPartitions =
    { @TopicPartition(topic = "sensordata.t",
        partitionOffsets = @PartitionOffset(partition = "0", initialOffset = "0"))})
public void receiveMessage(String message) {
    ...
}
@Bean
public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    ...
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "NewGroupID");
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
    props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "10000"); // making the timeout window larger seems to have no influence
    props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "1"); // setting max records to 1 makes no difference
    return props;
}
The error I get:
2016-11-14 14:07:59.018 INFO 8165 --- [ main] c.i.t.s.server.SpringKafkaApplication : Started SpringKafkaApplication in 4.134 seconds (JVM running for 4.745)
2016-11-14 14:07:59.125 INFO 8165 --- [afka-consumer-1] o.a.k.c.c.internals.AbstractCoordinator : Discovered coordinator bto:9092 (id: 2147483647 rack: null) for group spring8.
2016-11-14 14:07:59.125 INFO 8165 --- [afka-consumer-1] o.a.k.c.c.internals.AbstractCoordinator : Discovered coordinator bto:9092 (id: 2147483647 rack: null) for group spring8.
2016-11-14 14:07:59.129 INFO 8165 --- [afka-consumer-1] o.a.k.c.c.internals.ConsumerCoordinator : Revoking previously assigned partitions [] for group spring8
2016-11-14 14:07:59.129 INFO 8165 --- [afka-consumer-1] o.s.k.l.KafkaMessageListenerContainer : partitions revoked:[]
2016-11-14 14:07:59.129 INFO 8165 --- [afka-consumer-1] o.a.k.c.c.internals.AbstractCoordinator : (Re-)joining group spring8
2016-11-14 14:07:59.338 ERROR 8165 --- [afka-consumer-1] essageListenerContainer$ListenerConsumer : Container exception
org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured session.timeout.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:600) ~[kafka-clients-0.10.0.1.jar:na]
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$OffsetCommitResponseHandler.handle(ConsumerCoordinator.java:541) ~[kafka-clients-0.10.0.1.jar:na]
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:679) ~[kafka-clients-0.10.0.1.jar:na]
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:658) ~[kafka-clients-0.10.0.1.jar:na]
at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167) ~[kafka-clients-0.10.0.1.jar:na]
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133) ~[kafka-clients-0.10.0.1.jar:na]
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107) ~[kafka-clients-0.10.0.1.jar:na]
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:426) ~[kafka-clients-0.10.0.1.jar:na]
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:278) ~[kafka-clients-0.10.0.1.jar:na]
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360) ~[kafka-clients-0.10.0.1.jar:na]
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224) ~[kafka-clients-0.10.0.1.jar:na]
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:192) ~[kafka-clients-0.10.0.1.jar:na]
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163) ~[kafka-clients-0.10.0.1.jar:na]
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:426) ~[kafka-clients-0.10.0.1.jar:na]
at org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1059) ~[kafka-clients-0.10.0.1.jar:na]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.commitIfNecessary(KafkaMessageListenerContainer.java:939) ~[spring-kafka-1.1.1.RELEASE.jar:na]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.processCommits(KafkaMessageListenerContainer.java:816) ~[spring-kafka-1.1.1.RELEASE.jar:na]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:526) ~[spring-kafka-1.1.1.RELEASE.jar:na]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_92]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_92]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_92]
Anyone familiar with this?
I'm using Kafka 0.10.1.0 and
<properties>
<java.version>1.8</java.version>
<spring-kafka.version>1.1.1.RELEASE</spring-kafka.version>
</properties>
Why have you decided that the problem is offset 0?
Your stack trace says that the time between your poll() calls is longer than session.timeout.ms:
Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured session.timeout.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.
How about adjusting them properly?
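For instance, a hedged sketch of such an adjustment on top of the consumerConfigs() above (the value is an illustrative starting point, not a recommendation; you already have max.poll.records at 1, so the session timeout is the remaining lever):

props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "30000"); // raised from 10000 to give the poll loop more headroom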

'Write-Only' over a TCP connection with Spring Integration

I am using int-ip:tcp-connection-factory and int-ip:tcp-outbound-gateway to communicate with an external server. The protocol for most of the services offered by the server follows the standard request-response style... which is working great. However, there are a few situations where I only need to send a request and no response is expected.
My question is, how do I configure my channels and connections so that I can specify whether or not to wait for a response? Currently I can't find a way and Spring always blocks after sending the request even if I am not expecting a response.
EDIT:
As suggested, I have used a tcp-outbound-channel-adapter. My config file contains only the following:
<int:channel id="requestChannel" />
<int-ip:tcp-outbound-channel-adapter
channel="requestChannel" connection-factory="client" />
<int-ip:tcp-connection-factory id="client"
type="client" host="smtp.gmail.com" port="587" single-use="false"
so-timeout="10000" />
And here's my Main class:
public final class Main {

    private static final Logger LOGGER = Logger.getLogger(Main.class);

    public static void main(final String... args) throws IOException {
        LOGGER.debug("entered main()...");
        final AbstractApplicationContext context = new ClassPathXmlApplicationContext(
                "classpath:*-context.xml");
        MessageChannel requestChannel = context.getBean("requestChannel", MessageChannel.class);
        requestChannel.send(MessageBuilder.withPayload("QUIT").build());
        LOGGER.debug("exiting main()...");
    }
}
Finally this is what I get in my log:
11:57:15.877 INFO [main][com.together.email.Main] entered main()...
11:57:16.295 INFO [main][org.springframework.integration.config.xml.DefaultConfiguringBeanFactoryPostProcessor] No bean named 'errorChannel' has been explicitly defined. Therefore, a default PublishSubscribeChannel will be created.
11:57:16.295 INFO [main][org.springframework.integration.config.xml.DefaultConfiguringBeanFactoryPostProcessor] No bean named 'taskScheduler' has been explicitly defined. Therefore, a default ThreadPoolTaskScheduler will be created.
11:57:16.480 INFO [main][org.springframework.integration.ip.tcp.connection.TcpNetClientConnectionFactory] started client
11:57:16.480 INFO [main][org.springframework.integration.endpoint.EventDrivenConsumer] Adding {ip:tcp-outbound-channel-adapter} as a subscriber to the 'requestChannel' channel
11:57:16.480 INFO [main][org.springframework.integration.channel.DirectChannel] Channel 'requestChannel' has 1 subscriber(s).
11:57:16.480 INFO [main][org.springframework.integration.endpoint.EventDrivenConsumer] started org.springframework.integration.config.ConsumerEndpointFactoryBean#0
11:57:16.480 INFO [main][org.springframework.integration.endpoint.EventDrivenConsumer] Adding {logging-channel-adapter:_org.springframework.integration.errorLogger} as a subscriber to the 'errorChannel' channel
11:57:16.481 INFO [main][org.springframework.integration.channel.PublishSubscribeChannel] Channel 'errorChannel' has 1 subscriber(s).
11:57:16.481 INFO [main][org.springframework.integration.endpoint.EventDrivenConsumer] started _org.springframework.integration.errorLogger
11:57:16.509 DEBUG [main][org.springframework.integration.channel.DirectChannel] preSend on channel 'requestChannel', message: [Payload=QUIT][Headers={timestamp=1381021036509, id=860ebe82-06c6-4393-95c7-0ece1a0a0e5d}]
11:57:16.509 DEBUG [main][org.springframework.integration.ip.tcp.TcpSendingMessageHandler] org.springframework.integration.ip.tcp.TcpSendingMessageHandler#0 received message: [Payload=QUIT][Headers={timestamp=1381021036509, id=860ebe82-06c6-4393-95c7-0ece1a0a0e5d}]
11:57:16.509 DEBUG [main][org.springframework.integration.ip.tcp.connection.TcpNetClientConnectionFactory] Opening new socket connection to smtp.gmail.com:587
11:57:16.745 DEBUG [main][org.springframework.integration.ip.tcp.connection.TcpNetConnection] New connection smtp.gmail.com:587:550c9b68-10a0-442d-b65d-d25d28df306b
11:57:16.748 DEBUG [main][org.springframework.integration.ip.tcp.TcpSendingMessageHandler] Got Connection smtp.gmail.com:587:550c9b68-10a0-442d-b65d-d25d28df306b
11:57:16.749 DEBUG [pool-1-thread-1][org.springframework.integration.ip.tcp.connection.TcpNetConnection] TcpListener exiting - no listener and not single use
11:57:16.750 DEBUG [main][org.springframework.integration.ip.tcp.connection.TcpNetConnection] Message sent [Payload=QUIT][Headers={timestamp=1381021036509, id=860ebe82-06c6-4393-95c7-0ece1a0a0e5d}]
11:57:16.750 DEBUG [main][org.springframework.integration.channel.DirectChannel] postSend (sent=true) on channel 'requestChannel', message: [Payload=QUIT][Headers={timestamp=1381021036509, id=860ebe82-06c6-4393-95c7-0ece1a0a0e5d}]
11:57:16.751 INFO [main][com.together.email.Main] exiting main()...
I have set Spring's logging to debug level to give you more details. As you can see from the last line of the log, my main() exits. Unfortunately, however, the application doesn't terminate, because [pool-1-thread-1] keeps running. This thread comes to life as soon as send is invoked on requestChannel. Any idea what's going on here?
[In this example, I am sending an SMTP QUIT message as soon as the application connects to Google. In practice, I would actually not start with a QUIT. For example, at the beginning I may start with a HELO message. I have tried hooking in a tcp-inbound-channel-adapter to get the message response and that works great. The problem is with messages where I don't expect a reply.]
So, I suggest you inject some 'fake' task executor into the <int-ip:tcp-connection-factory>:
public class NullExecutor implements Executor {
    public void execute(Runnable command) {}
}
In this case your connection from AbstractClientConnectionFactory#obtainConnection() won't be configured (run) to read from the socket.
However, System.exit(0); as the last line of your main is enough; it terminates all remaining threads, even if they aren't daemons.
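For completeness, a sketch of the same wiring in Java configuration (this assumes the setTaskExecutor setter on the connection factory, which in XML corresponds to the task-executor attribute on <int-ip:tcp-connection-factory>):

@Bean
public TcpNetClientConnectionFactory client() {
    TcpNetClientConnectionFactory factory =
            new TcpNetClientConnectionFactory("smtp.gmail.com", 587);
    factory.setSingleUse(false);
    factory.setSoTimeout(10000);
    // The no-op executor means the factory never schedules the blocking
    // socket-read task, so no lingering reader thread is created.
    factory.setTaskExecutor(new NullExecutor());
    return factory;
}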
