How to stop a microservice with a Spring Kafka listener when the connection to the Apache Kafka server is lost?

I am currently implementing a microservice that reads data from an Apache Kafka topic. I am using "spring-boot, version: 1.5.6.RELEASE" for the microservice and "spring-kafka, version: 1.2.2.RELEASE" for the listener in the same microservice. This is my Kafka configuration:
@Bean
public Map<String, Object> consumerConfigs() {
    return new HashMap<String, Object>() {{
        put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, servers);
        put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        put(ConsumerConfig.GROUP_ID_CONFIG, groupIdConfig);
        put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, autoOffsetResetConfig);
    }};
}

@Bean
public ConsumerFactory<String, String> consumerFactory() {
    return new DefaultKafkaConsumerFactory<>(consumerConfigs());
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    return factory;
}
I have implemented the listener via the @KafkaListener annotation:
@KafkaListener(topics = "${kafka.dataSampleTopic}")
public void receive(ConsumerRecord<String, String> payload) {
    //business logic
    latch.countDown();
}
I need to be able to shut down the microservice when the listener loses the connection to the Apache Kafka server.
When I kill the Kafka server, I get the following message in the Spring Boot log:
2017-11-01 19:58:15.721 INFO 16800 --- [ 0-C-1] o.a.k.c.c.internals.AbstractCoordinator : Marking the coordinator 192.168.0.4:9092 (id: 2145482646 rack: null) dead for group TestGroup
When I start the Kafka server, I get:
2017-11-01 20:01:37.748 INFO 16800 --- [ 0-C-1] o.a.k.c.c.internals.AbstractCoordinator : Discovered coordinator 192.168.0.4:9092 (id: 2145482646 rack: null) for group TestGroup.
So clearly the Spring Kafka listener in my microservice is able to detect when the Kafka server is up and running and when it is not. In the Confluent book Kafka: The Definitive Guide, the section "But How Do We Exit?" says that the wakeup() method needs to be called on the consumer so that a WakeupException is thrown. So I tried to capture the two events (Kafka server down and Kafka server up) with the @EventListener annotation, as described in the Spring for Apache Kafka documentation, and then call wakeup(). But the example in the documentation shows how to detect an idle consumer, which is not my case. Could someone please help me with this? Thanks in advance.

I don't know how to get a notification of the server-down condition (in my experience, the consumer goes into a tight loop within the poll()).
However, if you figure that out, you can stop the listener container(s), which will wake up the consumer and exit the tight loop...
@Autowired
private KafkaListenerEndpointRegistry registry;
...
this.registry.stop();
2017-11-01 16:29:54.290 INFO 21217 --- [ad | so47062346] o.a.k.c.c.internals.AbstractCoordinator : Marking the coordinator localhost:9092 (id: 2147483647 rack: null) dead for group so47062346
2017-11-01 16:29:54.346 WARN 21217 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : Connection to node 0 could not be established. Broker may not be available.
...
2017-11-01 16:30:00.643 WARN 21217 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : Connection to node 0 could not be established. Broker may not be available.
2017-11-01 16:30:00.680 INFO 21217 --- [ntainer#0-0-C-1] essageListenerContainer$ListenerConsumer : Consumer stopped
You can improve the tight loop by adding reconnect.backoff.ms, but the poll() never exits, so we can't emit an idle event:
spring:
  kafka:
    consumer:
      enable-auto-commit: false
      group-id: so47062346
      properties:
        reconnect.backoff.ms: 1000
I suppose you could enable idle events and use a timer to detect if you've received no data (or idle events) for some period of time, and then stop the container(s).
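For illustration, here is a minimal sketch of that idea. It assumes the container factory sets an idle event interval (container.setIdleEventInterval(...)) and that scheduling is enabled with @EnableScheduling; the bean name, the 60-second threshold, and the 10-second check period are my own choices, not anything prescribed by Spring Kafka:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.event.EventListener;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.event.ListenerContainerIdleEvent;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class BrokerWatchdog {

    @Autowired
    private KafkaListenerEndpointRegistry registry;

    // Written from the consumer thread, read from the scheduler thread.
    private volatile long lastActivity = System.currentTimeMillis();

    // An idle event means the consumer is alive and polling; when the broker
    // is down, poll() never returns, so these events stop arriving.
    @EventListener
    public void onIdle(ListenerContainerIdleEvent event) {
        lastActivity = System.currentTimeMillis();
    }

    // The @KafkaListener method could likewise reset lastActivity per record.
    @Scheduled(fixedDelay = 10_000)
    public void checkActivity() {
        if (System.currentTimeMillis() - lastActivity > 60_000) {
            registry.stop(); // wakes the consumer and stops the container(s)
        }
    }
}

From there you could close the application context (or call System.exit()) to bring the whole service down, which is what the question was after.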

Related

Spring Kafka - Batch processing not working

I have a Spring Kafka consumer and I want to consume 50 records every 60 seconds. I referred to a few documents and configured my application like this:
Consumer Configurations
@Bean
public ConsumerFactory<String, DeviceInfo> consumerFactory() {
    Map<String, Object> config = new HashMap<>();
    config.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaConfig.getConsumerBootstrapServers());
    config.put(ConsumerConfig.GROUP_ID_CONFIG, "fixit-airwatch-etl");
    config.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    config.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
    config.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, AUTO_OFFSET_RESET_CONFIG);
    config.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "50");
    config.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "60000");
    return new DefaultKafkaConsumerFactory<>(config, new StringDeserializer(),
            new JsonDeserializer<>(DeviceInfo.class));
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, DeviceInfo> kafkaListenerFactory(ConsumerFactory<String, DeviceInfo> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, DeviceInfo> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    factory.setBatchListener(true);
    factory.setBatchErrorHandler(new BatchLoggingErrorHandler());
    return factory;
}
Kafka listener
@KafkaListener(topics = "${app.kafka.topic}", groupId = "etl-group", containerFactory = "kafkaListenerFactory")
public void receive(@Payload List<DeviceInfo> messages) {
    log.info("Got these many records from the topic {}", messages.size());
}
application.properties
spring.kafka.listener.type=batch
In spite of having all these configurations, it looks like I'm not seeing the expected behavior. The log statements are as below.
2022-07-04 12:07:22.533 INFO 89732 --- [ntainer#0-0-C-1] c.w.g.g.f.consumer.KafkaConsumer : Got these many records from the topic 9
2022-07-04 12:07:22.533 INFO 89732 --- [ntainer#0-0-C-1] c.w.g.g.f.consumer.KafkaConsumer : Got these many records from the topic 4
2022-07-04 12:07:22.534 INFO 89732 --- [ntainer#0-0-C-1] c.w.g.g.f.consumer.KafkaConsumer : Got these many records from the topic 6
2022-07-04 12:07:22.534 INFO 89732 --- [ntainer#0-0-C-1] c.w.g.g.f.consumer.KafkaConsumer : Got these many records from the topic 8
2022-07-04 12:07:22.535 INFO 89732 --- [ntainer#0-0-C-1] c.w.g.g.f.consumer.KafkaConsumer : Got these many records from the topic 8
2022-07-04 12:07:22.535 INFO 89732 --- [ntainer#0-0-C-1] c.w.g.g.f.consumer.KafkaConsumer : Got these many records from the topic 6
2022-07-04 12:07:22.536 INFO 89732 --- [ntainer#0-0-C-1] c.w.g.g.f.consumer.KafkaConsumer : Got these many records from the topic 11
Even though I set the batch size to 50, it fetches a random number of records. Also, the delay between each batch is not what I configured. Did I miss anything in this? Please share your thoughts. TIA.
Your configuration looks fine, but you need to keep one thing in mind: for this consumer to work as expected, 50 records must be available in the topic every 60 seconds. Otherwise, you need to adjust the values of the following properties.
fetch.min.bytes
This property allows a consumer to specify the minimum amount of data that it wants to receive from the broker when fetching records. If a broker receives a request for records from a consumer but the new records amount to fewer bytes than fetch.min.bytes, the broker will wait until more messages are available before sending the records back to the consumer. This reduces the load on both the consumer and the broker, as they have to handle fewer back-and-forth messages in cases where the topics don't have much new activity (or during lower-activity hours of the day). You will want to set this parameter higher than the default if the consumer is using too much CPU when there isn't much data available, or to reduce the load on the brokers when you have a large number of consumers.
fetch.max.wait.ms
By setting fetch.min.bytes, you tell Kafka to wait until it has enough data to send before responding to the consumer. fetch.max.wait.ms lets you control how long to wait. By default, Kafka will wait up to 500 ms. This results in up to 500 ms of extra latency in case there is not enough data flowing to the Kafka topic to satisfy the minimum amount of data to return. If you want to limit the potential latency (usually due to SLAs controlling the maximum latency of the application), you can set fetch.max.wait.ms to a lower value. If you set fetch.max.wait.ms to 100 ms and fetch.min.bytes to 1 MB, Kafka will receive a fetch request from the consumer and will respond with data either when it has 1 MB of data to return or after 100 ms, whichever happens first.
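If you want to experiment with those two settings, here is a hedged sketch against the consumerFactory() shown in the question (the 1 MB threshold and 10-second wait are illustrative values only, and fetch.max.wait.ms should generally stay below the consumer's request.timeout.ms):

// Ask the broker to hold each fetch until ~1 MB of records has accumulated...
config.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 1024 * 1024);
// ...or until 10 seconds have elapsed, whichever comes first.
config.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 10000);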

Spring kafka application with multiple consumer groups stops consuming messages

Kafka version: 2.3.1
Spring Boot version: 2.2.5.RELEASE
I have a Spring Boot Kafka application with 3 consumer groups. It stops consuming messages because of a failing heartbeat. I tried updating the consumer configuration as suggested by multiple Stack Overflow threads, but even after that I am facing the issue.
With this configuration, the logs show consumers taking less than one second per message, right up to the point where consumption suddenly stops. Also, some of the processing in the consumer happens in an asynchronous thread.
Below is the configuration for one of the consumer factories. I allowed a buffer of 10 seconds per record and configured MAX_POLL_INTERVAL_MS_CONFIG based on that.
@Bean
public ConsumerFactory<Object, Object> reqConsumerFactory()
{
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "req-event-group");
    props.put(ConsumerConfig.CLIENT_ID_CONFIG, "req-event-group");
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, brokerConfig.getBootstrapAddress());
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    props.put(ConsumerConfig.RETRY_BACKOFF_MS_CONFIG, 1000);
    props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 50*15*1000);
    props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 5000);
    props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 50*10*1000);
    props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 50);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    return new DefaultKafkaConsumerFactory<>(props);
}

@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<Object, Object>> reqKafkaListenerContainerFactory()
{
    ConcurrentKafkaListenerContainerFactory<Object, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(reqConsumerFactory());
    factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
    factory.setErrorHandler(new SeekToCurrentErrorHandler(2));
    factory.setConcurrency(2);
    return factory;
}
One of the consumer methods:
@KafkaListener(topicPattern = "${process.update.requirement.topic.name}", containerFactory = "reqKafkaListenerContainerFactory", groupId = "req-event-group")
public void handleCompleteAndErrorRequirement(ConsumerRecord<String, Object> consumerRecord, Acknowledgment acknowledgment)
{
    RequirementEventMsg requirementEventMsg = (RequirementEventMsg)consumerRecord;
    acknowledgment.acknowledge();
    //asynchronous method call here
}
I don't see any error other than this:
2020-06-23 13:28:06.815 [INFO ] AbstractCoordinator:855 - [Consumer clientId=consumer-6, groupId=process-consumer-group] Attempt to heartbeat failed since group is rebalancing
2020-06-23 13:28:06.835 [INFO ] AbstractCoordinator:855 - [Consumer clientId=consumer-4, groupId=process-consumer-group] Attempt to heartbeat failed since group is rebalancing
2020-06-23 13:28:07.175 [INFO ] ConsumerCoordinator:472 - [Consumer clientId=consumer-4, groupId=process-consumer-group] Revoking previously assigned partitions [UPDATE_REQUIREMENT_TOPIC-1, UPDATE_REQUIREMENT_TOPIC-0]
2020-06-23 13:28:07.176 [INFO ] KafkaMessageListenerContainer:394 - partitions revoked: [UPDATE_REQUIREMENT_TOPIC-1, UPDATE_REQUIREMENT_TOPIC-0]
2020-06-23 13:28:07.177 [INFO ] AbstractCoordinator:509 - [Consumer clientId=consumer-4, groupId=process-consumer-group] (Re-)joining group
2020-06-23 13:28:07.233 [INFO ] ConsumerCoordinator:472 - [Consumer clientId=consumer-6, groupId=process-consumer-group] Revoking previously assigned partitions [PROCESS_EVENT_TOPIC-0, PROCESS_EVENT_TOPIC-1]
2020-06-23 13:28:07.233 [INFO ] KafkaMessageListenerContainer:394 - partitions revoked: [PROCESS_EVENT_TOPIC-0, PROCESS_EVENT_TOPIC-1]

Spring Boot Kafka Startup error "Connection to node -1 could not be established. Broker may not be available."

I am trying to start Spring Kafka with Spring Boot 2.1.7.RELEASE on localhost with Java 12. I am getting the error:
"org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-1, groupId=inter] Connection to node -1 could not be established. Broker may not be available."
I tried switching the Java version to 11 and 8 and various properties:
spring:
  kafka:
    consumer:
      #bootstrap-servers: localhost:9092
      group-id: inter
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    producer:
      bootstrap-servers: localhost:9092
@Service
public class KafkaHalloWorldMessagingService {

    private KafkaTemplate<String, String> kafkaTemplate;

    @Autowired
    public KafkaHalloWorldMessagingService(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void sendHalloToTheSystem(String messageToSend) {
        kafkaTemplate.send("interlinked.hallo.topic", messageToSend);
    }
}

@Component
public class KafkaHalloWorldListener {

    @KafkaListener(topics = "interlinked.hallo.topics", groupId = "inter")
    public void handle(String messageToListenTo) {
        System.out.println(messageToListenTo.toUpperCase());
    }
}
2019-08-22 16:25:20.580 WARN 5865 --- [ restartedMain] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-1, groupId=inter] Connection to node -1 could not be established. Broker may not be available.
Make sure the bootstrap server value in the yml file and the listener in the Kafka server.properties file are the same.
Update these two values in the server.properties file; it can be found in the config folder of the Kafka download directory.
zookeeper.connect=your IPv4 address:2181
listeners=PLAINTEXT://your IPv4 address:9092
e.g. zookeeper.connect=10.147.2.161:2181
Also, why is the consumer's bootstrap server property commented out? Please use the producer's bootstrap server value for the consumer too:
spring.kafka.bootstrap-servers = your IPv4 address:9092
Or split them:
producer:
  bootstrap-servers: your IPv4 address:9092
consumer:
  bootstrap-servers: your IPv4 address:9092
I hope your ZooKeeper and Kafka are up.

spring cloud stream - consumer group bound

My consumer is bound to an anonymous consumer group instead of the consumer group I specified.
spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: localhost
          defaultBrokerPort: 9092
          zkNodes: localhost
          defaultZkPort: 2181
        bindings:
          inEvent:
            group: eventin
            destination: event
          outEvent:
            group: eventout
            destination: processevent
My Spring Boot application:
@SpringBootApplication
@EnableBinding(EventStream.class)
public class ConsumerApplication {

    public static void main(String[] args) {
        SpringApplication.run(ConsumerApplication.class, args);
    }

    @StreamListener(value = "inEvent")
    public void getEvent(Event event) {
        System.out.println(event.name);
    }
}
My input/output channel interface:
public interface EventStream {

    @Input("inEvent")
    SubscribableChannel inEvent();

    @Output("outEvent")
    MessageChannel outEvent();
}
My console log:
: Started ConsumerApplication in 3.233 seconds (JVM running for 4.004)
: [Consumer clientId=consumer-3, groupId=anonymous.0d0c87d6-ef39-4bfe-b475-4491c40caf6d] Discovered group coordinator singh:9092 (id: 2147483647 rack: null)
: [Consumer clientId=consumer-3, groupId=anonymous.0d0c87d6-ef39-4bfe-b475-4491c40caf6d] Revoking previously assigned partitions []
: partitions revoked: []
: [Consumer clientId=consumer-3, groupId=anonymous.0d0c87d6-ef39-4bfe-b475-4491c40caf6d] (Re-)joining group
: [Consumer clientId=consumer-3, groupId=anonymous.0d0c87d6-ef39-4bfe-b475-4491c40caf6d] Successfully joined group with generation 1
: [Consumer clientId=consumer-3, groupId=anonymous.0d0c87d6-ef39-4bfe-b475-4491c40caf6d] Setting newly assigned partitions [inEvent-0]
: [Consumer clientId=consumer-3, groupId=anonymous.0d0c87d6-ef39-4bfe-b475-4491c40caf6d] Resetting offset for partition inEvent-0 to offset 2.
: partitions assigned: [inEvent-0]
The group property must not be in the kafka tree.
It has to be like this:
spring:
  cloud:
    stream:
      bindings:
        inEvent:
          group: eventin
          destination: event
See more info in the Docs: http://cloud.spring.io/spring-cloud-static/spring-cloud-stream/2.1.1.RELEASE/single/spring-cloud-stream.html#consumer-groups
The group is a common property, so it is the same regardless of the binder implementation. The kafka tree is for Apache Kafka-specific properties, exposed at the binder implementation level.
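For contrast, a Kafka-specific consumer property would live under the kafka tree, for example (autoCommitOffset is a documented Kafka binder property; the value here is just for illustration):

spring:
  cloud:
    stream:
      kafka:
        bindings:
          inEvent:
            consumer:
              autoCommitOffset: false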

Not able to shut down the JMS listener (which posts messages to Kafka) in a Spring Boot application with Runtime.exit, context.close, System.exit()

I am developing a Spring Boot application which listens to IBM MQ with:
@JmsListener(id="abc", destination="${queueName}", containerFactory="defaultJmsListenerContainerFactory")
I have a JmsListenerEndpointRegistry which starts the listener container.
On message, it tries to push the same message, after some business logic, to Kafka. The posting code is:
kafkaTemplate.send(kafkaProp.getTopic(), uniqueId, message)
Now, in case the Kafka producer fails, I want my Boot application to be terminated, so I have added a custom error handler via setErrorHandler.
I have tried `System.exit(1)`, `configurableApplicationContextObject.close()`, and `Runtime.getRuntime().exit(1)`, but none of them work. Below is the log that gets generated after System.exit(0) or the others:
2018-05-24 12:12:47.981 INFO 18904 --- [ Thread-4] s.c.a.AnnotationConfigApplicationContext : Closing org.springframework.context.annotation.AnnotationConfigApplicationContext#1d08376: startup date [Thu May 24 12:10:35 IST 2018]; root of context hierarchy
2018-05-24 12:12:48.027 INFO 18904 --- [ Thread-4] o.s.c.support.DefaultLifecycleProcessor : Stopping beans in phase 2147483647
2018-05-24 12:12:48.028 INFO 18904 --- [ Thread-4] o.s.c.support.DefaultLifecycleProcessor : Stopping beans in phase 0
2018-05-24 12:12:48.028 INFO 18904 --- [ Thread-4] o.s.j.e.a.AnnotationMBeanExporter : Unregistering JMX-exposed beans on shutdown
2018-05-24 12:12:48.028 INFO 18904 --- [ Thread-4] o.a.k.clients.producer.KafkaProducer : Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.
2018-05-24 12:12:48.044 INFO 18904 --- [ Thread-4] o.a.k.clients.producer.KafkaProducer : Closing the Kafka producer with timeoutMillis = 30000 ms.
But the application is still running, and below are the running threads:
Daemon Thread [Tomcat JDBC Pool Cleaner[14341596:1527144039908]] (Running)
Thread [DefaultMessageListenerContainer-1] (Running)
Thread [DestroyJavaVM] (Running)
Daemon Thread [JMSCCThreadPoolMaster] (Running)
Daemon Thread [RcvThread: com.ibm.mq.jmqi.remote.impl.RemoteTCPConnection#12474910[qmid=*******,fap=**,channel=****,ccsid=***,sharecnv=***,hbint=*****,peer=*******,localport=****,ssl=****]] (Running)
Thread [Thread-4] (Running)
The help is much appreciated. Thanks in advance. I simply want the application to exit.
Below is the thread dump before I call System.exit(1):
"DefaultMessageListenerContainer-1"
java.lang.Thread.State: RUNNABLE
at sun.management.ThreadImpl.getThreadInfo1(Native Method)
at sun.management.ThreadImpl.getThreadInfo(ThreadImpl.java:174)
at com.QueueErrorHandler.handleError(QueueErrorHandler.java:42)
at org.springframework.jms.listener.AbstractMessageListenerContainer.invokeErrorHandler(AbstractMessageListenerContainer.java:931)
at org.springframework.jms.listener.AbstractMessageListenerContainer.handleListenerException(AbstractMessageListenerContainer.java:902)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.doReceiveAndExecute(AbstractPollingMessageListenerContainer.java:326)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveAndExecute(AbstractPollingMessageListenerContainer.java:235)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.invokeListener(DefaultMessageListenerContainer.java:1166)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.executeOngoingLoop(DefaultMessageListenerContainer.java:1158)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run(DefaultMessageListenerContainer.java:1055)
at java.lang.Thread.run(Thread.java:745)
You should take a thread dump to see what Thread [DefaultMessageListenerContainer-1] (Running) is doing.
Now in case a kafka producer fails
What kind of failure? If the broker is down, the thread will block in the producer library for up to 60 seconds by default.
You can reduce that time by setting the max.block.ms producer property.
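For example, a sketch assuming a properties map like the one used to build the producer factory (the props name and the 5-second value are illustrative, not from the question):

// Fail sends after 5 s instead of blocking for the default 60 s
// when the broker is unreachable.
props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 5000);

That way the error handler gets control quickly and can trigger the shutdown.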
A couple of solutions worked for me to solve the above.
Solution 1: Get all threads in the error handler, interrupt them all, and then exit the system.
// ThreadMXBean only exposes ThreadInfo snapshots, not Thread handles,
// so use Thread.getAllStackTraces() to reach the actual threads.
for (Thread thread : Thread.getAllStackTraces().keySet()) {
    if (thread != Thread.currentThread()) {
        thread.interrupt();
    }
}
System.exit(1);
Solution 2: Define an application context manager, like:
public class AppContextManager implements ApplicationContextAware {

    private static ApplicationContext _appCtx;

    @Override
    public void setApplicationContext(ApplicationContext ctx) {
        _appCtx = ctx;
    }

    public static ApplicationContext getAppContext() {
        return _appCtx;
    }

    public static void exit(Integer exitCode) {
        System.exit(SpringApplication.exit(_appCtx, () -> exitCode));
    }
}
Then use the same manager to exit in the error handler:
Executors.newSingleThreadExecutor().execute(new Runnable() {
    public void run() {
        jmsListenerEndpointRegistry.stop();
        AppContextManager.exit(-1);
    }
});
