Infinite retries with SeekToCurrentErrorHandler in Kafka consumer - Spring Boot

I've configured a Kafka consumer with SeekToCurrentErrorHandler in a Spring Boot application using spring-kafka. My consumer configuration is:
@Bean
public ConsumerFactory<String, String> consumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafkaserver");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "group-id");
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer2.class);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer2.class);
    props.put(ErrorHandlingDeserializer2.KEY_DESERIALIZER_CLASS, StringDeserializer.class);
    props.put(ErrorHandlingDeserializer2.VALUE_DESERIALIZER_CLASS, StringDeserializer.class.getName());
    props.put(JsonDeserializer.KEY_DEFAULT_TYPE, "java.lang.String");
    props.put(JsonDeserializer.VALUE_DEFAULT_TYPE, "java.lang.String");
    return new DefaultKafkaConsumerFactory<>(props);
}
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    SeekToCurrentErrorHandler seekToCurrentErrorHandler = new SeekToCurrentErrorHandler(5);
    seekToCurrentErrorHandler.setCommitRecovered(true);
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.getContainerProperties().setAckOnError(false);
    factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
    factory.setErrorHandler(seekToCurrentErrorHandler);
    return factory;
}
To test the SeekToCurrentErrorHandler config, I pushed a record to Kafka in an incorrect format so that it fails with a deserialization exception. As per my understanding, the error handler should try to handle the failed record 5 times and after that it should log it and move on to the next record.
But it keeps reading the failed record an infinite number of times.
Please tell me where I am going wrong.

I have exactly the same problem, and the only fix I have found is to make sure the concurrency level is the same as the number of partitions for the topic. Otherwise it will keep retrying infinitely.
Sounds like a bug in Spring Kafka.
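If you want to try that workaround, here is a minimal sketch of the container factory from the question with the concurrency pinned to the partition count (the value 3 is only a placeholder; use your topic's actual partition count):

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    SeekToCurrentErrorHandler errorHandler = new SeekToCurrentErrorHandler(5);
    errorHandler.setCommitRecovered(true);
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    // Match the container concurrency to the number of partitions on the topic
    // (3 is an example value) so every partition gets its own consumer thread.
    factory.setConcurrency(3);
    factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
    factory.setErrorHandler(errorHandler);
    return factory;
}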

Related

Spring Kafka Manual Ack and retry happens after exception: unexpected behaviour

I have a Kafka consumer annotated with @KafkaListener. Within the consumer I need to acknowledge the Kafka message first and then execute a REST API call. While executing the REST API I get an exception, which is expected behaviour, but consumer retries happen even though I acknowledge the message before the REST call.
@KafkaListener(topics = "${spring.kafka.topic.name}", groupId = "${consumer.topicGroupId}")
public void listenEvent(ConsumerRecord<String, Event> consumerRecord, Acknowledgment acknowledgment) throws IOException {
    acknowledgment.acknowledge();
    patchService.patch(arguments...);
}
PatchService code:
public class PatchService {
    public String patch(arguments....) {
        try {
            ResponseEntity<String> response = restTemplate.exchange(uri, HttpMethod.PATCH, request, String.class);
            return response.getBody();
        } catch (HttpClientErrorException ex) {
            log.error("Error updating API having error response {}", ex.getResponseBodyAsString());
            throw new APIException(ex.getStatusCode(), ex.getResponseBodyAsString());
        }
    }
}
Consumer configuration code:
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() throws FileNotFoundException {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<String, String>();
    factory.setConsumerFactory(consumerFactory());
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
    return factory;
}
With this property set to false in consumerFactory():
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
What am I doing wrong here? I don't want retries when an exception comes from the REST API; I just want to log that message, which I am doing.
Catch the exception and don't throw it.
Kafka maintains two pointers for each group/partition - the current position and the committed offset.
The default error handler resets the current position so the failed record will be redelivered, regardless of the committed offset.
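A minimal sketch of that suggestion, reusing the listener from the question (Event, patchService, and the elided arguments are placeholders from the original post):

@KafkaListener(topics = "${spring.kafka.topic.name}", groupId = "${consumer.topicGroupId}")
public void listenEvent(ConsumerRecord<String, Event> consumerRecord, Acknowledgment acknowledgment) {
    acknowledgment.acknowledge();
    try {
        patchService.patch(arguments...);
    } catch (APIException ex) {
        // Log and swallow the exception so the container's error handler never
        // sees it and therefore never re-seeks the failed record for a retry.
        log.error("Patch failed for record at offset {}", consumerRecord.offset(), ex);
    }
}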

Consumer Class Listener method not getting triggered to receive messages from topic. Kafka with Spring Boot App

I'm using Kafka with Spring Boot. I use REST controllers to call the producer/consumer APIs. The producer class is able to add messages to the topic; I verified this using the command-line utility (console-consumer.sh). However, my consumer class is not able to receive them in Java for further processing.
The @KafkaListener on the consumer class's listener method should receive messages when my producer class posts messages to the topic, which is not happening. Any help appreciated.
Is it still necessary for the consumer to subscribe and poll for records when I have already created a KafkaListenerContainerFactory that is responsible for invoking the consumer listener method when a message is posted to the topic?
Consumer Class
@Component
public class KafkaListenersExample {
    private final List<KafkaPayload> messages = new ArrayList<>();
    @KafkaListener(topics = "test_topic", containerFactory = "kafkaListenerContainerFactory")
    public void listener(KafkaPayload data) {
        synchronized (messages) {
            messages.add(data);
        }
        //System.out.println("message from kafka :"+data);
    }
    public List<KafkaPayload> getMessages() {
        return messages;
    }
}
Consumer Config
@Configuration
class KafkaConsumerConfig {
    private String bootstrapServers = "localhost:9092";
    @Bean
    public ConsumerFactory<String, KafkaPayload> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
        return new DefaultKafkaConsumerFactory<>(props);
    }
    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, KafkaPayload> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, KafkaPayload> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerConfigs());
        return factory;
    }
}
The listener container creates the consumer, subscribes, and takes care of the polling.
Turning on DEBUG logging should help determine what's wrong.
If the records are already in the topic, you need to set ConsumerConfig.AUTO_OFFSET_RESET_CONFIG to earliest. Otherwise, the consumer starts consuming from the end of the topic (latest).
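For example, in the consumerConfigs() bean above you would add one extra property (a minimal sketch; nothing else changes):

props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");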

Kafka Fails to Process all the messages - Java Spring Boot

I have a Spring Boot application (Spring Boot version 2.2.2.RELEASE) where I have configured a Kafka consumer that processes data from Kafka and serves it to multiple web sockets. The subscription to Kafka is successful, but not all messages from the selected Kafka topic are processed by the consumer. A few messages are delayed and a few are missed entirely, even though the producer is definitely sending out the data. Below I have shared the configuration properties that I have used.
@Bean
public ConsumerFactory<String, String> consumerFactory() {
    final String BOOTSTRAP_SERVERS = kafkaBootstrapServer;
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_SERVERS);
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, consumerGroupId);
    return new DefaultKafkaConsumerFactory<>(props);
}
Is there any configuration I am missing?
For a new consumer (one that has never committed offsets for the group.id), you must set AUTO_OFFSET_RESET to earliest to avoid missing any existing records in the topic (the default is latest).
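Applied to the consumerFactory() above, that is a single additional property (a sketch, nothing else changes):

props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");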

How can we access multiple JMS queues using a single consumer in Java

I have a requirement to access multiple JMS queues and perform the desired operations based on the event we receive. This is being done in a Spring Boot project. Could anyone please help?
You can configure a different @JmsListener for each queue in Spring Boot, and each one will receive messages from the respective queue you have configured.
@JmsListener(destination = "${abcQueueName}", containerFactory = "abcQueueListenerFactory")
public void receiveQuery(@Payload Test test,
                         @Headers MessageHeaders headers,
                         Message message,
                         Session sessionQuery) {
}

@Bean(name = "abcQueueListenerFactory")
public JmsListenerContainerFactory<?> testQueueListenerFactory(ConnectionFactory connectionFactory, DefaultJmsListenerContainerFactoryConfigurer configurer) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setPubSubDomain(false);
    factory.setSessionTransacted(true);
    factory.setConcurrency(concurrency + "-" + maxConcurrency);
    factory.setReceiveTimeout(Long.valueOf(receiveTimeout));
    factory.setConnectionFactory(connectionFactory);
    factory.setMessageConverter(jsonMessageConverter);
    factory.setSessionAcknowledgeMode(Session.AUTO_ACKNOWLEDGE);
    configurer.configure(factory, connectionFactory);
    return factory;
}
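To consume from a second queue, a sketch would be another @JmsListener pointing at a different destination (xyzQueueName is a hypothetical property name; the container factory can be shared, or you can declare a second one the same way as above):

@JmsListener(destination = "${xyzQueueName}", containerFactory = "abcQueueListenerFactory")
public void receiveOther(@Payload Test test, @Headers MessageHeaders headers, Message message, Session session) {
    // Handle events arriving on the second queue here.
}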

Consuming Kafka Messages in Spring

By following this tutorial, I was able to create a simple producer-consumer example. In my example there was only one topic and I was listening to that topic. For that reason, the code in ReceiverConfig makes sense, especially the part around GROUP_ID_CONFIG, i.e., I create the topic topic_name and then it is configured in this configuration. Now my question is: what if I have more than one topic? Let's say I have topic_1, topic_2 and so on. Should I create a ReceiverConfig for each individual topic?
@EnableKafka
@Configuration
public class ReceiverConfig {
    @Value("${spring.kafka.bootstrap-servers}")
    private String bootstrapServers;
    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(GROUP_ID_CONFIG, "topic_name");
        props.put(AUTO_OFFSET_RESET_CONFIG, "earliest");
        return props;
    }
    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs());
    }
    @Bean
    public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        return factory;
    }
}
The short answer is no, you do not need to create a separate configuration for each topic.
Before going any further, it is worth pointing out that the groupId (the group to which the consumer process belongs) and the topics consumed by that process are two different things.
With the line below you only tell the consumer that it belongs to the topic_name group, nothing more.
props.put(GROUP_ID_CONFIG, "topic_name");
If you want a consumer to read data from multiple topics, there is a subscribe method which takes a Collection as a parameter; that way you specify all the topics to read from without having to create a new configuration for each topic.
Please check this example out; you will see the method I mentioned:
// Subscribe to the topic.
consumer.subscribe(Collections.singletonList(TOPIC));
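In the Spring setup from the question, the equivalent is simply to list several topics on a single @KafkaListener; here is a sketch (topic_1 and topic_2 are the hypothetical topic names from the question):

@KafkaListener(topics = {"topic_1", "topic_2"}, groupId = "topic_name")
public void listen(String message) {
    // Records from both topics arrive here, consumed by the same group.
    System.out.println("Received: " + message);
}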
