Kafka consumer error handling offset reset - spring-boot

I am consuming events from Kafka in a Spring Boot 2.4 application. The Kafka client version is 2.3. There are two consumers consuming the events. I want to put the events back in Kafka in case of any error. I do NOT want to put the failed event in a dead letter queue. I am using ConsumerAwareListenerErrorHandler.
@Override
public Object handleError(Message<?> message, ListenerExecutionFailedException exception, Consumer<?, ?> consumer) {
    ConsumerRecord<?, ?> record = (ConsumerRecord<?, ?>) message.getPayload();
    // consumer.seek(new TopicPartition(record.topic(), record.partition()), record.offset());
    Collection<TopicPartition> partitions = Arrays.asList(new TopicPartition(record.topic(), record.partition()));
    consumer.seekToBeginning(partitions);
    return null;
}
Now what I want is: if I stop this consumer, the same failed event should be consumed by the other running consumer. Kindly help.
Thanks

That won't work because any other records fetched by the previous poll() will still be processed; use a SeekToCurrentErrorHandler instead.
https://docs.spring.io/spring-kafka/docs/2.5.5.RELEASE/reference/html/#seek-to-current
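A minimal sketch of wiring that in, assuming a typical container factory bean (the bean name, the generic types and the FixedBackOff of two immediate retries are illustrative, not from the question):

import org.springframework.kafka.listener.SeekToCurrentErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // Seeks the failed record (and the later records from the same poll) so they are redelivered;
    // FixedBackOff(0L, 2L) retries immediately, at most twice, before giving up
    factory.setErrorHandler(new SeekToCurrentErrorHandler(new FixedBackOff(0L, 2L)));
    return factory;
}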

Related

Spring JMS DefaultMessageListenerContainer Polling frequency

I am using the DefaultMessageListenerContainer for consuming messages from an ActiveMQ queue, as below. With this implementation, is there any polling mechanism: does the listener poll the queue to see if there is a new message every 1 second or so, or does the onMessage method get invoked whenever there is a new message in the queue? If it uses polling, how can we increase or decrease the polling frequency (time)?
DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
container.setMessageListener(new MessageJmsListener());

public class MessageJmsListener implements MessageListener {
    @Override
    public void onMessage(Message message) {
        if (message instanceof TextMessage) {
            try {
                // process the message and create a record in the database
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
    }
}
The container polls the JMS client, but the broker pushes messages to the client.
So, no, the container does not poll the queue directly.
If there are no messages in the queue, the container will time out after receiveTimeout, immediately re-poll, and get the next message as soon as the broker sends it.
The prefetch determines how many messages the broker sends to the consumer, so that might impact performance (but it's 1000 by default, I think, with recent ActiveMQ versions).
Setting the prefetch to 1 will give you the slowest delivery rate.
If you want to slow things down, you can add a Thread.sleep() in your listener.
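A rough sketch of the two knobs mentioned above, assuming an ActiveMQ broker at tcp://localhost:61616 and a queue name that are both made up for illustration:

// Prefetch is an ActiveMQ setting; queuePrefetch=1 asks the broker to push only one message at a time
ActiveMQConnectionFactory connectionFactory =
        new ActiveMQConnectionFactory("tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=1");

DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
container.setConnectionFactory(connectionFactory);
container.setDestinationName("my.queue");        // hypothetical queue name
container.setReceiveTimeout(5000);               // how long each internal receive() blocks before re-polling
container.setMessageListener(new MessageJmsListener());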

When will Kafka retry processing the messages that have not been acknowledged?

I have a consumer which is configured with manual ack mode:
@Bean
public ConcurrentKafkaListenerContainerFactory<String, MessageAvro> kafkaListenerContainerFactory() {
    final ConcurrentKafkaListenerContainerFactory<String, MessageAvro> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL);
    factory.setConsumerFactory(consumerFactory());
    return factory;
}
And a consumer with a @KafkaListener method which does some work, like:
@KafkaListener(
        topics = "${tpd.topic-name}",
        containerFactory = "kafkaListenerContainerFactory",
        groupId = "${tpd.group-id}")
public void messageListener(final ConsumerRecord<String, MessageAvro> msg, @Payload final MessageAvro message, final Acknowledgment ack) {
    if (someCondition) {
        // do something
        ack.acknowledge();
    } else {
        // do not acknowledge the message here in order to retry it later
    }
}
In the case where the condition is false and we move on to the "else" part, when will my consumer try to read the unacknowledged message again?
And in case it doesn't, how do I tell my @KafkaListener to take the unacknowledged messages into account?
As soon as you commit (or "acknowledge") an offset, all previous offsets are also committed, in the sense that the consumer group will not try to read them again.
That means: if you hit the "else" branch and your job keeps running until it eventually hits the "if" branch with the acknowledgment, all offsets up to that point are committed.
The reason behind this is that a Kafka consumer reports back to the brokers which offset to read next. To achieve this, Kafka stores that information in an internal topic called __consumer_offsets as a key/value pair, where
key: ConsumerGroup, Topic name, Partition
value: next offset to read
That internal topic is a compacted topic, which means it will eventually store only the latest value for a given key. As a consequence, Kafka does not track the "un-acknowledged" messages in between.
Workaround
What people usually do is fork those "un-acknowledged" messages into another topic so they can be inspected and consumed together at a later point in time. That way you will not block your actual application from consuming further messages, and you can deal with the un-acknowledged messages separately.
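A minimal sketch of that workaround inside the listener above, assuming an autowired KafkaTemplate<String, MessageAvro> field named kafkaTemplate and a made-up parking topic "my-topic.unacked":

@KafkaListener(
        topics = "${tpd.topic-name}",
        containerFactory = "kafkaListenerContainerFactory",
        groupId = "${tpd.group-id}")
public void messageListener(final ConsumerRecord<String, MessageAvro> msg, final Acknowledgment ack) {
    if (someCondition) {
        // do something
        ack.acknowledge();
    } else {
        // fork the record to a separate topic for later inspection,
        // then acknowledge so the main topic keeps moving
        kafkaTemplate.send("my-topic.unacked", msg.key(), msg.value());
        ack.acknowledge();
    }
}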

Spring-Cloud-Streams Kafka - How to stop the consumers

I have a Spring-Cloud-Streams client reading from a Kafka topic consisting of several partitions. The client calls a web service for every Kafka message it reads. If the web service is unavailable after a few retries, I want to stop the consumer from reading from Kafka. Referring to a previous Stack Overflow question (Spring cloud stream kafka pause/resume binders), I autowired BindingsEndpoint and call the changeState() method to try to stop the consumer, but the logs show the consumer continuing to read messages from Kafka after changeState() is invoked.
I am using Spring Boot version 2.1.2.RELEASE with Spring Cloud version Greenwich.RELEASE. The managed version for spring-cloud-stream-binder-kafka is 2.1.0.RELEASE. I have set the properties autoCommitOffset=true and autoCommitOnError=false.
Below is a snippet of my code. Is there something I have missed? Is the first input parameter to changeState() supposed to be the topic name?
If I want the consumer application to exit when the webservice is not available, can I simply do System.exit() without needing to stop the consumer first?
@Autowired
private BindingsEndpoint bindingsEndpoint;
...
...
@StreamListener(MyInterface.INPUT)
public void read(@Payload MyDTO dto,
                 @Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
                 @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition,
                 @Header(KafkaHeaders.CONSUMER) Consumer<?, ?> consumer) {
    try {
        logger.info("Processing message " + dto);
        process(dto); // this is the method that calls the webservice
    } catch (Exception e) {
        if (e instanceof IllegalStateException || e instanceof ConnectException) {
            bindingsEndpoint.changeState("my.topic.name",
                    BindingsEndpoint.State.STOPPED);
            // Binding<?> b = bindingsEndpoint.queryState("my.topic.name"); ==> Using topic name returns a valid Binding object
        }
        e.printStackTrace();
        throw (e);
    }
}
You can do so by utilising the binding visualization and control feature, which lets you visualize as well as stop/start/pause/resume bindings.
Also, you are aware that System.exit() will shut down the entire JVM?
Had the same issue; the first input parameter to changeState() should be the binding name, not the topic name. It worked for me.
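Applied to the code above, that would look roughly like this (the binding name "input" is an assumption; use whatever key appears under spring.cloud.stream.bindings.* in your configuration):

// Pass the binding name, not the Kafka topic name, to changeState()
bindingsEndpoint.changeState("input", BindingsEndpoint.State.STOPPED);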

WebSphere MQ Messages Disappear From Queue

I figured I would toss a question on here in case anyone has ideas. My MQ admin created a new queue and an alias queue for me to write messages to. I have one application writing to the queue, and another application listening on the alias queue. I am using Spring's JmsTemplate to write to my queue. We are seeing a behavior where the message is written to the queue but then instantly discarded. We disabled gets, and to see if an expiry parameter was being set somehow, I used the JmsTemplate to set the expiry (timeToLive). I set the expiry to 10 minutes but my message still disappears instantly. A snippet of my code and settings is below.
public void publish(ModifyRequestType response) {
    jmsTemplate.setExplicitQosEnabled(true);
    jmsTemplate.setTimeToLive(600000);
    jmsTemplate.send(CM_QUEUE_NAME, new MessageCreator() {
        public Message createMessage(Session session) throws JMSException {
            String responseXML = null;
            try {
                responseXML = myJAXBContext.getInstance().toXML(response);
                log.info(responseXML);
                TextMessage message = session.createTextMessage(responseXML);
                return message;
            } catch (myException e) {
                e.printStackTrace();
                log.info(responseXML);
                return null;
            }
        }
    });
}
/////////////////My settings
QUEUE.PUB_SUB_DOMAIN=false
QUEUE.SUBSCRIPTION_DURABLE=false
QUEUE.CLONE_SUPPORT=0
QUEUE.SHARE_CONV_ALLOWED=1
QUEUE.MQ_PROVIDER_VERSION=6
I found my issue. I had a parent method with the @Transactional annotation. I do not want my new JMS message to be part of that transaction, so I am going to add jmsTemplate.setSessionTransacted(false) before performing jmsTemplate.send. I have created a separate JmsTemplate for sending my new message instead of reusing the existing one, which needs to be managed.
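A minimal sketch of such a separate, non-transacted template (the bean name and constructor wiring are assumptions):

@Bean
public JmsTemplate nonTransactedJmsTemplate(ConnectionFactory connectionFactory) {
    JmsTemplate template = new JmsTemplate(connectionFactory);
    // do not let sends participate in an outer @Transactional transaction
    template.setSessionTransacted(false);
    template.setExplicitQosEnabled(true);
    template.setTimeToLive(600000);
    return template;
}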

Spring Boot microservice @StreamListener retries an unlimited number of times when it throws a RuntimeException

I have a @StreamListener method that performs a REST call. When the REST call returns an exception, the @StreamListener method throws a RuntimeException and retries. The @StreamListener method retries an unlimited number of times when it throws a RuntimeException.
Spring Cloud Stream Retry configuration:
spring.cloud.stream.kafka.bindings.inputChannel.consumer.enableDlq=true
spring.cloud.stream.bindings.inputChannel.consumer.maxAttempts=3
spring.cloud.stream.bindings.inputChannel.consumer.concurrency=3
spring.cloud.stream.bindings.inputChannel.consumer.backOffInitialInterval=300000
spring.cloud.stream.bindings.inputChannel.consumer.backOffMaxInterval=600000
SpringBoot microservice dependencies version:
Spring Boot 2.0.3
Spring Cloud Stream Elmhurst.RELEASE
Kafka broker 1.1.0
Using a RetryTemplate or increasing the maxAttempts property has the restriction that retries must complete within max.poll.interval.ms; otherwise the Kafka broker will think the consumer is down and reassign the partition to another consumer (if one is available).
The other option is to make the listener re-read the same message from Kafka using the consumer.seek method.
@StreamListener("events")
public void handleEvent(@Payload String eventString, @Header(KafkaHeaders.CONSUMER) Consumer<?, ?> consumer,
                        @Header(KafkaHeaders.RECEIVED_PARTITION_ID) String partitionId,
                        @Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
                        @Header(KafkaHeaders.OFFSET) String offset) {
    try {
        // do the logic (example: REST call)
    } catch (Exception e) { // catch only specific exceptions that can be retried
        consumer.seek(new TopicPartition(topic, Integer.parseInt(partitionId)), Long.parseLong(offset));
    }
}
You can certainly increase the number of attempts (the maxAttempts property) to something like Integer.MAX_VALUE, or you can provide your own RetryTemplate bean, configured however you wish.
Here is where you can get more info https://docs.spring.io/spring-cloud-stream/docs/current/reference/htmlsingle/#_retry_template
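A minimal sketch of such a RetryTemplate bean, assuming spring-retry is on the classpath; the attempt count and back-off values are illustrative, and how the binder picks the bean up depends on the Spring Cloud Stream version:

@Bean
public RetryTemplate myRetryTemplate() {
    RetryTemplate retryTemplate = new RetryTemplate();

    // retry at most 5 times
    retryTemplate.setRetryPolicy(new SimpleRetryPolicy(5));

    // wait 1s after the first failure, backing off exponentially up to 10s between attempts
    ExponentialBackOffPolicy backOffPolicy = new ExponentialBackOffPolicy();
    backOffPolicy.setInitialInterval(1000);
    backOffPolicy.setMaxInterval(10000);
    retryTemplate.setBackOffPolicy(backOffPolicy);

    return retryTemplate;
}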
After some trial and error, we found out that the Kafka configuration max.poll.interval.ms defaults to 5 minutes. Due to our consumer retry mechanism, our whole retry process can take 15 minutes in the worst case.
So 5 minutes after the first message is consumed, the broker decides the consumer is unresponsive, triggers a rebalance, and assigns the partition (and the same message) to another consumer.
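If the in-listener retry has to stay, one option is to raise max.poll.interval.ms above the worst-case retry duration via the binder's Kafka configuration map; the 20-minute value below is illustrative:

# allow up to 20 minutes between polls so the 15-minute worst-case retry does not trigger a rebalance
spring.cloud.stream.kafka.binder.configuration.max.poll.interval.ms=1200000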
