Spring Kafka's MessageListener still consuming after MessageListenerContainer is stopped

My goal is to create my own Camel component for Spring Kafka.
I have managed to create it and start consuming. I also want to be able to stop the component and its consumption (via JMX, from another Camel route, ...), without losing any messages.
To do that, when stopping the Camel component, I need to stop the MessageListenerContainer and, in turn, the MessageListener that is registered in it.
My problem is that after the MessageListenerContainer is stopped, the MessageListener keeps processing messages.
@Override
protected void doStart() throws Exception {
    super.doStart();
    if (kafkaMessageListenerContainer != null) {
        return;
    }
    kafkaMessageListenerContainer = kafkaListenerContainerFactory.createContainer(endpoint.getTopicName());
    kafkaMessageListenerContainer.setupMessageListener(messageListener());
    kafkaMessageListenerContainer.start();
}

@Override
protected void doStop() throws Exception {
    LOG.info("STOPPING kafkaMessageListenerContainer");
    kafkaMessageListenerContainer.stop();
    LOG.info("STOPPED kafkaMessageListenerContainer");
    super.doStop();
}

private MessageListener<Object, Object> messageListener() {
    return new MessageListener<Object, Object>() {
        @Override
        public void onMessage(ConsumerRecord<Object, Object> data) {
            LOG.info("Record received: {}", data.offset());
            // ...pass a message to Camel processing route
            LOG.info("Record processed: {}", data.offset());
        }
    };
}
This is a snippet from the log:
{"time":"2020-11-27T14:01:57.047Z","message":"Record received: 2051","logger":"com.my.lib.springboot.camel.component.kafka.KafkaAdapterConsumer","thread-id":"consumer-0-C-1","level":"INFO","tId":"c5efc5db-5981-4477-925a-83ffece49572"}
{"time":"2020-11-27T14:01:57.153Z","message":"STOPPED kafkaMessageListenerContainer","logger":"com.my.lib.springboot.camel.component.kafka.KafkaAdapterConsumer","thread-id":"Camel (camelContext) thread #2 - ShutdownTask","level":"INFO"}
{"time":"2020-11-27T14:01:57.153Z","message":"Route: testTopic.consumer shutdown complete, was consuming from: my-kafka://events.TestTopic","logger":"org.apache.camel.impl.DefaultShutdownStrategy","thread-id":"Camel (camelContext) thread #2 - ShutdownTask","level":"INFO"}
{"time":"2020-11-27T14:01:57.159Z","message":"Record processed: 2051","logger":"com.my.lib.springboot.camel.component.kafka.KafkaAdapterConsumer","thread-id":"consumer-0-C-1","level":"INFO","tId":"8c835691-ba8d-43c2-b3e0-90a2f768ed7f"}
{"time":"2020-11-27T14:01:57.165Z","message":"Record received: 2052","logger":"com.my.lib.springboot.camel.component.kafka.KafkaAdapterConsumer","thread-id":"consumer-0-C-1","level":"INFO","tId":"8c835691-ba8d-43c2-b3e0-90a2f768ed7f"}
{"time":"2020-11-27T14:01:57.275Z","message":"Record processed: 2052","logger":"com.my.lib.springboot.camel.component.kafka.KafkaAdapterConsumer","thread-id":"consumer-0-C-1","level":"INFO","tId":"f7bcebb4-9e5e-46a1-bc5b-569264914b05"}
...
I would expect the MessageListener to stop consuming once the MessageListenerContainer is gracefully stopped. I must be missing something; any suggestions?
Many thanks!

I found the issue that caused my problem.
For some reason I was overriding the consumerFactory bean, which was not correct.
@Bean
public ConsumerFactory<Object, Object> consumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringDeserializer.class);
    return new DefaultKafkaConsumerFactory<>(props);
}
After removing this bean and using the default one, which is configured in application.yml, the problem was resolved.
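For reference, the default consumer factory the answer relies on is driven by Spring Boot's spring.kafka.* properties. A minimal sketch of what that application.yml section might look like (the values here are illustrative assumptions, not the poster's actual config):

spring:
  kafka:
    bootstrap-servers: localhost:9092
    consumer:
      group-id: my-group            # illustrative group id
      enable-auto-commit: false
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer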

Related

Spring kafka producer keep on waiting for one minute in some cases for producing data and failed with exception for some cases

I have implemented Spring's KafkaTemplate for producing events in my Spring Boot project. The code for producing an event is given below.
Producer Config:
@Bean
public Map<String, Object> producerConfigs() throws FileNotFoundException {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaProperties.getBootstrapServers());
    props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, kafkaProperties.getSecurity().getProtocol());
    props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, ResourceUtils.getFile("classpath:client.truststoreks").getAbsolutePath());
    props.put(SslConfigs.SSL_ENDPOINT_IDENTIFICATION_ALGORITHM_CONFIG, StringUtils.EMPTY);
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    props.put(ProducerConfig.LINGER_MS_CONFIG, "100");
    return props;
}
Producer Service Code:
public class KafkaProducerService<V> implements KafkaProducer<V> {

    @Autowired
    KafkaTemplate<String, V> kafkaTemplate;

    @Autowired
    KafkaTemplate<String, V> transactionLogKafkaTemplate;

    public KafkaProducerService(KafkaTemplate<String, V> kafkaTemplate, KafkaTemplate<String, V> transactionLogKafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
        this.transactionLogKafkaTemplate = transactionLogKafkaTemplate;
    }

    @Override
    @Retryable({KafkaException.class, TimeoutException.class})
    public void produce(String topic, String key, V value) {
        log.info("Calling producer service for producing event on topic " + topic);
        sendCallbackEvents(kafkaTemplate, topic, key, value);
    }

    private void sendCallbackEvents(KafkaTemplate<String, V> kafkaTemplate, String topic, String key, V value) {
        ProducerRecord<String, V> producerRecord = new ProducerRecord<>(topic, key, value);
        ListenableFuture<SendResult<String, V>> future = kafkaTemplate.send(producerRecord);
        future.addCallback(new ListenableFutureCallback<SendResult<String, V>>() {
            @Override
            public void onSuccess(SendResult<String, V> result) {
                log.info(String.format("Produced event to topic %s: key = %-10s value = %s", topic, key, value));
            }

            @Override
            public void onFailure(Throwable ex) {
                log.error("Producing of data on topic {} failed", topic, ex.getCause());
            }
        });
    }
}
P.S: We are using AWS MSK as a broker for producing an event.
But in some cases it takes one minute to produce an event, failing with the below error in the logs:
ERROR LogAccessor - Exception thrown when sending a message with key='xx' and payload='Event(key=value)' to topic topicName:
The event is eventually produced thanks to the retry logic in the producer service, but because of that 1-minute delay I am facing several issues.
I tried to find the reason for the delay and failure by going through the Spring Kafka dependency classes, but had no luck.
I cannot work out why the producer is delayed and fails on the first attempt in some cases. Can anyone help me identify the reason for that and a solution to the issue?
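A note for readers hitting the same symptom: a send() that blocks for almost exactly one minute before failing is often bounded by the producer's max.block.ms setting, which defaults to 60000 ms and caps how long send() waits for metadata or buffer space (for example while the broker or TLS connection is unreachable). A hedged sketch of how these timeouts could be tightened in a producer config like the one above (the values are illustrative, not recommendations):

// Illustrative additions to producerConfigs(); tune to your environment.
props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "10000");        // how long send() may block (default 60000)
props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, "15000");  // per-request timeout (default 30000)
props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, "30000"); // total time for a record to be acked (default 120000)

Failing faster surfaces the underlying connectivity or metadata problem instead of hiding it behind the one-minute block.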

Question on Spring Kafka Listener Consumer Offset Acknowledgement

I have created the below listener container factory.
@Bean
public ConcurrentKafkaListenerContainerFactory<String, Object> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, Object> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.setAutoStartup(autoStart);
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL);
    return factory;
}
The Kafka listener is given below.
@KafkaListener(id = "${topic1}",
        topics = "${topic1}",
        groupId = "${consumer.group1}", concurrency = "1", containerFactory = "kafkaListenerContainerFactory")
public void consumeEvents1(String jsonObject, @Headers Map<String, String> header, Acknowledgment acknowledgment) {
    LOG.info("Message - {}", jsonObject);
    LOG.info(header.get(KafkaHeaders.GROUP_ID) + header.get(KafkaHeaders.RECEIVED_TOPIC) + String.valueOf(header.get(KafkaHeaders.OFFSET)));
    acknowledgment.acknowledge();
}
In the container factory, I did not set factory.setBatchListener(true). My understanding is that the above listener code is called for each message, since it is not a batch listener; that is the behavior I saw. With a batch listener, I would get a list of messages instead of them one by one.
Since the listener is not batch-based, will acknowledgment.acknowledge() behave the same for MANUAL and MANUAL_IMMEDIATE? Is that the correct understanding?
I referred to the below material.
With MANUAL, the commit is queued until the whole batch is processed; this is more efficient, but increases the possibility of getting redeliveries.
With MANUAL_IMMEDIATE, the commit occurs right away, as long as you call it on the listener thread.
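For what it's worth, even with a record listener the two modes are not always identical: if one poll returns several records, MANUAL queues the acknowledgments and commits once the poll's records have been processed, while MANUAL_IMMEDIATE commits each offset as soon as acknowledge() is called on the listener thread. The only configuration difference is the ack mode; a minimal sketch:

// MANUAL: the ack is queued; offsets are committed after the records
// from the current poll have been processed.
factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL);

// MANUAL_IMMEDIATE: the offset is committed right away, provided
// acknowledge() is called on the listener (consumer) thread.
factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);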

Spring Kafka - Display Custom Error Message with #Retry

I am currently using Spring Kafka to consume messages from a topic, together with Spring's @Retryable. Basically, I retry processing the consumed message in case of an error. But while doing so, I want to suppress the exception message logged by KafkaMessageListenerContainer and display a custom message instead. I tried adding an error handler in the ConcurrentKafkaListenerContainerFactory, but when I do so, my retry does not get invoked.
Can someone guide me on how to display a custom exception message in the @Retryable scenario as well? Below are my code snippets:
ConcurrentKafkaListenerContainerFactory Bean Config
@Bean
ConcurrentKafkaListenerContainerFactory<?, ?> concurrentKafkaListenerContainerFactory(ConcurrentKafkaListenerContainerFactoryConfigurer configurer, ConsumerFactory<Object, Object> kafkaConsumerFactory) {
    ConcurrentKafkaListenerContainerFactory<Object, Object> kafkaListenerContainerFactory =
            new ConcurrentKafkaListenerContainerFactory<>();
    configurer.configure(kafkaListenerContainerFactory, kafkaConsumerFactory);
    kafkaListenerContainerFactory.setConcurrency(1);
    // Set Error Handler
    /*kafkaListenerContainerFactory.setErrorHandler((thrownException, data) -> {
        log.info("Retries exhausted");
    });*/
    return kafkaListenerContainerFactory;
}
Kafka Consumer
@KafkaListener(
        topics = "${spring.kafka.reprocess-topic}",
        groupId = "${spring.kafka.consumer.group-id}",
        containerFactory = "concurrentKafkaListenerContainerFactory"
)
@Retryable(
        include = RestClientException.class,
        maxAttemptsExpression = "${spring.kafka.consumer.max-attempts}",
        backoff = @Backoff(delayExpression = "${spring.kafka.consumer.backoff-delay}")
)
public void onMessage(ConsumerRecord<String, String> consumerRecord) throws Exception {
    // Consume the record
    log.info("Consumed Record from topic : {} ", consumerRecord.topic());
    // process the record
    messageHandler.handleMessage(consumerRecord.value());
}
Below is the exception that I am getting:
You should not use @Retryable as well as the SeekToCurrentErrorHandler (which is now the default, since 2.5; so I presume you are using that version).
Instead, configure a custom SeekToCurrentErrorHandler with max attempts, back off, and retryable exceptions.
That error message is normal; it is logged by the container. Its logging level can be reduced from ERROR to INFO or DEBUG by setting the logLevel property on the SeekToCurrentErrorHandler. You can also add a custom recoverer to it, to log your custom message after the retries are exhausted.
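A minimal sketch of the configuration this answer describes, assuming spring-kafka 2.5.x (the back-off values and log message are illustrative):

import org.springframework.kafka.KafkaException;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

@Bean
ConcurrentKafkaListenerContainerFactory<?, ?> concurrentKafkaListenerContainerFactory(
        ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
        ConsumerFactory<Object, Object> kafkaConsumerFactory) {
    ConcurrentKafkaListenerContainerFactory<Object, Object> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    configurer.configure(factory, kafkaConsumerFactory);
    // Redeliver a failed record up to 3 more times, 1 second apart,
    // then hand it to the recoverer, which logs the custom message.
    SeekToCurrentErrorHandler errorHandler = new SeekToCurrentErrorHandler(
            (record, exception) -> log.error("Retries exhausted for offset {}: {}",
                    record.offset(), exception.getMessage()),
            new FixedBackOff(1000L, 3L));
    // Reduce the container's per-delivery logging from ERROR to INFO.
    errorHandler.setLogLevel(KafkaException.Level.INFO);
    factory.setErrorHandler(errorHandler);
    return factory;
}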
my event retry template,
@Bean(name = "eventRetryTemplate")
public RetryTemplate eventRetryTemplate() {
    RetryTemplate template = new RetryTemplate();
    ExceptionClassifierRetryPolicy retryPolicy = new ExceptionClassifierRetryPolicy();
    Map<Class<? extends Throwable>, RetryPolicy> policyMap = new HashMap<>();
    policyMap.put(NonRecoverableException.class, new NeverRetryPolicy());
    policyMap.put(RecoverableException.class, new AlwaysRetryPolicy());
    retryPolicy.setPolicyMap(policyMap);
    ExponentialBackOffPolicy backOffPolicy = new ExponentialBackOffPolicy();
    backOffPolicy.setInitialInterval(backoffInitialInterval);
    backOffPolicy.setMaxInterval(backoffMaxInterval);
    backOffPolicy.setMultiplier(backoffMultiplier);
    template.setRetryPolicy(retryPolicy);
    template.setBackOffPolicy(backOffPolicy);
    return template;
}
my kafka listener using the retry template,
@KafkaListener(
        groupId = "${kafka.consumer.group.id}",
        topics = "${kafka.consumer.topic}",
        containerFactory = "eventContainerFactory")
public void eventListener(ConsumerRecord<String, String> events, Acknowledgment acknowledgment) {
    eventRetryTemplate.execute(retryContext -> {
        retryContext.setAttribute(EVENT, "my-event");
        eventConsumer.consume(events, acknowledgment);
        return null;
    });
}
my kafka consumer properties,
private ConcurrentKafkaListenerContainerFactory<String, String> getConcurrentKafkaListenerContainerFactory(
        KafkaProperties kafkaProperties) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
    factory.getContainerProperties().setAckOnError(Boolean.TRUE);
    kafkaErrorEventHandler.setCommitRecovered(Boolean.TRUE);
    factory.setErrorHandler(kafkaErrorEventHandler);
    factory.setConcurrency(1);
    factory.setConsumerFactory(getEventConsumerFactory(kafkaProperties));
    return factory;
}
kafkaErrorEventHandler is my custom error handler, which extends SeekToCurrentErrorHandler and overrides the handle method somewhat like this:
@Override
public void handle(Exception thrownException, List<ConsumerRecord<?, ?>> records,
        Consumer<?, ?> consumer, MessageListenerContainer container) {
    log.info("Non recoverable exception. Publishing event to Database");
    super.handle(thrownException, records, consumer, container);
    ConsumerRecord<String, String> consumerRecord = (ConsumerRecord<String, String>) records.get(0);
    FailedEvent event = createFailedEvent(thrownException, consumerRecord);
    failedEventService.insertFailedEvent(event);
    log.info("Successfully Published eventId {} to Database...", event.getEventId());
}
Here failedEventService is again a custom class of mine, which puts these failed events into a queryable relational DB (I chose this to be my DLQ).
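As an aside, for readers who prefer a Kafka-native DLQ over a relational one: Spring Kafka ships a DeadLetterPublishingRecoverer that republishes exhausted records to a <topic>.DLT topic by default. A minimal sketch (the kafkaTemplate bean and the back-off values are assumptions):

// Assumes a KafkaTemplate<Object, Object> bean named kafkaTemplate exists.
DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(kafkaTemplate);
// Two retries, then the record is published to <original-topic>.DLT.
factory.setErrorHandler(new SeekToCurrentErrorHandler(recoverer, new FixedBackOff(1000L, 2L)));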

spring kafka, how to gracefully shutdown the spring boot application

I have Kafka consumers in a Spring Boot application. I have set ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG to false, and my consumers acknowledge messages manually.
Spring-Kafka: 2.2.11.RELEASE
My configuration:
@Override
public Map<String, Object> consumerConfig() {
    Map<String, Object> props = new HashMap<>();
    props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, securityProtocol);
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, heartbeatInterval);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer2.class);
    props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, maxPollIntervalMs);
    props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, sessionTimeout);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer2.class);
    props.put(ErrorHandlingDeserializer2.KEY_DESERIALIZER_CLASS, KafkaAvroDeserializer.class);
    props.put(ErrorHandlingDeserializer2.VALUE_DESERIALIZER_CLASS, KafkaAvroDeserializer.class);
    props.put(KafkaAvroDeserializerConfig.SCHEMA_REGISTRY_URL_CONFIG, schemaRegistryServers);
    return props;
}
Container factory:
ConcurrentKafkaListenerContainerFactory<K, V> kvConcurrentKafkaListenerContainerFactory =
        new ConcurrentKafkaListenerContainerFactory<>();
kvConcurrentKafkaListenerContainerFactory.setConsumerFactory(
        new DefaultKafkaConsumerFactory<>(props, getAvroKeyDeserializer(), getAvroValueDeserializer()));
kvConcurrentKafkaListenerContainerFactory.getContainerProperties().setAckOnError(false);
kvConcurrentKafkaListenerContainerFactory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
Kafka Consumer:
@KafkaListener(topics = "${topic-name}", groupId = "${group-id}", containerFactory = CONTAINER_FACTORY)
public void consume(ConsumerRecord<Key, Envelope> record, Acknowledgment acknowledgment) {
    final Envelope envelope = record.value();
    if (/* some condition */) {
        // logic
    }
    acknowledgment.acknowledge();
}
The issue is that the offset is lost if the application crashes at the if statement.
My understanding is that if acknowledgment.acknowledge() is not called and the application crashes, then on restart the same message should be processed again.
I need help to understand what I am doing wrong here.

Spring amqp delay messaging with rabbitMQ

I have been struggling to find a way to schedule/delay messages in Spring AMQP/RabbitMQ and found a solution here. But I still have a problem: Spring AMQP/RabbitMQ does not receive any message.
My source is as follows:
@Configuration
public class AmqpConfig {

    @Bean
    public ConnectionFactory connectionFactory() {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory();
        connectionFactory.setAddresses("172.16.101.14:5672");
        connectionFactory.setUsername("admin");
        connectionFactory.setPassword("admin");
        connectionFactory.setPublisherConfirms(true);
        return connectionFactory;
    }

    @Bean
    @Scope("prototype")
    public RabbitTemplate rabbitTemplate() {
        RabbitTemplate template = new RabbitTemplate(connectionFactory());
        return template;
    }

    @Bean
    CustomExchange delayExchange() {
        Map<String, Object> args = new HashMap<String, Object>();
        args.put("x-delayed-type", "direct");
        return new CustomExchange("my-exchange", "x-delayed-message", true, false, args);
    }

    @Bean
    public Queue queue() {
        return new Queue("spring-boot-queue", true);
    }

    @Bean
    Binding binding(Queue queue, Exchange delayExchange) {
        return BindingBuilder.bind(queue).to(delayExchange).with("spring-boot-queue").noargs();
    }

    @Bean
    public SimpleMessageListenerContainer messageContainer() {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory());
        container.setQueues(queue());
        container.setExposeListenerChannel(true);
        container.setMaxConcurrentConsumers(1);
        container.setConcurrentConsumers(1);
        container.setAcknowledgeMode(AcknowledgeMode.MANUAL);
        container.setMessageListener(new ChannelAwareMessageListener() {
            public void onMessage(Message message, Channel channel) throws Exception {
                byte[] body = message.getBody();
                System.err.println("receive msg : " + new String(body));
                channel.basicAck(message.getMessageProperties().getDeliveryTag(), false); // acknowledge successful consumption
            }
        });
        return container;
    }
}
@Component
public class Send implements RabbitTemplate.ConfirmCallback {

    private RabbitTemplate rabbitTemplate;

    @Autowired
    public Send(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
        this.rabbitTemplate.setConfirmCallback(this);
        rabbitTemplate.setMandatory(true);
    }

    public void sendMsg(String content) {
        CorrelationData correlationId = new CorrelationData(UUID.randomUUID().toString());
        rabbitTemplate.convertAndSend("my-exchange", "", content, new MessagePostProcessor() {
            @Override
            public Message postProcessMessage(Message message) throws AmqpException {
                message.getMessageProperties().setHeader("x-delay", 6000);
                return message;
            }
        }, correlationId);
        System.err.println("delay message send ................");
    }

    /**
     * Confirm callback
     */
    @Override
    public void confirm(CorrelationData correlationData, boolean ack, String cause) {
        System.err.println(" callback id :" + correlationData);
        if (ack) {
            System.err.println("ok");
        } else {
            System.err.println("fail:" + cause);
        }
    }
}
Could someone give me some help?
Thanks all.
Delayed messaging has nothing to do with Spring AMQP itself; it is a library that resides with your code, so the library cannot hold any message as such. There are two approaches you can try:
Old approach:
Set the TTL (time to live) header on each message, or on the queue via a policy, and introduce a dead-letter setup to handle it: once the TTL expires, messages are dead-lettered from the wait queue to the main queue, where your listener can process them.
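A minimal Spring AMQP sketch of this old approach, with hypothetical names: messages are published to a "wait" queue that has no consumers; when the per-queue TTL expires they are dead-lettered to the real queue:

// Hypothetical wait queue; nothing consumes from it directly.
@Bean
public Queue waitQueue() {
    return QueueBuilder.durable("wait-queue")
            .withArgument("x-message-ttl", 6000)                            // delay in milliseconds
            .withArgument("x-dead-letter-exchange", "")                     // default exchange
            .withArgument("x-dead-letter-routing-key", "spring-boot-queue") // delivered here after the delay
            .build();
}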
Latest approach:
RabbitMQ offers the RabbitMQ Delayed Message Plugin, with which you can achieve the same; the plugin has been available since RabbitMQ 3.5.8.
You can declare an exchange with the type x-delayed-message and then publish messages with the custom header x-delay, expressing a per-message delay in milliseconds. The message will be delivered to the respective queues after x-delay milliseconds.
Details:
To use the delayed-messaging feature, declare an exchange with the type x-delayed-message:
Map<String, Object> args = new HashMap<String, Object>();
args.put("x-delayed-type", "direct");
channel.exchangeDeclare("my-exchange", "x-delayed-message", true, false, args);
Note that we pass an extra header called x-delayed-type, more on it under the Routing section.
Once we have the exchange declared we can publish messages providing a header telling the plugin for how long to delay our messages:
byte[] messageBodyBytes = "delayed payload".getBytes("UTF-8");
Map<String, Object> headers = new HashMap<String, Object>();
headers.put("x-delay", 5000);
AMQP.BasicProperties.Builder props = new AMQP.BasicProperties.Builder().headers(headers);
channel.basicPublish("my-exchange", "", props.build(), messageBodyBytes);
byte[] messageBodyBytes2 = "more delayed payload".getBytes("UTF-8");
Map<String, Object> headers2 = new HashMap<String, Object>();
headers2.put("x-delay", 1000);
AMQP.BasicProperties.Builder props2 = new AMQP.BasicProperties.Builder().headers(headers2);
channel.basicPublish("my-exchange", "", props2.build(), messageBodyBytes2);
In the above example we publish two messages, specifying the delay time with the x-delay header. For this example, the plugin will deliver to our queues first the message with the body "more delayed payload" and then the one with the body "delayed payload".
If the x-delay header is not present, then the plugin will proceed to route the message without delay.
More here: git
