Problem reading kafka headers in a RecordInterceptor after upgrading to spring-kafka 2.8 - spring-boot

I have an application using Spring Boot and Spring Kafka that gets the delivery attempt header in a record interceptor so that I can include it in log messages. It had been working well until I upgraded to Spring Boot 2.6.3 and Spring Kafka 2.8.2 (from 2.5.5/2.7.7).
Now when I try to read the delivery attempt header it is not available. If I do the exact same thing within a message listener it works fine, so the header is clearly there.
This is what a simplified record interceptor and the listener container factory look like:
@Bean
public RecordInterceptor<Object, Object> recordInterceptor() {
    return record -> {
        int delivery = ByteBuffer.wrap(record.headers().lastHeader(KafkaHeaders.DELIVERY_ATTEMPT).value()).getInt();
        log.info("delivery " + delivery);
        return record;
    };
}
@Bean
public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
        ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
        ConsumerFactory<Object, Object> kafkaConsumerFactory) {
    ConcurrentKafkaListenerContainerFactory<Object, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
    configurer.configure(factory, kafkaConsumerFactory);
    factory.getContainerProperties().setDeliveryAttemptHeader(true);
    factory.setRecordInterceptor(recordInterceptor());
    return factory;
}
I can't see anything in the spring docs suggesting the behaviour should have changed. Any ideas?

This is a bug; I opened an issue. https://github.com/spring-projects/spring-kafka/issues/2082
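Until a fix is available, a defensive variant of the interceptor at least avoids a NullPointerException while the header is missing (a minimal sketch based on the code in the question; Header is org.apache.kafka.common.header.Header, and log is assumed as in the question):
@Bean
public RecordInterceptor<Object, Object> recordInterceptor() {
    return record -> {
        // lastHeader() returns null when the header is absent, so guard before reading it
        Header header = record.headers().lastHeader(KafkaHeaders.DELIVERY_ATTEMPT);
        if (header != null) {
            log.info("delivery " + ByteBuffer.wrap(header.value()).getInt());
        }
        return record;
    };
}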

Related

spring-kafka: DefaultErrorHandler with DeadLetterPublishingRecoverer(BiFunction) not considered. No DL topic created

In my Spring Boot application using spring-kafka, I am trying to configure an error handler to do two things:
Retry message consumption failures a certain number of times (FixedBackOff) before publishing to a dead letter topic
Create a dead letter topic with a name of my choice
Using
// Version highlights
id 'org.springframework.boot' version '2.7.2'
...
implementation 'org.springframework.kafka:spring-kafka' // 2.8.8
Here is the code I am using based on what I read in Spring docs and reiterated in several articles online:
@Bean
public DefaultErrorHandler byteArrayDefaultErrorHandler(KafkaTemplate<String, byte[]> template) {
    var recoverer =
            new DeadLetterPublishingRecoverer(
                    template,
                    (record, e) -> new TopicPartition("%s.deadLetter".formatted(record.topic()), 0)
            );
    return new DefaultErrorHandler(recoverer, new FixedBackOff(3000, 3));
}
But the above bean is not considered/used. So, when consumption encounters a failure (currently I simulate the failure by throwing an exception),
the FixedBackOff is not applied; instead, the default of 10 back-to-back attempts is used.
No DL topic is created.
Currently, the consumer config class has minimal stuff:
@Bean public ConsumerFactory<String, byte[]> byteArrayConsumerFactory() { ... }
@Bean public ConcurrentKafkaListenerContainerFactory<String, byte[]> byteArrayListenerContainerFactory() { ... }
@Bean public DefaultErrorHandler byteArrayDefaultErrorHandler(KafkaTemplate<String, byte[]> template) { ...code pasted above... }
And the listener is as follows:
@KafkaListener(
        topics = "${app.config.kafka.topic}",
        containerFactory = "byteArrayListenerContainerFactory"
)
public void consumeMessage(ConsumerRecord<String, byte[]> record) { ... }
I am at a loss figuring out what I have missed or whether I have added something that conflicts with the wiring. Help figuring this out is highly appreciated.
The error handler bean will only be wired in by Boot if you are using Boot's auto-configured container factory.
Since you are creating your own container factory bean...
@Bean public ConcurrentKafkaListenerContainerFactory<String, byte[]> byteArrayListenerContainerFactory() { ... }
...you must add the error handler yourself - see setCommonErrorHandler().
Also, the framework does not automatically provision the dead letter topic; add a @Bean NewTopic dlt() { ... }.
https://docs.spring.io/spring-kafka/docs/current/reference/html/#configuring-topics
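For example, a minimal sketch of the missing wiring, using the bean names from the question (the dead letter topic name and the partition/replica counts are assumptions; the name must match what the resolver above produces, i.e. "<topic>.deadLetter"):
@Bean
public ConcurrentKafkaListenerContainerFactory<String, byte[]> byteArrayListenerContainerFactory(
        ConsumerFactory<String, byte[]> byteArrayConsumerFactory,
        DefaultErrorHandler byteArrayDefaultErrorHandler) {
    ConcurrentKafkaListenerContainerFactory<String, byte[]> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(byteArrayConsumerFactory);
    // Without this call, the custom DefaultErrorHandler bean is never used.
    factory.setCommonErrorHandler(byteArrayDefaultErrorHandler);
    return factory;
}

// The recoverer publishes to the resolved topic but does not create it; declare it yourself.
@Bean
public NewTopic deadLetterTopic() {
    return TopicBuilder.name("my-topic.deadLetter").partitions(1).replicas(1).build();
}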

How can we access multiple JMS queues using a single consumer in Java

I have a requirement of accessing multiple JMS queues and performing the desired operations based on the event we receive. This is being done in a Spring Boot project. Could anyone please help?
You can configure a different @JmsListener for each queue in Spring Boot, and each one will receive messages from the queue it is configured for.
@JmsListener(destination = "${abcQueueName}", containerFactory = "abcQueueListenerFactory")
public void receiveQuery(@Payload Test test,
        @Headers MessageHeaders headers,
        Message message,
        Session sessionQuery) {
}

@Bean(name = "abcQueueListenerFactory")
public JmsListenerContainerFactory<?> testQueueListenerFactory(ConnectionFactory connectionFactory, DefaultJmsListenerContainerFactoryConfigurer configurer) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    // Apply Boot's defaults first so they do not overwrite the custom settings below.
    configurer.configure(factory, connectionFactory);
    factory.setPubSubDomain(false);
    factory.setSessionTransacted(true);
    // concurrency, maxConcurrency, receiveTimeout and jsonMessageConverter are fields
    // assumed to be injected elsewhere (e.g. via @Value or the constructor).
    factory.setConcurrency(concurrency + "-" + maxConcurrency);
    factory.setReceiveTimeout(Long.valueOf(receiveTimeout));
    factory.setConnectionFactory(connectionFactory);
    factory.setMessageConverter(jsonMessageConverter);
    factory.setSessionAcknowledgeMode(Session.AUTO_ACKNOWLEDGE);
    return factory;
}
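Listening to a second queue is then just another @JmsListener method pointing at its own destination (a minimal sketch; the ${xyzQueueName} property is an assumption, and the same container factory is reused):
@JmsListener(destination = "${xyzQueueName}", containerFactory = "abcQueueListenerFactory")
public void receiveOther(@Payload Test test) {
    // handle events arriving on the second queue
}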

Filter messages before deserialization based on headers

Sometimes messages can be filtered out before deserialization, based on header values. Are there any existing patterns for this scenario using Spring Kafka? I am thinking of implementing something similar to the ErrorHandlingDeserializer where, in addition to the delegate, it also takes a filter predicate as a property. Any suggestions? Thanks.
Yes, you can use the same technique used by the ErrorHandlingDeserializer: return a "marker" object instead of doing the deserialization, then add a RecordFilterStrategy that filters out records containing such objects to the listener (on the container factory when using @KafkaListener, or via a filtering adapter for an explicit listener).
EDIT
Spring Boot and adding a filter...
@Bean
public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
        ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
        ConsumerFactory<Object, Object> kafkaConsumerFactory) {
    ConcurrentKafkaListenerContainerFactory<Object, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
    configurer.configure(factory, kafkaConsumerFactory);
    factory.setRecordFilterStrategy(myFilter());
    return factory;
}
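For completeness, here is a minimal sketch of the marker technique itself. The class name FilteringDeserializer, the SKIP marker, and the programmatic wiring of the delegate are all assumptions, not framework API:
import java.util.function.Predicate;
import org.apache.kafka.common.header.Headers;
import org.apache.kafka.common.serialization.Deserializer;

public class FilteringDeserializer implements Deserializer<Object> {

    // Marker returned instead of a deserialized value when the record should be dropped.
    public static final Object SKIP = new Object();

    private final Deserializer<Object> delegate;

    private final Predicate<Headers> skip;

    public FilteringDeserializer(Deserializer<Object> delegate, Predicate<Headers> skip) {
        this.delegate = delegate;
        this.skip = skip;
    }

    @Override
    public Object deserialize(String topic, byte[] data) {
        return this.delegate.deserialize(topic, data);
    }

    @Override
    public Object deserialize(String topic, Headers headers, byte[] data) {
        // Skip the (possibly expensive) delegate call entirely when the predicate matches.
        return this.skip.test(headers) ? SKIP : this.delegate.deserialize(topic, headers, data);
    }
}
myFilter() in the factory above would then return a strategy that drops the markers:
factory.setRecordFilterStrategy(rec -> rec.value() == FilteringDeserializer.SKIP);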

Spring Boot JMS MarshallingMessageConverter in replies

I have a JMS listener similar to the one described here:
Synchronous Message send and receive using JMS template and Spring Boot
It receives an XML payload. But unlike that one, I also want to return a reply. If I return a String it works OK, but, as with the message payload, I want to return a JAXB object.
However, if I return a Message, Spring tries to convert the object using the SimpleMessageConverter.
How can I configure the MarshallingMessageConverter to be used when converting the reply payload?
I had to configure the DefaultJmsListenerContainerFactory's message converter.
What is weird is that I already had a MarshallingMessageConverter there for the DefaultMessageHandlerMethodFactory and JmsMessagingTemplate, but that one is from another Java package (org.springframework.messaging.converter.MarshallingMessageConverter).
@Bean
public JmsListenerContainerFactory<?> jmsListenerFactory(ConnectionFactory connectionFactory,
        DefaultJmsListenerContainerFactoryConfigurer configurer) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    // Apply Boot's defaults first, then override the converter.
    configurer.configure(factory, connectionFactory);
    factory.setErrorHandler(errorHandler());
    factory.setMessageConverter(jmsMarshallingMessageConverter()); // !!!!
    return factory;
}
@Bean
public org.springframework.jms.support.converter.MarshallingMessageConverter jmsMarshallingMessageConverter() {
    Jaxb2Marshaller marshaller = marshaller();
    org.springframework.jms.support.converter.MarshallingMessageConverter converter =
            new org.springframework.jms.support.converter.MarshallingMessageConverter();
    converter.setMarshaller(marshaller);
    converter.setUnmarshaller(marshaller);
    converter.setTargetType(MessageType.TEXT);
    return converter;
}
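The marshaller() method is not shown above; a minimal sketch of such a bean (the scanned package name is a placeholder):
@Bean
public Jaxb2Marshaller marshaller() {
    Jaxb2Marshaller marshaller = new Jaxb2Marshaller();
    // Scan this package for JAXB-annotated classes; the name is hypothetical.
    marshaller.setPackagesToScan("com.example.messages");
    return marshaller;
}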

Spring Integration - kafka Outbound adapter not taking topic value exposed as spring bean

I have successfully integrated the Kafka outbound channel adapter with a fixed topic name. Now I want to make the topic name configurable and hence want to expose it via application properties.
application.properties contains the following entry:
kafkaTopic:testNewTopic
My configuration class looks like below:
@Configuration
public class KafkaConfig {

    @Value("${kafkaTopic}")
    private String kafkaTopicName;

    @Bean
    public String getTopic() {
        return kafkaTopicName;
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // this.brokerAddress
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        // set more properties
        return new DefaultKafkaProducerFactory<>(props);
    }
}
and in my si-config.xml I have used the following (e.g. topic="getTopic"):
<int-kafka:outbound-channel-adapter
id="kafkaOutboundChannelAdapter" kafka-template="kafkaTemplate"
auto-startup="true" sync="true" channel="inputToKafka" topic="getTopic">
</int-kafka:outbound-channel-adapter>
However, the configuration is unable to pick up the topic name when it is exposed via a bean. It works fine when I hard-code the value of the topic name.
Can someone please suggest what I am doing wrong here?
Does the topic attribute within the Kafka outbound channel adapter accept a value referred to as a bean?
How do I externalize it, given that every application using my utility will supply a different Kafka topic name?
The topic attribute expects a plain string value.
However, it supports property placeholder resolution:
topic="${kafkaTopic}"
and also SpEL evaluation referencing the aforementioned bean:
topic="#{getTopic}"
simply because that is allowed by the XML parser configuration.
Note, however, that the KafkaTemplate you inject into the <int-kafka:outbound-channel-adapter> has a defaultTopic property, so you would not need to worry about the topic in the XML at all.
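For example (a minimal sketch reusing the kafkaTemplate() and kafkaTopicName shown in the question):
@Bean
public KafkaTemplate<String, String> kafkaTemplate() {
    KafkaTemplate<String, String> template = new KafkaTemplate<>(producerFactory());
    // Used whenever no explicit topic is provided to the send operation.
    template.setDefaultTopic(this.kafkaTopicName);
    return template;
}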
And one more option available to you is Spring Integration annotation configuration, where you can define a @ServiceActivator for the KafkaProducerMessageHandler @Bean:
@ServiceActivator(inputChannel = "inputToKafka")
@Bean
public KafkaProducerMessageHandler<String, String> kafkaOutboundChannelAdapter() {
    KafkaProducerMessageHandler<String, String> handler = new KafkaProducerMessageHandler<>(kafkaTemplate());
    handler.setSync(true);
    handler.setTopicExpression(new LiteralExpression(this.kafkaTopicName));
    return handler;
}
