Spring Boot Kafka Configure DefaultErrorHandler?

I created a batch-consumer following the Spring Kafka docs:
@SpringBootApplication
public class ApplicationConsumer {

    private static final Logger LOGGER = LoggerFactory.getLogger(ApplicationConsumer.class);
    private static final String TOPIC = "foo";

    public static void main(String[] args) {
        ConfigurableApplicationContext context = SpringApplication.run(ApplicationConsumer.class, args);
    }

    @Bean
    public RecordMessageConverter converter() {
        return new JsonMessageConverter();
    }

    @Bean
    public BatchMessagingMessageConverter batchConverter() {
        return new BatchMessagingMessageConverter(converter());
    }

    @KafkaListener(topics = TOPIC)
    public void listen(List<Name> ps) {
        LOGGER.info("received name beans: {}", Arrays.toString(ps.toArray()));
    }
}
I was able to successfully get the consumer running by defining the following additional configuration environment variables, which Spring automatically picks up:
export SPRING_KAFKA_BOOTSTRAP_SERVERS=...
export SPRING_KAFKA_CONSUMER_GROUP_ID=...
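(With Boot's relaxed binding these map to the spring.kafka.* properties; the equivalent application.properties entries, values elided as above, would be:)
spring.kafka.bootstrap-servers=...
spring.kafka.consumer.group-id=...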
So the above code works. But now I want to customize the default error handler to use exponential backoff. From the ref docs I tried adding the following to the ApplicationConsumer class:
@Bean
public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setCommonErrorHandler(new DefaultErrorHandler(new ExponentialBackOffWithMaxRetries(10)));
    factory.setConsumerFactory(consumerFactory());
    return factory;
}

@Bean
public ConsumerFactory<String, Object> consumerFactory() {
    return new DefaultKafkaConsumerFactory<>(consumerConfigs());
}

@Bean
public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    return props;
}
But now I get errors saying that it can't find some of the configuration. It looks like I'm stuck having to redefine all of the properties in consumerConfigs() that were previously defined automatically, everything from the bootstrap server URIs to the JSON-deserialization config.
Is there a good way to update my first version of the code to just override the default error handler?

Just define the error handler as a @Bean and Boot will automatically wire it into its auto-configured container factory.
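For the exponential backoff asked about above, a minimal sketch of such a bean (interval values are illustrative; assumes spring-kafka 2.8+, where DefaultErrorHandler implements CommonErrorHandler):
@Bean
public DefaultErrorHandler errorHandler() {
    // Retry failed records up to 10 times, backing off exponentially between attempts
    ExponentialBackOffWithMaxRetries backOff = new ExponentialBackOffWithMaxRetries(10);
    backOff.setInitialInterval(1_000L);
    backOff.setMultiplier(2.0);
    backOff.setMaxInterval(10_000L);
    return new DefaultErrorHandler(backOff);
}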
EDIT
This works as expected for me:
@SpringBootApplication
public class So70884203Application {

    public static void main(String[] args) {
        SpringApplication.run(So70884203Application.class, args);
    }

    @Bean
    DefaultErrorHandler eh() {
        return new DefaultErrorHandler((rec, ex) -> {
            System.out.println("Recovered: " + rec);
        }, new FixedBackOff(0L, 0L));
    }

    @KafkaListener(id = "so70884203", topics = "so70884203")
    void listen(String in) {
        System.out.println(in);
        throw new RuntimeException("test");
    }

    @Bean
    NewTopic topic() {
        return TopicBuilder.name("so70884203").partitions(1).replicas(1).build();
    }
}
foo
Recovered: ConsumerRecord(topic = so70884203, partition = 0, leaderEpoch = 0, offset = 0, CreateTime = 1643316625291, serialized key size = -1, serialized value size = 3, headers = RecordHeaders(headers = [], isReadOnly = false), key = null, value = foo)

Related

Kafka Streams programmatically configuration not working

I'm building a Spring Boot application and trying to configure Kafka programmatically, but for some reason it is still getting the properties from application.yaml instead of the ones I set programmatically:
@Configuration
public class KafkaConfiguration {

    @Bean
    public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(kafkaConsumerFactory());
        factory.setConcurrency(1);
        factory.getContainerProperties().setPollTimeout(30000);
        return factory;
    }

    public ConsumerFactory<String, String> kafkaConsumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "aaa"); // should crash since it is not valid
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "app1");
        return new DefaultKafkaConsumerFactory<>(props);
    }
}
@Component
public class StreamListener {

    @StreamListener(TestStreams.TEST_STREAM_IN)
    public void testStream(@Payload GenericCustomEvent response, @Headers MessageHeaders headers) throws Exception {
        log.debug("Received generic event {} with headers {}", response, headers);
    }
}

public interface TestStreams {
    String TEST_STREAM_IN = "test-stream-in";

    @Input(TEST_STREAM_IN)
    SubscribableChannel inputTestStream();
}

@EnableBinding({TestStreams.class})
@SpringBootApplication
public class KafkaApplication {
    public static void main(String[] args) {
        SpringApplication.run(KafkaApplication.class, args);
    }
}
The binder doesn't use a KafkaListenerContainerFactory; it creates the container itself from the yaml.
You can modify the container(s) by adding a ListenerContainerCustomizer bean.
Example here: Can I apply graceful shutdown when using Spring Cloud Stream Kafka 3.0.3.RELEASE?
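A minimal sketch of such a customizer bean (the generic container type shown is an assumption; check it against your binder version):
@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer<byte[], byte[]>> containerCustomizer() {
    // Called for each bound destination with the container the binder created from the yaml
    return (container, destinationName, group) ->
            container.getContainerProperties().setPollTimeout(30000);
}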

Loading a custom ApplicationContextInitializer in AWS Lambda Spring boot

How to load a custom ApplicationContextInitializer in a Spring Boot AWS Lambda?
I have an AWS Lambda application using Spring Boot, and I would like to write an ApplicationContextInitializer for decrypting database passwords. I have the following code that works while running it as a Spring Boot application locally, but when I deploy it to the AWS console as a Lambda it doesn't work.
Here is my code
1. application.properties
spring.datasource.url=url
spring.datasource.username=testuser
CIPHER.spring.datasource.password=encryptedpassword
The following code is the ApplicationContextInitializer, assuming the password is Base64 encoded for testing only (in the actual case it will be encrypted by AWS KMS). The idea here is: if the key starts with 'CIPHER.' (as in CIPHER.spring.datasource.password), I assume its value needs to be decrypted, and another key-value pair with the actual key (here spring.datasource.password) and its decrypted value is added at context initialization,
so it will be like spring.datasource.password=decrypted password.
@Component
public class DecryptedPropertyContextInitializer
        implements ApplicationContextInitializer<ConfigurableApplicationContext> {

    private static final String CIPHER = "CIPHER.";

    @Override
    public void initialize(ConfigurableApplicationContext applicationContext) {
        ConfigurableEnvironment environment = applicationContext.getEnvironment();
        for (PropertySource<?> propertySource : environment.getPropertySources()) {
            Map<String, Object> propertyOverrides = new LinkedHashMap<>();
            decodePasswords(propertySource, propertyOverrides);
            if (!propertyOverrides.isEmpty()) {
                PropertySource<?> decodedProperties = new MapPropertySource("decoded " + propertySource.getName(), propertyOverrides);
                environment.getPropertySources().addBefore(propertySource.getName(), decodedProperties);
            }
        }
    }

    private void decodePasswords(PropertySource<?> source, Map<String, Object> propertyOverrides) {
        if (source instanceof EnumerablePropertySource) {
            EnumerablePropertySource<?> enumerablePropertySource = (EnumerablePropertySource<?>) source;
            for (String key : enumerablePropertySource.getPropertyNames()) {
                Object rawValue = source.getProperty(key);
                if (rawValue instanceof String && key.startsWith(CIPHER)) {
                    String cipherRemovedKey = key.substring(CIPHER.length());
                    String decodedValue = decode((String) rawValue);
                    propertyOverrides.put(cipherRemovedKey, decodedValue);
                }
            }
        }
    }

    public String decode(String encodedString) {
        byte[] valueDecoded = org.apache.commons.codec.binary.Base64.decodeBase64(encodedString);
        return new String(valueDecoded);
    }
}
Here is the Spring Boot initializer:
@SpringBootApplication
@ComponentScan(basePackages = "com.amazonaws.serverless.sample.springboot.controller")
public class Application extends SpringBootServletInitializer {

    @Bean
    public HandlerMapping handlerMapping() {
        return new RequestMappingHandlerMapping();
    }

    @Bean
    public HandlerAdapter handlerAdapter() {
        return new RequestMappingHandlerAdapter();
    }

    @Bean
    public HandlerExceptionResolver handlerExceptionResolver() {
        return new HandlerExceptionResolver() {
            @Override
            public ModelAndView resolveException(HttpServletRequest request, HttpServletResponse response, Object handler, Exception ex) {
                return null;
            }
        };
    }

    // loading the initializer here
    public static void main(String[] args) {
        SpringApplication application = new SpringApplication(Application.class);
        application.addInitializers(new DecryptedPropertyContextInitializer());
        application.run(args);
    }
}
This works when run as a Spring Boot application, but when it is deployed as a Lambda into AWS, the main() method in my SpringBootServletInitializer is never called by Lambda. Here is my Lambda handler.
public class StreamLambdaHandler implements RequestStreamHandler {

    private static Logger LOGGER = LoggerFactory.getLogger(StreamLambdaHandler.class);
    private static SpringBootLambdaContainerHandler<AwsProxyRequest, AwsProxyResponse> handler;

    static {
        try {
            handler = SpringBootLambdaContainerHandler.getAwsProxyHandler(Application.class);
            handler.onStartup(servletContext -> {
                FilterRegistration.Dynamic registration = servletContext.addFilter("CognitoIdentityFilter", CognitoIdentityFilter.class);
                registration.addMappingForUrlPatterns(EnumSet.of(DispatcherType.REQUEST), true, "/*");
            });
        } catch (ContainerInitializationException e) {
            e.printStackTrace();
            throw new RuntimeException("Could not initialize Spring Boot application", e);
        }
    }

    @Override
    public void handleRequest(InputStream inputStream, OutputStream outputStream, Context context)
            throws IOException {
        handler.proxyStream(inputStream, outputStream, context);
        outputStream.close();
    }
}
What change needs to be made so that Lambda loads the ApplicationContextInitializer? Any help will be highly appreciated.
I was able to nail it in the following way.
First, change the property value to a placeholder with a prefix, where the prefix marks values that need to be decrypted, e.g.
spring.datasource.password=${MY_PREFIX_placeHolder}
The AWS Lambda environment variable name should match the placeholder
('MY_PREFIX_placeHolder') and its value is encrypted using AWS KMS (this sample uses Base64 decoding).
Then create an ApplicationContextInitializer which will decrypt the property value:
public class DecryptedPropertyContextInitializer
        implements ApplicationContextInitializer<ConfigurableApplicationContext> {

    private static final String CIPHER = "MY_PREFIX_";

    @Override
    public void initialize(ConfigurableApplicationContext applicationContext) {
        ConfigurableEnvironment environment = applicationContext.getEnvironment();
        for (PropertySource<?> propertySource : environment.getPropertySources()) {
            Map<String, Object> propertyOverrides = new LinkedHashMap<>();
            decodePasswords(propertySource, propertyOverrides);
            if (!propertyOverrides.isEmpty()) {
                PropertySource<?> decodedProperties = new MapPropertySource("decoded " + propertySource.getName(), propertyOverrides);
                environment.getPropertySources().addBefore(propertySource.getName(), decodedProperties);
            }
        }
    }

    private void decodePasswords(PropertySource<?> source, Map<String, Object> propertyOverrides) {
        if (source instanceof EnumerablePropertySource) {
            EnumerablePropertySource<?> enumerablePropertySource = (EnumerablePropertySource<?>) source;
            for (String key : enumerablePropertySource.getPropertyNames()) {
                Object rawValue = source.getProperty(key);
                if (rawValue instanceof String && key.startsWith(CIPHER)) {
                    String decodedValue = decode((String) rawValue);
                    propertyOverrides.put(key, decodedValue);
                }
            }
        }
    }

    public String decode(String encodedString) {
        byte[] valueDecoded = org.apache.commons.codec.binary.Base64.decodeBase64(encodedString);
        return new String(valueDecoded);
    }
}
The above code will decrypt all the values with prefix MY_PREFIX_ and add them at the top of the property source.
Since the Spring Boot app is deployed into AWS Lambda, Lambda will not invoke the main() function, so an ApplicationContextInitializer registered in main() is not going to work. To make it work, you need to override the createSpringApplicationBuilder() method of SpringBootServletInitializer, so the SpringBootServletInitializer will look like:
@SpringBootApplication
@ComponentScan(basePackages = "com.amazonaws.serverless.sample.springboot.controller")
public class Application extends SpringBootServletInitializer {

    @Bean
    public HandlerMapping handlerMapping() {
        return new RequestMappingHandlerMapping();
    }

    @Bean
    public HandlerAdapter handlerAdapter() {
        return new RequestMappingHandlerAdapter();
    }

    @Bean
    public HandlerExceptionResolver handlerExceptionResolver() {
        return new HandlerExceptionResolver() {
            @Override
            public ModelAndView resolveException(HttpServletRequest request, HttpServletResponse response, Object handler, Exception ex) {
                return null;
            }
        };
    }

    @Override
    protected SpringApplicationBuilder createSpringApplicationBuilder() {
        SpringApplicationBuilder builder = new SpringApplicationBuilder();
        builder.initializers(new DecryptedPropertyContextInitializer());
        return builder;
    }

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
No changes need to be made to the Lambda handler.

Spring Kafka and exactly once delivery guarantee

I use Spring Kafka and Spring Boot and am just wondering how to configure my consumer, for example:
@KafkaListener(topics = "${kafka.topic.post.send}", containerFactory = "postKafkaListenerContainerFactory")
public void sendPost(ConsumerRecord<String, Post> consumerRecord, Acknowledgment ack) {
    // do some logic
    ack.acknowledge();
}
to use the exactly-once delivery guarantee?
Should I just add the org.springframework.transaction.annotation.Transactional annotation on the sendPost method, or do I need to perform some extra steps to achieve this?
UPDATED
This is my current config
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(KafkaProperties kafkaProperties, KafkaTransactionManager<Object, Object> transactionManager) {
    kafkaProperties.getProperties().put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, kafkaConsumerMaxPollIntervalMs);
    kafkaProperties.getProperties().put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, kafkaConsumerMaxPollRecords);
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    //factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
    factory.getContainerProperties().setTransactionManager(transactionManager);
    factory.setConsumerFactory(consumerFactory(kafkaProperties));
    return factory;
}

@Bean
public Map<String, Object> producerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 15000000);
    return props;
}

@Bean
public ProducerFactory<String, Post> postProducerFactory() {
    return new DefaultKafkaProducerFactory<>(producerConfigs());
}

@Bean
public KafkaTemplate<String, Post> postKafkaTemplate() {
    return new KafkaTemplate<>(postProducerFactory());
}

@Bean
public ProducerFactory<String, Update> updateProducerFactory() {
    return new DefaultKafkaProducerFactory<>(producerConfigs());
}

@Bean
public KafkaTemplate<String, Update> updateKafkaTemplate() {
    return new KafkaTemplate<>(updateProducerFactory());
}

@Bean
public ProducerFactory<String, Message> messageProducerFactory() {
    return new DefaultKafkaProducerFactory<>(producerConfigs());
}

@Bean
public KafkaTemplate<String, Message> messageKafkaTemplate() {
    return new KafkaTemplate<>(messageProducerFactory());
}
but it fails with the following error:
***************************
APPLICATION FAILED TO START
***************************
Description:
Parameter 0 of method kafkaTransactionManager in org.springframework.boot.autoconfigure.kafka.KafkaAutoConfiguration required a single bean, but 3 were found:
- postProducerFactory: defined by method 'postProducerFactory' in class path resource [com/example/domain/configuration/messaging/KafkaProducerConfig.class]
- updateProducerFactory: defined by method 'updateProducerFactory' in class path resource [com/example/domain/configuration/messaging/KafkaProducerConfig.class]
- messageProducerFactory: defined by method 'messageProducerFactory' in class path resource [com/example/domain/configuration/messaging/KafkaProducerConfig.class]
What am I doing wrong?
You should not use manual acknowledgments. Instead, inject a KafkaTransactionManager into the listener container, and the container will send the offset to the transaction when the listener method exits normally (or roll it back otherwise).
You should not do acks via the consumer for exactly once.
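As for the APPLICATION FAILED TO START error above: Boot's auto-configured KafkaTransactionManager takes a single ProducerFactory, so with three factories one of them must be marked as the primary candidate. A sketch, assuming the post factory should back the transaction manager (the prefix value is illustrative):
@Bean
@Primary
public ProducerFactory<String, Post> postProducerFactory() {
    DefaultKafkaProducerFactory<String, Post> pf = new DefaultKafkaProducerFactory<>(producerConfigs());
    // A transactional producer factory needs a transaction-id prefix
    pf.setTransactionIdPrefix("myTrans.");
    return pf;
}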
EDIT
application.yml
spring:
  kafka:
    consumer:
      auto-offset-reset: earliest
      enable-auto-commit: false
      properties:
        isolation:
          level: read_committed
    producer:
      transaction-id-prefix: myTrans.
App
@SpringBootApplication
public class So52570118Application {

    public static void main(String[] args) {
        SpringApplication.run(So52570118Application.class, args);
    }

    @Bean // override boot's auto-config to add txm
    public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
            ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
            ConsumerFactory<Object, Object> kafkaConsumerFactory,
            KafkaTransactionManager<Object, Object> transactionManager) {
        ConcurrentKafkaListenerContainerFactory<Object, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
        configurer.configure(factory, kafkaConsumerFactory);
        factory.getContainerProperties().setTransactionManager(transactionManager);
        return factory;
    }

    @Autowired
    private KafkaTemplate<String, String> template;

    @KafkaListener(id = "so52570118", topics = "so52570118")
    public void listen(String in) throws Exception {
        System.out.println(in);
        Thread.sleep(5_000);
        this.template.send("so52570118out", in.toUpperCase());
        System.out.println("sent");
    }

    @KafkaListener(id = "so52570118out", topics = "so52570118out")
    public void listenOut(String in) {
        System.out.println(in);
    }

    @Bean
    public ApplicationRunner runner() {
        return args -> this.template.executeInTransaction(t -> t.send("so52570118", "foo"));
    }

    @Bean
    public NewTopic topic1() {
        return new NewTopic("so52570118", 1, (short) 1);
    }

    @Bean
    public NewTopic topic2() {
        return new NewTopic("so52570118out", 1, (short) 1);
    }
}

Simple embedded Kafka test example with spring boot

Edit FYI: working GitHub example
I was searching the internet and couldn't find a working and simple example of an embedded Kafka test.
My setup is:
Spring Boot
Multiple @KafkaListener with different topics in one class
Embedded Kafka for the test, which starts fine
Test with KafkaTemplate which sends to the topic, but the @KafkaListener methods receive nothing even after a long sleep time
No warnings or errors are shown, only info spam from Kafka in the logs
Please help me. Most examples out there are overconfigured or overengineered. I am sure it can be done simply.
Thanks, guys!
@Controller
public class KafkaController {

    private static final Logger LOG = getLogger(KafkaController.class);

    @KafkaListener(topics = "test.kafka.topic")
    public void receiveDunningHead(final String payload) {
        LOG.debug("Receiving event with payload [{}]", payload);
        // I will do database stuff here which I could check in db for testing
    }
}
private static String SENDER_TOPIC = "test.kafka.topic";

@ClassRule
public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, SENDER_TOPIC);

@Test
public void testSend() throws InterruptedException, ExecutionException {
    Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
    KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps);
    producer.send(new ProducerRecord<>(SENDER_TOPIC, 0, 0, "message00")).get();
    producer.send(new ProducerRecord<>(SENDER_TOPIC, 0, 1, "message01")).get();
    producer.send(new ProducerRecord<>(SENDER_TOPIC, 1, 0, "message10")).get();
    Thread.sleep(10000);
}
Embedded Kafka tests work for me with the configs below.
Annotations on the test class:
@EnableKafka
@SpringBootTest(classes = {KafkaController.class}) // Specify the @KafkaListener class if it's not the same class, or not loaded with test config
@EmbeddedKafka(
    partitions = 1,
    controlledShutdown = false,
    brokerProperties = {
        "listeners=PLAINTEXT://localhost:3333",
        "port=3333"
    })
public class KafkaConsumerTest {

    @Autowired
    KafkaEmbedded kafkaEmbedded;

    @Autowired
    KafkaListenerEndpointRegistry kafkaListenerEndpointRegistry;
The @Before annotation for the setup method:
@Before
public void setUp() throws Exception {
    for (MessageListenerContainer messageListenerContainer : kafkaListenerEndpointRegistry.getListenerContainers()) {
        ContainerTestUtils.waitForAssignment(messageListenerContainer,
            kafkaEmbedded.getPartitionsPerTopic());
    }
}
Note: I am not using @ClassRule to create the embedded Kafka; instead I am autowiring it with @Autowired.
@Test
public void testReceive() throws Exception {
    kafkaTemplate.send(topic, data);
}
Hope this helps!
Edit: Test configuration class marked with @TestConfiguration:
@TestConfiguration
public class TestConfig {

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        return new DefaultKafkaProducerFactory<>(KafkaTestUtils.producerProps(kafkaEmbedded));
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        KafkaTemplate<String, String> kafkaTemplate = new KafkaTemplate<>(producerFactory());
        kafkaTemplate.setDefaultTopic(topic);
        return kafkaTemplate;
    }
}
Now the @Test method will autowire the KafkaTemplate and use it to send a message:
kafkaTemplate.send(topic, data);
Since the accepted answer doesn't compile or work for me, I found another solution based on https://blog.mimacom.com/testing-apache-kafka-with-spring-boot/ which I would like to share with you.
The dependency is 'spring-kafka-test' version '2.2.7.RELEASE'.
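In Gradle form, assuming the standard coordinates, that would be something like:
testImplementation 'org.springframework.kafka:spring-kafka-test:2.2.7.RELEASE'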
@RunWith(SpringRunner.class)
@EmbeddedKafka(partitions = 1, topics = { "testTopic" })
@SpringBootTest
public class SimpleKafkaTest {

    private static final String TEST_TOPIC = "testTopic";

    @Autowired
    EmbeddedKafkaBroker embeddedKafkaBroker;

    @Test
    public void testReceivingKafkaEvents() {
        Consumer<Integer, String> consumer = configureConsumer();
        Producer<Integer, String> producer = configureProducer();

        producer.send(new ProducerRecord<>(TEST_TOPIC, 123, "my-test-value"));

        ConsumerRecord<Integer, String> singleRecord = KafkaTestUtils.getSingleRecord(consumer, TEST_TOPIC);
        assertThat(singleRecord).isNotNull();
        assertThat(singleRecord.key()).isEqualTo(123);
        assertThat(singleRecord.value()).isEqualTo("my-test-value");

        consumer.close();
        producer.close();
    }

    private Consumer<Integer, String> configureConsumer() {
        Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("testGroup", "true", embeddedKafkaBroker);
        consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        Consumer<Integer, String> consumer = new DefaultKafkaConsumerFactory<Integer, String>(consumerProps)
                .createConsumer();
        consumer.subscribe(Collections.singleton(TEST_TOPIC));
        return consumer;
    }

    private Producer<Integer, String> configureProducer() {
        Map<String, Object> producerProps = new HashMap<>(KafkaTestUtils.producerProps(embeddedKafkaBroker));
        return new DefaultKafkaProducerFactory<Integer, String>(producerProps).createProducer();
    }
}
I solved the issue now
@BeforeClass
public static void setUpBeforeClass() {
    System.setProperty("spring.kafka.bootstrap-servers", embeddedKafka.getBrokersAsString());
    System.setProperty("spring.cloud.stream.kafka.binder.zkNodes", embeddedKafka.getZookeeperConnectionString());
}
While I was debugging, I saw that the embedded Kafka server takes a random port.
I couldn't find the configuration for it, so I am setting the Kafka config to match the server. It still looks a bit ugly to me.
I would love to have just the line @Mayur mentioned,
@EmbeddedKafka(partitions = 1, controlledShutdown = false, brokerProperties = {"listeners=PLAINTEXT://localhost:9092", "port=9092"})
but couldn't find the right dependency on the internet.
In integration testing, having fixed ports like 9092 is not recommended, because multiple tests should have the flexibility to open their own ports from embedded instances. So the implementation looks something like the following.
NB: this implementation is based on JUnit 5 (Jupiter 5.7.0) and Spring Boot 2.3.4.RELEASE.
Test class:
@EnableKafka
@SpringBootTest(classes = {ConsumerTest.Config.class, Consumer.class})
@EmbeddedKafka(
    partitions = 1,
    controlledShutdown = false)
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
public class ConsumerTest {

    @Autowired
    private EmbeddedKafkaBroker kafkaEmbedded;

    @Autowired
    private KafkaListenerEndpointRegistry kafkaListenerEndpointRegistry;

    @BeforeAll
    public void setUp() throws Exception {
        for (final MessageListenerContainer messageListenerContainer : kafkaListenerEndpointRegistry.getListenerContainers()) {
            ContainerTestUtils.waitForAssignment(messageListenerContainer,
                kafkaEmbedded.getPartitionsPerTopic());
        }
    }

    @Value("${topic.name}")
    private String topicName;

    @Autowired
    private KafkaTemplate<String, Optional<Map<String, List<ImmutablePair<String, String>>>>> requestKafkaTemplate;

    @Test
    public void consume_success() {
        requestKafkaTemplate.send(topicName, load);
    }

    @Configuration
    @Import({
        KafkaListenerConfig.class,
        TopicConfig.class
    })
    public static class Config {

        @Value(value = "${spring.kafka.bootstrap-servers}")
        private String bootstrapAddress;

        @Bean
        public ProducerFactory<String, Optional<Map<String, List<ImmutablePair<String, String>>>>> requestProducerFactory() {
            final Map<String, Object> configProps = new HashMap<>();
            configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
            configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
            return new DefaultKafkaProducerFactory<>(configProps);
        }

        @Bean
        public KafkaTemplate<String, Optional<Map<String, List<ImmutablePair<String, String>>>>> requestKafkaTemplate() {
            return new KafkaTemplate<>(requestProducerFactory());
        }
    }
}
Listener Class:
@Component
public class Consumer {

    @KafkaListener(
        topics = "${topic.name}",
        containerFactory = "listenerContainerFactory"
    )
    public void listener(
            final ConsumerRecord<String, Optional<Map<String, List<ImmutablePair<String, String>>>>> consumerRecord,
            final @Payload Optional<Map<String, List<ImmutablePair<String, String>>>> payload
    ) {
    }
}
Listener config:
@Configuration
public class KafkaListenerConfig {

    @Value(value = "${spring.kafka.bootstrap-servers}")
    private String bootstrapAddress;

    @Value(value = "${topic.name}")
    private String resolvedTreeQueueName;

    @Bean
    public ConsumerFactory<String, Optional<Map<String, List<ImmutablePair<String, String>>>>> resolvedTreeConsumerFactory() {
        final Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, resolvedTreeQueueName);
        return new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(), new CustomDeserializer());
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, Optional<Map<String, List<ImmutablePair<String, String>>>>> resolvedTreeListenerContainerFactory() {
        final ConcurrentKafkaListenerContainerFactory<String, Optional<Map<String, List<ImmutablePair<String, String>>>>> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(resolvedTreeConsumerFactory());
        return factory;
    }
}
TopicConfig:
@Configuration
public class TopicConfig {

    @Value(value = "${spring.kafka.bootstrap-servers}")
    private String bootstrapAddress;

    @Value(value = "${topic.name}")
    private String requestQueue;

    @Bean
    public KafkaAdmin kafkaAdmin() {
        Map<String, Object> configs = new HashMap<>();
        configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
        return new KafkaAdmin(configs);
    }

    @Bean
    public NewTopic requestTopic() {
        return new NewTopic(requestQueue, 1, (short) 1);
    }
}
application.properties:
spring.kafka.bootstrap-servers=${spring.embedded.kafka.brokers}
This assignment is the most important one: it binds the embedded instance's port to the KafkaTemplate and the @KafkaListeners.
Following the above implementation, you can open dynamic ports per test class, which is more convenient.

kafka listener container in spring boot with annotations does not consume message

Not sure if I am doing something wrong, but I could not manage to make it work. Below is my code:
public class EventsApp {

    private static final Logger log = LoggerFactory.getLogger(EventsApp.class);

    @Value("${kafka.topic:test}")
    private String topic;

    @Value("${kafka.messageKey:si.key}")
    private String messageKey;

    @Value("${kafka.broker.address:localhost:9092}")
    private String brokerAddress;

    @Value("${kafka.zookeeper.connect:localhost:2181}")
    private String zookeeperConnect;

    /**
     * Main method, used to run the application.
     *
     * @param args the command line arguments
     * @throws UnknownHostException if the local host name could not be resolved into an address
     */
    public static void main(String[] args) throws UnknownHostException, Exception {
        ConfigurableApplicationContext context
                = new SpringApplicationBuilder(EventsApp.class)
                        .web(false)
                        .run(args);
        MessageChannel toKafka = context.getBean("toKafka", MessageChannel.class);
        for (int i = 0; i < 10; i++) {
            System.out.println("sending.." + toKafka.toString());
            toKafka.send(new GenericMessage<>("foo" + i));
        }
        Thread.sleep(115000);
        context.close();
        System.exit(0);
    }

    @KafkaListener(id = "baz", topics = "test",
            containerFactory = "kafkaListenerContainerFactory")
    public void listen(String data, Acknowledgment ack) {
        System.out.println("----- " + data);
        ack.acknowledge();
    }

    @Bean
    KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<Integer, String>>
            kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setConcurrency(1);
        factory.getContainerProperties().setPollTimeout(3000);
        return factory;
    }

    @ServiceActivator(inputChannel = "toKafka")
    @Bean
    public MessageHandler handler() throws Exception {
        KafkaProducerMessageHandler<String, String> handler =
                new KafkaProducerMessageHandler<>(kafkaTemplate());
        handler.setTopicExpression(new LiteralExpression(this.topic));
        handler.setMessageKeyExpression(new LiteralExpression(this.messageKey));
        return handler;
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, this.brokerAddress);
        props.put(ProducerConfig.RETRIES_CONFIG, 0);
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
        props.put(ProducerConfig.LINGER_MS_CONFIG, 1);
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(props);
    }

    @Bean
    public ConsumerFactory<Integer, String> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, this.brokerAddress);
        //props.put("zookeeper.connect", this.zookeeperConnect);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "siTestGroup");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
        props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, 100);
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 15000);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return new DefaultKafkaConsumerFactory<>(props);
    }

    @Bean
    public TopicCreator topicCreator() {
        return new TopicCreator(this.topic, this.zookeeperConnect);
    }

    public static class TopicCreator implements SmartLifecycle {

        private final String topic;
        private final String zkConnect;
        private volatile boolean running;

        public TopicCreator(String topic, String zkConnect) {
            this.topic = topic;
            this.zkConnect = zkConnect;
        }

        @Override
        public void start() {
            ZkUtils zkUtils = new ZkUtils(new ZkClient(this.zkConnect, 6000, 6000,
                    ZKStringSerializer$.MODULE$), null, false);
            try {
                AdminUtils.createTopic(zkUtils, topic, 1, 1, new Properties());
            }
            catch (TopicExistsException e) {
                // no-op
            }
            this.running = true;
        }

        @Override
        public void stop() {
        }

        @Override
        public boolean isRunning() {
            return this.running;
        }

        @Override
        public int getPhase() {
            return Integer.MIN_VALUE;
        }

        @Override
        public boolean isAutoStartup() {
            return true;
        }

        @Override
        public void stop(Runnable callback) {
            callback.run();
        }
    }
}
While I am able to produce messages, the listener does not consume them. I am using Spring Boot 1.4.0.RELEASE and spring-integration-kafka 2.0.1.RELEASE.
Try adding
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
to the consumer config.
By default, Kafka will start the consumer at the end of the topic.
If that doesn't work, turn on DEBUG logging to figure out what's going on.
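For example, via application.properties (standard Spring Boot logging keys; adjust the levels to taste):
# Spring Kafka listener container internals
logging.level.org.springframework.kafka=DEBUG
# Apache Kafka client: group joins, partition assignment, offset resets
logging.level.org.apache.kafka=DEBUG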
