Spring Boot KafkaTemplate and KafkaListener test with EmbeddedKafka fails

I have two Spring Boot apps: one is a Kafka publisher and the other is a consumer. I am trying to write an integration test to make sure that events are sent and received.
The test is green when run in the IDE, or from the command line without the other tests, e.g. mvn test -Dtest=KafkaPublisherTest. However, when I build the whole project, the test fails with org.awaitility.core.ConditionTimeoutException. There are multiple @EmbeddedKafka tests in the project.
The test gets stuck after these lines in the logs:
2021-11-30 09:17:12.366 INFO 1437 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer : wages-consumer-test: partitions assigned: [wages-test-0, wages-test-1]
2021-11-30 09:17:14.464 INFO 1437 --- [er-event-thread] kafka.controller.KafkaController : [Controller id=0] Processing automatic preferred replica leader election
If you have a better idea on how to test such things, please share.
Here is what the test looks like:
@SpringBootTest(properties = { "kafka.wages-topic.bootstrap-address=${spring.embedded.kafka.brokers}" })
@EmbeddedKafka(partitions = 1, topics = "${kafka.wages-topic.name}")
class KafkaPublisherTest {

    @Autowired
    private TestWageProcessor testWageProcessor;

    @Autowired
    private KafkaPublisher kafkaPublisher;

    @Autowired
    private KafkaTemplate<String, WageEvent> kafkaTemplate;

    @Test
    void publish() {
        Date date = new Date();
        WageCreateDto wageCreateDto = new WageCreateDto().setName("test").setSurname("test").setWage(BigDecimal.ONE).setEventTime(date);
        kafkaPublisher.publish(wageCreateDto);
        kafkaTemplate.flush();
        WageEvent expected = new WageEvent().setName("test").setSurname("test").setWage(BigDecimal.ONE).setEventTimeMillis(date.toInstant().toEpochMilli());
        await()
                .atLeast(Duration.ONE_HUNDRED_MILLISECONDS)
                .atMost(Duration.TEN_SECONDS)
                .with()
                .pollInterval(Duration.ONE_HUNDRED_MILLISECONDS)
                .until(testWageProcessor::getLastReceivedWageEvent, equalTo(expected));
    }
}
Publisher config:
@Configuration
@EnableConfigurationProperties(WagesTopicPublisherProperties.class)
public class KafkaConfiguration {

    @Bean
    public KafkaAdmin kafkaAdmin(WagesTopicPublisherProperties wagesTopicPublisherProperties) {
        Map<String, Object> configs = new HashMap<>();
        configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, wagesTopicPublisherProperties.getBootstrapAddress());
        return new KafkaAdmin(configs);
    }

    @Bean
    public NewTopic wagesTopic(WagesTopicPublisherProperties wagesTopicPublisherProperties) {
        return new NewTopic(wagesTopicPublisherProperties.getName(), wagesTopicPublisherProperties.getPartitions(), wagesTopicPublisherProperties.getReplicationFactor());
    }

    @Primary
    @Bean
    public WageEventSerde wageEventSerde() {
        return new WageEventSerde();
    }

    @Bean
    public ProducerFactory<String, WageEvent> producerFactory(WagesTopicPublisherProperties wagesTopicPublisherProperties, WageEventSerde wageEventSerde) {
        Map<String, Object> configProps = new HashMap<>();
        configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, wagesTopicPublisherProperties.getBootstrapAddress());
        configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, wageEventSerde.serializer().getClass());
        return new DefaultKafkaProducerFactory<>(configProps);
    }

    @Bean
    public KafkaTemplate<String, WageEvent> kafkaTemplate(ProducerFactory<String, WageEvent> producerFactory) {
        return new KafkaTemplate<>(producerFactory);
    }
}
Consumer config:
@Configuration
@EnableConfigurationProperties(WagesTopicConsumerProperties.class)
public class ConsumerConfiguration {

    @ConditionalOnMissingBean(WageEventSerde.class)
    @Bean
    public WageEventSerde wageEventSerde() {
        return new WageEventSerde();
    }

    @Bean
    public ConsumerFactory<String, WageEvent> wageConsumerFactory(WagesTopicConsumerProperties wagesTopicConsumerProperties, WageEventSerde wageEventSerde) {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, wagesTopicConsumerProperties.getBootstrapAddress());
        props.put(ConsumerConfig.GROUP_ID_CONFIG, wagesTopicConsumerProperties.getGroupId());
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, wageEventSerde.deserializer().getClass());
        return new DefaultKafkaConsumerFactory<>(
                props,
                new StringDeserializer(),
                wageEventSerde.deserializer());
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, WageEvent> wageEventConcurrentKafkaListenerContainerFactory(ConsumerFactory<String, WageEvent> wageConsumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, WageEvent> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(wageConsumerFactory);
        return factory;
    }
}
@KafkaListener(
        topics = "${kafka.wages-topic.name}",
        containerFactory = "wageEventConcurrentKafkaListenerContainerFactory")
public void consumeWage(WageEvent wageEvent) {
    log.info("Wage event received: " + wageEvent);
    wageProcessor.process(wageEvent);
}
Here is the project source code: https://github.com/aleksei17/springboot-rest-kafka-mysql
Here are the logs of failed build: https://drive.google.com/file/d/1uE2w8rmJhJy35s4UJXf4_ON3hs9JR6Au/view?usp=sharing

When I used Testcontainers Kafka instead of @EmbeddedKafka, the issue was solved. The tests looked like this:
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
class PublisherApplicationTest {

    public static final KafkaContainer kafka =
            new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka").withTag("5.4.3"));

    static {
        kafka.start();
        System.setProperty("kafka.wages-topic.bootstrap-address", kafka.getBootstrapServers());
    }
    // ...
}
However, I could not say I understand the issue. When I used a singleton pattern as described here, I had the same issue. Maybe something like @DirtiesContext could help: it helped to fix one test at work, but not in this learning project.
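For reference, a minimal sketch (an assumption on my part, not verified against this project) of how @DirtiesContext could be applied so that each @EmbeddedKafka test class gets a fresh context and broker:

@SpringBootTest(properties = { "kafka.wages-topic.bootstrap-address=${spring.embedded.kafka.brokers}" })
@EmbeddedKafka(partitions = 1, topics = "${kafka.wages-topic.name}")
// Close the cached application context (and the broker wiring it holds) after this class,
// so later @EmbeddedKafka tests do not reuse a context pointing at a stale broker.
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_CLASS)
class KafkaPublisherTest {
    // same test body as above
}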

Related

Listener not consuming message on Test with EmbeddedKafka Spring

I was trying to see if a service is invoked when the consumer receives a message from a Kafka topic, but the test is not passing and the consumer is not even receiving the message.
My test:
@SpringBootTest
@DirtiesContext
@EmbeddedKafka(partitions = 1, brokerProperties = { "listeners=PLAINTEXT://localhost:9092", "port=9092" })
class ConsumerTest {

    @Autowired
    Consumer consumer;

    @Mock
    private Service service;

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    @Test
    public void givenEmbeddedKafkaBroker_whenSendingtoSimpleProducer_thenMessageReceived()
            throws Exception {
        String message = "hi";
        kafkaTemplate.send("topic", message);
        consumer.getLatch().await(10000, TimeUnit.MILLISECONDS);
        verify(service, times(1)).addMessage();
    }
}
The consumer, in the main package, is a normal consumer with @KafkaListener(topics = "topic").
Then I have a configuration file:
@EnableKafka
@Configuration
public class KafkaConfig {

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "group");
        return props;
    }

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs(), new StringDeserializer(),
                new StringDeserializer());
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        return factory;
    }
}
And also in application.properties (inside the test package) I put this:
spring:
  kafka:
    consumer:
      auto-offset-reset: earliest
      group-id: group

Exactly Once Two Kafka Clusters

I have 2 Kafka clusters, Cluster A and Cluster B. These clusters are completely separate.
I have a spring-boot application that listens to a topic on Cluster A, transforms the event, and then produces it onto Cluster B. I require exactly-once as these are financial events. I have noticed that with my current application I sometimes get duplicates as well as miss some events. I have tried to implement exactly-once as best I could. One of the posts said Flink would be a better option over spring-boot. Should I move over to Flink? Please see the Spring code below.
ConsumerConfig
@Configuration
public class KafkaConsumerConfig {

    @Value("${kafka.server.consumer}")
    String server;

    @Value("${kafka.kerberos.service.name:}")
    String kerberosServiceName;

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> config = new HashMap<>();
        config.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, server);
        config.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        config.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30000);
        config.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        config.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, AvroDeserializer.class);
        config.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
        // skip kerberos if no value is provided
        if (kerberosServiceName.length() > 0) {
            config.put("security.protocol", "SASL_PLAINTEXT");
            config.put("sasl.kerberos.service.name", kerberosServiceName);
        }
        return config;
    }

    @Bean
    public ConsumerFactory<String, AccrualSchema> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs(), new StringDeserializer(),
                new AvroDeserializer<>(AccrualSchema.class));
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, AccrualSchema> kafkaListenerContainerFactory(ConsumerErrorHandler errorHandler) {
        ConcurrentKafkaListenerContainerFactory<String, AccrualSchema> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setAutoStartup(true);
        factory.getContainerProperties().setAckMode(AckMode.RECORD);
        factory.setErrorHandler(errorHandler);
        return factory;
    }

    @Bean
    public KafkaConsumerAccrual receiver() {
        return new KafkaConsumerAccrual();
    }
}
ProducerConfig
@Configuration
public class KafkaProducerConfig {

    @Value("${kafka.server.producer}")
    String server;

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> config = new HashMap<>();
        config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, server);
        config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        config.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        config.put(ProducerConfig.ACKS_CONFIG, "all");
        config.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "prod-1");
        config.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy");
        config.put(ProducerConfig.LINGER_MS_CONFIG, "10");
        config.put(ProducerConfig.BATCH_SIZE_CONFIG, Integer.toString(32 * 1024));
        return new DefaultKafkaProducerFactory<>(config);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}
KafkaProducer
@Service
public class KafkaTopicProducer {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void topicProducer(String payload, String topic) {
        kafkaTemplate.executeInTransaction(kt -> kt.send(topic, payload));
    }
}
KafkaConsumer
public class KafkaConsumerAccrual {

    @Autowired
    KafkaTopicProducer kafkaTopicProducer;

    @Autowired
    Gson gson;

    @KafkaListener(topics = "topicname", groupId = "groupid", id = "listenerid")
    public void consume(AccrualSchema accrual,
            @Header(KafkaHeaders.RECEIVED_PARTITION_ID) Integer partition, @Header(KafkaHeaders.OFFSET) Long offset,
            @Header(KafkaHeaders.CONSUMER) Consumer<?, ?> consumer) {
        AccrualEntity accrualEntity = convertBusinessObjectToAccrual(accrual, partition, offset);
        kafkaTopicProducer.topicProducer(gson.toJson(accrualEntity, AccrualEntity.class), accrualTopic);
    }

    public AccrualEntity convertBusinessObjectToAccrual(AccrualSchema ao, Integer partition,
            Long offset) {
        // Transform code goes here
        return ae;
    }
}
Exactly-once semantics are not supported across clusters; the key is producing records and committing the consumed offset(s) within one atomic transaction, and that is not possible in your environment since a transaction can't span two clusters.
According to this specific draft KIP:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-656%3A+MirrorMaker2+Exactly-once+Semantics
the best thing that you can expect right now is at-least-once semantics between two clusters.
It implies that you have to protect against duplication on the consumer side of the target broker.
As an example, you can identify a set of properties that must be unique for a specific window of time, but it will really depend on your use case.
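To make that concrete, here is a minimal sketch of such consumer-side deduplication; the business key and the window length are hypothetical placeholders you would replace based on whatever uniquely identifies your events:

import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: window-based deduplication on the consumer side, for at-least-once delivery.
public class DeduplicatingProcessor {

    private static final Duration WINDOW = Duration.ofMinutes(10); // assumed window
    private final Map<String, Instant> seen = new ConcurrentHashMap<>();

    // Returns true if this business key was already processed within the window.
    public boolean isDuplicate(String businessKey) {
        Instant now = Instant.now();
        // Evict entries older than the window so the map stays bounded.
        seen.values().removeIf(ts -> ts.isBefore(now.minus(WINDOW)));
        // putIfAbsent returns the previous value, i.e. non-null for a duplicate.
        return seen.putIfAbsent(businessKey, now) != null;
    }
}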

Spring Kafka and exactly once delivery guarantee

I use Spring Kafka and Spring Boot, and I am just wondering how to configure my consumer, for example:
@KafkaListener(topics = "${kafka.topic.post.send}", containerFactory = "postKafkaListenerContainerFactory")
public void sendPost(ConsumerRecord<String, Post> consumerRecord, Acknowledgment ack) {
    // do some logic
    ack.acknowledge();
}
to use the exactly-once delivery guarantee?
Should I only add the org.springframework.transaction.annotation.Transactional annotation over the sendPost method and that's it, or do I need to perform some extra steps in order to achieve this?
UPDATED
This is my current config
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(KafkaProperties kafkaProperties, KafkaTransactionManager<Object, Object> transactionManager) {
    kafkaProperties.getProperties().put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, kafkaConsumerMaxPollIntervalMs);
    kafkaProperties.getProperties().put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, kafkaConsumerMaxPollRecords);
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    //factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
    factory.getContainerProperties().setTransactionManager(transactionManager);
    factory.setConsumerFactory(consumerFactory(kafkaProperties));
    return factory;
}

@Bean
public Map<String, Object> producerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 15000000);
    return props;
}

@Bean
public ProducerFactory<String, Post> postProducerFactory() {
    return new DefaultKafkaProducerFactory<>(producerConfigs());
}

@Bean
public KafkaTemplate<String, Post> postKafkaTemplate() {
    return new KafkaTemplate<>(postProducerFactory());
}

@Bean
public ProducerFactory<String, Update> updateProducerFactory() {
    return new DefaultKafkaProducerFactory<>(producerConfigs());
}

@Bean
public KafkaTemplate<String, Update> updateKafkaTemplate() {
    return new KafkaTemplate<>(updateProducerFactory());
}

@Bean
public ProducerFactory<String, Message> messageProducerFactory() {
    return new DefaultKafkaProducerFactory<>(producerConfigs());
}

@Bean
public KafkaTemplate<String, Message> messageKafkaTemplate() {
    return new KafkaTemplate<>(messageProducerFactory());
}
but it fails with the following error:
***************************
APPLICATION FAILED TO START
***************************
Description:
Parameter 0 of method kafkaTransactionManager in org.springframework.boot.autoconfigure.kafka.KafkaAutoConfiguration required a single bean, but 3 were found:
- postProducerFactory: defined by method 'postProducerFactory' in class path resource [com/example/domain/configuration/messaging/KafkaProducerConfig.class]
- updateProducerFactory: defined by method 'updateProducerFactory' in class path resource [com/example/domain/configuration/messaging/KafkaProducerConfig.class]
- messageProducerFactory: defined by method 'messageProducerFactory' in class path resource [com/example/domain/configuration/messaging/KafkaProducerConfig.class]
What am I doing wrong?
You should not use manual acknowledgments. Instead, inject a KafkaTransactionManager into the listener container, and the container will send the offsets to the transaction when the listener method exits normally (or roll back otherwise).
You should not do acks via the consumer for exactly-once.
EDIT
application.yml
spring:
  kafka:
    consumer:
      auto-offset-reset: earliest
      enable-auto-commit: false
      properties:
        isolation:
          level: read_committed
    producer:
      transaction-id-prefix: myTrans.
App
@SpringBootApplication
public class So52570118Application {

    public static void main(String[] args) {
        SpringApplication.run(So52570118Application.class, args);
    }

    @Bean // override boot's auto-config to add txm
    public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
            ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
            ConsumerFactory<Object, Object> kafkaConsumerFactory,
            KafkaTransactionManager<Object, Object> transactionManager) {
        ConcurrentKafkaListenerContainerFactory<Object, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
        configurer.configure(factory, kafkaConsumerFactory);
        factory.getContainerProperties().setTransactionManager(transactionManager);
        return factory;
    }

    @Autowired
    private KafkaTemplate<String, String> template;

    @KafkaListener(id = "so52570118", topics = "so52570118")
    public void listen(String in) throws Exception {
        System.out.println(in);
        Thread.sleep(5_000);
        this.template.send("so52570118out", in.toUpperCase());
        System.out.println("sent");
    }

    @KafkaListener(id = "so52570118out", topics = "so52570118out")
    public void listenOut(String in) {
        System.out.println(in);
    }

    @Bean
    public ApplicationRunner runner() {
        return args -> this.template.executeInTransaction(t -> t.send("so52570118", "foo"));
    }

    @Bean
    public NewTopic topic1() {
        return new NewTopic("so52570118", 1, (short) 1);
    }

    @Bean
    public NewTopic topic2() {
        return new NewTopic("so52570118out", 1, (short) 1);
    }
}

@KafkaListener in unit test case does not consume from the container factory

I wrote a JUnit test case to test the code in the "With Java Configuration" lesson in the Spring Kafka docs (https://docs.spring.io/spring-kafka/reference/htmlsingle/#_with_java_configuration). The one difference is that I am using an embedded Kafka server in the class, instead of a localhost server. I am using Spring Boot 2.0.2 and its Spring-Kafka dependency.
While running this test case, I see that the consumer is not reading the message from the topic and the assertTrue check fails. There are no other errors.
@RunWith(SpringRunner.class)
public class SpringConfigSendReceiveMessage {

    public static final String DEMO_TOPIC = "demo_topic";

    @Autowired
    private Listener listener;

    @Test
    public void testSimple() throws Exception {
        template.send(DEMO_TOPIC, 0, "foo");
        template.flush();
        assertTrue(this.listener.latch.await(60, TimeUnit.SECONDS));
    }

    @Autowired
    private KafkaTemplate<Integer, String> template;

    @Configuration
    @EnableKafka
    public static class Config {

        @Bean
        public KafkaEmbedded kafkaEmbedded() {
            return new KafkaEmbedded(1, true, 1, DEMO_TOPIC);
        }

        @Bean
        public ConsumerFactory<Integer, String> createConsumerFactory() {
            Map<String, Object> props = new HashMap<>();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaEmbedded().getBrokersAsString());
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "group1");
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, IntegerDeserializer.class);
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
            return new DefaultKafkaConsumerFactory<>(props);
        }

        @Bean
        public ConcurrentKafkaListenerContainerFactory<Integer, String> kafkaListenerContainerFactory() {
            ConcurrentKafkaListenerContainerFactory<Integer, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
            factory.setConsumerFactory(createConsumerFactory());
            return factory;
        }

        @Bean
        public Listener listener() {
            return new Listener();
        }

        @Bean
        public ProducerFactory<Integer, String> producerFactory() {
            Map<String, Object> props = new HashMap<>();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaEmbedded().getBrokersAsString());
            props.put(ProducerConfig.RETRIES_CONFIG, 0);
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class);
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            return new DefaultKafkaProducerFactory<>(props);
        }

        @Bean
        public KafkaTemplate<Integer, String> kafkaTemplate() {
            return new KafkaTemplate<Integer, String>(producerFactory());
        }
    }
}
class Listener {

    public final CountDownLatch latch = new CountDownLatch(1);

    @KafkaListener(id = "foo", topics = DEMO_TOPIC)
    public void listen1(String foo) {
        this.latch.countDown();
    }
}
I think that this is because the @KafkaListener is using some wrong/default setting when reading from the topic. I don't see any errors in the logs.
Is this unit test case correct? How can I find the object that is created for the @KafkaListener annotation and see which Kafka broker it consumes from? Any inputs will be helpful. Thanks.
The message is sent before the consumer starts.
By default, new consumers start consuming at the end of the topic.
Add
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
The answer by @gary-russell is the best solution. Another way to resolve this issue is to delay the message send step by some time. This will enable the consumer to be ready. The following is also a correct solution.
Lesson learned: for unit testing Kafka consumers, either consume all the messages in the test case, or ensure that the consumer is ready before the producer sends the message.
@Test
public void testSimple() throws Exception {
    Thread.sleep(1000);
    template.send(DEMO_TOPIC, 0, "foo");
    template.flush();
    assertTrue(this.listener.latch.await(60, TimeUnit.SECONDS));
}
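A sleep-free variant (a sketch, assuming the KafkaEmbedded bean and listener configuration shown in the question) is to wait for partition assignment before sending, the same way later answers on this page do:

@Autowired
private KafkaEmbedded kafkaEmbedded;

@Autowired
private KafkaListenerEndpointRegistry registry;

@Test
public void testSimple() throws Exception {
    // Block until every listener container has been assigned its partitions,
    // instead of guessing a sleep duration.
    for (MessageListenerContainer container : registry.getListenerContainers()) {
        ContainerTestUtils.waitForAssignment(container, kafkaEmbedded.getPartitionsPerTopic());
    }
    template.send(DEMO_TOPIC, 0, "foo");
    template.flush();
    assertTrue(this.listener.latch.await(60, TimeUnit.SECONDS));
}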

Simple embedded Kafka test example with spring boot

Edit FYI: working GitHub example
I was searching the internet and couldn't find a working and simple example of an embedded Kafka test.
My setup is:
- Spring Boot
- Multiple @KafkaListeners with different topics in one class
- Embedded Kafka for the test, which is starting fine
- A test with KafkaTemplate which is sending to the topic, but the @KafkaListener methods are not receiving anything even after a huge sleep time
- No warnings or errors are shown, only info spam from Kafka in the logs
Please help me. There are mostly over-configured or over-engineered examples. I am sure it can be done simply.
Thanks, guys!
@Controller
public class KafkaController {

    private static final Logger LOG = getLogger(KafkaController.class);

    @KafkaListener(topics = "test.kafka.topic")
    public void receiveDunningHead(final String payload) {
        LOG.debug("Receiving event with payload [{}]", payload);
        // I will do database stuff here which I could check in db for testing
    }
}
private static String SENDER_TOPIC = "test.kafka.topic";

@ClassRule
public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, SENDER_TOPIC);

@Test
public void testSend() throws InterruptedException, ExecutionException {
    Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
    KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps);
    producer.send(new ProducerRecord<>(SENDER_TOPIC, 0, 0, "message00")).get();
    producer.send(new ProducerRecord<>(SENDER_TOPIC, 0, 1, "message01")).get();
    producer.send(new ProducerRecord<>(SENDER_TOPIC, 1, 0, "message10")).get();
    Thread.sleep(10000);
}
Embedded Kafka tests work for me with the below configs.
Annotation on the test class:
@EnableKafka
@SpringBootTest(classes = {KafkaController.class}) // Specify @KafkaListener class if it's not the same class, or not loaded with test config
@EmbeddedKafka(
        partitions = 1,
        controlledShutdown = false,
        brokerProperties = {
                "listeners=PLAINTEXT://localhost:3333",
                "port=3333"
        })
public class KafkaConsumerTest {

    @Autowired
    KafkaEmbedded kafkaEmbeded;

    @Autowired
    KafkaListenerEndpointRegistry kafkaListenerEndpointRegistry;
@Before annotation for the setup method:
@Before
public void setUp() throws Exception {
    for (MessageListenerContainer messageListenerContainer : kafkaListenerEndpointRegistry.getListenerContainers()) {
        ContainerTestUtils.waitForAssignment(messageListenerContainer,
                kafkaEmbeded.getPartitionsPerTopic());
    }
}
Note: I am not using @ClassRule for creating embedded Kafka, rather auto-wiring it with @Autowired.
@Test
public void testReceive() throws Exception {
    kafkaTemplate.send(topic, data);
}
Hope this helps!
Edit: Test configuration class marked with @TestConfiguration:
@TestConfiguration
public class TestConfig {

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        return new DefaultKafkaProducerFactory<>(KafkaTestUtils.producerProps(kafkaEmbedded));
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        KafkaTemplate<String, String> kafkaTemplate = new KafkaTemplate<>(producerFactory());
        kafkaTemplate.setDefaultTopic(topic);
        return kafkaTemplate;
    }
}
Now the @Test method will autowire the KafkaTemplate and use it to send a message:
kafkaTemplate.send(topic, data);
Updated the answer code block with the above line.
Since the accepted answer doesn't compile or work for me, I found another solution based on https://blog.mimacom.com/testing-apache-kafka-with-spring-boot/ which I would like to share with you.
The dependency is 'spring-kafka-test' version '2.2.7.RELEASE'.
@RunWith(SpringRunner.class)
@EmbeddedKafka(partitions = 1, topics = { "testTopic" })
@SpringBootTest
public class SimpleKafkaTest {

    private static final String TEST_TOPIC = "testTopic";

    @Autowired
    EmbeddedKafkaBroker embeddedKafkaBroker;

    @Test
    public void testReceivingKafkaEvents() {
        Consumer<Integer, String> consumer = configureConsumer();
        Producer<Integer, String> producer = configureProducer();

        producer.send(new ProducerRecord<>(TEST_TOPIC, 123, "my-test-value"));

        ConsumerRecord<Integer, String> singleRecord = KafkaTestUtils.getSingleRecord(consumer, TEST_TOPIC);
        assertThat(singleRecord).isNotNull();
        assertThat(singleRecord.key()).isEqualTo(123);
        assertThat(singleRecord.value()).isEqualTo("my-test-value");

        consumer.close();
        producer.close();
    }

    private Consumer<Integer, String> configureConsumer() {
        Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("testGroup", "true", embeddedKafkaBroker);
        consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        Consumer<Integer, String> consumer = new DefaultKafkaConsumerFactory<Integer, String>(consumerProps)
                .createConsumer();
        consumer.subscribe(Collections.singleton(TEST_TOPIC));
        return consumer;
    }

    private Producer<Integer, String> configureProducer() {
        Map<String, Object> producerProps = new HashMap<>(KafkaTestUtils.producerProps(embeddedKafkaBroker));
        return new DefaultKafkaProducerFactory<Integer, String>(producerProps).createProducer();
    }
}
I solved the issue now
@BeforeClass
public static void setUpBeforeClass() {
    System.setProperty("spring.kafka.bootstrap-servers", embeddedKafka.getBrokersAsString());
    System.setProperty("spring.cloud.stream.kafka.binder.zkNodes", embeddedKafka.getZookeeperConnectionString());
}
While I was debugging, I saw that the embedded Kafka server takes a random port.
I couldn't find the configuration for it, so I am setting the Kafka config the same as the server. It still looks a bit ugly to me.
I would love to have just the line @Mayur mentioned,
@EmbeddedKafka(partitions = 1, controlledShutdown = false, brokerProperties = {"listeners=PLAINTEXT://localhost:9092", "port=9092"})
but can't find the right dependency on the internet.
In integration testing, having fixed ports like 9092 is not recommended because multiple tests should have the flexibility to open their own ports from embedded instances. So, the following implementation is something like that.
NB: this implementation is based on JUnit 5 (Jupiter 5.7.0) and spring-boot 2.3.4.RELEASE.
TestClass:
@EnableKafka
@SpringBootTest(classes = {ConsumerTest.Config.class, Consumer.class})
@EmbeddedKafka(
        partitions = 1,
        controlledShutdown = false)
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
public class ConsumerTest {

    @Autowired
    private EmbeddedKafkaBroker kafkaEmbedded;

    @Autowired
    private KafkaListenerEndpointRegistry kafkaListenerEndpointRegistry;

    @BeforeAll
    public void setUp() throws Exception {
        for (final MessageListenerContainer messageListenerContainer : kafkaListenerEndpointRegistry.getListenerContainers()) {
            ContainerTestUtils.waitForAssignment(messageListenerContainer,
                    kafkaEmbedded.getPartitionsPerTopic());
        }
    }

    @Value("${topic.name}")
    private String topicName;

    @Autowired
    private KafkaTemplate<String, Optional<Map<String, List<ImmutablePair<String, String>>>>> requestKafkaTemplate;

    @Test
    public void consume_success() {
        requestKafkaTemplate.send(topicName, load);
    }

    @Configuration
    @Import({
            KafkaListenerConfig.class,
            TopicConfig.class
    })
    public static class Config {

        @Value(value = "${spring.kafka.bootstrap-servers}")
        private String bootstrapAddress;

        @Bean
        public ProducerFactory<String, Optional<Map<String, List<ImmutablePair<String, String>>>>> requestProducerFactory() {
            final Map<String, Object> configProps = new HashMap<>();
            configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
            configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
            return new DefaultKafkaProducerFactory<>(configProps);
        }

        @Bean
        public KafkaTemplate<String, Optional<Map<String, List<ImmutablePair<String, String>>>>> requestKafkaTemplate() {
            return new KafkaTemplate<>(requestProducerFactory());
        }
    }
}
Listener Class:
@Component
public class Consumer {

    @KafkaListener(
            topics = "${topic.name}",
            containerFactory = "listenerContainerFactory"
    )
    public void listener(
            final ConsumerRecord<String, Optional<Map<String, List<ImmutablePair<String, String>>>>> consumerRecord,
            final @Payload Optional<Map<String, List<ImmutablePair<String, String>>>> payload
    ) {
    }
}
Listener Config:
@Configuration
public class KafkaListenerConfig {

    @Value(value = "${spring.kafka.bootstrap-servers}")
    private String bootstrapAddress;

    @Value(value = "${topic.name}")
    private String resolvedTreeQueueName;

    @Bean
    public ConsumerFactory<String, Optional<Map<String, List<ImmutablePair<String, String>>>>> resolvedTreeConsumerFactory() {
        final Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, resolvedTreeQueueName);
        return new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(), new CustomDeserializer());
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, Optional<Map<String, List<ImmutablePair<String, String>>>>> resolvedTreeListenerContainerFactory() {
        final ConcurrentKafkaListenerContainerFactory<String, Optional<Map<String, List<ImmutablePair<String, String>>>>> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(resolvedTreeConsumerFactory());
        return factory;
    }
}
TopicConfig:
@Configuration
public class TopicConfig {

    @Value(value = "${spring.kafka.bootstrap-servers}")
    private String bootstrapAddress;

    @Value(value = "${topic.name}")
    private String requestQueue;

    @Bean
    public KafkaAdmin kafkaAdmin() {
        Map<String, Object> configs = new HashMap<>();
        configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
        return new KafkaAdmin(configs);
    }

    @Bean
    public NewTopic requestTopic() {
        return new NewTopic(requestQueue, 1, (short) 1);
    }
}
application.properties:
spring.kafka.bootstrap-servers=${spring.embedded.kafka.brokers}
This property assignment is the most important one: it binds the embedded instance's port to the KafkaTemplate and the KafkaListeners.
Following the above implementation, you can open dynamic ports per test class, which is more convenient.
