Spring Boot Kafka consumer bootstrap servers always picking localhost:9092

I have created a Spring Boot application with a Kafka producer and consumer. Both use the same properties and bootstrap servers. While the producer picks up the correct bootstrap server names, the consumer always picks up localhost:9092 as the bootstrap servers.
Consumer Config class:
@Configuration
@RequiredArgsConstructor
@Slf4j
@EnableKafka
public class KafkaConsumerConfig {

    private final KafkaConfig kafkaConfig;

    Map<String, Object> consumerProperties() {
        log.info("INSIDE consumerProperties:");
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaConfig.getBootstrapServers());
        log.info("ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG:" + props.get(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG));
        props.put(ConsumerConfig.GROUP_ID_CONFIG, kafkaConfig.getConsumerGroupId());
        props.put("ssl.truststore.location", kafkaConfig.getSslTrustStoreLocation());
        props.put("ssl.truststore.password", kafkaConfig.getSslTrustStorePassword());
        props.put("ssl.keystore.location", kafkaConfig.getSslTrustStoreLocation());
        props.put("ssl.keystore.password", kafkaConfig.getSslTrustStorePassword());
        props.put("ssl.key.location", kafkaConfig.getSslTrustStoreLocation());
        props.put("ssl.key.password", kafkaConfig.getSslTrustStorePassword());
        props.put("security.protocol", kafkaConfig.getSecurityProtocol());
        props.put("ssl.endpoint.identification.algorithm", "");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, kafkaConfig.getConsumerAutoOffsetReset());
        log.info("props:" + props);
        return props;
    }

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerProperties());
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory concurrentKafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.getContainerProperties().setIdleBetweenPolls(10000);
        return factory;
    }
}
KafkaConfig class:
@ConfigurationProperties(prefix = "proj.kafka")
@Getter
@Setter
public class KafkaConfig {
    private String schemaRegistryUrl;
    private String bootstrapServers;
    private String securityProtocol;
    private String sslTrustStoreLocation;
    private String sslTrustStorePassword;
    private String consumerGroupId;
    private String consumerEnableAutoCommit;
    private String consumerAutoOffsetReset;
    private String consumerSessionTimeoutMs;
    private String producerRetries;
    private long consumerRetries;
    private long consumerBackoff;
    private String producerMaxInflightConnections;
    private String primaryCluster;
    private String secondaryCluster;
    private long delayMs;
    private long deleteMappingDelay;
    private String groupId;
}
KafkaConsumerService class:
@Slf4j
@RequiredArgsConstructor
@Service
public class KafkaConsumerService {

    @Value("${proj.kafka.topic}")
    private String topicName;

    @KafkaListener(topics = "${proj.kafka.topic}", groupId = "${proj.kafka.consumer-group-id}")
    public void consumeMessage(ConsumerRecord<String, String> message) throws InterruptedException {
        log.info("INSIDE consumeMessage:");
        log.info("Topic: {}", topicName);
        log.info("Headers: {}", message.headers());
        log.info("Partition: {}", message.partition());
        log.info("Key: {}", message.key());
        log.info("Order: {}", message.value());
    }
}
Logs:
2022-10-06 18:10:12.969 INFO 34048 --- [ main] c.v.a.v.e.config.KafkaConsumerConfig : INSIDE consumerFactory:
2022-10-06 18:10:12.969 INFO 34048 --- [ main] c.v.a.v.e.config.KafkaConsumerConfig : INSIDE consumerProperties:
2022-10-06 18:10:12.970 INFO 34048 --- [ main] c.v.a.v.e.config.KafkaConsumerConfig : ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG:SSL://server1.com:9092,SSL://server2.com:9092,SSL://server3.com:9092
2022-10-06 18:10:14.500 INFO 34048 --- [ main] o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
    allow.auto.create.topics = true
    auto.commit.interval.ms = 5000
    auto.offset.reset = latest
    bootstrap.servers = [localhost:9092]
    check.crcs = true
    client.dns.lookup = use_all_dns_ips
    client.id = consumer-proj-1
    client.rack =
    connections.max.idle.ms = 540000
    default.api.timeout.ms = 60000
    enable.auto.commit = false
application.properties:
proj.kafka.bootstrap-servers=SSL://server1.com:9092,SSL://server2.com:9092,SSL://server3.com:9092
I have also tried adding the admin code below to the consumer config class, but the consumer was still pointing to localhost:
@Value(value = "${proj.kafka.bootstrap-servers}")
private String bootstrapAddress;

@Bean
public KafkaAdmin kafkaAdmin() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
    return new KafkaAdmin(configs);
}
I also added the property below, but the consumer was still pointing to localhost:
props.put("spring.kafka.bootstrap-servers", kafkaConfig.getBootstrapServers());
As mentioned, I am using the same properties and config settings for the producer, which points to the correct brokers. Please see the producer log below:
2022-10-06 21:09:33.949 INFO 9 --- [nio-8443-exec-3] o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
    acks = 1
    batch.size = 16384
    bootstrap.servers = [SSL://server1.com:9092, SSL://server2.com:9092, SSL://server3.com:9092]
Please suggest how to point the consumer to the correct bootstrap servers.

My issue got resolved by making the following changes:
Renaming the factory method from concurrentKafkaListenerContainerFactory to kafkaListenerContainerFactory.
Adding the two properties below:
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
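The rename matters because @KafkaListener resolves its container factory by the bean name kafkaListenerContainerFactory by default; under any other name, Spring Boot's auto-configured factory is used instead, and that one falls back to spring.kafka.bootstrap-servers, which defaults to localhost:9092. A minimal sketch of the corrected bean, reusing the consumerFactory() from the question:

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    // The bean name must be kafkaListenerContainerFactory so that @KafkaListener
    // picks it up instead of Spring Boot's auto-configured default factory.
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.getContainerProperties().setIdleBetweenPolls(10000);
    return factory;
}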
Now the bootstrap server names are picked up correctly, but the consumer is not reading any messages, although the KafkaConsumerService class is called each time a message is produced.
Please share if anyone has any suggestions on this.

How do you know that KafkaConsumerService is called each time a message is produced? Can you add the KafkaConsumerService code?

Related

Field properties in xx.xxx.configuration.ElasticSearchConfiguration required a single bean, but 2 were found:

As part of a Spring Boot upgrade from 1.5.6.RELEASE to 2.4.0, I am facing an issue.
I am using two different Elasticsearch hosts, so I created two configuration classes, one for ESConfigForMasterData and the other for ElasticSearchConfiguration. While running the Spring Boot application, it throws this error:
APPLICATION FAILED TO START

Description:
Field properties in xxx.xxx.configuration.ElasticSearchConfiguration required a single bean, but 2 were found:
- integrationGlobalProperties: defined in null
- systemProperties: a programmatically registered singleton

Action:
Consider marking one of the beans as @Primary, updating the consumer to accept multiple beans, or using @Qualifier to identify the bean that should be consumed
@Configuration
@PropertySource("classpath:application.properties")
public class ElasticSearchConfiguration {

    private static final Logger logger = LoggerFactory.getLogger(ElasticSearchConfiguration.class);

    @Autowired
    private Properties properties;

    @Value("${elasticsearch.cluster-nodes}")
    private String clusterNodes;

    @Value("${elasticsearch.cluster-name}")
    private String clusterName;

    @Value("${elasticsearch.host}")
    private String elasticHost;

    @Value("${elasticsearch.port}")
    private String elasticPort;

    @Value("${elasticsearch.protocol}")
    private String elasticProtocol;

    private Header[] headers = { new BasicHeader(HttpHeaders.CONTENT_TYPE, "application/json") };

    @Bean(name = "restClient")
    public RestClient restClient() {
        return RestClient.builder(new HttpHost(elasticHost, Integer.parseInt(elasticPort), elasticProtocol)).setDefaultHeaders(headers).build();
    }

    @Bean(name = "restHighLevelClient")
    public RestHighLevelClient restHighLevelClient() {
        logger.info(" elastic port = " + elasticPort + " host = " + elasticHost);
        return new RestHighLevelClient(RestClient.builder(new HttpHost(elasticHost, Integer.parseInt(elasticPort), elasticProtocol)));
    }
}
@Configuration
@PropertySource("classpath:application.properties")
public class ESConfigForMasterData {

    private static final Logger logger = LoggerFactory.getLogger(ESConfigForMasterData.class);

    @Autowired
    private Properties properties;

    @Value("${elasticsearch.masterdata.cluster-nodes}")
    private String clusterNodes;

    @Value("${elasticsearch.masterdata.cluster-name}")
    private String clusterName;

    @Value("${elasticsearch.masterdata.host}")
    private String elasticHost;

    @Value("${elasticsearch.masterdata.port}")
    private String elasticPort;

    @Value("${elasticsearch.masterdata.protocol}")
    private String protocol;

    private Header[] headers = { new BasicHeader(HttpHeaders.CONTENT_TYPE, "application/json") };

    @Bean(name = "restHighLevelClientMasterData")
    public RestHighLevelClient restHighLevelClientMasterData() {
        logger.info(" elastic port = " + elasticPort + " host = " + elasticHost);
        return new RestHighLevelClient(RestClient.builder(new HttpHost(elasticHost, Integer.parseInt(elasticPort), protocol)));
    }

    @Bean(name = "restClientMasterData")
    public RestClient restClientMasterData() {
        RestClientBuilder builder = RestClient.builder(new HttpHost(elasticHost, Integer.parseInt(elasticPort), protocol))
                .setRequestConfigCallback(new RestClientBuilder.RequestConfigCallback() {
                    @Override
                    public RequestConfig.Builder customizeRequestConfig(RequestConfig.Builder requestConfigBuilder) {
                        return requestConfigBuilder.setConnectTimeout(60000000)
                                .setSocketTimeout(60000000);
                    }
                });
        // TODO: As of now we are ignoring it; we need to find an alternative for setMaxRetryTimeoutMillis
        // .setMaxRetryTimeoutMillis(60000000);
        // return RestClient.builder(new HttpHost(elasticHost, Integer.parseInt(elasticPort), protocol)).setDefaultHeaders(headers).build();
        return builder.build();
    }
}
@Repository
public class SearchDao {

    private static final Logger logger = LoggerFactory.getLogger(SearchDao.class);

    @Autowired
    @Qualifier("restHighLevelClientMasterData")
    private RestHighLevelClient restHighLevelClientMasterData;

    @Autowired
    @Qualifier("restClientMasterData")
    private RestClient restClientMasterData;
}
I have tried the @Primary and @Qualifier annotations, but it doesn't work for me.
Please help me sort out this issue.
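Note that the error points at the @Autowired Properties field rather than the Elasticsearch clients: two java.util.Properties beans (integrationGlobalProperties and systemProperties) match it. A minimal sketch of one way out, qualifying the injection with the bean name taken from the error message (assuming that bean is the intended one):

@Autowired
@Qualifier("integrationGlobalProperties") // bean name copied from the error message
private Properties properties;

If the field is unused, simply deleting it from both configuration classes also makes the error go away.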

Spring Boot KafkaTemplate and KafkaListener test with EmbeddedKafka fails

I have two Spring Boot apps: one is a Kafka publisher and the other is a consumer. I am trying to write an integration test to make sure that events are sent and received.
The test is green when run in the IDE or from the command line without other tests, like mvn test -Dtest=KafkaPublisherTest. However, when I build the whole project, the test fails with org.awaitility.core.ConditionTimeoutException. There are multiple @EmbeddedKafka tests in the project.
The test gets stuck after these lines in the logs:
2021-11-30 09:17:12.366 INFO 1437 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer : wages-consumer-test: partitions assigned: [wages-test-0, wages-test-1]
2021-11-30 09:17:14.464 INFO 1437 --- [er-event-thread] kafka.controller.KafkaController : [Controller id=0] Processing automatic preferred replica leader election
If you have a better idea on how to test such things, please share.
Here is what the test looks like:
@SpringBootTest(properties = { "kafka.wages-topic.bootstrap-address=${spring.embedded.kafka.brokers}" })
@EmbeddedKafka(partitions = 1, topics = "${kafka.wages-topic.name}")
class KafkaPublisherTest {

    @Autowired
    private TestWageProcessor testWageProcessor;

    @Autowired
    private KafkaPublisher kafkaPublisher;

    @Autowired
    private KafkaTemplate<String, WageEvent> kafkaTemplate;

    @Test
    void publish() {
        Date date = new Date();
        WageCreateDto wageCreateDto = new WageCreateDto().setName("test").setSurname("test").setWage(BigDecimal.ONE).setEventTime(date);
        kafkaPublisher.publish(wageCreateDto);
        kafkaTemplate.flush();
        WageEvent expected = new WageEvent().setName("test").setSurname("test").setWage(BigDecimal.ONE).setEventTimeMillis(date.toInstant().toEpochMilli());
        await()
                .atLeast(Duration.ONE_HUNDRED_MILLISECONDS)
                .atMost(Duration.TEN_SECONDS)
                .with()
                .pollInterval(Duration.ONE_HUNDRED_MILLISECONDS)
                .until(testWageProcessor::getLastReceivedWageEvent, equalTo(expected));
    }
}
Publisher config:
@Configuration
@EnableConfigurationProperties(WagesTopicPublisherProperties.class)
public class KafkaConfiguration {

    @Bean
    public KafkaAdmin kafkaAdmin(WagesTopicPublisherProperties wagesTopicPublisherProperties) {
        Map<String, Object> configs = new HashMap<>();
        configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, wagesTopicPublisherProperties.getBootstrapAddress());
        return new KafkaAdmin(configs);
    }

    @Bean
    public NewTopic wagesTopic(WagesTopicPublisherProperties wagesTopicPublisherProperties) {
        return new NewTopic(wagesTopicPublisherProperties.getName(), wagesTopicPublisherProperties.getPartitions(), wagesTopicPublisherProperties.getReplicationFactor());
    }

    @Primary
    @Bean
    public WageEventSerde wageEventSerde() {
        return new WageEventSerde();
    }

    @Bean
    public ProducerFactory<String, WageEvent> producerFactory(WagesTopicPublisherProperties wagesTopicPublisherProperties, WageEventSerde wageEventSerde) {
        Map<String, Object> configProps = new HashMap<>();
        configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, wagesTopicPublisherProperties.getBootstrapAddress());
        configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, wageEventSerde.serializer().getClass());
        return new DefaultKafkaProducerFactory<>(configProps);
    }

    @Bean
    public KafkaTemplate<String, WageEvent> kafkaTemplate(ProducerFactory<String, WageEvent> producerFactory) {
        return new KafkaTemplate<>(producerFactory);
    }
}
Consumer config:
@Configuration
@EnableConfigurationProperties(WagesTopicConsumerProperties.class)
public class ConsumerConfiguration {

    @ConditionalOnMissingBean(WageEventSerde.class)
    @Bean
    public WageEventSerde wageEventSerde() {
        return new WageEventSerde();
    }

    @Bean
    public ConsumerFactory<String, WageEvent> wageConsumerFactory(WagesTopicConsumerProperties wagesTopicConsumerProperties, WageEventSerde wageEventSerde) {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, wagesTopicConsumerProperties.getBootstrapAddress());
        props.put(ConsumerConfig.GROUP_ID_CONFIG, wagesTopicConsumerProperties.getGroupId());
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, wageEventSerde.deserializer().getClass());
        return new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(), wageEventSerde.deserializer());
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, WageEvent> wageEventConcurrentKafkaListenerContainerFactory(ConsumerFactory<String, WageEvent> wageConsumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, WageEvent> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(wageConsumerFactory);
        return factory;
    }
}

@KafkaListener(
        topics = "${kafka.wages-topic.name}",
        containerFactory = "wageEventConcurrentKafkaListenerContainerFactory")
public void consumeWage(WageEvent wageEvent) {
    log.info("Wage event received: " + wageEvent);
    wageProcessor.process(wageEvent);
}
Here is the project source code: https://github.com/aleksei17/springboot-rest-kafka-mysql
Here are the logs of failed build: https://drive.google.com/file/d/1uE2w8rmJhJy35s4UJXf4_ON3hs9JR6Au/view?usp=sharing
When I used Testcontainers Kafka instead of @EmbeddedKafka, the issue was solved. The tests looked like this:
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
class PublisherApplicationTest {

    public static final KafkaContainer kafka =
            new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka").withTag("5.4.3"));

    static {
        kafka.start();
        System.setProperty("kafka.wages-topic.bootstrap-address", kafka.getBootstrapServers());
    }
}
However, I cannot say I understand the issue. When I used a singleton pattern as described here, I had the same issue. Maybe something like @DirtiesContext could help: it helped to fix one test at work, but not in this learning project.
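For reference, a sketch of the @DirtiesContext idea mentioned above (not verified against this project): it discards the cached application context after the test class, so the next @EmbeddedKafka test does not reuse consumers bound to a broker that has already shut down.

@SpringBootTest(properties = { "kafka.wages-topic.bootstrap-address=${spring.embedded.kafka.brokers}" })
@EmbeddedKafka(partitions = 1, topics = "${kafka.wages-topic.name}")
@DirtiesContext // drop this context after the test class instead of caching it
class KafkaPublisherTest {
    // ... test body as above ...
}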

How to capture Redis connection failure on Spring Boot Redis Session implementation?

I have implemented Redis session management using LettuceConnectionFactory in my Spring Boot Java application. I couldn't figure out a way to capture a Redis connection failure. Is there a way to capture the connection failure?
Spring Boot version: 2.2.6
Code:
@Configuration
@PropertySource("classpath:application.properties")
@EnableRedisHttpSession
public class HttpSessionConfig extends AbstractHttpSessionApplicationInitializer {

    Logger logger = LoggerFactory.getLogger(HttpSessionConfig.class);

    @Value("${spring.redis.host}")
    private String redisHostName;

    @Value("${spring.redis.port}")
    private int redisPort;

    @Value("${spring.redis.password}")
    private String redisPassword;

    @Value("${spring.redis.custom.command.timeout}")
    private Duration redisCommandTimeout;

    @Value("${spring.redis.timeout}")
    private Duration socketTimeout;

    @Bean
    LettuceConnectionFactory lettuceConnectionFactory() {
        final SocketOptions socketOptions = SocketOptions.builder().connectTimeout(socketTimeout).build();
        final ClientOptions clientOptions = ClientOptions.builder()
                .socketOptions(socketOptions)
                .build();
        LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
                .commandTimeout(redisCommandTimeout)
                .clientOptions(clientOptions)
                .readFrom(ReadFrom.SLAVE_PREFERRED)
                .build();
        RedisStandaloneConfiguration serverConfig = new RedisStandaloneConfiguration(redisHostName, redisPort);
        return new LettuceConnectionFactory(serverConfig, clientConfig);
    }

    @Bean
    public RedisTemplate<Object, Object> sessionRedisTemplate() {
        final RedisTemplate<Object, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(lettuceConnectionFactory());
        return template;
    }

    @Bean
    public ConfigureRedisAction configureRedisAction() {
        return ConfigureRedisAction.NO_OP;
    }
}
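One possible direction, sketched under the assumption that probing the connection explicitly is acceptable: Spring Data Redis translates Lettuce connection errors into RedisConnectionFailureException, so a check that borrows a connection and pings it can catch the failure. The helper name below is hypothetical.

private boolean isRedisAvailable(RedisConnectionFactory factory) {
    RedisConnection connection = null;
    try {
        // Borrow a connection and ping; throws RedisConnectionFailureException if Redis is unreachable.
        connection = factory.getConnection();
        return "PONG".equalsIgnoreCase(connection.ping());
    } catch (RedisConnectionFailureException e) {
        logger.error("Redis connection failed", e);
        return false;
    } finally {
        if (connection != null) {
            connection.close();
        }
    }
}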

Spring: 2 Repositories out of a single Entity

What I need is two repositories created out of a single entity:
interface TopicRepository extends ReactiveCrudRepository<Topic, String>
interface BackupTopicRepository extends ReactiveCrudRepository<Topic, String>
How is that possible? Right now only one is created.
This is how you would do it.
@Configuration
@ConfigurationProperties(prefix = "mongodb.topic")
@EnableMongoRepositories(basePackages = "abc.def.repository.topic", mongoTemplateRef = "topicMongoTemplate")
@Setter
class TopicMongoConfig {

    private String host;
    private int port;
    private String database;

    @Primary
    @Bean(name = "topicMongoTemplate")
    public MongoTemplate topicMongoTemplate() throws Exception {
        final Mongo mongoClient = createMongoClient(new ServerAddress(host, port));
        return new MongoTemplate(mongoClient, database);
    }

    private Mongo createMongoClient(ServerAddress serverAddress) {
        return new MongoClient(serverAddress);
    }
}
Another configuration (note: only one of the two MongoTemplate beans should be @Primary, so it is omitted here):
@Configuration
@ConfigurationProperties(prefix = "mongodb.backuptopic")
@EnableMongoRepositories(basePackages = "abc.def.repository.backuptopic", mongoTemplateRef = "backupTopicMongoTemplate")
@Setter
class BackupTopicMongoConfig {

    private String host;
    private int port;
    private String database;

    @Bean(name = "backupTopicMongoTemplate")
    public MongoTemplate backupTopicMongoTemplate() throws Exception {
        final Mongo mongoClient = createMongoClient(new ServerAddress(host, port));
        return new MongoTemplate(mongoClient, database);
    }

    private Mongo createMongoClient(ServerAddress serverAddress) {
        return new MongoClient(serverAddress);
    }
}
Your TopicRepository and BackupTopicRepository should reside in abc.def.repository.topic and abc.def.repository.backuptopic respectively.
You also need to have these properties defined in your properties or YAML file:
mongodb:
  topic:
    host:
    database:
    port:
  backuptopic:
    host:
    database:
    port:
Lastly, disable Spring Boot autoconfiguration for Mongo:
@SpringBootApplication(exclude = {MongoAutoConfiguration.class, MongoDataAutoConfiguration.class})
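To make the layout concrete, here is a sketch of the two repository interfaces in the packages scanned by each configuration (package names taken from the answer; since the configs build blocking MongoTemplate beans, the imperative MongoRepository is used here rather than the question's ReactiveCrudRepository):

// in package abc.def.repository.topic
public interface TopicRepository extends MongoRepository<Topic, String> {
}

// in package abc.def.repository.backuptopic
public interface BackupTopicRepository extends MongoRepository<Topic, String> {
}

Each @EnableMongoRepositories annotation only scans its own base package, so each interface is backed by the MongoTemplate named in its config's mongoTemplateRef.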

Simple embedded Kafka test example with spring boot

Edit FYI: working GitHub example
I searched the internet and couldn't find a working and simple example of an embedded Kafka test.
My setup is:
- Spring Boot
- Multiple @KafkaListener methods with different topics in one class
- Embedded Kafka for the test, which starts fine
- A test with KafkaTemplate which sends to the topic, but the @KafkaListener methods receive nothing even after a huge sleep time
- No warnings or errors are shown, only info spam from Kafka in the logs
Please help me. The examples out there are mostly overconfigured or overengineered. I am sure it can be done simply.
Thanks, guys!
@Controller
public class KafkaController {

    private static final Logger LOG = getLogger(KafkaController.class);

    @KafkaListener(topics = "test.kafka.topic")
    public void receiveDunningHead(final String payload) {
        LOG.debug("Receiving event with payload [{}]", payload);
        // I will do database stuff here which I could check in the DB for testing
    }
}
private static String SENDER_TOPIC = "test.kafka.topic";

@ClassRule
public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, SENDER_TOPIC);

@Test
public void testSend() throws InterruptedException, ExecutionException {
    Map<String, Object> senderProps = KafkaTestUtils.producerProps(embeddedKafka);
    KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps);
    producer.send(new ProducerRecord<>(SENDER_TOPIC, 0, 0, "message00")).get();
    producer.send(new ProducerRecord<>(SENDER_TOPIC, 0, 1, "message01")).get();
    producer.send(new ProducerRecord<>(SENDER_TOPIC, 1, 0, "message10")).get();
    Thread.sleep(10000);
}
Embedded Kafka tests work for me with the configs below.
Annotations on the test class:
@EnableKafka
@SpringBootTest(classes = { KafkaController.class }) // Specify the @KafkaListener class if it's not the same class, or not loaded with the test config
@EmbeddedKafka(
        partitions = 1,
        controlledShutdown = false,
        brokerProperties = {
                "listeners=PLAINTEXT://localhost:3333",
                "port=3333"
        })
public class KafkaConsumerTest {

    @Autowired
    KafkaEmbedded kafkaEmbedded;

    @Autowired
    KafkaListenerEndpointRegistry kafkaListenerEndpointRegistry;
@Before annotation on the setup method:
@Before
public void setUp() throws Exception {
    for (MessageListenerContainer messageListenerContainer : kafkaListenerEndpointRegistry.getListenerContainers()) {
        ContainerTestUtils.waitForAssignment(messageListenerContainer, kafkaEmbedded.getPartitionsPerTopic());
    }
}
Note: I am not using @ClassRule to create the embedded Kafka; rather I am auto-wiring it with @Autowired.
@Test
public void testReceive() throws Exception {
    kafkaTemplate.send(topic, data);
}
Hope this helps!
Edit: Test configuration class marked with @TestConfiguration:
@TestConfiguration
public class TestConfig {

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        return new DefaultKafkaProducerFactory<>(KafkaTestUtils.producerProps(kafkaEmbedded));
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        KafkaTemplate<String, String> kafkaTemplate = new KafkaTemplate<>(producerFactory());
        kafkaTemplate.setDefaultTopic(topic);
        return kafkaTemplate;
    }
}
Now the @Test method will autowire the KafkaTemplate and use it to send a message:
kafkaTemplate.send(topic, data);
(Updated the answer's code block with the line above.)
Since the accepted answer doesn't compile or work for me, I found another solution based on https://blog.mimacom.com/testing-apache-kafka-with-spring-boot/ which I would like to share with you.
The dependency is 'spring-kafka-test' version: '2.2.7.RELEASE'
@RunWith(SpringRunner.class)
@EmbeddedKafka(partitions = 1, topics = { "testTopic" })
@SpringBootTest
public class SimpleKafkaTest {

    private static final String TEST_TOPIC = "testTopic";

    @Autowired
    EmbeddedKafkaBroker embeddedKafkaBroker;

    @Test
    public void testReceivingKafkaEvents() {
        Consumer<Integer, String> consumer = configureConsumer();
        Producer<Integer, String> producer = configureProducer();
        producer.send(new ProducerRecord<>(TEST_TOPIC, 123, "my-test-value"));
        ConsumerRecord<Integer, String> singleRecord = KafkaTestUtils.getSingleRecord(consumer, TEST_TOPIC);
        assertThat(singleRecord).isNotNull();
        assertThat(singleRecord.key()).isEqualTo(123);
        assertThat(singleRecord.value()).isEqualTo("my-test-value");
        consumer.close();
        producer.close();
    }

    private Consumer<Integer, String> configureConsumer() {
        Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("testGroup", "true", embeddedKafkaBroker);
        consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        Consumer<Integer, String> consumer = new DefaultKafkaConsumerFactory<Integer, String>(consumerProps)
                .createConsumer();
        consumer.subscribe(Collections.singleton(TEST_TOPIC));
        return consumer;
    }

    private Producer<Integer, String> configureProducer() {
        Map<String, Object> producerProps = new HashMap<>(KafkaTestUtils.producerProps(embeddedKafkaBroker));
        return new DefaultKafkaProducerFactory<Integer, String>(producerProps).createProducer();
    }
}
I solved the issue now:
@BeforeClass
public static void setUpBeforeClass() {
    System.setProperty("spring.kafka.bootstrap-servers", embeddedKafka.getBrokersAsString());
    System.setProperty("spring.cloud.stream.kafka.binder.zkNodes", embeddedKafka.getZookeeperConnectionString());
}
While I was debugging, I saw that the embedded Kafka server takes a random port.
I couldn't find the configuration for it, so I set the Kafka config to match the server. It still looks a bit ugly to me.
I would love to have just the line @Mayur mentioned:
@EmbeddedKafka(partitions = 1, controlledShutdown = false, brokerProperties = {"listeners=PLAINTEXT://localhost:9092", "port=9092"})
but I can't find the right dependency on the internet.
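For reference, the @EmbeddedKafka annotation ships in the spring-kafka-test artifact; a sketch of the Maven coordinates (letting Spring Boot's dependency management pick the version is an assumption):

<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka-test</artifactId>
    <scope>test</scope>
</dependency>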
In integration testing, having fixed ports like 9092 is not recommended, because multiple tests should have the flexibility to open their own ports from embedded instances. So the following implementation does something like that.
NB: this implementation is based on JUnit 5 (Jupiter 5.7.0) and Spring Boot 2.3.4.RELEASE.
TestClass:
@EnableKafka
@SpringBootTest(classes = { ConsumerTest.Config.class, Consumer.class })
@EmbeddedKafka(
        partitions = 1,
        controlledShutdown = false)
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
public class ConsumerTest {

    @Autowired
    private EmbeddedKafkaBroker kafkaEmbedded;

    @Autowired
    private KafkaListenerEndpointRegistry kafkaListenerEndpointRegistry;

    @BeforeAll
    public void setUp() throws Exception {
        for (final MessageListenerContainer messageListenerContainer : kafkaListenerEndpointRegistry.getListenerContainers()) {
            ContainerTestUtils.waitForAssignment(messageListenerContainer, kafkaEmbedded.getPartitionsPerTopic());
        }
    }

    @Value("${topic.name}")
    private String topicName;

    @Autowired
    private KafkaTemplate<String, Optional<Map<String, List<ImmutablePair<String, String>>>>> requestKafkaTemplate;

    @Test
    public void consume_success() {
        requestKafkaTemplate.send(topicName, load); // 'load' is the test payload; its definition was omitted in the original post
    }

    @Configuration
    @Import({
            KafkaListenerConfig.class,
            TopicConfig.class
    })
    public static class Config {

        @Value(value = "${spring.kafka.bootstrap-servers}")
        private String bootstrapAddress;

        @Bean
        public ProducerFactory<String, Optional<Map<String, List<ImmutablePair<String, String>>>>> requestProducerFactory() {
            final Map<String, Object> configProps = new HashMap<>();
            configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
            configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
            return new DefaultKafkaProducerFactory<>(configProps);
        }

        @Bean
        public KafkaTemplate<String, Optional<Map<String, List<ImmutablePair<String, String>>>>> requestKafkaTemplate() {
            return new KafkaTemplate<>(requestProducerFactory());
        }
    }
}
Listener Class:
@Component
public class Consumer {

    @KafkaListener(
            topics = "${topic.name}",
            containerFactory = "listenerContainerFactory"
    )
    public void listener(
            final ConsumerRecord<String, Optional<Map<String, List<ImmutablePair<String, String>>>>> consumerRecord,
            final @Payload Optional<Map<String, List<ImmutablePair<String, String>>>> payload
    ) {
    }
}
Listener Config:
@Configuration
public class KafkaListenerConfig {

    @Value(value = "${spring.kafka.bootstrap-servers}")
    private String bootstrapAddress;

    @Value(value = "${topic.name}")
    private String resolvedTreeQueueName;

    @Bean
    public ConsumerFactory<String, Optional<Map<String, List<ImmutablePair<String, String>>>>> resolvedTreeConsumerFactory() {
        final Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, resolvedTreeQueueName);
        return new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(), new CustomDeserializer());
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, Optional<Map<String, List<ImmutablePair<String, String>>>>> resolvedTreeListenerContainerFactory() {
        final ConcurrentKafkaListenerContainerFactory<String, Optional<Map<String, List<ImmutablePair<String, String>>>>> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(resolvedTreeConsumerFactory());
        return factory;
    }
}
TopicConfig:
@Configuration
public class TopicConfig {

    @Value(value = "${spring.kafka.bootstrap-servers}")
    private String bootstrapAddress;

    @Value(value = "${topic.name}")
    private String requestQueue;

    @Bean
    public KafkaAdmin kafkaAdmin() {
        Map<String, Object> configs = new HashMap<>();
        configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
        return new KafkaAdmin(configs);
    }

    @Bean
    public NewTopic requestTopic() {
        return new NewTopic(requestQueue, 1, (short) 1);
    }
}
application.properties:
spring.kafka.bootstrap-servers=${spring.embedded.kafka.brokers}
This assignment is the most important one: it binds the embedded instance's port to the KafkaTemplate and the KafkaListeners.
Following the above implementation, you can open dynamic ports per test class, which is more convenient.
