@KafkaListener in Unit test case does not consume from the container factory - spring-boot

I wrote a JUnit test case to test the code in the "With Java Configuration" section of the Spring Kafka docs (https://docs.spring.io/spring-kafka/reference/htmlsingle/#_with_java_configuration). The one difference is that I am using an embedded Kafka server in the class instead of a localhost server. I am using Spring Boot 2.0.2 and its Spring Kafka dependency.
While running this test case, I see that the Consumer is not reading the message from the topic and the "assertTrue" check fails. There are no other errors.
@RunWith(SpringRunner.class)
public class SpringConfigSendReceiveMessage {

    public static final String DEMO_TOPIC = "demo_topic";

    @Autowired
    private Listener listener;

    @Test
    public void testSimple() throws Exception {
        template.send(DEMO_TOPIC, 0, "foo");
        template.flush();
        assertTrue(this.listener.latch.await(60, TimeUnit.SECONDS));
    }

    @Autowired
    private KafkaTemplate<Integer, String> template;

    @Configuration
    @EnableKafka
    public static class Config {

        @Bean
        public KafkaEmbedded kafkaEmbedded() {
            return new KafkaEmbedded(1, true, 1, DEMO_TOPIC);
        }

        @Bean
        public ConsumerFactory<Integer, String> createConsumerFactory() {
            Map<String, Object> props = new HashMap<>();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaEmbedded().getBrokersAsString());
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "group1");
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, IntegerDeserializer.class);
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
            return new DefaultKafkaConsumerFactory<>(props);
        }

        @Bean
        public ConcurrentKafkaListenerContainerFactory<Integer, String> kafkaListenerContainerFactory() {
            ConcurrentKafkaListenerContainerFactory<Integer, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
            factory.setConsumerFactory(createConsumerFactory());
            return factory;
        }

        @Bean
        public Listener listener() {
            return new Listener();
        }

        @Bean
        public ProducerFactory<Integer, String> producerFactory() {
            Map<String, Object> props = new HashMap<>();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaEmbedded().getBrokersAsString());
            props.put(ProducerConfig.RETRIES_CONFIG, 0);
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class);
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            return new DefaultKafkaProducerFactory<>(props);
        }

        @Bean
        public KafkaTemplate<Integer, String> kafkaTemplate() {
            return new KafkaTemplate<Integer, String>(producerFactory());
        }
    }
}

class Listener {

    public final CountDownLatch latch = new CountDownLatch(1);

    @KafkaListener(id = "foo", topics = DEMO_TOPIC)
    public void listen1(String foo) {
        this.latch.countDown();
    }
}
I think this is because the @KafkaListener is using some wrong/default setting when reading from the topic. I don't see any errors in the logs.
Is this unit test case correct? How can I find the object that is created for the @KafkaListener annotation and see which Kafka broker it consumes from? Any inputs will be helpful. Thanks.

The message is sent before the consumer starts.
By default, new consumers start consuming at the end of the topic.
Add the following to the consumer configuration:

props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
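For context, here is a minimal sketch of the question's consumer factory with that property added; everything else is unchanged from the bean above, only the auto.offset.reset line is new:

@Bean
public ConsumerFactory<Integer, String> createConsumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaEmbedded().getBrokersAsString());
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "group1");
    // Start from the beginning of the topic when the group has no committed offset yet.
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, IntegerDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    return new DefaultKafkaConsumerFactory<>(props);
}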

The answer by @gary-russell is the best solution. Another way to resolve this issue is to delay the send step so that the consumer has time to become ready. The following is also a correct solution.
Lesson learned: for unit testing Kafka consumers, either consume all the messages in the test case, or ensure that the consumer is ready before the producer sends the message.
@Test
public void testSimple() throws Exception {
    Thread.sleep(1000); // give the listener container time to start and be assigned its partition
    template.send(DEMO_TOPIC, 0, "foo");
    template.flush();
    assertTrue(this.listener.latch.await(60, TimeUnit.SECONDS));
}
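A more deterministic alternative to sleeping (not from the original answers, just a sketch against the question's setup) is to wait for partition assignment before sending, using spring-kafka-test's ContainerTestUtils together with the KafkaListenerEndpointRegistry; the listener id "foo" and the single partition match the configuration above:

@Autowired
private KafkaListenerEndpointRegistry registry;

@Test
public void testSimple() throws Exception {
    // Block until the "foo" listener container has been assigned its partition.
    ContainerTestUtils.waitForAssignment(registry.getListenerContainer("foo"), 1);
    template.send(DEMO_TOPIC, 0, "foo");
    template.flush();
    assertTrue(this.listener.latch.await(60, TimeUnit.SECONDS));
}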

Related

Getting ProducerFencedException on producing record in Kafka listener thread

I am getting this exception when producing a message inside the Kafka listener container.
javax.management.InstanceAlreadyExistsException: kafka.producer:type=app-info,id=producer-tx-group.topicA.1
org.apache.kafka.common.errors.ProducerFencedException: The producer has been rejected from the broker because it tried to use an old epoch with the transactionalId
My listener looks like this (pseudocode):

@Transactional
@KafkaListener(...)
public void listener(topicA, message) {
    process(message);
    produce(topicB, notification); // use KafkaTemplate to send the message
}
My configuration looks like this:

@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory(KafkaTransactionManager kafkaTransactionManager) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.getContainerProperties().setTransactionManager(kafkaTransactionManager);
    return factory;
}

public ProducerFactory<String, Object> producerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, enableIdempotence);
    DefaultKafkaProducerFactory<String, Object> factory = new DefaultKafkaProducerFactory<>(props);
    factory.setTransactionIdPrefix(transactionIdPrefix);
    return factory;
}

@Bean
public KafkaTemplate<String, Object> kafkaTemplate() {
    KafkaTemplate<String, Object> template = new KafkaTemplate<>(producerFactory());
    return template;
}

@Bean
public KafkaTransactionManager kafkaTransactionManager() {
    KafkaTransactionManager manager = new KafkaTransactionManager(producerFactory());
    return manager;
}
I know when a ProducerFencedException is thrown by Kafka, but what I am trying to figure out is where the second producer with the same transactional.id comes from.
If I set a unique transaction prefix on the KafkaTemplate, it works fine:
@Bean
public KafkaTemplate<String, Object> kafkaTemplate() {
    KafkaTemplate<String, Object> template = new KafkaTemplate<>(producerFactory());
    template.setTransactionIdPrefix(MessageFormat.format("{0}-{1}", transactionIdPrefix, UUID.randomUUID().toString()));
    return template;
}
But I am trying to understand the exception: where is the other producer with the same transactional.id being started, given that listener-started transactions follow the group.id/topic/partition pattern described in the Spring docs?
I am just trying this locally on a single application instance.
I found the root cause: I was creating two producer instances here:
public ProducerFactory<String, Object> producerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, enableIdempotence);
    DefaultKafkaProducerFactory<String, Object> factory = new DefaultKafkaProducerFactory<>(props);
    factory.setTransactionIdPrefix(transactionIdPrefix);
    return factory;
}
I was missing the @Bean annotation on this method.
Adding @Bean to the producer factory and properly autowiring it into the template and the transaction manager fixed the issue.
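For reference, a minimal sketch of the corrected configuration under the same assumptions (the bootstrapServers, enableIdempotence and transactionIdPrefix fields from the snippets above): the template and the transaction manager now share the single container-managed factory instead of each calling producerFactory() and getting their own instance:

@Bean
public ProducerFactory<String, Object> producerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, enableIdempotence);
    DefaultKafkaProducerFactory<String, Object> factory = new DefaultKafkaProducerFactory<>(props);
    factory.setTransactionIdPrefix(transactionIdPrefix);
    return factory;
}

@Bean
public KafkaTemplate<String, Object> kafkaTemplate(ProducerFactory<String, Object> producerFactory) {
    // Shares the same factory (and therefore the same transactional producer pool) as the transaction manager.
    return new KafkaTemplate<>(producerFactory);
}

@Bean
public KafkaTransactionManager<String, Object> kafkaTransactionManager(ProducerFactory<String, Object> producerFactory) {
    return new KafkaTransactionManager<>(producerFactory);
}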

Spring Boot KafkaTemplate and KafkaListener test with EmbeddedKafka fails

I have two Spring Boot apps: one is a Kafka publisher and the other is a consumer. I am trying to write an integration test to make sure that events are sent and received.
The test is green when run in the IDE or from the command line without other tests, like mvn test -Dtest=KafkaPublisherTest. However, when I build the whole project, the test fails with org.awaitility.core.ConditionTimeoutException. There are multiple @EmbeddedKafka tests in the project.
The test gets stuck after these lines in logs:
2021-11-30 09:17:12.366 INFO 1437 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer : wages-consumer-test: partitions assigned: [wages-test-0, wages-test-1]
2021-11-30 09:17:14.464 INFO 1437 --- [er-event-thread] kafka.controller.KafkaController : [Controller id=0] Processing automatic preferred replica leader election
If you have a better idea on how to test such things, please share.
Here is what the test looks like:
@SpringBootTest(properties = { "kafka.wages-topic.bootstrap-address=${spring.embedded.kafka.brokers}" })
@EmbeddedKafka(partitions = 1, topics = "${kafka.wages-topic.name}")
class KafkaPublisherTest {

    @Autowired
    private TestWageProcessor testWageProcessor;

    @Autowired
    private KafkaPublisher kafkaPublisher;

    @Autowired
    private KafkaTemplate<String, WageEvent> kafkaTemplate;

    @Test
    void publish() {
        Date date = new Date();
        WageCreateDto wageCreateDto = new WageCreateDto().setName("test").setSurname("test").setWage(BigDecimal.ONE).setEventTime(date);
        kafkaPublisher.publish(wageCreateDto);
        kafkaTemplate.flush();
        WageEvent expected = new WageEvent().setName("test").setSurname("test").setWage(BigDecimal.ONE).setEventTimeMillis(date.toInstant().toEpochMilli());
        await()
                .atLeast(Duration.ONE_HUNDRED_MILLISECONDS)
                .atMost(Duration.TEN_SECONDS)
                .with()
                .pollInterval(Duration.ONE_HUNDRED_MILLISECONDS)
                .until(testWageProcessor::getLastReceivedWageEvent, equalTo(expected));
    }
}
Publisher config:
@Configuration
@EnableConfigurationProperties(WagesTopicPublisherProperties.class)
public class KafkaConfiguration {

    @Bean
    public KafkaAdmin kafkaAdmin(WagesTopicPublisherProperties wagesTopicPublisherProperties) {
        Map<String, Object> configs = new HashMap<>();
        configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, wagesTopicPublisherProperties.getBootstrapAddress());
        return new KafkaAdmin(configs);
    }

    @Bean
    public NewTopic wagesTopic(WagesTopicPublisherProperties wagesTopicPublisherProperties) {
        return new NewTopic(wagesTopicPublisherProperties.getName(), wagesTopicPublisherProperties.getPartitions(), wagesTopicPublisherProperties.getReplicationFactor());
    }

    @Primary
    @Bean
    public WageEventSerde wageEventSerde() {
        return new WageEventSerde();
    }

    @Bean
    public ProducerFactory<String, WageEvent> producerFactory(WagesTopicPublisherProperties wagesTopicPublisherProperties, WageEventSerde wageEventSerde) {
        Map<String, Object> configProps = new HashMap<>();
        configProps.put(
                ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
                wagesTopicPublisherProperties.getBootstrapAddress());
        configProps.put(
                ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class);
        configProps.put(
                ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                wageEventSerde.serializer().getClass());
        return new DefaultKafkaProducerFactory<>(configProps);
    }

    @Bean
    public KafkaTemplate<String, WageEvent> kafkaTemplate(ProducerFactory<String, WageEvent> producerFactory) {
        return new KafkaTemplate<>(producerFactory);
    }
}
Consumer config:
@Configuration
@EnableConfigurationProperties(WagesTopicConsumerProperties.class)
public class ConsumerConfiguration {

    @ConditionalOnMissingBean(WageEventSerde.class)
    @Bean
    public WageEventSerde wageEventSerde() {
        return new WageEventSerde();
    }

    @Bean
    public ConsumerFactory<String, WageEvent> wageConsumerFactory(WagesTopicConsumerProperties wagesTopicConsumerProperties, WageEventSerde wageEventSerde) {
        Map<String, Object> props = new HashMap<>();
        props.put(
                ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,
                wagesTopicConsumerProperties.getBootstrapAddress());
        props.put(
                ConsumerConfig.GROUP_ID_CONFIG,
                wagesTopicConsumerProperties.getGroupId());
        props.put(
                ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                StringDeserializer.class);
        props.put(
                ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                wageEventSerde.deserializer().getClass());
        return new DefaultKafkaConsumerFactory<>(
                props,
                new StringDeserializer(),
                wageEventSerde.deserializer());
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, WageEvent> wageEventConcurrentKafkaListenerContainerFactory(ConsumerFactory<String, WageEvent> wageConsumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, WageEvent> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(wageConsumerFactory);
        return factory;
    }
}
@KafkaListener(
        topics = "${kafka.wages-topic.name}",
        containerFactory = "wageEventConcurrentKafkaListenerContainerFactory")
public void consumeWage(WageEvent wageEvent) {
    log.info("Wage event received: " + wageEvent);
    wageProcessor.process(wageEvent);
}
Here is the project source code: https://github.com/aleksei17/springboot-rest-kafka-mysql
Here are the logs of failed build: https://drive.google.com/file/d/1uE2w8rmJhJy35s4UJXf4_ON3hs9JR6Au/view?usp=sharing
When I used Testcontainers Kafka instead of @EmbeddedKafka, the issue was solved. The tests looked like this:

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
class PublisherApplicationTest {

    public static final KafkaContainer kafka =
            new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka").withTag("5.4.3"));

    static {
        kafka.start();
        System.setProperty("kafka.wages-topic.bootstrap-address", kafka.getBootstrapServers());
    }
However, I cannot say I understand the issue. When I used a singleton pattern as described here, I had the same issue. Maybe something like @DirtiesContext could help: it helped to fix one test at work, but not in this learning project.
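As a side note (not from the original answer), recent Spring Boot versions let you register the same property with @DynamicPropertySource instead of a static System.setProperty call; a sketch assuming the same kafka.wages-topic.bootstrap-address property and the Testcontainers JUnit 5 extension:

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@Testcontainers
class PublisherApplicationTest {

    @Container
    static final KafkaContainer kafka =
            new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka").withTag("5.4.3"));

    @DynamicPropertySource
    static void kafkaProperties(DynamicPropertyRegistry registry) {
        // Resolved lazily, after the container has started.
        registry.add("kafka.wages-topic.bootstrap-address", kafka::getBootstrapServers);
    }
}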

Listener not consuming message on Test with EmbeddedKafka Spring

I was trying to see if a service is invoked when the consumer receives a message from a Kafka topic, but the test is not passing and the consumer is not even receiving the message.
My test:
@SpringBootTest
@DirtiesContext
@EmbeddedKafka(partitions = 1, brokerProperties = { "listeners=PLAINTEXT://localhost:9092", "port=9092" })
class ConsumerTest {

    @Autowired
    Consumer consumer;

    @Mock
    private Service service;

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    @Test
    public void givenEmbeddedKafkaBroker_whenSendingtoSimpleProducer_thenMessageReceived()
            throws Exception {
        String message = "hi";
        kafkaTemplate.send("topic", message);
        consumer.getLatch().await(10000, TimeUnit.MILLISECONDS);
        verify(service, times(1)).addMessage();
    }
}
The consumer, in the main package, is a normal consumer with @KafkaListener(topics = "topic").
Then I have a configuration file:
@EnableKafka
@Configuration
public class KafkaConfig {

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "group");
        return props;
    }

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs(), new StringDeserializer(),
                new StringDeserializer());
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        return factory;
    }
}
And I also put this in application.properties (inside the test package):

spring:
  kafka:
    consumer:
      auto-offset-reset: earliest
      group-id: group

Exactly Once Two Kafka Clusters

I have two Kafka clusters, Cluster A and Cluster B. These clusters are completely separate.
I have a Spring Boot application that listens to a topic on Cluster A, transforms the event, and then produces it onto Cluster B. I require exactly-once semantics, as these are financial events. I have noticed that with my current application I sometimes get duplicates as well as miss some events. I have tried to implement exactly-once as best I could. One of the posts said Flink would be a better option than Spring Boot. Should I move over to Flink? Please see the Spring code below.
ConsumerConfig
@Configuration
public class KafkaConsumerConfig {

    @Value("${kafka.server.consumer}")
    String server;

    @Value("${kafka.kerberos.service.name:}")
    String kerberosServiceName;

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> config = new HashMap<>();
        config.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, server);
        config.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        config.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30000);
        config.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        config.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, AvroDeserializer.class);
        config.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
        // skip kerberos if no value is provided
        if (kerberosServiceName.length() > 0) {
            config.put("security.protocol", "SASL_PLAINTEXT");
            config.put("sasl.kerberos.service.name", kerberosServiceName);
        }
        return config;
    }

    @Bean
    public ConsumerFactory<String, AccrualSchema> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs(), new StringDeserializer(),
                new AvroDeserializer<>(AccrualSchema.class));
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, AccrualSchema> kafkaListenerContainerFactory(ConsumerErrorHandler errorHandler) {
        ConcurrentKafkaListenerContainerFactory<String, AccrualSchema> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setAutoStartup(true);
        factory.getContainerProperties().setAckMode(AckMode.RECORD);
        factory.setErrorHandler(errorHandler);
        return factory;
    }

    @Bean
    public KafkaConsumerAccrual receiver() {
        return new KafkaConsumerAccrual();
    }
}
ProducerConfig
@Configuration
public class KafkaProducerConfig {

    @Value("${kafka.server.producer}")
    String server;

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> config = new HashMap<>();
        config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, server);
        config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        config.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        config.put(ProducerConfig.ACKS_CONFIG, "all");
        config.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "prod-1");
        config.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy");
        config.put(ProducerConfig.LINGER_MS_CONFIG, "10");
        config.put(ProducerConfig.BATCH_SIZE_CONFIG, Integer.toString(32 * 1024));
        return new DefaultKafkaProducerFactory<>(config);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}
KafkaProducer
@Service
public class KafkaTopicProducer {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void topicProducer(String payload, String topic) {
        kafkaTemplate.executeInTransaction(kt -> kt.send(topic, payload));
    }
}
KafkaConsumer
public class KafkaConsumerAccrual {

    @Autowired
    KafkaTopicProducer kafkaTopicProducer;

    @Autowired
    Gson gson;

    @KafkaListener(topics = "topicname", groupId = "groupid", id = "listenerid")
    public void consume(AccrualSchema accrual,
            @Header(KafkaHeaders.RECEIVED_PARTITION_ID) Integer partition, @Header(KafkaHeaders.OFFSET) Long offset,
            @Header(KafkaHeaders.CONSUMER) Consumer<?, ?> consumer) {
        AccrualEntity accrualEntity = convertBusinessObjectToAccrual(accrual, partition, offset);
        kafkaTopicProducer.topicProducer(gson.toJson(accrualEntity, AccrualEntity.class), accrualTopic);
    }

    public AccrualEntity convertBusinessObjectToAccrual(AccrualSchema ao, Integer partition,
            Long offset) {
        // Transform code goes here
        return ae;
    }
}
Exactly-once semantics are not supported across clusters; the key is producing the records and committing the consumed offset(s) within one atomic transaction, and that is not possible in your environment since a transaction can't span two clusters.
According to this draft KIP:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-656%3A+MirrorMaker2+Exactly-once+Semantics
the best you can expect right now is at-least-once semantics between two clusters.
This implies that you have to protect against duplicates on the consumer side of the target cluster.
As an example, you can identify a set of properties that must be unique for a specific window of time, but it will really depend on your use case.
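Purely as an illustration of that last suggestion (not part of the original answer), here is a hedged sketch of time-windowed de-duplication on the consuming side of the target cluster; the idea of a businessKey extracted from each event is hypothetical:

import java.util.LinkedHashMap;
import java.util.Map;

public class WindowedDeduplicator {

    private final Map<String, Long> seen = new LinkedHashMap<>();
    private final long windowMillis;

    public WindowedDeduplicator(long windowMillis) {
        this.windowMillis = windowMillis;
    }

    /** Returns true only the first time a business key is seen within the window. */
    public synchronized boolean firstSeen(String businessKey) {
        long now = System.currentTimeMillis();
        // Evict keys that have fallen outside the de-duplication window.
        seen.entrySet().removeIf(e -> now - e.getValue() > windowMillis);
        return seen.putIfAbsent(businessKey, now) == null;
    }
}

The target-side consumer would then call firstSeen(key) and skip processing when it returns false.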

How to use "li-apache-kafka-clients" in spring boot app to send large message (above 1MB) from Kafka producer?

How can I use li-apache-kafka-clients in a Spring Boot app to send large messages (above 1 MB) from a Kafka producer to a Kafka consumer? Below is the GitHub link of li-apache-kafka-clients:
https://github.com/linkedin/li-apache-kafka-clients
I have imported the .jar file of li-apache-kafka-clients and set the configuration below for the producer:
props.put("large.message.enabled", "true");
props.put("max.message.segment.bytes", 1000 * 1024);
props.put("segment.serializer", DefaultSegmentSerializer.class.getName());
and for the consumer:
message.assembler.buffer.capacity,
max.tracked.messages.per.partition,
exception.on.message.dropped,
segment.deserializer.class
but I am still getting an error for large messages. Please help me solve it.
Below is my code, please let me know where I need to create LiKafkaProducer:
@Configuration
public class KafkaProducerConfig {

    @Value("${kafka.boot.server}")
    private String kafkaServer;

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        return new DefaultKafkaProducerFactory<>(producerConfig());
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<String, String>(producerFactory());
    }

    @Bean
    public Map<String, Object> producerConfig() {
        Map<String, Object> config = new HashMap<>();
        config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        config.put("bootstrap.servers", "localhost:9092");
        config.put("acks", "all");
        config.put("retries", 0);
        config.put("batch.size", 16384);
        config.put("linger.ms", 1);
        config.put("buffer.memory", 33554432);
        // The following properties are used by LiKafkaProducerImpl
        config.put("large.message.enabled", "true");
        config.put("max.message.segment.bytes", 1000 * 1024);
        config.put("segment.serializer", DefaultSegmentSerializer.class.getName());
        config.put("auditor.class", LoggingAuditor.class.getName());
        return config;
    }
}
@RestController
@RequestMapping("/kafkaProducer")
public class KafkaProducerController {

    @Autowired
    private KafkaSender sender;

    @PostMapping
    public ResponseEntity<List<Student>> sendData(@RequestBody List<Student> student) {
        sender.sendData(student);
        return new ResponseEntity<List<Student>>(student, HttpStatus.OK);
    }
}
@Service
public class KafkaSender {

    private static final Logger LOGGER = LoggerFactory.getLogger(KafkaSender.class);

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    @Value("${kafka.topic.name}")
    private String topicName;

    public void sendData(List<Student> student) {
        Map<String, Object> headers = new HashMap<>();
        headers.put(KafkaHeaders.TOPIC, topicName);
        headers.put("payload", student.get(0));

        // Construct a JSONObject from a Map.
        JSONObject HeaderObject = new JSONObject(headers);
        System.out.println("\nMethod-2: Using new JSONObject() ==> " + HeaderObject);
        final String record = HeaderObject.toString();

        Message<String> message = MessageBuilder.withPayload(record).setHeader(KafkaHeaders.TOPIC, topicName)
                .setHeader(KafkaHeaders.MESSAGE_KEY, "Message")
                .build();
        kafkaTemplate.send(topicName, message.toString());
    }
}
You would need to implement your own ConsumerFactory and ProducerFactory to create the LiKafkaConsumer and LiKafkaProducer respectively.
You should be able to subclass the default factories provided by the framework.
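As a hedged sketch of that idea (the LiKafkaProducerImpl constructor is assumed from the li-apache-kafka-clients README, so verify it against the version you use), a custom ProducerFactory could hand Spring's KafkaTemplate a LiKafkaProducer instead of the default producer:

import java.util.Map;
import org.apache.kafka.clients.producer.Producer;
import org.springframework.kafka.core.ProducerFactory;
import com.linkedin.kafka.clients.producer.LiKafkaProducerImpl;

public class LiKafkaProducerFactory implements ProducerFactory<String, String> {

    private final Map<String, Object> configs;

    public LiKafkaProducerFactory(Map<String, Object> configs) {
        this.configs = configs;
    }

    @Override
    public Producer<String, String> createProducer() {
        // LiKafkaProducerImpl splits large messages into segments on send.
        return new LiKafkaProducerImpl<>(configs);
    }
}

The kafkaTemplate() bean would then be built from this factory, e.g. new KafkaTemplate<>(new LiKafkaProducerFactory(producerConfig())), and a matching ConsumerFactory would create the LiKafkaConsumer the same way so the segments can be reassembled on the receiving side.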
