How can I read a message from a Kafka topic on demand? I have the topic name, the offset, and the partition ID; using these three parameters, how can I retrieve a specific message from the topic? Is it possible using Spring Kafka?
I am using Spring Boot 2.2.4.RELEASE.
Create a consumer
Assign the topic/partition
Seek to the offset
Poll for one record
Close the consumer
@SpringBootApplication
public class So64759726Application {

    public static void main(String[] args) {
        SpringApplication.run(So64759726Application.class, args);
    }

    @Bean
    ApplicationRunner runner(ConsumerFactory<String, String> cf) {
        return args -> {
            // create a consumer (closed automatically by try-with-resources)
            try (Consumer<String, String> consumer = cf.createConsumer()) {
                TopicPartition tp = new TopicPartition("so64759726", 0);
                consumer.assign(Collections.singleton(tp)); // assign the topic/partition
                consumer.seek(tp, 2); // seek to the desired offset
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                System.out.println(records.iterator().next().value());
            }
        };
    }

}
application.properties
spring.kafka.consumer.max-poll-records=1
UPDATE
Since this answer was posted, the KafkaTemplate now has receive() methods for on-demand consumption.
https://docs.spring.io/spring-kafka/docs/current/reference/html/#kafka-template-receive
ConsumerRecord<K, V> receive(String topic, int partition, long offset);
ConsumerRecord<K, V> receive(String topic, int partition, long offset, Duration pollTimeout);
ConsumerRecords<K, V> receive(Collection<TopicPartitionOffset> requested);
ConsumerRecords<K, V> receive(Collection<TopicPartitionOffset> requested, Duration pollTimeout);
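A minimal usage sketch (assuming Spring Kafka 2.8 or later and a KafkaTemplate whose consumer factory has been set, e.g. via setConsumerFactory(); the topic, partition, and offset here are illustrative):

// fetch the single record at partition 0, offset 2; null if nothing arrives in time
ConsumerRecord<String, String> record =
        template.receive("so64759726", 0, 2L, Duration.ofSeconds(5));
if (record != null) {
    System.out.println(record.value());
}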
Related
I have implemented a Spring KafkaTemplate producer in my Spring Boot project. The code for producing an event is given below.
Producer Config:
@Bean
public Map<String, Object> producerConfigs() throws FileNotFoundException {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaProperties.getBootstrapServers());
    props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, kafkaProperties.getSecurity().getProtocol());
    props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, ResourceUtils.getFile("classpath:client.truststoreks").getAbsolutePath());
    props.put(SslConfigs.SSL_ENDPOINT_IDENTIFICATION_ALGORITHM_CONFIG, StringUtils.EMPTY);
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    props.put(ProducerConfig.LINGER_MS_CONFIG, "100");
    return props;
}
Producer Service Code:
public class KafkaProducerService<V> implements KafkaProducer<V> {

    @Autowired
    KafkaTemplate<String, V> kafkaTemplate;

    @Autowired
    KafkaTemplate<String, V> transactionLogKafkaTemplate;

    public KafkaProducerService(KafkaTemplate<String, V> kafkaTemplate, KafkaTemplate<String, V> transactionLogKafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
        this.transactionLogKafkaTemplate = transactionLogKafkaTemplate;
    }

    @Override
    @Retryable({KafkaException.class, TimeoutException.class})
    public void produce(String topic, String key, V value) {
        log.info("Calling producer service for producing event on topic " + topic);
        sendCallbackEvents(kafkaTemplate, topic, key, value);
    }

    private void sendCallbackEvents(KafkaTemplate<String, V> kafkaTemplate, String topic, String key, V value) {
        ProducerRecord<String, V> producerRecord = new ProducerRecord<>(topic, key, value);
        ListenableFuture<SendResult<String, V>> future = kafkaTemplate.send(producerRecord);
        future.addCallback(new ListenableFutureCallback<SendResult<String, V>>() {

            @Override
            public void onSuccess(SendResult<String, V> result) {
                log.info(String.format("Produced event to topic %s: key = %-10s value = %s", topic, key, value));
            }

            @Override
            public void onFailure(Throwable ex) {
                log.error("Producing data on topic {} failed", topic, ex.getCause());
            }
        });
    }
}
P.S.: We are using AWS MSK as the broker for producing events.
But in some cases it takes a full minute to produce an event, and the first attempt fails with the below error in the logs:
ERROR LogAccessor - Exception thrown when sending a message with key='xx' and payload='Event(key=value)' to topic topicName:
The event is eventually produced thanks to the retry logic in the producer service, but that one-minute delay is causing several issues.
I tried to find the reason for the delay and failure by going through the Spring Kafka dependency classes, but had no luck.
I cannot work out why the producer is delayed and fails on the first attempt in some cases. Can anyone help me identify the reason and a solution?
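One hedged observation, not a confirmed diagnosis: a delay of almost exactly one minute matches the Kafka producer's default max.block.ms of 60000 ms, the time send() blocks waiting for metadata (or buffer space) before failing, so a slow or failing TLS/metadata exchange with the MSK brokers would only surface after that timeout. A sketch of the standard knobs one might tune while diagnosing (values illustrative, added to the producerConfigs() above):

props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "10000");        // fail fast instead of blocking for the default 60 s
props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, "15000");  // per-request timeout for in-flight requests
props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, "30000"); // overall bound on send() plus retries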
I would like to create a Spring Boot application that reads from several Kafka topics. I realise I can create a comma-separated list of topics in my application.properties; however, I would like the topic names to be listed separately for readability, and so I can use each topic name to work out how to process the message.
I've found the following questions, but they all have the topics listed as a comma-separated array:
Consume multiple topics in one listener in spring boot kafka
Using multiple topic names with KafkaListener annotation
Enabling #KafkaListener to take in variable topic names from application.yml file
Pass array list of topic names to #KafkaListener
The closest I've come is with the following:
application.properties
kafka.topic1=topic1
kafka.topic2=topic2
KafkaConsumer
@KafkaListener(topics = "#{'${kafka.topic1}'},#{'${kafka.topic2}'}")
public void receive(@Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
        @Header(required = false, name = KafkaHeaders.RECEIVED_MESSAGE_KEY) String key,
        @Payload(required = false) String payload) throws IOException {
}
This gives the error:
Caused by: org.apache.kafka.common.errors.InvalidTopicException: Invalid topics: [topic1,topic2]
I realise I need it to be {"topic1", "topic2"} but I can't work out how.
Using the annotation @KafkaListener(topics = "#{'${kafka.topic1}'}") correctly subscribes to the first topic, and if I change it to @KafkaListener(topics = "#{'${kafka.topic2}'}") I can correctly subscribe to the second topic.
It's just the creation of the array of topics in the annotation that I can't fathom.
Any help would be wonderful
Property placeholders are resolved in each element of the topics array, so you can simply list them:
@KafkaListener(id = "so71497475", topics = { "${kafka.topic1}", "${kafka.topic2}" })
EDIT
And here is a more sophisticated technique that allows you to add more topics without changing any code:
@SpringBootApplication
@EnableConfigurationProperties
public class So71497475Application {

    public static void main(String[] args) {
        SpringApplication.run(So71497475Application.class, args);
    }

    @KafkaListener(id = "so71497475", topics = "#{@myProps.kafkaTopics}")
    void listen(String in) {
        System.out.println(in);
    }

    @Bean // This will add the topics to the broker if not present
    KafkaAdmin.NewTopics topics(MyProps props) {
        return new KafkaAdmin.NewTopics(props.getTopics().stream()
                .map(t -> TopicBuilder.name(t).partitions(1).replicas(1).build())
                .toArray(size -> new NewTopic[size]));
    }

}
@ConfigurationProperties("my.kafka")
@Component
class MyProps {

    private List<String> topics = new ArrayList<>();

    public List<String> getTopics() {
        return this.topics;
    }

    public void setTopics(List<String> topics) {
        this.topics = topics;
    }

    public String[] getKafkaTopics() {
        return this.topics.toArray(new String[0]);
    }

}
my.kafka.topics[0]=topic1
my.kafka.topics[1]=topic2
my.kafka.topics[2]=topic3
so71497475: partitions assigned: [topic1-0, topic2-0, topic3-0]
If you have your topics configured as a comma-separated list, like:
kafka.topics = topic1,topic2
In this case you can simply use:
@KafkaListener(topics = "#{'${kafka.topics}'.split(',')}")
void listen() {
}
This is my application, which consumes data from a Kafka topic and then sends computed results to another topic.
@SpringBootApplication
@EnableBinding(KStreamProcessor.class)
public class WordCountProcessorApplication {

    @StreamListener("input")
    @SendTo("output")
    public KStream<?, WordCount> process(KStream<?, String> input) {
        return input
                .flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
                .groupBy((key, value) -> value)
                .windowedBy(TimeWindows.of(5000))
                .count(Materialized.as("WordCounts-multi"))
                .toStream()
                .map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value, new Date(key.window().start()), new Date(key.window().end()))));
    }

    public static void main(String[] args) {
        SpringApplication.run(WordCountProcessorApplication.class, args);
    }

}
How can I print a log line before each record is consumed from the Kafka topic?
Add a Transformer at the beginning and end of your topology.
See this discussion, where there was a request for the framework to automatically add custom transformers to the topology; it was decided that the workaround of adding your own is sufficient.
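A minimal sketch of the idea (my own illustration, not code from the linked discussion): a pass-through ValueTransformerWithKey that logs each record as it enters the topology, placed before the existing operations (the Object key type is an assumption; adapt it to your key type):

return input
        .transformValues(() -> new ValueTransformerWithKey<Object, String, String>() {

            private ProcessorContext context;

            @Override
            public void init(ProcessorContext context) {
                this.context = context;
            }

            @Override
            public String transform(Object key, String value) {
                // log before the record flows into the rest of the topology
                System.out.println("Consuming from " + context.topic() + ": " + value);
                return value; // pass the value through unchanged
            }

            @Override
            public void close() {
            }
        })
        // ... then the existing flatMapValues / groupBy / count operations as before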
I have a spring-kafka microservice to which I recently added a dead letter topic so that I can send it the various error messages:
//some code..
@Component
public class KafkaProducer {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void sendDeadLetter(String message) {
        kafkaTemplate.send("myDeadLetter", message);
    }
}
I would like the dead letter topic to be named "messageTopic" + "_deadLetter", my main topic being "messageTopic". In my consumer, the topic name comes from application.yml as follows:
@KafkaListener(topics = "${spring.kafka.topic.name}")
How can I build the dead letter topic name from the same property, appending "_deadLetter" to the value from application.yml? I tried something like this:
@Component
@KafkaListener(topics = "${spring.kafka.topic.name}" + "_deadLetter")
public class KafkaProducer {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void sendDeadLetter(String message) {
        kafkaTemplate.send("messageTopic_deadLetter", message);
    }
}
but it creates two different topics with the same name for me. Any advice is welcome; thanks for the help!
@KafkaListener accepts a constant for the topic name; you can't modify the topic name there.
Ideally, go with separate methods (Kafka listeners) for the actual topic and the dead letter topic, and define two different properties in YAML to hold the two topic names (a sample application.yml follows the listeners below).
@KafkaListener(topics = "${spring.kafka.topic.name}")
public void listen(......) {
}

@KafkaListener(topics = "${spring.kafka.deadletter.topic.name}")
public void listenDlt(......) {
}
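For example, in application.yml (property names assumed to match the placeholders above; the values are illustrative):

spring:
  kafka:
    topic:
      name: messageTopic
    deadletter:
      topic:
        name: messageTopic_deadLetter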
To refer to the topic name inside send(...) from the yml or properties file:
@Component
@KafkaListener(topics = "${spring.kafka.deadletter.topic.name}")
public class KafkaProducer {

    @Value("${spring.kafka.deadletter.topic.name}")
    private String DLT_TOPIC_NAME;

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void sendDeadLetter(String message) {
        kafkaTemplate.send(DLT_TOPIC_NAME, message);
    }
}
You can construct the topic name with SpEL:
@KafkaListener(topics = "#{'${spring.kafka.topic.name}' + '_deadLetter'}")
Note the single quotes around the property placeholder and literal.
This example may not be relevant to your use case, but sharing in case it's helpful to someone.
If you are building a Kafka Streams application, variable sink topic names can be achieved as follows:
When producing to the sink topic, pass a lambda that takes the record context as an argument and delegates to the method that will handle the name definition.
... /* preceding stream operations */
// terminal operation 'to'
.to(
        (k, v, ctx) -> sinkTopicNameGenerator(ctx),
        Produced.with(keySerde, valueSerde) // your key/value serdes
);
Implement the method that generates the sink topic names:
protected static String sinkTopicNameGenerator(RecordContext ctx) {
    return ctx.topic().concat("_deadLetter");
}
The above example is simple enough to be reduced to (k, v, ctx) -> ctx.topic().concat("_deadLetter"), but I wanted to keep the separate-method approach for cases where further transformations are required, e.g. when part of the topic name is replaced by some constant or regex defined in the config file.
I am creating a Spring Kafka producer under Spring Boot that will send data to Kafka and then write to a database; I want all of that work to be in one transaction. I am new to Kafka and no expert on Spring, so any pointers are much appreciated.
So far my code writes to Kafka successfully in a loop. I have not yet set up the DB, but have proceeded to set up global transactions by adding a transactionIdPrefix to the producer factory in the configuration:
producerFactory.setTransactionIdPrefix("MY_SERVER");
and added @Transactional to the method that does the Kafka send. Eventually I plan to do my DB work in that same method.
Problem: the code runs great the first time. But if I stop the program, even cleanly, the code hangs the second time I run it, as soon as it enters the @Transactional method. If I comment out @Transactional, it enters the method but hangs on the KafkaTemplate send().
The problem seems to be the transaction ID. If I change the prefix and rerun, the program runs fine again the first time but hangs when I run it again, until a new prefix is chosen. Since the transaction ID counter starts at zero after a restart, the same transaction ID will be reused on restart whenever the prefix does not change.
It seems to me that the original transaction ID is still open on the server and was never committed. (I can read the data off the topic using the console consumer, but that reads uncommitted records.) If that is the case, how do I get Spring to commit the transaction? I suspect my configuration must be wrong. Or is the issue that transaction IDs can never be reused? (In which case, how does one solve that?)
Here is my relevant code. Config is:
@SpringBootApplication
public class MYApplication {

    @Autowired
    private static ChangeSweeper changeSweeper;

    @Value("${kafka.bootstrap-servers}")
    private String bootstrapServers;

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> configProps = new HashMap<>();
        configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        DefaultKafkaProducerFactory<String, String> producerFactory = new DefaultKafkaProducerFactory<>(configProps);
        producerFactory.setTransactionIdPrefix("MY_SERVER");
        return producerFactory;
    }

    @Bean
    public KafkaTransactionManager<String, String> KafkaTransactionManager() {
        return new KafkaTransactionManager<String, String>(producerFactory());
    }

    @Bean(name = "kafkaProducerTemplate")
    public KafkaTemplate<String, String> kafkaProducerTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}
And the method that does the transaction is:
@Transactional
public void send(final List<Record> records) {
    logger.debug("sending {} records; batchSize={}; topic={}", records.size(), batchSize, kafkaTopic);
    // Divide the record set into batches of size batchSize and send each batch with a Kafka transaction:
    for (int batchStartIndex = 0; batchStartIndex < records.size(); batchStartIndex += batchSize) {
        int batchEndIndex = Math.min(records.size(), batchStartIndex + batchSize); // exclusive end for subList
        List<Record> nextBatch = records.subList(batchStartIndex, batchEndIndex);
        logger.debug("## batch is from " + batchStartIndex + " to " + batchEndIndex);
        for (Record record : nextBatch) {
            kafkaProducerTemplate.send(kafkaTopic, record.getKey().toString(), record.getData().toString());
            logger.debug("Sending> " + record);
        }
        // I will put the DB writes here
    }
}
This works fine for me no matter how many times I run it (but I have to run 3 broker instances on my local machine, because the transaction state log requires a replication factor of 3 by default)...
@SpringBootApplication
@EnableTransactionManagement
public class So47817034Application {

    public static void main(String[] args) {
        SpringApplication.run(So47817034Application.class, args).close();
    }

    private final CountDownLatch latch = new CountDownLatch(2);

    @Bean
    public ApplicationRunner runner(Foo foo) {
        return args -> {
            foo.send("foo");
            foo.send("bar");
            this.latch.await(10, TimeUnit.SECONDS);
        };
    }

    @Bean
    public KafkaTransactionManager<Object, Object> KafkaTransactionManager(KafkaProperties properties) {
        return new KafkaTransactionManager<Object, Object>(kafkaProducerFactory(properties));
    }

    @Bean
    public ProducerFactory<Object, Object> kafkaProducerFactory(KafkaProperties properties) {
        DefaultKafkaProducerFactory<Object, Object> factory =
                new DefaultKafkaProducerFactory<Object, Object>(properties.buildProducerProperties());
        factory.setTransactionIdPrefix("foo-");
        return factory;
    }

    @KafkaListener(id = "foo", topics = "so47817034")
    public void listen(String in) {
        System.out.println(in);
        this.latch.countDown();
    }

    @Component
    public static class Foo {

        @Autowired
        private KafkaTemplate<Object, Object> template;

        @Transactional
        public void send(String go) {
            this.template.send("so47817034", go);
        }
    }
}