I have a problem: I made an Apache Kafka consumer in Spring Boot to consume 3 different topics, but I need to consume all the data from the first topic first and then consume the data from the following topics. Is there any way to do that, or will they always be read the same way?
@Component
public class KafkaTestListener {

    @KafkaListener(topics = "${message.topic.name}", groupId = "${message.group.name}")
    public void listenTopic1(String message) {....}

    @KafkaListener(topics = "${message.topic.name2}", groupId = "${message.group.name}")
    public void listenTopic2(String message) {....}

    @KafkaListener(topics = "${message.topic.name3}", groupId = "${message.group.name}")
    public void listenTopic3(String message) {....}
}
Give each listener an id; set autoStartup to false.
Set the container property idleEventInterval to some value.
Add an @EventListener method to receive ListenerContainerIdleEvents - see https://docs.spring.io/spring-kafka/docs/2.5.3.RELEASE/reference/html/#events and https://docs.spring.io/spring-kafka/docs/2.5.3.RELEASE/reference/html/#event-consumption
Use the KafkaListenerEndpointRegistry to start and stop the containers (via id) as needed, as sketched below.
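Putting those steps together, here is a minimal sketch; the listener ids are illustrative, and idleEventInterval still has to be set on the container factory (with Spring Boot, spring.kafka.listener.idle-event-interval):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.event.EventListener;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.event.ListenerContainerIdleEvent;
import org.springframework.stereotype.Component;

@Component
public class SequencedKafkaListeners {

    @Autowired
    private KafkaListenerEndpointRegistry registry;

    // Starts normally and is expected to drain topic1 first.
    @KafkaListener(id = "listener1", topics = "${message.topic.name}", groupId = "${message.group.name}")
    public void listenTopic1(String message) { /* ... */ }

    // Held back until topic1 goes idle.
    @KafkaListener(id = "listener2", topics = "${message.topic.name2}",
            groupId = "${message.group.name}", autoStartup = "false")
    public void listenTopic2(String message) { /* ... */ }

    @KafkaListener(id = "listener3", topics = "${message.topic.name3}",
            groupId = "${message.group.name}", autoStartup = "false")
    public void listenTopic3(String message) { /* ... */ }

    // Fired when no records have been received for idleEventInterval; the
    // startsWith check covers concurrent child containers (listener1-0, ...).
    @EventListener(condition = "event.listenerId.startsWith('listener1')")
    public void onTopic1Idle(ListenerContainerIdleEvent event) {
        if (!registry.getListenerContainer("listener2").isRunning()) {
            registry.getListenerContainer("listener2").start();
            registry.getListenerContainer("listener3").start();
        }
    }
}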
I would like to create a Spring Boot application that reads from several Kafka topics. I realise I can create a comma separated list of topics in my application.properties, however I would like the topic names to be listed separately for readability, and so I can use each topic name to work out how to process the message.
I've found the following questions, but they all have the topics listed as a comma separated array:
Consume multiple topics in one listener in spring boot kafka
Using multiple topic names with KafkaListener annotation
Enabling @KafkaListener to take in variable topic names from application.yml file
Pass array list of topic names to @KafkaListener
The closest I've come is with the following:
application.properties
kafka.topic1=topic1
kafka.topic2=topic2
KafkaConsumer
#KafkaListener(topics = "#{'${kafka.topic1}'},#{'${kafka.topic2}'}")
public void receive(#Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
#Header(required = false, name= KafkaHeaders.RECEIVED_MESSAGE_KEY) String key,
#Payload(required = false) String payload) throws IOException {
}
This gives the error:
Caused by: org.apache.kafka.common.errors.InvalidTopicException: Invalid topics: [topic1,topic2]
I realise I need it to be {"topic1", "topic2"} but I can't work out how.
Having the annotation @KafkaListener(topics = "#{'${kafka.topic1}'}") correctly subscribes to the first topic. And if I change it to @KafkaListener(topics = "#{'${kafka.topic2}'}") I can correctly subscribe to the second topic.
It's just the creating of the array of topics in the annotation that I can't fathom.
Any help would be wonderful.
@KafkaListener(id = "so71497475", topics = { "${kafka.topic1}", "${kafka.topic2}" })
EDIT
And this is a more sophisticated technique which would allow you to add more topics without changing any code:
@SpringBootApplication
@EnableConfigurationProperties
public class So71497475Application {

    public static void main(String[] args) {
        SpringApplication.run(So71497475Application.class, args);
    }

    @KafkaListener(id = "so71497475", topics = "#{@myProps.kafkaTopics}")
    void listen(String in) {
        System.out.println(in);
    }

    @Bean // This will add the topics to the broker if not present
    KafkaAdmin.NewTopics topics(MyProps props) {
        return new KafkaAdmin.NewTopics(props.getTopics().stream()
                .map(t -> TopicBuilder.name(t).partitions(1).replicas(1).build())
                .toArray(size -> new NewTopic[size]));
    }
}
@ConfigurationProperties("my.kafka")
@Component
class MyProps {

    private List<String> topics = new ArrayList<>();

    public List<String> getTopics() {
        return this.topics;
    }

    public void setTopics(List<String> topics) {
        this.topics = topics;
    }

    public String[] getKafkaTopics() {
        return this.topics.toArray(new String[0]);
    }
}
my.kafka.topics[0]=topic1
my.kafka.topics[1]=topic2
my.kafka.topics[2]=topic3
so71497475: partitions assigned: [topic1-0, topic2-0, topic3-0]
If you have your topics configured as comma separated, like:
kafka.topics = topic1,topic2
In this case you can simply use:
#KafkaListener(topics = "#{'${kafka.topics}'.split(',')}")
void listen(){}
I'm using Spring Boot along with @JmsListener to retrieve IBM MQ messages from multiple queues within the same queue manager. So far I can get messages without any issues. But there could be scenarios where I have to stop consuming messages from one of these queues temporarily. It doesn't have to be dynamic.
I'm not using any custom ConnectionFactory methods. When needed, I would like to make config changes in application.properties to disable that particular Queue consumption and restart the process. Is this possible? Can't find any specific info for this scenario. Would appreciate any suggestions. TIA.
@Component
public class MyJmsListener {

    @JmsListener(destination = "${ibm.mq.queue.queue01}")
    public void handleQueue01(String message) {
        System.out.println("received: " + message);
    }

    @JmsListener(destination = "${ibm.mq.queue.queue02}")
    public void handleQueue02(String message) {
        System.out.println("received: " + message);
    }
}
application.properties
ibm.mq.queue.queue01=IBM.QUEUE01
ibm.mq.queue.queue02=IBM.QUEUE02
If you give each @JmsListener an id property, you can start and stop them individually using the JmsListenerEndpointRegistry bean.
registry.getListenerContainer(id).stop();
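For example, a minimal sketch, assuming each @JmsListener is given an id matching its queue; the enabled flag property is hypothetical and is read once at startup, which fits the "change application.properties and restart" requirement:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.context.event.EventListener;
import org.springframework.jms.config.JmsListenerEndpointRegistry;
import org.springframework.stereotype.Component;

@Component
public class QueueListenerToggle {

    // Hypothetical property, e.g. ibm.mq.queue.queue01.enabled=false in application.properties
    @Value("${ibm.mq.queue.queue01.enabled:true}")
    private boolean queue01Enabled;

    private final JmsListenerEndpointRegistry registry;

    public QueueListenerToggle(JmsListenerEndpointRegistry registry) {
        this.registry = registry;
    }

    @EventListener(ApplicationReadyEvent.class)
    public void applyToggles() {
        // "queue01" must match @JmsListener(id = "queue01", destination = "${ibm.mq.queue.queue01}")
        if (!queue01Enabled) {
            registry.getListenerContainer("queue01").stop();
        }
    }
}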
I have a spring-kafka microservice to which I recently added a dead letter topic so I can send it the various error messages.
// some code...
@Component
public class KafkaProducer {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void sendDeadLetter(String message) {
        kafkaTemplate.send("myDeadLetter", message);
    }
}
I would like to name the dead letter Kafka topic "messageTopic" + "_deadLetter", my main topic being "messageTopic". In my consumer, the topic name comes from application.yml as follows:
@KafkaListener(topics = "${spring.kafka.topic.name}")
How can I build the same topic name, appending "_deadLetter" to the value from application.yml? I tried something like this:
@Component
@KafkaListener(topics = "${spring.kafka.topic.name}" + "_deadLetter")
public class KafkaProducer {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void sendDeadLetter(String message) {
        kafkaTemplate.send("messageTopic_deadLetter", message);
    }
}
but it creates two different topics with the same name. I would appreciate some advice, thanks for the help!
@KafkaListener accepts a constant for the topic name; you can't modify the topic name there.
Ideally, go with separate methods (Kafka listeners) for the actual topic and the dead letter topic, and define two different properties in YAML to hold the two topic names.
@KafkaListener(topics = "${spring.kafka.topic.name}")
public void listen(......) {
}

@KafkaListener(topics = "${spring.kafka.deadletter.topic.name}")
public void listenDlt(......) {
}
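For example, in application.yml (topic names taken from the question):

spring:
  kafka:
    topic:
      name: messageTopic
    deadletter:
      topic:
        name: messageTopic_deadLetter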
To refer to the topic name inside send(...) from the yml or properties file:
@Component
public class KafkaProducer {

    // Topic name injected from the property defined above
    @Value("${spring.kafka.deadletter.topic.name}")
    private String deadLetterTopicName;

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void sendDeadLetter(String message) {
        kafkaTemplate.send(deadLetterTopicName, message);
    }
}
You can construct the topic name with SpEL:
#KafkaListener(topics = "#{'${spring.kafka.topic.name}' + '_deadLetter'"})
Note the single quotes around the property placeholder and literal.
This example may not be relevant to your use case, but sharing in case it's helpful to someone.
If you are building a Kafka Streams application, variable sink topic names can be achieved with the following:
When producing to the sink topic, pass a lambda that takes the context as an argument, along with the method that will handle the name definition.
... /* preceding stream operations */
// terminal operation 'to', using a TopicNameExtractor lambda
.to(
        (k, v, ctx) -> sinkTopicNameGenerator(ctx),
        Produced.with(Serdes.String(), Serdes.String()) // use serdes matching your key/value types
);
Implement the method that generates the sink topic names:
protected static String sinkTopicNameGenerator(RecordContext ctx) {
    return ctx.topic().concat("_deadLetter");
}
The above example is simple enough to be reduced to (k, v, ctx) -> ctx.topic().concat("_deadLetter"), but I wanted to keep the separate-method approach for cases where further transformations are required, e.g. when part of the topic name is replaced by some constant or regex defined in the config file, as in the hypothetical variant below.
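For instance, a variation along those lines; the environment-suffix pattern here is made up purely for illustration:

// Hypothetical: strip an environment suffix before appending the dead-letter
// marker, e.g. "messageTopic.prod" -> "messageTopic_deadLetter".
protected static String sinkTopicNameGenerator(RecordContext ctx) {
    return ctx.topic().replaceAll("\\.(dev|qa|prod)$", "").concat("_deadLetter");
}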
I want to send something to a Kafka topic in a producer-only (not read-write) transaction using the output channel.
I read the documentation and another topic on StackOverflow (Spring cloud stream kafka transactions in producer side).
The problem is that I need to set a unique transactionIdPrefix per node.
Any suggestions on how to do this?
Here is one way...
@Component
class TxIdCustomizer implements EnvironmentAware {

    @Override
    public void setEnvironment(Environment environment) {
        Properties properties = new Properties();
        properties.setProperty("spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix",
                UUID.randomUUID().toString());
        ((StandardEnvironment) environment).getPropertySources()
                .addLast(new PropertiesPropertySource("txId", properties));
    }
}
I have a Kafka topic with data, called "topic01".
I want to create a consumer that, every time I start my Spring Boot 2 application, starts reading that topic from the beginning.
I have the following code; if I add something new to the topic it does reach me, but on the first start it doesn't read the topic from the beginning.
#KafkaListener(topics = "topic01")
public void listenTopic01(ConsumerRecord<String, MiDTO> consumerRecord) throws Exception {
logger.info("KafkaHandler");
logger.info(consumerRecord.value().toString());
logger.info(consumerRecord.key().toString());
latch.countDown();
}
application.properties:
spring.kafka.consumer.group-id=XXXXX
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.value-deserializer=org.springframework.kafka.support.serializer.JsonDeserializer
What configuration should I add so that this @KafkaListener reads the topic from the beginning every time I restart my application?
Either use a unique (random) group-id each time, or have your listener class implement ConsumerSeekAware and add:

@Override
public void onPartitionsAssigned(Map<TopicPartition, Long> assignments, ConsumerSeekCallback callback) {
    // Rewind every newly assigned partition to the beginning
    assignments.keySet().forEach(tp -> callback.seekToBeginning(tp.topic(), tp.partition()));
}
or
#KafkaListener(topics = "topic01",
groupId = "#{T(java.util.UUID).randomUUID().toString()}")