Kafka: Topic not present in metadata Exception - spring-boot

I use Spring's KafkaTemplate to send messages to a Kafka topic.
The configuration is:
@Bean
public KafkaAdmin createKafkaAdmin() {
Map<String, Object> configs = new HashMap<>();
configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:2181");
return new KafkaAdmin(configs);
}
@Bean
public ProducerFactory<String, String> producerFactory() {
Map<String, Object> configProps = new HashMap<>();
configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,"localhost:2181");
configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,StringSerializer.class);
configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,StringSerializer.class);
return new DefaultKafkaProducerFactory<>(configProps);
}
@Bean
public KafkaTemplate<String, String> kafkaTemplate() {
return new KafkaTemplate<>(producerFactory());
}
Then I try to send a message:
@Autowired
private KafkaTemplate<String, String> kafkaTemplate;
ListenableFuture<SendResult<String, String>> future =
kafkaTemplate.send("waiting_for_ack",key, value);
But I receive the following exception:
TimeoutException: Topic waiting_for_ack not present in metadata after 60000 ms.
The target topic exists, which I was able to confirm with:
./kafka-topics.sh --zookeeper localhost:2181 --list
_consumer_offsets
waiting_for_ack
What am I doing wrong, and how can I determine the cause of this exception?

You need to specify the broker URLs instead of the ZooKeeper URL in the BOOTSTRAP_SERVERS_CONFIG property. You can check the correct value in the
server.properties
file available in the /config folder under the Kafka installation. Usually it would be
bootstrap.servers=localhost:9092
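For example, the beans above would then look like this (a minimal sketch; localhost:9092 is the default broker listener and must match the listeners setting in your server.properties):
@Bean
public KafkaAdmin createKafkaAdmin() {
    Map<String, Object> configs = new HashMap<>();
    // point at the Kafka broker, not at ZooKeeper
    configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    return new KafkaAdmin(configs);
}
@Bean
public ProducerFactory<String, String> producerFactory() {
    Map<String, Object> configProps = new HashMap<>();
    configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    return new DefaultKafkaProducerFactory<>(configProps);
}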

Related

Getting ProducerFencedException on producing record in Kafka listener thread

I am getting this exception when producing a message inside the Kafka listener container.
javax.management.InstanceAlreadyExistsException: kafka.producer:type=app-info,id=producer-tx-group.topicA.1
org.apache.kafka.common.errors.ProducerFencedException: The producer has been rejected from the broker because it tried to use an old epoch with the transactionalId
My listener looks like this
@Transactional
@KafkaListener(...)
listener(topicA, message) {
process(message)
produce(topicB, notification) // use KafkaTemplate to send the message
}
My configuration looks like this
@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory(KafkaTransactionManager kafkaTransactionManager) {
ConcurrentKafkaListenerContainerFactory<String, String> factory =
new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory());
factory.getContainerProperties().setTransactionManager(kafkaTransactionManager);
return factory;
}
public ProducerFactory<String, Object> producerFactory() {
Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, enableIdempotence);
DefaultKafkaProducerFactory<String, Object> factory = new
DefaultKafkaProducerFactory<>(props);
factory.setTransactionIdPrefix(transactionIdPrefix);
return factory;
}
@Bean
public KafkaTemplate<String, Object> kafkaTemplate() {
KafkaTemplate<String, Object> template = new KafkaTemplate<>(producerFactory());
return template;
}
@Bean
public KafkaTransactionManager kafkaTransactionManager() {
KafkaTransactionManager manager = new KafkaTransactionManager(producerFactory());
return manager;
}
I know when ProducerFencedException is thrown by Kafka, but what I am trying to figure out is where the second producer with the same transaction.id comes from.
If I set a unique transaction prefix on the KafkaTemplate, it works fine:
@Bean
public KafkaTemplate<String, Object> kafkaTemplate() {
KafkaTemplate<String, Object> template = new KafkaTemplate<>(producerFactory());
template.setTransactionIdPrefix(MessageFormat.format("{0}-{1}", transactionIdPrefix, UUID.randomUUID().toString()));
return template;
}
But I am trying to understand the exception: where is the other producer with the same transaction id being started, given that listener-started transactions follow the group.id/topic/partition pattern described in the Spring docs?
I am just trying this locally on a single application instance.
I found the root cause: I was creating two producer instances here:
public ProducerFactory<String, Object> producerFactory() {
Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, enableIdempotence);
DefaultKafkaProducerFactory<String, Object> factory = new
DefaultKafkaProducerFactory<>(props);
factory.setTransactionIdPrefix(transactionIdPrefix);
return factory;
}
I was missing the @Bean configuration.
Adding @Bean to the producer factory and properly autowiring it into the template and the transaction manager fixed the issue.
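For reference, the corrected wiring looks roughly like this (a sketch based on the configuration above, keeping the same field names): the producer factory is declared once as a bean and that single instance is injected into both the template and the transaction manager.
@Bean
public ProducerFactory<String, Object> producerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, enableIdempotence);
    DefaultKafkaProducerFactory<String, Object> factory = new DefaultKafkaProducerFactory<>(props);
    factory.setTransactionIdPrefix(transactionIdPrefix);
    return factory;
}
@Bean
public KafkaTemplate<String, Object> kafkaTemplate(ProducerFactory<String, Object> producerFactory) {
    // reuse the single producer factory bean instead of calling producerFactory() again
    return new KafkaTemplate<>(producerFactory);
}
@Bean
public KafkaTransactionManager<String, Object> kafkaTransactionManager(ProducerFactory<String, Object> producerFactory) {
    return new KafkaTransactionManager<>(producerFactory);
}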

@KafkaListener get all messages from a particular Kafka topic

I have a @KafkaListener method to get all messages in a topic, but I only get one message each time the @Scheduled method runs. How can I get all messages from the topic at once?
Here's my class:
@Slf4j
@Service
public class KafkaConsumerServiceImpl implements KafkaConsumerService {
@Autowired
private SimpMessagingTemplate webSocket;
@Autowired
private KafkaListenerEndpointRegistry registry;
@Autowired
private BrokerProducerService brokerProducerService;
@Autowired
private GlobalConfig globalConfig;
@Override
@KafkaListener(id = "snapshotOfOutagesId", topics = Constants.KAFKA_TOPIC, groupId = "snapshotOfOutages", autoStartup = "false")
public void consumeToSnapshot(ConsumerRecord<String, OutageDTO> cr, @Payload String content) {
log.info("Received content from Kafka notification to notification-snapshot topic: {}", content);
MessageListenerContainer listenerContainer = registry.getListenerContainer("snapshotOfOutagesId");
JSONObject jsonObject= new JSONObject(content);
Map<String, Object> outageMap = jsonToMap(jsonObject);
brokerProducerService.sendMessage(globalConfig.getTopicProperties().getSnapshotTopicName(),
outageMap.get("outageId").toString(), toJson(outageMap));
listenerContainer.stop();
}
@Scheduled(initialDelayString = "${scheduler.kafka.snapshot.monitoring}", fixedRateString = "${scheduler.kafka.snapshot.monitoring}")
private void consumeWithScheduler() {
MessageListenerContainer listenerContainer = registry.getListenerContainer("snapshotOfOutagesId");
if (listenerContainer != null){
listenerContainer.start();
}
}
}
And here's my Kafka properties in application.yml:
kafka:
  streams:
    common:
      configs:
        "[bootstrap.servers]": 192.168.99.100:9092
        "[client.id]": event
        "[producer.id]": event-producer
        "[max.poll.interval.ms]": 300000
        "[group.max.session.timeout.ms]": 300000
        "[session.timeout.ms]": 200000
        "[auto.commit.interval.ms]": 1000
        "[auto.offset.reset]": latest
        "[group.id]": event-consumer-group
        "[max.poll.records]": 1
And also my KafkaConfiguration class:
@Bean
public Map<String, Object> consumerConfigs() {
Map<String, Object> props = new HashMap<>(globalConfig.getBrokerProperties().getConfigs());
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
return props;
}
@Bean
public ConsumerFactory<String, String> consumerFactory() {
return new DefaultKafkaConsumerFactory<>(consumerConfigs(), new StringDeserializer(), new StringDeserializer());
}
@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory());
return factory;
}
What you're currently doing is:
Create a listener but don't start it yet (autoStartup = false)
When the scheduled job kicks in, start the container (will start consuming the first message from the topic)
When the first message is consumed, you stop the container (resulting in no messages being consumed anymore)
So indeed the behavior you are describing is not a surprise.
@KafkaListener doesn't need a scheduled task to start consuming messages. I think you can remove autoStartup = "false" and the scheduled job; after that the listener will consume all messages on the topic one by one, and wait for new ones when they appear.
Also, some other things I noticed:
The properties are for Kafka Streams; for regular Spring Kafka you need the properties like so:
spring:
  kafka:
    bootstrap-servers: localhost:9092
    consumer:
      auto-offset-reset: earliest
      ...etc
Also: why use @Payload String content instead of the already deserialized cr.value()?
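Putting the main suggestion together, a minimal always-running listener could look like this (a sketch only, reusing the bean and helper names from the question):
@KafkaListener(id = "snapshotOfOutagesId", topics = Constants.KAFKA_TOPIC, groupId = "snapshotOfOutages")
public void consumeToSnapshot(ConsumerRecord<String, String> cr) {
    // no autoStartup = "false", no scheduler, no container.stop():
    // the container keeps polling and handles every record on the topic as it arrives
    String content = cr.value();
    log.info("Received content from Kafka notification topic: {}", content);
    Map<String, Object> outageMap = jsonToMap(new JSONObject(content));
    brokerProducerService.sendMessage(globalConfig.getTopicProperties().getSnapshotTopicName(),
            outageMap.get("outageId").toString(), toJson(outageMap));
}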

How to load Kafka Consumer lazily in Spring boot?

I want to provide the group id through a command-line argument, but when I tried this I got the following error.
Failed to start bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry'; nested exception is java.lang.IllegalStateException: No group.id found in consumer config, container properties, or #KafkaListener annotation; a group.id is required when group management is used.
That means the group id is required while the Kafka listener is being loaded. If I set the groupId in the consumer config file, it works properly.
So is there any way I can provide the group id through the command line and have the Kafka listener load lazily, so that it is not required at program startup?
My ConsumerConfig:
@Configuration
class KafkaConsumerConfig {
@Value("${kafka.bootstrap-servers}")
private String bootstrapServers;
@Autowired
private ArgumentModel argumentModel;
private Logger logger = LoggerFactory.getLogger(KafkaConsumerConfig.class);
@Bean
public Map<String, Object> consumerConfigs() {
Map<String, Object> props = new HashMap<>();
logger.info("bootstrapServers : {}", bootstrapServers);
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.GROUP_ID_CONFIG, argumentModel.getKafkaGroupId());
return props;
}
@Bean
public ConsumerFactory<String, String> consumerFactory() {
return new DefaultKafkaConsumerFactory<>(consumerConfigs());
}
@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory());
return factory;
}
}
@KafkaListener(... groupId = "${group.id}")
Then pass -Dgroup.id=myGroup on the command line.
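For example (a sketch; the topic name is a placeholder and the group.id property name just has to match the flag you pass):
@KafkaListener(topics = "my-topic", groupId = "${group.id}")
public void listen(String message) {
    // handle the message
}
java -Dgroup.id=myGroup -jar app.jar
Since -Dgroup.id is a JVM system property, the placeholder resolves at startup, and the group id no longer has to be hard-coded in consumerConfigs().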

@KafkaListener in Unit test case does not consume from the container factory

I wrote a JUnit test case to test the code in the "With Java Configuration" lesson in the Spring Kafka docs (https://docs.spring.io/spring-kafka/reference/htmlsingle/#_with_java_configuration). The one difference is that I am using an embedded Kafka server in the class, instead of a localhost server. I am using Spring Boot 2.0.2 and its Spring Kafka dependency.
While running this test case, I see that the Consumer is not reading the message from the topic and the "assertTrue" check fails. There are no other errors.
@RunWith(SpringRunner.class)
public class SpringConfigSendReceiveMessage {
public static final String DEMO_TOPIC = "demo_topic";
@Autowired
private Listener listener;
@Test
public void testSimple() throws Exception {
template.send(DEMO_TOPIC, 0, "foo");
template.flush();
assertTrue(this.listener.latch.await(60, TimeUnit.SECONDS));
}
@Autowired
private KafkaTemplate<Integer, String> template;
@Configuration
@EnableKafka
public static class Config {
@Bean
public KafkaEmbedded kafkaEmbedded() {
return new KafkaEmbedded(1, true, 1, DEMO_TOPIC);
}
@Bean
public ConsumerFactory<Integer, String> createConsumerFactory() {
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaEmbedded().getBrokersAsString());
props.put(ConsumerConfig.GROUP_ID_CONFIG, "group1");
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, IntegerDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
return new DefaultKafkaConsumerFactory<>(props);
}
@Bean
public ConcurrentKafkaListenerContainerFactory<Integer, String> kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<Integer, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(createConsumerFactory());
return factory;
}
@Bean
public Listener listener() {
return new Listener();
}
@Bean
public ProducerFactory<Integer, String> producerFactory() {
Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaEmbedded().getBrokersAsString());
props.put(ProducerConfig.RETRIES_CONFIG, 0);
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
return new DefaultKafkaProducerFactory<>(props);
}
@Bean
public KafkaTemplate<Integer, String> kafkaTemplate() {
return new KafkaTemplate<Integer, String>(producerFactory());
}
}
}
class Listener {
public final CountDownLatch latch = new CountDownLatch(1);
@KafkaListener(id = "foo", topics = DEMO_TOPIC)
public void listen1(String foo) {
this.latch.countDown();
}
}
I think this is because the @KafkaListener is using some wrong/default setting when reading from the topic. I don't see any errors in the logs.
Is this unit test case correct? How can I find the object that is created for the @KafkaListener annotation and see which Kafka broker it consumes from? Any inputs will be helpful. Thanks.
The message is sent before the consumer starts.
By default, new consumers start consuming at the end of the topic.
Add
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
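In this test the property goes into the consumer configuration, for example (a sketch; the rest of the bean is unchanged from the question):
@Bean
public ConsumerFactory<Integer, String> createConsumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaEmbedded().getBrokersAsString());
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "group1");
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
    // read from the beginning of the topic, so records sent before the consumer joined are still seen
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, IntegerDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    return new DefaultKafkaConsumerFactory<>(props);
}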
The answer by @gary-russell is the best solution. Another way to resolve this issue is to delay the message send step by some time so that the consumer is ready. The following is also a correct solution.
Lesson learned: for unit testing Kafka consumers, either consume all the messages in the test case, or ensure that the consumer is ready before the producer sends the message.
@Test
public void testSimple() throws Exception {
Thread.sleep(1000);
template.send(DEMO_TOPIC, 0, "foo");
template.flush();
assertTrue(this.listener.latch.await(60, TimeUnit.SECONDS));
}
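If sleeping for a fixed time feels brittle, another option (a sketch, assuming the KafkaListenerEndpointRegistry is autowired and spring-kafka-test's ContainerTestUtils is on the classpath) is to wait until the listener container has been assigned its partition before sending:
@Autowired
private KafkaListenerEndpointRegistry registry;

@Test
public void testSimple() throws Exception {
    // block until the "foo" listener container owns its single partition
    ContainerTestUtils.waitForAssignment(registry.getListenerContainer("foo"), 1);
    template.send(DEMO_TOPIC, 0, "foo");
    template.flush();
    assertTrue(this.listener.latch.await(60, TimeUnit.SECONDS));
}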

The class is not in the trusted packages although appears in the list of trusted packages

I am trying to implement simple Kafka communication between two different Spring Boot applications without any special settings; this application has only one Kafka listener. My yml for the consumer is the following:
spring:
  kafka:
    bootstrap-servers: ip_here
    topic:
      json: topic_here
    consumer:
      group-id: group_id
      auto-offset-reset: earliest
      value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
      properties:
        spring:
          json:
            trusted:
              packages: 'com.example.kw.dtos.Classdata'
The error I am receiving is the following:
Caused by: java.lang.IllegalArgumentException: The class
'com.example.kw.dtos.Classdata' is not in the trusted packages:
[java.util, java.lang, com.example.kw.dtos.Classdata]. If you believe
this class is safe to deserialize, please provide its name. If the
serialization is only done by a trusted source, you can also enable
trust all (*).
The package is in the trusted packages but something is wrong.
My factory class:
@Configuration
@EnableKafka
public class MsgListener {
@Value("${spring.kafka.bootstrap-servers}")
private String bootstrapServers;
@Bean
public Map<String, Object> consumerConfigs() {
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
props.put(ConsumerConfig.GROUP_ID_CONFIG, "json");
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
props.put(JsonDeserializer.TRUSTED_PACKAGES, "com.example.kw.dtos.Classdata");
return props;
}
@Bean
public ConsumerFactory<String, Classdata> consumerFactory() {
return new DefaultKafkaConsumerFactory<>(
consumerConfigs(),
new StringDeserializer(),
new JsonDeserializer<>(Classdata.class));
}
@Bean
public ConcurrentKafkaListenerContainerFactory<String, Classdata> kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<String, Classdata> factory =
new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory());
return factory;
}
}
It should be just the package, com.example.kw.dtos, not the full class name; the deserializer compares package names:
String packageName = ClassUtils.getPackageName(requestedType).replaceFirst("\\[L", "");
for (String trustedPackage : this.trustedPackages) {
if (packageName.equals(trustedPackage)) {
return true;
}
}
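So the corrected property references the package rather than the class (a sketch of the relevant line from the consumerConfigs() bean above):
props.put(JsonDeserializer.TRUSTED_PACKAGES, "com.example.kw.dtos");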
We had this issue while testing Kafka.
We fixed it this way:
private static KafkaMessageListenerContainer<String, Data> createMessageListenerContainer() {
final Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("sender", "false", EMBEDDED_KAFKA);
final DefaultKafkaConsumerFactory<String, Data> consumerFactory = new DefaultKafkaConsumerFactory<>(consumerProps);
final JsonDeserializer<Data> valueDeserializer = new JsonDeserializer<>();
valueDeserializer.addTrustedPackages("path.to.package");
consumerFactory.setValueDeserializer(valueDeserializer);
consumerFactory.setKeyDeserializer(new StringDeserializer());
final ContainerProperties containerProperties = new ContainerProperties(SENDER_TOPIC);
return new KafkaMessageListenerContainer<>(consumerFactory, containerProperties);
}
The trick here is that you have to set it in two places:
spring.json.trusted.packages - for any JSON deserializers created outside of Kafka's influence
spring.kafka.consumer.properties.spring.json.trusted.packages - for Kafka-created deserializers
This was the only way I was able to make it work. Also, it does not accept wildcards, so it has to be an exact package match.
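As a yml sketch of the Boot-managed side (the bracketed key quoting matches the style used earlier in this page; the package is the one from the question):
spring:
  kafka:
    consumer:
      properties:
        "[spring.json.trusted.packages]": com.example.kw.dtos
For deserializers you construct yourself in code, calling addTrustedPackages("com.example.kw.dtos"), as in the snippet above, covers the other place.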

Resources