Unable to send/receive messages in a multi-node Kafka cluster

I have a multi-node Kafka cluster and I'm able to create topics successfully, which is clear from the ZooKeeper logs. But I can't send/receive messages on some of the topics even though they were created.
Also, for some of the topics, I don't see logs created in the /tmp/kafka-logs directory on any of the Kafka brokers across the 3 nodes.
For example: if I created Topic1...Topic5, I'm able to send and receive messages only for topic3 and topic4. I have my producer and consumer running on node1.
Any idea if I'm doing anything wrong here?
On the producer side:
private Properties producerConfig() {
    Properties props = new Properties();
    props.put("bootstrap.servers", "host1:9092,host2:9092,host3:9092");
    props.put("acks", "all");
    props.put("retries", 0);
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    return props;
}
On the consumer side:
private Properties createConsumerConfig(String zookeeper, String groupId) {
    Properties props = new Properties();
    props.put("zookeeper.connect", "host1:2181,host2:2181,host3:2181");
    props.put("group.id", groupId);
    props.put("auto.commit.enable", "false");
    props.put("auto.offset.reset", "smallest");
    return props;
}
Multi-node cluster setup:
I used the following instructions to set up the multi-node cluster.
Host1 :: zk1,kafkabroker1
Host2 :: zk2,kafkabroker2
Host3 :: zk3,kafkabroker3
https://itblog.adrian.citu.name/2014/01/30/how-to-set-an-apache-kafka-multi-node-multi-broker-cluster/
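One way to narrow this down (a diagnostic sketch, not part of the original post; the host and topic names come from the setup above) is to describe each failing topic and check which broker leads its partitions and whether the replicas are in sync:
kafka-topics.sh --describe --zookeeper host1:2181 --topic Topic1
If a failing topic's partitions are led by a broker that the clients on node1 cannot reach, or whose advertised host name does not resolve, sends to that topic will fail while the other topics keep working.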

Related

Unable to disable topic auto-create in Spring Kafka v1.1.6

I'm using Spring Boot v1.5 and Spring Kafka v1.1.6 to publish a message to a Kafka broker.
When it publishes the message to the topic, the topic is created on the broker by default if not present.
I do not want it to create topics if they are not present. I tried to disable it by adding the property spring.kafka.topic.properties.auto.create=false, but it does not work.
Below is my bean configuration:
@Value("${kpi.kafka.bootstrap-servers}")
private String bootstrapServer;

@Bean
public ProducerFactory<String, CmsMonitoringMetrics> producerFactoryJson() {
    Map<String, Object> configProps = new HashMap<>();
    configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServer);
    configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    configProps.put("allow.auto.create.topics", "false");
    return new DefaultKafkaProducerFactory<>(configProps);
}

@Bean
public KafkaTemplate<String, CmsMonitoringMetrics> kafkaTemplateJson() {
    return new KafkaTemplate<>(producerFactoryJson());
}
In the producer method I'm using the code below to publish:
Message<CmsMonitoringMetrics> message = MessageBuilder.withPayload(data)
        .setHeader(KafkaHeaders.TOPIC, topicName)
        .build();
SendResult<String, CmsMonitoringMetrics> result = kafkaTemplate.send(message).get();
It still creates the topic. Please help me disable it.
As per the documentation, auto.create.topics.enable is a broker configuration. That means you have to set this property on the Kafka server side, not on the producer/consumer clients.
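For example, a minimal sketch of the broker-side setting (the exact file path depends on your installation):
# config/server.properties on every broker
auto.create.topics.enable=false
The brokers need a restart for the change to take effect.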

AWS SQS Queue Policy

I was trying to create an SQS queue programmatically via the spring-cloud-starter-aws-messaging Maven dependency. I could create the queue, and send and receive messages.
The next thing I would like is to set the queue policy so that only a particular user can consume the messages. This is for multi-tenancy support; each tenant will have its own queue, and I want to make sure the queue is not consumable by other tenants.
Here's the snippet to create a queue:
AmazonSQS sqs = AmazonSQSClientBuilder.standard()
        .withRegion(Regions.US_EAST_2)
        .build();
Map<String, String> queueAttributes = new HashMap<>();
queueAttributes.put("FifoQueue", "true");
queueAttributes.put("ContentBasedDeduplication", "true");
CreateQueueRequest createFifoQueueRequest = new CreateQueueRequest(queueName)
        .withAttributes(queueAttributes);
CreateQueueResult createQueueResult = sqs.createQueue(createFifoQueueRequest);
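One way to lock the queue down (a sketch, not from the original post; the account ID, user name, and queue ARN below are placeholders) is to set the queue's Policy attribute after creation, so that only the tenant's principal may receive messages:
Map<String, String> policyAttribute = new HashMap<>();
policyAttribute.put("Policy",
        "{"
        + "\"Version\": \"2012-10-17\","
        + "\"Statement\": [{"
        + "\"Effect\": \"Allow\","
        + "\"Principal\": {\"AWS\": \"arn:aws:iam::123456789012:user/tenant-a\"}," // placeholder tenant user
        + "\"Action\": \"sqs:ReceiveMessage\","
        + "\"Resource\": \"arn:aws:sqs:us-east-2:123456789012:tenant-a-queue.fifo\"" // placeholder queue ARN
        + "}]}");
sqs.setQueueAttributes(new SetQueueAttributesRequest(
        createQueueResult.getQueueUrl(), policyAttribute));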

Spring Kafka - Producer: TimeoutException when trying to send long message

I am trying to send base64-encoded strings, usually about 2 MB in size, through Kafka. I have configured the Spring Kafka producer as below:
@Bean
public ProducerFactory<String, String> producerFactory() {
    Map<String, Object> configProps = new HashMap<>();
    configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
    configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    configProps.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 4194304);
    configProps.put(ProducerConfig.BATCH_SIZE_CONFIG, 1);
    configProps.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy");
    return new DefaultKafkaProducerFactory<>(configProps);
}
I keep getting the following error:
Caused by: org.apache.kafka.clients.producer.BufferExhaustedException: Failed to allocate memory within the configured max blocking time 60000 ms.
Things I tried:
configProps.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 90000);
also
configProps.put(ProducerConfig.BATCH_SIZE_CONFIG, 10);
also
configProps.put(ProducerConfig.BATCH_SIZE_CONFIG, 0);
The connection to the Kafka broker works, as the topic is being auto-created. The error persists even after trying various combinations of the fixes above.
Try setting this property as well:
configProps.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 120000);
This should help: the messages are large for the current configuration, so the producer needs more time to allocate space for them, and the maximum blocking time has to be extended.
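If the BufferExhaustedException persists, raising the producer's total buffer is also worth trying (a suggestion beyond the original answer; buffer.memory defaults to 32 MB, and the 64 MB below is only an example):
configProps.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 67108864); // 64 MB; the default is 33554432 (32 MB)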

Spring Kafka: how to configure two JAAS configurations for different Kerberos clusters

I have a Spring Boot application which receives data over REST; based on some business logic, I need to forward the data to two different Kafka clusters, each with its own Kerberos keys specified in a JAAS file.
I have written two producer instances with the properties below, in separate object instances.
@Service
public class EventProducer {
    private Logger logger = LoggerFactory.getLogger(EventProducer.class);
    Producer<String, String> kafkaProducer = null;

    @Autowired
    public Producer<String, String> createProducer() {
        if (kafkaProducer == null) {
            Properties props = getKafkaConfig();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "Cluste_1_hostaddress:9092");
            props.put(ProducerConfig.CLIENT_ID_CONFIG, "usertest");
            props.put(ProducerConfig.ACKS_CONFIG, "all");
            props.put(ProducerConfig.RETRIES_CONFIG, "3");
            props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
            props.put(ProducerConfig.BATCH_SIZE_CONFIG, 1600);
            System.setProperty("javax.security.auth.useSubjectCredsOnly", "true");
            System.setProperty("java.security.auth.login.config", "/home/user/clusrter_1_jaas.conf");
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("kafka.cluster.SecurityProtocol", "PLAINTEXTSASL");
            props.put("sasl.kerberos.service.name", "kafka");
            props.put("sasl.kerberos", "sasl.kerberos.service.namekafka");
            props.put("security.inter.broker.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism.inter.broker.protocol", "PLAIN");
            props.put("sasl.enabled.mechanisms", "PLAIN");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            kafkaProducer = new KafkaProducer<String, String>(props);
        }
        return kafkaProducer;
    }
}
Second producer
@Service
public class MovementProducer {
    private Logger logger = LoggerFactory.getLogger(MovementProducer.class);
    Producer<String, String> kafkaProducer = null;

    @Autowired
    public Producer<String, String> createProducer() {
        if (kafkaProducer == null) {
            Properties props = getKafkaConfig();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "Cluste_2_hostaddress:9092");
            props.put(ProducerConfig.CLIENT_ID_CONFIG, "usertest");
            props.put(ProducerConfig.ACKS_CONFIG, "all");
            props.put(ProducerConfig.RETRIES_CONFIG, "3");
            props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
            props.put(ProducerConfig.BATCH_SIZE_CONFIG, 1600);
            System.setProperty("javax.security.auth.useSubjectCredsOnly", "true");
            System.setProperty("java.security.auth.login.config", "/home/user/clusrter_2_jaas.conf");
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("kafka.cluster.SecurityProtocol", "PLAINTEXTSASL");
            props.put("sasl.kerberos.service.name", "kafka");
            props.put("sasl.kerberos", "sasl.kerberos.service.namekafka");
            props.put("security.inter.broker.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism.inter.broker.protocol", "PLAIN");
            props.put("sasl.enabled.mechanisms", "PLAIN");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            kafkaProducer = new KafkaProducer<String, String>(props);
        }
        return kafkaProducer;
    }
}
When I run these as two services with only one producer instance enabled, it works; but when I enable both instances in a single jar, only one producer works and the other gets authentication issues.
I feel this is due to System.setProperty("java.security.auth.login.config", ...): since it is a global system property, it gets overwritten when I use both in a single process, so only one works.
So is there any way to solve this other than running two processes? I have only one Spring service, and it should be able to produce to both Kafka clusters.
Recent Kafka client versions offer support for multiple JAAS configurations for different clients.
For example, if your instance needs to connect to two clusters with different JAAS configurations, you can override them at the level of the individual producer and consumer. Just create two separate producer factories and set sasl.jaas.config:
clustera.java.security.auth.login.config=com.sun.security.auth.module.Krb5LoginModule required \
    useKeyTab=true \
    storeKey=true \
    keyTab="test.keytab" \
    principal="test@domain.com";
clusterb.java.security.auth.login.config=com.sun.security.auth.module.Krb5LoginModule required \
    useKeyTab=true \
    storeKey=true \
    keyTab="testb.keytab" \
    principal="testb@domain.com";
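In Java this translates to setting the property on each producer factory's configuration (a minimal sketch, assuming Kafka clients 0.10.2 or later; the bootstrap address, keytab, and principal are placeholders taken from the question):
// Per-client JAAS configuration for cluster A; no global system property required.
Map<String, Object> configA = new HashMap<>();
configA.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "Cluste_1_hostaddress:9092");
configA.put("security.protocol", "SASL_PLAINTEXT");
configA.put(SaslConfigs.SASL_KERBEROS_SERVICE_NAME, "kafka");
configA.put(SaslConfigs.SASL_JAAS_CONFIG,
        "com.sun.security.auth.module.Krb5LoginModule required "
        + "useKeyTab=true storeKey=true "
        + "keyTab=\"test.keytab\" principal=\"test@domain.com\";");
// Build a second configuration map for cluster B with its own keytab and
// principal, and hand each map to its own producer factory.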

Kafka: Multiple instances in the same consumer group listening to the same partition of a topic

I have two instances of a Kafka consumer, configured with the same consumer group and listening to partition 0 of the same topic. The problem is that when I send a message to the topic, it is consumed by both instances, which is not supposed to happen since they are in the same group.
I am using a Spring Boot configuration class to configure them.
Here is the configuration:
@Bean
ConcurrentKafkaListenerContainerFactory<Integer, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<Integer, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.getContainerProperties().setAckMode(AbstractMessageListenerContainer.AckMode.MANUAL_IMMEDIATE);
    return factory;
}

@Bean
public ConsumerFactory<Integer, String> consumerFactory() {
    return new DefaultKafkaConsumerFactory<>(consumerConfigs());
}

@Bean
public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, consumerGroupId);
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "100");
    props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "15000");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, keyDeserializer);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, valueDeserializer);
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    return props;
}
Here is the listener:
@KafkaListener(topicPartitions = {@TopicPartition(topic = "${kafka.topic.orders}", partitions = "0")})
public void consume(ConsumerRecord<String, String> record, Acknowledgment acknowledgment) {
    log.info("message received at " + orderTopic + " at partition 0");
    processRecord(record, acknowledgment);
}
Kafka doesn't work like that; when you manually assign partitions like that (@TopicPartition), you are explicitly telling Kafka you want to receive messages from that partition - the consumer assign()s the partition to itself.
In other words, with manual assignment, you are taking responsibility for distributing the partitions.
You need to use group management and let Kafka assign the partitions to the instances:
use topics = "..." and Kafka will do the assignment. If you don't have enough partitions, some instances will be idle; you need at least as many partitions as instances for all instances to participate.
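A minimal sketch of the group-managed form (the same listener with the manual partition assignment removed; the topic placeholder comes from the question):
@KafkaListener(topics = "${kafka.topic.orders}")
public void consume(ConsumerRecord<String, String> record, Acknowledgment acknowledgment) {
    // Kafka now balances the topic's partitions across all instances
    // that share the same group.id.
    processRecord(record, acknowledgment);
}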
