Unable to send messages to Kafka using KafkaTemplate - Spring

We have an existing Spring MVC application (SAP Hybris) into which we want to integrate Kafka using KafkaTemplate. I have configured the Kafka template in XML as below:
<bean id="kafkaTemplate" class="org.springframework.kafka.core.KafkaTemplate">
<constructor-arg ref="producerFactory"/>
</bean>
<bean id="producerFactory" class="org.springframework.kafka.core.DefaultKafkaProducerFactory">
<constructor-arg>
<map>
<entry key="bootstrap.servers" value-type="java.lang.String" value="${spring.kafka.bootstrap-servers}" />
<entry key="key.serializer" value-type="java.lang.Class" value="org.apache.kafka.common.serialization.StringSerializer" />
<entry key="value.serializer" value-type="java.lang.Class" value="org.apache.kafka.common.serialization.StringSerializer" />
</map>
</constructor-arg>
</bean>
spring.kafka.bootstrap-servers is configured as localhost:9092.
Please note that I cannot use Spring Boot or annotation-based configuration; I have to use XML-based configs only.
This is my sample code from the controller:
@RequestMapping(method = RequestMethod.GET)
public String doRegister(final Model model) throws CMSItemNotFoundException
{
    String message = "Dummy Message: " + Math.random();
    String topic = "Dummy_Topic";
    kafkaTemplate.send(topic, message);
    return "pages/register";
}
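(To make failures visible in the request thread rather than only via LoggingProducerListener, one option — a minimal sketch, assuming a spring-kafka version where send() returns a ListenableFuture<SendResult> — is to attach a callback:)
import org.springframework.kafka.support.SendResult;
import org.springframework.util.concurrent.ListenableFuture;

// Inside doRegister(), instead of the bare send() call:
ListenableFuture<SendResult<String, String>> future = kafkaTemplate.send(topic, message);
future.addCallback(
        result -> System.out.println("Sent to " + result.getRecordMetadata()), // success path
        ex -> ex.printStackTrace()); // the metadata TimeoutException surfaces here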
I am able to send and receive messages from the command-line client with my local Kafka setup, but I get the error below when trying to send messages from my Spring MVC application:
ERROR [hybrisHTTP16] [LoggingProducerListener] Exception thrown when sending a message with key='null' and payload='Dummy Message: 0.03670242785185063' to topic Dummy_Topic:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
I receive no messages on the command-line client that is listening to that topic.
Here are my Kafka producer configs:
acks = 1
batch.size = 16384
bootstrap.servers = [localhost:9092]
buffer.memory = 33554432
client.id =
compression.type = none
connections.max.idle.ms = 540000
enable.idempotence = false
interceptor.classes = null
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 0
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
++++++ EDIT +++++++
My server.properties:
broker.id=0
listeners=PLAINTEXT://:9092
host.name=localhost
advertised.host.name= localhost
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs
log.dir=D:/kafka_2.11-0.9.0.0/kafka_2.11-0.9.0.0/data
num.partitions=1
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
+++++ EDIT END ++++
I am unable to find out why it doesn't work through my application when it works fine for the command-line Kafka producer.
Please help me in this regard.
Thanks.

Related

Azure Kafka Bootstrap broker disconnected

I am trying to create an Apache Kafka connection to Azure Event Hubs with reactor-kafka in a Spring Boot application. At first I followed the official Azure tutorial to set up Azure Event Hubs and the Spring backend: https://learn.microsoft.com/en-us/azure/developer/java/spring-framework/configure-spring-cloud-stream-binder-java-app-kafka-azure-event-hub
Everything worked fine and I created some more advanced services.
However, when trying to get reactor-kafka working with Azure Event Hubs, it doesn't work. When the consumer is triggered, it cannot consume any messages and the following is logged:
com.test.Application : Started Application in 10.442 seconds (JVM running for 10.771)
o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=test-client-0, groupId=$Default] Discovered group coordinator mynamespacename.servicebus.windows.net:9093 (id: 2147483647 rack: null)
o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=test-client-0, groupId=$Default] (Re-)joining group
o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=test-client-0, groupId=$Default] Successfully joined group with generation Generation{generationId=30, memberId='mynamespacename.servicebus.windows.net:c:$default:I:test-client-0-33016d4334614aa8b9b7bf3fd5e1023e', protocol='range'}
o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=test-client-0, groupId=$Default] Finished assignment for group at generation 30: {mynamespacename.servicebus.windows.net:c:$default:I:test-client-0-33016d4334614aa8b9b7bf3fd5e1023e=Assignment(partitions=[my-event-hub-0])}
o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=test-client-0, groupId=$Default] Successfully synced group in generation Generation{generationId=30, memberId='mynamespacename.servicebus.windows.net:c:$default:I:test-client-0-33016d4334614aa8b9b7bf3fd5e1023e', protocol='range'}
o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=test-client-0, groupId=$Default] Notifying assignor about the new Assignment(partitions=[my-event-hub-0])
o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=test-client-0, groupId=$Default] Adding newly assigned partitions: my-event-hub-0
o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=test-client-0, groupId=$Default] Setting offset for partition my-event-hub-0 to the committed offset FetchPosition{offset=17, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[mynamespacename.servicebus.windows.net:9093 (id: 0 rack: null)], epoch=absent}}
o.s.k.l.KafkaMessageListenerContainer : $Default: partitions assigned: [my-event-hub-0]
o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [mynamespacename.servicebus.windows.net:9093]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = test-client
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = $Default
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class org.springframework.kafka.support.serializer.JsonDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 127000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
o.a.kafka.common.utils.AppInfoParser : Kafka version: 2.7.1
o.a.kafka.common.utils.AppInfoParser : Kafka commitId: 61dbce85d0d41457
o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1629919378494
r.k.r.internals.ConsumerEventLoop : SubscribeEvent
o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=test-client, groupId=$Default] Subscribed to topic(s): my-event-hub
org.apache.kafka.clients.NetworkClient : [Consumer clientId=test-client, groupId=$Default] Bootstrap broker mynamespacename.servicebus.windows.net:9093 (id: -1 rack: null) disconnected
org.apache.kafka.clients.NetworkClient : [Consumer clientId=test-client, groupId=$Default] Bootstrap broker mynamespacename.servicebus.windows.net:9093 (id: -1 rack: null) disconnected
The following is the code:
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.springframework.stereotype.Service;
import reactor.core.publisher.Flux;
import reactor.kafka.receiver.KafkaReceiver;
import reactor.kafka.receiver.ReceiverOptions;
import reactor.kafka.receiver.ReceiverRecord;
import lombok.extern.slf4j.Slf4j;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

@Slf4j
@Service
public class StreamConsumer {
    public Flux<Object> consumeMessages() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "mynamespacename.servicebus.windows.net:9093");
        props.put(ConsumerConfig.CLIENT_ID_CONFIG, "test-client");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "$Default");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, org.springframework.kafka.support.serializer.JsonDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringDeserializer.class);

        ReceiverOptions<String, Object> receiverOptions = ReceiverOptions.create(props);
        ReceiverOptions<String, Object> options = receiverOptions.subscription(Collections.singleton(KafkaConstants.KAFKA_TOPIC))
                .addAssignListener(partitions -> log.debug("onPartitionsAssigned {}", partitions))
                .addRevokeListener(partitions -> log.debug("onPartitionsRevoked {}", partitions));

        Flux<ReceiverRecord<String, Object>> kafkaFlux = KafkaReceiver.create(options).receive();
        return kafkaFlux.map(x -> "Test");
    }
}
The reactor code uses the same topic and group id constants. Since the client logs a broken connection:
[Consumer clientId=test-client, groupId=$Default] Bootstrap broker mynamespacename.servicebus.windows.net:9093 (id: -1 rack: null) disconnected
I assume that there is a configuration missing to connect the consumer properly to Azure Event Hubs.
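For what it's worth, the dumped ConsumerConfig above shows security.protocol = PLAINTEXT and sasl.mechanism = GSSAPI, whereas the Event Hubs Kafka endpoint expects SASL_SSL with the PLAIN mechanism and a $ConnectionString JAAS login. A sketch of the extra entries for the props map in StreamConsumer (the connection string value is a placeholder):
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.common.config.SaslConfigs;

// Extra entries for the props map (connection string value is a placeholder):
props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
props.put(SaslConfigs.SASL_JAAS_CONFIG,
        "org.apache.kafka.common.security.plain.PlainLoginModule required "
        + "username=\"$ConnectionString\" "
        + "password=\"<your-event-hubs-connection-string>\";");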

Unable to connect to Kafka brokers by SSL - Spring Boot

I'm trying to start a new project with spring.boot version 2.1.5.RELEASE and Kafka version 2.0.1, and I've run into a problem: I can't connect to my remote Kafka broker by SSL.
At the same time, my old project with spring.boot version 2.0.1.RELEASE and Kafka version 1.0.1 works fine.
application.yml
spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: my-kafka:9093
          autoCreateTopics: false
      bindings:
        customers-in:
          destination: customers
          contentType: application/json
        customers-out:
          destination: customers
          contentType: application/json
  kafka:
    ssl:
      protocol: SSL
      trust-store-location: guest.truststore
      trust-store-password: 123456
      key-password: 123456
      key-store-location: guest.keystore
      key-store-password: 123456
I get this error message:
2019-06-10 17:59:03.636 ERROR 30220 --- [ main] o.s.c.s.b.k.p.KafkaTopicProvisioner : Failed to obtain partition information
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
Old project properties (everything works fine):
spring.kafka.bootstrap-servers=my-kafka:9093
spring.kafka.consumer.properties.[group.id]=group_23_spring-kafka
# SSL
spring.kafka.properties.[security.protocol]=SSL
spring.kafka.ssl.trust-store-location=guest.truststore
spring.kafka.ssl.trust-store-password=123456
spring.kafka.ssl.key-store-password=123456
spring.kafka.ssl.key-store-location=guest.keystore
spring.kafka.ssl.key-password=123456
Also, if I update my working project with
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.0.1</version>
</dependency>
it produces the same error.
My Kafka INFO-log producer config for the new and old versions looks almost the same.
New config — Kafka version: 2.0.1, Spring Boot version: 2.1.5.RELEASE:
acks = 1
batch.size = 16384
bootstrap.servers = [kafka-sbox.epm-eco.projects.epam.com:9093]
buffer.memory = 33554432
client.id =
compression.type = none
connections.max.idle.ms = 540000
enable.idempotence = false
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 0
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = SSL
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm =
ssl.key.password = [hidden]
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = guest.keystore
ssl.keystore.password = [hidden]
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = guest.truststore
ssl.truststore.password = [hidden]
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
Old config — Kafka version: 1.0.1, Spring Boot version: 2.0.1.RELEASE:
acks = 1
batch.size = 16384
bootstrap.servers = [kafka-sbox.epm-eco.projects.epam.com:9093]
buffer.memory = 33554432
client.id =
compression.type = none
connections.max.idle.ms = 540000
enable.idempotence = false
interceptor.classes = null
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 0
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = SSL
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = [hidden]
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = guest.keystore
ssl.keystore.password = [hidden]
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = guest.truststore
ssl.truststore.password = [hidden]
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
I also asked the same question at spring-cloud-stream-binder-kafka on GitHub, but we didn't reach a conclusion.
Has anyone encountered the same problem? Or does anyone know how I have to configure application.yml to be able to connect to my Kafka broker by SSL?
Thanks

Spring App Not Connecting to Kafka with SSL

I have a Spring Boot app with a very simple Kafka producer. Everything works great if I connect to a Kafka cluster without encryption, but it times out if I try to connect to a Kafka cluster with SSL. Is there some other configuration I need in the producer, or some other property I need to define, to allow Spring to correctly use all of the configurations?
I have the following properties set:
spring.kafka.producer.bootstrap-servers=broker1.kafka.poc.com:9093,broker3.kafka.poc.com:9093,broker4.kafka.poc.com:9093,broker5.kafka.poc.com:9093
spring.kafka.ssl.key-store-type=jks
spring.kafka.ssl.trust-store-location=file:/home/ec2-user/truststore.jks
spring.kafka.ssl.trust-store-password=test1234
spring.kafka.ssl.key-store-location=file:/home/ec2-user/keystore.jks
spring.kafka.ssl.key-store-password=test1234
logging.level.org.apache.kafka=debug
server.ssl.key-password=test1234
spring.kafka.ssl.key-password=test1234
spring.kafka.producer.client-id=sym
spring.kafka.admin.ssl.protocol=ssl
With the following result printed as the ProducerConfig when the app starts up:
o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
acks = 1
batch.size = 16384
bootstrap.servers = [broker1.kafka.allypoc.com:9093, broker3.kafka.allypoc.com:9093, broker4.kafka.allypoc.com:9093, broker5.kafka.allypoc.com:9093]
buffer.memory = 33554432
client.dns.lookup = default
client.id = sym
compression.type = none
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = false
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2147483647
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = [hidden]
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = /home/ec2-user/keystore.jks
ssl.keystore.password = [hidden]
ssl.keystore.type = jks
ssl.protocol = ssl
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = /home/ec2-user/truststore.jks
ssl.truststore.password = [hidden]
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
My producer is extremely simple:
@Service
public class Producer {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public Producer(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    void sendMessage(String topic, String message) {
        this.kafkaTemplate.send(topic, message);
    }

    void sendMessage(String topic, String key, String message) {
        this.kafkaTemplate.send(topic, key, message);
    }
}
Connecting to Kafka with SSL gets a TimeoutException saying "Topic symbols not present in metadata after 60000 ms."
If I turn on debug logs, I get the following repeatedly, looping through all of my brokers:
2019-05-29 20:10:25.768 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient : [Producer clientId=sym] Completed connection to node -4. Fetching API versions.
2019-05-29 20:10:25.768 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient : [Producer clientId=sym] Initiating API versions fetch from node -4.
2019-05-29 20:10:25.768 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient : [Producer clientId=sym] Initialize connection to node 10.25.77.13:9093 (id: -3 rack: null) for sending metadata request
2019-05-29 20:10:25.768 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient : [Producer clientId=sym] Initiating connection to node 10.25.77.13:9093 (id: -3 rack: null) using address /10.25.77.13
2019-05-29 20:10:25.994 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.common.metrics.Metrics : Added sensor with name node--3.bytes-sent
2019-05-29 20:10:25.996 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.common.metrics.Metrics : Added sensor with name node--3.bytes-received
2019-05-29 20:10:25.997 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.common.metrics.Metrics : Added sensor with name node--3.latency
2019-05-29 20:10:25.998 DEBUG 1381 --- [rk-thread | sym] o.apache.kafka.common.network.Selector : [Producer clientId=sym] Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -3
2019-05-29 20:10:26.107 DEBUG 1381 --- [rk-thread | sym] o.apache.kafka.common.network.Selector : [Producer clientId=sym] Connection with /10.25.75.151 disconnected
java.io.EOFException: null
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:119) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:381) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:342) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:609) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:541) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.common.network.Selector.poll(Selector.java:467) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:535) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:311) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:235) ~[kafka-clients-2.1.1.jar!/:na]
at java.base/java.lang.Thread.run(Thread.java:835) ~[na:na]
2019-05-29 20:10:26.108 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient : [Producer clientId=sym] Node -1 disconnected.
2019-05-29 20:10:26.110 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient : [Producer clientId=sym] Completed connection to node -3. Fetching API versions.
In the producer config, security.protocol should be set to SSL. You could also try setting ssl.endpoint.identification.algorithm = "" to disable hostname validation of the certificate, in case that's the issue. Other than that, it would be useful to see the Kafka broker config.
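In Boot-properties terms that suggestion corresponds to spring.kafka.properties.[security.protocol]=SSL, as in the working old project shown earlier. A rough Java equivalent, assuming a manually wired producer factory (broker host, paths, and passwords copied from the question):
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.config.SslConfigs;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.ProducerFactory;

@Bean
public ProducerFactory<String, String> sslProducerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1.kafka.poc.com:9093");
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL"); // was PLAINTEXT in the dump above
    props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/home/ec2-user/truststore.jks");
    props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "test1234");
    props.put(SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG, "/home/ec2-user/keystore.jks");
    props.put(SslConfigs.SSL_KEYSTORE_PASSWORD_CONFIG, "test1234");
    props.put(SslConfigs.SSL_KEY_PASSWORD_CONFIG, "test1234");
    props.put(SslConfigs.SSL_ENDPOINT_IDENTIFICATION_ALGORITHM_CONFIG, ""); // optional: disable hostname check (test only)
    return new DefaultKafkaProducerFactory<>(props);
}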

Default values for ProducerConfigs in Spring-Boot Apache Kafka

I have a Spring Boot project up and running with Kafka.
I configured a producer and sent a message with topic "Demo".
But I didn't set any of the properties like
bootstrap.servers = [localhost:9092]
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
which we normally set while running a Kafka producer.
My project ran successfully and I was able to consume the message.
My question is: how does Spring Boot know of these ProducerConfig properties? Are there any default values?
Also, I saw in the logs:
acks = 1
batch.size = 16384
bootstrap.servers = [localhost:9092]
buffer.memory = 33554432
client.id =
compression.type = none
connections.max.idle.ms = 540000
enable.idempotence = false
interceptor.classes = null
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 0
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
Where are these properties being set?
Look at org.apache.kafka.clients.producer.ProducerConfig.
There you can find that some of the properties have default values, e.g.:
define(BATCH_SIZE_CONFIG, Type.INT, 16384, atLeast(0), Importance.MEDIUM, BATCH_SIZE_DOC)
See the Boot documentation.
Scroll down to Kafka and click the link to KafkaProperties.
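If you want to see exactly what Boot hands to the producer at runtime, a small sketch (assuming Spring Boot 2.x, where KafkaProperties exposes buildProducerProperties()):
import java.util.Map;

import org.springframework.boot.autoconfigure.kafka.KafkaProperties;

// Inject KafkaProperties into any bean and dump the effective producer settings:
public void dumpProducerConfig(KafkaProperties properties) {
    Map<String, Object> props = properties.buildProducerProperties();
    props.forEach((key, value) -> System.out.println(key + " = " + value));
}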

Spring boot spring.kafka.bootstrap-servers not getting picked up by consumer config

I'm trying to connect a Spring Boot project to Kafka.
In my application.properties file I have the following configs:
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
spring.kafka.consumer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.consumer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
spring.kafka.template.default-topic=my_default_topic
spring.kafka.admin.fail-fast=true
spring.kafka.consumer.auto-offset-reset=latest
spring.kafka.consumer.group-id=my_group_id
spring.kafka.listener.concurrency=10
spring.kafka.bootstrap-servers=kafka:9092
But in the logs I see that only some of these values are making it into the consumer configs. The configs from the logs are here:
2018-08-01 15:20:34.640 WARN 1 --- [ main] s.c.a.AnnotationConfigApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.context.ApplicationContextException: Failed to start bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry'; nested exception is org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
check.crcs = true
client.id =
connections.max.idle.ms = 540000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
heartbeat.interval.ms = 3000
interceptor.classes = null
internal.leave.group.on.close = true
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 305000
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.springframework.kafka.support.serializer.JsonDeserializer
Specifically, bootstrap.servers and group.id are missing. I then get an exception:
org.springframework.context.ApplicationContextException: Failed to start bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry'; nested exception is org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
because it can't find the bootstrap servers. I know I can manually pass the properties into a DefaultKafkaConsumerFactory bean (sketched just below), but I was hoping Spring Boot could handle that automatically. Is there any way to do that?
Thank you!
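For reference, the manual wiring I mean would look roughly like this (a sketch; servers, group id, and deserializers copied from the application.properties above):
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.support.serializer.JsonDeserializer;

@Bean
public ConsumerFactory<String, Object> manualConsumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "my_group_id");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
    return new DefaultKafkaConsumerFactory<>(props);
}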
EDIT:
I am trying to consume the messages using a @KafkaListener like so:
@KafkaListener(topics = "${app-name.inputTopic}")
public void handleMessage(CustomMessage message) {
    //my code
}
I realized there were @Configuration beans elsewhere in the project that were manually setting the properties. I removed them and everything worked.
Check out the KafkaProperties object. It contains the standard properties and acts like a factory:
@Bean
public KafkaTemplate<Integer, String> createTemplate(KafkaProperties properties)
{
    Map<String, Object> props = properties.buildProducerProperties();
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    KafkaTemplate<Integer, String> template = new KafkaTemplate<>(new DefaultKafkaProducerFactory<>(props));
    return template;
}
Original answer: https://stackoverflow.com/a/60961582/5740547
