Spring App Not Connecting to Kafka with SSL

I have a Spring Boot app with a very simple Kafka producer. Everything works great if I connect to a Kafka cluster without encryption, but it times out if I try to connect to a cluster with SSL. Is there some other configuration I need in the producer, or some other property I need to define, so that Spring correctly uses all of these settings?
I have the following properties set:
spring.kafka.producer.bootstrap-servers=broker1.kafka.poc.com:9093,broker3.kafka.poc.com:9093,broker4.kafka.poc.com:9093,broker5.kafka.poc.com:9093
spring.kafka.ssl.key-store-type=jks
spring.kafka.ssl.trust-store-location=file:/home/ec2-user/truststore.jks
spring.kafka.ssl.trust-store-password=test1234
spring.kafka.ssl.key-store-location=file:/home/ec2-user/keystore.jks
spring.kafka.ssl.key-store-password=test1234
logging.level.org.apache.kafka=debug
server.ssl.key-password=test1234
spring.kafka.ssl.key-password=test1234
spring.kafka.producer.client-id=sym
spring.kafka.admin.ssl.protocol=ssl
The following ProducerConfig is printed when the app starts up:
o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
acks = 1
batch.size = 16384
bootstrap.servers = [broker1.kafka.allypoc.com:9093, broker3.kafka.allypoc.com:9093, broker4.kafka.allypoc.com:9093, broker5.kafka.allypoc.com:9093]
buffer.memory = 33554432
client.dns.lookup = default
client.id = sym
compression.type = none
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = false
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2147483647
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = [hidden]
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = /home/ec2-user/keystore.jks
ssl.keystore.password = [hidden]
ssl.keystore.type = jks
ssl.protocol = ssl
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = /home/ec2-user/truststore.jks
ssl.truststore.password = [hidden]
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
My producer is extremely simple:
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class Producer {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public Producer(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    void sendMessage(String topic, String message) {
        this.kafkaTemplate.send(topic, message);
    }

    void sendMessage(String topic, String key, String message) {
        this.kafkaTemplate.send(topic, key, message);
    }
}
Connecting to Kafka with SSL gets a TimeoutException saying Topic symbols not present in metadata after 60000 ms.
If I turn on debug logs, I get the following repeatedly, looping over all of my brokers:
2019-05-29 20:10:25.768 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient : [Producer clientId=sym] Completed connection to node -4. Fetching API versions.
2019-05-29 20:10:25.768 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient : [Producer clientId=sym] Initiating API versions fetch from node -4.
2019-05-29 20:10:25.768 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient : [Producer clientId=sym] Initialize connection to node 10.25.77.13:9093 (id: -3 rack: null) for sending metadata request
2019-05-29 20:10:25.768 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient : [Producer clientId=sym] Initiating connection to node 10.25.77.13:9093 (id: -3 rack: null) using address /10.25.77.13
2019-05-29 20:10:25.994 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.common.metrics.Metrics : Added sensor with name node--3.bytes-sent
2019-05-29 20:10:25.996 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.common.metrics.Metrics : Added sensor with name node--3.bytes-received
2019-05-29 20:10:25.997 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.common.metrics.Metrics : Added sensor with name node--3.latency
2019-05-29 20:10:25.998 DEBUG 1381 --- [rk-thread | sym] o.apache.kafka.common.network.Selector : [Producer clientId=sym] Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -3
2019-05-29 20:10:26.107 DEBUG 1381 --- [rk-thread | sym] o.apache.kafka.common.network.Selector : [Producer clientId=sym] Connection with /10.25.75.151 disconnected
java.io.EOFException: null
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:119) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:381) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:342) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:609) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:541) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.common.network.Selector.poll(Selector.java:467) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:535) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:311) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:235) ~[kafka-clients-2.1.1.jar!/:na]
at java.base/java.lang.Thread.run(Thread.java:835) ~[na:na]
2019-05-29 20:10:26.108 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient : [Producer clientId=sym] Node -1 disconnected.
2019-05-29 20:10:26.110 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient : [Producer clientId=sym] Completed connection to node -3. Fetching API versions.

In the producer config, security.protocol should be set to SSL. You could also try setting ssl.endpoint.identification.algorithm = "" to disable hostname validation of the certificate, in case that's the issue. Other than that, it would be useful to see the Kafka broker config.
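For a Spring Boot app like the one above, a minimal sketch of how to pass those two settings through (assuming Boot 2.1-era properties: spring.kafka.ssl.* covers the stores, but not security.protocol) is the generic properties map:
spring.kafka.properties.[security.protocol]=SSL
# or scoped to the producer only:
spring.kafka.producer.properties.[security.protocol]=SSL
# an empty value disables hostname verification; only do this if the broker certificates lack matching SANs
spring.kafka.properties.[ssl.endpoint.identification.algorithm]=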

Related

Azure Kafka Bootstrap broker disconnected

I'm trying to create an Apache Kafka connection to Azure Event Hubs with Reactor Kafka in a Spring Boot application. At first I followed the official Azure tutorial to set up Azure Event Hubs and the Spring backend: https://learn.microsoft.com/en-us/azure/developer/java/spring-framework/configure-spring-cloud-stream-binder-java-app-kafka-azure-event-hub
Everything worked fine and I created some more advanced services.
However, when trying to get Reactor Kafka working with Azure Event Hubs, it doesn't work. When the consumer is triggered it cannot consume any messages, and the following is logged:
com.test.Application : Started Application in 10.442 seconds (JVM running for 10.771)
o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=test-client-0, groupId=$Default] Discovered group coordinator mynamespacename.servicebus.windows.net:9093 (id: 2147483647 rack: null)
o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=test-client-0, groupId=$Default] (Re-)joining group
o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=test-client-0, groupId=$Default] Successfully joined group with generation Generation{generationId=30, memberId='mynamespacename.servicebus.windows.net:c:$default:I:test-client-0-33016d4334614aa8b9b7bf3fd5e1023e', protocol='range'}
o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=test-client-0, groupId=$Default] Finished assignment for group at generation 30: {mynamespacename.servicebus.windows.net:c:$default:I:test-client-0-33016d4334614aa8b9b7bf3fd5e1023e=Assignment(partitions=[my-event-hub-0])}
o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=test-client-0, groupId=$Default] Successfully synced group in generation Generation{generationId=30, memberId='mynamespacename.servicebus.windows.net:c:$default:I:test-client-0-33016d4334614aa8b9b7bf3fd5e1023e', protocol='range'}
o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=test-client-0, groupId=$Default] Notifying assignor about the new Assignment(partitions=[my-event-hub-0])
o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=test-client-0, groupId=$Default] Adding newly assigned partitions: my-event-hub-0
o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=test-client-0, groupId=$Default] Setting offset for partition my-event-hub-0 to the committed offset FetchPosition{offset=17, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[mynamespacename.servicebus.windows.net:9093 (id: 0 rack: null)], epoch=absent}}
o.s.k.l.KafkaMessageListenerContainer : $Default: partitions assigned: [my-event-hub-0]
o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [mynamespacename.servicebus.windows.net:9093]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = test-client
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = $Default
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class org.springframework.kafka.support.serializer.JsonDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 127000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
o.a.kafka.common.utils.AppInfoParser : Kafka version: 2.7.1
o.a.kafka.common.utils.AppInfoParser : Kafka commitId: 61dbce85d0d41457
o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1629919378494
r.k.r.internals.ConsumerEventLoop : SubscribeEvent
o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=test-client, groupId=$Default] Subscribed to topic(s): my-event-hub
org.apache.kafka.clients.NetworkClient : [Consumer clientId=test-client, groupId=$Default] Bootstrap broker mynamespacename.servicebus.windows.net:9093 (id: -1 rack: null) disconnected
org.apache.kafka.clients.NetworkClient : [Consumer clientId=test-client, groupId=$Default] Bootstrap broker mynamespacename.servicebus.windows.net:9093 (id: -1 rack: null) disconnected
The following is the code:
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.springframework.stereotype.Service;
import lombok.extern.slf4j.Slf4j;
import reactor.core.publisher.Flux;
import reactor.kafka.receiver.KafkaReceiver;
import reactor.kafka.receiver.ReceiverOptions;
import reactor.kafka.receiver.ReceiverRecord;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

@Slf4j
@Service
public class StreamConsumer {

    public Flux<Object> consumeMessages() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "mynamespacename.servicebus.windows.net:9093");
        props.put(ConsumerConfig.CLIENT_ID_CONFIG, "test-client");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "$Default");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, org.springframework.kafka.support.serializer.JsonDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringDeserializer.class);

        ReceiverOptions<String, Object> receiverOptions = ReceiverOptions.create(props);
        ReceiverOptions<String, Object> options = receiverOptions
                .subscription(Collections.singleton(KafkaConstants.KAFKA_TOPIC))
                .addAssignListener(partitions -> log.debug("onPartitionsAssigned {}", partitions))
                .addRevokeListener(partitions -> log.debug("onPartitionsRevoked {}", partitions));
        Flux<ReceiverRecord<String, Object>> kafkaFlux = KafkaReceiver.create(options).receive();
        return kafkaFlux.map(x -> "Test");
    }
}
The reactor code uses the same topic and group-id constants. Since the client logs a broken connection:
[Consumer clientId=test-client, groupId=$Default] Bootstrap broker mynamespacename.servicebus.windows.net:9093 (id: -1 rack: null) disconnected
I assume there is some configuration missing to connect the consumer properly to Azure Event Hubs.
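One mismatch visible in the config dump above: security.protocol = PLAINTEXT, while the Event Hubs Kafka endpoint only accepts SASL_SSL with the PLAIN mechanism and the namespace connection string as the password. As a sketch of the entries the props map above appears to be missing (the connection-string value is a hypothetical placeholder, not taken from the post):
// Assumed Event Hubs settings; the connection string below is a placeholder
props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
props.put(SaslConfigs.SASL_JAAS_CONFIG,
        "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"$ConnectionString\" "
                + "password=\"Endpoint=sb://mynamespacename.servicebus.windows.net/;...\";");
This assumes imports for org.apache.kafka.clients.CommonClientConfigs and org.apache.kafka.common.config.SaslConfigs.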

Unable to connect to Kafka brokers over SSL - Spring Boot

I'm starting a new project with spring.boot version: 2.1.5.RELEASE and Kafka version: 2.0.1, and I've hit a problem: I can't connect to my remote Kafka broker over SSL.
At the same time, my old project with spring.boot version: 2.0.1.RELEASE and Kafka version: 1.0.1 works fine.
application.yml
spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: my-kafka:9093
          autoCreateTopics: false
      bindings:
        customers-in:
          destination: customers
          contentType: application/json
        customers-out:
          destination: customers
          contentType: application/json
  kafka:
    ssl:
      protocol: SSL
      trust-store-location: guest.truststore
      trust-store-password: 123456
      key-password: 123456
      key-store-location: guest.keystore
      key-store-password: 123456
I got this error message:
2019-06-10 17:59:03.636 ERROR 30220 --- [ main] o.s.c.s.b.k.p.KafkaTopicProvisioner : Failed to obtain partition information
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
Old project properties (everything works fine):
spring.kafka.bootstrap-servers=my-kafka:9093
spring.kafka.consumer.properties.[group.id]=group_23_spring-kafka
# SSL
spring.kafka.properties.[security.protocol]=SSL
spring.kafka.ssl.trust-store-location=guest.truststore
spring.kafka.ssl.trust-store-password=123456
spring.kafka.ssl.key-store-password=123456
spring.kafka.ssl.key-store-location=guest.keystore
spring.kafka.ssl.key-password=123456
Also if I update my working project with
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.0.1</version>
</dependency>
it produces the same error.
My Kafka INFO-log producer config for the new and old versions looks almost the same.
New config, Kafka version: 2.0.1, spring-boot version: 2.1.5.RELEASE:
acks = 1
batch.size = 16384
bootstrap.servers = [kafka-sbox.epm-eco.projects.epam.com:9093]
buffer.memory = 33554432
client.id =
compression.type = none
connections.max.idle.ms = 540000
enable.idempotence = false
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 0
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = SSL
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm =
ssl.key.password = [hidden]
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = guest.keystore
ssl.keystore.password = [hidden]
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = guest.truststore
ssl.truststore.password = [hidden]
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
Old config, Kafka version: 1.0.1, spring-boot version: 2.0.1.RELEASE:
acks = 1
batch.size = 16384
bootstrap.servers = [kafka-sbox.epm-eco.projects.epam.com:9093]
buffer.memory = 33554432
client.id =
compression.type = none
connections.max.idle.ms = 540000
enable.idempotence = false
interceptor.classes = null
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 0
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = SSL
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = [hidden]
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = guest.keystore
ssl.keystore.password = [hidden]
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = guest.truststore
ssl.truststore.password = [hidden]
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
I also asked the same question on the spring-cloud-stream-binder-kafka GitHub, but we didn't reach a conclusion.
Has anyone encountered the same problem? Or does anyone know how I should configure application.yml to be able to connect to my Kafka broker over SSL?
Thanks
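One avenue worth trying, as a sketch rather than a confirmed fix: the Kafka binder accepts arbitrary client properties under spring.cloud.stream.kafka.binder.configuration, so the SSL settings can be handed to the binder directly, with absolute paths for the stores (the plain Kafka client resolves a bare guest.truststore against the working directory; the paths below are placeholders):
spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: my-kafka:9093
          configuration:
            security.protocol: SSL
            ssl.truststore.location: /absolute/path/to/guest.truststore
            ssl.truststore.password: 123456
            ssl.keystore.location: /absolute/path/to/guest.keystore
            ssl.keystore.password: 123456
            ssl.key.password: 123456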

Cannot connect to Kafka through Spring Boot (Docker) application

I started Kafka locally and wrote a sample Spring Boot producer. When I run this application it works fine. But when I start the application in a Docker container, I get the logs below: "Connection to node 0 could not be established. Broker may not be available."
2019-03-20 06:06:56.023 INFO 1 --- [ XNIO-2 task-1] o.a.k.c.u.AppInfoParser : Kafka version : 1.0.1
2019-03-20 06:06:56.023 INFO 1 --- [ XNIO-2 task-1] o.a.k.c.u.AppInfoParser : Kafka commitId : c0518aa65f25317e
2019-03-20 06:06:56.224 WARN 1 --- [ad | producer-1] o.a.k.c.NetworkClient : [Producer clientId=producer-1] Connection to node 0 could not be established. Broker may not be available.
2019-03-20 06:06:56.263 WARN 1 --- [ad | producer-1] o.a.k.c.NetworkClient : [Producer clientId=producer-1] Connection to node 0 could not be established. Broker may not be available.
2019-03-20 06:06:56.355 WARN 1 --- [ad | producer-1] o.a.k.c.NetworkClient : [Producer clientId=producer-1] Connection to node 0 could not be established. Broker may not be available.
2019-03-20 06:06:56.594 WARN 1 --- [ad | producer-1] o.a.k.c.NetworkClient : [Producer clientId=producer-1] Connection to node 0 could not be established. Broker may not be available.
2019-03-20 06:06:56.919 WARN 1 --- [ad | producer-1] o.a.k.c.NetworkClient : [Producer clientId=producer-1] Connection to node 0 could not be established. Broker may not be available.
2019-03-20 06:06:57.877 WARN 1 --- [ad | producer-1] o.a.k.c.NetworkClient : [Producer clientId=producer-1] Connection to node 0 could not be established. Broker may not be available.
Please find the ProducerConfig values below, taken from the log:
2019-03-20 06:06:55.953 INFO 1 --- [ XNIO-2 task-1] o.a.k.c.p.ProducerConfig : ProducerConfig values:
acks = 1
batch.size = 16384
bootstrap.servers = [192.168.0.64:9092]
buffer.memory = 33554432
client.id =
compression.type = none
connections.max.idle.ms = 540000
enable.idempotence = false
interceptor.classes = null
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 0
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.springframework.kafka.support.serializer.JsonSerializer
My producer config is as below:
@Bean
public Map<String, Object> producerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.0.64:9092");
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    return props;
}
Is there any additional configuration required when connecting through docker?
You are probably connecting to the wrong port. Do a docker ps:
e.g.
2ca7f0cdddd confluentinc/cp-enterprise-kafka:5.1.2 "/etc/confluent/dock…" 2 weeks ago Up 50 seconds 0.0.0.0:9092->9092/tcp, 0.0.0.0:29092->29092/tcp broker
and use the latter broker port (29092 in the example above).
Also, from your laptop you can usually reach the Docker network at localhost.
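The two mapped ports exist because the broker usually advertises two listeners, one per network. A sketch of the typical docker-compose setup (listener names and ports here are illustrative, not taken from the question):
# Hypothetical broker service excerpt
environment:
  KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
  KAFKA_LISTENERS: INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:29092
  KAFKA_ADVERTISED_LISTENERS: INTERNAL://broker:9092,EXTERNAL://localhost:29092
  KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
A producer running in another container on the same Docker network would bootstrap against broker:9092, while a process on the host uses localhost:29092.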

Spring Boot spring.kafka.bootstrap-servers not getting picked up by consumer config

I'm trying to connect a Spring Boot project to Kafka.
In my application.properties file I have the following configs:
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
spring.kafka.consumer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.consumer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
spring.kafka.template.default-topic=my_default_topic
spring.kafka.admin.fail-fast=true
spring.kafka.consumer.auto-offset-reset=latest
spring.kafka.consumer.group-id=my_group_id
spring.kafka.listener.concurrency=10
spring.kafka.bootstrap-servers=kafka:9092
But in the logs I see that only some of these values are making it into the consumer config. The config from the logs is here:
2018-08-01 15:20:34.640 WARN 1 --- [ main] s.c.a.AnnotationConfigApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.context.ApplicationContextException: Failed to start bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry'; nested exception is org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
check.crcs = true
client.id =
connections.max.idle.ms = 540000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
heartbeat.interval.ms = 3000
interceptor.classes = null
internal.leave.group.on.close = true
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 305000
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.springframework.kafka.support.serializer.JsonDeserializer
Specifically bootstrap.servers and group.id are missing. I then get an exception
org.springframework.context.ApplicationContextException: Failed to start bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry'; nested exception is org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
because it can't find the bootstrap servers. I know I can manually pass the properties into a DefaultKafkaConsumerFactory bean but I was hoping Spring Boot could handle that automatically. Is there any way to do that?
Thank you!
EDIT:
I am trying to consume the messages using a @KafkaListener like so...
@KafkaListener(topics = "${app-name.inputTopic}")
public void handleMessage(CustomMessage message) {
    //my code
}
I realized there were @Configuration beans elsewhere in the project that were manually setting the properties. I removed them and everything worked.
Check out the KafkaProperties object. It contains the standard properties and acts like a factory.
@Bean
public KafkaTemplate<Integer, String> createTemplate(KafkaProperties properties) {
    Map<String, Object> props = properties.buildProducerProperties();
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    return new KafkaTemplate<>(new DefaultKafkaProducerFactory<>(props));
}
Original answer: https://stackoverflow.com/a/60961582/5740547

Spring Boot: I have Kafka broker code that makes the application get stuck

Spring Boot: I have Kafka code in the application that listens to a topic. I have a controller with API endpoints, and my Kafka consumer polls the topic. When I start the application I see Kafka starting, but the problem is that my endpoints are not working, and the application is not even up on the mentioned port. I need to test my endpoints but I cannot. I see Kafka starting and working fine, but I don't see the prompt saying that the application started on port XXXX, and the endpoints are not working.
Logs on console:
2018-04-17 17:18:59.447 INFO 59720 --- [ main] c.f.s.e.EventAggregationApplication : Starting EventAggregationApplication on FPTECHS48s-MacBook-Pro.local with PID 59720 (/Users/fptechs48/IdeaProjects/event-aggregation/target/classes started by fptechs48 in /Users/fptechs48/IdeaProjects/event-aggregation)
2018-04-17 17:18:59.450 DEBUG 59720 --- [ main] c.f.s.e.EventAggregationApplication : Running with Spring Boot v1.5.9.RELEASE, Spring v4.3.13.RELEASE
2018-04-17 17:18:59.450 INFO 59720 --- [ main] c.f.s.e.EventAggregationApplication : The following profiles are active: dev
2018-04-17 17:18:59.487 INFO 59720 --- [ main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext#1198b989: startup date [Tue Apr 17 17:18:59 IST 2018]; root of context hierarchy
2018-04-17 17:18:59.894 INFO 59720 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.kafka.annotation.KafkaBootstrapConfiguration' of type [org.springframework.kafka.annotation.KafkaBootstrapConfiguration$$EnhancerBySpringCGLIB$$b36fb556] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2018-04-17 17:19:00.521 INFO 59720 --- [ main] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
bootstrap.servers = [13.126.200.243:9092]
client.id =
connections.max.idle.ms = 300000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 120000
retries = 5
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
2018-04-17 17:19:00.553 INFO 59720 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version : 1.0.0
2018-04-17 17:19:00.553 INFO 59720 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId : aaa7af6d4a11b29d
After this, Kafka starts listening to the topic, but if I hit an endpoint defined in my application, it doesn't work.
The console doesn't even tell me Tomcat started on port XXXX.
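One clue in the log: the context being refreshed is a plain AnnotationConfigApplicationContext, not the embedded web application context Spring Boot 1.5.x creates for servlet apps, which matches Tomcat never starting. A guess, not confirmed by the post: check that the web starter is actually on the classpath, e.g. for Maven:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>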
