Spring Cloud Kafka Stream Unable to create Producer Config Error - spring-boot

I have two Spring Boot projects with Kafka Streams dependencies. They have exactly the same Gradle dependencies and exactly the same configuration, yet one of the projects logs the error below when started:
11:35:37.974 [restartedMain] INFO o.a.k.c.admin.AdminClientConfig - AdminClientConfig values:
bootstrap.servers = [192.169.0.109:6667]
client.id = client
connections.max.idle.ms = 300000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 120000
retries = 5
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
11:35:38.017 [restartedMain] INFO o.a.kafka.common.utils.AppInfoParser - Kafka version : 1.0.0
11:35:38.017 [restartedMain] INFO o.a.kafka.common.utils.AppInfoParser - Kafka commitId : aaa7af6d4a11b29d
11:35:38.136 [restartedMain] INFO o.s.c.s.b.k.p.KafkaTopicProvisioner - Using kafka topic for outbound: createschedule
11:36:08.147 [restartedMain] ERROR o.s.c.stream.binding.BindingService - Failed to create producer binding; retrying in 30 seconds
org.springframework.cloud.stream.provisioning.ProvisioningException: provisioning exception; nested exception is java.util.concurrent.TimeoutException
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.createTopic(KafkaTopicProvisioner.java:243)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.provisionProducerDestination(KafkaTopicProvisioner.java:126)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.provisionProducerDestination(KafkaTopicProvisioner.java:71)
at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindProducer(AbstractMessageChannelBinder.java:140)
at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindProducer(AbstractMessageChannelBinder.java:77)
at org.springframework.cloud.stream.binder.AbstractBinder.bindProducer(AbstractBinder.java:136)
at org.springframework.cloud.stream.binding.BindingService.doBindProducer(BindingService.java:244)
at org.springframework.cloud.stream.binding.BindingService.bindProducer(BindingService.java:221)
at org.springframework.cloud.stream.binding.BindableProxyFactory.bindOutputs(BindableProxyFactory.java:252)
at org.springframework.cloud.stream.binding.OutputBindingLifecycle.doStartWithBindable(OutputBindingLifecycle.java:46)
at java.util.Iterator.forEachRemaining(Iterator.java:116)
at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
at org.springframework.cloud.stream.binding.AbstractBindingLifecycle.start(AbstractBindingLifecycle.java:47)
at org.springframework.cloud.stream.binding.OutputBindingLifecycle.start(OutputBindingLifecycle.java:29)
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:181)
at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:52)
at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:356)
at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:157)
at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:121)
at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:884)
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.finishRefresh(ServletWebServerApplicationContext.java:161)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:552)
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:140)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:752)
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:388)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:327)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1246)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1234)
at com.und.ClientPanelApplicationKt.main(ClientPanelApplication.kt:21)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.boot.devtools.restart.RestartLauncher.run(RestartLauncher.java:49)
Caused by: java.util.concurrent.TimeoutException: null
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:108)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:225)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.createTopicAndPartitions(KafkaTopicProvisioner.java:271)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.createTopicIfNecessary(KafkaTopicProvisioner.java:251)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.createTopic(KafkaTopicProvisioner.java:236)
... 34 common frames omitted
I am using Kotlin 1.2.30 and Spring Boot 2.0.0.RELEASE.
The relevant part of my build.gradle is below:
ext {
    kotlinVersion = '1.2.30'
    springBootVersion = '2.0.0.RELEASE'
    springCloudVersion = 'Finchley.M8'
}

dependencies {
    compile('org.springframework.cloud:spring-cloud-starter-stream-kafka')
}
I have defined one output channel as below:
interface EventStream {

    @Output("createschedule")
    fun outputEvent(): MessageChannel
}
Below is the relevant section of my code where I have defined the sender and listener:
@Service
class TestService {

    @Autowired
    private lateinit var eventStream: EventStream

    fun test() {
        processSchedule("Hello")
    }

    @SendTo("createschedule")
    fun processSchedule(campaign: String): String {
        return campaign
    }

    @StreamListener("createschedule")
    fun listenSchedule(campaign: String) {
        println(campaign)
        //return campaign
    }
}
Below is the relevant section of my application.yaml:
spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: ${KAFKA_IP}
          defaultBrokerPort: ${KAFKA_PORT}
          zkNodes: ${ZK_IP}
          defaultZkPort: ${ZK_PORT}
      bindings:
        createschedule:
          group: createscheduleGroup
          destination: createschedule
          consumer:
            group: createscheduleGroup
  kafka:
    admin:
      properties:
        #security.protocol: SSL
        client.id: client
As I already stated, it throws an error while starting, and after it starts it keeps throwing the error below in the logs:
11:54:38.231 [pool-3-thread-1] INFO o.s.c.s.b.k.p.KafkaTopicProvisioner - Using kafka topic for outbound: createschedule
11:54:59.070 [kafka-admin-client-thread | client-panel] WARN o.apache.kafka.clients.NetworkClient - [AdminClient clientId=client-panel] Connection to node -1 could not be established. Broker may not be available.
11:55:08.234 [pool-3-thread-1] ERROR o.s.c.stream.binding.BindingService - Failed to create producer binding; retrying in 30 seconds
org.springframework.cloud.stream.provisioning.ProvisioningException: provisioning exception; nested exception is java.util.concurrent.TimeoutException
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.createTopic(KafkaTopicProvisioner.java:243)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.provisionProducerDestination(KafkaTopicProvisioner.java:126)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.provisionProducerDestination(KafkaTopicProvisioner.java:71)
at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindProducer(AbstractMessageChannelBinder.java:140)
at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder.doBindProducer(AbstractMessageChannelBinder.java:77)
at org.springframework.cloud.stream.binder.AbstractBinder.bindProducer(AbstractBinder.java:136)
at org.springframework.cloud.stream.binding.BindingService.lambda$rescheduleProducerBinding$2(BindingService.java:262)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at java.util.concurrent.Executors$RunnableAdapter.call$$$capture(Executors.java:511)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java)
at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
at java.util.concurrent.FutureTask.run(FutureTask.java)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.TimeoutException: null
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:108)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:225)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.createTopicAndPartitions(KafkaTopicProvisioner.java:271)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.createTopicIfNecessary(KafkaTopicProvisioner.java:251)
at org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner.createTopic(KafkaTopicProvisioner.java:236)
... 16 common frames omitted

It is a problem with Kafka itself.
If you are using Docker containers, stop the Kafka container, remove it, and start the Kafka container again.
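For example, with plain Docker (a minimal sketch, assuming the container is named kafka; reuse the image and flags from your original run command):
# stop and remove the existing broker container
docker stop kafka
docker rm kafka
# recreate it; substitute your own image and port mapping
docker run -d --name kafka -p 9092:9092 <your-kafka-image>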

Related

Trying to connect a containerized Spring Boot app with a containerized Kafka server

I have a Spring Boot application which worked fine with Kafka running in a container, but when I containerize the Spring Boot application as well, it doesn't work.
This is the docker-compose file with which I created the Kafka and Zookeeper containers:
version: "3.4"
services:
  zookeeper:
    image: bitnami/zookeeper
    restart: always
    container_name: "zookeeper"
    ports:
      - "2181:2181"
    volumes:
      - "zookeeper_data:/bitnami"
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: bitnami/kafka
    ports:
      - "9092:9092"
    restart: always
    container_name: "kafka"
    volumes:
      - "kafka_data:/bitnami"
    environment:
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_LISTENERS=PLAINTEXT://:9092
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
    depends_on:
      - zookeeper
volumes:
  zookeeper_data:
    driver: local
  kafka_data:
    driver: local
The application.yml of the Spring Boot application:
server:
  port: 5001
spring:
  jpa:
    database-platform: org.hibernate.dialect.MySQL8Dialect
    show-sql: true
    hibernate:
      ddl-auto: update
  datasource:
    url: jdbc:mysql://mysql-container:3306/craproject?autoReconnect=true&useSSL=false&useSSL=false&serverTimezone=UTC&createDatabaseIfNotExist=true
    username: ***
    password: ****
  data:
    mongodb:
      host: mongo-container
      port: 27017
      database: craprojet
  kafka:
    bootstrap-servers:
      - kafka:9092
    consumer:
      group-id: project-group
      enable-auto-commit: false
      auto-offset-reset: latest
      isolation-level: read_committed
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
The dockerfile with which I created the image of the app:
FROM openjdk:11
COPY target/project-service-1.jar project-service-1.jar
EXPOSE 5001
ENTRYPOINT ["java", "-jar" , "project-service-1.jar"]
The Kafka and database containers are running fine.
This is the command I use to run the Spring Boot app container:
docker run --name project-service \
    --network techbankNet \
    -p 5001:5001 \
    --link mysql-container:mysql \
    --link mongo-container:mongo \
    --link adminer:adminer \
    --link kafka:kafka project-service
The log:
2022-08-23 19:05:05.170 INFO 1 --- [ main] o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [kafka:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = consumer-project-group-1
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = project-group
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_committed
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 127000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.springframework.kafka.support.serializer.JsonDeserializer
2022-08-23 19:05:15.315 WARN 1 --- [ main] org.apache.kafka.clients.ClientUtils : Couldn't resolve server kafka:9092 from bootstrap.servers as DNS resolution failed for kafka
2022-08-23 19:05:15.316 INFO 1 --- [ main] org.apache.kafka.common.metrics.Metrics : Metrics scheduler closed
2022-08-23 19:05:15.316 INFO 1 --- [ main] org.apache.kafka.common.metrics.Metrics : Closing reporter org.apache.kafka.common.metrics.JmxReporter
2022-08-23 19:05:15.317 INFO 1 --- [ main] org.apache.kafka.common.metrics.Metrics : Metrics reporters closed
2022-08-23 19:05:15.319 INFO 1 --- [ main] o.a.kafka.common.utils.AppInfoParser : App info kafka.consumer for consumer-project-group-1 unregistered
2022-08-23 19:05:15.319 WARN 1 --- [ main] ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.context.ApplicationContextException: Failed to start bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry'; nested exception is org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
2022-08-23 19:05:15.340 INFO 1 --- [ main] j.LocalContainerEntityManagerFactoryBean : Closing JPA EntityManagerFactory for persistence unit 'default'
2022-08-23 19:05:15.343 INFO 1 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown initiated...
2022-08-23 19:05:15.365 INFO 1 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown completed.
2022-08-23 19:05:15.368 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Stopping service [Tomcat]
2022-08-23 19:05:15.389 INFO 1 --- [ main] ConditionEvaluationReportLoggingListener :
Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2022-08-23 19:05:15.420 ERROR 1 --- [ main] o.s.boot.SpringApplication : Application run failed
org.springframework.context.ApplicationContextException: Failed to start bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry'; nested exception is org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:181) ~[spring-context-5.3.9.jar!/:5.3.9]
at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:54) ~[spring-context-5.3.9.jar!/:5.3.9]
at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:356) ~[spring-context-5.3.9.jar!/:5.3.9]
at java.base/java.lang.Iterable.forEach(Iterable.java:75) ~[na:na]
at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:155) ~[spring-context-5.3.9.jar!/:5.3.9]
at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:123) ~[spring-context-5.3.9.jar!/:5.3.9]
at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:935) ~[spring-context-5.3.9.jar!/:5.3.9]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:586) ~[spring-context-5.3.9.jar!/:5.3.9]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:145) ~[spring-boot-2.5.3.jar!/:2.5.3]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:754) ~[spring-boot-2.5.3.jar!/:2.5.3]
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:434) ~[spring-boot-2.5.3.jar!/:2.5.3]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:338) ~[spring-boot-2.5.3.jar!/:2.5.3]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1343) ~[spring-boot-2.5.3.jar!/:2.5.3]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1332) ~[spring-boot-2.5.3.jar!/:2.5.3]
at com.project.CQRS.ProjectServiceApplication.main(ProjectServiceApplication.java:16) ~[classes!/:1]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:na]
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:566) ~[na:na]
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49) ~[project-service-1.jar:1]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:108) ~[project-service-1.jar:1]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:58) ~[project-service-1.jar:1]
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:88) ~[project-service-1.jar:1]
Caused by: org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:819) ~[kafka-clients-2.7.1.jar!/:na]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createRawConsumer(DefaultKafkaConsumerFactory.java:366) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createKafkaConsumer(DefaultKafkaConsumerFactory.java:334) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createConsumerWithAdjustedProperties(DefaultKafkaConsumerFactory.java:310) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createKafkaConsumer(DefaultKafkaConsumerFactory.java:277) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createConsumer(DefaultKafkaConsumerFactory.java:254) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.<init>(KafkaMessageListenerContainer.java:715) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.listener.KafkaMessageListenerContainer.doStart(KafkaMessageListenerContainer.java:320) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.listener.AbstractMessageListenerContainer.start(AbstractMessageListenerContainer.java:397) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.listener.ConcurrentMessageListenerContainer.doStart(ConcurrentMessageListenerContainer.java:205) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.listener.AbstractMessageListenerContainer.start(AbstractMessageListenerContainer.java:397) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.config.KafkaListenerEndpointRegistry.startIfNecessary(KafkaListenerEndpointRegistry.java:327) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.config.KafkaListenerEndpointRegistry.start(KafkaListenerEndpointRegistry.java:272) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:178) ~[spring-context-5.3.9.jar!/:5.3.9]
... 22 common frames omitted
Caused by: org.apache.kafka.common.config.ConfigException: No resolvable bootstrap urls given in bootstrap.servers
at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:89) ~[kafka-clients-2.7.1.jar!/:na]
at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:48) ~[kafka-clients-2.7.1.jar!/:na]
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:728) ~[kafka-clients-2.7.1.jar!/:na]
You should put project-service (and adminer, mongo, and mysql) in the same Docker Compose file as Kafka and not use docker run.
This will create a default bridge network where the containers can talk to each other.
Or you need to attach the techbankNet Docker network to the Kafka service in the compose file, as sketched below.
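A minimal fragment for that second option might look like this (a sketch, reusing the service and network names from the files above):
services:
  kafka:
    # ...existing kafka configuration from the compose file above...
    networks:
      - techbankNet
networks:
  techbankNet:
    external: true   # created outside this compose file, e.g. via docker network create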
https://docs.docker.com/compose/networking/
Also see Connect to Kafka running in Docker

Azure Kafka Bootstrap broker disconnected

I am trying to create an Apache Kafka connection to Azure Event Hubs with Reactor Kafka in a Spring Boot application. At first I followed the official Azure tutorial to set up Azure Event Hubs and the Spring backend: https://learn.microsoft.com/en-us/azure/developer/java/spring-framework/configure-spring-cloud-stream-binder-java-app-kafka-azure-event-hub
Everything worked fine and I created some more advanced services.
However, when trying to get Reactor Kafka working with Azure Event Hubs, it doesn't work: when the consumer is triggered it cannot consume any messages, and the following is logged:
com.test.Application : Started Application in 10.442 seconds (JVM running for 10.771)
o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=test-client-0, groupId=$Default] Discovered group coordinator mynamespacename.servicebus.windows.net:9093 (id: 2147483647 rack: null)
o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=test-client-0, groupId=$Default] (Re-)joining group
o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=test-client-0, groupId=$Default] Successfully joined group with generation Generation{generationId=30, memberId='mynamespacename.servicebus.windows.net:c:$default:I:test-client-0-33016d4334614aa8b9b7bf3fd5e1023e', protocol='range'}
o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=test-client-0, groupId=$Default] Finished assignment for group at generation 30: {mynamespacename.servicebus.windows.net:c:$default:I:test-client-0-33016d4334614aa8b9b7bf3fd5e1023e=Assignment(partitions=[my-event-hub-0])}
o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=test-client-0, groupId=$Default] Successfully synced group in generation Generation{generationId=30, memberId='mynamespacename.servicebus.windows.net:c:$default:I:test-client-0-33016d4334614aa8b9b7bf3fd5e1023e', protocol='range'}
o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=test-client-0, groupId=$Default] Notifying assignor about the new Assignment(partitions=[my-event-hub-0])
o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=test-client-0, groupId=$Default] Adding newly assigned partitions: my-event-hub-0
o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=test-client-0, groupId=$Default] Setting offset for partition my-event-hub-0 to the committed offset FetchPosition{offset=17, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[mynamespacename.servicebus.windows.net:9093 (id: 0 rack: null)], epoch=absent}}
o.s.k.l.KafkaMessageListenerContainer : $Default: partitions assigned: [my-event-hub-0]
o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [mynamespacename.servicebus.windows.net:9093]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = test-client
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = $Default
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class org.springframework.kafka.support.serializer.JsonDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 127000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
o.a.kafka.common.utils.AppInfoParser : Kafka version: 2.7.1
o.a.kafka.common.utils.AppInfoParser : Kafka commitId: 61dbce85d0d41457
o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1629919378494
r.k.r.internals.ConsumerEventLoop : SubscribeEvent
o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=test-client, groupId=$Default] Subscribed to topic(s): my-event-hub
org.apache.kafka.clients.NetworkClient : [Consumer clientId=test-client, groupId=$Default] Bootstrap broker mynamespacename.servicebus.windows.net:9093 (id: -1 rack: null) disconnected
org.apache.kafka.clients.NetworkClient : [Consumer clientId=test-client, groupId=$Default] Bootstrap broker mynamespacename.servicebus.windows.net:9093 (id: -1 rack: null) disconnected
The following is the code:
import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.springframework.stereotype.Service;
import reactor.core.publisher.Flux;
import reactor.kafka.receiver.KafkaReceiver;
import reactor.kafka.receiver.ReceiverOptions;
import reactor.kafka.receiver.ReceiverRecord;

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

@Slf4j
@Service
public class StreamConsumer {

    public Flux<Object> consumeMessages() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "mynamespacename.servicebus.windows.net:9093");
        props.put(ConsumerConfig.CLIENT_ID_CONFIG, "test-client");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "$Default");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, org.springframework.kafka.support.serializer.JsonDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringDeserializer.class);

        ReceiverOptions<String, Object> receiverOptions = ReceiverOptions.create(props);
        ReceiverOptions<String, Object> options = receiverOptions.subscription(Collections.singleton(KafkaConstants.KAFKA_TOPIC))
                .addAssignListener(partitions -> log.debug("onPartitionsAssigned {}", partitions))
                .addRevokeListener(partitions -> log.debug("onPartitionsRevoked {}", partitions));
        Flux<ReceiverRecord<String, Object>> kafkaFlux = KafkaReceiver.create(options).receive();
        return kafkaFlux.map(x -> "Test");
    }
}
The Reactor code uses the same topic and group id constants. The client logs a broken connection:
[Consumer clientId=test-client, groupId=$Default] Bootstrap broker mynamespacename.servicebus.windows.net:9093 (id: -1 rack: null) disconnected
I assume that a configuration setting is missing to connect the consumer properly to Azure Event Hubs.
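For what it's worth, the ConsumerConfig printout above shows security.protocol = PLAINTEXT and sasl.mechanism = GSSAPI, whereas the Microsoft tutorial linked above configures the Event Hubs Kafka endpoint with SASL over TLS. A sketch of the properties that appear to be missing from the props map in the code (the connection string value is a placeholder):
// assumed extra imports:
// import org.apache.kafka.clients.CommonClientConfigs;
// import org.apache.kafka.common.config.SaslConfigs;
props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
props.put(SaslConfigs.SASL_JAAS_CONFIG,
        "org.apache.kafka.common.security.plain.PlainLoginModule required "
        + "username=\"$ConnectionString\" "
        + "password=\"<your-event-hubs-connection-string>\";");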

Spring App Not Connecting to Kafka with SSL

I have a Spring Boot app with a very simple Kafka producer. Everything works great if I connect to a Kafka cluster without encryption, but it times out if I try to connect to a cluster with SSL. Is there some other configuration I need in the producer, or some other property I need to define, to allow Spring to correctly use all of these settings?
I have the following properties set:
spring.kafka.producer.bootstrap-servers=broker1.kafka.poc.com:9093,broker3.kafka.poc.com:9093,broker4.kafka.poc.com:9093,broker5.kafka.poc.com:9093
spring.kafka.ssl.key-store-type=jks
spring.kafka.ssl.trust-store-location=file:/home/ec2-user/truststore.jks
spring.kafka.ssl.trust-store-password=test1234
spring.kafka.ssl.key-store-location=file:/home/ec2-user/keystore.jks
spring.kafka.ssl.key-store-password=test1234
logging.level.org.apache.kafka=debug
server.ssl.key-password=test1234
spring.kafka.ssl.key-password=test1234
spring.kafka.producer.client-id=sym
spring.kafka.admin.ssl.protocol=ssl
The following ProducerConfig values are printed when the app starts up:
o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
acks = 1
batch.size = 16384
bootstrap.servers = [broker1.kafka.allypoc.com:9093, broker3.kafka.allypoc.com:9093, broker4.kafka.allypoc.com:9093, broker5.kafka.allypoc.com:9093]
buffer.memory = 33554432
client.dns.lookup = default
client.id = sym
compression.type = none
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = false
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2147483647
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = [hidden]
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = /home/ec2-user/keystore.jks
ssl.keystore.password = [hidden]
ssl.keystore.type = jks
ssl.protocol = ssl
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = /home/ec2-user/truststore.jks
ssl.truststore.password = [hidden]
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
My producer is extremely simple:
@Service
public class Producer {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public Producer(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    void sendMessage(String topic, String message) {
        this.kafkaTemplate.send(topic, message);
    }

    void sendMessage(String topic, String key, String message) {
        this.kafkaTemplate.send(topic, key, message);
    }
}
Connecting to Kafka with SSL gets a TimeoutException saying "Topic symbols not present in metadata after 60000 ms."
If I turn on debug logs, I get the following repeatedly, looping over all of my brokers:
2019-05-29 20:10:25.768 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient : [Producer clientId=sym] Completed connection to node -4. Fetching API versions.
2019-05-29 20:10:25.768 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient : [Producer clientId=sym] Initiating API versions fetch from node -4.
2019-05-29 20:10:25.768 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient : [Producer clientId=sym] Initialize connection to node 10.25.77.13:9093 (id: -3 rack: null) for sending metadata request
2019-05-29 20:10:25.768 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient : [Producer clientId=sym] Initiating connection to node 10.25.77.13:9093 (id: -3 rack: null) using address /10.25.77.13
2019-05-29 20:10:25.994 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.common.metrics.Metrics : Added sensor with name node--3.bytes-sent
2019-05-29 20:10:25.996 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.common.metrics.Metrics : Added sensor with name node--3.bytes-received
2019-05-29 20:10:25.997 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.common.metrics.Metrics : Added sensor with name node--3.latency
2019-05-29 20:10:25.998 DEBUG 1381 --- [rk-thread | sym] o.apache.kafka.common.network.Selector : [Producer clientId=sym] Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -3
2019-05-29 20:10:26.107 DEBUG 1381 --- [rk-thread | sym] o.apache.kafka.common.network.Selector : [Producer clientId=sym] Connection with /10.25.75.151 disconnected
java.io.EOFException: null
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:119) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:381) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:342) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:609) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:541) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.common.network.Selector.poll(Selector.java:467) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:535) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:311) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:235) ~[kafka-clients-2.1.1.jar!/:na]
at java.base/java.lang.Thread.run(Thread.java:835) ~[na:na]
2019-05-29 20:10:26.108 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient : [Producer clientId=sym] Node -1 disconnected.
2019-05-29 20:10:26.110 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient : [Producer clientId=sym] Completed connection to node -3. Fetching API versions.
In the producer config, security.protocol should be set to SSL (the printout above still shows PLAINTEXT). You could also try setting ssl.endpoint.identification.algorithm = "" to disable hostname validation of the certificate, in case that's the issue. Other than that, it would be useful to see the Kafka broker config.
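For example, via Spring Boot's generic client-properties pass-through (a sketch; keys under spring.kafka.properties.* are forwarded to the Kafka clients as-is):
spring.kafka.properties.security.protocol=SSL
# only if hostname verification against the certificate turns out to be the issue:
spring.kafka.properties.ssl.endpoint.identification.algorithm=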

Cannot connect to kafka through SpringBoot (docker) application

I started Kafka locally and wrote a sample Spring Boot producer. When I run this application it works fine, but when I start the application in a Docker container, I get the logs below: "Connection to node 0 could not be established. Broker may not be available."
2019-03-20 06:06:56.023 INFO 1 --- [ XNIO-2 task-1] o.a.k.c.u.AppInfoParser : Kafka version : 1.0.1
2019-03-20 06:06:56.023 INFO 1 --- [ XNIO-2 task-1] o.a.k.c.u.AppInfoParser : Kafka commitId : c0518aa65f25317e
2019-03-20 06:06:56.224 WARN 1 --- [ad | producer-1] o.a.k.c.NetworkClient : [Producer clientId=producer-1] Connection to node 0 could not be established. Broker may not be available.
2019-03-20 06:06:56.263 WARN 1 --- [ad | producer-1] o.a.k.c.NetworkClient : [Producer clientId=producer-1] Connection to node 0 could not be established. Broker may not be available.
2019-03-20 06:06:56.355 WARN 1 --- [ad | producer-1] o.a.k.c.NetworkClient : [Producer clientId=producer-1] Connection to node 0 could not be established. Broker may not be available.
2019-03-20 06:06:56.594 WARN 1 --- [ad | producer-1] o.a.k.c.NetworkClient : [Producer clientId=producer-1] Connection to node 0 could not be established. Broker may not be available.
2019-03-20 06:06:56.919 WARN 1 --- [ad | producer-1] o.a.k.c.NetworkClient : [Producer clientId=producer-1] Connection to node 0 could not be established. Broker may not be available.
2019-03-20 06:06:57.877 WARN 1 --- [ad | producer-1] o.a.k.c.NetworkClient : [Producer clientId=producer-1] Connection to node 0 could not be established. Broker may not be available.
Please find the ProducerConfig values below, based on the log:
2019-03-20 06:06:55.953 INFO 1 --- [ XNIO-2 task-1] o.a.k.c.p.ProducerConfig : ProducerConfig values:
acks = 1
batch.size = 16384
bootstrap.servers = [192.168.0.64:9092]
buffer.memory = 33554432
client.id =
compression.type = none
connections.max.idle.ms = 540000
enable.idempotence = false
interceptor.classes = null
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 0
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.springframework.kafka.support.serializer.JsonSerializer
My producer config is as below:
@Bean
public Map<String, Object> producerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.0.64:9092");
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    return props;
}
Is there any additional configuration required when connecting through docker?
Probably you are connecting to the wrong port. Do a docker ps, e.g.:
2ca7f0cdddd confluentinc/cp-enterprise-kafka:5.1.2 "/etc/confluent/dock…" 2 weeks ago Up 50 seconds 0.0.0.0:9092->9092/tcp, 0.0.0.0:29092->29092/tcp broker
and use the latter broker port (29092 in the above example).
Also, from your laptop you can usually access the Docker network at localhost.
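Applied to the producerConfigs() bean in the question, that is a one-line change (the host and port depend on your own docker ps output):
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:29092");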

Spring boot spring.kafka.bootstrap-servers not getting picked up by consumer config

I'm trying to connect a Spring Boot project to Kafka.
In my application.properties file I have the following configs:
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
spring.kafka.consumer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.consumer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
spring.kafka.template.default-topic=my_default_topic
spring.kafka.admin.fail-fast=true
spring.kafka.consumer.auto-offset-reset=latest
spring.kafka.consumer.group-id=my_group_id
spring.kafka.listener.concurrency=10
spring.kafka.bootstrap-servers=kafka:9092
But in the logs I see that only some of these values are making it into the consumer configs. The configs from the logs are here:
2018-08-01 15:20:34.640 WARN 1 --- [ main] s.c.a.AnnotationConfigApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.context.ApplicationContextException: Failed to start bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry'; nested exception is org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
check.crcs = true
client.id =
connections.max.idle.ms = 540000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = null
heartbeat.interval.ms = 3000
interceptor.classes = null
internal.leave.group.on.close = true
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 305000
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.springframework.kafka.support.serializer.JsonDeserializer
Specifically, bootstrap.servers and group.id are missing. I then get an exception:
org.springframework.context.ApplicationContextException: Failed to start bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry'; nested exception is org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
because it can't find the bootstrap servers. I know I can manually pass the properties into a DefaultKafkaConsumerFactory bean but I was hoping Spring Boot could handle that automatically. Is there any way to do that?
Thank you!
EDIT:
I am trying to consume the messages using a @KafkaListener, like so:
@KafkaListener(topics = "${app-name.inputTopic}")
public void handleMessage(CustomMessage message) {
    //my code
}
I realized there were @Configuration beans elsewhere in the project that were manually setting the properties. I removed them and everything worked.
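For illustration, this is the kind of bean to look for (a hypothetical sketch; any manually defined ConsumerFactory like this shadows Boot's auto-configured one, and since it never sets bootstrap.servers, the spring.kafka.* properties are silently ignored):
// hypothetical example of an overriding configuration bean
@Configuration
public class ManualKafkaConfig {

    @Bean
    public ConsumerFactory<String, CustomMessage> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        // note: bootstrap.servers and group.id are never set here
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
        return new DefaultKafkaConsumerFactory<>(props);
    }
}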
Check out the KafkaProperties object. It contains the standard properties and acts like a factory.
@Bean
public KafkaTemplate<Integer, String> createTemplate(KafkaProperties properties) {
    Map<String, Object> props = properties.buildProducerProperties();
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    KafkaTemplate<Integer, String> template = new KafkaTemplate<>(new DefaultKafkaProducerFactory<>(props));
    return template;
}
Original answer: https://stackoverflow.com/a/60961582/5740547
