I have a Spring Boot application that worked fine with Kafka running in a container, but once I containerize the Spring Boot application itself it no longer works.
This is the docker-compose file with which I created the Zookeeper and Kafka containers:
version: "3.4"
services:
  zookeeper:
    image: bitnami/zookeeper
    restart: always
    container_name: "zookeeper"
    ports:
      - "2181:2181"
    volumes:
      - "zookeeper_data:/bitnami"
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: bitnami/kafka
    ports:
      - "9092:9092"
    restart: always
    container_name: "kafka"
    volumes:
      - "kafka_data:/bitnami"
    environment:
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_LISTENERS=PLAINTEXT://:9092
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
    depends_on:
      - zookeeper
volumes:
  zookeeper_data:
    driver: local
  kafka_data:
    driver: local
The application.yml of the Spring Boot application
server:
  port: 5001
spring:
  jpa:
    database-platform: org.hibernate.dialect.MySQL8Dialect
    show-sql: true
    hibernate:
      ddl-auto: update
  datasource:
    url: jdbc:mysql://mysql-container:3306/craproject?autoReconnect=true&useSSL=false&serverTimezone=UTC&createDatabaseIfNotExist=true
    username: ***
    password: ****
  data:
    mongodb:
      host: mongo-container
      port: 27017
      database: craprojet
  kafka:
    bootstrap-servers:
      - kafka:9092
    consumer:
      group-id: project-group
      enable-auto-commit: false
      auto-offset-reset: latest
      isolation-level: read_committed
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
The Dockerfile with which I created the image of the app:
FROM openjdk:11
COPY target/project-service-1.jar project-service-1.jar
EXPOSE 5001
ENTRYPOINT ["java", "-jar" , "project-service-1.jar"]
The Kafka and database containers are running fine.
This is the command I use to run the Spring Boot app container:
docker run --name project-service \
    --network techbankNet \
    -p 5001:5001 \
    --link mysql-container:mysql \
    --link mongo-container:mongo \
    --link adminer:adminer \
    --link kafka:kafka project-service
The log:
2022-08-23 19:05:05.170 INFO 1 --- [ main] o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [kafka:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = consumer-project-group-1
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = project-group
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_committed
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 127000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.springframework.kafka.support.serializer.JsonDeserializer
2022-08-23 19:05:15.315 WARN 1 --- [ main] org.apache.kafka.clients.ClientUtils : Couldn't resolve server kafka:9092 from bootstrap.servers as DNS resolution failed for kafka
2022-08-23 19:05:15.316 INFO 1 --- [ main] org.apache.kafka.common.metrics.Metrics : Metrics scheduler closed
2022-08-23 19:05:15.316 INFO 1 --- [ main] org.apache.kafka.common.metrics.Metrics : Closing reporter org.apache.kafka.common.metrics.JmxReporter
2022-08-23 19:05:15.317 INFO 1 --- [ main] org.apache.kafka.common.metrics.Metrics : Metrics reporters closed
2022-08-23 19:05:15.319 INFO 1 --- [ main] o.a.kafka.common.utils.AppInfoParser : App info kafka.consumer for consumer-project-group-1 unregistered
2022-08-23 19:05:15.319 WARN 1 --- [ main] ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.context.ApplicationContextException: Failed to start bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry'; nested exception is org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
2022-08-23 19:05:15.340 INFO 1 --- [ main] j.LocalContainerEntityManagerFactoryBean : Closing JPA EntityManagerFactory for persistence unit 'default'
2022-08-23 19:05:15.343 INFO 1 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown initiated...
2022-08-23 19:05:15.365 INFO 1 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown completed.
2022-08-23 19:05:15.368 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Stopping service [Tomcat]
2022-08-23 19:05:15.389 INFO 1 --- [ main] ConditionEvaluationReportLoggingListener :
Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2022-08-23 19:05:15.420 ERROR 1 --- [ main] o.s.boot.SpringApplication : Application run failed
org.springframework.context.ApplicationContextException: Failed to start bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry'; nested exception is org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:181) ~[spring-context-5.3.9.jar!/:5.3.9]
at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:54) ~[spring-context-5.3.9.jar!/:5.3.9]
at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:356) ~[spring-context-5.3.9.jar!/:5.3.9]
at java.base/java.lang.Iterable.forEach(Iterable.java:75) ~[na:na]
at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:155) ~[spring-context-5.3.9.jar!/:5.3.9]
at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:123) ~[spring-context-5.3.9.jar!/:5.3.9]
at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:935) ~[spring-context-5.3.9.jar!/:5.3.9]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:586) ~[spring-context-5.3.9.jar!/:5.3.9]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:145) ~[spring-boot-2.5.3.jar!/:2.5.3]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:754) ~[spring-boot-2.5.3.jar!/:2.5.3]
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:434) ~[spring-boot-2.5.3.jar!/:2.5.3]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:338) ~[spring-boot-2.5.3.jar!/:2.5.3]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1343) ~[spring-boot-2.5.3.jar!/:2.5.3]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1332) ~[spring-boot-2.5.3.jar!/:2.5.3]
at com.project.CQRS.ProjectServiceApplication.main(ProjectServiceApplication.java:16) ~[classes!/:1]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:na]
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:566) ~[na:na]
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49) ~[project-service-1.jar:1]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:108) ~[project-service-1.jar:1]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:58) ~[project-service-1.jar:1]
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:88) ~[project-service-1.jar:1]
Caused by: org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:819) ~[kafka-clients-2.7.1.jar!/:na]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createRawConsumer(DefaultKafkaConsumerFactory.java:366) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createKafkaConsumer(DefaultKafkaConsumerFactory.java:334) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createConsumerWithAdjustedProperties(DefaultKafkaConsumerFactory.java:310) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createKafkaConsumer(DefaultKafkaConsumerFactory.java:277) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createConsumer(DefaultKafkaConsumerFactory.java:254) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.<init>(KafkaMessageListenerContainer.java:715) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.listener.KafkaMessageListenerContainer.doStart(KafkaMessageListenerContainer.java:320) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.listener.AbstractMessageListenerContainer.start(AbstractMessageListenerContainer.java:397) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.listener.ConcurrentMessageListenerContainer.doStart(ConcurrentMessageListenerContainer.java:205) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.listener.AbstractMessageListenerContainer.start(AbstractMessageListenerContainer.java:397) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.config.KafkaListenerEndpointRegistry.startIfNecessary(KafkaListenerEndpointRegistry.java:327) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.config.KafkaListenerEndpointRegistry.start(KafkaListenerEndpointRegistry.java:272) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:178) ~[spring-context-5.3.9.jar!/:5.3.9]
... 22 common frames omitted
Caused by: org.apache.kafka.common.config.ConfigException: No resolvable bootstrap urls given in bootstrap.servers
at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:89) ~[kafka-clients-2.7.1.jar!/:na]
at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:48) ~[kafka-clients-2.7.1.jar!/:na]
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:728) ~[kafka-clients-2.7.1.jar!/:na]
You should put project-service (and adminer, mongo, and mysql) in the same Docker Compose file as Kafka instead of starting it with docker run.
Compose then creates a default bridge network on which the containers can reach each other by service name.
Alternatively, attach the techbankNet Docker network to the Kafka service in the Compose file.
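For example, a minimal sketch of the first option (the service and image names are taken from the question; the port mapping and dependencies are assumptions you may need to adjust) adds the app as another service in the same docker-compose.yml:

  project-service:
    image: project-service
    ports:
      - "5001:5001"
    depends_on:
      - kafka

With this in place, kafka:9092 from application.yml resolves on the shared Compose network and the --link flags are no longer needed.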
https://docs.docker.com/compose/networking/
Also see Connect to Kafka running in Docker
I'm trying to experiment a bit with Spring-Kafka within a Spring Boot application.
I have a very minimal configuration as shown in my application.properties
spring.kafka.consumer.bootstrap-servers=127.0.0.1:29092
spring.kafka.consumer.group-id=myGroup
And Kafka running in a container.
Now, the connection seems to work fine and I can dispatch messages; however, something weird happens.
As soon as I introduce the following listener (it's in Kotlin):
@KafkaListener(topics = arrayOf("kotlinTestTopic"))
fun listenAsObject(@Payload data : String) {
    println(data)
}
NONE of my web controllers work anymore and I get a "connection refused", as if the Spring Boot startup did not complete.
And as soon as I comment it out, the controllers are back.
Any hint on what I'm doing wrong?
EDIT
Here are the startup logs:
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.2.2.RELEASE)
2019-12-11 14:24:50.552 INFO 10659 --- [ main] t.g.com.testboot.TestbootApplicationKt : Starting TestbootApplicationKt on theirish-ThinkPad-L390 with PID 10659 (/home/theirish/Documenti/programming/testboot/build/classes/kotlin/main started by theirish in /home/theirish/Documenti/programming/testboot)
2019-12-11 14:24:50.554 INFO 10659 --- [ main] t.g.com.testboot.TestbootApplicationKt : No active profile set, falling back to default profiles: default
2019-12-11 14:24:50.651 WARN 10659 --- [kground-preinit] o.s.h.c.j.Jackson2ObjectMapperBuilder : For Jackson Kotlin classes support please add "com.fasterxml.jackson.module:jackson-module-kotlin" to the classpath
2019-12-11 14:24:51.031 INFO 10659 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data JPA repositories in DEFAULT mode.
2019-12-11 14:24:51.112 INFO 10659 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 40ms. Found 4 JPA repository interfaces.
2019-12-11 14:24:51.476 INFO 10659 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration' of type [org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2019-12-11 14:24:51.680 INFO 10659 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
2019-12-11 14:24:51.689 INFO 10659 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2019-12-11 14:24:51.689 INFO 10659 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.29]
2019-12-11 14:24:51.767 INFO 10659 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2019-12-11 14:24:51.767 INFO 10659 --- [ main] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 1172 ms
2019-12-11 14:24:51.894 INFO 10659 --- [ main] o.hibernate.jpa.internal.util.LogHelper : HHH000204: Processing PersistenceUnitInfo [name: default]
2019-12-11 14:24:51.936 INFO 10659 --- [ main] org.hibernate.Version : HHH000412: Hibernate Core {5.4.9.Final}
2019-12-11 14:24:52.065 INFO 10659 --- [ main] o.hibernate.annotations.common.Version : HCANN000001: Hibernate Commons Annotations {5.1.0.Final}
2019-12-11 14:24:52.129 INFO 10659 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
2019-12-11 14:24:52.235 INFO 10659 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Start completed.
2019-12-11 14:24:52.245 INFO 10659 --- [ main] org.hibernate.dialect.Dialect : HHH000400: Using dialect: org.hibernate.dialect.H2Dialect
2019-12-11 14:24:52.778 INFO 10659 --- [ main] o.h.e.t.j.p.i.JtaPlatformInitiator : HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform]
2019-12-11 14:24:52.783 INFO 10659 --- [ main] j.LocalContainerEntityManagerFactoryBean : Initialized JPA EntityManagerFactory for persistence unit 'default'
2019-12-11 14:24:53.366 WARN 10659 --- [ main] JpaBaseConfiguration$JpaWebConfiguration : spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
2019-12-11 14:24:53.483 INFO 10659 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
2019-12-11 14:24:53.739 INFO 10659 --- [ main] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
bootstrap.servers = [127.0.0.1:29092]
client.dns.lookup = default
client.id =
connections.max.idle.ms = 300000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 120000
retries = 5
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
2019-12-11 14:24:53.803 INFO 10659 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version: 2.3.1
2019-12-11 14:24:53.803 INFO 10659 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: 18a913733fb71c01
2019-12-11 14:24:53.803 INFO 10659 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1576070693802
I feel incredibly dumb now.
Apparently, during my experiments, I introduced the wrong Kafka port in the Spring-Kafka configuration. As a side effect, the whole Spring Boot application hung, which is quite unexpected...
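For anyone hitting the same symptom: with a typical Docker setup the broker advertises one listener for other containers and a second one mapped to the host, and an application running on the host must use the host-facing port. A rough sketch, assuming a common docker-compose Kafka configuration rather than the exact one used here:

# broker side (docker-compose, assumed): advertises PLAINTEXT://kafka:9092 for containers
# and PLAINTEXT_HOST://localhost:29092 for the host machine
# application.properties on the host must then point at the host-facing listener:
spring.kafka.consumer.bootstrap-servers=127.0.0.1:29092
spring.kafka.consumer.group-id=myGroup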
I have a Spring Boot app with a very simple Kafka producer. Everything works great if I connect to a Kafka cluster without encryption, but it times out if I try to connect to a cluster with SSL. Is there some other configuration I need in the producer, or some other property I need to define, to allow Spring to use all of these settings correctly?
I have the following properties set:
spring.kafka.producer.bootstrap-servers=broker1.kafka.poc.com:9093,broker3.kafka.poc.com:9093,broker4.kafka.poc.com:9093,broker5.kafka.poc.com:9093
spring.kafka.ssl.key-store-type=jks
spring.kafka.ssl.trust-store-location=file:/home/ec2-user/truststore.jks
spring.kafka.ssl.trust-store-password=test1234
spring.kafka.ssl.key-store-location=file:/home/ec2-user/keystore.jks
spring.kafka.ssl.key-store-password=test1234
logging.level.org.apache.kafka=debug
server.ssl.key-password=test1234
spring.kafka.ssl.key-password=test1234
spring.kafka.producer.client-id=sym
spring.kafka.admin.ssl.protocol=ssl
With the following result printing as the ProducerConfig when the app starts up:
o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
acks = 1
batch.size = 16384
bootstrap.servers = [broker1.kafka.allypoc.com:9093, broker3.kafka.allypoc.com:9093, broker4.kafka.allypoc.com:9093, broker5.kafka.allypoc.com:9093]
buffer.memory = 33554432
client.dns.lookup = default
client.id = sym
compression.type = none
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = false
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2147483647
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = [hidden]
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = /home/ec2-user/keystore.jks
ssl.keystore.password = [hidden]
ssl.keystore.type = jks
ssl.protocol = ssl
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = /home/ec2-user/truststore.jks
ssl.truststore.password = [hidden]
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
My producer is extremely simple:
@Service
public class Producer {
private final KafkaTemplate<String, String> kafkaTemplate;
public Producer(KafkaTemplate<String, String> kafkaTemplate) {
this.kafkaTemplate = kafkaTemplate;
}
void sendMessage(String topic, String message) {
this.kafkaTemplate.send(topic, message);
}
void sendMessage(String topic, String key, String message) {
this.kafkaTemplate.send(topic, key, message);
}
}
Connecting to Kafka with SSL results in a TimeoutException saying "Topic symbols not present in metadata after 60000 ms".
If I turn on debug logs, I get the following repeatedly, looping over all of my brokers.
2019-05-29 20:10:25.768 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient : [Producer clientId=sym] Completed connection to node -4. Fetching API versions.
2019-05-29 20:10:25.768 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient : [Producer clientId=sym] Initiating API versions fetch from node -4.
2019-05-29 20:10:25.768 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient : [Producer clientId=sym] Initialize connection to node 10.25.77.13:9093 (id: -3 rack: null) for sending metadata request
2019-05-29 20:10:25.768 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient : [Producer clientId=sym] Initiating connection to node 10.25.77.13:9093 (id: -3 rack: null) using address /10.25.77.13
2019-05-29 20:10:25.994 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.common.metrics.Metrics : Added sensor with name node--3.bytes-sent
2019-05-29 20:10:25.996 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.common.metrics.Metrics : Added sensor with name node--3.bytes-received
2019-05-29 20:10:25.997 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.common.metrics.Metrics : Added sensor with name node--3.latency
2019-05-29 20:10:25.998 DEBUG 1381 --- [rk-thread | sym] o.apache.kafka.common.network.Selector : [Producer clientId=sym] Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -3
2019-05-29 20:10:26.107 DEBUG 1381 --- [rk-thread | sym] o.apache.kafka.common.network.Selector : [Producer clientId=sym] Connection with /10.25.75.151 disconnected
java.io.EOFException: null
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:119) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:381) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:342) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:609) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:541) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.common.network.Selector.poll(Selector.java:467) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:535) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:311) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:235) ~[kafka-clients-2.1.1.jar!/:na]
at java.base/java.lang.Thread.run(Thread.java:835) ~[na:na]
2019-05-29 20:10:26.108 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient : [Producer clientId=sym] Node -1 disconnected.
2019-05-29 20:10:26.110 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient : [Producer clientId=sym] Completed connection to node -3. Fetching API versions.
In the producer config, security.protocol should be set to SSL. You could also try setting ssl.endpoint.identification.algorithm to an empty string to disable hostname validation of the certificate, in case that's the issue. Other than that, it would be useful to see the Kafka broker config.
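In Spring Boot property form that could look roughly like the following (a sketch relying on the spring.kafka.properties.* pass-through; the empty endpoint identification algorithm is only for ruling out hostname verification while debugging, not for production):

# switch the Kafka clients from PLAINTEXT to SSL
spring.kafka.properties.security.protocol=SSL
# optional: disable certificate hostname verification while testing
spring.kafka.properties.ssl.endpoint.identification.algorithm=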
I started Kafka locally and wrote a sample Spring Boot producer. When I run this application directly it works fine, but when I start it in a Docker container I get the logs below: "Connection to node 0 could not be established. Broker may not be available."
2019-03-20 06:06:56.023 INFO 1 --- [ XNIO-2 task-1] o.a.k.c.u.AppInfoParser : Kafka version : 1.0.1
2019-03-20 06:06:56.023 INFO 1 --- [ XNIO-2 task-1] o.a.k.c.u.AppInfoParser : Kafka commitId : c0518aa65f25317e
2019-03-20 06:06:56.224 WARN 1 --- [ad | producer-1] o.a.k.c.NetworkClient : [Producer clientId=producer-1] Connection to node 0 could not be established. Broker may not be available.
2019-03-20 06:06:56.263 WARN 1 --- [ad | producer-1] o.a.k.c.NetworkClient : [Producer clientId=producer-1] Connection to node 0 could not be established. Broker may not be available.
2019-03-20 06:06:56.355 WARN 1 --- [ad | producer-1] o.a.k.c.NetworkClient : [Producer clientId=producer-1] Connection to node 0 could not be established. Broker may not be available.
2019-03-20 06:06:56.594 WARN 1 --- [ad | producer-1] o.a.k.c.NetworkClient : [Producer clientId=producer-1] Connection to node 0 could not be established. Broker may not be available.
2019-03-20 06:06:56.919 WARN 1 --- [ad | producer-1] o.a.k.c.NetworkClient : [Producer clientId=producer-1] Connection to node 0 could not be established. Broker may not be available.
2019-03-20 06:06:57.877 WARN 1 --- [ad | producer-1] o.a.k.c.NetworkClient : [Producer clientId=producer-1] Connection to node 0 could not be established. Broker may not be available.
Please find the ProducerConfig values below, based on the log:
2019-03-20 06:06:55.953 INFO 1 --- [ XNIO-2 task-1] o.a.k.c.p.ProducerConfig : ProducerConfig values:
acks = 1
batch.size = 16384
bootstrap.servers = [192.168.0.64:9092]
buffer.memory = 33554432
client.id =
compression.type = none
connections.max.idle.ms = 540000
enable.idempotence = false
interceptor.classes = null
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 0
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.springframework.kafka.support.serializer.JsonSerializer
My ProducerConfig is as below:
@Bean
public Map<String, Object> producerConfigs() {
Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.0.64:9092");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
return props;
}
Is there any additional configuration required when connecting through Docker?
Probably you are connecting to the wrong port. Do a docker ps, e.g.:
2ca7f0cdddd confluentinc/cp-enterprise-kafka:5.1.2 "/etc/confluent/dock…" 2 weeks ago Up 50 seconds 0.0.0.0:9092->9092/tcp, 0.0.0.0:29092->29092/tcp broker
and use the latter broker port, 29092 in the example above.
Also, from your laptop you can usually reach the Docker network at localhost.
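Applied to the producerConfigs() bean from the question, the change would look roughly like this (a sketch; 29092 is the host-mapped port from the example docker ps output above, and the right value depends on how your broker's listeners are configured):

@Bean
public Map<String, Object> producerConfigs() {
    Map<String, Object> props = new HashMap<>();
    // when running on the host, use the host-mapped listener port shown by docker ps;
    // from inside another container, use the broker's container name and internal port instead
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:29092");
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    return props;
}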
I am working on a Spring Boot 2.1.1 application which needs to communicate with Kafka.
As it stands, the app starts up, connects to Kafka, reads/writes a few messages, and then exits.
The goal is to keep the app running, listening on some Kafka topics, without ever exiting.
Here is the main app file:
package com.example.springbootstarter;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.annotation.TopicPartition;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.kafka.support.SendResult;
import org.springframework.messaging.handler.annotation.Header;
import org.springframework.messaging.handler.annotation.Payload;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;
@SpringBootApplication
public class SpringBootStarterApplication {
public static void main(String[] args) throws Exception {
ConfigurableApplicationContext context = SpringApplication.run(SpringBootStarterApplication.class, args);
MessageProducer producer = context.getBean(MessageProducer.class);
MessageListener listener = context.getBean(MessageListener.class);
/*
* Sending a Hello World message to topic 'baeldung'.
* Must be received by both listeners with group foo
* and bar with containerFactory fooKafkaListenerContainerFactory
* and barKafkaListenerContainerFactory respectively.
* It will also be received by the listener with
* headersKafkaListenerContainerFactory as container factory
*/
producer.sendMessage("Hello, World!");
listener.latch.await(10, TimeUnit.SECONDS);
/*
* Sending message to a topic with 5 partition,
* each message to a different partition. But as per
* listener configuration, only the messages from
* partition 0 and 3 will be consumed.
*/
for (int i = 0; i < 5; i++) {
producer.sendMessageToPartion("Hello To Partioned Topic!", i);
}
listener.partitionLatch.await(10, TimeUnit.SECONDS);
/*
* Sending message to 'filtered' topic. As per listener
* configuration, all messages with char sequence
* 'World' will be discarded.
*/
producer.sendMessageToFiltered("Hello Baeldung!");
producer.sendMessageToFiltered("Hello World!");
listener.filterLatch.await(10, TimeUnit.SECONDS);
/*
* Sending message to 'greeting' topic. This will send
* and received a Java object with the help of
* greetingKafkaListenerContainerFactory.
*/
producer.sendGreetingMessage(new Greeting("Greetings", "World!"));
listener.greetingLatch.await(10, TimeUnit.SECONDS);
context.close();
}
@Bean
public MessageProducer messageProducer() {
return new MessageProducer();
}
@Bean
public MessageListener messageListener() {
return new MessageListener();
}
public static class MessageProducer {
@Autowired
private KafkaTemplate<String, String> kafkaTemplate;
@Autowired
private KafkaTemplate<String, Greeting> greetingKafkaTemplate;
@Value(value = "${message.topic.name}")
private String topicName;
@Value(value = "${partitioned.topic.name}")
private String partionedTopicName;
@Value(value = "${filtered.topic.name}")
private String filteredTopicName;
@Value(value = "${greeting.topic.name}")
private String greetingTopicName;
public void sendMessage(String message) {
ListenableFuture<SendResult<String, String>> future = kafkaTemplate.send(topicName, message);
future.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
@Override
public void onSuccess(SendResult<String, String> result) {
System.out.println("Sent message=[" + message + "] with offset=[" + result.getRecordMetadata().offset() + "]");
}
@Override
public void onFailure(Throwable ex) {
System.out.println("Unable to send message=[" + message + "] due to : " + ex.getMessage());
}
});
}
public void sendMessageToPartion(String message, int partition) {
kafkaTemplate.send(partionedTopicName, partition, null, message);
}
public void sendMessageToFiltered(String message) {
kafkaTemplate.send(filteredTopicName, message);
}
public void sendGreetingMessage(Greeting greeting) {
greetingKafkaTemplate.send(greetingTopicName, greeting);
}
}
public static class MessageListener {
private CountDownLatch latch = new CountDownLatch(3);
private CountDownLatch partitionLatch = new CountDownLatch(2);
private CountDownLatch filterLatch = new CountDownLatch(2);
private CountDownLatch greetingLatch = new CountDownLatch(1);
@KafkaListener(topics = "${message.topic.name}", groupId = "foo", containerFactory = "fooKafkaListenerContainerFactory")
public void listenGroupFoo(String message) {
System.out.println("Received Messasge in group 'foo': " + message);
latch.countDown();
}
@KafkaListener(topics = "${message.topic.name}", groupId = "bar", containerFactory = "barKafkaListenerContainerFactory")
public void listenGroupBar(String message) {
System.out.println("Received Messasge in group 'bar': " + message);
latch.countDown();
}
@KafkaListener(topics = "${message.topic.name}", containerFactory = "headersKafkaListenerContainerFactory")
public void listenWithHeaders(@Payload String message, @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition) {
System.out.println("Received Messasge: " + message + " from partition: " + partition);
latch.countDown();
}
@KafkaListener(topicPartitions = @TopicPartition(topic = "${partitioned.topic.name}", partitions = { "0", "3" }))
public void listenToParition(@Payload String message, @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition) {
System.out.println("Received Message: " + message + " from partition: " + partition);
this.partitionLatch.countDown();
}
@KafkaListener(topics = "${filtered.topic.name}", containerFactory = "filterKafkaListenerContainerFactory")
public void listenWithFilter(String message) {
System.out.println("Recieved Message in filtered listener: " + message);
this.filterLatch.countDown();
}
@KafkaListener(topics = "${greeting.topic.name}", containerFactory = "greetingKafkaListenerContainerFactory")
public void greetingListener(Greeting greeting) {
System.out.println("Recieved greeting message: " + greeting);
this.greetingLatch.countDown();
}
}
}
And here is the abridged log output of an app run:
2019-01-11 13:51:51.728 INFO 40885 --- [ main] c.e.s.SpringBootStarterApplication : Starting SpringBootStarterApplication on CHIMAC11592-2.local with PID 40885 (/Users/e602684/Documents/dev/spring-boot-mongo-crud/target/classes started by e602684 in /Users/e602684/Documents/dev/spring-boot-mongo-crud)
2019-01-11 13:51:51.731 INFO 40885 --- [ main] c.e.s.SpringBootStarterApplication : No active profile set, falling back to default profiles: default
2019-01-11 13:51:52.181 INFO 40885 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data repositories in DEFAULT mode.
2019-01-11 13:51:52.223 INFO 40885 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 39ms. Found 1 repository interfaces.
2019-01-11 13:51:52.389 INFO 40885 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.kafka.annotation.KafkaBootstrapConfiguration' of type [org.springframework.kafka.annotation.KafkaBootstrapConfiguration$$EnhancerBySpringCGLIB$$6ef5e3a3] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2019-01-11 13:51:52.666 INFO 40885 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
2019-01-11 13:51:52.684 INFO 40885 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2019-01-11 13:51:52.684 INFO 40885 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet Engine: Apache Tomcat/9.0.13
2019-01-11 13:51:52.689 INFO 40885 --- [ main] o.a.catalina.core.AprLifecycleListener : The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: [/Users/e602684/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:.]
2019-01-11 13:51:52.773 INFO 40885 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2019-01-11 13:51:52.774 INFO 40885 --- [ main] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 1012 ms
2019-01-11 13:51:52.993 INFO 40885 --- [ main] org.mongodb.driver.cluster : Cluster created with settings {hosts=[localhost:27017], mode=MULTIPLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500}
2019-01-11 13:51:52.993 INFO 40885 --- [ main] org.mongodb.driver.cluster : Adding discovered server localhost:27017 to client view of cluster
2019-01-11 13:51:53.031 INFO 40885 --- [localhost:27017] org.mongodb.driver.connection : Opened connection [connectionId{localValue:1, serverValue:12}] to localhost:27017
2019-01-11 13:51:53.034 INFO 40885 --- [localhost:27017] org.mongodb.driver.cluster : Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[4, 0, 4]}, minWireVersion=0, maxWireVersion=7, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=1632137}
2019-01-11 13:51:53.035 INFO 40885 --- [localhost:27017] org.mongodb.driver.cluster : Discovered cluster type of STANDALONE
2019-01-11 13:51:53.412 INFO 40885 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
2019-01-11 13:51:53.574 INFO 40885 --- [ main] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
bootstrap.servers = [127.0.0.1:9092]
client.id =
connections.max.idle.ms = 300000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 120000
retries = 5
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
2019-01-11 13:51:53.968 INFO 40885 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version : 2.0.1
2019-01-11 13:51:53.968 INFO 40885 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId : fa14705e51bd2ce5
2019-01-11 13:51:53.973 INFO 40885 --- [ad | producer-1] org.apache.kafka.clients.Metadata : Cluster ID: QW5A9DYxTlSZVC2M8pxAsg
Sent message=[Hello, World!] with offset=[9]
2019-01-11 13:51:56.858 INFO 40885 --- [ntainer#2-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-2, groupId=headers] Successfully joined group with generation 9
2019-01-11 13:51:56.859 INFO 40885 --- [ntainer#2-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-2, groupId=headers] Setting newly assigned partitions [test-0]
2019-01-11 13:51:56.862 INFO 40885 --- [ntainer#2-0-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [test-0]
Received Messasge: Hello, World! from partition: 0
2019-01-11 13:51:56.895 INFO 40885 --- [ntainer#4-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-6, groupId=filter] Successfully joined group with generation 9
2019-01-11 13:51:56.896 INFO 40885 --- [ntainer#4-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-6, groupId=filter] Setting newly assigned partitions [test-0]
2019-01-11 13:51:56.898 INFO 40885 --- [ntainer#4-0-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [test-0]
2019-01-11 13:51:56.915 INFO 40885 --- [ntainer#5-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-8, groupId=greeting] Successfully joined group with generation 9
2019-01-11 13:51:56.916 INFO 40885 --- [ntainer#5-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-8, groupId=greeting] Setting newly assigned partitions [greetings-0]
2019-01-11 13:51:56.919 INFO 40885 --- [ntainer#5-0-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [greetings-0]
2019-01-11 13:51:56.930 INFO 40885 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-10, groupId=foo] Successfully joined group with generation 9
2019-01-11 13:51:56.931 INFO 40885 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-10, groupId=foo] Setting newly assigned partitions [test-0]
2019-01-11 13:51:56.933 INFO 40885 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [test-0]
Recieved greeting message: Greetings, World!!
Received Messasge in group 'foo': Hello, World!
2019-01-11 13:51:56.944 INFO 40885 --- [ntainer#1-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-12, groupId=bar] Successfully joined group with generation 9
2019-01-11 13:51:56.944 INFO 40885 --- [ntainer#1-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-12, groupId=bar] Setting newly assigned partitions [test-0]
2019-01-11 13:51:56.946 INFO 40885 --- [ntainer#1-0-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [test-0]
Received Messasge in group 'bar': Hello, World!
Received Message: Hello To Partioned Topic! from partition: 0
Received Message: Hello To Partioned Topic! from partition: 3
Received Messasge: Hello Baeldung! from partition: 0
Received Messasge: Hello World! from partition: 0
Received Messasge in group 'bar': Hello Baeldung!
Received Messasge in group 'bar': Hello World!
Received Messasge in group 'foo': Hello Baeldung!
Received Messasge in group 'foo': Hello World!
Recieved Message in filtered listener: Hello Baeldung!
2019-01-11 13:52:06.968 INFO 40885 --- [ main] o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
acks = 1
batch.size = 16384
bootstrap.servers = [127.0.0.1:9092]
buffer.memory = 33554432
client.id =
compression.type = none
connections.max.idle.ms = 540000
enable.idempotence = false
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 0
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.springframework.kafka.support.serializer.JsonSerializer
2019-01-11 13:52:06.971 INFO 40885 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version : 2.0.1
2019-01-11 13:52:06.971 INFO 40885 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId : fa14705e51bd2ce5
2019-01-11 13:52:06.984 INFO 40885 --- [ad | producer-2] org.apache.kafka.clients.Metadata : Cluster ID: QW5A9DYxTlSZVC2M8pxAsg
2019-01-11 13:52:07.002 INFO 40885 --- [ntainer#3-0-C-1] o.s.s.c.ThreadPoolTaskScheduler : Shutting down ExecutorService
2019-01-11 13:52:07.003 INFO 40885 --- [ntainer#0-0-C-1] o.s.s.c.ThreadPoolTaskScheduler : Shutting down ExecutorService
2019-01-11 13:52:07.003 INFO 40885 --- [ntainer#2-0-C-1] o.s.s.c.ThreadPoolTaskScheduler : Shutting down ExecutorService
2019-01-11 13:52:07.003 INFO 40885 --- [ntainer#1-0-C-1] o.s.s.c.ThreadPoolTaskScheduler : Shutting down ExecutorService
2019-01-11 13:52:07.003 INFO 40885 --- [ntainer#4-0-C-1] o.s.s.c.ThreadPoolTaskScheduler : Shutting down ExecutorService
2019-01-11 13:52:07.005 INFO 40885 --- [ntainer#5-0-C-1] o.s.s.c.ThreadPoolTaskScheduler : Shutting down ExecutorService
2019-01-11 13:52:07.007 INFO 40885 --- [ntainer#3-0-C-1] essageListenerContainer$ListenerConsumer : Consumer stopped
2019-01-11 13:52:07.014 INFO 40885 --- [ntainer#0-0-C-1] essageListenerContainer$ListenerConsumer : Consumer stopped
2019-01-11 13:52:07.014 INFO 40885 --- [ntainer#2-0-C-1] essageListenerContainer$ListenerConsumer : Consumer stopped
2019-01-11 13:52:07.021 INFO 40885 --- [ntainer#1-0-C-1] essageListenerContainer$ListenerConsumer : Consumer stopped
2019-01-11 13:52:07.021 INFO 40885 --- [ntainer#4-0-C-1] essageListenerContainer$ListenerConsumer : Consumer stopped
2019-01-11 13:52:07.021 INFO 40885 --- [ntainer#5-0-C-1] essageListenerContainer$ListenerConsumer : Consumer stopped
2019-01-11 13:52:07.022 INFO 40885 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Shutting down ExecutorService 'applicationTaskExecutor'
2019-01-11 13:52:07.022 INFO 40885 --- [ main] o.a.k.clients.producer.KafkaProducer : [Producer clientId=producer-2] Closing the Kafka producer with timeoutMillis = 30000 ms.
2019-01-11 13:52:07.024 INFO 40885 --- [ main] o.a.k.clients.producer.KafkaProducer : [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 30000 ms.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.catalina.loader.WebappClassLoaderBase (file:/Users/e602684/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.13/tomcat-embed-core-9.0.13.jar) to field java.io.ObjectStreamClass$Caches.localDescs
WARNING: Please consider reporting this to the maintainers of org.apache.catalina.loader.WebappClassLoaderBase
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Disconnected from the target VM, address: '127.0.0.1:57175', transport: 'socket'
Process finished with exit code 0
What should I change in the main app file, to keep this app from exiting?
Because I have the context assignment as a part of my run statement:
ConfigurableApplicationContext context =
SpringApplication.run(SpringBootStarterApplication.class, args);
All I had to do to keep the app from shutting down was comment out context.close():
//context.close();
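With that change the end of the main method looks roughly like this (a sketch of the same code with only the close call removed; the @KafkaListener containers and the embedded Tomcat server then keep the JVM alive):

public static void main(String[] args) throws Exception {
    ConfigurableApplicationContext context = SpringApplication.run(SpringBootStarterApplication.class, args);
    // ... send the demo messages exactly as before ...
    // no context.close(): leaving the context open keeps the listeners and web server running
}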