I have a Spring Boot application that worked fine with Kafka running in a container, but when I containerize the Spring Boot application itself, it won't work.
This is the docker-compose file with which I created the Kafka and ZooKeeper containers:
version: "3.4"

services:
  zookeeper:
    image: bitnami/zookeeper
    restart: always
    container_name: "zookeeper"
    ports:
      - "2181:2181"
    volumes:
      - "zookeeper_data:/bitnami"
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes

  kafka:
    image: bitnami/kafka
    ports:
      - "9092:9092"
    restart: always
    container_name: "kafka"
    volumes:
      - "kafka_data:/bitnami"
    environment:
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_LISTENERS=PLAINTEXT://:9092
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
    depends_on:
      - zookeeper

volumes:
  zookeeper_data:
    driver: local
  kafka_data:
    driver: local
The application.yml of the Spring Boot application:
server:
  port: 5001

spring:
  jpa:
    database-platform: org.hibernate.dialect.MySQL8Dialect
    show-sql: true
    hibernate:
      ddl-auto: update
  datasource:
    url: jdbc:mysql://mysql-container:3306/craproject?autoReconnect=true&useSSL=false&serverTimezone=UTC&createDatabaseIfNotExist=true
    username: ***
    password: ****
  data:
    mongodb:
      host: mongo-container
      port: 27017
      database: craprojet
  kafka:
    bootstrap-servers:
      - kafka:9092
    consumer:
      group-id: project-group
      enable-auto-commit: false
      auto-offset-reset: latest
      isolation-level: read_committed
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
The Dockerfile with which I created the image of the app:
FROM openjdk:11
COPY target/project-service-1.jar project-service-1.jar
EXPOSE 5001
ENTRYPOINT ["java", "-jar" , "project-service-1.jar"]
The Kafka and database containers are running fine.
This is the command which I use to run the Spring Boot app container:
docker run --name project-service \
    --network techbankNet \
    -p 5001:5001 \
    --link mysql-container:mysql \
    --link mongo-container:mongo \
    --link adminer:adminer \
    --link kafka:kafka \
    project-service
The log:
2022-08-23 19:05:05.170 INFO 1 --- [ main] o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [kafka:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = consumer-project-group-1
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = project-group
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_committed
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 127000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.springframework.kafka.support.serializer.JsonDeserializer
2022-08-23 19:05:15.315 WARN 1 --- [ main] org.apache.kafka.clients.ClientUtils : Couldn't resolve server kafka:9092 from bootstrap.servers as DNS resolution failed for kafka
2022-08-23 19:05:15.316 INFO 1 --- [ main] org.apache.kafka.common.metrics.Metrics : Metrics scheduler closed
2022-08-23 19:05:15.316 INFO 1 --- [ main] org.apache.kafka.common.metrics.Metrics : Closing reporter org.apache.kafka.common.metrics.JmxReporter
2022-08-23 19:05:15.317 INFO 1 --- [ main] org.apache.kafka.common.metrics.Metrics : Metrics reporters closed
2022-08-23 19:05:15.319 INFO 1 --- [ main] o.a.kafka.common.utils.AppInfoParser : App info kafka.consumer for consumer-project-group-1 unregistered
2022-08-23 19:05:15.319  WARN 1 --- [           main] ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.context.ApplicationContextException: Failed to start bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry'; nested exception is org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
2022-08-23 19:05:15.340 INFO 1 --- [ main] j.LocalContainerEntityManagerFactoryBean : Closing JPA EntityManagerFactory for persistence unit 'default'
2022-08-23 19:05:15.343 INFO 1 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown initiated...
2022-08-23 19:05:15.365 INFO 1 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown completed.
2022-08-23 19:05:15.368 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Stopping service [Tomcat]
2022-08-23 19:05:15.389 INFO 1 --- [ main] ConditionEvaluationReportLoggingListener :
Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2022-08-23 19:05:15.420 ERROR 1 --- [ main] o.s.boot.SpringApplication : Application run failed
org.springframework.context.ApplicationContextException: Failed to start bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry'; nested exception is org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:181) ~[spring-context-5.3.9.jar!/:5.3.9]
at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:54) ~[spring-context-5.3.9.jar!/:5.3.9]
at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:356) ~[spring-context-5.3.9.jar!/:5.3.9]
at java.base/java.lang.Iterable.forEach(Iterable.java:75) ~[na:na]
at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:155) ~[spring-context-5.3.9.jar!/:5.3.9]
at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:123) ~[spring-context-5.3.9.jar!/:5.3.9]
at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:935) ~[spring-context-5.3.9.jar!/:5.3.9]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:586) ~[spring-context-5.3.9.jar!/:5.3.9]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:145) ~[spring-boot-2.5.3.jar!/:2.5.3]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:754) ~[spring-boot-2.5.3.jar!/:2.5.3]
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:434) ~[spring-boot-2.5.3.jar!/:2.5.3]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:338) ~[spring-boot-2.5.3.jar!/:2.5.3]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1343) ~[spring-boot-2.5.3.jar!/:2.5.3]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1332) ~[spring-boot-2.5.3.jar!/:2.5.3]
at com.project.CQRS.ProjectServiceApplication.main(ProjectServiceApplication.java:16) ~[classes!/:1]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:na]
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:566) ~[na:na]
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49) ~[project-service-1.jar:1]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:108) ~[project-service-1.jar:1]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:58) ~[project-service-1.jar:1]
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:88) ~[project-service-1.jar:1]
Caused by: org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:819) ~[kafka-clients-2.7.1.jar!/:na]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createRawConsumer(DefaultKafkaConsumerFactory.java:366) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createKafkaConsumer(DefaultKafkaConsumerFactory.java:334) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createConsumerWithAdjustedProperties(DefaultKafkaConsumerFactory.java:310) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createKafkaConsumer(DefaultKafkaConsumerFactory.java:277) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createConsumer(DefaultKafkaConsumerFactory.java:254) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.<init>(KafkaMessageListenerContainer.java:715) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.listener.KafkaMessageListenerContainer.doStart(KafkaMessageListenerContainer.java:320) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.listener.AbstractMessageListenerContainer.start(AbstractMessageListenerContainer.java:397) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.listener.ConcurrentMessageListenerContainer.doStart(ConcurrentMessageListenerContainer.java:205) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.listener.AbstractMessageListenerContainer.start(AbstractMessageListenerContainer.java:397) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.config.KafkaListenerEndpointRegistry.startIfNecessary(KafkaListenerEndpointRegistry.java:327) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.config.KafkaListenerEndpointRegistry.start(KafkaListenerEndpointRegistry.java:272) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:178) ~[spring-context-5.3.9.jar!/:5.3.9]
... 22 common frames omitted
Caused by: org.apache.kafka.common.config.ConfigException: No resolvable bootstrap urls given in bootstrap.servers
at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:89) ~[kafka-clients-2.7.1.jar!/:na]
at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:48) ~[kafka-clients-2.7.1.jar!/:na]
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:728) ~[kafka-clients-2.7.1.jar!/:na]
You should put project-service (and adminer, mongo, and mysql) in the same Docker Compose file as Kafka rather than starting it with docker run.
Compose will create a default bridge network on which the containers can resolve each other by service name.
Alternatively, attach the techbankNet Docker network to the Kafka service in the Compose file, as in the sketch below.
https://docs.docker.com/compose/networking/
Also see Connect to Kafka running in Docker
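For instance, a minimal sketch (an assumption, not tested against your setup) that attaches a pre-existing techbankNet network to the Kafka service while keeping Kafka on the Compose default network so it can still reach ZooKeeper:

services:
  kafka:
    # ...image, ports, environment as above...
    networks:
      - default      # keeps connectivity to the zookeeper service
      - techbankNet  # makes "kafka" resolvable from containers run with --network techbankNet

networks:
  techbankNet:
    external: true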
I am trying to create an Apache Kafka connection to Azure Event Hubs with Reactor Kafka in a Spring Boot application. At first I followed the official Azure tutorial to set up Azure Event Hubs and the Spring backend: https://learn.microsoft.com/en-us/azure/developer/java/spring-framework/configure-spring-cloud-stream-binder-java-app-kafka-azure-event-hub
Everything worked fine and I created some more advanced services.
However, when trying to get Reactor Kafka working with Azure Event Hubs, it doesn't work. When the consumer is triggered, it cannot consume any messages, and the following is logged:
com.test.Application : Started Application in 10.442 seconds (JVM running for 10.771)
o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=test-client-0, groupId=$Default] Discovered group coordinator mynamespacename.servicebus.windows.net:9093 (id: 2147483647 rack: null)
o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=test-client-0, groupId=$Default] (Re-)joining group
o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=test-client-0, groupId=$Default] Successfully joined group with generation Generation{generationId=30, memberId='mynamespacename.servicebus.windows.net:c:$default:I:test-client-0-33016d4334614aa8b9b7bf3fd5e1023e', protocol='range'}
o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=test-client-0, groupId=$Default] Finished assignment for group at generation 30: {mynamespacename.servicebus.windows.net:c:$default:I:test-client-0-33016d4334614aa8b9b7bf3fd5e1023e=Assignment(partitions=[my-event-hub-0])}
o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=test-client-0, groupId=$Default] Successfully synced group in generation Generation{generationId=30, memberId='mynamespacename.servicebus.windows.net:c:$default:I:test-client-0-33016d4334614aa8b9b7bf3fd5e1023e', protocol='range'}
o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=test-client-0, groupId=$Default] Notifying assignor about the new Assignment(partitions=[my-event-hub-0])
o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=test-client-0, groupId=$Default] Adding newly assigned partitions: my-event-hub-0
o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=test-client-0, groupId=$Default] Setting offset for partition my-event-hub-0 to the committed offset FetchPosition{offset=17, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[mynamespacename.servicebus.windows.net:9093 (id: 0 rack: null)], epoch=absent}}
o.s.k.l.KafkaMessageListenerContainer : $Default: partitions assigned: [my-event-hub-0]
o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [mynamespacename.servicebus.windows.net:9093]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = test-client
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = $Default
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class org.springframework.kafka.support.serializer.JsonDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 127000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
o.a.kafka.common.utils.AppInfoParser : Kafka version: 2.7.1
o.a.kafka.common.utils.AppInfoParser : Kafka commitId: 61dbce85d0d41457
o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1629919378494
r.k.r.internals.ConsumerEventLoop : SubscribeEvent
o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=test-client, groupId=$Default] Subscribed to topic(s): my-event-hub
org.apache.kafka.clients.NetworkClient : [Consumer clientId=test-client, groupId=$Default] Bootstrap broker mynamespacename.servicebus.windows.net:9093 (id: -1 rack: null) disconnected
org.apache.kafka.clients.NetworkClient : [Consumer clientId=test-client, groupId=$Default] Bootstrap broker mynamespacename.servicebus.windows.net:9093 (id: -1 rack: null) disconnected
The following is the code:
import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.springframework.stereotype.Service;
import reactor.core.publisher.Flux;
import reactor.kafka.receiver.KafkaReceiver;
import reactor.kafka.receiver.ReceiverOptions;
import reactor.kafka.receiver.ReceiverRecord;

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

@Slf4j
@Service
public class StreamConsumer {

    public Flux<Object> consumeMessages() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "mynamespacename.servicebus.windows.net:9093");
        props.put(ConsumerConfig.CLIENT_ID_CONFIG, "test-client");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "$Default");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, org.springframework.kafka.support.serializer.JsonDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringDeserializer.class);

        ReceiverOptions<String, Object> receiverOptions = ReceiverOptions.create(props);
        ReceiverOptions<String, Object> options = receiverOptions.subscription(Collections.singleton(KafkaConstants.KAFKA_TOPIC))
                .addAssignListener(partitions -> log.debug("onPartitionsAssigned {}", partitions))
                .addRevokeListener(partitions -> log.debug("onPartitionsRevoked {}", partitions));

        Flux<ReceiverRecord<String, Object>> kafkaFlux = KafkaReceiver.create(options).receive();
        return kafkaFlux.map(x -> "Test");
    }
}
The Reactor code uses the same topic and group id constants. As the client logs a broken connection:
[Consumer clientId=test-client, groupId=$Default] Bootstrap broker mynamespacename.servicebus.windows.net:9093 (id: -1 rack: null) disconnected
I assume that there is a configuration missing to connect the consumer properly to Azure Event Hubs.
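For what it's worth, the ConsumerConfig dump above shows security.protocol = PLAINTEXT and sasl.jaas.config = null, while the Event Hubs Kafka endpoint on port 9093 requires SASL_SSL with the PLAIN mechanism (username $ConnectionString, password the namespace connection string). A hedged sketch of the consumer properties that appear to be missing (the connection string value is a placeholder):

// Assumed fix, added to the props map in StreamConsumer.consumeMessages().
// Requires org.apache.kafka.clients.CommonClientConfigs and
// org.apache.kafka.common.config.SaslConfigs.
props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
props.put(SaslConfigs.SASL_JAAS_CONFIG,
        "org.apache.kafka.common.security.plain.PlainLoginModule required"
        + " username=\"$ConnectionString\""
        + " password=\"<your-event-hubs-namespace-connection-string>\";");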
I have a Spring Boot app with a very simple Kafka producer. Everything works great if I connect to a Kafka cluster without encryption, but it times out if I try to connect to a Kafka cluster with SSL. Is there some other configuration I need on the producer, or some other property I need to define, to allow Spring to correctly use all of the configurations?
I have the following properties set:
spring.kafka.producer.bootstrap-servers=broker1.kafka.poc.com:9093,broker3.kafka.poc.com:9093,broker4.kafka.poc.com:9093,broker5.kafka.poc.com:9093
spring.kafka.ssl.key-store-type=jks
spring.kafka.ssl.trust-store-location=file:/home/ec2-user/truststore.jks
spring.kafka.ssl.trust-store-password=test1234
spring.kafka.ssl.key-store-location=file:/home/ec2-user/keystore.jks
spring.kafka.ssl.key-store-password=test1234
logging.level.org.apache.kafka=debug
server.ssl.key-password=test1234
spring.kafka.ssl.key-password=test1234
spring.kafka.producer.client-id=sym
spring.kafka.admin.ssl.protocol=ssl
With the following result printing as the ProducerConfig when the app starts up:
o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
acks = 1
batch.size = 16384
bootstrap.servers = [broker1.kafka.allypoc.com:9093, broker3.kafka.allypoc.com:9093, broker4.kafka.allypoc.com:9093, broker5.kafka.allypoc.com:9093]
buffer.memory = 33554432
client.dns.lookup = default
client.id = sym
compression.type = none
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = false
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2147483647
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = [hidden]
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = /home/ec2-user/keystore.jks
ssl.keystore.password = [hidden]
ssl.keystore.type = jks
ssl.protocol = ssl
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = /home/ec2-user/truststore.jks
ssl.truststore.password = [hidden]
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
My producer is extremely simple:
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class Producer {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public Producer(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    void sendMessage(String topic, String message) {
        this.kafkaTemplate.send(topic, message);
    }

    void sendMessage(String topic, String key, String message) {
        this.kafkaTemplate.send(topic, key, message);
    }
}
Connecting to Kafka with SSL gets a TimeoutException saying Topic symbols not present in metadata after 60000 ms.
If I turn on debug logs, I get this repeatedly, looping through all of my brokers:
2019-05-29 20:10:25.768 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient : [Producer clientId=sym] Completed connection to node -4. Fetching API versions.
2019-05-29 20:10:25.768 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient : [Producer clientId=sym] Initiating API versions fetch from node -4.
2019-05-29 20:10:25.768 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient : [Producer clientId=sym] Initialize connection to node 10.25.77.13:9093 (id: -3 rack: null) for sending metadata request
2019-05-29 20:10:25.768 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient : [Producer clientId=sym] Initiating connection to node 10.25.77.13:9093 (id: -3 rack: null) using address /10.25.77.13
2019-05-29 20:10:25.994 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.common.metrics.Metrics : Added sensor with name node--3.bytes-sent
2019-05-29 20:10:25.996 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.common.metrics.Metrics : Added sensor with name node--3.bytes-received
2019-05-29 20:10:25.997 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.common.metrics.Metrics : Added sensor with name node--3.latency
2019-05-29 20:10:25.998 DEBUG 1381 --- [rk-thread | sym] o.apache.kafka.common.network.Selector : [Producer clientId=sym] Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -3
2019-05-29 20:10:26.107 DEBUG 1381 --- [rk-thread | sym] o.apache.kafka.common.network.Selector : [Producer clientId=sym] Connection with /10.25.75.151 disconnected
java.io.EOFException: null
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:119) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:381) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:342) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:609) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:541) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.common.network.Selector.poll(Selector.java:467) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:535) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:311) ~[kafka-clients-2.1.1.jar!/:na]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:235) ~[kafka-clients-2.1.1.jar!/:na]
at java.base/java.lang.Thread.run(Thread.java:835) ~[na:na]
2019-05-29 20:10:26.108 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient : [Producer clientId=sym] Node -1 disconnected.
2019-05-29 20:10:26.110 DEBUG 1381 --- [rk-thread | sym] org.apache.kafka.clients.NetworkClient : [Producer clientId=sym] Completed connection to node -3. Fetching API versions.
In the producer config, security.protocol should be set to SSL (the ProducerConfig output above shows it is still PLAINTEXT). You could also try setting ssl.endpoint.identification.algorithm = "" to disable hostname validation of the certificate, in case that's the issue. Other than that, it would be useful to see the Kafka broker config.
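A minimal sketch of the corresponding Spring Boot properties, assuming the generic spring.kafka.properties.* passthrough (which forwards arbitrary entries to the Kafka clients):

# Assumed fix: switch the producer onto the TLS listener.
spring.kafka.properties.security.protocol=SSL
# Optional, only while debugging certificate/hostname issues:
spring.kafka.properties.ssl.endpoint.identification.algorithm=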
I am working on a Spring Boot 2.1.1 app which needs to communicate with Kafka.
As it stands, the app starts up, connects to Kafka, reads/writes a few messages, and then exits.
The goal is to keep the app running and listening on some Kafka topics, without ever exiting.
Here is the main app file:
package com.example.springbootstarter;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.annotation.TopicPartition;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.kafka.support.SendResult;
import org.springframework.messaging.handler.annotation.Header;
import org.springframework.messaging.handler.annotation.Payload;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;

@SpringBootApplication
public class SpringBootStarterApplication {

    public static void main(String[] args) throws Exception {
        ConfigurableApplicationContext context = SpringApplication.run(SpringBootStarterApplication.class, args);

        MessageProducer producer = context.getBean(MessageProducer.class);
        MessageListener listener = context.getBean(MessageListener.class);

        /*
         * Sending a Hello World message to topic 'baeldung'.
         * Must be received by both listeners with group foo
         * and bar with containerFactory fooKafkaListenerContainerFactory
         * and barKafkaListenerContainerFactory respectively.
         * It will also be received by the listener with
         * headersKafkaListenerContainerFactory as container factory.
         */
        producer.sendMessage("Hello, World!");
        listener.latch.await(10, TimeUnit.SECONDS);

        /*
         * Sending a message to a topic with 5 partitions,
         * each message to a different partition. But as per
         * listener configuration, only the messages from
         * partitions 0 and 3 will be consumed.
         */
        for (int i = 0; i < 5; i++) {
            producer.sendMessageToPartion("Hello To Partioned Topic!", i);
        }
        listener.partitionLatch.await(10, TimeUnit.SECONDS);

        /*
         * Sending message to 'filtered' topic. As per listener
         * configuration, all messages with char sequence
         * 'World' will be discarded.
         */
        producer.sendMessageToFiltered("Hello Baeldung!");
        producer.sendMessageToFiltered("Hello World!");
        listener.filterLatch.await(10, TimeUnit.SECONDS);

        /*
         * Sending message to 'greeting' topic. This will send
         * and receive a Java object with the help of
         * greetingKafkaListenerContainerFactory.
         */
        producer.sendGreetingMessage(new Greeting("Greetings", "World!"));
        listener.greetingLatch.await(10, TimeUnit.SECONDS);

        context.close();
    }

    @Bean
    public MessageProducer messageProducer() {
        return new MessageProducer();
    }

    @Bean
    public MessageListener messageListener() {
        return new MessageListener();
    }

    public static class MessageProducer {

        @Autowired
        private KafkaTemplate<String, String> kafkaTemplate;

        @Autowired
        private KafkaTemplate<String, Greeting> greetingKafkaTemplate;

        @Value(value = "${message.topic.name}")
        private String topicName;

        @Value(value = "${partitioned.topic.name}")
        private String partionedTopicName;

        @Value(value = "${filtered.topic.name}")
        private String filteredTopicName;

        @Value(value = "${greeting.topic.name}")
        private String greetingTopicName;

        public void sendMessage(String message) {
            ListenableFuture<SendResult<String, String>> future = kafkaTemplate.send(topicName, message);
            future.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {

                @Override
                public void onSuccess(SendResult<String, String> result) {
                    System.out.println("Sent message=[" + message + "] with offset=[" + result.getRecordMetadata().offset() + "]");
                }

                @Override
                public void onFailure(Throwable ex) {
                    System.out.println("Unable to send message=[" + message + "] due to : " + ex.getMessage());
                }
            });
        }

        public void sendMessageToPartion(String message, int partition) {
            kafkaTemplate.send(partionedTopicName, partition, null, message);
        }

        public void sendMessageToFiltered(String message) {
            kafkaTemplate.send(filteredTopicName, message);
        }

        public void sendGreetingMessage(Greeting greeting) {
            greetingKafkaTemplate.send(greetingTopicName, greeting);
        }
    }

    public static class MessageListener {

        private CountDownLatch latch = new CountDownLatch(3);

        private CountDownLatch partitionLatch = new CountDownLatch(2);

        private CountDownLatch filterLatch = new CountDownLatch(2);

        private CountDownLatch greetingLatch = new CountDownLatch(1);

        @KafkaListener(topics = "${message.topic.name}", groupId = "foo", containerFactory = "fooKafkaListenerContainerFactory")
        public void listenGroupFoo(String message) {
            System.out.println("Received Messasge in group 'foo': " + message);
            latch.countDown();
        }

        @KafkaListener(topics = "${message.topic.name}", groupId = "bar", containerFactory = "barKafkaListenerContainerFactory")
        public void listenGroupBar(String message) {
            System.out.println("Received Messasge in group 'bar': " + message);
            latch.countDown();
        }

        @KafkaListener(topics = "${message.topic.name}", containerFactory = "headersKafkaListenerContainerFactory")
        public void listenWithHeaders(@Payload String message, @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition) {
            System.out.println("Received Messasge: " + message + " from partition: " + partition);
            latch.countDown();
        }

        @KafkaListener(topicPartitions = @TopicPartition(topic = "${partitioned.topic.name}", partitions = { "0", "3" }))
        public void listenToParition(@Payload String message, @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition) {
            System.out.println("Received Message: " + message + " from partition: " + partition);
            this.partitionLatch.countDown();
        }

        @KafkaListener(topics = "${filtered.topic.name}", containerFactory = "filterKafkaListenerContainerFactory")
        public void listenWithFilter(String message) {
            System.out.println("Recieved Message in filtered listener: " + message);
            this.filterLatch.countDown();
        }

        @KafkaListener(topics = "${greeting.topic.name}", containerFactory = "greetingKafkaListenerContainerFactory")
        public void greetingListener(Greeting greeting) {
            System.out.println("Recieved greeting message: " + greeting);
            this.greetingLatch.countDown();
        }
    }
}
And here is the abridged log output of an app run:
2019-01-11 13:51:51.728  INFO 40885 --- [           main] c.e.s.SpringBootStarterApplication       : Starting SpringBootStarterApplication on CHIMAC11592-2.local with PID 40885 (/Users/e602684/Documents/dev/spring-boot-mongo-crud/target/classes started by e602684 in /Users/e602684/Documents/dev/spring-boot-mongo-crud)
2019-01-11 13:51:51.731 INFO 40885 --- [ main] c.e.s.SpringBootStarterApplication : No active profile set, falling back to default profiles: default
2019-01-11 13:51:52.181 INFO 40885 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data repositories in DEFAULT mode.
2019-01-11 13:51:52.223 INFO 40885 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 39ms. Found 1 repository interfaces.
2019-01-11 13:51:52.389 INFO 40885 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.kafka.annotation.KafkaBootstrapConfiguration' of type [org.springframework.kafka.annotation.KafkaBootstrapConfiguration$$EnhancerBySpringCGLIB$$6ef5e3a3] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2019-01-11 13:51:52.666 INFO 40885 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
2019-01-11 13:51:52.684 INFO 40885 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2019-01-11 13:51:52.684 INFO 40885 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet Engine: Apache Tomcat/9.0.13
2019-01-11 13:51:52.689 INFO 40885 --- [ main] o.a.catalina.core.AprLifecycleListener : The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: [/Users/e602684/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:.]
2019-01-11 13:51:52.773 INFO 40885 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2019-01-11 13:51:52.774 INFO 40885 --- [ main] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 1012 ms
2019-01-11 13:51:52.993 INFO 40885 --- [ main] org.mongodb.driver.cluster : Cluster created with settings {hosts=[localhost:27017], mode=MULTIPLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500}
2019-01-11 13:51:52.993 INFO 40885 --- [ main] org.mongodb.driver.cluster : Adding discovered server localhost:27017 to client view of cluster
2019-01-11 13:51:53.031 INFO 40885 --- [localhost:27017] org.mongodb.driver.connection : Opened connection [connectionId{localValue:1, serverValue:12}] to localhost:27017
2019-01-11 13:51:53.034 INFO 40885 --- [localhost:27017] org.mongodb.driver.cluster : Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[4, 0, 4]}, minWireVersion=0, maxWireVersion=7, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=1632137}
2019-01-11 13:51:53.035 INFO 40885 --- [localhost:27017] org.mongodb.driver.cluster : Discovered cluster type of STANDALONE
2019-01-11 13:51:53.412 INFO 40885 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
2019-01-11 13:51:53.574 INFO 40885 --- [ main] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
bootstrap.servers = [127.0.0.1:9092]
client.id =
connections.max.idle.ms = 300000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 120000
retries = 5
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
2019-01-11 13:51:53.968 INFO 40885 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version : 2.0.1
2019-01-11 13:51:53.968 INFO 40885 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId : fa14705e51bd2ce5
2019-01-11 13:51:53.973 INFO 40885 --- [ad | producer-1] org.apache.kafka.clients.Metadata : Cluster ID: QW5A9DYxTlSZVC2M8pxAsg
Sent message=[Hello, World!] with offset=[9]
2019-01-11 13:51:56.858 INFO 40885 --- [ntainer#2-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-2, groupId=headers] Successfully joined group with generation 9
2019-01-11 13:51:56.859 INFO 40885 --- [ntainer#2-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-2, groupId=headers] Setting newly assigned partitions [test-0]
2019-01-11 13:51:56.862 INFO 40885 --- [ntainer#2-0-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [test-0]
Received Messasge: Hello, World! from partition: 0
2019-01-11 13:51:56.895 INFO 40885 --- [ntainer#4-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-6, groupId=filter] Successfully joined group with generation 9
2019-01-11 13:51:56.896 INFO 40885 --- [ntainer#4-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-6, groupId=filter] Setting newly assigned partitions [test-0]
2019-01-11 13:51:56.898 INFO 40885 --- [ntainer#4-0-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [test-0]
2019-01-11 13:51:56.915 INFO 40885 --- [ntainer#5-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-8, groupId=greeting] Successfully joined group with generation 9
2019-01-11 13:51:56.916 INFO 40885 --- [ntainer#5-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-8, groupId=greeting] Setting newly assigned partitions [greetings-0]
2019-01-11 13:51:56.919 INFO 40885 --- [ntainer#5-0-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [greetings-0]
2019-01-11 13:51:56.930 INFO 40885 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-10, groupId=foo] Successfully joined group with generation 9
2019-01-11 13:51:56.931 INFO 40885 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-10, groupId=foo] Setting newly assigned partitions [test-0]
2019-01-11 13:51:56.933 INFO 40885 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [test-0]
Recieved greeting message: Greetings, World!!
Received Messasge in group 'foo': Hello, World!
2019-01-11 13:51:56.944 INFO 40885 --- [ntainer#1-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-12, groupId=bar] Successfully joined group with generation 9
2019-01-11 13:51:56.944 INFO 40885 --- [ntainer#1-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-12, groupId=bar] Setting newly assigned partitions [test-0]
2019-01-11 13:51:56.946 INFO 40885 --- [ntainer#1-0-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [test-0]
Received Messasge in group 'bar': Hello, World!
Received Message: Hello To Partioned Topic! from partition: 0
Received Message: Hello To Partioned Topic! from partition: 3
Received Messasge: Hello Baeldung! from partition: 0
Received Messasge: Hello World! from partition: 0
Received Messasge in group 'bar': Hello Baeldung!
Received Messasge in group 'bar': Hello World!
Received Messasge in group 'foo': Hello Baeldung!
Received Messasge in group 'foo': Hello World!
Recieved Message in filtered listener: Hello Baeldung!
2019-01-11 13:52:06.968 INFO 40885 --- [ main] o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
acks = 1
batch.size = 16384
bootstrap.servers = [127.0.0.1:9092]
buffer.memory = 33554432
client.id =
compression.type = none
connections.max.idle.ms = 540000
enable.idempotence = false
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 0
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.springframework.kafka.support.serializer.JsonSerializer
2019-01-11 13:52:06.971 INFO 40885 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version : 2.0.1
2019-01-11 13:52:06.971 INFO 40885 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId : fa14705e51bd2ce5
2019-01-11 13:52:06.984 INFO 40885 --- [ad | producer-2] org.apache.kafka.clients.Metadata : Cluster ID: QW5A9DYxTlSZVC2M8pxAsg
2019-01-11 13:52:07.002 INFO 40885 --- [ntainer#3-0-C-1] o.s.s.c.ThreadPoolTaskScheduler : Shutting down ExecutorService
2019-01-11 13:52:07.003 INFO 40885 --- [ntainer#0-0-C-1] o.s.s.c.ThreadPoolTaskScheduler : Shutting down ExecutorService
2019-01-11 13:52:07.003 INFO 40885 --- [ntainer#2-0-C-1] o.s.s.c.ThreadPoolTaskScheduler : Shutting down ExecutorService
2019-01-11 13:52:07.003 INFO 40885 --- [ntainer#1-0-C-1] o.s.s.c.ThreadPoolTaskScheduler : Shutting down ExecutorService
2019-01-11 13:52:07.003 INFO 40885 --- [ntainer#4-0-C-1] o.s.s.c.ThreadPoolTaskScheduler : Shutting down ExecutorService
2019-01-11 13:52:07.005 INFO 40885 --- [ntainer#5-0-C-1] o.s.s.c.ThreadPoolTaskScheduler : Shutting down ExecutorService
2019-01-11 13:52:07.007 INFO 40885 --- [ntainer#3-0-C-1] essageListenerContainer$ListenerConsumer : Consumer stopped
2019-01-11 13:52:07.014 INFO 40885 --- [ntainer#0-0-C-1] essageListenerContainer$ListenerConsumer : Consumer stopped
2019-01-11 13:52:07.014 INFO 40885 --- [ntainer#2-0-C-1] essageListenerContainer$ListenerConsumer : Consumer stopped
2019-01-11 13:52:07.021 INFO 40885 --- [ntainer#1-0-C-1] essageListenerContainer$ListenerConsumer : Consumer stopped
2019-01-11 13:52:07.021 INFO 40885 --- [ntainer#4-0-C-1] essageListenerContainer$ListenerConsumer : Consumer stopped
2019-01-11 13:52:07.021 INFO 40885 --- [ntainer#5-0-C-1] essageListenerContainer$ListenerConsumer : Consumer stopped
2019-01-11 13:52:07.022 INFO 40885 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Shutting down ExecutorService 'applicationTaskExecutor'
2019-01-11 13:52:07.022 INFO 40885 --- [ main] o.a.k.clients.producer.KafkaProducer : [Producer clientId=producer-2] Closing the Kafka producer with timeoutMillis = 30000 ms.
2019-01-11 13:52:07.024 INFO 40885 --- [ main] o.a.k.clients.producer.KafkaProducer : [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 30000 ms.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.catalina.loader.WebappClassLoaderBase (file:/Users/e602684/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.13/tomcat-embed-core-9.0.13.jar) to field java.io.ObjectStreamClass$Caches.localDescs
WARNING: Please consider reporting this to the maintainers of org.apache.catalina.loader.WebappClassLoaderBase
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Disconnected from the target VM, address: '127.0.0.1:57175', transport: 'socket'
Process finished with exit code 0
What should I change in the main app file to keep this app from exiting?
Because I have the context assignment as a part of my run statement:
ConfigurableApplicationContext context =
SpringApplication.run(SpringBootStarterApplication.class, args);
All I had to do to keep the app from shutting down was to comment out context.close():
//context.close();
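This works because the @KafkaListener containers and the embedded Tomcat server run on non-daemon threads, which keep the JVM alive after main() returns; closing the context stops them. A minimal sketch of the resulting main method (the demo sends and latch awaits are unchanged):

public static void main(String[] args) throws Exception {
    ConfigurableApplicationContext context =
            SpringApplication.run(SpringBootStarterApplication.class, args);
    // ...demo sends and latch awaits as above...
    // context.close(); // removed: closing the context stops the listener containers
}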
Spring Boot: I have Kafka code in the application that listens to a topic, and a controller which has API endpoints. My Kafka consumer polls the topic. When I start the application, I see Kafka getting started, but the problem is my endpoints are not working, and the application is not even up on the port mentioned. I need to test my endpoints but I cannot. I see Kafka starting and working fine, but I don't see the prompt saying that the application started on port XXXX, and the endpoints are not working.
Logs on console:
2018-04-17 17:18:59.447  INFO 59720 --- [           main] c.f.s.e.EventAggregationApplication      : Starting EventAggregationApplication on FPTECHS48s-MacBook-Pro.local with PID 59720 (/Users/fptechs48/IdeaProjects/event-aggregation/target/classes started by fptechs48 in /Users/fptechs48/IdeaProjects/event-aggregation)
2018-04-17 17:18:59.450 DEBUG 59720 --- [           main] c.f.s.e.EventAggregationApplication      : Running with Spring Boot v1.5.9.RELEASE, Spring v4.3.13.RELEASE
2018-04-17 17:18:59.450  INFO 59720 --- [           main] c.f.s.e.EventAggregationApplication      : The following profiles are active: dev
2018-04-17 17:18:59.487  INFO 59720 --- [           main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@1198b989: startup date [Tue Apr 17 17:18:59 IST 2018]; root of context hierarchy
2018-04-17 17:18:59.894  INFO 59720 --- [           main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.kafka.annotation.KafkaBootstrapConfiguration' of type [org.springframework.kafka.annotation.KafkaBootstrapConfiguration$$EnhancerBySpringCGLIB$$b36fb556] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2018-04-17 17:19:00.521  INFO 59720 --- [           main] o.a.k.clients.admin.AdminClientConfig    : AdminClientConfig values:
bootstrap.servers = [13.126.200.243:9092]
client.id =
connections.max.idle.ms = 300000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 120000
retries = 5
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
2018-04-17 17:19:00.553  INFO 59720 --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka version : 1.0.0
2018-04-17 17:19:00.553  INFO 59720 --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka commitId : aaa7af6d4a11b29d
After this, Kafka starts listening to the topic, but if I hit an endpoint defined in my application, it doesn't work.
The console never even says Tomcat started on port XXXX.
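A hedged observation: the log shows Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext rather than an embedded web application context, which suggests the app is not starting as a web application at all. On Spring Boot 1.5 that usually means spring-boot-starter-web is missing from the classpath (or the web environment has been disabled explicitly). The dependency to check for, assuming Maven:

<!-- Without this starter on the classpath, Boot 1.5 starts a plain
     (non-web) context and never boots embedded Tomcat. -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>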