Kafka SASL_PLAINTEXT / SCRAM authentication fails in Spring Boot consumer - spring-boot

I'm trying Kafka authentication with SASL_PLAINTEXT / SCRAM, but authentication fails in Spring Boot.
When I switch to SASL_PLAINTEXT / PLAIN it works, but with SCRAM authentication fails for both SHA-512 and SHA-256.
I've tried many different things, but nothing works.
How can I fix it?
Broker log:
broker1 | [2020-12-31 02:57:37,831] INFO [SocketServer brokerId=1] Failed authentication with /172.29.0.1 (Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512) (org.apache.kafka.common.network.Selector)
broker2 | [2020-12-31 02:57:37,891] INFO [SocketServer brokerId=2] Failed authentication with /172.29.0.1 (Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512) (org.apache.kafka.common.network.Selector)
Spring Boot log:
2020-12-31 11:57:37.438 INFO 82416 --- [ restartedMain] o.a.k.c.s.authenticator.AbstractLogin : Successfully logged in.
2020-12-31 11:57:37.497 INFO 82416 --- [ restartedMain] o.a.kafka.common.utils.AppInfoParser : Kafka version: 2.6.0
2020-12-31 11:57:37.499 INFO 82416 --- [ restartedMain] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: 62abe01bee039651
2020-12-31 11:57:37.499 INFO 82416 --- [ restartedMain] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1609383457495
2020-12-31 11:57:37.502 INFO 82416 --- [ restartedMain] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-Test-Consumer-1, groupId=Test-Consumer] Subscribed to topic(s): test
2020-12-31 11:57:37.508 INFO 82416 --- [ restartedMain] o.s.s.c.ThreadPoolTaskScheduler : Initializing ExecutorService
2020-12-31 11:57:37.528 INFO 82416 --- [ restartedMain] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2020-12-31 11:57:37.546 INFO 82416 --- [ restartedMain] i.m.k.p.KafkaProducerScramApplication : Started KafkaProducerScramApplication in 2.325 seconds (JVM running for 3.263)
2020-12-31 11:57:37.833 INFO 82416 --- [ntainer#0-0-C-1] o.apache.kafka.common.network.Selector : [Consumer clientId=consumer-Test-Consumer-1, groupId=Test-Consumer] Failed authentication with localhost/127.0.0.1 (Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512)
2020-12-31 11:57:37.836 ERROR 82416 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-Test-Consumer-1, groupId=Test-Consumer] Connection to node -1 (localhost/127.0.0.1:9091) failed authentication due to: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512
2020-12-31 11:57:37.837 WARN 82416 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-Test-Consumer-1, groupId=Test-Consumer] Bootstrap broker localhost:9091 (id: -1 rack: null) disconnected
2020-12-31 11:57:37.842 ERROR 82416 --- [ntainer#0-0-C-1] essageListenerContainer$ListenerConsumer : Consumer exception
java.lang.IllegalStateException: This error handler cannot process 'org.apache.kafka.common.errors.SaslAuthenticationException's; no record information is available
at org.springframework.kafka.listener.SeekUtils.seekOrRecover(SeekUtils.java:151) ~[spring-kafka-2.6.4.jar:2.6.4]
at org.springframework.kafka.listener.SeekToCurrentErrorHandler.handle(SeekToCurrentErrorHandler.java:113) ~[spring-kafka-2.6.4.jar:2.6.4]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.handleConsumerException(KafkaMessageListenerContainer.java:1425) ~[spring-kafka-2.6.4.jar:2.6.4]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:1122) ~[spring-kafka-2.6.4.jar:2.6.4]
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[na:na]
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[na:na]
at java.base/java.lang.Thread.run(Thread.java:832) ~[na:na]
Caused by: org.apache.kafka.common.errors.SaslAuthenticationException: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512
My docker-compose.yml:
...
...
  zookeeper3:
    image: confluentinc/cp-zookeeper:6.0.1
    hostname: zookeeper3
    container_name: zookeeper3
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper3:2183
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://zookeeper:2183
      ZOOKEEPER_CLIENT_PORT: 2183
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_SERVER_ID: 3
      KAFKA_OPTS: "-Djava.security.auth.login.config=/etc/kafka/secrets/sasl/zookeeper_jaas.conf \
        -Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider \
        -Dzookeeper.authProvider.2=org.apache.zookeeper.server.auth.DigestAuthenticationProvider \
        -Dquorum.auth.enableSasl=true \
        -Dquorum.auth.learnerRequireSasl=true \
        -Dquorum.auth.serverRequireSasl=true \
        -Dquorum.auth.learner.saslLoginContext=QuorumLearner \
        -Dquorum.auth.server.saslLoginContext=QuorumServer \
        -Dquorum.cnxn.threads.size=20 \
        -DrequireClientAuthScheme=sasl"
    volumes:
      - /etc/kafka/secrets/sasl:/etc/kafka/secrets/sasl
  broker1:
    image: confluentinc/cp-kafka:6.0.1
    hostname: broker1
    container_name: broker1
    depends_on:
      - zookeeper1
      - zookeeper2
      - zookeeper3
    ports:
      - "9091:9091"
      - "9101:9101"
      - "29091:29091"
    expose:
      - "29090"
    environment:
      KAFKA_OPTS: "-Dzookeeper.sasl.client=true -Djava.security.auth.login.config=/etc/kafka/secrets/sasl/kafka_server_jaas.conf"
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper1:2181,zookeeper2:2182,zookeeper3:2183'
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT,SASL_PLAINHOST:SASL_PLAINTEXT
      KAFKA_LISTENERS: INSIDE://:29090,OUTSIDE://:29091,SASL_PLAINHOST://:9091
      KAFKA_ADVERTISED_LISTENERS: INSIDE://broker1:29090,OUTSIDE://localhost:29091,SASL_PLAINHOST://localhost:9091
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9101
      KAFKA_JMX_HOSTNAME: localhost
      KAFKA_SECURITY_INTER_BROKER_PROTOCAL: SASL_PLAINTEXT
      KAFKA_SASL_ENABLED_MECHANISMS: SCRAM-SHA-512
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAINTEXT
    volumes:
      - /etc/kafka/secrets/sasl:/etc/kafka/secrets/sasl
  broker2:
    image: confluentinc/cp-kafka:6.0.1
    hostname: broker2
    container_name: broker2
    depends_on:
      - zookeeper1
      - zookeeper2
      - zookeeper3
    ports:
      - "9092:9092"
      - "9102:9102"
      - "29092:29092"
    expose:
      - "29090"
    environment:
      KAFKA_OPTS: "-Dzookeeper.sasl.client=true -Djava.security.auth.login.config=/etc/kafka/secrets/sasl/kafka_server_jaas.conf"
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper1:2181,zookeeper2:2182,zookeeper3:2183'
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT,SASL_PLAINHOST:SASL_PLAINTEXT
      KAFKA_LISTENERS: INSIDE://:29090,OUTSIDE://:29092,SASL_PLAINHOST://:9092
      KAFKA_ADVERTISED_LISTENERS: INSIDE://broker2:29090,OUTSIDE://localhost:29092,SASL_PLAINHOST://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9102
      KAFKA_JMX_HOSTNAME: localhost
      KAFKA_SECURITY_INTER_BROKER_PROTOCAL: SASL_PLAINTEXT
      KAFKA_SASL_ENABLED_MECHANISMS: SCRAM-SHA-512
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAINTEXT
    volumes:
      - /etc/kafka/secrets/sasl:/etc/kafka/secrets/sasl
kafka_server_jaas.conf:
KafkaServer {
  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="admin"
  password="password"
  user_admin="password"
  user_client="password";
};
Client {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="password";
};
KafkaClient {
  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="client"
  password="password";
};
zookeeper_jaas.conf:
Server {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  user_admin="password";
};
QuorumServer {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  user_admin="password";
};
QuorumLearner {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  username="admin"
  password="password";
};
ConsumerConfig.java:
private static final String BOOTSTRAP_ADDRESS = "localhost:9091,localhost:9092";
private static final String JAAS_TEMPLATE = "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"%s\" password=\"%s\";";

public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    String jaasCfg = String.format(JAAS_TEMPLATE, "client", "password");
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_ADDRESS);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "1000");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "Test-Consumer");
    props.put("sasl.jaas.config", jaasCfg);
    props.put("sasl.mechanism", "SCRAM-SHA-512");
    props.put("security.protocol", "SASL_PLAINTEXT");
    return props;
}

@Bean
public ConsumerFactory<String, String> consumerFactory() {
    return new DefaultKafkaConsumerFactory<>(consumerConfigs());
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    return factory;
}
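As a side note, the same SASL settings can also be supplied declaratively through Spring Boot's application.yml instead of a hand-built consumer config map. This is only a minimal sketch assuming the standard spring.kafka.* properties and the same client/password credentials shown above:

spring:
  kafka:
    bootstrap-servers: localhost:9091,localhost:9092
    consumer:
      group-id: Test-Consumer
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    properties:
      # Passed straight through to the Kafka client
      security.protocol: SASL_PLAINTEXT
      sasl.mechanism: SCRAM-SHA-512
      sasl.jaas.config: >-
        org.apache.kafka.common.security.scram.ScramLoginModule required
        username="client"
        password="password";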

Solved.
The problem was that I hadn't added the user credentials to ZooKeeper.
Add this service:
  zookeeper-add-kafka-users:
    image: confluentinc/cp-kafka:6.0.1
    container_name: "zookeeper-add-kafka-users"
    depends_on:
      - zookeeper1
      - zookeeper2
      - zookeeper3
    command: "bash -c 'echo Waiting for Zookeeper to be ready... && \
      cub zk-ready zookeeper1:2181 120 && \
      cub zk-ready zookeeper2:2182 120 && \
      cub zk-ready zookeeper3:2183 120 && \
      kafka-configs --zookeeper zookeeper1:2181 --alter --add-config 'SCRAM-SHA-512=[iterations=4096,password=password]' --entity-type users --entity-name admin && \
      kafka-configs --zookeeper zookeeper1:2181 --alter --add-config 'SCRAM-SHA-512=[iterations=4096,password=password]' --entity-type users --entity-name client '"
    environment:
      KAFKA_BROKER_ID: ignored
      KAFKA_ZOOKEEPER_CONNECT: ignored
      KAFKA_OPTS: -Djava.security.auth.login.config=/etc/kafka/secrets/sasl/kafka_server_jaas.conf
    volumes:
      - /home/mysend/dev/docker/kafka/sasl:/etc/kafka/secrets/sasl
kafka SASL_PLAIN SCRAM
If you don't use Docker, you can run the command directly:
bin/kafka-configs --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin
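To double-check that the SCRAM credentials really exist in ZooKeeper, and to test SCRAM authentication outside Spring, something along these lines should work (client.properties is a hypothetical file name; adjust hosts and ports to your setup):

# List the SCRAM credential stored for the "client" user
kafka-configs --zookeeper localhost:2181 --describe --entity-type users --entity-name client

# End-to-end check with the console consumer using the same SASL settings
cat > client.properties <<'EOF'
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="client" password="password";
EOF
kafka-console-consumer --bootstrap-server localhost:9091 --topic test --consumer.config client.properties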

Related

Testcontainers SchemaRegistry can't connect to Kafka container

I want to run integration tests that test my Kafka listener and Avro serialization. This requires Kafka and a Schema Registry (and, transitively, a ZooKeeper).
When testing I currently have to use a docker-compose.yml, but I want to reduce user error by building the required containers via Testcontainers. The Kafka and ZooKeeper instances start neatly and seem to work just fine: my application can create the required topics, the listener is subscribed, and I can even send messages via the Kafka console producer.
What does not work is the Schema Registry. The container starts and apparently connects to ZooKeeper, but cannot establish a connection to the broker. It retries for some time until it times out, and the container is then stopped. As a result I cannot register and read my Avro schemas for (de)serialization in my tests, which fail because of this.
I can't find the reason why the Schema Registry can apparently connect to ZooKeeper but can't find my broker.
Did someone run into this problem as well? Did you manage to get this running? If so, how?
I need the Kafka and Schema Registry Testcontainers to be fully available for my tests, so omitting any of them is not an option.
I could also keep using the docker-compose.yml, but I would really like to set up my test environment fully programmatically.
The schema registry container logs the following:
2023-02-08 16:56:09 [2023-02-08 15:56:09,556] INFO Session establishment complete on server zookeeper/192.168.144.2:2181, session id = 0x1000085b81e0003, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
2023-02-08 16:56:09 [2023-02-08 15:56:09,696] INFO Session: 0x1000085b81e0003 closed (org.apache.zookeeper.ZooKeeper)
2023-02-08 16:56:09 [2023-02-08 15:56:09,696] INFO EventThread shut down for session: 0x1000085b81e0003 (org.apache.zookeeper.ClientCnxn)
2023-02-08 16:56:09 [2023-02-08 15:56:09,787] INFO AdminClientConfig values:
/* Omitted for brevity */
(org.apache.kafka.clients.admin.AdminClientConfig)
2023-02-08 16:56:10 [2023-02-08 15:56:10,284] INFO Kafka version: 7.3.1-ccs (org.apache.kafka.common.utils.AppInfoParser)
2023-02-08 16:56:10 [2023-02-08 15:56:10,284] INFO Kafka commitId: 8628b0341c3c4676 (org.apache.kafka.common.utils.AppInfoParser)
2023-02-08 16:56:10 [2023-02-08 15:56:10,284] INFO Kafka startTimeMs: 1675871770281 (org.apache.kafka.common.utils.AppInfoParser)
2023-02-08 16:56:10 [2023-02-08 15:56:10,308] INFO [AdminClient clientId=adminclient-1] Node -1 disconnected. (org.apache.kafka.clients.NetworkClient)
2023-02-08 16:56:10 [2023-02-08 15:56:10,313] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:54776) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
/* These lines repeat a few times until the container times out and exits. */
2023-02-08 16:56:50 [2023-02-08 15:56:50,144] INFO [AdminClient clientId=adminclient-1] Node -1 disconnected. (org.apache.kafka.clients.NetworkClient)
2023-02-08 16:56:50 [2023-02-08 15:56:50,144] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:54776) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
2023-02-08 16:56:50 [2023-02-08 15:56:50,298] ERROR Error while getting broker list. (io.confluent.admin.utils.ClusterStatus)
2023-02-08 16:56:50 java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment. Call: listNodes
2023-02-08 16:56:50 at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
2023-02-08 16:56:50 at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
2023-02-08 16:56:50 at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165)
2023-02-08 16:56:50 at io.confluent.admin.utils.ClusterStatus.isKafkaReady(ClusterStatus.java:147)
2023-02-08 16:56:50 at io.confluent.admin.utils.cli.KafkaReadyCommand.main(KafkaReadyCommand.java:149)
2023-02-08 16:56:50 Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment. Call: listNodes
2023-02-08 16:56:51 [2023-02-08 15:56:51,103] INFO [AdminClient clientId=adminclient-1] Node -1 disconnected. (org.apache.kafka.clients.NetworkClient)
2023-02-08 16:56:51 [2023-02-08 15:56:51,103] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:54776) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
2023-02-08 16:56:51 [2023-02-08 15:56:51,300] INFO Expected 1 brokers but found only 0. Trying to query Kafka for metadata again ... (io.confluent.admin.utils.ClusterStatus)
2023-02-08 16:56:51 [2023-02-08 15:56:51,300] ERROR Expected 1 brokers but found only 0. Brokers found []. (io.confluent.admin.utils.ClusterStatus)
2023-02-08 16:56:51 Using log4j config /etc/schema-registry/log4j.properties
My base test class; ITs that need Kafka extend this class:
@Testcontainers
@SpringBootTest
@Slf4j
public class AbstractIT {

    private static final Network network = Network.newNetwork();

    protected static GenericContainer ZOOKEEPER = new GenericContainer<>(
            DockerImageName.parse("confluentinc/cp-zookeeper:7.2.0"))
            .withNetwork(network)
            .withNetworkAliases("zookeeper")
            .withEnv(Map.of(
                    "ZOOKEEPER_CLIENT_PORT", "2181",
                    "ZOOKEEPER_TICK_TIME", "2000"));

    protected static final KafkaContainer KAFKA = new KafkaContainer(
            DockerImageName.parse("confluentinc/cp-kafka"))
            .withExternalZookeeper("zookeeper:2181")
            .dependsOn(ZOOKEEPER)
            .withNetwork(network)
            .withNetworkAliases("broker");

    protected static final GenericContainer SCHEMAREGSISTRY = new GenericContainer<>(
            DockerImageName.parse("confluentinc/cp-schema-registry"))
            .dependsOn(ZOOKEEPER, KAFKA)
            .withEnv(Map.of(
                    "SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL", "zookeeper:2181",
                    "SCHEMA_REGISTRY_HOST_NAME", "schemaregistry",
                    "SCHEMA_REGISTRY_LISTENERS", "http://0.0.0.0:8085",
                    "SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS", "broker:9092"))
            .withNetwork(network)
            .withNetworkAliases("schemaregistry");

    @DynamicPropertySource
    static void registerPgProperties(DynamicPropertyRegistry registry) {
        registry.add("bootstrap.servers", KAFKA::getBootstrapServers);
        registry.add("spring.kafka.bootstrap-servers", KAFKA::getBootstrapServers);
        registry.add("spring.kafka.consumer.auto-offset-reset", () -> "earliest");
        registry.add("spring.data.mongodb.uri", MONGODB::getConnectionString);
        registry.add("spring.data.mongodb.database", () -> "test");
    }

    // container startup, shutdown as well as topic creation omitted for brevity
}
My docker-compose.yml that I want to replicate with Testcontainers:
version: "3.5"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.2.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker:
    image: confluentinc/cp-kafka:7.2.0
    hostname: broker
    container_name: broker
    restart: always
    depends_on:
      - zookeeper
    ports:
      - "29092:29092"
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_SCHEMA_REGISTRY_URL: "schemaregistry:8085"
  schemaregistry:
    container_name: schemaregistry
    hostname: schemaregistry
    image: confluentinc/cp-schema-registry:5.1.2
    restart: always
    depends_on:
      - zookeeper
    environment:
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: "zookeeper:2181"
      SCHEMA_REGISTRY_HOST_NAME: schemaregistry
      SCHEMA_REGISTRY_LISTENERS: "http://0.0.0.0:8085"
    ports:
      - "8085:8085"
    volumes:
      - "./src/main/avro/:/etc/schema"
Here is a working setup of Kafka & Schema Registry for Testcontainers. It could help you find the issue in your setup.
Schema Registry
public class SchemaRegistryContainer extends GenericContainer<SchemaRegistryContainer> {

    public static final String SCHEMA_REGISTRY_IMAGE = "confluentinc/cp-schema-registry";
    public static final int SCHEMA_REGISTRY_PORT = 8081;

    public SchemaRegistryContainer() {
        this(CONFLUENT_PLATFORM_VERSION);
    }

    public SchemaRegistryContainer(String version) {
        super(SCHEMA_REGISTRY_IMAGE + ":" + version);
        waitingFor(Wait.forHttp("/subjects").forStatusCode(200));
        withExposedPorts(SCHEMA_REGISTRY_PORT);
    }

    public SchemaRegistryContainer withKafka(KafkaContainer kafka) {
        return withKafka(kafka.getNetwork(), kafka.getNetworkAliases().get(0) + ":9092");
    }

    public SchemaRegistryContainer withKafka(Network network, String bootstrapServers) {
        withNetwork(network);
        withEnv("SCHEMA_REGISTRY_HOST_NAME", "schema-registry");
        withEnv("SCHEMA_REGISTRY_LISTENERS", "http://0.0.0.0:8081");
        withEnv("SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS", "PLAINTEXT://" + bootstrapServers);
        return self();
    }
}
Kafka + Schema Registry
public static final String CONFLUENT_PLATFORM_VERSION = "5.5.1";

private static final Network KAFKA_NETWORK = Network.newNetwork();
private static final DockerImageName KAFKA_IMAGE = DockerImageName.parse("confluentinc/cp-kafka")
        .withTag(CONFLUENT_PLATFORM_VERSION);
private static final KafkaContainer KAFKA = new KafkaContainer(KAFKA_IMAGE)
        .withNetwork(KAFKA_NETWORK)
        .withEnv("KAFKA_TRANSACTION_STATE_LOG_MIN_ISR", "1")
        .withEnv("KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR", "1");
private static final SchemaRegistryContainer SCHEMA_REGISTRY =
        new SchemaRegistryContainer(CONFLUENT_PLATFORM_VERSION);

@BeforeAll
static void startKafkaContainer() {
    KAFKA.start();
    SCHEMA_REGISTRY.withKafka(KAFKA).start();
    // init kafka properties for consumer or producer
    ....
    kafkaProperties.setBootstrapServers(KAFKA.getBootstrapServers());
    kafkaProperties.setSchemaRegistryUrl("http://" + SCHEMA_REGISTRY.getHost() + ":" + SCHEMA_REGISTRY.getFirstMappedPort());
}
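To wire this into a Spring Boot test, the broker address and the mapped schema registry port can then be exposed via @DynamicPropertySource, for example. The property keys below are assumptions; use whatever keys your serializer configuration actually reads:

@DynamicPropertySource
static void kafkaProperties(DynamicPropertyRegistry registry) {
    // Broker address as seen from the host running the tests
    registry.add("spring.kafka.bootstrap-servers", KAFKA::getBootstrapServers);
    // Mapped port of the schema registry container (8081 inside the container)
    registry.add("spring.kafka.properties.schema.registry.url",
            () -> "http://" + SCHEMA_REGISTRY.getHost() + ":" + SCHEMA_REGISTRY.getFirstMappedPort());
}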

spring kafka: Transactions in consumer Timeout expired after 60000 milliseconds while awaiting AddOffsetsToTxn

We have a transactional producer, and there are no issues there.
For the consumer, we see the following in the logs. The question is: why is a transaction being started here while we are consuming the message (and why does it result in this exception)?
2022-12-28 18:02:05.986 DEBUG [qa] 85474 --- [tainer#0-51-C-1] o.s.k.t.KafkaTransactionManager : Creating new transaction with name [null]: PROPAGATION_REQUIRED,ISOLATION_DEFAULT
2022-12-28 18:02:10.437 DEBUG [qa] 85474 --- [tainer#0-51-C-1] o.s.k.t.KafkaTransactionManager : Created Kafka transaction on producer [CloseSafeProducer [delegate=brave.kafka.clients.TracingProducer#43e18a26]]
2022-12-28 18:02:10.438 DEBUG [qa] 85474 --- [tainer#0-51-C-1] abc.abc.kafka : NORMAL uid[n/a] cid[uPRwhWZikfcYpGfIqh7TxXFnm6VmZWkf] m[[S] Consuming message] data[record = ConsumerRecord(topic =XXXXXXX)))]
2022-12-28 18:02:10.438 DEBUG [qa] 85474 --- [tainer#0-51-C-1] abc.abc.kafka : NORMAL uid[n/a] cid[uPRwhWZikfcYpGfIqh7TxXFnm6VmZWkf] m[Processed message]
2022-12-28 18:03:10.439 INFO [qa] 85474 --- [tainer#0-51-C-1] abc.abc.common.Generic : NORMAL uid[n/a] cid[uPRwhWZikfcYpGfIqh7TxXFnm6VmZWkf] m[Throwable t] data[exception = [Ljava.lang.StackTraceElement;#6d314e1d] ex[Timeout expired after 60000 milliseconds while awaiting AddOffsetsToTxn] sts[org.apache.kafka.common.errors.TimeoutException: Timeout expired after 60000 milliseconds while awaiting AddOffsetsToTxn
]
2022-12-28 18:03:10.439 DEBUG [qa] 85474 --- [tainer#0-51-C-1] abc.abc.kafka : NORMAL uid[n/a] cid[uPRwhWZikfcYpGfIqh7TxXFnm6VmZWkf] m[[E] Consuming message]
2022-12-28 18:03:10.439 DEBUG [qa] 85474 --- [tainer#0-51-C-1] o.s.k.t.KafkaTransactionManager : Initiating transaction commit
2022-12-28 18:04:10.444 ERROR [qa] 85474 --- [tainer#0-51-C-1] o.s.k.core.DefaultKafkaProducerFactory : org.apache.kafka.common.errors.TimeoutException: Timeout expired after 60000 milliseconds while awaiting EndTxn(true)
commitTransaction failed: CloseSafeProducer [delegate=brave.kafka.clients.TracingProducer#43e18a26]
2022-12-28 18:04:10.444 DEBUG [qa] 85474 --- [tainer#0-51-C-1] o.s.k.t.KafkaTransactionManager : org.apache.kafka.common.errors.TimeoutException: Timeout expired after 60000 milliseconds while awaiting EndTxn(true)
Initiating transaction rollback after commit exception
2022-12-28 18:04:10.445 WARN [qa] 85474 --- [tainer#0-51-C-1] o.s.k.core.DefaultKafkaProducerFactory : Error during some operation; producer removed from cache: CloseSafeProducer [delegate=brave.kafka.clients.TracingProducer#43e18a26]
2022-12-28 18:04:12.436 ERROR [qa] 85474 --- [tainer#0-51-C-1] o.s.k.l.KafkaMessageListenerContainer : org.apache.kafka.common.errors.TimeoutException: Timeout expired after 60000 milliseconds while awaiting EndTxn(true)
Transaction rolled back
This is the configuration:
spring:
  data:
    mongodb:
      uri: indb-prop
      auto-index-creation: false
  kafka:
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
      retries: 2
      client-id: ${spring.application.name}-${info.cluster}-${IP_ADDRESS}PP
      transaction-id-prefix: tx-${spring.kafka.producer.client-id}-
      properties:
        enable.idempotence: true
        spring.json.add.type.headers: false
      bootstrap-servers: ${kafka_bootstrap_servers_2}
    # listener:
    #   missing-topics-fatal: true
    #   type: batch
    #   concurrency: 15
    #   ack-mode: manual_immediate
    #   poll-timeout: 1s
    consumer:
      key-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
      group-id: ${spring.application.name}-${info.cluster}-111111111112
      client-id: ${spring.application.name}-${info.cluster}-${IP_ADDRESS}
      # client-id: ${spring.kafka.consumer.group-id}-${IP_ADDRESS}
      auto-offset-reset: earliest
      enable-auto-commit: false
      isolation-level: read_committed
      # max-poll-records: 3
      fetch-max-wait: 5s
      properties:
        max.poll.interval.ms: 20000000
        spring.json.trusted.packages: '*'
        spring.deserializer.key.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
        spring.deserializer.value.delegate.class: org.springframework.kafka.support.serializer.JsonDeserializer
        spring.json.value.default.type: net.abc.abc.EventData
      bootstrap-servers: ${kafka_bootstrap_servers}
And the consumer:
@KafkaListener(
        topics = "SOME_TOPIC",
        autoStartup = "true")
// @Transactional(transactionManager = "mongoTransactionManager", propagation = Propagation.REQUIRED)
@Override
public void onMessage(ConsumerRecord<String, EventData> data) {
    if (data == null || data.value() == null) {
        return;
    }
    logger.debug(m -> m.event(KAFKA)
            .msg("[S] Consuming message")
            .with("record", data));
    try {
        logger.debug(m -> m.event(KAFKA)
                .msg("Processed message"));
    }
    finally {
        logger.debug(m -> m.event(KAFKA)
                .msg("[E] Consuming message"));
    }
}
The Spring Boot version: 2.6.6
How many broker nodes are you using? If 1, have you overridden the broker params KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR and KAFKA_TRANSACTION_STATE_LOG_MIN_ISR to 1?
Similar sounding issue here:
https://github.com/testcontainers/testcontainers-java/issues/1816
Rob.
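For reference, on a single-broker dev setup those overrides would look roughly like this in the broker's docker-compose environment (a sketch assuming the confluentinc/cp-kafka image; the transaction state log defaults to replication factor 3 and min ISR 2, which one broker can never satisfy):

  broker:
    image: confluentinc/cp-kafka:7.2.0
    environment:
      # Make the transaction state log creatable on a single node
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      # The offsets topic has the same problem on a one-node cluster
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1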

Trying to connect a containerized Spring Boot app to a containerized Kafka server

I have a Spring Boot application which worked fine with Kafka in a container, but when I containerize the Spring Boot application as well, it won't work.
This is the docker-compose file with which I created the Kafka and ZooKeeper containers:
version: "3.4"
services:
  zookeeper:
    image: bitnami/zookeeper
    restart: always
    container_name: "zookeeper"
    ports:
      - "2181:2181"
    volumes:
      - "zookeeper_data:/bitnami"
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: bitnami/kafka
    ports:
      - "9092:9092"
    restart: always
    container_name: "kafka"
    volumes:
      - "kafka_data:/bitnami"
    environment:
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_LISTENERS=PLAINTEXT://:9092
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
    depends_on:
      - zookeeper
volumes:
  zookeeper_data:
    driver: local
  kafka_data:
    driver: local
The application.yml of the Spring Boot application:
server:
  port: 5001
spring:
  jpa:
    database-platform: org.hibernate.dialect.MySQL8Dialect
    show-sql: true
    hibernate:
      ddl-auto: update
  datasource:
    url: jdbc:mysql://mysql-container:3306/craproject?autoReconnect=true&useSSL=false&useSSL=false&serverTimezone=UTC&createDatabaseIfNotExist=true
    username: ***
    password: ****
  data:
    mongodb:
      host: mongo-container
      port: 27017
      database: craprojet
  kafka:
    bootstrap-servers:
      - kafka:9092
    consumer:
      group-id: project-group
      enable-auto-commit: false
      auto-offset-reset: latest
      isolation-level: read_committed
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
The dockerfile with which I created the image of the app:
FROM openjdk:11
COPY target/project-service-1.jar project-service-1.jar
EXPOSE 5001
ENTRYPOINT ["java", "-jar" , "project-service-1.jar"]
The Kafka and database containers are running fine.
This is the command I use to run the Spring Boot app container:
docker run --name project-service\
--network techbankNet\
-p 5001:5001\
--link mysql-container:mysql\
--link mongo-container:mongo\
--link adminer:adminer\
--link kafka:kafka project-service
The log:
2022-08-23 19:05:05.170 INFO 1 --- [ main] o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [kafka:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = consumer-project-group-1
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = project-group
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_committed
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 127000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.springframework.kafka.support.serializer.JsonDeserializer
2022-08-23 19:05:15.315 WARN 1 --- [ main] org.apache.kafka.clients.ClientUtils : Couldn't resolve server kafka:9092 from bootstrap.servers as DNS resolution failed for kafka
2022-08-23 19:05:15.316 INFO 1 --- [ main] org.apache.kafka.common.metrics.Metrics : Metrics scheduler closed
2022-08-23 19:05:15.316 INFO 1 --- [ main] org.apache.kafka.common.metrics.Metrics : Closing reporter org.apache.kafka.common.metrics.JmxReporter
2022-08-23 19:05:15.317 INFO 1 --- [ main] org.apache.kafka.common.metrics.Metrics : Metrics reporters closed
2022-08-23 19:05:15.319 INFO 1 --- [ main] o.a.kafka.common.utils.AppInfoParser : App info kafka.consumer for consumer-project-group-1 unregistered
2022-08-23 19:05:15.319 WARN 1 --- [ main] ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.context.ApplicationContextExcepti
on: Failed to start bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry'; nested exception is org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
2022-08-23 19:05:15.340 INFO 1 --- [ main] j.LocalContainerEntityManagerFactoryBean : Closing JPA EntityManagerFactory for persistence unit 'default'
2022-08-23 19:05:15.343 INFO 1 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown initiated...
2022-08-23 19:05:15.365 INFO 1 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown completed.
2022-08-23 19:05:15.368 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Stopping service [Tomcat]
2022-08-23 19:05:15.389 INFO 1 --- [ main] ConditionEvaluationReportLoggingListener :
Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2022-08-23 19:05:15.420 ERROR 1 --- [ main] o.s.boot.SpringApplication : Application run failed
org.springframework.context.ApplicationContextException: Failed to start bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry'; nested exception is org.apache.kafka.common.KafkaException: Failed to construct
kafka consumer
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:181) ~[spring-context-5.3.9.jar!/:5.3.9]
at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:54) ~[spring-context-5.3.9.jar!/:5.3.9]
at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:356) ~[spring-context-5.3.9.jar!/:5.3.9]
at java.base/java.lang.Iterable.forEach(Iterable.java:75) ~[na:na]
at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:155) ~[spring-context-5.3.9.jar!/:5.3.9]
at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:123) ~[spring-context-5.3.9.jar!/:5.3.9]
at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:935) ~[spring-context-5.3.9.jar!/:5.3.9]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:586) ~[spring-context-5.3.9.jar!/:5.3.9]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:145) ~[spring-boot-2.5.3.jar!/:2.5.3]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:754) ~[spring-boot-2.5.3.jar!/:2.5.3]
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:434) ~[spring-boot-2.5.3.jar!/:2.5.3]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:338) ~[spring-boot-2.5.3.jar!/:2.5.3]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1343) ~[spring-boot-2.5.3.jar!/:2.5.3]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1332) ~[spring-boot-2.5.3.jar!/:2.5.3]
at com.project.CQRS.ProjectServiceApplication.main(ProjectServiceApplication.java:16) ~[classes!/:1]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:na]
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:566) ~[na:na]
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49) ~[project-service-1.jar:1]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:108) ~[project-service-1.jar:1]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:58) ~[project-service-1.jar:1]
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:88) ~[project-service-1.jar:1]
Caused by: org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:819) ~[kafka-clients-2.7.1.jar!/:na]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createRawConsumer(DefaultKafkaConsumerFactory.java:366) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createKafkaConsumer(DefaultKafkaConsumerFactory.java:334) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createConsumerWithAdjustedProperties(DefaultKafkaConsumerFactory.java:310) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createKafkaConsumer(DefaultKafkaConsumerFactory.java:277) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.core.DefaultKafkaConsumerFactory.createConsumer(DefaultKafkaConsumerFactory.java:254) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.<init>(KafkaMessageListenerContainer.java:715) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.listener.KafkaMessageListenerContainer.doStart(KafkaMessageListenerContainer.java:320) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.listener.AbstractMessageListenerContainer.start(AbstractMessageListenerContainer.java:397) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.listener.ConcurrentMessageListenerContainer.doStart(ConcurrentMessageListenerContainer.java:205) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.listener.AbstractMessageListenerContainer.start(AbstractMessageListenerContainer.java:397) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.config.KafkaListenerEndpointRegistry.startIfNecessary(KafkaListenerEndpointRegistry.java:327) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.kafka.config.KafkaListenerEndpointRegistry.start(KafkaListenerEndpointRegistry.java:272) ~[spring-kafka-2.7.4.jar!/:2.7.4]
at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:178) ~[spring-context-5.3.9.jar!/:5.3.9]
... 22 common frames omitted
Caused by: org.apache.kafka.common.config.ConfigException: No resolvable bootstrap urls given in bootstrap.servers
at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:89) ~[kafka-clients-2.7.1.jar!/:na]
at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:48) ~[kafka-clients-2.7.1.jar!/:na]
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:728) ~[kafka-clients-2.7.1.jar!/:na]
You should put project-service (and adminer, mongo, and mysql) in the same Docker Compose file as Kafka and not use docker run.
This will create a default bridge network where the containers can talk to each other.
Or you need to attach the techbankNet Docker network to the Kafka service in the compose file.
https://docs.docker.com/compose/networking/
Also see Connect to Kafka running in Docker
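A sketch of the first suggestion, assuming the Dockerfile above sits in the project root and builds the project-service image; added to the same docker-compose.yml as kafka, the app can reach the broker by its service name:

  project-service:
    build: .
    container_name: project-service
    ports:
      - "5001:5001"
    depends_on:
      - kafka
    environment:
      # Relaxed binding overrides spring.kafka.bootstrap-servers; "kafka" resolves on the compose network
      - SPRING_KAFKA_BOOTSTRAP_SERVERS=kafka:9092

Alternatively, keep docker run but attach the container to the network Compose created for Kafka (docker network ls shows its name) instead of techbankNet.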

Exception opening socket - MongoDB, Docker

I have a Maven multi-module application (Spring Boot + MySQL + MongoDB) using Docker images, but I can't get a connection to MongoDB.
The thing is, when the local Mongo instance "MongoDB Server on Windows Services" is turned on and I use spring.data.mongodb.host=localhost, everything works fine.
But when I turn off the Mongo instance and try to go with
spring.data.mongodb.host=$(MONGO_HOST) to use it with Docker, I get the error "Exception opening socket".
I start my application by executing
mvn install, and I also tried docker-compose up --build.
Console output:
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.5.1)
2021-09-25 12:19:22.635 INFO 8588 --- [ main] c.o.o.n.controller.NoteControllerTest : Starting NoteControllerTest using Java 16.0.1 on AntonioPC with PID 8588 (started by ajago in D:\Programming\Eclipse-workspace\OC_Project10\Notes)
2021-09-25 12:19:22.642 INFO 8588 --- [ main] c.o.o.n.controller.NoteControllerTest : No active profile set, falling back to default profiles: default
2021-09-25 12:19:23.776 INFO 8588 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Multiple Spring Data modules found, entering strict repository configuration mode!
2021-09-25 12:19:23.777 INFO 8588 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data JPA repositories in DEFAULT mode.
2021-09-25 12:19:23.824 INFO 8588 --- [ main] .RepositoryConfigurationExtensionSupport : Spring Data JPA - Could not safely identify store assignment for repository candidate interface com.openclassrooms.ocproject10.notes.repository.NoteRepository. If you want this repository to be a JPA repository, consider annotating your entities with one of these annotations: javax.persistence.Entity, javax.persistence.MappedSuperclass (preferred), or consider extending one of the following types with your repository: org.springframework.data.jpa.repository.JpaRepository.
2021-09-25 12:19:23.826 INFO 8588 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 36 ms. Found 0 JPA repository interfaces.
2021-09-25 12:19:24.093 INFO 8588 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Multiple Spring Data modules found, entering strict repository configuration mode!
2021-09-25 12:19:24.094 INFO 8588 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data MongoDB repositories in DEFAULT mode.
2021-09-25 12:19:24.126 INFO 8588 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 31 ms. Found 1 MongoDB repository interfaces.
2021-09-25 12:19:24.876 INFO 8588 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
2021-09-25 12:19:25.458 INFO 8588 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Start completed.
2021-09-25 12:19:25.596 INFO 8588 --- [ main] o.hibernate.jpa.internal.util.LogHelper : HHH000204: Processing PersistenceUnitInfo [name: default]
2021-09-25 12:19:25.685 INFO 8588 --- [ main] org.hibernate.Version : HHH000412: Hibernate ORM core version 5.4.32.Final
2021-09-25 12:19:25.882 INFO 8588 --- [ main] o.hibernate.annotations.common.Version : HCANN000001: Hibernate Commons Annotations {5.1.2.Final}
2021-09-25 12:19:26.112 INFO 8588 --- [ main] org.hibernate.dialect.Dialect : HHH000400: Using dialect: org.hibernate.dialect.H2Dialect
2021-09-25 12:19:27.051 INFO 8588 --- [ main] o.h.e.t.j.p.i.JtaPlatformInitiator : HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform]
2021-09-25 12:19:27.065 INFO 8588 --- [ main] j.LocalContainerEntityManagerFactoryBean : Initialized JPA EntityManagerFactory for persistence unit 'default'
2021-09-25 12:19:27.486 INFO 8588 --- [ main] org.mongodb.driver.cluster : Cluster created with settings {hosts=[localhost:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms'}
2021-09-25 12:19:27.586 INFO 8588 --- [localhost:27017] org.mongodb.driver.cluster : Exception in monitor thread while connecting to server localhost:27017
com.mongodb.MongoSocketOpenException: Exception opening socket
at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:70) ~[mongodb-driver-core-4.2.3.jar:na]
at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:143) ~[mongodb-driver-core-4.2.3.jar:na]
at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.lookupServerDescription(DefaultServerMonitor.java:188) ~[mongodb-driver-core-4.2.3.jar:na]
at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:144) ~[mongodb-driver-core-4.2.3.jar:na]
at java.base/java.lang.Thread.run(Thread.java:831) ~[na:na]
Caused by: java.net.ConnectException: Connection refused: no further information
at java.base/sun.nio.ch.Net.pollConnect(Native Method) ~[na:na]
at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:669) ~[na:na]
at java.base/sun.nio.ch.NioSocketImpl.timedFinishConnect(NioSocketImpl.java:542) ~[na:na]
at java.base/sun.nio.ch.NioSocketImpl.connect(NioSocketImpl.java:597) ~[na:na]
at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:333) ~[na:na]
at java.base/java.net.Socket.connect(Socket.java:645) ~[na:na]
at com.mongodb.internal.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:107) ~[mongodb-driver-core-4.2.3.jar:na]
at com.mongodb.internal.connection.SocketStream.initializeSocket(SocketStream.java:79) ~[mongodb-driver-core-4.2.3.jar:na]
at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:65) ~[mongodb-driver-core-4.2.3.jar:na]
... 4 common frames omitted
2021-09-25 12:19:28.302 WARN 8588 --- [ main] JpaBaseConfiguration$JpaWebConfiguration : spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
2021-09-25 12:19:28.988 INFO 8588 --- [ main] o.s.b.t.m.w.SpringBootMockServletContext : Initializing Spring TestDispatcherServlet ''
2021-09-25 12:19:28.988 INFO 8588 --- [ main] o.s.t.web.servlet.TestDispatcherServlet : Initializing Servlet ''
2021-09-25 12:19:28.990 INFO 8588 --- [ main] o.s.t.web.servlet.TestDispatcherServlet : Completed initialization in 2 ms
2021-09-25 12:19:29.018 INFO 8588 --- [ main] c.o.o.n.controller.NoteControllerTest : Started NoteControllerTest in 7.33 seconds (JVM running for 9.77)
2021-09-25 12:19:29.380 INFO 8588 --- [ main] org.mongodb.driver.cluster : Cluster description not yet available. Waiting for 30000 ms before timing out
docker-compose.yml:
version: '2'
services:
  reports:
    image: 'reports:latest'
    build:
      context: ./reports
    container_name: reports
    ports:
      - 8080:8080
    environment:
      - PATIENTS_URL=http://gps:8081
      - NOTES_URL=http://rewards:8082
  patients:
    image: 'patients:latest'
    build:
      context: ./patients
    container_name: patients
    ports:
      - 8081:8081
    environment:
      - MYSQL_HOST=db-mysql
    depends_on:
      - db-mysql
  notes:
    image: 'notes:latest'
    build:
      context: ./notes
    container_name: notes
    ports:
      - 8082:8082
    environment:
      - MONGO_HOST=mongodb
    depends_on:
      - mongodb
  db-mysql:
    image: mysql:latest
    container_name: mysql
    ports:
      - 3306:3306
    command: --innodb --init-file /data/application/init.sql
    volumes:
      - ./init.sql:/data/application/init.sql
    environment:
      MYSQL_DATABASE: project_10
      MYSQL_ROOT_USER: root
      MYSQL_ROOT_PASSWORD: Musapa1990..
  mongodb:
    image: mongo:latest
    container_name: mongodb
    network_mode: host
    restart: always
    ports:
      - 27017:27017
    environment:
      MONGO_INITDB_DATABASE: project_10
application.properties:
# App config
logging.level.org.springframework=INFO
spring.application.name=P10_notes
server.port=8082
################### MongoDB Configuration ##########################
spring.data.mongodb.database=project_10
spring.data.mongodb.port=27017
#spring.data.mongodb.host=localhost
spring.data.mongodb.host=$(MONGO_HOST)
spring.data.mongodb.authentication-database=admin
# thymeleaf configurations for Auto reload Thymeleaf templates without restart
spring.thymeleaf.cache=false
spring.thymeleaf.prefix=file:src/main/resources/templates/
I read similar articles here and tried different things, but nothing helped me.
In the end, I had messed something up when installing MongoDB; when I reinstalled Mongo it worked. I did not make any crucial changes in the code.
The only thing I added was the following (note the ${MONGO_HOST} placeholder syntax rather than $(MONGO_HOST)):
################### MongoDB Configuration ##########################
spring.data.mongodb.database=project_10
spring.data.mongodb.port=27017
#spring.data.mongodb.host=localhost
spring.data.mongodb.host=${MONGO_HOST}
spring.data.mongodb.authentication-database=admin
I added that to application.properties in all three modules, because each module uses MongoDB.
If that does not work, you can try changing localhost to 127.0.0.1; it worked for me.
Like this:
#spring.data.mongodb.host=localhost
spring.data.mongodb.host=127.0.0.1

Docker-compose, spring app + mongoDB on non-default port

I have a problem connecting to MongoDB from my app.
Here is the docker-compose file:
version: "3"
services:
  olx-crawler:
    container_name: olx-crawler
    image: myimage:v1
    ports:
      - "8099:8099"
    depends_on:
      - olx-mongo
    environment:
      SPRING_DATA_MONGODB_HOST: olx-mongo
  olx-mongo:
    container_name: olx-mongo
    image: mongo
    ports:
      - "27777:27017"
    environment:
      MONGO_INITDB_ROOT_USERNAME: biafra
      MONGO_INITDB_ROOT_PASSWORD: password
And here is my application.yaml:
spring:
  data:
    mongodb:
      host: localhost
      port: 27777
      username: biafra
      password: password
      authentication-database: admin
logging:
  level:
    org.springframework.data.mongodb.core.MongoTemplate: DEBUG
server:
  port: 8099
Now, I have done a similar project to this (docker-compose -> Spring app + MongoDB) and it worked correctly, but that was with the default Mongo port 27017.
And I know you have to use the Mongo container name instead of localhost; that is what
SPRING_DATA_MONGODB_HOST: olx-mongo
is for: it replaces "localhost" in application.yml with olx-mongo, as you can see in the app logs:
Exception in monitor thread while connecting to server olx-mongo:27777
Here are some logs:
olx-mongo | 2020-04-15T18:00:15.170+0000 I SHARDING [LogicalSessionCacheRefresh] Marking collection config.system.sessions as collection version: <unsharded>
olx-mongo | 2020-04-15T18:00:15.174+0000 I SHARDING [LogicalSessionCacheReap] Marking collection config.transactions as collection version: <unsharded>
olx-mongo | 2020-04-15T18:00:15.175+0000 I NETWORK [listener] Listening on /tmp/mongodb-27017.sock
olx-mongo | 2020-04-15T18:00:15.175+0000 I NETWORK [listener] Listening on 0.0.0.0
olx-mongo | 2020-04-15T18:00:15.175+0000 I NETWORK [listener] waiting for connections on port 27017
olx-crawler | 2020-04-15 18:00:15.436 INFO 1 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data MongoDB repositories in DEFAULT mode.
olx-crawler | 2020-04-15 18:00:15.486 INFO 1 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 45ms. Found 1 MongoDB repository interfaces.
olx-mongo | 2020-04-15T18:00:16.000+0000 I SHARDING [ftdc] Marking collection local.oplog.rs as collection version: <unsharded>
olx-crawler | 2020-04-15 18:00:16.037 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8099 (http)
olx-crawler | 2020-04-15 18:00:16.050 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
olx-crawler | 2020-04-15 18:00:16.052 INFO 1 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.33]
olx-crawler | 2020-04-15 18:00:16.116 INFO 1 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
olx-crawler | 2020-04-15 18:00:16.117 INFO 1 --- [ main] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 1487 ms
olx-crawler | 2020-04-15 18:00:16.468 INFO 1 --- [ main] org.mongodb.driver.cluster : Cluster created with settings {hosts=[olx-mongo:27777], mode=MULTIPLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize
=500}
olx-crawler | 2020-04-15 18:00:16.469 INFO 1 --- [ main] org.mongodb.driver.cluster : Adding discovered server olx-mongo:27777 to client view of cluster
olx-crawler | 2020-04-15 18:00:16.550 INFO 1 --- [olx-mongo:27777] org.mongodb.driver.cluster : Exception in monitor thread while connecting to server olx-mongo:27777
olx-crawler |
olx-crawler | com.mongodb.MongoSocketOpenException: Exception opening socket
olx-crawler | at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:70) ~[mongodb-driver-core-3.11.2.jar!/:na]
olx-crawler | at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:128) ~[mongodb-driver-core-3.11.2.jar!/:na]
olx-crawler | at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117) ~[mongodb-driver-core-3.11.2.jar!/:na]
olx-crawler | at java.base/java.lang.Thread.run(Thread.java:844) [na:na]
olx-crawler | Caused by: java.net.ConnectException: Connection refused (Connection refused)
olx-crawler | at java.base/java.net.PlainSocketImpl.socketConnect(Native Method) ~[na:na]
olx-crawler | at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:400) ~[na:na]
olx-crawler | at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:243) ~[na:na]
olx-crawler | at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:225) ~[na:na]
olx-crawler | at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:402) ~[na:na]
olx-crawler | at java.base/java.net.Socket.connect(Socket.java:591) ~[na:na]
olx-crawler | at com.mongodb.internal.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:64) ~[mongodb-driver-core-3.11.2.jar!/:na]
olx-crawler | at com.mongodb.internal.connection.SocketStream.initializeSocket(SocketStream.java:79) ~[mongodb-driver-core-3.11.2.jar!/:na]
olx-crawler | at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:65) ~[mongodb-driver-core-3.11.2.jar!/:na]
olx-crawler | ... 3 common frames omitted
olx-crawler |
olx-crawler | 2020-04-15 18:00:17.096 INFO 1 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
olx-crawler | 2020-04-15 18:00:17.229 INFO 1 --- [ main] o.s.s.c.ThreadPoolTaskScheduler : Initializing ExecutorService 'taskScheduler'
olx-crawler | 2020-04-15 18:00:17.306 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8099 (http) with context path ''
olx-crawler | 2020-04-15 18:00:17.315 INFO 1 --- [ main] c.e.olxcrawler.OlxCrawlerApplication : Started OlxCrawlerApplication in 3.944 seconds (JVM running for 4.977)
Any help?
Well, you wrote:
And I know you have to use the Mongo container name instead of localhost, but it still does not work.
but you have:
spring:
  data:
    mongodb:
      host: localhost
      port: 27777
The problem is that with this config you are not able to connect to Mongo from within the Spring Boot container. It is just configuration for the "outside world" of the container; for example, you can connect to it from a locally running Spring Boot application that doesn't run inside Docker.
To connect to Mongo from within the dockerized Spring Boot app, change the host to olx-mongo and the port to 27017.
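One way to keep the same application.yaml working both locally and in Docker is to leave it pointing at localhost:27777 and override both values from the compose file, in the same style as the existing SPRING_DATA_MONGODB_HOST variable. A sketch, relying on Spring Boot's relaxed binding of environment variables to spring.data.mongodb.host/port:

  olx-crawler:
    environment:
      SPRING_DATA_MONGODB_HOST: olx-mongo
      # Inside the compose network the container port 27017 applies, not the published 27777
      SPRING_DATA_MONGODB_PORT: 27017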
