Testcontainers SchemaRegistry can't connect to Kafka container - spring-boot

I want to run integration tests that exercise my Kafka listener and Avro serialization. This requires a Kafka broker and a Schema Registry (and, transitively, a Zookeeper).
When testing I currently have to use a docker-compose.yml, but I want to reduce user error by starting the required containers via Testcontainers. The Kafka and Zookeeper instances start neatly and seem to work just fine: my application can create the required topics, the listener is subscribed, and I can even send messages via the Kafka console producer.
What does not work is the Schema Registry. The container starts and apparently connects to Zookeeper, but it cannot establish a connection to the broker. It keeps retrying until it times out, and the container is then stopped. As a result I cannot register or read my Avro schemas for (de)serialization, so my tests fail.
I can't figure out why the Schema Registry can apparently connect to Zookeeper but can't find my broker.
Has anyone run into this problem as well? Did you manage to get it running, and if so, how?
I need the Kafka and Schema Registry Testcontainers to be fully available for my tests, so omitting any of them is not an option.
I could also keep using the docker-compose.yml, but I would really like to set up my test environment fully programmatically.
The schema registry container logs the following:
2023-02-08 16:56:09 [2023-02-08 15:56:09,556] INFO Session establishment complete on server zookeeper/192.168.144.2:2181, session id = 0x1000085b81e0003, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
2023-02-08 16:56:09 [2023-02-08 15:56:09,696] INFO Session: 0x1000085b81e0003 closed (org.apache.zookeeper.ZooKeeper)
2023-02-08 16:56:09 [2023-02-08 15:56:09,696] INFO EventThread shut down for session: 0x1000085b81e0003 (org.apache.zookeeper.ClientCnxn)
2023-02-08 16:56:09 [2023-02-08 15:56:09,787] INFO AdminClientConfig values:
/* Omitted for brevity */
(org.apache.kafka.clients.admin.AdminClientConfig)
2023-02-08 16:56:10 [2023-02-08 15:56:10,284] INFO Kafka version: 7.3.1-ccs (org.apache.kafka.common.utils.AppInfoParser)
2023-02-08 16:56:10 [2023-02-08 15:56:10,284] INFO Kafka commitId: 8628b0341c3c4676 (org.apache.kafka.common.utils.AppInfoParser)
2023-02-08 16:56:10 [2023-02-08 15:56:10,284] INFO Kafka startTimeMs: 1675871770281 (org.apache.kafka.common.utils.AppInfoParser)
2023-02-08 16:56:10 [2023-02-08 15:56:10,308] INFO [AdminClient clientId=adminclient-1] Node -1 disconnected. (org.apache.kafka.clients.NetworkClient)
2023-02-08 16:56:10 [2023-02-08 15:56:10,313] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:54776) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
/* These lines repeat a few times until the container times out and exits. */
2023-02-08 16:56:50 [2023-02-08 15:56:50,144] INFO [AdminClient clientId=adminclient-1] Node -1 disconnected. (org.apache.kafka.clients.NetworkClient)
2023-02-08 16:56:50 [2023-02-08 15:56:50,144] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:54776) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
2023-02-08 16:56:50 [2023-02-08 15:56:50,298] ERROR Error while getting broker list. (io.confluent.admin.utils.ClusterStatus)
2023-02-08 16:56:50 java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment. Call: listNodes
2023-02-08 16:56:50 at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
2023-02-08 16:56:50 at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
2023-02-08 16:56:50 at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165)
2023-02-08 16:56:50 at io.confluent.admin.utils.ClusterStatus.isKafkaReady(ClusterStatus.java:147)
2023-02-08 16:56:50 at io.confluent.admin.utils.cli.KafkaReadyCommand.main(KafkaReadyCommand.java:149)
2023-02-08 16:56:50 Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment. Call: listNodes
2023-02-08 16:56:51 [2023-02-08 15:56:51,103] INFO [AdminClient clientId=adminclient-1] Node -1 disconnected. (org.apache.kafka.clients.NetworkClient)
2023-02-08 16:56:51 [2023-02-08 15:56:51,103] WARN [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:54776) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
2023-02-08 16:56:51 [2023-02-08 15:56:51,300] INFO Expected 1 brokers but found only 0. Trying to query Kafka for metadata again ... (io.confluent.admin.utils.ClusterStatus)
2023-02-08 16:56:51 [2023-02-08 15:56:51,300] ERROR Expected 1 brokers but found only 0. Brokers found []. (io.confluent.admin.utils.ClusterStatus)
2023-02-08 16:56:51 Using log4j config /etc/schema-registry/log4j.properties
My base test class; ITs that need Kafka extend it:
@Testcontainers
@SpringBootTest
@Slf4j
public class AbstractIT {

    private static final Network network = Network.newNetwork();

    protected static GenericContainer<?> ZOOKEEPER = new GenericContainer<>(
            DockerImageName.parse("confluentinc/cp-zookeeper:7.2.0"))
            .withNetwork(network)
            .withNetworkAliases("zookeeper")
            .withEnv(Map.of(
                    "ZOOKEEPER_CLIENT_PORT", "2181",
                    "ZOOKEEPER_TICK_TIME", "2000"));

    protected static final KafkaContainer KAFKA = new KafkaContainer(
            DockerImageName.parse("confluentinc/cp-kafka"))
            .withExternalZookeeper("zookeeper:2181")
            .dependsOn(ZOOKEEPER)
            .withNetwork(network)
            .withNetworkAliases("broker");

    protected static final GenericContainer<?> SCHEMAREGISTRY = new GenericContainer<>(
            DockerImageName.parse("confluentinc/cp-schema-registry"))
            .dependsOn(ZOOKEEPER, KAFKA)
            .withEnv(Map.of(
                    "SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL", "zookeeper:2181",
                    "SCHEMA_REGISTRY_HOST_NAME", "schemaregistry",
                    "SCHEMA_REGISTRY_LISTENERS", "http://0.0.0.0:8085",
                    "SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS", "broker:9092"))
            .withNetwork(network)
            .withNetworkAliases("schemaregistry");

    @DynamicPropertySource
    static void registerPgProperties(DynamicPropertyRegistry registry) {
        registry.add("bootstrap.servers", KAFKA::getBootstrapServers);
        registry.add("spring.kafka.bootstrap-servers", KAFKA::getBootstrapServers);
        registry.add("spring.kafka.consumer.auto-offset-reset", () -> "earliest");
        registry.add("spring.data.mongodb.uri", MONGODB::getConnectionString);
        registry.add("spring.data.mongodb.database", () -> "test");
    }

    // container startup, shutdown as well as topic creation omitted for brevity
}
My docker-compose.yml that I want to replicate with Testcontainers:
version: "3.5"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.2.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  broker:
    image: confluentinc/cp-kafka:7.2.0
    hostname: broker
    container_name: broker
    restart: always
    depends_on:
      - zookeeper
    ports:
      - "29092:29092"
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_SCHEMA_REGISTRY_URL: "schemaregistry:8085"
  schemaregistry:
    container_name: schemaregistry
    hostname: schemaregistry
    image: confluentinc/cp-schema-registry:5.1.2
    restart: always
    depends_on:
      - zookeeper
    environment:
      SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: "zookeeper:2181"
      SCHEMA_REGISTRY_HOST_NAME: schemaregistry
      SCHEMA_REGISTRY_LISTENERS: "http://0.0.0.0:8085"
    ports:
      - "8085:8085"
    volumes:
      - "./src/main/avro/:/etc/schema"

Here is a working setup of Kafka & Schema Registry with Testcontainers. It may help you find the issue in your setup.
Schema Registry
public class SchemaRegistryContainer extends GenericContainer<SchemaRegistryContainer> {
    public static final String SCHEMA_REGISTRY_IMAGE = "confluentinc/cp-schema-registry";
    public static final int SCHEMA_REGISTRY_PORT = 8081;

    public SchemaRegistryContainer() {
        this(CONFLUENT_PLATFORM_VERSION);
    }

    public SchemaRegistryContainer(String version) {
        super(SCHEMA_REGISTRY_IMAGE + ":" + version);
        waitingFor(Wait.forHttp("/subjects").forStatusCode(200));
        withExposedPorts(SCHEMA_REGISTRY_PORT);
    }

    public SchemaRegistryContainer withKafka(KafkaContainer kafka) {
        return withKafka(kafka.getNetwork(), kafka.getNetworkAliases().get(0) + ":9092");
    }

    public SchemaRegistryContainer withKafka(Network network, String bootstrapServers) {
        withNetwork(network);
        withEnv("SCHEMA_REGISTRY_HOST_NAME", "schema-registry");
        withEnv("SCHEMA_REGISTRY_LISTENERS", "http://0.0.0.0:8081");
        withEnv("SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS", "PLAINTEXT://" + bootstrapServers);
        return self();
    }
}
Kafka + Schema Registry
public static final String CONFLUENT_PLATFORM_VERSION = "5.5.1";

private static final Network KAFKA_NETWORK = Network.newNetwork();
private static final DockerImageName KAFKA_IMAGE = DockerImageName.parse("confluentinc/cp-kafka")
        .withTag(CONFLUENT_PLATFORM_VERSION);
private static final KafkaContainer KAFKA = new KafkaContainer(KAFKA_IMAGE)
        .withNetwork(KAFKA_NETWORK)
        .withEnv("KAFKA_TRANSACTION_STATE_LOG_MIN_ISR", "1")
        .withEnv("KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR", "1");
private static final SchemaRegistryContainer SCHEMA_REGISTRY =
        new SchemaRegistryContainer(CONFLUENT_PLATFORM_VERSION);

@BeforeAll
static void startKafkaContainer() {
    KAFKA.start();
    SCHEMA_REGISTRY.withKafka(KAFKA).start();
    // init kafka properties for consumer or producer
    // ...
    kafkaProperties.setBootstrapServers(KAFKA.getBootstrapServers());
    kafkaProperties.setSchemaRegistryUrl("http://" + SCHEMA_REGISTRY.getHost() + ":" + SCHEMA_REGISTRY.getFirstMappedPort());
}
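If you wire these values into Spring Boot with @DynamicPropertySource, as in the question, the registration might look like the sketch below. The spring.kafka.properties.schema.registry.url key is an assumption; use whatever property your (de)serializer configuration actually reads.
@DynamicPropertySource
static void registerKafkaProperties(DynamicPropertyRegistry registry) {
    // broker address as seen from the host (mapped port), provided by KafkaContainer
    registry.add("spring.kafka.bootstrap-servers", KAFKA::getBootstrapServers);
    // Schema Registry URL as seen from the host; property key is an assumption
    registry.add("spring.kafka.properties.schema.registry.url",
            () -> "http://" + SCHEMA_REGISTRY.getHost() + ":" + SCHEMA_REGISTRY.getFirstMappedPort());
}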

Related

spring boot kafka failed with "Broker may not be available" while using testcontainers with kafka, zookeeper, schema registry

I've run testcontainers like below in the test code.
@Testcontainers
public class TestEnvironmentSupport {

    static String version = "5.4.0";
    static DockerImageName kafkaImage = DockerImageName.parse("confluentinc/cp-server").withTag(version);
    static DockerImageName zookeeperImage = DockerImageName.parse("confluentinc/cp-zookeeper").withTag(version);
    static DockerImageName schemaRegistryImage = DockerImageName.parse("confluentinc/cp-schema-registry").withTag(version);
    static Network network = Network.newNetwork();

    @Container
    static GenericContainer zookeeper = new GenericContainer<>(zookeeperImage)
            .withNetwork(network)
            .withCreateContainerCmdModifier(cmd -> cmd.withHostName("zookeeper"))
            .withExposedPorts(2181)
            .withEnv("ZOOKEEPER_CLIENT_PORT", "2181")
            .withEnv("ZOOKEEPER_TICK_TIME", "2000");

    @Container
    static GenericContainer kafka = new GenericContainer<>(kafkaImage)
            .withNetwork(network)
            .withCreateContainerCmdModifier(cmd -> cmd.withHostName("kafka"))
            .withExposedPorts(9092)
            .dependsOn(zookeeper)
            .withEnv("KAFKA_BROKER_ID", "1")
            .withEnv("KAFKA_ZOOKEEPER_CONNECT", "zookeeper:2181")
            .withEnv("KAFKA_LISTENER_SECURITY_PROTOCOL_MAP", "PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT")
            .withEnv("KAFKA_ADVERTISED_LISTENERS", "PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092")
            .withEnv("KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL", "schema-registry:8081");

    @Container
    static GenericContainer schemaRegistry = new GenericContainer<>(schemaRegistryImage)
            .withNetwork(network)
            .withCreateContainerCmdModifier(cmd -> cmd.withHostName("schema-registry"))
            .withExposedPorts(8081)
            .dependsOn(zookeeper, kafka)
            .withEnv("SCHEMA_REGISTRY_HOST_NAME", "schema-registry")
            .withEnv("SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL", "zookeeper:2181");

    @Test
    void test() {
        assertTrue(zookeeper.isRunning());
        assertTrue(kafka.isRunning());
        assertTrue(schemaRegistry.isRunning());
    }
}
and it works totally fine.
But the problem occurred when I tried to run a Spring Boot test with the above Testcontainers configuration: Testcontainers maps the broker port dynamically, but the NetworkClient keeps accessing the broker at localhost:9092, even though I dynamically override the properties in my @SpringBootTest code like below.
@DynamicPropertySource
static void testcontainerProperties(final DynamicPropertyRegistry registry) {
    var bootstrapServers = kafka.getHost() + ":" + kafka.getMappedPort(9092);
    var schemaRegistryUrl = "http://" + schemaRegistry.getHost() + ":" + schemaRegistry.getMappedPort(8081);
    registry.add("spring.cloud.stream.kafka.binder.brokers", () -> bootstrapServers);
    registry.add("bootstrap.servers", () -> bootstrapServers);
    registry.add("schema.registry.url", () -> schemaRegistryUrl);
    registry.add("spring.cloud.stream.kafka.default.consumer.configuration.schema.registry.url", () -> schemaRegistryUrl);
}
Below is the AdminClientConfig log at startup time; it shows bootstrap.servers = [localhost:56013], where the port was dynamically bound by Testcontainers.
2021-02-21 20:28:52.291 INFO 78241 --- [ Test worker] o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
bootstrap.servers = [localhost:56013]
client.dns.lookup = use_all_dns_ips
Even though I set these properties, it keeps trying to connect to localhost:9092, as shown below.
2021-02-21 20:28:52.457 INFO 78241 --- [ Test worker] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1613906932454
2021-02-21 20:28:53.095 WARN 78241 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Connection to node 1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
2021-02-21 20:28:53.202 WARN 78241 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Connection to node 1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
2021-02-21 20:28:53.407 WARN 78241 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Connection to node 1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
And below is the result of docker ps while running spring boot test.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e340d9e15fe4 confluentinc/cp-schema-registry:5.4.0 "/etc/confluent/dock…" 46 seconds ago Up 46 seconds 0.0.0.0:56014->8081/tcp optimistic_joliot
ad3bf06df4b3 confluentinc/cp-server:5.4.0 "/etc/confluent/dock…" 55 seconds ago Up 54 seconds 0.0.0.0:56013->9092/tcp infallible_brown
f7fa5f4ae23c confluentinc/cp-zookeeper:5.4.0 "/etc/confluent/dock…" About a minute ago Up 59 seconds 0.0.0.0:56012->2181/tcp, 0.0.0.0:56011->2888/tcp, 0.0.0.0:56010->3888/tcp agitated_leavitt
b1c036cdf00b testcontainers/ryuk:0.3.0 "/app" About a minute ago Up About a minute 0.0.0.0:56009->8080/tcp testcontainers-ryuk-68190eaa-8513-4dd8-ab67-175275f15a82
I tried running Testcontainers with the Docker Compose module as well, but it has the same issue.
What am I doing wrong?
Please help.
It's trying to connect to localhost:9092 because you've pointed the client at the advertised PLAINTEXT_HOST listener, and that's the address it will return. You shouldn't need to advertise two listeners for tests, so try using kafka:29092 directly instead of calling the mapped-port method. Also, unless you have a specific need for server-side schema validation, you only need the confluentinc/cp-kafka image.
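If you do want to keep Testcontainers, one way to avoid managing the listeners by hand is the dedicated KafkaContainer class, which advertises the dynamically mapped port itself. A minimal sketch to drop into the same @Testcontainers test class (the image tag is an assumption; the binder property comes from your own configuration above):
// KafkaContainer manages the listener/advertised-listener pair itself and exposes
// the dynamically mapped port through getBootstrapServers(), so nothing is hard-coded
@Container
static KafkaContainer kafka =
        new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka").withTag("5.4.0"));

@DynamicPropertySource
static void kafkaProperties(DynamicPropertyRegistry registry) {
    registry.add("spring.cloud.stream.kafka.binder.brokers", kafka::getBootstrapServers);
}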
The spring-kafka embedded broker should work for your tests too, so you wouldn't need Testcontainers at all; see the sketch below.
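For completeness, a minimal sketch of the embedded-broker route with spring-kafka-test. The topic name and the spring.kafka.bootstrap-servers key are assumptions; with Spring Cloud Stream you would map spring.cloud.stream.kafka.binder.brokers instead.
import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.kafka.test.context.EmbeddedKafka;
import org.springframework.test.context.TestPropertySource;

// spring-kafka-test starts an in-process broker and publishes its address
// under the spring.embedded.kafka.brokers property
@SpringBootTest
@EmbeddedKafka(partitions = 1, topics = "test-topic")
@TestPropertySource(properties = "spring.kafka.bootstrap-servers=${spring.embedded.kafka.brokers}")
class EmbeddedKafkaIT {

    @Test
    void contextLoads() {
        // listeners and producers in the application context now talk to the embedded broker
    }
}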

kafka SASL_PLAIN SCRAM is fail in spring boot consumer

I'm trying Kafka authentication with SASL_PLAINTEXT / SCRAM, but authentication fails in Spring Boot.
When I switch to SASL_PLAINTEXT / PLAIN it works, but with SCRAM authentication fails for both SHA-512 and SHA-256.
I've tried many different things, but it's not working.
How can I fix it?
Broker log:
broker1 | [2020-12-31 02:57:37,831] INFO [SocketServer brokerId=1] Failed authentication with /172.29.0.1 (Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512) (org.apache.kafka.common.network.Selector)
broker2 | [2020-12-31 02:57:37,891] INFO [SocketServer brokerId=2] Failed authentication with /172.29.0.1 (Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512) (org.apache.kafka.common.network.Selector)
Spring Boot log:
2020-12-31 11:57:37.438 INFO 82416 --- [ restartedMain] o.a.k.c.s.authenticator.AbstractLogin : Successfully logged in.
2020-12-31 11:57:37.497 INFO 82416 --- [ restartedMain] o.a.kafka.common.utils.AppInfoParser : Kafka version: 2.6.0
2020-12-31 11:57:37.499 INFO 82416 --- [ restartedMain] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: 62abe01bee039651
2020-12-31 11:57:37.499 INFO 82416 --- [ restartedMain] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1609383457495
2020-12-31 11:57:37.502 INFO 82416 --- [ restartedMain] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-Test-Consumer-1, groupId=Test-Consumer] Subscribed to topic(s): test
2020-12-31 11:57:37.508 INFO 82416 --- [ restartedMain] o.s.s.c.ThreadPoolTaskScheduler : Initializing ExecutorService
2020-12-31 11:57:37.528 INFO 82416 --- [ restartedMain] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2020-12-31 11:57:37.546 INFO 82416 --- [ restartedMain] i.m.k.p.KafkaProducerScramApplication : Started KafkaProducerScramApplication in 2.325 seconds (JVM running for 3.263)
2020-12-31 11:57:37.833 INFO 82416 --- [ntainer#0-0-C-1] o.apache.kafka.common.network.Selector : [Consumer clientId=consumer-Test-Consumer-1, groupId=Test-Consumer] Failed authentication with localhost/127.0.0.1 (Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512)
2020-12-31 11:57:37.836 ERROR 82416 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-Test-Consumer-1, groupId=Test-Consumer] Connection to node -1 (localhost/127.0.0.1:9091) failed authentication due to: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512
2020-12-31 11:57:37.837 WARN 82416 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-Test-Consumer-1, groupId=Test-Consumer] Bootstrap broker localhost:9091 (id: -1 rack: null) disconnected
2020-12-31 11:57:37.842 ERROR 82416 --- [ntainer#0-0-C-1] essageListenerContainer$ListenerConsumer : Consumer exception
java.lang.IllegalStateException: This error handler cannot process 'org.apache.kafka.common.errors.SaslAuthenticationException's; no record information is available
at org.springframework.kafka.listener.SeekUtils.seekOrRecover(SeekUtils.java:151) ~[spring-kafka-2.6.4.jar:2.6.4]
at org.springframework.kafka.listener.SeekToCurrentErrorHandler.handle(SeekToCurrentErrorHandler.java:113) ~[spring-kafka-2.6.4.jar:2.6.4]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.handleConsumerException(KafkaMessageListenerContainer.java:1425) ~[spring-kafka-2.6.4.jar:2.6.4]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:1122) ~[spring-kafka-2.6.4.jar:2.6.4]
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[na:na]
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[na:na]
at java.base/java.lang.Thread.run(Thread.java:832) ~[na:na]
Caused by: org.apache.kafka.common.errors.SaslAuthenticationException: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512
My docker-compose.yml:
...
...
  zookeeper3:
    image: confluentinc/cp-zookeeper:6.0.1
    hostname: zookeeper3
    container_name: zookeeper3
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper3:2183
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://zookeeper:2183
      ZOOKEEPER_CLIENT_PORT: 2183
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_SERVER_ID: 3
      KAFKA_OPTS: "-Djava.security.auth.login.config=/etc/kafka/secrets/sasl/zookeeper_jaas.conf \
        -Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider \
        -Dzookeeper.authProvider.2=org.apache.zookeeper.server.auth.DigestAuthenticationProvider \
        -Dquorum.auth.enableSasl=true \
        -Dquorum.auth.learnerRequireSasl=true \
        -Dquorum.auth.serverRequireSasl=true \
        -Dquorum.auth.learner.saslLoginContext=QuorumLearner \
        -Dquorum.auth.server.saslLoginContext=QuorumServer \
        -Dquorum.cnxn.threads.size=20 \
        -DrequireClientAuthScheme=sasl"
    volumes:
      - /etc/kafka/secrets/sasl:/etc/kafka/secrets/sasl
  broker1:
    image: confluentinc/cp-kafka:6.0.1
    hostname: broker1
    container_name: broker1
    depends_on:
      - zookeeper1
      - zookeeper2
      - zookeeper3
    ports:
      - "9091:9091"
      - "9101:9101"
      - "29091:29091"
    expose:
      - "29090"
    environment:
      KAFKA_OPTS: "-Dzookeeper.sasl.client=true -Djava.security.auth.login.config=/etc/kafka/secrets/sasl/kafka_server_jaas.conf"
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper1:2181,zookeeper2:2182,zookeeper3:2183'
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT,SASL_PLAINHOST:SASL_PLAINTEXT
      KAFKA_LISTENERS: INSIDE://:29090,OUTSIDE://:29091,SASL_PLAINHOST://:9091
      KAFKA_ADVERTISED_LISTENERS: INSIDE://broker1:29090,OUTSIDE://localhost:29091,SASL_PLAINHOST://localhost:9091
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9101
      KAFKA_JMX_HOSTNAME: localhost
      KAFKA_SECURITY_INTER_BROKER_PROTOCAL: SASL_PLAINTEXT
      KAFKA_SASL_ENABLED_MECHANISMS: SCRAM-SHA-512
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAINTEXT
    volumes:
      - /etc/kafka/secrets/sasl:/etc/kafka/secrets/sasl
  broker2:
    image: confluentinc/cp-kafka:6.0.1
    hostname: broker2
    container_name: broker2
    depends_on:
      - zookeeper1
      - zookeeper2
      - zookeeper3
    ports:
      - "9092:9092"
      - "9102:9102"
      - "29092:29092"
    expose:
      - "29090"
    environment:
      KAFKA_OPTS: "-Dzookeeper.sasl.client=true -Djava.security.auth.login.config=/etc/kafka/secrets/sasl/kafka_server_jaas.conf"
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper1:2181,zookeeper2:2182,zookeeper3:2183'
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT,SASL_PLAINHOST:SASL_PLAINTEXT
      KAFKA_LISTENERS: INSIDE://:29090,OUTSIDE://:29092,SASL_PLAINHOST://:9092
      KAFKA_ADVERTISED_LISTENERS: INSIDE://broker2:29090,OUTSIDE://localhost:29092,SASL_PLAINHOST://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9102
      KAFKA_JMX_HOSTNAME: localhost
      KAFKA_SECURITY_INTER_BROKER_PROTOCAL: SASL_PLAINTEXT
      KAFKA_SASL_ENABLED_MECHANISMS: SCRAM-SHA-512
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAINTEXT
    volumes:
      - /etc/kafka/secrets/sasl:/etc/kafka/secrets/sasl
kafka_server_jaas.conf
KafkaServer {
    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="admin"
    password="password"
    user_admin="password"
    user_client="password";
};
Client {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="password";
};
KafkaClient {
    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="client"
    password="password";
};
zookeeper_jaas.conf
Server {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    user_admin="password";
};
QuorumServer {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    user_admin="password";
};
QuorumLearner {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="admin"
    password="password";
};
ConsumerConfig.java
private static final String BOOTSTRAP_ADDRESS = "localhost:9091,localhost:9092";
private static final String JAAS_TEMPLATE =
        "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"%s\" password=\"%s\";";

public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    String jaasCfg = String.format(JAAS_TEMPLATE, "client", "password");
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_ADDRESS);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "1000");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "Test-Consumer");
    props.put("sasl.jaas.config", jaasCfg);
    props.put("sasl.mechanism", "SCRAM-SHA-512");
    props.put("security.protocol", "SASL_PLAINTEXT");
    return props;
}

@Bean
public ConsumerFactory<String, String> consumerFactory() {
    return new DefaultKafkaConsumerFactory<>(consumerConfigs());
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    return factory;
}
Solved.
It was because I hadn't added the user information in Zookeeper.
Add this service:
  zookeeper-add-kafka-users:
    image: confluentinc/cp-kafka:6.0.1
    container_name: "zookeeper-add-kafka-users"
    depends_on:
      - zookeeper1
      - zookeeper2
      - zookeeper3
    command: "bash -c 'echo Waiting for Zookeeper to be ready... && \
      cub zk-ready zookeeper1:2181 120 && \
      cub zk-ready zookeeper2:2182 120 && \
      cub zk-ready zookeeper3:2183 120 && \
      kafka-configs --zookeeper zookeeper1:2181 --alter --add-config 'SCRAM-SHA-512=[iterations=4096,password=password]' --entity-type users --entity-name admin && \
      kafka-configs --zookeeper zookeeper1:2181 --alter --add-config 'SCRAM-SHA-512=[iterations=4096,password=password]' --entity-type users --entity-name client '"
    environment:
      KAFKA_BROKER_ID: ignored
      KAFKA_ZOOKEEPER_CONNECT: ignored
      KAFKA_OPTS: -Djava.security.auth.login.config=/etc/kafka/secrets/sasl/kafka_server_jaas.conf
    volumes:
      - /home/mysend/dev/docker/kafka/sasl:/etc/kafka/secrets/sasl
If you don't use Docker, you can run the equivalent command directly:
bin/kafka-configs --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin
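If you'd rather create the SCRAM credentials from code instead of the CLI, newer clients can do it through the AdminClient (KIP-554, Kafka 2.7+). A hedged sketch, assuming the broker's plaintext listener and the passwords from the compose file above:
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ScramCredentialInfo;
import org.apache.kafka.clients.admin.ScramMechanism;
import org.apache.kafka.clients.admin.UserScramCredentialAlteration;
import org.apache.kafka.clients.admin.UserScramCredentialUpsertion;

import java.util.List;
import java.util.Properties;

public class AddScramUsers {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // plaintext OUTSIDE listener of broker1 as mapped in the compose file (assumption)
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:29091");

        try (Admin admin = Admin.create(props)) {
            // upsert SCRAM-SHA-512 credentials for the users referenced in the JAAS files
            ScramCredentialInfo scramInfo =
                    new ScramCredentialInfo(ScramMechanism.SCRAM_SHA_512, 4096);
            List<UserScramCredentialAlteration> alterations = List.of(
                    new UserScramCredentialUpsertion("admin", scramInfo, "password"),
                    new UserScramCredentialUpsertion("client", scramInfo, "password"));
            admin.alterUserScramCredentials(alterations).all().get();
        }
    }
}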

connect Kafka aws instance from Java API

I was trying to connect to a Kafka AWS instance through a local Spring Boot API.
I am able to connect, but while listening to the topic it throws the exception below, even though the new topics were created successfully by the Spring Boot API.
I am also unable to publish any messages.
java.io.IOException: Can't resolve address: ip-xxx-xx-xx-xx.ec2.internal:9092
at org.apache.kafka.common.network.Selector.doConnect(Selector.java:235) ~[kafka-clients-2.0.1.jar:na]
Caused by: java.nio.channels.UnresolvedAddressException: null
at sun.nio.ch.Net.checkAddress(Net.java:101) ~[na:1.8.0_192]
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:622) ~[na:1.8.0_192]
at org.apache.kafka.common.network.Selector.doConnect(Selector.java:233) ~[kafka-clients-2.0.1.jar:na]
... 30 common frames omitted
2019-07-17 15:36:13.581 WARN 3709 --- [ main] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-1, groupId=group_id] Error connecting to node ip-172-31-80-50.ec2.internal:9092 (id: 0 rack: null)
I allowed these ports as well:
Custom TCP Rule | TCP | 2181 | 0.0.0.0/0
Custom TCP Rule | TCP | 9092 | 0.0.0.0/0
server:
  port: 8081
spring:
  kafka:
    consumer:
      bootstrap-servers: xx.xx.xx.xx:9092
      group-id: group_id
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    producer:
      bootstrap-servers: xx.xx.xx.xx:9092
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
@KafkaListener(topics = "ConsumerTest", groupId = "group_id")
public void consume(String message) throws IOException {
    logger.info(String.format("#### -> Consumed message -> %s", message));
}
java.io.IOException: Can't resolve address: ip-xxx-xx-xx-xx.ec2.internal:9092
Error connecting to node ip-172-31-80-50.ec2.internal:9092
When consumers connect to the broker, they get back the metadata for the partition from which they're reading data. What your client is getting back here is the advertised listener of the Kafka broker. So while you connect to the broker on its public address, it returns the internal address of the machine to your client.
To fix this, you need to set up your listeners correctly on your brokers. See https://rmoff.net/2018/08/02/kafka-listeners-explained/ for details.
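One way to see, from the client side, exactly what the broker is advertising is to ask the AdminClient for the cluster metadata. A small diagnostic sketch (the bootstrap address is a placeholder):
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.Node;

import java.util.Properties;

public class ShowAdvertisedListeners {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // public bootstrap address of the EC2 broker (placeholder)
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "xx.xx.xx.xx:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // each host:port printed here is what the broker advertises back to clients;
            // if it shows ip-...ec2.internal, clients outside the VPC cannot resolve it
            for (Node node : admin.describeCluster().nodes().get()) {
                System.out.println("broker " + node.id() + " -> " + node.host() + ":" + node.port());
            }
        }
    }
}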

Camel jms route to IBM MQ immediately shuts down after successful test connection

I have used Camel a few times now, but this problem is over my head and I have no clue what I'm doing wrong. It is a new application that should fetch messages from IBM MQ and write files to disk. The route is very simple:
String fromString = "message-queue:XXX.FUNDORDER.YYY.OUT?testConnectionOnStartup=true";

from(fromString)
    .autoStartup(true)
    .routeId("RouteReceiver")
    .log("Processing message ${exchangeId}")
    .to("file:/in?fileName=${exchangeId}.txt");
Here's the code used when creating the Connection factory:
public JmsConnectionFactory websphereConnectionFactory() throws JMSException {
    Logger logger = Logger.getLogger(Config.class);
    logger.info("ibmMqChannel = " + ibmMqChannel);
    logger.info("ibmMqHost = " + ibmMqHost);
    logger.info("ibmMqPort = " + ibmMqPort);
    logger.info("ibmMqQueueManagerName = " + ibmMqQueueManagerName);

    /*
     * Create MQConnectionFactory
     */
    final MQConnectionFactory connectionFactory = new MQConnectionFactory();
    connectionFactory.setHostName(ibmMqHost);
    connectionFactory.setPort(ibmMqPort);
    connectionFactory.setChannel(ibmMqChannel);
    connectionFactory.setQueueManager(ibmMqQueueManagerName);
    connectionFactory.setIntProperty(CommonConstants.WMQ_CONNECTION_MODE, CommonConstants.WMQ_CM_CLIENT);
    logger.debug("connectionFactory=" + connectionFactory.toString());

    /*
     * Add ConnectionFactory to JmsConfiguration
     */
    final JmsConfiguration jmsConfiguration = new JmsConfiguration();
    jmsConfiguration.setConnectionFactory(connectionFactory);

    /*
     * Add JmsConfiguration to JmsComponent
     */
    final JmsComponent jmsComponent = new JmsComponent();
    jmsComponent.setConfiguration(jmsConfiguration);
    jmsComponent.setAcknowledgementModeName("AUTO_ACKNOWLEDGE");

    /*
     * Add JmsComponent to camelContext
     */
    camelContext.addComponent("message-queue", jmsComponent);
    return connectionFactory;
}
Right after the application is started and the connection is tested, it immediately starts shutting down. No exceptions are thrown.
When testing locally I use a different profile and connect to an ActiveMQ server instead. This runs fine and does what it is supposed to.
Any help would be greatly appreciated!
/Katarina
Here's an extract of the log right before the route is shut down:
2018-01-23 12:13:05:501 o.a.camel.component.jms.JmsConsumer DEBUG - Successfully tested JMS Connection on startup for destination: MY.QUEUE.NAME
2018-01-23 12:13:05:501 o.a.camel.component.jms.JmsConsumer TRACE - Starting listener container org.apache.camel.component.jms.DefaultJmsMessageListenerContainer#7b02881e on destination MY.QUEUE.NAME
2018-01-23 12:13:05:532 o.a.c.c.j.DefaultJmsMessageListenerContainer DEBUG - Established shared JMS Connection
2018-01-23 12:13:05:532 o.a.c.u.c.CamelThreadFactory TRACE - Created thread[Camel (Client Robur messaging service) thread #1 - JmsConsumer[MY.QUEUE.NAME]] -> Thread[Camel (Client Robur messaging service) thread #1 - JmsConsumer[MY.QUEUE.NAME],5,main]
2018-01-23 12:13:05:547 o.a.c.c.j.DefaultJmsMessageListenerContainer DEBUG - Resumed paused task: org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker#503d687a
2018-01-23 12:13:05:547 o.a.camel.component.jms.JmsConsumer DEBUG - Started listener container org.apache.camel.component.jms.DefaultJmsMessageListenerContainer#7b02881e on destination MY.QUEUE.NAME
2018-01-23 12:13:05:547 o.a.camel.spring.SpringCamelContext INFO - Route: RouteReceiver started and consuming from: message-queue://MY.QUEUE.NAME?errorHandlerLoggingLevel=TRACE&jmsMessageType=Text&testConnectionOnStartup=true
2018-01-23 12:13:05:547 o.a.camel.support.ServiceSupport TRACE - Starting service
2018-01-23 12:13:05:547 o.a.camel.spring.SpringCamelContext INFO - Total 1 routes, of which 1 are started
2018-01-23 12:13:05:547 o.a.camel.spring.SpringCamelContext INFO - Apache Camel 2.20.1 (CamelContext: Client Robur messaging service) started in 5.974 seconds
2018-01-23 12:13:05:547 o.a.camel.spring.SpringCamelContext DEBUG - start() took 5974 millis
2018-01-23 12:13:05:563 o.a.camel.spring.SpringCamelContext DEBUG - onApplicationEvent: org.springframework.boot.context.event.ApplicationReadyEvent[source=org.springframework.boot.SpringApplication#53976f5c]
2018-01-23 12:13:05:563 se.tradechannel.IbmMqApplication INFO - Started IbmMqApplication in 37.347 seconds (JVM running for 39.742)
2018-01-23 12:13:05:563 se.tradechannel.IbmMqApplication INFO - Message Queue application started.
2018-01-23 12:13:05:563 o.a.c.c.j.DefaultJmsMessageListenerContainer TRACE - runningAllowed() -> true
2018-01-23 12:13:05:766 o.a.camel.spring.SpringCamelContext DEBUG - onApplicationEvent: org.springframework.context.event.ContextClosedEvent[source=org.springframework.context.annotation.AnnotationConfigApplicationContext#621be5d1: startup date [Tue Jan 23 12:12:29 CET 2018]; root of context hierarchy]
2018-01-23 12:13:05:766 o.a.camel.spring.SpringCamelContext INFO - Apache Camel 2.20.1 (CamelContext: Client Robur messaging service) is shutting down
2018-01-23 12:13:05:766 org.apache.camel.util.ServiceHelper TRACE - Stopping service org.apache.camel.impl.DefaultRouteController#35e2d654
2018-01-23 12:13:05:766 org.apache.camel.util.ServiceHelper TRACE - Shutting down service org.apache.camel.impl.DefaultRouteController#35e2d654
2018-01-23 12:13:05:766 o.a.camel.support.ServiceSupport TRACE - Service already stopped
2018-01-23 12:13:05:766 o.a.c.impl.DefaultShutdownStrategy INFO - Starting to graceful shutdown 1 routes (timeout 300 seconds)
2018-01-23 12:13:05:766 o.a.c.m.DefaultManagementLifecycleStrategy TRACE - Checking whether to register org.apache.camel.util.concurrent.RejectableThreadPoolExecutor#279ccd56[Running, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0][ShutdownTask] from route: null
2018-01-23 12:13:05:766 o.a.c.i.DefaultExecutorServiceManager DEBUG - Created new ThreadPool for source: org.apache.camel.impl.DefaultShutdownStrategy#1e800aaa with name: ShutdownTask. -> org.apache.camel.util.concurrent.RejectableThreadPoolExecutor#279ccd56[Running, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0][ShutdownTask]
2018-01-23 12:13:05:766 o.a.c.u.c.CamelThreadFactory TRACE - Created thread[Camel (Client Robur messaging service) thread #2 - ShutdownTask] -> Thread[Camel (Client Robur messaging service) thread #2 - ShutdownTask,5,main]
2018-01-23 12:13:05:766 o.a.c.impl.DefaultShutdownStrategy DEBUG - There are 1 routes to shutdown
2018-01-23 12:13:05:766 o.a.c.impl.DefaultShutdownStrategy TRACE - Shutting down route: RouteReceiver with options [Default,CompleteCurrentTaskOnly]
2018-01-23 12:13:05:766 o.a.c.impl.DefaultShutdownStrategy TRACE - Suspending: Consumer[message-queue://MY.QUEUE.NAME?errorHandlerLoggingLevel=TRACE&jmsMessageType=Text&testConnectionOnStartup=true]
2018-01-23 12:13:05:766 org.apache.camel.util.ServiceHelper TRACE - Suspending service Consumer[message-queue://MY.QUEUE.NAME?errorHandlerLoggingLevel=TRACE&jmsMessageType=Text&testConnectionOnStartup=true]
2018-01-23 12:13:05:766 o.a.c.c.j.DefaultJmsMessageListenerContainer DEBUG - Stopping listenerContainer: org.apache.camel.component.jms.DefaultJmsMessageListenerContainer#7b02881e with cacheLevel: 3 and sharedConnectionEnabled: true
Are you using Spring Boot? If so, is it possible that your application does not block and therefore simply shuts down after startup because the main method is done?
According to http://camel.apache.org/spring-boot.html there are several ways to block the main thread in a Spring Boot application (see the link for further details):
Let your route class extend org.apache.camel.spring.boot.FatJarRouter
Use CamelSpringBootApplicationController.blockMainThread() (see the sketch below)
Include the spring-boot-starter-web dependency in pom.xml
Set the property camel.springboot.main-run-controller to true in a properties or YAML file
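A minimal sketch of the second option, keeping the JVM alive from your own main method (assuming the camel-spring-boot starter is on the classpath; the class name mirrors the one from your log):
import org.apache.camel.spring.boot.CamelSpringBootApplicationController;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.ApplicationContext;

@SpringBootApplication
public class IbmMqApplication {

    public static void main(String[] args) {
        ApplicationContext context = SpringApplication.run(IbmMqApplication.class, args);
        // block the main thread so the Camel route keeps consuming instead of the
        // application shutting down as soon as startup completes
        context.getBean(CamelSpringBootApplicationController.class).blockMainThread();
    }
}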

How to stop micro service with Spring Kafka Listener, when connection to Apache Kafka Server is lost?

I am currently implementing a microservice which reads data from an Apache Kafka topic. I am using spring-boot version 1.5.6.RELEASE for the microservice and spring-kafka version 1.2.2.RELEASE for the listener in the same microservice. This is my Kafka configuration:
@Bean
public Map<String, Object> consumerConfigs() {
    return new HashMap<String, Object>() {{
        put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, servers);
        put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        put(ConsumerConfig.GROUP_ID_CONFIG, groupIdConfig);
        put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, autoOffsetResetConfig);
    }};
}

@Bean
public ConsumerFactory<String, String> consumerFactory() {
    return new DefaultKafkaConsumerFactory<>(consumerConfigs());
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    return factory;
}
I have implemented the listener via the @KafkaListener annotation:
@KafkaListener(topics = "${kafka.dataSampleTopic}")
public void receive(ConsumerRecord<String, String> payload) {
    // business logic
    latch.countDown();
}
I need to be able to shut down the microservice when the listener loses its connection to the Apache Kafka server.
When I kill the Kafka server I get the following message in the Spring Boot log:
2017-11-01 19:58:15.721 INFO 16800 --- [ 0-C-1] o.a.k.c.c.internals.AbstractCoordinator : Marking the coordinator 192.168.0.4:9092 (id: 2145482646 rack: null) dead for group TestGroup
When I start the Kafka server, I get:
2017-11-01 20:01:37.748 INFO 16800 --- [ 0-C-1] o.a.k.c.c.internals.AbstractCoordinator : Discovered coordinator 192.168.0.4:9092 (id: 2145482646 rack: null) for group TestGroup.
So clearly the Spring Kafka listener in my microservice is able to detect when the Kafka server is up and running and when it is not. In the Confluent book Kafka: The Definitive Guide, in the chapter "But How Do We Exit?", it says the wakeup() method needs to be called on the consumer so that a WakeupException is thrown. So I tried to capture the two events (Kafka server down and Kafka server up) with the @EventListener annotation, as described in the Spring for Apache Kafka documentation, and then call wakeup(). But the example in the documentation shows how to detect an idle consumer, which is not my case. Could someone please help me with this? Thanks in advance.
I don't know how to get a notification of the server down condition (in my experience, the consumer goes into a tight loop within the poll()).
However, if you figure that out, you can stop the listener container(s) which will wake up the consumer and exit the tight loop...
@Autowired
private KafkaListenerEndpointRegistry registry;

...

this.registry.stop();
2017-11-01 16:29:54.290 INFO 21217 --- [ad | so47062346] o.a.k.c.c.internals.AbstractCoordinator : Marking the coordinator localhost:9092 (id: 2147483647 rack: null) dead for group so47062346
2017-11-01 16:29:54.346 WARN 21217 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : Connection to node 0 could not be established. Broker may not be available.
...
2017-11-01 16:30:00.643 WARN 21217 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : Connection to node 0 could not be established. Broker may not be available.
2017-11-01 16:30:00.680 INFO 21217 --- [ntainer#0-0-C-1] essageListenerContainer$ListenerConsumer : Consumer stopped
You can improve the tight loop by adding reconnect.backoff.ms, but the poll() never exits so we can't emit an idle event.
spring:
  kafka:
    consumer:
      enable-auto-commit: false
      group-id: so47062346
      properties:
        reconnect.backoff.ms: 1000
I suppose you could enable idle events and use a timer to detect if you've received no data (or idle events) for some period of time, and then stop the container(s).
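Building on that idea, a hedged sketch: track the last time either a record or an idle event arrived, and stop all listener containers when nothing has been seen for too long. It assumes @EnableScheduling is present and that idle events are enabled on the container factory; the intervals and the one-minute threshold are arbitrary choices.
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.event.EventListener;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.event.ListenerContainerIdleEvent;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class BrokerWatchdog {

    @Autowired
    private KafkaListenerEndpointRegistry registry;

    private volatile long lastActivity = System.currentTimeMillis();

    // requires idle events to be enabled on the container factory, e.g.
    // factory.getContainerProperties().setIdleEventInterval(10_000L);
    @EventListener
    public void onIdle(ListenerContainerIdleEvent event) {
        lastActivity = System.currentTimeMillis(); // consumer polled fine, topic was just empty
    }

    @KafkaListener(topics = "${kafka.dataSampleTopic}")
    public void receive(ConsumerRecord<String, String> payload) {
        lastActivity = System.currentTimeMillis(); // data is flowing
        // business logic
    }

    // if neither data nor idle events arrive for a minute, assume the broker is unreachable
    @Scheduled(fixedDelay = 10_000)
    public void stopIfBrokerGone() {
        if (System.currentTimeMillis() - lastActivity > 60_000) {
            registry.stop(); // wakes up the consumer and stops all listener containers
        }
    }
}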
