Spring Boot Admin with Docker Swarm unable to pull health metrics

I am trying to get Spring Boot Admin to work in a Docker Swarm cluster, using Zookeeper as the discovery mechanism so that all clients are discovered dynamically once they connect to Zookeeper. The problem is that Spring Boot Admin appears unable to reach the health actuator endpoints on the clients: the requests cannot establish a connection, even though all Docker services use the same overlay network and each container can ping the others, which I've verified via docker exec -it <container> ping.
I've also verified that the clients and the admin service are connecting to Zookeeper properly, and that both Zookeeper and the admin dashboard do see those clients as registered.
To reproduce the issue I created a simple Docker Compose file that deploys two Spring Boot Admin apps with the actuators enabled on the same overlay network:
version: '3.1'
services:
  zoo1:
    image: zookeeper:3.4.12
    hostname: zoo1
    networks:
      - nsp_test
    deploy:
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.hostname == nj51nreda5v]
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888 server.2=zoo2:2888:3888
  zoo2:
    image: zookeeper:3.4.12
    hostname: zoo2
    networks:
      - nsp_test
    deploy:
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.hostname == nj51nreda6v]
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=0.0.0.0:2888:3888
  nspadmin:
    image: admin:77
    ports:
      - "9084:8080"
    networks:
      - nsp_test
    depends_on:
      - "zoo1"
      - "zoo2"
    deploy:
      restart_policy:
        condition: on-failure
      mode: global
    environment:
      ZK_HOST: zoo1:2181,zoo2:2182
      SPRING_PROFILES_ACTIVE: ssldev
networks:
  nsp_test:
    external:
      name: nsp_test
With this configuration I see both Spring Boot Admin instances registered in Zookeeper, but they display as OFFLINE (since the server can't reach the /health actuator).
These are the two addresses it registers for the clients in SBA:
https://10.255.0.19:8080/ OFFLINE
https://10.255.0.20:8080/ OFFLINE
The exception I get:
2018-12-31 04:20:31.926 INFO 1 --- [ updateTask1] d.c.boot.admin.registry.StatusUpdater : Couldn't retrieve status for Application [id=28eab1e1, name=nsp-admin, managementUrl=https://10.255.0.20:8080/, healthUrl=https://10.255.0.20:8080/health, serviceUrl=https://10.255.0.20:8080/]
org.springframework.web.client.ResourceAccessException: I/O error on GET request for "https://10.255.0.20:8080/health": Connect to 10.255.0.20:8080 [/10.255.0.20] failed: connect timed out; nested exception is org.apache.http.conn.ConnectTimeoutException: Connect to 10.255.0.20:8080 [/10.255.0.20] failed: connect timed out
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:666) ~[spring-web-4.3.8.RELEASE.jar!/:4.3.8.RELEASE]
at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:628) ~[spring-web-4.3.8.RELEASE.jar!/:4.3.8.RELEASE]
at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:549) ~[spring-web-4.3.8.RELEASE.jar!/:4.3.8.RELEASE]
at de.codecentric.boot.admin.web.client.ApplicationOperations.doGet(ApplicationOperations.java:68) ~[spring-boot-admin-server-1.5.6.jar!/:1.5.6]
at de.codecentric.boot.admin.web.client.ApplicationOperations.getHealth(ApplicationOperations.java:58) ~[spring-boot-admin-server-1.5.6.jar!/:1.5.6]
at de.codecentric.boot.admin.registry.StatusUpdater.queryStatus(StatusUpdater.java:111) [spring-boot-admin-server-1.5.6.jar!/:1.5.6]
at de.codecentric.boot.admin.registry.StatusUpdater.updateStatus(StatusUpdater.java:65) [spring-boot-admin-server-1.5.6.jar!/:1.5.6]
at de.codecentric.boot.admin.registry.StatusUpdateApplicationListener$1.run(StatusUpdateApplicationListener.java:47) [spring-boot-admin-server-1.5.6.jar!/:1.5.6]
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54) [spring-context-4.3.8.RELEASE.jar!/:4.3.8.RELEASE]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_151]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_151]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_151]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [na:1.8.0_151]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_151]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_151]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_151]
Caused by: org.apache.http.conn.ConnectTimeoutException: Connect to 10.255.0.20:8080 [/10.255.0.20] failed: connect timed out
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:151) ~[httpclient-4.5.3.jar!/:4.5.3]
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:359) ~[httpclient-4.5.3.jar!/:4.5.3]
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:381) ~[httpclient-4.5.3.jar!/:4.5.3]
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:237) ~[httpclient-4.5.3.jar!/:4.5.3]
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185) ~[httpclient-4.5.3.jar!/:4.5.3]
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) ~[httpclient-4.5.3.jar!/:4.5.3]
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:111) ~[httpclient-4.5.3.jar!/:4.5.3]
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) ~[httpclient-4.5.3.jar!/:4.5.3]
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) ~[httpclient-4.5.3.jar!/:4.5.3]
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56) ~[httpclient-4.5.3.jar!/:4.5.3]
at org.springframework.http.client.HttpComponentsClientHttpRequest.executeInternal(HttpComponentsClientHttpRequest.java:89) ~[spring-web-4.3.8.RELEASE.jar!/:4.3.8.RELEASE]
at org.springframework.http.client.AbstractBufferingClientHttpRequest.executeInternal(AbstractBufferingClientHttpRequest.java:48) ~[spring-web-4.3.8.RELEASE.jar!/:4.3.8.RELEASE]
at org.springframework.http.client.AbstractClientHttpRequest.execute(AbstractClientHttpRequest.java:53) ~[spring-web-4.3.8.RELEASE.jar!/:4.3.8.RELEASE]
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:652) ~[spring-web-4.3.8.RELEASE.jar!/:4.3.8.RELEASE]
... 15 common frames omitted
Caused by: java.net.SocketTimeoutException: connect timed out
at java.net.PlainSocketImpl.socketConnect(Native Method) ~[na:1.8.0_151]
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) ~[na:1.8.0_151]
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) ~[na:1.8.0_151]
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) ~[na:1.8.0_151]
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[na:1.8.0_151]
at java.net.Socket.connect(Socket.java:589) ~[na:1.8.0_151]
at org.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:339) ~[httpclient-4.5.3.jar!/:4.5.3]
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142) ~[httpclient-4.5.3.jar!/:4.5.3]
My SBA Configuration yml
server:
  port: 8080
spring:
  boot:
    admin:
      client:
        prefer-ip: false
  datasource:
    driverClassName: org.postgresql.Driver
    url: ${DB_URL}
    username: ${DB_USER}
    password: ${DB_PASSWORD}
  application:
    name: nsp-admin
  cloud:
    config:
      discovery:
        enabled: true
    zookeeper:
      connect-string: ${ZK_HOST}
      discovery:
        uri-spec: https://{address}:{port}
        metadata:
          management:
            context-path: /
          health:
            path: /health
management:
  security:
    enabled: false
security:
  basic:
    enabled: false
#security.require-ssl: true
server.ssl.enabled: true
server.ssl.key-store-type: PKCS12
server.ssl.key-store: *****
server.ssl.key-store-password: *****
UPDATE
After debugging the issue a bit more, I realized it is without a doubt related to the hostname/IP that the clients register in Zookeeper.
When I curl from the SBA container to a client container using the client's container ID as the hostname, the /health API responds.
This works:
docker exec -it 8403c5001b9e curl -k https://bf41c73af594:8080/health
This does not work and results in a timeout:
docker exec -it 8403c5001b9e curl -k https://10.255.0.20:8080/health
Is it possible to force Zookeeper to register the hostname or the container ID instead?
UPDATE
Setting spring.cloud.zookeeper.discovery.instanceHost: ${HOSTNAME} in my application.yml resolves the issue. It forces the correct container ID to be registered in Zookeeper.
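For reference, here is a minimal sketch of where that property sits, assuming the same keys as the configuration shown above:
spring:
  cloud:
    zookeeper:
      connect-string: ${ZK_HOST}
      discovery:
        # register the container hostname (the container ID inside Docker)
        # instead of the overlay/ingress IP picked up by default
        instanceHost: ${HOSTNAME}
        uri-spec: https://{address}:{port}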

You don't need to jump through all these hoops. Docker has a concept called service discovery: local DNS resolution handled by Docker itself.
You can use either the container name or an alias instead of the IP/container ID, since those change every time.
Method 1:
By default, Docker Compose builds the container name from the service name and the project/network name. You can pin a fixed name with the container_name keyword in docker-compose and then use that name instead of the IP; the containers will resolve each other by it.
An example compose file:
version: '3.1'
services:
  zoo1:
    image: zookeeper:3.4.12
    hostname: zoo1
    container_name: zoo1
    networks:
      - nsp_test
    deploy:
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.hostname == nj51nreda5v]
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888 server.2=zoo2:2888:3888
  zoo2:
    image: zookeeper:3.4.12
    hostname: zoo2
    container_name: zoo2
    networks:
      - nsp_test
    deploy:
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.hostname == nj51nreda6v]
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=0.0.0.0:2888:3888
  nspadmin:
    image: admin:77
    ports:
      - "9084:8080"
    networks:
      - nsp_test
    depends_on:
      - "zoo1"
      - "zoo2"
    deploy:
      restart_policy:
        condition: on-failure
      mode: global
    environment:
      ZK_HOST: zoo1:2181,zoo2:2182
      SPRING_PROFILES_ACTIVE: ssldev
networks:
  nsp_test:
    external:
      name: nsp_test
Now you can reach zoo1 and zoo2 as zoo1 and zoo2. This is not suitable for swarm mode, since container_name is ignored there.
Method 2: (Recommended for docker swarm mode)
You can specify aliases for each service and then reach the service by any of those aliases.
An example compose file:
version: '3.1'
services:
  zoo1:
    image: zookeeper:3.4.12
    hostname: zoo1
    networks:
      default:
        aliases:
          - zoo1
          - zoo.1
    deploy:
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.hostname == nj51nreda5v]
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888 server.2=zoo2:2888:3888
  zoo2:
    image: zookeeper:3.4.12
    hostname: zoo2
    networks:
      default:
        aliases:
          - zoo2
          - zoo.2
    deploy:
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.hostname == nj51nreda6v]
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=0.0.0.0:2888:3888
  nspadmin:
    image: admin:77
    ports:
      - "9084:8080"
    networks:
      - default
    depends_on:
      - "zoo1"
      - "zoo2"
    deploy:
      restart_policy:
        condition: on-failure
      mode: global
    environment:
      ZK_HOST: zoo1:2181,zoo2:2182
      SPRING_PROFILES_ACTIVE: ssldev
networks:
  default:
    external:
      name: nsp_test
Here zoo1 can be resolved as zoo1, zoo.1, zoo1.nsp_test, or zoo.1.nsp_test; the same goes for zoo2. This is suitable for swarm mode as well.
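A quick way to check that the aliases resolve from another container on the same network (the container name below is a placeholder; in swarm mode use docker ps on the node to find the actual task container):
docker exec -it <nspadmin-container> ping -c 1 zoo1
docker exec -it <nspadmin-container> ping -c 1 zoo.2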
Method 3:
If you know the name of the container that will be created, you can use that name to resolve it as well.
For example:
version: '3.1'
services:
  zoo1:
    image: zookeeper:3.4.12
    hostname: zoo1
    networks:
      - nsp_test
    deploy:
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.hostname == nj51nreda5v]
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888 server.2=zoo2:2888:3888
  zoo2:
    image: zookeeper:3.4.12
    hostname: zoo2
    networks:
      - nsp_test
    deploy:
      restart_policy:
        condition: on-failure
      placement:
        constraints: [node.hostname == nj51nreda6v]
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=0.0.0.0:2888:3888
  nspadmin:
    image: admin:77
    ports:
      - "9084:8080"
    networks:
      - nsp_test
    depends_on:
      - "zoo1"
      - "zoo2"
    deploy:
      restart_policy:
        condition: on-failure
      mode: global
    environment:
      ZK_HOST: zoo1:2181,zoo2:2182
      SPRING_PROFILES_ACTIVE: ssldev
networks:
  nsp_test:
    external:
      name: nsp_test
Let us assume the above config creates containers named zoo1_nsp_test and zoo2_nsp_test. You can resolve the containers by these names as well. This is not suitable for swarm mode, as the container name differs from host to host.
Note:
All the above methods work only if the containers are connected to the same network.
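If in doubt, you can list which containers are actually attached to the network (they appear under the "Containers" key of the output):
docker network inspect nsp_test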
References:
Compose file version 3 reference#container_name
Compose file version 3 reference#aliases
service discovery
Load balancing, service discovery and security

Related

How to properly set up docker compose of kafka producer? [duplicate]

This question already has answers here:
Connect to Kafka running in Docker
(5 answers)
Closed 10 months ago.
I am converting a test project to Docker.
I have working containers: MySQL, the application, and Kafka.
I'm getting an error in Zookeeper, or maybe a wrong setup in docker-compose.yaml.
//docker-compose.yaml
version: '3.8'
networks:
  product-net:
    driver: bridge
services:
  productmicroservice:
    image: productmicroservice:latest
    container_name: productmicroservice
    depends_on:
      - product-mysqldb
      - kafka
    restart: always
    build:
      context: ./
      dockerfile: Dockerfile
    ports:
      - "9001:8091"
    environment:
      MYSQL_HOST: product-mysqldb
      MYSQL_USER: root
      MYSQL_PASSWORD: root
    links:
      - kafka:localhost
    networks:
      - product-net
  product-mysqldb:
    image: mysql:8.0.28
    restart: unless-stopped
    container_name: product-mysqldb
    ports:
      - "3307:3306"
    cap_add:
      - SYS_NICE
    environment:
      MYSQL_DATABASE: dbpoc
      MYSQL_ROOT_PASSWORD: root
    networks:
      - product-net
  zookeeper:
    image: wurstmeister/zookeeper:latest
    container_name: zookeeper
    restart: on-failure
    ports:
      - "2181:2181"
    networks:
      - product-net
  kafka:
    image: wurstmeister/kafka:2.11-1.1.1
    restart: unless-stopped
    container_name: kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: localhost
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_MESSAGE_MAX_BYTES: 2000000
      KAFKA_CREATE_TOPICS: "producttopic:1:1"
      BROKER_ID: 1
      ADVERTISED_PORT: 9092
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - product-net
    depends_on:
      - zookeeper
//Dockerfile
FROM openjdk:8
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
//application.yaml
spring:
  datasource:
    driver-class-name: com.mysql.cj.jdbc.Driver
    url: jdbc:mysql://${MYSQL_HOST:localhost}:${MYSQL_PORT:3306}/dbpoc
    username: root
    password: root
  kafka:
    template:
      default-topic: producttopic
    producer:
      bootstrap-servers:
        - localhost:9092
      key-serializer:
        org.apache.kafka.common.serialization.StringSerializer
      value-serializer:
        org.springframework.kafka.support.serializer.JsonSerializer
  jpa:
    hibernate:
      naming:
        implicit-strategy: org.springframework.boot.orm.jpa.hibernate.SpringImplicitNamingStrategy
        physical-strategy: org.springframework.boot.orm.jpa.hibernate.SpringPhysicalNamingStrategy
    hibernate.ddl-auto: update
    generate-ddl: "false"
    show-sql: "false"
    properties:
      hibernate:
        dialect: org.hibernate.dialect.MySQL5InnoDBDialect
  mvc:
    throw-exception-if-no-handler-found: "true"
  web:
    resources:
      add-mappings: "false"
  sql:
    init:
      mode: always
      continue-on-error: "true"
server:
  port: 8091
//Error shown
kafka | creating topics: producttopic:1:1
zookeeper | 2022-05-03 12:21:55,223 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#215] - Accepted socket connection from /172.27.0.4:34150
zookeeper | 2022-05-03 12:21:55,225 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer#949] - Client attempting to establish new session at /172.27.0.4:34150
zookeeper | 2022-05-03 12:21:55,229 [myid:] - INFO [SyncThread:0:ZooKeeperServer#694] - Established session 0x100008559dd0003 with negotiated timeout 30000 for client /172.27.0.4:34150
zookeeper | 2022-05-03 12:21:55,377 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor#487] - Processed session termination for sessionid: 0x100008559dd0003
zookeeper | 2022-05-03 12:21:55,381 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#1056] - Closed socket connection for client /172.27.0.4:34150 which had sessionid 0x100008559dd0003
//after I try to send data
2022-05-03 12:15:52.695 WARN 1 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected
You have to set KAFKA_ADVERTISED_LISTENERS properly for the internal and external Docker networks, something like:
KAFKA_ADVERTISED_LISTENERS: >-
  LISTENER_DOCKER_INTERNAL://kafka:19092,
  LISTENER_DOCKER_EXTERNAL://${DOCKER_HOST_IP:-localhost}:9092
You can find a working example here: https://github.com/stockgeeks/docker-compose/blob/master/one-to-run-them-all/docker-compose.yml
An article with an explanation here: https://dev.to/thegroo/one-to-run-them-all-1mg6
And some explanation of the advertised addresses in this article: https://dev.to/thegroo/running-kafka-on-kubernetes-for-local-development-with-storage-class-4oa9, in the section "Connecting a Kafka client" near the end.
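A rough sketch of how that could look for the kafka service in this compose file (the listener names are illustrative; the wurstmeister image maps KAFKA_* environment variables onto server.properties). The Spring producer running on the product-net network would then use kafka:19092 as its bootstrap server, while clients on the host keep using localhost:9092, and the links: kafka:localhost workaround can be dropped:
  kafka:
    image: wurstmeister/kafka:2.11-1.1.1
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      # internal listener for containers on product-net, external one for the host
      KAFKA_LISTENERS: LISTENER_DOCKER_INTERNAL://:19092,LISTENER_DOCKER_EXTERNAL://:9092
      KAFKA_ADVERTISED_LISTENERS: LISTENER_DOCKER_INTERNAL://kafka:19092,LISTENER_DOCKER_EXTERNAL://${DOCKER_HOST_IP:-localhost}:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_DOCKER_INTERNAL
    networks:
      - product-net
    depends_on:
      - zookeeper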

When I run in docker why localhost giving connection refused?

This is my docker-compose.yml
version: '3.7'
services:
  zookeeper-1:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - 22181:2181
  kafka-1:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper-1
    ports:
      - 29092:29092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-1:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  cassandra:
    image: cassandra
    container_name: cassandra
    ports:
      - 9042:9042
  producer:
    image: spring/producer
    links:
      - cassandra
    depends_on:
      - cassandra
      - kafka-1
    restart: always
  consumer:
    image: spring/consumer
    links:
      - cassandra
    depends_on:
      - cassandra
      - kafka-1
    restart: always
When I run only the first three services in Docker (Kafka, Zookeeper, and Cassandra), I can reach them from the producer and consumer started with the IntelliJ runner. But when I add the producer and consumer to the compose file as services and run docker-compose up, both services get a localhost:9042 Connection refused error.
Why can't I reach Cassandra from the producer and consumer when I run everything in Docker? What is the difference?
This is my producer application.yml
spring:
  kafka:
    producer:
      bootstrap-servers: localhost:29092
      key-serializer: org.apache.kafka.common.serialization.IntegerSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
      properties:
        acks: all
        retries: 10
    admin:
      properties:
        bootstrap.servers: localhost:29092
    template:
      default-topic: users-events
  data:
    cassandra:
      port: 9042
      keyspace-name: mykeyspace
      username: cassandra
      schema-action: create_if_not_exists

Kafka-Elasticsearch Sink Connector not working

I am trying to send data from Kafka to Elasticsearch. I checked that my Kafka broker is working, because I can see that the messages I produce to a topic are read by a Kafka consumer. However, when I try to connect Kafka to Elasticsearch I get the following error.
Command:
connect-standalone etc/schema-registry/connect-avro-standalone.properties \
etc/kafka-connect-elasticsearch/quickstart-elasticsearch.properties
Error:
ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectStandalone)
org.apache.kafka.connect.errors.ConnectException: Failed to connect to and describe Kafka cluster. Check worker's broker connection and security properties.
at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:64)
at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:45)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:83)
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:58)
... 2 more
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
My Docker Compose File:
version: '3'
services:
  zookeeper:
    container_name: zookeeper
    image: zookeeper
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
  kafka:
    container_name: kafka
    image: bitnami/kafka:1.0.0-r5
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_BROKER_ID: "42"
      KAFKA_ADVERTISED_HOST_NAME: "kafka"
      ALLOW_PLAINTEXT_LISTENER: "yes"
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
  elasticsearch:
    container_name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    environment:
      - node.name=elasticsearch
      - cluster.name=es-docker-cluster
      - discovery.seed_hosts=elasticsearch
      - bootstrap.memory_lock=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data99:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
  kibana:
    container_name: kibana
    image: docker.elastic.co/kibana/kibana:7.8.0
    # environment:
    #   - SERVER_NAME=Local kibana
    #   - SERVER_HOST=0.0.0.0
    #   - ELASTICSEARCH_URL=elasticsearch:9400
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
  kafka-connect:
    container_name: kafka-connect
    image: confluentinc/cp-kafka-connect:5.3.1
    ports:
      - 8083:8083
    depends_on:
      - zookeeper
      - kafka
    volumes:
      - $PWD/connect-plugins:/connect-plugins
    environment:
      CONNECT_BOOTSTRAP_SERVERS: kafka:9092
      CONNECT_REST_ADVERTISED_HOST_NAME: "localhost"
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: kafka-connect
      CONNECT_CONFIG_STORAGE_TOPIC: docker-kafka-connect-configs
      CONNECT_OFFSET_STORAGE_TOPIC: docker-kafka-connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: docker-kafka-connect-status
      CONNECT_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_KEY_CONVERTER-SCHEMAS_ENABLE: "false"
      CONNECT_VALUE_CONVERTER-SCHEMAS_ENABLE: "false"
      CONNECT_REST_ADVERTISED_HOST_NAME: "kafka-connect"
      CONNECT_LOG4J_ROOT_LOGLEVEL: "ERROR"
      CONNECT_LOG4J_LOGGERS: "org.apache.kafka.connect.runtime.rest=WARN,org.reflections=ERROR"
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: "1"
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: "1"
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: "1"
      CONNECT_TOPICS: "test-elasticsearch-sink"
      CONNECT_TYPE_NAME: "type.name=kafka-connect"
      CONNECT_PLUGIN_PATH: '/usr/share/java' #'/usr/share/java'
      # Interceptor config
      CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
      CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
      CLASSPATH: /usr/share/java/monitoring-interceptors/monitoring-interceptors-5.3.1.jar
      CONNECT_KAFKA_HEAP_OPTS: "-Xms256m -Xmx512m"
volumes:
  data99:
    driver: local
I checked some other questions and answers but couldn't come up with a solution to this problem.
Thanks in advance!
The Connect container already starts the Connect distributed server. You should configure the Elasticsearch connector through its HTTP/JSON REST interface rather than exec-ing into the container shell and issuing connect-standalone commands, which default to a broker running inside the container itself.
Similarly, the Elasticsearch quickstart properties file expects Elasticsearch to be running within the Connect container by default.
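For example, a rough sketch of registering the sink against the Connect REST API published on port 8083 (the connector name and the key.ignore/schema.ignore flags here are illustrative; adjust them to your topic and mappings):
curl -X POST http://localhost:8083/connectors \
  -H "Content-Type: application/json" \
  -d '{
    "name": "test-elasticsearch-sink",
    "config": {
      "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
      "topics": "test-elasticsearch-sink",
      "connection.url": "http://elasticsearch:9200",
      "type.name": "kafka-connect",
      "key.ignore": "true",
      "schema.ignore": "true"
    }
  }'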

Access Kafka in Remote Host by IP Address running with Docker-Compose and Spring Boot

I have this docker-compose.yml that runs Zookeeper, Kafka, Kafka Connect, and Kafdrop. The thing is, when I run it locally I can connect from my Spring Boot application and consume messages from a topic.
What I need is to run the same configuration on a Linux machine and be able to connect from the Spring Boot application the same way.
When I run it remotely on the Linux machine everything seems to be running OK, but when I try to connect from the Spring Boot application I get errors showing that something is wrong with the connection.
I will try to explain it step by step and see if someone can shed some light on this:
docker-compose.yml:
version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    networks:
      - broker-kafka
    ports:
      - 2181:2181
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  kafka:
    image: confluentinc/cp-kafka:latest
    networks:
      - broker-kafka
    restart: unless-stopped
    depends_on:
      - zookeeper
    ports:
      - 9092:9092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS:
        INTERNAL://kafka:29092,
        EXTERNAL://localhost:9092
      KAFKA_ADVERTISED_LISTENERS:
        INTERNAL://kafka:29092,
        EXTERNAL://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP:
        INTERNAL:PLAINTEXT,
        EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_LOG_RETENTION_HOURS: 12
  connect:
    image: cdc:latest
    networks:
      - broker-kafka
    depends_on:
      - zookeeper
      - kafka
    ports:
      - 8083:8083
    environment:
      CONNECT_BOOTSTRAP_SERVERS: kafka:29092
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: connect-1
      CONNECT_CONFIG_STORAGE_TOPIC: connect-1-config
      CONNECT_OFFSET_STORAGE_TOPIC: connect-1-offsets
      CONNECT_STATUS_STORAGE_TOPIC: connect-1-status
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_OFFSET.STORAGE.REPLICATION.FACTOR: 1
      CONNECT_CONFIG.STORAGE.REPLICATION.FACTOR: 1
      CONNECT_OFFSET.STORAGE.PARTITIONS: 1
      CONNECT_STATUS.STORAGE.REPLICATION.FACTOR: 1
      CONNECT_STATUS.STORAGE.PARTITIONS: 1
      CONNECT_REST_ADVERTISED_HOST_NAME: localhost
  kafdrop:
    image: obsidiandynamics/kafdrop:latest
    networks:
      - broker-kafka
    depends_on:
      - kafka
    ports:
      - 19000:9000
    environment:
      KAFKA_BROKERCONNECT: kafka:29092
networks:
  broker-kafka:
    driver: bridge
What I need is to expose this machine's IP on my network so it can be reached by my Spring Boot application.
Let's suppose this Linux machine has the IP 10.12.54.99.
How can I make Kafka accessible at 10.12.54.99:9092?
Here is my application.properties:
spring.kafka.bootstrap-servers=10.12.54.99:9092
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.consumer.enable-auto-commit=false
spring.kafka.consumer.auto-commit-interval=100
spring.kafka.consumer.max-poll-records=10
spring.kafka.consumer.key-deserializer=org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
spring.kafka.consumer.value-deserializer=org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
spring.kafka.consumer.group-id=connect-sql-server
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.listener.ack-mode=manual-immediate
spring.kafka.listener.poll-timeout=3000
spring.kafka.listener.concurrency=3
spring.kafka.properties.spring.deserializer.key.delegate.class=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.properties.spring.deserializer.value.delegate.class=org.apache.kafka.common.serialization.StringDeserializer
This is a consumer-only application (no producers are used here).
When I run the application:
2020-12-07 10:59:40.361 WARN 58716 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-connect-sql-server-1, groupId=connect-sql-server] Connection to node -1 (/10.12.54.99:9092) could not be established. Broker may not be available.
2020-12-07 10:59:40.362 WARN 58716 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : [Consumer clientId=consumer-connect-sql-server-1, groupId=connect-sql-server] Bootstrap broker 10.12.54.99:9092 (id: -1 rack: null) disconnected
All the required ports are open in the Linux machine's firewall.
Any enlightenment would be very much appreciated.
You need to advertise your server's public IP in order to be able to access the brokers remotely. If you don't want to hardcode the IP, you can use an env file.
Do the following:
Create a config.env file.
Add this line to config.env with your host IP:
DOCKER_HOST_IP=111.111.11.111
Update your docker-compose:
version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    networks:
      - broker-kafka
    ports:
      - ${DOCKER_HOST_IP:-127.0.0.1}:2181:2181
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
  kafka:
    image: confluentinc/cp-kafka:latest
    networks:
      - broker-kafka
    restart: unless-stopped
    depends_on:
      - zookeeper
    ports:
      - ${DOCKER_HOST_IP:-127.0.0.1}:9092:9092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS:
        INTERNAL://kafka:29092,
        EXTERNAL://localhost:9092
      KAFKA_ADVERTISED_LISTENERS:
        INTERNAL://kafka:29092,
        EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP:
        INTERNAL:PLAINTEXT,
        EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_LOG_RETENTION_HOURS: 12
  connect:
    image: cdc:latest
    networks:
      - broker-kafka
    depends_on:
      - zookeeper
      - kafka
    ports:
      - 8083:8083
    environment:
      CONNECT_BOOTSTRAP_SERVERS: kafka:29092
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: connect-1
      CONNECT_CONFIG_STORAGE_TOPIC: connect-1-config
      CONNECT_OFFSET_STORAGE_TOPIC: connect-1-offsets
      CONNECT_STATUS_STORAGE_TOPIC: connect-1-status
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONNECT_OFFSET.STORAGE.REPLICATION.FACTOR: 1
      CONNECT_CONFIG.STORAGE.REPLICATION.FACTOR: 1
      CONNECT_OFFSET.STORAGE.PARTITIONS: 1
      CONNECT_STATUS.STORAGE.REPLICATION.FACTOR: 1
      CONNECT_STATUS.STORAGE.PARTITIONS: 1
      CONNECT_REST_ADVERTISED_HOST_NAME: localhost
  kafdrop:
    image: obsidiandynamics/kafdrop:latest
    networks:
      - broker-kafka
    depends_on:
      - kafka
    ports:
      - 19000:9000
    environment:
      KAFKA_BROKERCONNECT: kafka:29092
networks:
  broker-kafka:
    driver: bridge
It will fall back to 127.0.0.1 if DOCKER_HOST_IP is not set.
Run the following command:
sudo docker-compose -f path-to-docker-compose.yml --env-file path-to-config.env up -d --force-recreate
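To confirm the broker is advertising the address you expect, one option (assuming kafkacat is installed on the client machine) is to list the cluster metadata; the broker list should show 10.12.54.99:9092 rather than localhost:
kafkacat -b 10.12.54.99:9092 -L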

Connect Spring with Elasticsearch in Docker

I have to connect my Spring client to Elasticsearch. Its image is based on the official ES 2.4.6 image that Elastic publishes on Docker Hub, but when I try to run the containers the Docker console reports this error:
[Glitch] failed to connect to node
[{#transport#-1}{localhost}{127.0.0.1:9300}], removed from nodes list
The application.properties of my Spring project is:
spring.data.elasticsearch.cluster-nodes=localhost:9300
index.v = default
server.port = 8443
And the docker-compose.yml is:
version: "2.2"
services:
elk:
image: cvazquezlos/elk:2.4.6
ports:
- 5000:5000
- 5601:5601
- 9200:9200
- 9300:9300
volumes:
- elk-data:/var/lib/elasticsearch
testloganalyzer:
image: cvazquezlos/testloganalyzer
ports:
- 8443:8443
volumes:
elk-data:
If I run the backend without Docker it works as expected, but when I run the backend with Docker it reports the above error. The complete error is:
failed to connect to node [{#transport#-1}{localhost}{127.0.0.1:9300}], removed from nodes list
org.elasticsearch.transport.ConnectTransportException: [][127.0.0.1:9300] connect_timeout[30s]
at org.elasticsearch.transport.netty.NettyTransport.connectToChannelsLight(NettyTransport.java:967) ~[elasticsearch-2.4.6.jar!/:2.4.6]
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:933) ~[elasticsearch-2.4.6.jar!/:2.4.6]
at org.elasticsearch.transport.netty.NettyTransport.connectToNodeLight(NettyTransport.java:906) ~[elasticsearch-2.4.6.jar!/:2.4.6]
at org.elasticsearch.transport.TransportService.connectToNodeLight(TransportService.java:267) ~[elasticsearch-2.4.6.jar!/:2.4.6]
at org.elasticsearch.client.transport.TransportClientNodesService$SimpleNodeSampler.doSample(TransportClientNodesService.java:390) ~[elasticsearch-2.4.6.jar!/:2.4.6]
at org.elasticsearch.client.transport.TransportClientNodesService$NodeSampler.sample(TransportClientNodesService.java:336) [elasticsearch-2.4.6.jar!/:2.4.6]
at org.elasticsearch.client.transport.TransportClientNodesService$ScheduledNodeSampler.run(TransportClientNodesService.java:369) [elasticsearch-2.4.6.jar!/:2.4.6]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_151]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_151]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_151]
Caused by: java.net.ConnectException: Connection refused: localhost/127.0.0.1:9300
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_151]
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[na:1.8.0_151]
at org.jboss.netty.channel.socket.nio.NioClientBoss.connect(NioClientBoss.java:152) ~[netty-3.10.6.Final.jar!/:na]
at org.jboss.netty.channel.socket.nio.NioClientBoss.processSelectedKeys(NioClientBoss.java:105) ~[netty-3.10.6.Final.jar!/:na]
at org.jboss.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:79) ~[netty-3.10.6.Final.jar!/:na]
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337) ~[netty-3.10.6.Final.jar!/:na]
at org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42) ~[netty-3.10.6.Final.jar!/:na]
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) ~[netty-3.10.6.Final.jar!/:na]
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) ~[netty-3.10.6.Final.jar!/:na]
... 3 common frames omitted
When the Spring Boot service runs in Docker, it cannot resolve localhost as the ES host. In this case you can use the links property as follows.
In the testloganalyzer section, add the links option:
testloganalyzer:
  image: cvazquezlos/testloganalyzer
  ports:
    - 8443:8443
  links:
    - elk:elk
The first value is the service name and the second is an alias.
Next, change its reference in application.properties:
spring.data.elasticsearch.cluster-nodes=elk:9300
I think you're missing the networks section in your docker-compose configuration, try this:
version: "2.2"
services:
elk:
image: cvazquezlos/elk:2.4.6
ports:
- 5000:5000
- 5601:5601
- 9200:9200
- 9300:9300
networks:
- elk-network
volumes:
- elk-data:/var/lib/elasticsearch
testloganalyzer:
image: cvazquezlos/testloganalyzer
ports:
- 8443:8443
networks:
- elk-network
volumes:
elk-data:
networks:
elk-network:
driver: bridge
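With both containers on the same user-defined network, the Spring side still has to point at the elk service name instead of localhost, the same change shown in the answer above:
spring.data.elasticsearch.cluster-nodes=elk:9300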
