Register Zipkin with Eureka server - spring-boot

I have an application that is divided into a few microservices (using Spring Eureka project).
All the services are registered with the Eureka server, so that communication between the services can go through a "Gateway API" (the Eureka server).
The logs produced by the services are reported to a Zipkin server that runs as a separate service.
Everything works as expected, but when I go to the Eureka dashboard I am not able to see my Zipkin service, since it is not registered with Eureka.
Question: Is it possible to register Zipkin with Eureka Server?
Here is my docker-compose file:
version: '3'
services:
  logs-aggregator:
    container_name: logs-aggregator
    image: maimas/sr-logs-aggregator-service:0.1.2-SNAPSHOT
    ports:
      - 9411:9411
  database-service:
    container_name: database-service
    image: maimas/sr-database-service:0.1.2-SNAPSHOT
    ports:
      - 27017:27017
      - 28017:28017
  gateway-api:
    depends_on:
      - logs-aggregator
      - database-service
    container_name: gateway-api
    image: maimas/sr-gateway-api-service:0.1.2-SNAPSHOT
    ports:
      - 8081:8081
    environment:
      EUREKA_CLIENT_SERVICEURL_DEFAULTZONE: http://admin:12341234@gateway-api:8081/eureka
      SPRING_ZIPKIN_BASEURL: http://logs-aggregator:9411/
  user-service:
    depends_on:
      - gateway-api
    container_name: user-service
    image: maimas/sr-user-service:0.1.2-SNAPSHOT
    ports:
      - 8082:8082
    environment:
      EUREKA_CLIENT_SERVICEURL_DEFAULTZONE: http://admin:12341234@gateway-api:8081/eureka
      SPRING_ZIPKIN_BASEURL: http://logs-aggregator:9411/
      SPRING_DATA_MONGODB_URI: mongodb://database-service/smartrent2
  property-service:
    container_name: property-service
    image: maimas/sr-property-service:0.1.2-SNAPSHOT
    ports:
      - 8083:8083
    environment:
      EUREKA_CLIENT_SERVICEURL_DEFAULTZONE: http://admin:12341234@gateway-api:8081/eureka
      SPRING_ZIPKIN_BASEURL: http://logs-aggregator:9411/
      SPRING_DATA_MONGODB_URI: mongodb://database-service/smartrent2
  renter-service:
    container_name: renter-service
    image: maimas/sr-renter-service:0.1.2-SNAPSHOT
    ports:
      - 8084:8084
    environment:
      EUREKA_CLIENT_SERVICEURL_DEFAULTZONE: http://admin:12341234@gateway-api:8081/eureka
      SPRING_ZIPKIN_BASEURL: http://logs-aggregator:9411/
      SPRING_DATA_MONGODB_URI: mongodb://database-service/smartrent2

I think you must have found your answer by now. But I am posting this for future reference.
Take a look at this GitHub issue; it basically explains everything and provides a few workarounds.
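For future readers: one common workaround is to make the Zipkin service itself a Eureka client. If the logs-aggregator image is your own Spring Boot app embedding Zipkin (rather than the stock openzipkin/zipkin image), registration only needs the usual Eureka client settings. A minimal sketch, assuming spring-cloud-starter-netflix-eureka-client is on that app's classpath:

  logs-aggregator:
    container_name: logs-aggregator
    image: maimas/sr-logs-aggregator-service:0.1.2-SNAPSHOT
    ports:
      - 9411:9411
    environment:
      # same registration URL the other services already use
      EUREKA_CLIENT_SERVICEURL_DEFAULTZONE: http://admin:12341234@gateway-api:8081/eureka
      SPRING_APPLICATION_NAME: logs-aggregator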

Related

I want to communicate my client and kafka broker with docker compose

The client, Kafka, and Zookeeper are in the same network. I am trying to connect from the client to Kafka with SERVICE_NAME:PORT, but I get this error:
driver-service-container | 2022-07-24 09:00:05.076 WARN 1 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Connection to node 1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
I know that containers in the same network can easily communicate using the service name, but I don't understand why it doesn't work here.
The name of my client trying to communicate with kafka is
driver-service
I looked through these resources but according to them my method should work:
Connect to Kafka running in Docker
My Python/Java/Spring/Go/Whatever Client Won’t Connect to My Apache
Kafka Cluster in Docker/AWS/My Brother’s Laptop. Please Help!
driver-service GitHub repository
My docker-compose file:
version: '3'
services:
  gateway-server:
    image: gateway-server-image
    container_name: gateway-server-container
    ports:
      - '5555:5555'
    environment:
      - SECURE_KEY_USERNAME=randomSecureKeyUsername!
      - SECURE_KEY_PASSWORD=randomSecureKeyPassword!
      - PASSENGER_SERVICE_URL=172.24.2.4:4444
      - DRIVER_SERVICE_URL=172.24.2.5:3333
    networks:
      microservicesNetwork:
        ipv4_address: 172.24.2.6
  driver-service:
    image: driver-service-image
    container_name: driver-service-container
    ports:
      - '3333:3333'
    environment:
      - NOTIFICATION_SERVICE_URL=172.24.2.3:8888
      - PAYMENT_SERVICE_URL=172.24.2.2:7777
      - SECURE_KEY_USERNAME=randomSecureKeyUsername!
      - SECURE_KEY_PASSWORD=randomSecureKeyPassword!
      - KAFKA_GROUP_ID=driver-group-id
      - KAFKA_BOOTSTRAP_SERVERS=broker:29092
      - kafka.consumer.group.id=driver-group-id
      - kafka.consumer.enable.auto.commit=true
      - kafka.consumer.auto.commit.interval.ms=1000
      - kafka.consumer.auto.offset.reset=earliest
      - kafka.consumer.max.poll.records=1
    networks:
      microservicesNetwork:
        ipv4_address: 172.24.2.5
  passenger-service:
    image: passenger-service-image
    container_name: passenger-service-container
    ports:
      - '4444:4444'
    environment:
      - PAYMENT_SERVICE_URL=172.24.2.2:7777
      - SECURE_KEY_USERNAME=randomSecureKeyUsername!
      - SECURE_KEY_PASSWORD=randomSecureKeyPassword!
    networks:
      microservicesNetwork:
        ipv4_address: 172.24.2.4
  notification-service:
    image: notification-service-image
    container_name: notification-service-container
    ports:
      - '8888:8888'
    environment:
      - SECURE_KEY_USERNAME=randomSecureKeyUsername!
      - SECURE_KEY_PASSWORD=randomSecureKeyPassword!
    networks:
      microservicesNetwork:
        ipv4_address: 172.24.2.3
  payment-service:
    image: payment-service-image
    container_name: payment-service-container
    ports:
      - '7777:7777'
    environment:
      - SECURE_KEY_USERNAME=randomSecureKeyUsername!
      - SECURE_KEY_PASSWORD=randomSecureKeyPassword!
    networks:
      microservicesNetwork:
        ipv4_address: 172.24.2.2
  zookeeper:
    image: confluentinc/cp-zookeeper:7.0.1
    container_name: zookeeper
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    networks:
      - microservicesNetwork
  broker:
    image: confluentinc/cp-kafka:7.0.1
    container_name: broker
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_INTERNAL:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092,PLAINTEXT_INTERNAL://broker:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      GROUP_ID: driver-group-id
      KAFKA_CREATE_TOPICS: "product"
    networks:
      - microservicesNetwork
  kafka-ui:
    image: provectuslabs/kafka-ui
    container_name: kafka-ui
    ports:
      - "8080:8080"
    restart: always
    environment:
      - KAFKA_CLUSTERS_0_NAME=broker
      - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=broker:29092
      - KAFKA_CLUSTERS_0_ZOOKEEPER=zookeeper:2181
      - KAFKA_CLUSTERS_0_READONLY=true
    networks:
      - microservicesNetwork
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    platform: linux/x86_64
    environment:
      - discovery.type=single-node
      - max_open_files=65536
      - max_content_length_in_bytes=100000000
      - transport.host=elasticsearch
    volumes:
      - $HOME/app:/var/app
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - microservicesNetwork
  postgresql:
    image: postgres:11.1-alpine
    platform: linux/x86_64
    container_name: postgresql
    volumes:
      - ./postgresql/:/var/lib/postgresql/data/
    environment:
      - POSTGRES_PASSWORD=123456
      - POSTGRES_USER=postgres
      - POSTGRES_DB=cqrs_db
    ports:
      - "5432:5432"
    networks:
      - microservicesNetwork
networks:
  microservicesNetwork:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.24.2.0/16
          gateway: 172.24.2.1
application.prod.properties ->
#datasource
spring.datasource.url=jdbc:h2:mem:db_driver
spring.datasource.username=root
spring.datasource.password=1234
spring.datasource.driver-class-name=org.h2.Driver
spring.jpa.database-platform=org.hibernate.dialect.H2Dialect
#need spring-security config.
spring.h2.console.enabled=false
spring.h2.console.path=/h2-console
spring.jpa.show-sql=true
service.security.secure-key-username=${SECURE_KEY_USERNAME}
service.security.secure-key-password=${SECURE_KEY_PASSWORD}
payment.service.url=${PAYMENT_SERVICE_URL}
notification.service.url=${NOTIFICATION_SERVICE_URL}
#kafka configs
kafka.bootstrap.servers=${KAFKA_BOOTSTRAP_SERVERS}
kafka.group.id =${KAFKA_GROUP_ID}
spring.cache.cache-names=driver
spring.jackson.serialization.fail-on-empty-beans= false
spring.http.multipart.max-file-size=10MB
spring.http.multipart.max-request-size=11MB
If the error says localhost/127.0.0.1:9092, then your environment variable isn't being used.
In the startup logs from the container, look at the AdminClientConfig or ConsumerConfig sections, and you'll see the real bootstrap address that's used.
KAFKA_BOOTSTRAP_SERVERS=broker:29092 is correct based on your KAFKA_ADVERTISED_LISTENERS.
But, in your properties, it's unclear how this is used without showing your config class:
kafka.bootstrap.servers=${KAFKA_BOOTSTRAP_SERVERS}
If you read the Spring Kafka documentation closely, you'll see it needs to be spring.kafka.bootstrap-servers in order to be wired in automatically.
Sidenote: all those kafka.consumer. attributes would need to be set as JVM properties, not container environment variables.
Also, Docker services should be configured to communicate with each other by service names, not assigned IP addresses.
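For clarity, a minimal sketch of the change this answer describes, assuming you rely on Spring Boot's Kafka auto-configuration (the environment variable is the one already defined in the compose file):

# application.prod.properties
spring.kafka.bootstrap-servers=${KAFKA_BOOTSTRAP_SERVERS}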
Problem solved 😊
If I run driver-service on my local machine, it connects to Kafka via localhost:9092, but if driver-service and Kafka are in the same Docker network, it needs to connect via "KAFKA_IP:29092" (the service name can be used instead of KAFKA_IP); Kafka expects us to configure it for these different network environments (Source). When I ran my driver-service application on my local machine, Kafka and driver-service could communicate, but they could not communicate inside the same Docker network. That is, driver-service was not using the Kafka connection address I had defined in the application.prod.properties file that my application should use while running in Docker. The problem was in my Spring Kafka integration: I was trying to give my client application the address to connect to Kafka with the kafka.bootstrap.servers key in my properties file, defining the key there and reading its value in my KafkaBean class, but the client did not see it and kept trying to connect to localhost:9092. First, I set my active profile in my Dockerfile with ENTRYPOINT ["java", "-Dspring.profiles.active=prod", "-jar", "driver-service-0.0.2-SNAPSHOT.jar"] so that application.prod.properties is used in the Docker environment. Then, if we use the key spring.kafka.bootstrap-servers instead of kafka.bootstrap.servers, as stated in the Spring Kafka documentation (Source), Spring can automatically detect the address it should use to connect to Kafka. Finally, I just had to give the producer the Kafka address as well, using the @Value annotation, so that driver-service and Kafka could communicate seamlessly in the Docker network 😇
Thank you very much @OneCricketeer and @Svend for your help.
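To illustrate that last step, here is a minimal sketch of wiring the same property into a producer with @Value; the class and bean names are hypothetical and not taken from the repository:

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class KafkaProducerConfig {

    // Resolved from application.prod.properties (which reads the container environment)
    @Value("${spring.kafka.bootstrap-servers}")
    private String bootstrapServers;

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(props);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}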

Can't connect to my kafka running on docker from spring boot application running via intellij

I have a docker-compose file with Kafka, Zookeeper, and a Spring Boot application.
While I run the entire file, everything works fine.
When I run it without my Spring Boot application, in order to debug the application via IntelliJ, it cannot connect to Kafka and doesn't work properly.
my docker-compose file:
version: "3.5" services: # Install Zookeeper. zookeeper:
container_name: zookeeper
image: debezium/zookeeper:1.2
networks:
- mynetwork
ports:
- 2181:2181
- 2888:2888
- 3888:3888 # Install Kafka. kafka:
container_name: kafka
image: debezium/kafka:1.2
depends_on:
- zookeeper
ports:
- 9092:9092
- 29092:29092
networks:
- mynetwork
extra_hosts:
- "host.docker.internal:host-gateway"
environment:
- ZOOKEEPER_CONNECT=zookeeper:2181
- KAFKA_LISTENER_SECURITY_PROTOCOL_MAP= INTERNAL:PLAINTEXT,PLAINTEXT:PLAINTEXT,EXTERNAL_SAME_HOST:PLAINTEXT
- KAFKA_ADVERTISED_LISTENERS= INTERNAL://kafka:9092,EXTERNAL_SAME_HOST://localhost:29092
- KAFKA_LISTENERS= EXTERNAL_SAME_HOST://:29092,INTERNAL://:9092
- KAFKA_INTER_BROKER_LISTENER_NAME= PLAINTEXT # Install Postgres. postgres:
container_name: postgres
image: debezium/postgres:12
volumes:
- ./sql/init.sql:/docker-entrypoint-initdb.d/init.sql
ports:
- 5432:5432
networks:
- mynetwork
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- POSTGRES_DB=postgres kafka-ui:
container_name: kafka-ui
image: provectuslabs/kafka-ui:0.2.1
ports:
- 8080:8080
networks:
- mynetwork
environment:
- KAFKA_CLUSTERS_0_NAME=local
- KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092 #Deploy a Consumer. consumer:
build:
context: .
container_name: pledge-consumer
environment:
- SPRING_DATASOURCE_URL=jdbc:postgresql://postgres:5432/postgres
ports:
- 8101:8080
networks:
- mynetwork
image: isber/ssm-pledgeservice:v1
depends_on:
- zookeeper
- kafka
- postgres
networks: mynetwork:
external: true
In the application I tried:
spring.kafka.bootstrap-servers=kafka:9092
which works when I run it via Docker but not from IntelliJ.
I also tried, when running with IntelliJ:
spring.kafka.bootstrap-servers=localhost:9092
spring.kafka.bootstrap-servers=localhost:29092
I found the problem: the image I used,
image: debezium/kafka:1.2
had a problem and didn't read any of the environment variables I added.
I upgraded to
image: debezium/kafka:1.4
and everything works.
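Given the advertised listeners in this compose file (INTERNAL://kafka:9092 inside the network, EXTERNAL_SAME_HOST://localhost:29092 on the host), here is a hedged sketch of the bootstrap setting you would typically use in each environment, assuming Spring Boot's standard Kafka auto-configuration:

# when the application runs inside the compose network
spring.kafka.bootstrap-servers=kafka:9092
# when the application runs on the host, e.g. started from IntelliJ
spring.kafka.bootstrap-servers=localhost:29092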

Microservice not able to connect to AXON Server running as docker image

I have discovery service: https://github.com/Naresh-Chaurasia/API-MicroServices-Kafka/tree/master/Microservices-CQRS-SAGA-Kafka/DiscoveryService
I have product service: https://github.com/Naresh-Chaurasia/API-MicroServices-Kafka/tree/master/Microservices-CQRS-SAGA-Kafka/ProductsService
Following is my docker-compose.yml file: https://github.com/Naresh-Chaurasia/API-MicroServices-Kafka/tree/master/Microservices-CQRS-SAGA-Kafka/docker-compose.yml
version: "3.8"
services:
axon-server:
image: axoniq/axonserver
container_name: axon-server
ports:
- 8124:8124
- 8024:8024
networks:
- axon-demo
discovery-service:
build:
context: ./DiscoveryService
container_name: discovery-service
ports:
- 8010:8010
networks:
- axon-demo
networks:
axon-demo:
driver: bridge
When I run the command docker-compose up, I get axon-server and discovery-service running.
I now run ProductService using the following file src/main/java/com/appsdeveloperblog/estore/ProductsService/ProductsServiceApplication.java which is under https://github.com/Naresh-Chaurasia/API-MicroServices-Kafka/tree/master/Microservices-CQRS-SAGA-Kafka/ProductsService.
It works fine.
The problem starts when I try to run ProductService as a microservice and it fails to connect to the Axon server. To do that, I modify docker-compose.yml as follows:
version: "3.8"
services:
axon-server:
image: axoniq/axonserver
container_name: axon-server
ports:
- 8124:8124
- 8024:8024
networks:
- axon-demo
discovery-service:
build:
context: ./DiscoveryService
container_name: discovery-service
ports:
- 8010:8010
networks:
- axon-demo
product-service:
build:
context: ./ProductsService
ports:
- 8090:8090
depends_on:
- axon-server
- discovery-service
networks:
- axon-demo
networks:
axon-demo:
driver: bridge
Now if I try to run docker-compose up, I get the following error:
product-service_1 | 2021-08-20 03:06:27.973 INFO 1 --- [#29db04f6c0a4-0] i.a.a.c.impl.AxonServerManagedChannel : Requesting connection details from localhost:8124
product-service_1 | 2021-08-20 03:06:30.004 WARN 1 --- [#29db04f6c0a4-0] i.a.a.c.impl.AxonServerManagedChannel : Connecting to AxonServer node [localhost:8124] failed: UNAVAILABLE: io exception
I have gone through the following link, Spring Boot Microservices are unable to connect to Axon Server, which looks like a similar problem, but I am still not able to fix mine.
Please guide.
Thanks.
I made the following changes and it works fine now.
In the file ProductsService\src\main\resources\application.properties
Old Entry: eureka.client.serviceUrl.defaultZone = http://localhost:8010/eureka
New Entry: eureka.client.serviceUrl.defaultZone = http://discovery-service:8010/eureka
Added the following line in the above file: axon.axonserver.servers=axon-server:8124
In the File: Microservices-CQRS-SAGA-Kafka\docker-compose.yml
Old Entry: version: "3.8"
New Entry: version: "2"
Added the following code to the yml file:
product-service:
  build:
    context: ./ProductsService
  ports:
    - 8090:8090
  networks:
    - axon-demo
I have also checked the code into Git. The microservices can talk to each other and connect to the Axon server, and the problem reported earlier is resolved.
Thanks.
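Put together, the Docker-profile settings from this fix look roughly like the sketch below (entries taken from the changes listed above; the host names match the service names in the docker-compose file):

# ProductsService/src/main/resources/application.properties (when running in compose)
eureka.client.serviceUrl.defaultZone=http://discovery-service:8010/eureka
axon.axonserver.servers=axon-server:8124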
Just a thought.
Try adding the following to each service's application.properties:
axon.axonserver.servers=localhost:8124
and, for each service that connects to axon-server, override it in the environment section of docker-compose.yml:
axon-server:
  image: axoniq/axonserver
  container_name: axon-server
  ports:
    - 8124:8124
    - 8024:8024
product-service:
  ...
  environment:
    AXON_AXONSERVER_SERVERS: axon-server:8124

Spring Cloud Consul and Consul Clients dockerized

I have 2 applications, both written using Spring Boot, and each runs in its own Docker container. I also have Consul running in another container, and I have exposed port 8500 for it in the docker-compose.yml file. How do I tell my Spring Boot applications where to register themselves, i.e. where Consul is running? Do I give the address of the mapped port (the port mapped to my local machine), or is some other change needed?
The example I'm using right now: https://github.com/Java-Techie-jt/cloud-consul-service-discovery
Edit:
docker-compose.yml:
version: "2"
services:
consul:
container_name: consul
image: consul
expose:
- "8300"
- "8400"
- "8500"
restart: always
registrator:
container_name: registrator
image: gliderlabs/registrator:master
volumes:
- "/var/run/docker.sock:/tmp/docker.sock"
command: -internal consul://consul:8500
restart: always
depends_on:
- consul
web1:
image: deis/mock-http-server
container_name: web1
expose:
- "8080"
environment:
SERVICE_NAME: "web"
SERVICE_TAGS: "web"
restart: always
depends_on:
- registrator
web2:
image: deis/mock-http-server
container_name: web2
expose:
- "8080"
environment:
SERVICE_8080_NAME: "web"
SERVICE_8080_TAGS: "web"
restart: always
depends_on:
- registrator
haproxy:
build: ./haproxy
container_name: my-haproxy
image: anthcourtney/haproxy-consul
ports:
- 80
depends_on:
- web1
- web2
test:
container_name: test-client
build: ./test
depends_on:
- haproxy
networks:
default:
You can use registrator for your service registry.
Registrator automatically registers and deregisters services for any Docker container by inspecting containers as they come online. Registrator supports pluggable service registries, which currently includes Consul, etcd and SkyDNS 2.
You can run registrator as a container. It will register each port of your application. Below is a sample compose file:
version: '2'
services:
  registrator:
    image: "${REGISTRY}gliderlabs/registrator:latest"
    command: [
      "-ip=<docker-host-ip>",
      "-retry-attempts", "100",
      "-cleanup",
      # "-internal",
      "consul://vconsul:8500"
    ]
Official documentation: https://gliderlabs.github.io/registrator/latest/
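If you would rather have the Spring Boot apps register themselves instead of relying on registrator, here is a minimal sketch of the client-side settings, assuming the apps run in the same compose network as the consul container (so the service name resolves); the application name is a placeholder:

# application.properties of each Spring Boot service
spring.application.name=my-service
spring.cloud.consul.host=consul
spring.cloud.consul.port=8500
spring.cloud.consul.discovery.prefer-ip-address=true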

Connecting Spring Cloud Applications in Docker Container

I am attempting to host a Spring Cloud application in Docker containers. The underlying exception is as follows:
search_1 | Caused by: java.lang.IllegalStateException: Invalid URL: config:8888
I understand the reason is the URL specified for my config server.
spring.application.name=inventory-client
#spring.cloud.config.uri=http://localhost:8888
spring.cloud.config.uri=config:8888
On my development machine, I am able to use localhost. However, based on a past question (relating to connecting to my database), I learned that localhost is not appropriate in containers. For my database, I was able to use the following:
spring.jpa.hibernate.ddl-auto=none
spring.jpa.show-sql=false
spring.jpa.database-platform=org.hibernate.dialect.PostgreSQLDialect
spring.datasource.url=jdbc:postgresql://db:5432/leisurely_diversion
#spring.datasource.url=jdbc:postgresql://localhost:5000/leisurely_diversion
spring.datasource.driver-class-name=org.postgresql.Driver
but this obviously did not work as expected for the configuration server.
My docker-compose file:
# Use postgres/example user/password credentials
version: '3.2'
services:
db:
image: postgres
ports:
- 5000:5432
environment:
POSTGRES_PASSWORD: example
volumes:
- type: volume
source: psql_data
target: /var/lib/postgresql/data
networks:
- app
restart: always
config:
image: kellymarchewa/config_server
networks:
- app
volumes:
- /root/.ssh:/root/.ssh
restart: always
search:
image: kellymarchewa/search_api
networks:
- app
restart: always
ports:
- 8082:8082
depends_on:
- db
- config
- inventory
inventory:
image: kellymarchewa/inventory_api
depends_on:
- db
- config
ports:
- 8081:8081
networks:
- app
restart: always
volumes:
psql_data:
networks:
app:
Both services are running on the same user-defined network; how do I allow the services to find the configuration service?
Thanks.
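A hedged sketch of one likely fix, assuming the clients resolve the config container by its compose service name: the "Invalid URL: config:8888" error suggests the scheme is missing, so the client property needs a full URL:

# inventory-client (and search) configuration
spring.cloud.config.uri=http://config:8888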
