Unable to host my Spring Boot modules in a Docker environment. I am connecting to an Oracle Database 11g instance and using Sleuth, Zipkin, and RabbitMQ.
If I do not add Sleuth, Zipkin, and RabbitMQ, my docker-compose file runs successfully, but I am facing issues after adding these three features. Any help is appreciated.
Below is my docker-compose file:
version: '3.5'
services:
apigateway:
image: kolludocker/apigateway-apigateway:0.0.1-phase4
ports:
- "8765:8765"
networks:
- eurekaserver-network
depends_on:
- eurekaserver
- rabbitmq
environment:
EUREKA.CLIENT.SERVICEURL.DEFAULTZONE: http://eurekaserver:8761/eureka
SPRING.ZIPKIN.BASEURL: http://zipkin-server:9411/
RABBIT_URI: amqp://guest:guest@rabbitmq:5672
SPRING_RABBITMQ_HOST: rabbitmq
SPRING.ZIPKIN_SENDER_TYPE: rabbit
environment:
- TZ="Asia/Kolkata"
bankmodule:
image: kolludocker/bank-bankmodule:0.0.1-phase6
ports:
- "9096:9096"
networks:
- eurekaserver-network
depends_on:
- eurekaserver
- rabbitmq
environment:
EUREKA.CLIENT.SERVICEURL.DEFAULTZONE: http://eurekaserver:8761/eureka
environment:
SPRING.ZIPKIN.BASEURL: http://zipkin-server:9411/
RABBIT_URI: amqp://guest:guest@rabbitmq:5672
SPRING_RABBITMQ_HOST: rabbitmq
SPRING.ZIPKIN_SENDER_TYPE: rabbit
environment:
- TZ="Asia/Kolkata"
customermodule:
image: kolludocker/customer-customermodule:0.0.1-phase6
ports:
- "9095:9095"
networks:
- eurekaserver-network
depends_on:
- eurekaserver
- rabbitmq
environment:
EUREKA.CLIENT.SERVICEURL.DEFAULTZONE: http://eurekaserver:8761/eureka
environment:
SPRING.ZIPKIN.BASEURL: http://zipkin-server:9411/
RABBIT_URI: amqp://guest:guest@rabbitmq:5672
SPRING_RABBITMQ_HOST: rabbitmq
SPRING.ZIPKIN_SENDER_TYPE: rabbit
environment:
- TZ="Asia/Kolkata"
eurekaserver:
image: kolludocker/eurekaserver-eurekaserver:0.0.1-phase3
ports:
- "8761:8761"
networks:
- eurekaserver-network
environment:
- TZ="Asia/Kolkata"
zipkin-server:
image: openzipkin/zipkin:2.23
ports:
- "9411:9411"
networks:
- eurekaserver-network
depends_on:
- rabbitmq
environment:
- TZ="Asia/Kolkata"
environment:
RABBIT_URI: amqp://guest:guest@rabbitmq:5672
rabbitmq:
image: rabbitmq:3.5.3-management
ports:
- "5672:5672"
- "15672:15672"
networks:
- eurekaserver-network
environment:
- TZ="Asia/Kolkata"
networks:
eurekaserver-network:
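Note that several services above declare environment: more than once; a YAML mapping key may appear only once, so depending on the Compose version the duplicate blocks are either rejected or silently collapsed into the last one, which would drop the Eureka, Zipkin, and RabbitMQ settings. A minimal sketch of a single merged block for the apigateway service, assuming the same hosts and credentials as above (variable names are written with underscores, the form Spring Boot's relaxed binding expects):

# Hypothetical merged environment block for the apigateway service
apigateway:
  image: kolludocker/apigateway-apigateway:0.0.1-phase4
  ports:
    - "8765:8765"
  networks:
    - eurekaserver-network
  depends_on:
    - eurekaserver
    - rabbitmq
  environment:
    EUREKA_CLIENT_SERVICEURL_DEFAULTZONE: http://eurekaserver:8761/eureka
    SPRING_ZIPKIN_BASEURL: http://zipkin-server:9411/
    RABBIT_URI: amqp://guest:guest@rabbitmq:5672
    SPRING_RABBITMQ_HOST: rabbitmq
    SPRING_ZIPKIN_SENDER_TYPE: rabbit
    TZ: Asia/Kolkata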
My customer module property file:
spring.application.name=customermodule
server.port=9095
#spring.datasource.url=jdbc:oracle:thin:@localhost:1522/xe
spring.datasource.url=jdbc:oracle:thin:@localhost:1521/xe
#spring.datasource.url=jdbc:oracle:thin:@systemipaddress:1521/xe
spring.datasource.username=system
spring.datasource.password=system
spring.datasource.driver-class-name=oracle.jdbc.OracleDriver
# HikariCP settings
spring.datasource.hikari.minimumIdle=5
spring.datasource.hikari.maximumPoolSize=20
spring.datasource.hikari.idleTimeout=30000
spring.datasource.hikari.maxLifetime=2000000
spring.datasource.hikari.connectionTimeout=30000
spring.datasource.hikari.poolName=HikariPoolKollu
#hibernate configs
# Hibernate ddl auto (create, create-drop, validate, update)
spring.jpa.database-platform=org.hibernate.dialect.Oracle10gDialect
spring.jpa.hibernate.ddl-auto=update
spring.jpa.show-sql=true
#Disable default Spring Boot loggers
#logging.pattern.console=
spring.config.import=optional:configserver:localhost:8888
#Eureka Server
eureka.client.service-url.defaultZone=http://localhost:8761/eureka
#eureka.client.service-url.defaultZone=http://systemipaddress:8761/eureka
#fault tolerance
#default attempts are 3, but we require 5 attempts
resilience4j.retry.instances.custRetryFallback.max-attempts=5
#each request should fire after 300 sec -5 minutes
resilience4j.retry.instances.custRetryFallback.wait-duration.seconds=1
#If the request fails, wait 1 + random_number_milliseconds seconds and retry
resilience4j.retry.instances.custRetryFallback.enable-exponential-backoff=true
#CircuitBreakers
resilience4j.circuitbreaker.instances.kollu-retry.failure-rate-threshold=50
#RateLimit
#2 requests in 300 sec's
resilience4j.ratelimiter.instances.kollu-retry.limit-for-period=2
resilience4j.ratelimiter.instances.kollu-retry.limit-refresh-period.seconds=300
#bulk requests
#concurrent call mean at time 10 users can send requests
resilience4j.bulkhead.instances.kollu-retry.max-concurrent-calls=10
#If we want to ignore zipkin server connectivity enable below line
#spring.zipkin.enabled=true
#spring.zipkin.baseUrl=http://zipkin-server/
#spring.zipkin.locator.discovery.enabled=true
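One thing worth noting: inside a container, localhost in spring.datasource.url and eureka.client.service-url.defaultZone refers to the container itself, not to the machine running Oracle or Eureka. A minimal sketch of properties that can be overridden from docker-compose; the ORACLE_HOST, EUREKA_HOST, and ZIPKIN_HOST variable names and their defaults are assumptions for illustration, not values from the project:

# Hypothetical placeholders resolved from environment variables set in docker-compose,
# falling back to the local-development values when run outside Docker.
spring.datasource.url=jdbc:oracle:thin:@${ORACLE_HOST:localhost}:1521/xe
eureka.client.service-url.defaultZone=http://${EUREKA_HOST:localhost}:8761/eureka
spring.zipkin.baseUrl=http://${ZIPKIN_HOST:localhost}:9411/
spring.zipkin.sender.type=rabbit
spring.rabbitmq.host=${SPRING_RABBITMQ_HOST:localhost}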
Related
There are a client, Kafka, and Zookeeper in the same network. I am trying to connect from the client to Kafka with SERVICE_NAME:PORT, but
driver-service-container | 2022-07-24 09:00:05.076 WARN 1 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient : [AdminClient clientId=adminclient-1] Connection to node 1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
I get an error.
I know that containers in the same network can easily communicate using the service name, but I don't understand why it doesn't work.
The name of my client trying to communicate with kafka is
driver-service
I looked through these resources but according to them my method should work:
Connect to Kafka running in Docker
My Python/Java/Spring/Go/Whatever Client Won’t Connect to My Apache
Kafka Cluster in Docker/AWS/My Brother’s Laptop. Please Help!
driver-service GitHub repository
My docker-compose file:
version: '3'
services:
gateway-server:
image: gateway-server-image
container_name: gateway-server-container
ports:
- '5555:5555'
environment:
- SECURE_KEY_USERNAME=randomSecureKeyUsername!
- SECURE_KEY_PASSWORD=randomSecureKeyPassword!
- PASSENGER_SERVICE_URL=172.24.2.4:4444
- DRIVER_SERVICE_URL=172.24.2.5:3333
networks:
microservicesNetwork:
ipv4_address: 172.24.2.6
driver-service:
image: driver-service-image
container_name: driver-service-container
ports:
- '3333:3333'
environment:
- NOTIFICATION_SERVICE_URL=172.24.2.3:8888
- PAYMENT_SERVICE_URL=172.24.2.2:7777
- SECURE_KEY_USERNAME=randomSecureKeyUsername!
- SECURE_KEY_PASSWORD=randomSecureKeyPassword!
- KAFKA_GROUP_ID=driver-group-id
- KAFKA_BOOTSTRAP_SERVERS=broker:29092
- kafka.consumer.group.id=driver-group-id
- kafka.consumer.enable.auto.commit=true
- kafka.consumer.auto.commit.interval.ms=1000
- kafka.consumer.auto.offset.reset=earliest
- kafka.consumer.max.poll.records=1
networks:
microservicesNetwork:
ipv4_address: 172.24.2.5
passenger-service:
image: passenger-service-image
container_name: passenger-service-container
ports:
- '4444:4444'
environment:
- PAYMENT_SERVICE_URL=172.24.2.2:7777
- SECURE_KEY_USERNAME=randomSecureKeyUsername!
- SECURE_KEY_PASSWORD=randomSecureKeyPassword!
networks:
microservicesNetwork:
ipv4_address: 172.24.2.4
notification-service:
image: notification-service-image
container_name: notification-service-container
ports:
- '8888:8888'
environment:
- SECURE_KEY_USERNAME=randomSecureKeyUsername!
- SECURE_KEY_PASSWORD=randomSecureKeyPassword!
networks:
microservicesNetwork:
ipv4_address: 172.24.2.3
payment-service:
image: payment-service-image
container_name: payment-service-container
ports:
- '7777:7777'
environment:
- SECURE_KEY_USERNAME=randomSecureKeyUsername!
- SECURE_KEY_PASSWORD=randomSecureKeyPassword!
networks:
microservicesNetwork:
ipv4_address: 172.24.2.2
zookeeper:
image: confluentinc/cp-zookeeper:7.0.1
container_name: zookeeper
ports:
- "2181:2181"
- "2888:2888"
- "3888:3888"
environment:
ZOOKEEPER_CLIENT_PORT: 2181
ZOOKEEPER_TICK_TIME: 2000
networks:
- microservicesNetwork
broker:
image: confluentinc/cp-kafka:7.0.1
container_name: broker
ports:
- "9092:9092"
depends_on:
- zookeeper
environment:
KAFKA_BROKER_ID: 1
KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_INTERNAL:PLAINTEXT
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092,PLAINTEXT_INTERNAL://broker:29092
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
GROUP_ID: driver-group-id
KAFKA_CREATE_TOPICS: "product"
networks:
- microservicesNetwork
kafka-ui:
image: provectuslabs/kafka-ui
container_name: kafka-ui
ports:
- "8080:8080"
restart: always
environment:
- KAFKA_CLUSTERS_0_NAME=broker
- KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=broker:29092
- KAFKA_CLUSTERS_0_ZOOKEEPER=zookeeper:2181
- KAFKA_CLUSTERS_0_READONLY=true
networks:
- microservicesNetwork
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
platform: linux/x86_64
environment:
- discovery.type=single-node
- max_open_files=65536
- max_content_length_in_bytes=100000000
- transport.host= elasticsearch
volumes:
- $HOME/app:/var/app
ports:
- "9200:9200"
- "9300:9300"
networks:
- microservicesNetwork
postgresql:
image: postgres:11.1-alpine
platform: linux/x86_64
container_name: postgresql
volumes:
- ./postgresql/:/var/lib/postgresql/data/
environment:
- POSTGRES_PASSWORD=123456
- POSTGRES_USER=postgres
- POSTGRES_DB=cqrs_db
ports:
- "5432:5432"
networks:
- microservicesNetwork
networks:
microservicesNetwork:
driver: bridge
ipam:
driver: default
config:
- subnet: 172.24.2.0/16
gateway: 172.24.2.1
application.prod.properties ->
#datasource
spring.datasource.url=jdbc:h2:mem:db_driver
spring.datasource.username=root
spring.datasource.password=1234
spring.datasource.driver-class-name=org.h2.Driver
spring.jpa.database-platform=org.hibernate.dialect.H2Dialect
#need spring-security config.
spring.h2.console.enabled=false
spring.h2.console.path=/h2-console
spring.jpa.show-sql=true
service.security.secure-key-username=${SECURE_KEY_USERNAME}
service.security.secure-key-password=${SECURE_KEY_PASSWORD}
payment.service.url=${PAYMENT_SERVICE_URL}
notification.service.url=${NOTIFICATION_SERVICE_URL}
#kafka configs
kafka.bootstrap.servers=${KAFKA_BOOTSTRAP_SERVERS}
kafka.group.id =${KAFKA_GROUP_ID}
spring.cache.cache-names=driver
spring.jackson.serialization.fail-on-empty-beans= false
spring.http.multipart.max-file-size=10MB
spring.http.multipart.max-request-size=11MB
If the error says localhost/127.0.0.1:9092, then your environment variable isn't being used.
In the startup logs from the container, look at AdminClientConfig or ConsumerConfig sections, and you'll see the real bootstrap address that's used
KAFKA_BOOTSTRAP_SERVERS=broker:29092 is correct based on your KAFKA_ADVERTISED_LISTENERS
But, in your properties, it's unclear how this is used without showing your config class
kafka.bootstrap.servers=${KAFKA_BOOTSTRAP_SERVERS}
If you read the Spring Kafka documentation closely, you'll see it needs to be spring.kafka.bootstrap-servers in order to be wired in automatically.
Sidenote: All those kafka.consumer. attributes would need to be set as JVM properties, not container environment variables.
Also, Docker services should be configured to communicate with each other by service names, not assigned IP addresses
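For illustration, a minimal sketch of the rename this answer describes, assuming the standard Spring Boot Kafka auto-configuration is in use; the consumer keys shown are the Spring property equivalents of the container variables above:

# Picked up automatically by Spring Boot's Kafka auto-configuration
spring.kafka.bootstrap-servers=${KAFKA_BOOTSTRAP_SERVERS}
spring.kafka.consumer.group-id=${KAFKA_GROUP_ID}
spring.kafka.consumer.enable-auto-commit=true
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.max-poll-records=1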
Problem solved 😊
If I run driver-service on my local computer, it connects via localhost:9092, but when driver-service and Kafka are in the same Docker network it needs to connect via "KAFKA_IP:29092" (the service name can be used instead of KAFKA_IP); Kafka expects us to configure a listener for each of these different network environments (source). When I ran my driver-service application on my local computer, Kafka and driver-service could communicate, but they could not communicate in the same Docker network. In other words, driver-service was not using the Kafka connection address I had defined in the application.prod.properties file that my application should use while running in Docker. The problem was in my Spring Kafka integration: I was giving my client application the Kafka address through the kafka.bootstrap.servers key in my properties file, defining this key in the properties file and reading its value in a KafkaBean class, but the client did not see it and persistently kept trying to connect to localhost:9092. First, I specified my active profile in my Dockerfile with ENTRYPOINT ["java", "-Dspring.profiles.active=prod", "-jar", "driver-service-0.0.2-SNAPSHOT.jar"] so that the application.prod.properties file is used when running in the Docker environment. Then, if we use the key spring.kafka.bootstrap-servers instead of kafka.bootstrap.servers, as stated in the Spring Kafka documentation (source), Spring can automatically detect which address to use to connect to Kafka. Finally, I just had to give the producer the Kafka address as well, using the @Value annotation, so that driver-service and Kafka could communicate seamlessly in the Docker network 😇
Thank you very much @OneCricketeer and @Svend for your help.
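For reference, a sketch of how the producer side might pick up the same address with @Value; the class and bean names below are hypothetical and not taken from the driver-service repository:

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class KafkaProducerConfig {

    // Resolves to broker:29092 inside the Docker network and localhost:9092 locally,
    // depending on which profile's properties file is active.
    @Value("${spring.kafka.bootstrap-servers}")
    private String bootstrapServers;

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(props);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}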
I have a docker-compose file with Kafka, Zookeeper, and a Spring Boot application.
When I run the entire file, everything works fine.
When I run it without my Spring Boot application, in order to debug the app via IntelliJ, it cannot connect to Kafka and doesn't work properly.
My docker-compose file:
version: "3.5"
services:
  # Install Zookeeper.
  zookeeper:
    container_name: zookeeper
    image: debezium/zookeeper:1.2
    networks:
      - mynetwork
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
  # Install Kafka.
  kafka:
    container_name: kafka
    image: debezium/kafka:1.2
    depends_on:
      - zookeeper
    ports:
      - 9092:9092
      - 29092:29092
    networks:
      - mynetwork
    extra_hosts:
      - "host.docker.internal:host-gateway"
    environment:
      - ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_LISTENER_SECURITY_PROTOCOL_MAP= INTERNAL:PLAINTEXT,PLAINTEXT:PLAINTEXT,EXTERNAL_SAME_HOST:PLAINTEXT
      - KAFKA_ADVERTISED_LISTENERS= INTERNAL://kafka:9092,EXTERNAL_SAME_HOST://localhost:29092
      - KAFKA_LISTENERS= EXTERNAL_SAME_HOST://:29092,INTERNAL://:9092
      - KAFKA_INTER_BROKER_LISTENER_NAME= PLAINTEXT
  # Install Postgres.
  postgres:
    container_name: postgres
    image: debezium/postgres:12
    volumes:
      - ./sql/init.sql:/docker-entrypoint-initdb.d/init.sql
    ports:
      - 5432:5432
    networks:
      - mynetwork
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=postgres
  kafka-ui:
    container_name: kafka-ui
    image: provectuslabs/kafka-ui:0.2.1
    ports:
      - 8080:8080
    networks:
      - mynetwork
    environment:
      - KAFKA_CLUSTERS_0_NAME=local
      - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092
  # Deploy a Consumer.
  consumer:
    build:
      context: .
    container_name: pledge-consumer
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://postgres:5432/postgres
    ports:
      - 8101:8080
    networks:
      - mynetwork
    image: isber/ssm-pledgeservice:v1
    depends_on:
      - zookeeper
      - kafka
      - postgres
networks:
  mynetwork:
    external: true
In the application I tried:
spring.kafka.bootstrap-servers=kafka:9092
which works when I run it via Docker but not from IntelliJ.
I also tried, when running with IntelliJ:
spring.kafka.bootstrap-servers=localhost:9092
spring.kafka.bootstrap-servers=localhost:29092
I found the problem: the image I used,
image: debezium/kafka:1.2
had a problem and didn't read any of the environment parameters I added.
I upgraded to
image: debezium/kafka:1.4
and everything works.
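For what it's worth, with the listener split in that compose file (EXTERNAL_SAME_HOST advertised on localhost:29092 and INTERNAL on kafka:9092), the bootstrap address simply depends on where the client runs. A sketch, assuming two separate property files are used:

# When the Spring Boot app runs on the host (e.g. debugging from IntelliJ)
spring.kafka.bootstrap-servers=localhost:29092

# When the Spring Boot app runs as a container on mynetwork
spring.kafka.bootstrap-servers=kafka:9092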
I'm writing a test app with microservices to learn Spring Boot. I'm done with the app and looking to deploy it on AWS using docker/docker-compose to host each microservice.
I'm unable to connect any of the Spring Boot instances to their MySQL database. I've been stuck on this for a few days and I can't see what's wrong.
Here is my docker-compose.yml
version: "3.3"
services:
card_mysql:
image: "mysql"
restart: always
environment:
MYSQL_ROOT_PASSWORD: "root"
MYSQL_DATABASE: "cards"
ports:
- "33061:3306"
auth_mysql:
image: "mysql"
restart: always
environment:
MYSQL_ROOT_PASSWORD: "root"
MYSQL_DATABASE: "auth"
ports:
- "33062:3306"
market_mysql:
image: "mysql"
restart: always
environment:
MYSQL_ROOT_PASSWORD: "root"
MYSQL_DATABASE: "market"
ports:
- "33063:3306"
user_mysql:
image: "mysql"
restart: always
environment:
MYSQL_ROOT_PASSWORD: "root"
MYSQL_DATABASE: "user"
ports:
- "33064:3306"
adminer:
image: adminer
restart: always
ports:
- 8000:8080
user:
image: "openjdk:11"
restart: always
entrypoint: java -jar /app/services/user/target/cardmarket-user-0.0.1-SNAPSHOT.jar
volumes:
- ./:/app
depends_on:
- "user_mysql"
card:
image: "openjdk:11"
restart: always
entrypoint: java -jar /app/services/card/target/cardmarket-card-0.0.1-SNAPSHOT.jar
volumes:
- ./:/app
depends_on:
- "card_mysql"
market:
image: "openjdk:11"
restart: always
entrypoint: java -jar /app/services/market/target/cardmarket-market-0.0.1-SNAPSHOT.jar
volumes:
- ./:/app
depends_on:
- "market_mysql"
proxy:
image: "openjdk:11"
restart: always
entrypoint: java -jar /app/zuul-proxy/target/zuul-proxy-0.0.1-SNAPSHOT.jar
volumes:
- ./:/app
auth:
image: "openjdk:11"
restart: always
entrypoint: java -jar /app/zuul-proxy/zuul-proxy-0.0.1-SNAPSHOT.jar
volumes:
- ./:/app
depends_on:
- "auth_mysql"
Here are each of my spring datasource settings:
user application.properties
spring.jpa.hibernate.ddl-auto=create
spring.datasource.url=jdbc:mysql://user_mysql:3306/user
spring.datasource.initialization-mode=always
spring.datasource.username=root
spring.datasource.password=root
market application.properties
spring.jpa.hibernate.ddl-auto=create-drop
spring.datasource.url=jdbc:mysql://market_mysql:3306/market
spring.datasource.initialization-mode=always
spring.datasource.username=root
spring.datasource.password=root
Auth application.properties
spring.jpa.hibernate.ddl-auto=create-drop
spring.datasource.url=jdbc:mysql://auth_mysql:3306/auth
spring.datasource.initialization-mode=always
spring.datasource.username=root
spring.datasource.password=root
Card application.properties
spring.jpa.hibernate.ddl-auto=create-drop
spring.datasource.url=jdbc:mysql://card_mysql:3306/cards
spring.datasource.initialization-mode=always
spring.datasource.username=root
spring.datasource.password=root
I'm sure it is something small I'm missing or misunderstood, but I can't figure out what.
Edit 1
I ran a couple more tests but still wasn't able to figure out what's wrong.
I commented out everything but the card and card_mysql containers. The Spring Boot instance throws an exception for every configuration possible (using the Docker DNS name, localhost, and the shared and network IPs); none of them seems to work.
The two containers do communicate; I can ping between them using the DNS name.
I'm using Docker Desktop with WSL and deploy my containers on a WSL Ubuntu 20.04 distribution. I haven't tried on an actual Linux machine.
I've also tried to make the Spring Boot instance wait for the MySQL one using wait-for-it.sh; it does wait correctly, but the exception still occurs.
Here is part of the exception Spring Boot triggers:
2021-05-16 12:23:48.948 INFO 1 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
2021-05-16 12:23:50.277 ERROR 1 --- [ main] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Exception during pool initialization.
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
...
Caused by: com.mysql.cj.exceptions.CJCommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
...
Caused by: java.net.ConnectException: Connection refused (Connection refused)
Edit 2
I tried having just one instance of Spring Boot and MySQL and linking them using a network; unfortunately the exception is still triggered. The two containers can still ping each other. Here is the docker-compose file I used for this test:
version: "3.9"
services:
card_mysql:
image: "mysql"
restart: always
environment:
MYSQL_ROOT_PASSWORD: "root"
MYSQL_DATABASE: "cards"
networks:
- "card_network"
card:
image: "openjdk:11"
restart: always
entrypoint: ["/app/wait-for-it.sh", "card_mysql:3306", "--", "java", "-jar", "/app/services/card/target/cardmarket-card-0.0.1-SNAPSHOT.jar"]
# entrypoint: ["/app/wait-for-it.sh", "card_mysql:3306", "--", "ping", "card_mysql"]
volumes:
- ./:/app
depends_on:
- "card_mysql"
networks:
- "card_network"
networks:
card_network: {}
Hi Spinarial, please add a network and gather all the services in one network:
version: "3.4"
services:
springMvcRestApi:
image: springbootproject:0.0.1
container_name: app
networks:
- postgres
ports:
- 8085:8085
depends_on:
- postgres
- pgadmin
links:
- postgres
environment:
SPRING_DATASOURCE_URL: jdbc:postgresql://postgres:5432/reactiveDbcd?createDatabaseIfNotExist=true
SPRING_DATASOURCE_USERNAME: postgres
SPRING_DATASOURCE_PASSWORD: 12345
postgres:
container_name: postgres
image: postgres
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: 12345
PGDATA: /data/postgres
volumes:
- postgres:/data/postgres
expose:
- 5432
ports:
- 5432:5432
networks:
- postgres
restart: unless-stopped
pgadmin:
container_name: pgadmin_container
image: dpage/pgadmin4
environment:
PGADMIN_DEFAULT_EMAIL: ${PGADMIN_DEFAULT_EMAIL:-pgadmin4@pgadmin.org}
PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_DEFAULT_PASSWORD:-admin}
PGADMIN_CONFIG_SERVER_MODE: 'False'
volumes:
- pgadmin:/root/.pgadmin
ports:
- "${PGADMIN_PORT:-80}:80"
networks:
- postgres
restart: unless-stopped
networks:
postgres:
driver: bridge
volumes:
postgres:
pgadmin:
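Adapted to the card service and MySQL from the question, the same pattern might look like this; a sketch only, carrying over the images and credentials already shown, with the SPRING_DATASOURCE_* variables relying on Spring Boot picking them up as environment overrides:

version: "3.9"
services:
  card_mysql:
    image: mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: "root"
      MYSQL_DATABASE: "cards"
    networks:
      - card_network
  card:
    image: openjdk:11
    restart: always
    entrypoint: java -jar /app/services/card/target/cardmarket-card-0.0.1-SNAPSHOT.jar
    environment:
      SPRING_DATASOURCE_URL: jdbc:mysql://card_mysql:3306/cards
      SPRING_DATASOURCE_USERNAME: root
      SPRING_DATASOURCE_PASSWORD: root
    volumes:
      - ./:/app
    depends_on:
      - card_mysql
    networks:
      - card_network
networks:
  card_network:
    driver: bridge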
I'm trying to deploy a Spring server on Docker. The Spring server connects to a RabbitMQ server that runs in another container. I get an error while connecting to RabbitMQ. I added the host in my application.properties, which didn't work. Then I added it to the app container as an environment variable and it still doesn't work. I also rebuilt the jar and changed the image's version.
version: '3'
services:
rabbitmq:
image: rabbitmq:management
ports:
- "5672:5672" #JMS Port
- "15672:15672" #Management Port - default user:pass = guest:guest
db:
image: mysql:5.7.22
environment:
MYSQL_ROOT_PASSWORD: "root"
MYSQL_DATABASE: "hospital"
MYSQL_PASSWORD: "root"
ports:
- "3306:3306"
networks:
- mysql_bridge
restart: always
springboot-docker-compose-app-container:
image: app-image
build:
context: ./
dockerfile: Dockerfile
environment: # Pass environment variables to the service
SPRING_DATASOURCE_URL: jdbc:mysql://db:3306/hospital?useSSL=false&serverTimezone=UTC&useLegacyDatetimeCode=false&allowPublicKeyRetrieval=true
SPRING_DATASOURCE_USERNAME: root
SPRING_DATASOURCE_PASSWORD: root
SPRING_RABBITMQ_HOST: rabbitmq
depends_on:
- rabbitmq
- db
volumes:
- /data/VerzorgerSOAP
ports:
- "8080:8080"
networks:
- mysql_bridge
- rabbiy_mq
networks:
mysql_bridge:
rabbiy_mq:
Also this is my application.properties
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.jpa.generate-ddl=true
spring.datasource.url = jdbc:mysql://db:3306/hospital?useSSL=false&serverTimezone=UTC&useLegacyDatetimeCode=false&allowPublicKeyRetrieval=true
spring.jpa.hibernate.ddl-auto = update
spring.datasource.username = root
spring.datasource.password = root
spring.jackson.serialization.WRITE_DATES_AS_TIMESTAMPS = false
server.ssl.enabled=false
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.MySQL5InnoDBDialect
spring.jpa.properties.hibernate.globally_quoted_identifiers=true
spring.jpa.properties.hibernate.enable_lazy_load_no_trans=true
spring.rabbitmq.host= rabbitmq
This is the error I get:
org.springframework.amqp.AmqpIOException: java.net.UnknownHostException: rabbitmq
The networks configuration was missing for rabbitmq in the docker-compose file:
rabbitmq:
image: rabbitmq:management
ports:
- "5672:5672" #JMS Port
- "15672:15672" #Management Port - default user:pass = guest:guest
networks:
- rabbiy_mq
Another solution, suggested by @david-maze, is to remove all networks.
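A minimal sketch of that alternative, with every networks: key removed so all services join the default network Compose creates for the project (images and settings otherwise as above, database and volume settings trimmed for brevity):

version: '3'
services:
  rabbitmq:
    image: rabbitmq:management
    ports:
      - "5672:5672"
      - "15672:15672"
  db:
    image: mysql:5.7.22
    environment:
      MYSQL_ROOT_PASSWORD: "root"
      MYSQL_DATABASE: "hospital"
    ports:
      - "3306:3306"
  springboot-docker-compose-app-container:
    image: app-image
    build: .
    environment:
      SPRING_RABBITMQ_HOST: rabbitmq
    depends_on:
      - rabbitmq
      - db
    ports:
      - "8080:8080"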
I am attempting to host a Spring Cloud application in Docker containers. The underlying exception is as follows:
search_1 | Caused by: java.lang.IllegalStateException: Invalid URL: config:8888
I understand the reason is because of the URL specified in my config server.
spring.application.name=inventory-client
#spring.cloud.config.uri=http://localhost:8888
spring.cloud.config.uri=config:8888
On my development machine, I am able to use localhost. However, based on a past question (relating to connecting to my database), I learned that localhost is not appropriate in containers. For my database, I was able to use the following:
spring.jpa.hibernate.ddl-auto=none
spring.jpa.show-sql=false
spring.jpa.database-platform=org.hibernate.dialect.PostgreSQLDialect
spring.datasource.url=jdbc:postgresql://db:5432/leisurely_diversion
#spring.datasource.url=jdbc:postgresql://localhost:5000/leisurely_diversion
spring.datasource.driver-class-name=org.postgresql.Driver
but this obviously did not work as expected for the configuration server.
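For comparison, spring.cloud.config.uri is parsed as a full URL, so the scheme has to be included even when the host is a Compose service name. A sketch of the container-friendly form, assuming config is the service name of the config server as in the compose file below:

#spring.cloud.config.uri=http://localhost:8888
spring.cloud.config.uri=http://config:8888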
My docker-compose file:
# Use postgres/example user/password credentials
version: '3.2'
services:
db:
image: postgres
ports:
- 5000:5432
environment:
POSTGRES_PASSWORD: example
volumes:
- type: volume
source: psql_data
target: /var/lib/postgresql/data
networks:
- app
restart: always
config:
image: kellymarchewa/config_server
networks:
- app
volumes:
- /root/.ssh:/root/.ssh
restart: always
search:
image: kellymarchewa/search_api
networks:
- app
restart: always
ports:
- 8082:8082
depends_on:
- db
- config
- inventory
inventory:
image: kellymarchewa/inventory_api
depends_on:
- db
- config
ports:
- 8081:8081
networks:
- app
restart: always
volumes:
psql_data:
networks:
app:
Both services are running on the same user-defined network; how do I allow the services to find the configuration service?
Thanks.