Docker compose wait for spring boot app is up [duplicate]

This question already has answers here:
Docker Compose wait for container X before starting Y
(20 answers)
Closed 2 years ago.
I am using docker-compose file version 3.3, and I want to wait until a container with a Spring app is up before the other containers with Spring Boot apps are started. I tried a health check, but it doesn't work. This is what my docker-compose file looks like:
version: '3.3'
services:
  eureka:
    build: ./eureka
    ports:
      - 8761:8761
    networks:
      - spring-cloud-network
    environment:
      - SPRING_ZIPKIN_BASEURL=http://zipkin:9411
    depends_on:
      - zipkin
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8761"]
      interval: 10s
      timeout: 10s
      retries: 5
  zipkin:
    build: ./zipkin
    ports:
      - 9411:9411
    networks:
      - spring-cloud-network
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9411"]
      interval: 10s
      timeout: 10s
      retries: 5
Is it possible to achieve what I want?

According to the official documentation (https://docs.docker.com/compose/startup-order/), this is not possible; the page also explains why, and provides some workarounds.
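One of the documented workarounds is to pair a healthcheck with a `depends_on` condition. Note that `condition: service_healthy` was dropped from the version 3.x file format used by Swarm, but it works with the version 2.x format and has returned in the current Compose specification; a sketch for the eureka/zipkin setup above:

```yaml
version: '2.4'
services:
  eureka:
    build: ./eureka
    depends_on:
      zipkin:
        condition: service_healthy   # start eureka only once zipkin's healthcheck passes
  zipkin:
    build: ./zipkin
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9411"]
      interval: 10s
      timeout: 10s
      retries: 5
```

This only gates container startup; the Spring app inside should still be written to retry its downstream connections.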

Related

Could not connect to docker redis container from docker application container with docker-compose

I am facing a problem with docker-compose.
I have gone through a lot of threads here on StackOverflow, and it seems I have tried almost everything, but my app still does not work.
I have a docker-compose multi-app configuration with:
application
mysql
redis
The point is that when the mysql and redis containers are up, my application, which is in a container, throws an error:
org.springframework.context.ApplicationContextException: Failed to start bean 'springSessionRedisMessageListenerContainer';
nested exception is
org.springframework.data.redis.listener.adapter.RedisListenerExecutionFailedException: org.springframework.data.redis.RedisConnectionFailureException: Unable to connect to Redis;
nested exception is
io.lettuce.core.RedisConnectionException: Unable to connect to localhost/<unresolved>:6379;
nested exception is
org.springframework.data.redis.RedisConnectionFailureException: Unable to connect to Redis;
nested exception is
io.lettuce.core.RedisConnectionException: Unable to connect to localhost/<unresolved>:6379
However, when I launch my application from the IDE instead of a docker container, everything works well!
So it seems the problem is in Docker, i.e. in the container settings. On the other hand, the application container successfully connects to mysql; the problem is only with the redis container.
I'm confused. Moreover, I wonder what localhost/<unresolved>:6379 means.
The docker-compose file looks like this:
version: '3.9'
services:
  mysql-docker:
    container_name: mysql-docker
    image: mysql:8.0.31
    restart: always
    ports:
      - "3306:3306"
    volumes:
      - ${MYSQL_DATA}:/var/lib/mysql
    environment:
      - MYSQL_DATABASE=${MYSQL_DATABASE}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
    env_file:
      - ./.env
    healthcheck:
      test: mysql sys --user=${MYSQL_ROOT_USER} --password=${MYSQL_ROOT_PASSWORD} --silent --execute "SELECT 1;"
      interval: 3s
      timeout: 10s
      retries: 10
    networks:
      - my-network
  redis-docker:
    depends_on:
      mysql-docker:
        condition: service_healthy
    container_name: redis-docker
    image: redis:7.0.5
    restart: always
    ports:
      - 6379:6379
    command: "redis-server --loglevel warning --port ${SPRING_REDIS_PORT} --requirepass ${SPRING_REDIS_PASSWORD}"
    volumes:
      - ${SPRING_REDIS_DATA}:/var/lib/redis
    env_file:
      - ./.env
    healthcheck:
      test: [ "CMD", "redis-cli", "--raw", "incr", "ping" ]
      interval: 3s
      timeout: 10s
      retries: 10
    networks:
      - my-network
  app:
    depends_on:
      mysql-docker:
        condition: service_healthy
      redis-docker:
        condition: service_healthy
    links:
      - redis-docker
      - mysql-docker
    build: ./
    restart: on-failure
    env_file:
      - ./.env
    volumes:
      - ${SPRING_M2_BUILD_DATA}:/root/.m2
    stdin_open: true
    tty: true
    networks:
      - my-network
networks:
  my-network:
    driver: bridge
The Dockerfile for the application is not interesting: just a standard FROM, COPY and RUN gradle clean build.
Properties for redis in application.properties:
spring.redis.host=localhost
spring.redis.port=6379
spring.redis.password=verySecuredPassword
In the .env file (it is needed for potential Kubernetes):
SPRING_REDIS_PASSWORD=verySecuredPassword
SPRING_REDIS_PORT=6379
SPRING_REDIS_URL=redis://redis-docker:${SPRING_REDIS_PORT}
There are no other config files. As far as I can tell, none are needed: the application works well when it is not in a container.
Moreover, it seems the problem is not about binding redis to 0.0.0.0 (by the way, I tried that too; it didn't work), because somehow it refuses only 'dockerized' connections. But I could be wrong, of course.
I tried redis-cli ping from Windows PowerShell and got PONG. So redis obviously works, works correctly (given that the application works when launched from the IDE), and accepts connections.
What am I missing?
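No accepted answer is shown here, but the stack trace itself points at the likely culprit: inside the app container, localhost resolves to the container itself, not to the redis-docker container. A minimal sketch of the fix, assuming the service names from the compose file above, is to point Spring at the compose service name instead:

```properties
# application.properties: reach redis by its compose service name, not localhost
spring.redis.host=redis-docker
spring.redis.port=6379
spring.redis.password=verySecuredPassword
```

This also explains why the IDE run works (there, localhost really is where redis's published port lives) and why mysql may appear fine if its settings differ.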

No resolvable bootstrap urls given in bootstrap.servers on maven install

I am building a jar file for my Spring Boot microservice, but upon mvn install I am getting the error stated below.
Caused by: org.apache.kafka.common.config.ConfigException: No resolvable bootstrap urls given in bootstrap.servers
at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:89) ~[kafka-clients-3.1.1.jar:na]
at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:48) ~[kafka-clients-3.1.1.jar:na]
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:730) ~[kafka-clients-3.1.1.jar:na]
My docker-compose file for the application is:
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    restart: always
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - 2181:2181
    networks:
      - app-network
  kafka:
    image: confluentinc/cp-kafka:latest
    container_name: kafka
    restart: always
    depends_on:
      - zookeeper
    ports:
      - 9092:9092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    networks:
      - app-network
  mongodb:
    image: mongo:latest
    restart: always
    container_name: mongodb
    networks:
      - app-network
    ports:
      - 27017:27017
  matching-engine-helper:
    image: matching-engine-helper:latest
    restart: always
    container_name: "matching-engine-helper"
    networks:
      - app-network
    ports:
      - 9192:9192
    depends_on:
      - mongodb
      - zookeeper
      - kafka
  matching-engine-core:
    image: matching-engine-core:latest
    restart: always
    container_name: "matching-engine-core"
    networks:
      - app-network
    ports:
      - 9191:9191
    depends_on:
      - matching-engine-helper
      - zookeeper
      - kafka
      - mongodb
networks:
  app-network:
    driver: bridge
and the application.yml is:
spring:
  data:
    mongodb:
      database: matching-engine-mongodb
      host: mongodb
      port: 27017
  kafka:
    bootstrap-servers: kafka:9092
server:
  port: 9192
Let me know if I need some environment variable configuration here. It runs fine in my local environment on localhost, but not in docker containers, as I am unable to build a jar out of it.
Assuming mvn install is running on your host, your host doesn't know how to resolve kafka as a DNS name running in a container. If that's what you wanted, you should rename your file to application-docker.yml and use Spring profiles to properly override any defaults when you actually do run your code in a container. In your case, your default application.yml would need localhost:29092 for Kafka, and similarly localhost:27017 for Mongo. You can also use environment variables for those two properties rather than hard-coding them.
Or, if mvn install is running in a Docker layer itself, then that layer is completely isolated from other Docker networks. You can pass the --network flag to docker build, though.
Assuming you don't want to mvn install -DskipTests: ideally, your tests should not rely on external services to be running. If you have a Kafka unit test that is failing to connect, you should either mock that dependency or use Spring Kafka's EmbeddedKafka for integration tests.
It is worth mentioning that Spring Boot has a Maven plugin for building Docker images, and it doesn't need a JAR to do so.
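A sketch of the profile split described above (file name and values assume the poster's setup): keep host-resolvable addresses in the default application.yml, put container addresses in a profile-specific file, and activate that profile inside the container, e.g. via SPRING_PROFILES_ACTIVE=docker in the compose environment:

```yaml
# application-docker.yml: overrides applied when the "docker" profile is active
spring:
  data:
    mongodb:
      host: mongodb          # compose service name
      port: 27017
  kafka:
    bootstrap-servers: kafka:9092
```

With that in place, the default application.yml would use localhost:29092 and localhost:27017, so mvn install on the host resolves everything.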

Cannot send any request from Postman when my Spring Boot app is running in Docker

I have a problem sending any request from Postman to my Spring Boot application, which uses Neo4j.
There is no problem when I run it on localhost, with server.port defined in the application.properties file.
When I try to send any request to localhost:7474, I get an authentication error. Even though I defined the username and password (neo4j:123456 in docker-compose.yml), I get a 404 error.
How can I fix it so it works like localhost:{port_number}?
Here is my docker-compose.yml
version: '3'
services:
  neo4j-db:
    image: neo4j:4.3
    container_name: app-neo4j-db
    ports:
      - 7474:7474
      - 7687:7687
    volumes:
      - $HOME/neo4j/data:/data
      - $HOME/neo4j/logs:/logs
      - $HOME/neo4j/import:/import
      - $HOME/neo4j/plugins:/plugins
    environment:
      NEO4J_AUTH: "neo4j/123456"
      NEO4JLABS_PLUGINS: '["apoc"]'
      NEO4J_dbms_security_procedures_unrestricted: apoc.\\\*,gds.\\\*
      dbms_connector_bolt_listen__address: neo4j-db:7687
      dbms_connector_bolt_advertised__address: neo4j-db:7687
    healthcheck:
      test: cypher-shell --username neo4j --password 123456 'MATCH (n) RETURN COUNT(n);' # Checks if neo4j server is up and running
      interval: 10s
      timeout: 10s
      retries: 5
  app:
    image: 'springbootneo4jshortestpath:latest'
    build:
      context: .
      dockerfile: Dockerfile
    container_name: SpringBootNeo4jShortestPath
    depends_on:
      neo4j-db:
        condition: service_healthy # Wait for neo4j to be ready
    links:
      - neo4j-db
    environment:
      NEO4J_URI: bolt://neo4j-db:7687
      NEO4J_USER: neo4j
      NEO4J_PASSWORD: 123456
volumes:
  app-neo4j-db:

Dockerizing SpringBoot Mongo Exception opening socket

I am trying to dockerize my Spring Boot app and mongo.
This is my application.yml:
spring:
  application:
    name: service-app
  data:
    mongodb:
      uri: mongodb://localhost:27017/user
I have found this to be recommended in other posts.
The error I get is:
Exception in monitor thread while connecting to server localhost:27017
com.mongodb.MongoSocketOpenException: Exception opening socket
at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:70) ~[mongodb-driver-core-4.1.1.jar:na]
version: '3'
services:
  mongo:
    image: library/mongo:latest
    container_name: mongo
    restart: always
    ports:
      - 27017:27017
    network_mode: host
    volumes:
      - $HOME/mongo:/data/db
    healthcheck:
      test: "exit 0"
  user-service:
    build:
      context: .
      dockerfile: Dockerfile
    image: user-service
    depends_on:
      - mongo
    network_mode: "host"
    hostname: localhost
    restart: always
    ports:
      - 8082:8082
    healthcheck:
      test: "exit 0"
FROM openjdk:8-jdk-alpine
ADD ./target/demo-0.0.1-SNAPSHOT.jar /usr/src/user-service-0.0.1-SNAPSHOT.jar
WORKDIR usr/src
ENTRYPOINT ["java -Djdk.tls.client.protocols=TLSv1.2","-jar", "user-service-0.0.1-SNAPSHOT.jar"]
It seems to be a common problem for newcomers; I couldn't solve it with what was already on the internet.
The problem was in not building the URL properly; the connection string should use the compose service name:
uri: mongodb://mongo/user
Given the information I provided, I should have gotten a proper answer; instead I got -1. Some people are true trolls with privileges. It feels like a gamble on StackOverflow, depending on the type of people who cross over your question.

Re-use YAML sections

I'm putting together a docker-compose file. I'd like to re-use sections that are repetitive. For example, each container re-uses the same deploy config. I tried making a template for it:
...
redis:
  image: redis
  ports:
    - 6379:6379
  deploy: deploy_template
  volumes:
    - /srv/redis/data:/data

deploy_template:
  restart_policy:
    condition: on-failure
    delay: 5s
    max_attempts: 10
    window: 120s
However this didn't work. Is there any way to do this?
You could use the YAML anchor and alias facility for that, effectively:
version: '2'
dummy: &deploy_template
  restart_policy:
    condition: on-failure
    delay: 5s
    max_attempts: 10
    window: 120s
services:
  redis:
    image: redis
    ports:
      - 6379:6379
    deploy: *deploy_template
    volumes:
      - /srv/redis/data:/data
will be parsed as if you had specified:
version: '2'
dummy:
  restart_policy:
    condition: on-failure
    delay: 5s
    max_attempts: 10
    window: 120s
services:
  redis:
    image: redis
    ports:
      - 6379:6379
    deploy:
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 10
        window: 120s
    volumes:
      - /srv/redis/data:/data
You can have multiple *deploy_template aliases for a single &deploy_template anchor.
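The expansion behavior can be verified with any YAML parser; a quick sketch in Python (PyYAML assumed to be installed):

```python
import yaml

# A template anchored once and aliased from two services: both aliases
# expand to the same mapping the anchor introduced.
doc = """
dummy: &deploy_template
  restart_policy:
    condition: on-failure
services:
  redis:
    deploy: *deploy_template
  web:
    deploy: *deploy_template
"""

data = yaml.safe_load(doc)
print(data["services"]["redis"]["deploy"] == data["dummy"])  # True
print(data["services"]["web"]["deploy"] == data["dummy"])    # True
```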
The problem, however, is that the dummy key and its value will trip up docker-compose, and at least in version 2 there was no place to put this information.
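Worth noting for later readers: Compose file format 3.4 introduced top-level extension fields, keys beginning with x-, which docker-compose ignores, so anchors now have a sanctioned home; a sketch:

```yaml
version: '3.4'
x-deploy-template: &deploy_template   # x- prefixed top-level keys are ignored by docker-compose
  restart_policy:
    condition: on-failure
    delay: 5s
    max_attempts: 10
    window: 120s
services:
  redis:
    image: redis
    deploy: *deploy_template
```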
I therefore preprocess my docker-compose file using ruamel.dcw (I am the author of that package), which allows for a user-data top-level key that will not appear in the output, in which you can put such anchor information. Starting with:
version: '2'
user-data:
  author: dthree <calvin#hobbes.org>
  description: redis container
  env-defaults:
    NAME: redis  # default values if not specified in the environment
    PORT: 6379
  dummy:
    - &deploy_template
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 10
        window: 120s
    - &some_other_template:
      x: null
services:
  redis:
    image: ${NAME}
    ports:
      - "${PORT}:${PORT}"
    deploy: *deploy_template
    volumes:
      - /srv/${NAME}/data:/data
this will expand to:
version: '2'
services:
  redis:
    image: ${NAME}
    ports:
      - ${PORT}:${PORT}
    deploy:
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 10
        window: 120s
    volumes:
      - /srv/${NAME}/data:/data
before being handed to docker-compose itself (using the -f option). Any variables in the env-defaults "section" that are not already set in the environment in which you execute the preprocessor will be given their default value, making it easy to override them.
As an aside: you should be careful with:
- 6379:6379
because if the port number gets below 60, the old YAML parser that docker-compose uses interprets that scalar as a sexagesimal (base-60) number. I tend to always quote such values, especially when using env. variables.
