I have a web application with database and RabbitMQ services. I am using docker-compose to build and run the images.
rabbitmq:
  image: rabbitmq:3-management
  container_name: rabbitmq
  hostname: rabbitmq
  ports:
    - "15672:15672"
  expose:
    - "5672"
    - "4369"
    - "25672"
coredb:
  container_name: coredb
  build: ./mongodb/
core:
  container_name: core
  build: ./core/
  ports:
    - "80:8080"
    - "5683:5683/udp"
    - "5684:5684/udp"
  links:
    - rabbitmq
    - coredb
After running
docker-compose up
All the services get started properly. I can ping rabbitmq and coredb from core's shell. In the Spring Boot application code, I am using
CachingConnectionFactory(hostname)
to connect to RabbitMQ. The hostname I am giving is "rabbitmq". In the logs during event publishing, the error I see is "No route found". The core service can connect to the database properly but cannot connect to RabbitMQ.
You can use docker inspect <container name> to inspect the config of the "core" service and make sure the link was set up. You can also check the hostname using docker exec -ti <container name> cat /etc/hosts (which I think you did already).
If it looks like it's properly linked up, the issue is probably that the core service is trying to connect before the rabbitmq service has actually started.
You can have the "core" service retry a few times (with a short delay) to try to set up the connection.
Related
I have a problem here that I really cannot understand. I have already seen a few topics here with the same problem, and those topics were successfully solved. I basically did the same thing and cannot understand what I'm doing wrong.
I have a Spring application container that tries to connect to a Mongo container through the following docker-compose file:
version: '3'
services:
  app:
    build: .
    ports:
      - "8080:8080"
    links:
      - db
  db:
    image: mongo
    volumes:
      - ./database:/data
    ports:
      - "27017:27017"
In my application.properties:
spring.data.mongodb.uri=mongodb://db:27017/app
Finally, my Dockerfile:
FROM eclipse-temurin:11-jre-alpine
WORKDIR /home/java
RUN mkdir /home/java/bar
COPY ./build/libs/foo.jar /home/java/bar/foo.jar
CMD ["java","-jar", "/home/java/bar/foo.jar"]
When I run docker compose up --build I get:
2022-11-17 12:08:53.452 INFO 1 --- [null'}-db:27017] org.mongodb.driver.cluster : Exception in monitor thread while connecting to server db:27017
Caused by: java.net.UnknownHostException: db
Running docker compose ps I can see the Mongo container running fine, and I am able to connect to it through MongoDB Compass and with this same Spring application running outside of a container. The only difference when running outside of a container is changing the host in spring.data.mongodb.uri from mongodb://db:27017/app to mongodb://localhost:27017/app.
I also already tried changing the host to localhost inside of the Spring container, and it didn't work.
You need to specify the MongoDB host, port and database as separate properties, as mentioned here.
spring.data.mongodb.host=db
spring.data.mongodb.port=27017
spring.data.mongodb.database=app
spring.data.mongodb.authentication-database=admin
As per the official docker-compose documentation, the above docker-compose file should work, since both db and app are on the same network (you can check whether they ended up on different networks, just in case).
If the networking is not working, as a workaround, instead of using localhost inside the Spring container, use the server's IP, i.e. mongodb://<server_ip>:27017/app (and make sure there is no firewall blocking it).
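If you want to rule networking out explicitly, you can also put both services on a named network. A minimal sketch, where the network name backend is just an example and not something from your setup:

version: '3'
services:
  app:
    build: .
    ports:
      - "8080:8080"
    networks:
      - backend   # both services join the same user-defined network
  db:
    image: mongo
    ports:
      - "27017:27017"
    networks:
      - backend   # so the hostname "db" resolves from inside "app"
networks:
  backend: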
I'm running 4 microservices using Docker. One service depends on the others, which is why I need to check that the other services are up before using any of them.
To bring up all the services I'm writing a bash script.
As a workaround, I am currently sleeping until RabbitMQ has come up properly.
What is a better way to check whether RabbitMQ is up or not? I have to wait until RabbitMQ is up.
For now I am doing it like this:
# wait for rabbitmq container to be ready
sleep 14
This is the docker-compose service for RabbitMQ:
rabbitmq:
  image: 'rabbitmq:3.8.9'
  container_name: rabbitmq_dev
  restart: always
  ports:
    - 5675:5672
  environment:
    - RABBITMQ_DEFAULT_USER=rabbit
    - RABBITMQ_DEFAULT_PASS=pass
  depends_on:
    - consul
  networks:
    - my_networks
I think HealthCheck can solve your problem.
Reference links: Docker Compose wait for container X before starting Y
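A minimal sketch of that approach, assuming a Docker Compose version that supports depends_on conditions (the Compose v2 CLI, or file format 2.1+ with the legacy docker-compose), and with your-service as a placeholder for whichever container needs the broker:

rabbitmq:
  image: 'rabbitmq:3.8.9'
  container_name: rabbitmq_dev
  healthcheck:
    # rabbitmq-diagnostics ping succeeds once the node is up and answering
    test: ["CMD", "rabbitmq-diagnostics", "-q", "ping"]
    interval: 10s
    timeout: 5s
    retries: 5

your-service:
  depends_on:
    rabbitmq:
      condition: service_healthy

With that in place, Compose only starts the dependent service once RabbitMQ reports healthy, instead of after a fixed sleep.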
Here is the log
MassTransit.RabbitMqTransport.Integration.ConnectionContextFactory.CreateConnection(ISupervisor supervisor)
[02:51:48 DBG] Connect: guest#localhost:5672/
[02:51:48 WRN] Connection Failed: rabbitmq://localhost/
RabbitMQ.Client.Exceptions.BrokerUnreachableException: None of the specified endpoints were reachable
The RabbitMQ control panel shows the exchanges and queues as created, and when I make a publish request I see it come through on the queue, but then I get a MassTransit timeout as it tries to respond.
Here is my docker-compose setup. I assume MassTransit pulls its connection settings from appsettings.json.
version: '3.4'
services:
  hostedservice:
    environment:
      - ASPNETCORE_ENVIRONMENT=development
    ports:
      - "80"
  rabbitmq3:
    hostname: "rabbitmq"
    image: rabbitmq:3-management
    environment:
      - RABBITMQ_DEFAULT_USER=guest
      - RABBITMQ_DEFAULT_PASS=guest
      - RABBITMQ_DEFAULT_VHOST=/
    ports:
      # AMQP protocol port
      - '5672:5672'
      # HTTP management UI
      - '15672:15672'
I'd suggest using the MassTransit Docker template to get a working setup. Or you can look at the source code and see how, when running in a container, the template uses rabbitmq as the host name to connect.
You can download the template using NuGet.
Thanks Chris, that moved me to using the container template, which cleared up the connection issue!
version: '3.9'
services:
  activemq:
    image: rmohr/activemq:5.15.9-alpine
    restart: always
    ports:
      - 61616:61616
      - 8161:8161
      - 5672:5672
    container_name: activemq
  app-service:
    image: app-service:v1
    restart: always
    ports:
      - 8080:8080
    container_name: app-service
    links:
      - activemq
    depends_on:
      - activemq
In my app service I've configured the ActiveMQ broker URL with Spring Boot's spring.activemq.broker-url=tcp://activemq:61616, along with the username and password.
When I run docker-compose up, the app service shows the error below:
DefaultMessageListenerContainer : Could not refresh JMS Connection for
destination 'queueName' - retrying using FixedBackOff{interval=5000,
currentAttempts=5, maxAttempts=unlimited}. Cause: java.lang.NullPointerException.
I can access the ActiveMQ web console in the browser (e.g. using http://localhost:8161).
Outside of a Docker container, the same code works fine on localhost.
I also had this exact problem, and what helped me was adding spring.activemq.broker-url=tcp://activemq:61616 to the app's environment section in docker-compose. For me it looks like this:
app:
  build:
    context: .
  container_name: app
  ports:
    - 8080:8080
  environment:
    - spring.activemq.broker-url=tcp://activemq:61616
  depends_on:
    - activemq
I think the containerized Spring app doesn't see the broker URL from the application properties for whatever reason.
Yes, the main reason is that your app starts before the activemq service.
You can try docker-compose up and watch the console log in the terminal.
Workaround:
It is not an ideal fix, but you can go to the Docker app, click restart on your app container, and then everything will work.
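A less manual alternative is to gate the app on a broker healthcheck, so Compose only starts app-service once the broker answers. A rough sketch, assuming a Docker Compose version that supports depends_on conditions and that the rmohr/activemq image has (busybox) wget available for the check, both of which are assumptions here:

activemq:
  image: rmohr/activemq:5.15.9-alpine
  healthcheck:
    # treat the broker as healthy once the web console answers on 8161
    test: ["CMD-SHELL", "wget -q -O /dev/null http://localhost:8161 || exit 1"]
    interval: 10s
    timeout: 5s
    retries: 10
app-service:
  depends_on:
    activemq:
      condition: service_healthy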
TL;DR: How do I have to change my docker-compose.yml below in order to allow one container to use a service of another over a custom (non-standard) port?
I have a pretty common setup: containers for a web app (Padrino [Ruby]), Postgres, Redis, and a queueing framework (Sidekiq). The web app comes with its custom Dockerfile; the remaining services come either from standard images (Postgres, Redis) or mount the data from the web app (Sidekiq). They are tied together via the following docker-compose.yml:
version: '2'
services:
  web:
    build: .
    command: 'bundle exec puma -C config/puma.rb'
    volumes:
      - .:/myapp
    ports:
      - "9000:3000"
    depends_on:
      - postgres
      - redis
  sidekiq:
    build: .
    command: 'bundle exec sidekiq -C config/sidekiq.yml -r ./config/boot.rb'
    volumes:
      - .:/myapp
    depends_on:
      - postgres
      - redis
  postgres:
    image: postgres:9.5
    environment:
      POSTGRES_USER: my-postgres-user
      POSTGRES_PASSWORD: my-postgres-pass
    ports:
      - '9001:5432'
    volumes:
      - 'postgres:/var/lib/postgresql/data'
  redis:
    image: redis
    ports:
      - '9002:6379'
    volumes:
      - 'redis:/var/lib/redis/data'
volumes:
  redis:
  postgres:
One key point to notice here is that I am exposing the containers' services on non-standard ports (9000-9002).
If I start the setup with docker-compose up, the Redis and Postgres containers come up fine, but the containers for the web app and Sidekiq fail since they can't connect to Redis at redis:9002. Remarkably enough, the same setup works if I use 6379 (the standard Redis port) instead of 9002.
docker ps also looks fine afaik:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9148566c2509 redis "docker-entrypoint.sh" Less than a second ago Up About a minute 0.0.0.0:9002->6379/tcp rubydockerpadrino_redis_1
e6d47321c939 postgres:9.5 "/docker-entrypoint.s" Less than a second ago Up About a minute 0.0.0.0:9001->5432/tcp rubydockerpadrino_postgres_1
What's even more confusing: I can access the Redis container from the host via redis-cli -h localhost -p 9002 -n 0, but the web app and Sidekiq containers fail to establish a connection.
I am using this docker version on MacOS:
Docker version 1.12.3, build 6b644ec, experimental
Any ideas what I am doing wrong? I'd appreciate any hint how to get my setup running.
When you bind ports like this '9002:6379' you're telling Docker to forward traffic from localhost:9002 -> redis:6379. That's why this works from your host machine:
redis-cli -h localhost -p 9002 -n 0
However, when containers talk to each other, they are all connected to the same Compose network by default, and on that network containers can reach each other freely without needing any ports published. Within this network, your redis container is listening on its usual port (6379); the host port mapping isn't involved at all. That's why your container-to-container communication works on 6379 but not on 9002: from the web and sidekiq containers you should connect to redis:6379 (and postgres:5432), and keep the 9000-9002 mappings only for access from the host.
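A minimal sketch of pointing the app containers at the internal ports, assuming the app reads its Redis connection string from a REDIS_URL environment variable (a hypothetical name here, use whatever your config actually reads):

web:
  build: .
  environment:
    # inside the Compose network, use the service name and the *container* port
    REDIS_URL: redis://redis:6379/0
sidekiq:
  build: .
  environment:
    REDIS_URL: redis://redis:6379/0

The 9000-9002 mappings can stay as they are; they only affect connections coming from the host.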