Connecting to a Mongo container from Spring container - spring

I have a problem here that I really cannot understand. I already saw a few topics here with the same problem, and those topics were successfully solved. I basically did the same thing and cannot understand what I'm doing wrong.
I have a Spring application container that tries to connect to a Mongo container through the following Docker Compose file:
version: '3'
services:
  app:
    build: .
    ports:
      - "8080:8080"
    links:
      - db
  db:
    image: mongo
    volumes:
      - ./database:/data
    ports:
      - "27017:27017"
In my application.properties:
spring.data.mongodb.uri=mongodb://db:27017/app
Finally, my Dockerfile:
FROM eclipse-temurin:11-jre-alpine
WORKDIR /home/java
RUN mkdir /home/java/bar
COPY ./build/libs/foo.jar /home/java/bar/foo.jar
CMD ["java","-jar", "/home/java/bar/foo.jar"]
When I run docker compose up --build I get:
2022-11-17 12:08:53.452 INFO 1 --- [null'}-db:27017] org.mongodb.driver.cluster : Exception in monitor thread while connecting to server db:27017
Caused by: java.net.UnknownHostException: db
Running docker compose ps I can see the mongo container running fine, and I am able to connect to it through Mongo Compass and with this same Spring application when it runs outside of a container. The only difference when running outside of a container is changing the host in spring.data.mongodb.uri=mongodb://db:27017/app to spring.data.mongodb.uri=mongodb://localhost:27017/app.
Also, I already tried changing the host to localhost inside of the Spring container and it didn't work.

You need to specify the MongoDB host, port and database as separate parameters, as mentioned here.
spring.data.mongodb.host=db
spring.data.mongodb.port=27017
spring.data.mongodb.authentication-database=admin
As per the official docker-compose documentation, the above docker-compose file should have worked, since both db and app are on the same network (you can check whether they ended up on different networks just in case).
If the networking is not working, as a workaround, instead of using localhost inside the Spring container, use the server's IP, i.e. mongodb://<server_ip>:27017/app (and make sure there is no firewall blocking it).
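If the two services do turn out to be on different networks, a minimal sketch of the same compose file with an explicit shared network (the network name app-net is illustrative, not part of the original setup) would be:
version: '3'
services:
  app:
    build: .
    ports:
      - "8080:8080"
    networks:
      - app-net
  db:
    image: mongo
    volumes:
      - ./database:/data
    ports:
      - "27017:27017"
    networks:
      - app-net
networks:
  app-net:
    driver: bridge
With both services attached to app-net, the hostname db should resolve from inside the app container without any further change to application.properties.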

Related

How to network 2 separate docker containers to communicate with each other?

I'm pretty new to docker, and I've tried searching about networking but haven't found a solution that's worked.
I have a Laravel app that is using Laradock.
I also have an external 3rd party API that runs in its own docker container.
I basically want to specify the container name of the API inside my Laravel .env file, and have it dynamically resolve the container IP so I can make API calls from my Laravel app. I can already do this with services that are already part of Laradock like mariadb/mysql, but since my API is located in an external container, my Laravel app can't connect to it.
I tried making a network and attaching them with:
docker network create my-network
Then inside my docker-compose.yml files for each of the containers, I specified:
networks:
  my-network:
    name: "my-network"
But if I try to ping them with:
docker exec -ti laradock-workspace-1 ping my-api
I can't connect and can't really figure out why. I was hoping someone familiar with Docker might be able to explain why, since I'm sure it's something very obvious I'm missing. Thanks!
By default Docker Compose uses a bridge network to provision inter-container communication. Read this article for more info about inter-container networking.
What matters for you is that by default Docker Compose creates a hostname that equals the service name in the docker-compose.yml file.
Consider the following docker-compose.yml:
version: '3.9'
services:
  server:
    image: node:16.9.0
    container_name: server
    tty: true
    stdin_open: true
    depends_on:
      - mongo
    command: bash
  mongo:
    image: mongo
    environment:
      MONGO_INITDB_DATABASE: my-database
When you run docker-compose up, Docker creates a default network and assigns the service name as the hostname for both mongo and server.
You can now access the backend container via:
docker exec -it server bash
And now you can reach the mongo container over Docker's internal network (on its default port 27017 in this case):
curl -v http://mongo:27017/my-database
That's it. The same applies to your setup.
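For the case in the question, where the two containers come from separate compose projects, a common pattern, sketched here under the assumption that my-network was created beforehand with docker network create my-network, is to declare the network as external in each compose file and attach the services to it:
# in each project's docker-compose.yml (service name and image are placeholders)
services:
  my-api:
    image: my-api:latest
    networks:
      - my-network
networks:
  my-network:
    external: true
With external: true, Compose attaches the services to the pre-existing network instead of creating a project-scoped one, so a container from the other project should then be able to resolve my-api by name.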

Make a request to a spring api running in a docker container from windows host

So, I searched around for an answer on this matter, but either people don't address the issue or they say there's no problem doing this on their computer (Mac or Linux). It seems like this might be a Windows problem.
I have a Spring API running in a Docker container (Linux container). I use Docker Desktop on Windows and I'm trying to make a request (in Insomnia/Postman/whatever) to that API.
If I run the API locally, making the following request works perfectly:
http://localhost:8080/api/task/
This will list multiple task elements.
I've containerized this application like so:
Dockerfile
FROM openjdk:11.0.7
COPY ./target/spring-api-0.0.1-SNAPSHOT.jar /usr/app/
WORKDIR /usr/app
RUN sh -c 'touch spring-api-0.0.1-SNAPSHOT.jar'
ENTRYPOINT ["java", "-jar", "spring-api-0.0.1-SNAPSHOT.jar"]
docker-compose.yml
version: '3.8'
services:
  api:
    build: .
    depends_on:
      - mysql
    environment:
      - SPRING_DATASOURCE_URL=jdbc:mysql://mysql:3306/test?createDatabaseIfNotExist=true
    ports:
      - "8080:80"
  mysql:
    image: mysql
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_USER=root
      - MYSQL_PASSWORD=root
      - MYSQL_DATABASE=test
If I do docker-compose up this works without issue:
The problem is, if I try to call the same endpoint as before from localhost, I don't get any response.
Insomnia returns an error saying: Error: Server returned nothing (no headers, no data)
I've also tried connecting to the container's ip (got it from docker inspect) but no luck.
Ports are exposed in docker-compose.yml. What am I missing?
Thanks in advance.
The port mapping is incorrect. The Spring Boot application starts on port 8080 inside the container (as the screenshot shows), so the host port should be mapped to container port 8080.
It should be like below:
ports:
  - "8080:8080"

How to connect my spring boot app to redis container on docker?

I have created a Spring app and I want to connect it to a Redis server which is deployed with docker-compose. I put the needed properties as follows:
spring.redis.host=redis
spring.redis.port=6379
But I keep getting a connection exception, so how can I know on which host Redis is running and how to connect to it?
Here is my docker-compose file :
version: '2'
services:
  redis:
    image: 'bitnami/redis:5.0'
    environment:
      # ALLOW_EMPTY_PASSWORD is recommended only for development.
      - ALLOW_EMPTY_PASSWORD=yes
      - REDIS_DISABLE_COMMANDS=FLUSHDB,FLUSHALL
    ports:
      - '6379:6379'
    volumes:
      - 'redis_data:/bitnami/redis/data'
volumes:
  redis_data:
    driver: local
From the docker compose documentation:
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name
If you want to access redis by name ('redis' in this case), the Spring Boot application also has to be deployed as a docker-compose service, but it doesn't appear in the docker-compose file that you've provided in the question, so please add it.
Alternatively, if you're trying to run the Spring Boot application on the host machine, use 'localhost' instead of 'redis' in order to access the redis container.
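A minimal sketch of how that could look, with the Spring Boot app added next to redis (the app service name, build context and exposed port are assumptions, not taken from the question):
version: '2'
services:
  app:
    build: .            # assumes a Dockerfile for the Spring Boot app in the project root
    ports:
      - '8080:8080'
    depends_on:
      - redis
  redis:
    image: 'bitnami/redis:5.0'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
      - REDIS_DISABLE_COMMANDS=FLUSHDB,FLUSHALL
    ports:
      - '6379:6379'
    volumes:
      - 'redis_data:/bitnami/redis/data'
volumes:
  redis_data:
    driver: local
With both services in the same compose file, spring.redis.host=redis resolves over Compose's default network.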
Another approach you can use is a Docker network. Below are the steps to follow:
Create a docker network for redis
docker network create redis-docker
Spin up the redis container in the redis-docker network.
docker run -d --net redis-docker --name redis-stack -p 6379:6379 -p 8001:8001 redis/redis-stack:latest
Inspect the redis-docker network
docker network inspect redis-docker
Copy the "IPv4Address" IP and paste it in application.yml.
Now build and start your application.
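A sketch of what the resulting application.yml might then contain (the IP shown is only a placeholder for whatever docker network inspect reports):
spring:
  redis:
    host: 172.18.0.2   # placeholder: the IPv4Address from docker network inspect
    port: 6379
Keep in mind that container IPs can change between runs, so the service-name approach above is usually the more robust option.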

Connecting spring boot application in one docker container to a Cassandra database in another container

I need to connect spring boot application in one docker container to a Cassandra database in another container.
There are two ways.
"scripted" approach, where you first docker run cassandra container, then you docker run your app. You have to make sure first container exposes ports second container can connect to, or while starting second container reference ports from first by name
use docker compose, that should more or less look like this:
version: '2'
services:
  cassandra:
    image: cassandra:3.11.5
    ports:
      - 7000:7000
  springapp:
    image: springapp:latest
    ports:
      - 8080:8080
    environment:
      CASSANDRA_CONTACTPOINTS: 127.0.0.1
      CASSANDRA_PORT: 7000
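A sketch of the first, "scripted" approach, assuming a user-defined bridge network so the app can reach Cassandra by name, and assuming the application reads the same CASSANDRA_CONTACTPOINTS/CASSANDRA_PORT variables as in the compose example (the network name and the CQL port 9042 are assumptions, not part of the original answer):
# create a user-defined network so containers can resolve each other by name
docker network create cassandra-net

# start Cassandra on that network (9042 is the CQL native-transport port clients normally use)
docker run -d --name cassandra --network cassandra-net cassandra:3.11.5

# start the Spring Boot app on the same network, pointing it at the cassandra container by name
docker run -d --name springapp --network cassandra-net -p 8080:8080 \
  -e CASSANDRA_CONTACTPOINTS=cassandra -e CASSANDRA_PORT=9042 \
  springapp:latest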

Docker-compose links container but Application throws "no route found"

I have a web application with database and rabbitMQ services. I am using docker-compose to build and run images.
rabbitmq:
  image: rabbitmq:3-management
  container_name: rabbitmq
  hostname: rabbitmq
  ports:
    - "15672:15672"
  expose:
    - "5672"
    - "4369"
    - "25672"
coredb:
  container_name: coredb
  build: ./mongodb/
core:
  container_name: core
  build: ./core/
  ports:
    - "80:8080"
    - "5683/udp:5683/udp"
    - "5684/udp:5684/udp"
  links:
    - rabbitmq
    - coredb
After running
docker-compose up
All the services start up properly. I can ping rabbitmq and coredb from core's shell. In the Spring Boot application code, I am using
CachingConnectionFactory(hostname)
to connect to RabbitMQ. The hostname I am giving is "rabbitmq". In the logs during event publishing, the error I see is "No route found". The core service can connect to the database properly but cannot connect to RabbitMQ.
You can use docker inspect <container name> to inspect the config of the "core" service to make sure the link was set up. You can also check the hostname resolution using docker exec -ti <container name> cat /etc/hosts (which I think you did already).
If it looks like it's properly linked up, the issue is probably that the core service is trying to connect to it before the rabbitmq service has actually started.
You can have the "core" service retry a few times (with a short delay) to try to set up the connection.
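Another option, sketched here under the assumption that the compose file is moved to a format that supports healthchecks and conditional depends_on, is to let Compose delay starting "core" until RabbitMQ answers a ping:
services:
  rabbitmq:
    image: rabbitmq:3-management
    hostname: rabbitmq
    healthcheck:
      test: ["CMD", "rabbitmq-diagnostics", "-q", "ping"]   # available in recent RabbitMQ images
      interval: 10s
      timeout: 5s
      retries: 10
  core:
    build: ./core/
    depends_on:
      rabbitmq:
        condition: service_healthy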
