Communication between UI layer and REST service not happening properly over docker-compose [duplicate] - spring-boot

This question already has answers here:
Why Docker compose link does not work in react app?
(2 answers)
Closed 1 year ago.
The communication between the front-end layer (React) and the back-end layer (Spring Boot REST API) is not happening properly over Docker Compose.
version: "3"
services:
backend-service:
build:./backend
ports:
- 8080:8080
ui-service:
build: ./ui
ports:
- 8085:8085
So when I call https://localhost:8080 from the front-end layer it works fine, whereas when I call https://backend-service:8080 from the front-end layer it gives me net::ERR_NAME_NOT_RESOLVED.
That is a bit unusual. I'm wondering if I did something wrong or if this is as designed?

From Networking in Compose
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
You can try including the container_name field. More about it here
version: "3"
services:
backend-service:
build:./backend
container_name: backend-service
ports:
- 8080:8080
ui-service:
build: ./ui
ports:
- 8085:8085
If you do not specify container_name, your container is likely to be named backend-service-1 or similar. You can check container names using docker ps or by looking at the Compose logs.
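As a quick sanity check, and assuming your images provide a shell, you can confirm the names and the in-network DNS resolution from the host; a sketch (swap getent for ping or curl if your image lacks it):
# list running containers and their names
docker ps --format '{{.Names}}'
# from inside the UI container, check that the backend hostname resolves on the Compose network
docker compose exec ui-service getent hosts backend-service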

Related

How to network 2 separate docker containers to communicate with each other?

I'm pretty new to docker, and I've tried searching about networking but haven't found a solution that's worked.
I have a Laravel app that is using Laradock.
I also have an external 3rd party API that runs in its own docker container.
I basically want to specify the container name of the API inside my Laravel .env file and have it dynamically resolve the container IP, so I can make API calls from my Laravel app. I can already do this with services that are part of Laradock, like mariadb/mysql, but since my API lives in an external container, Laravel can't connect to it.
I tried making a network and attaching them with:
docker network create my-network
Then inside the docker-compose.yml files for each of the containers, I specified:
networks:
  my-network:
    name: "my-network"
But if I try to ping them with:
docker exec -ti laradock-workspace-1 ping my-api
I can't connect, and I can't really figure out why. I was hoping someone familiar with Docker might be able to explain, since I'm sure it's something very obvious I'm missing. Thanks!
By default Docker Compose uses a bridge network to provision inter-container communication. Read this article for more info about inter-container networking.
What matters for you, is that by default Docker Compose creates a hostname that equals the service name in the docker-compose.yml file.
Consider the following docker-compose.yml:
version: '3.9'
services:
  server:
    image: node:16.9.0
    container_name: server
    tty: true
    stdin_open: true
    depends_on:
      - mongo
    command: bash
  mongo:
    image: mongo
    environment:
      MONGO_INITDB_DATABASE: my-database
When you run docker-compose up, Docker creates a default network and assigns the service name as the hostname for both mongo and server.
You can now access the backend container via:
docker exec -it server bash
And now you can reach the mongo container over Docker's internal network (on the default port 27017 in this case):
curl -v http://mongo:27017/my-database
That's it. The same applies for your setup.
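For the asker's two separate Compose projects, the detail that usually matters is marking the pre-created network as external, so each project joins it instead of creating its own namespaced copy, and attaching it at the service level. A minimal sketch, with illustrative service and image names:
# fragment for each project's docker-compose.yml
services:
  my-api:
    image: my-api:latest   # illustrative image name
    networks:
      - my-network
networks:
  my-network:
    external: true   # join the network created with docker network create my-network
With both containers attached to my-network, the ping from laradock-workspace-1 to my-api should then resolve.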

How to check whether the RabbitMQ connection is up or not (health check)?

I'm running 4 microservices using Docker. One service depends on the others, which is why I need to check whether the other services are up before using any of them.
To bring up all the services I'm writing a bash script.
For now, I am using sleep to wait until RabbitMQ is up properly.
What is a better way to check whether RabbitMQ is up or not? I have to wait until it is ready.
Currently I am doing it like this -
# wait for rabbitmq container be ready
sleep 14
This is the docker-compose service for RabbitMQ:
rabbitmq:
  image: 'rabbitmq:3.8.9'
  container_name: rabbitmq_dev
  restart: always
  ports:
    - 5675:5672
  environment:
    - RABBITMQ_DEFAULT_USER=rabbit
    - RABBITMQ_DEFAULT_PASS=pass
  depends_on:
    - consul
  networks:
    - my_networks
I think a healthcheck can solve your problem.
Reference link: Docker Compose wait for container X before starting Y
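As a sketch of that approach (the app service is illustrative, and depends_on with a condition requires a Compose version that supports it):
rabbitmq:
  image: 'rabbitmq:3.8.9'
  container_name: rabbitmq_dev
  healthcheck:
    # rabbitmq-diagnostics ships with the image and fails until the broker is ready
    test: ["CMD", "rabbitmq-diagnostics", "-q", "ping"]
    interval: 10s
    timeout: 10s
    retries: 5
app:
  image: my-app:latest   # illustrative
  depends_on:
    rabbitmq:
      condition: service_healthy
From a bash script you can poll the same check instead of a fixed sleep 14:
until docker exec rabbitmq_dev rabbitmq-diagnostics -q ping; do sleep 2; done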

Testing a container against DynamoDB-Local

I wanted to test a container locally before pushing it to aws ecs.
I ran unit tests against a docker-compose stack including a dynamodb-local container using a Go (aws-sdk-go-v2) endpoint resolver with http://localhost:8000 as the url.
So I wanted to build and test the container locally, and realised I needed to attach it to the default network created by docker-compose. I struggled with this a bit, so I built a stripped-down trial. I created an endpoint resolver with a url of http://dynamo-local:8000 (the container is named dynamo-local in docker-compose) and attached it to the default network within docker run.
Now that all works, I can perform the various table operations successfully, but one of the things that confuses me is that if I run aws cli:
aws --endpoint-url=http://localhost:8000 dynamodb list-tables
then the output shows no tables exist, when a table definitely exists. I had assumed, naively, that since I can access port 8000 of the same container with different endpoints, I should be able to access the same resources. Wrong.
Obviously a gap in my education. What am I missing? I need to expand the trial to a proper test of the full app, so it's important to me that I understand what is going on here.
Is there a way I can use the aws cli to access the table?
docker-compose file:
version: '3.5'
services:
  localstack:
    image: localstack/localstack:latest
    container_name: localstack_test
    ports:
      - '4566:4566'
    environment:
      - SERVICES=s3,sns,sqs,lambda
      - DEBUG=1
      - DATA_DIR=
    volumes:
      - './.AWSServices:/tmp/AWSServices'
      - '/var/run/docker.sock:/var/run/docker.sock'
  nginx:
    build:
      context: .
      dockerfile: Dockerfile
    image: chanonry/urlfiles-nginx:latest
    container_name: nginx
    ports:
      - '8080:80'
  dynamodb:
    image: amazon/dynamodb-local:1.13.6
    container_name: dynamo-local
    ports:
      - '8000:8000'
networks:
  default:
    name: test-net
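One common cause of the empty list-tables output: unless DynamoDB Local is started with the -sharedDb flag, it keeps a separate database file per access key and region, so CLI credentials or a region that differ from the app's will show no tables. To also rule out networking, the CLI can be run on the same Compose network; a sketch (the dummy credentials are illustrative and should match whatever the app uses):
docker run --rm --network test-net \
  -e AWS_ACCESS_KEY_ID=dummy \
  -e AWS_SECRET_ACCESS_KEY=dummy \
  -e AWS_DEFAULT_REGION=eu-west-1 \
  amazon/aws-cli dynamodb list-tables --endpoint-url http://dynamo-local:8000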

Make a request to a Spring API running in a Docker container from a Windows host

So, I searched around for an answer on this matter, but people either don't address the issue or say there's no problem doing this on their computer (Mac or Linux). It seems like this might be a Windows problem.
I have a Spring API running in a Docker container (a Linux container). I use Docker Desktop on Windows and I'm trying to make a request (in Insomnia/Postman/whatever) to that API.
If I run the api locally making the following request works perfectly:
http://localhost:8080/api/task/
This will list multiple task elements.
I've containerized this application like so:
Dockerfile
FROM openjdk:11.0.7
COPY ./target/spring-api-0.0.1-SNAPSHOT.jar /usr/app/
WORKDIR /usr/app
RUN sh -c 'touch spring-api-0.0.1-SNAPSHOT.jar'
ENTRYPOINT ["java", "-jar", "spring-api-0.0.1-SNAPSHOT.jar"]
docker-compose.yml
version: '3.8'
services:
  api:
    build: .
    depends_on:
      - mysql
    environment:
      - SPRING_DATASOURCE_URL=jdbc:mysql://mysql:3306/test?createDatabaseIfNotExist=true
    ports:
      - "8080:80"
  mysql:
    image: mysql
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_USER=root
      - MYSQL_PASSWORD=root
      - MYSQL_DATABASE=test
If I do docker-compose up this works without issue.
The problem is, if I try to call the same endpoint as before from localhost I don't get any response.
Insomnia returns an error saying: Error: Server returned nothing (no headers, no data)
I've also tried connecting to the container's IP (got it from docker inspect) but no luck.
Ports are exposed in docker-compose.yml. What am I missing?
Thanks in advance.
The port mapping is incorrect. The Spring Boot application starts on port 8080 inside the container (as shown in the logs), so the container side of the mapping should be 8080, not 80.
It should be like below:
ports:
  - "8080:8080"

Connecting spring boot application in one docker container to a Cassandra database in another container

I need to connect spring boot application in one docker container to a Cassandra database in another container.
There are two ways:
a "scripted" approach, where you first docker run the Cassandra container, then docker run your app. You have to make sure the first container exposes ports the second container can connect to, or reference the first container by name when starting the second (a sketch of this follows the compose example below);
or use docker compose, which should more or less look like this:
version: '2'
services:
  cassandra:
    image: cassandra:3.11.5
    ports:
      - 9042:9042   # CQL native transport port (7000 is inter-node gossip, not for clients)
  springapp:
    image: springapp:latest
    ports:
      - 8080:8080
    environment:
      CASSANDRA_CONTACTPOINTS: cassandra   # the service name resolves on the Compose network; 127.0.0.1 would point at the app container itself
      CASSANDRA_PORT: 9042
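A minimal sketch of the "scripted" approach from option 1 (network and image names are illustrative):
# user-defined network so the containers can resolve each other by name
docker network create cassandra-net
docker run -d --name cassandra --network cassandra-net cassandra:3.11.5
# the app reaches Cassandra via the container name on the shared network
docker run -d --name springapp --network cassandra-net \
  -p 8080:8080 \
  -e CASSANDRA_CONTACTPOINTS=cassandra \
  -e CASSANDRA_PORT=9042 \
  springapp:latest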
