Two microservices are deployed on AWS inside containers. I have a scenario where microservice-A has to communicate with microservice-B. When I tried http://localhost:8082/url it didn't work, so unfortunately I had to use the public URL of my microservices, and going through the public URL makes performance slow.
Can anyone please help me so that the microservices can talk to each other locally, inside the Docker containers, instead of over the public URL?
All you need is a Docker network for this. I have achieved it using docker-compose. In the following example I define a network back-tier and both services belong to it. After that, your application can reach the database by its service name, e.g. http://database:27017.
version: '3'

networks:
  back-tier:

services:
  database:
    build: ./Database
    networks:
      - back-tier
    ports:
      - "27017:27017"
  backend:
    build: ./Backend
    networks:
      - back-tier
    ports:
      - "8080:8080"
    depends_on:
      - database
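If you want to sanity-check the service-to-service connectivity, a quick sketch (assuming curl is installed in the backend image and the stack was started with docker-compose up):

# open the backend container and call the database service by its Compose service name
docker-compose exec backend curl -v http://database:27017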
Let me start off by stating that I know this question has been asked on many forums. I have read them all.
I have two Docker containers that are built with docker-compose and each contain a Laravel project. They are both attached to a network and can ping one another successfully. However, when I make a request from Postman to one backend, which then makes a curl request to the other, I get the connection refused error shown below.
This is my docker-compose file for each project respectively:
version: '3.8'

services:
  bumblebee:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/app
    networks:
      - picknpack
    ports:
      - "8010:8000"

networks:
  picknpack:
    external: true
version: '3.8'

services:
  optimus:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/app
    ports:
      - "8020:8000"
    networks:
      - picknpack
    depends_on:
      - optimus_db
  optimus_db:
    image: mysql:8.0.25
    environment:
      MYSQL_DATABASE: optimus
      MYSQL_USER: test
      MYSQL_PASSWORD: test1234
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - ./storage/dbdata:/var/lib/mysql
    ports:
      - "33020:3306"

networks:
  picknpack:
    external: true
Here you can see the successful ping:
I would love to keep tinkering with configuration files, but I have a deadline to meet and nothing is working, so any help would be appreciated.
EDIT
Please see inspection of network:
Within the Docker network that I created, both containers are exposed on port 8000 as per their Dockerfiles. The answer was staring me square in the face: 'Connection refused on port 80'. The HTTP client was defaulting to port 80 rather than 8000. I updated the curl request to target port 8000 and it works now. Thanks to @user3532758 for your help. Note that the containers are mapped to ports 8010 and 8020 only on the external host network, not within the Docker network; inside the Docker network they are both served on port 8000, with different IPs.
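For anyone hitting the same error, a minimal sketch of the difference (the /api/ping route is a placeholder for whatever the Laravel apps actually expose):

# from the host, go through the published port mapping (8020 -> container port 8000)
curl http://localhost:8020/api/ping

# from inside the bumblebee container, use the service name on the shared picknpack
# network and the container port 8000; curl defaults to port 80 if the port is omitted
docker exec <bumblebee-container> curl http://optimus:8000/api/ping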
I wanted to test a container locally before pushing it to AWS ECS.
I ran unit tests against a docker-compose stack including a dynamodb-local container, using a Go (aws-sdk-go-v2) endpoint resolver with http://localhost:8000 as the URL.
So I wanted to build and test the container locally and realised I needed to attach it to the default network created by docker-compose. I struggled with this a bit, so I built a stripped-down trial: I created an endpoint resolver with a URL of http://dynamo-local:8000 (the container is named dynamo-local in docker-compose) and attached the container to that default network via docker run.
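A rough sketch of what that looked like (the image name go-dynamo-trial is a placeholder; test-net is the default network name set in the compose file further down):

# docker-compose has already created the default network
docker network ls | grep test-net

# run the trial container attached to that network so http://dynamo-local:8000 resolves
docker run --rm --network test-net go-dynamo-trial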
Now all of that works and I can perform the various table operations successfully, but one thing that confuses me is that if I run the AWS CLI:
aws --endpoint-url=http://localhost:8000 dynamodb list-tables
then the output shows no tables, when a table definitely exists. I had naively assumed that, since I can access port 8000 of the same container through different endpoints, I should be able to see the same resources. Wrong.
Obviously a gap in my education. What am I missing? I need to expand this trial into a proper test of the full app, so it's important to me that I understand what is going on here.
Is there a way I can use the AWS CLI to access the table?
docker-compose file:
version: '3.5'

services:
  localstack:
    image: localstack/localstack:latest
    container_name: localstack_test
    ports:
      - '4566:4566'
    environment:
      - SERVICES=s3,sns,sqs,lambda
      - DEBUG=1
      - DATA_DIR=
    volumes:
      - './.AWSServices:/tmp/AWSServices'
      - '/var/run/docker.sock:/var/run/docker.sock'

  nginx:
    build:
      context: .
      dockerfile: Dockerfile
    image: chanonry/urlfiles-nginx:latest
    container_name: nginx
    ports:
      - '8080:80'

  dynamodb:
    image: amazon/dynamodb-local:1.13.6
    container_name: dynamo-local
    ports:
      - '8000:8000'

networks:
  default:
    name: test-net
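For what it's worth, one common explanation for "missing" tables with amazon/dynamodb-local is that, unless it is started with the -sharedDb flag, it keeps a separate database per access key id and region, so a CLI call using different credentials or a different region than the SDK client looks at a different, empty database. A hedged sketch of checking that (the key, secret, and region values are placeholders and should match whatever the Go client is configured with):

# use the same dummy credentials and region as the application client
AWS_ACCESS_KEY_ID=dummykey \
AWS_SECRET_ACCESS_KEY=dummysecret \
AWS_DEFAULT_REGION=eu-west-1 \
aws --endpoint-url=http://localhost:8000 dynamodb list-tables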
I have developed and dockerised two applications, web (React) and api (Laravel, MySQL); they have separate codebases and separate directories.
Could somebody please help explain how I can get my web application talking to my api while using Docker at the same time?
Update: Ultimately, what I want to achieve is to have both my frontend and backend running on port 80, without having two web servers running as containers, so that my Docker development environment works the same as using Valet or MAMP etc.
For development you could make use of docker-compose.
Key benefits:
Configure your app's services in YAML.
Single command to create/start the services defined on this configuration.
Compose creates a default network for your app. Each container joins this default network and they can see each other.
I use the following structure for a project.
projectFolder
|_backend (laravel app)
|_frontend (react app)
|_docker-compose.yml
|_backend.dockerfile
|_frontend.dockerfile
My docker-compose.yml
version: "3.3"
services:
frontend:
build:
context: ./
dockerfile: frontend.dockerfile
args:
- NODE_ENV=development
ports:
- "3000:3000"
volumes:
- ./frontend:/opt/app
- ./frontend/package.json:/opt/package.json
environment:
- NODE_ENV=development
backend:
build:
context: ./
dockerfile: backend.dockerfile
working_dir: /var/www/html/actas
volumes:
- ./backend:/var/www/html/actas
environment:
- "DB_PORT=3306"
- "DB_HOST=mysql"
ports:
- "8000:8000"
mysql:
image: mysql:5.6
ports:
- "3306:3306"
volumes:
- dbdata:/var/lib/mysql
environment:
- "MYSQL_DATABASE=homestead"
- "MYSQL_USER=homestead"
- "MYSQL_PASSWORD=secret"
- "MYSQL_ROOT_PASSWORD=secret"
volumes:
dbdata:
Each part of the application is defined by a service in the docker-compose file. E.g.
frontend
backend
mysql
Docker-compose will create a default network and add each container to it. The hostname for
each container will be the service name defined in the yml file.
For example, the backend container accesses the mysql server by the name mysql. You can see this in the service definition itself:
backend:
  ...
  environment:
    - "DB_PORT=3306"
    - "DB_HOST=mysql" # <-- the hostname for the mysql container is the name of the service
With this, in the React app, I can set up the proxy configuration in package.json as follows:
"proxy": "http://backend:8000",
One last thing, as mentioned by David Maze in the comments: add backend to your hosts file so the browser can resolve that name.
E.g. /etc/hosts on Ubuntu:
127.0.1.1 backend
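To sanity-check the hosts entry and the published backend port from the host machine, something like the following should work (the /api/health route is only a placeholder for a route your Laravel app actually serves):

# 'backend' now resolves locally, and port 8000 is published by the compose file
curl http://backend:8000/api/health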
My goal is to implement Spring Cloud Gateway as a reverse proxy, which I plan to eventually use in tandem with Keycloak for microservice security. The issue I am currently having is as follows:
Run a microservice in Docker without Spring Cloud Gateway
Run Spring Cloud Gateway with default settings and a single route that redirects to a microservice that exists inside Docker
This works as intended and redirects to the microservice when using localhost:8080. When I then include the gateway in my docker-compose and build the container, it says "this site can't be reached", but all other services inside the compose stack are reachable via their ports. I need help determining why this is happening, and I suspect it is because of my docker-compose.yml file.
Here it is:
version: "3"
services:
react-web:
container_name: react-auth-demo
build: "./ReactJS-Auth/"
volumes:
- "./app"
- "/app/node_modules"
ports:
- 3000:3000
environment:
- NODE_ENV=development
- CHOKIDAR_USEPOLLING=true
networks:
- demo-network
depends_on:
- spring-cloud-gateway
keycloak:
image: jboss/keycloak:8.0.1
container_name: keycloak-service
volumes:
- //c/tmp:/tmp
ports:
- 8180:8080
environment:
- KEYCLOAK_PASSWORD=admin
- KEYCLOAK_USER=admin
networks:
- demo-network
depends_on:
- spring-cloud-gateway
spring-cloud-gateway:
build: ./gateway-test
container_name: gateway-test
ports:
- 6000:6000
networks:
- demo-network
networks:
demo-network:
driver: bridge
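While debugging, it may help to confirm that all three containers really ended up on the same network; a quick sketch (the network name is usually prefixed with the compose project name, so adjust <project> accordingly):

# the Containers section of the inspect output should list
# react-auth-demo, keycloak-service and gateway-test
docker network ls
docker network inspect <project>_demo-network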
Here is the code for the gateway:
@Bean
public RouteLocator customRouteLocator(RouteLocatorBuilder builder) {
    return builder.routes()
            // route every path to the react-web app at the docker-machine IP
            .route("1", r -> r
                    .path("/**")
                    .uri("http://192.168.99.100:3000/"))
            .build();
}
The request is as follows: http://192.168.99.100:6000/. This should redirect me to the react-web service.
And lastly here is the link to the source code:
https://gitlab.com/LibreFoodPantry/modules/usermodule-bnm/Demo
This project is for an Independent Study in college. So all help and advice are very much appreciated even if it doesn't relate to the issue at hand. Thanks.
I have a Java Spring web application that connects to Postgres. The connection string to the database is: spring.datasource.url=jdbc:postgresql://postgres:5432/postgres
There is a compose file that brings up the web application and the database:
version: "3"
services:
postgres:
networks:
- backend
image: postgres
ports:
- "5432:5432"
volumes:
- db-data:/var/lib/postgresql/data
worker1:
networks:
- backend
image: scripter51/worker
ports:
- "8082:8082"
deploy:
mode: replicated
replicas: 2
placement:
constraints: [node.role == worker]
networks:
backend:
volumes:
db-data:
I publish the services on the machine with the command docker stack deploy --compose-file comp.yml test.
Problem: if the database and the web application end up on the same machine, everything works; if they end up on different machines, the application cannot find the database by the service name.
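For reference, a short sketch of checks that show where the replicas were scheduled and whether the stack's network exists on those nodes (the test_ prefix comes from the stack name in the deploy command above):

# which node is each task running on?
docker service ps test_postgres
docker service ps test_worker1

# the stack network should be present on every node that runs one of its tasks
docker network inspect test_backend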
I was able to solve this problem.
I was trying to create a network between the host and the virtual machine using Docker's own networking; apparently that does not work.