Testing a container against DynamoDB-Local - Go

I wanted to test a container locally before pushing it to AWS ECS.
I ran unit tests against a docker-compose stack that includes a dynamodb-local container, using a Go (aws-sdk-go-v2) endpoint resolver with http://localhost:8000 as the URL.
So I wanted to build and test the container locally, and realised I needed to attach it to the default network created by docker-compose. I struggled with this a bit, so I built a stripped-down trial: I created an endpoint resolver with a URL of http://dynamo-local:8000 (the container is named dynamo-local in docker-compose) and attached the trial container to the default network via docker run.
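Stripped right down, the resolver side of the trial looks something like this (the region and the static credentials here are placeholders):

package main

import (
    "context"
    "log"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/credentials"
    "github.com/aws/aws-sdk-go-v2/service/dynamodb"
)

func main() {
    // Route every DynamoDB request to the dynamodb-local container.
    // "dynamo-local" is the container_name from the compose file below,
    // so this only resolves when the trial container is on the same network.
    resolver := aws.EndpointResolverWithOptionsFunc(
        func(service, region string, options ...interface{}) (aws.Endpoint, error) {
            return aws.Endpoint{URL: "http://dynamo-local:8000"}, nil
        })

    cfg, err := config.LoadDefaultConfig(context.TODO(),
        config.WithRegion("eu-west-1"), // placeholder region
        config.WithCredentialsProvider(
            credentials.NewStaticCredentialsProvider("test", "test", "")), // placeholder creds
        config.WithEndpointResolverWithOptions(resolver),
    )
    if err != nil {
        log.Fatal(err)
    }

    client := dynamodb.NewFromConfig(cfg)
    out, err := client.ListTables(context.TODO(), &dynamodb.ListTablesInput{})
    if err != nil {
        log.Fatal(err)
    }
    log.Println("tables:", out.TableNames)
}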
Now that all works: I can perform the various table operations successfully. But one thing that confuses me is that if I run the AWS CLI:
aws --endpoint-url=http://localhost:8000 dynamodb list-tables
the output shows no tables, even though a table definitely exists. I had naively assumed that, since I can reach port 8000 of the same container through different endpoints, I should see the same resources. Wrong.
Obviously a gap in my education. What am I missing? I need to expand the trial into a proper test of the full app, so it's important to me that I understand what is going on here.
Is there a way I can use the AWS CLI to access the table?
docker-compose file:
version: '3.5'
services:
  localstack:
    image: localstack/localstack:latest
    container_name: localstack_test
    ports:
      - '4566:4566'
    environment:
      - SERVICES=s3,sns,sqs,lambda
      - DEBUG=1
      - DATA_DIR=
    volumes:
      - './.AWSServices:/tmp/AWSServices'
      - '/var/run/docker.sock:/var/run/docker.sock'
  nginx:
    build:
      context: .
      dockerfile: Dockerfile
    image: chanonry/urlfiles-nginx:latest
    container_name: nginx
    ports:
      - '8080:80'
  dynamodb:
    image: amazon/dynamodb-local:1.13.6
    container_name: dynamo-local
    ports:
      - '8000:8000'
networks:
  default:
    name: test-net
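For reference, the trial container is attached to that network when run outside the compose stack (the image name here is a placeholder):
docker run --rm --network test-net myorg/dynamo-trial:latest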

Related

Make a request to a Spring API running in a Docker container from a Windows host

So, I searched around for an answer on this matter, but people either don't address the issue or say there's no problem doing this on their computer (Mac or Linux). It seems like this might be a Windows problem.
I have a Spring API running in a Docker container (a Linux container). I use Docker Desktop on Windows and I'm trying to make a request to that API (in Insomnia/Postman/whatever).
If I run the API locally, making the following request works perfectly:
http://localhost:8080/api/task/
This lists multiple task elements.
I've containerized this application like so:
Dockerfile
FROM openjdk:11.0.7
COPY ./target/spring-api-0.0.1-SNAPSHOT.jar /usr/app/
WORKDIR /usr/app
RUN sh -c 'touch spring-api-0.0.1-SNAPSHOT.jar'
ENTRYPOINT ["java", "-jar", "spring-api-0.0.1-SNAPSHOT.jar"]
docker-compose.yml
version: '3.8'
services:
  api:
    build: .
    depends_on:
      - mysql
    environment:
      - SPRING_DATASOURCE_URL=jdbc:mysql://mysql:3306/test?createDatabaseIfNotExist=true
    ports:
      - "8080:80"
  mysql:
    image: mysql
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_USER=root
      - MYSQL_PASSWORD=root
      - MYSQL_DATABASE=test
If I do docker-compose up, this works without issue.
The problem is, if I try to call the same endpoint as before from localhost, I don't get any response.
Insomnia returns an error saying: Error: Server returned nothing (no headers, no data)
I've also tried connecting to the container's IP (taken from docker inspect), but no luck.
Ports are exposed in docker-compose.yml. What am I missing?
Thanks in advance.
The port mapping is incorrect. The Spring Boot application starts on port 8080 inside the container (visible in the startup logs), so the mapping must target port 8080 inside the container, not 80.
It should be like below:
ports:
  - "8080:8080"

How to get docker containers from different projects speaking to each other

I have developed and dockerised two applications, web (React) and api (Laravel, MySQL); they have separate codebases and separate directories.
Could somebody please help explain how I can get my web application talking to my api whilst using docker at the same time?
Update: Ultimately what I want to achieve is to have both my frontend and backend running on port 80, without having to run two web servers as containers, so that my docker development environment works the same as using Valet or MAMP etc.
For development you could make use of docker-compose.
Key benefits:
Configure your app's services in YAML.
Single command to create/start the services defined in this configuration.
Compose creates a default network for your app. Each container joins this default network and can see the others.
I use the following structure for a project.
projectFolder
|_backend (laravel app)
|_frontend (react app)
|_docker-compose.yml
|_backend.dockerfile
|_frontend.dockerfile
My docker-compose.yml
version: "3.3"
services:
frontend:
build:
context: ./
dockerfile: frontend.dockerfile
args:
- NODE_ENV=development
ports:
- "3000:3000"
volumes:
- ./frontend:/opt/app
- ./frontend/package.json:/opt/package.json
environment:
- NODE_ENV=development
backend:
build:
context: ./
dockerfile: backend.dockerfile
working_dir: /var/www/html/actas
volumes:
- ./backend:/var/www/html/actas
environment:
- "DB_PORT=3306"
- "DB_HOST=mysql"
ports:
- "8000:8000"
mysql:
image: mysql:5.6
ports:
- "3306:3306"
volumes:
- dbdata:/var/lib/mysql
environment:
- "MYSQL_DATABASE=homestead"
- "MYSQL_USER=homestead"
- "MYSQL_PASSWORD=secret"
- "MYSQL_ROOT_PASSWORD=secret"
volumes:
dbdata:
Each part of the application is defined by a service in the docker-compose file. E.g.
frontend
backend
mysql
Docker-compose will create a default network and add each container to it. The hostname for each container will be the service name defined in the yml file. For example, the backend container accesses the mysql server with the name mysql. You can see this in the service definition itself:
backend:
  ...
  environment:
    - "DB_PORT=3306"
    - "DB_HOST=mysql" <-- The hostname for the mysql container is the name of the service
With this, in the react app, I can set up the proxy configuration in package.json as follows:
"proxy": "http://backend:8000",
One last thing, as mentioned by David Maze in the comments: add backend to your hosts file so the browser can resolve that name.
E.g. /etc/hosts on Ubuntu:
127.0.1.1 backend

Communication between two microservices deployed in Docker containers

Two microservices are deployed on AWS inside containers. I have a scenario where my microservice-A has to communicate with microservice-B. When I tried http://localhost:8082/url it didn't work, so unfortunately I had to use the public URLs of my microservices, and going through the public URL makes performance slow.
Can anyone please help me, so that the microservices can communicate with each other locally inside the docker network?
All you need is a docker network for this. I have achieved it using docker-compose. In the following example I define a network back-tier and both services belong to it. After this, your application can reach the DB by its service name: http://database:27017.
version: '3'
networks:
  back-tier:
services:
  database:
    build: ./Database
    networks:
      - back-tier
    ports:
      - "27017:27017"
  backend:
    build: ./Backend
    networks:
      - back-tier
    ports:
      - "8080:8080"
    depends_on:
      - database
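The same name-based lookup works for calls between the services themselves. A minimal sketch in Go, assuming microservice-B is the backend service above, listening on port 8080 and serving the /url path from the question:

package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
)

func main() {
    // Inside the compose network the service name "backend" resolves to
    // microservice-B's container, so no public URL is needed.
    resp, err := http.Get("http://backend:8080/url")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    body, err := io.ReadAll(resp.Body)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(resp.Status, string(body))
}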

Docker compose, link file outside container

I'm working with docker-compose for a laravel project and nginx.
This is my docker-compose.yml:
version: '2'
services:
  backend:
    image: my_image
    depends_on:
      - elastic
      - mysql
  mysql:
    image: mysql:8.0.0
  nginx:
    depends_on:
      - backend
    image: my_image
    ports:
      - 81:80
So, my Laravel project is in the container backend, and if I run the command docker-compose up -d it's OK: all containers are created and my project is running on port 81.
My problem is: in the Laravel project in my container backend, I have a .env file with the database login, password and other stuff.
How can I edit this file after docker-compose up? Editing it directly in the container is not a good idea; is there a way to link a file outside a container with docker-compose?
Thanks
One approach to this is to use the env_file directive in the docker-compose.yml; it points to a file of key-value pairs that will be exported into the container. For example:
web:
  image: nginx:latest
  env_file:
    - .env
  ports:
    - "8181:80"
  volumes:
    - ./code:/code
Then you can configure your application to use these env values.
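As an illustration, a service written in Go would read the exported values straight from its environment (the key names here are hypothetical, standing in for whatever the .env file defines):

package main

import (
    "fmt"
    "os"
)

func main() {
    // Values listed in the env_file become ordinary environment
    // variables inside the container; DB_USERNAME and DB_PASSWORD
    // are hypothetical keys for this example.
    user := os.Getenv("DB_USERNAME")
    pass := os.Getenv("DB_PASSWORD")
    fmt.Printf("connecting as %s (password length %d)\n", user, len(pass))
}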
One catch with this approach is that you need to recreate the containers if you change any value or add a new one (docker-compose down && docker-compose up -d).

Using a shared MySQL container

Tl;Dr: Trying to get a WordPress docker-compose container to talk to another docker-compose container.
On my Mac I have a WordPress & MySQL container which I have built and configured with a linked MySQL server. In production I plan to use a Google Cloud MySQL instance, so I plan to remove the MySQL container from the docker-compose file (unlinking it) and run a separate, shared container that I can use from multiple docker containers.
The issue I'm having is that I can't connect the WordPress container to the separate MySQL container. Would anyone be able to shed any light on how I might go about this?
I have tried, unsuccessfully, to create a network, and I have also tried creating a fixed IP that the local box has a reference to via the /etc/hosts file (my preferred configuration, as I can update the file according to ENV).
WP:
version: '2'
services:
  wordpress:
    container_name: spmfrontend
    hostname: spmfrontend
    domainname: spmfrontend.local
    image: wordpress:latest
    restart: always
    ports:
      - 8080:80
    # creates an entry in /etc/hosts
    extra_hosts:
      - "ic-mysql.local:172.20.0.1"
    # Sets up the env, passwords etc
    environment:
      WORDPRESS_DB_HOST: ic-mysql.local:9306
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: root
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_TABLE_PREFIX: spm
    # sets the working directory
    working_dir: /var/www/html
    # creates a link to the volume local to the file
    volumes:
      - ./wp-content:/var/www/html/wp-content

# Any networks the container should be associated with
networks:
  default:
    external:
      name: ic-network
MySQL:
version: '2'
services:
  mysql:
    container_name: ic-mysql
    hostname: ic-mysql
    domainname: ic-mysql.local
    restart: always
    image: mysql:5.7
    ports:
      - 9306:3306
    # Create a static IP for the container
    networks:
      ipv4_address: 172.20.0.1
    # Sets up the env, passwords etc
    environment:
      MYSQL_ROOT_PASSWORD: root # TODO: Change this
      MYSQL_USER: root
      MYSQL_PASS: root
      MYSQL_DATABASE: wordpress
    # saves /var/lib/mysql to persistent volume
    volumes:
      - perstvol:/var/lib/mysql
      - backups:/backups

# creates a volume to persist data
volumes:
  perstvol:
  backups:

# Any networks the container should be associated with
networks:
  default:
    external:
      name: ic-network
What you probably want to do is create a shared Docker network for the two containers to use, and point them both to it. You can create a network using docker network create <name>. I will use sharednet as an example below, but you can use any name you like.
Once the network is there, you can point both containers to it. When you're using docker-compose, you would do this at the bottom of your YAML file. This would go at the top level of the file, i.e. all the way to the left, like volumes:.
networks:
  default:
    external:
      name: sharednet
To do the same thing on a normal container (outside compose), you can pass the --network argument.
docker run --network sharednet [ ... ]
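To confirm that both containers actually joined, docker network inspect sharednet lists every container attached to the network.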
