I'm working with docker-compose for a laravel project and nginx.
This is my docker-compose.yml :
version: '2'
services:
  backend:
    image: my_image
    depends_on:
      - elastic
      - mysql
  mysql:
    image: mysql:8.0.0
  nginx:
    depends_on:
      - backend
    image: my_image
    ports:
      - 81:80
So, my Laravel project is in the backend container, and if I run docker-compose up -d everything is fine: all the containers are created and my project is running on port 81.
My problem is that the Laravel project in my backend container has a .env file with the database login, password and other settings.
How can I edit this file after docker-compose up? Editing it directly in the container is not a good idea, so is there a way to link a file outside a container with docker-compose?
Thanks
One approach to this is to use the env_file directive in the docker-compose.yml; it points to a file of key-value pairs that will be exported into the container as environment variables. For example:
web:
  image: nginx:latest
  env_file:
    - .env
  ports:
    - "8181:80"
  volumes:
    - ./code:/code
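The referenced .env file is plain KEY=VALUE lines, one per line; the names and values below are only placeholders:

DB_HOST=mysql
DB_DATABASE=laravel
DB_USERNAME=user
DB_PASSWORD=secret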
Then you can configure your application to use these env values.
One catch with this approach is that you need to recreate the containers if you change any value or add a new one (docker-compose down && docker-compose up -d).
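If you want the file itself shared between host and container, as the question asks, a bind mount is the usual alternative. A minimal sketch, assuming the Laravel app lives in /var/www/html inside the image:

backend:
  image: my_image
  volumes:
    # edits to ./.env on the host appear inside the container immediately
    - ./.env:/var/www/html/.env

Changes to the mounted file take effect without recreating the container, although the application may still cache its configuration.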
Related
I wanted to test a container locally before pushing it to AWS ECS.
I ran unit tests against a docker-compose stack including a dynamodb-local container, using a Go (aws-sdk-go-v2) endpoint resolver with http://localhost:8000 as the URL.
So I wanted to build and test the container locally, and realised I needed to attach it to the default network created by docker-compose. I struggled with this a bit, so I built a stripped-down trial: I created an endpoint resolver with a URL of http://dynamo-local:8000 (the container is named dynamo-local in docker-compose) and attached it to the default network within docker run.
Now that all works, I can perform the various table operations successfully, but one of the things that confuses me is that if I run aws cli:
aws --endpoint-url=http://localhost:8000 dynamodb list-tables
then the output shows that no tables exist, when a table definitely exists. I had assumed, naively, that since I can access port 8000 of the same container through different endpoints, I should be able to access the same resources. Wrong.
Obviously a gap in my education. What am I missing? I need to expand the trial into a proper test of the full app, so it's important to me that I understand what is going on here.
Is there a way I can use the aws cli to access the table?
docker-compose file:
version: '3.5'
services:
  localstack:
    image: localstack/localstack:latest
    container_name: localstack_test
    ports:
      - '4566:4566'
    environment:
      - SERVICES=s3,sns,sqs,lambda
      - DEBUG=1
      - DATA_DIR=
    volumes:
      - './.AWSServices:/tmp/AWSServices'
      - '/var/run/docker.sock:/var/run/docker.sock'
  nginx:
    build:
      context: .
      dockerfile: Dockerfile
    image: chanonry/urlfiles-nginx:latest
    container_name: nginx
    ports:
      - '8080:80'
  dynamodb:
    image: amazon/dynamodb-local:1.13.6
    container_name: dynamo-local
    ports:
      - '8000:8000'
networks:
  default:
    name: test-net
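One detail about dynamodb-local that can explain the CLI seeing no tables: unless it is started with the -sharedDb flag, DynamoDB Local keeps a separate database per access key and region, so an SDK and the CLI sending different credentials see entirely different table sets. A sketch of the dynamodb service with the flag enabled; the command override assumes the amazon/dynamodb-local image's default java entrypoint:

  dynamodb:
    image: amazon/dynamodb-local:1.13.6
    container_name: dynamo-local
    # -sharedDb: one database for all clients, regardless of the
    # credentials or region they send; -inMemory keeps it ephemeral
    command: '-jar DynamoDBLocal.jar -sharedDb -inMemory'
    ports:
      - '8000:8000'

With that in place, the Go tests and aws --endpoint-url=http://localhost:8000 dynamodb list-tables should see the same tables.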
I recently started trying to use Docker in a more advanced way. I had already used it to create MySQL server and Postgres server containers. At the moment I have a Spring test project, which runs normally when started from the IDE. But when I tried to run the same thing with Docker, it successfully creates a MySQL server and starts a Tomcat, but the project itself does not come up.
Dockerfile content:
FROM tomcat:9-jre11
ENV CATALINA_OPTS="$CATALINA_OPTS -Duser.timezone=America/Sao_Paulo -Xms1024m -Xmx2560m -XX:+UseParallelGC -XX:-OmitStackTraceInFastThrow"
ADD ./smartCircuit/target/*.war $CATALINA_HOME/webapps/
EXPOSE 8080
CMD ["catalina.sh", "run"]
docker-compose.yaml content:
version: '3.5'
services:
  db:
    image: mysql:5.7
    restart: always
    container_name: mysql-container
    environment:
      MYSQL_DATABASE: 'smart_circuit'
      MYSQL_USER: 'user'
      MYSQL_PASSWORD: 'password'
      MYSQL_ROOT_PASSWORD: 'password'
    ports:
      # <Port exposed> : <MySQL port running inside container>
      - '3307:3306'
    expose:
      # Opens port 3306 on the container
      - '3306'
    # Where our data will be persisted
    volumes:
      - my-db:/var/lib/mysql
  web:
    build: .
    container_name: tomcat
    ports:
      - 8080:8080
    environment:
      DB_URL: jdbc:mysql://mysql-container:3306/smart_circuit?useSSL=false
      DB_USER: user
      DB_PASSWORD: password
    links:
      - db:mysql-container
# Names our volume
volumes:
  my-db:
I'm expecting my application to start and be accessible when I run sudo docker-compose up --build. For example, I have one RestController mapped to "project" with a method mapped to "find", which receives an id and either retrieves the record when found or throws a NotFoundException.
When I run it through IntelliJ without Docker it works, but when I start it through Docker I get a Tomcat 404 Not Found response.
Basically, I can access the MySQL instance with no problem and Tomcat is kind of working, but the project itself is not being deployed.
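One frequent cause of exactly this symptom: Tomcat deploys a WAR dropped into webapps/ under a context path derived from the file name, so the application would be served at /smartCircuit-<version>/project/find rather than at /project/find. A minimal sketch that deploys the WAR as the root application instead; the exact WAR file name is an assumption:

FROM tomcat:9-jre11
# Deploying as ROOT.war makes Tomcat serve the application at / instead of /<war-name>
ADD ./smartCircuit/target/smartCircuit.war $CATALINA_HOME/webapps/ROOT.war
EXPOSE 8080
CMD ["catalina.sh", "run"]

Either way, docker-compose logs web should show whether the WAR was picked up and deployed at all.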
I have developed and dockerised two applications, web (React) and api (Laravel, MySQL); they have separate codebases and separate directories.
Could somebody please explain how I can get my web application talking to my api while using Docker at the same time?
Update: Ultimately, what I want to achieve is to have both my frontend and backend running on port 80, without having two web servers running as containers, so that my Docker development environment works the same as using Valet or MAMP etc.
For development you could make use of docker-compose.
Key benefits:
Configure your app's services in YAML.
Single command to create/start the services defined on this configuration.
Compose creates a default network for your app. Each container joins this default network and they can see each other.
I use the following structure for a project.
projectFolder
|_backend (laravel app)
|_frontend (react app)
|_docker-compose.yml
|_backend.dockerfile
|_frontend.dockerfile
My docker-compose.yml
version: "3.3"
services:
frontend:
build:
context: ./
dockerfile: frontend.dockerfile
args:
- NODE_ENV=development
ports:
- "3000:3000"
volumes:
- ./frontend:/opt/app
- ./frontend/package.json:/opt/package.json
environment:
- NODE_ENV=development
backend:
build:
context: ./
dockerfile: backend.dockerfile
working_dir: /var/www/html/actas
volumes:
- ./backend:/var/www/html/actas
environment:
- "DB_PORT=3306"
- "DB_HOST=mysql"
ports:
- "8000:8000"
mysql:
image: mysql:5.6
ports:
- "3306:3306"
volumes:
- dbdata:/var/lib/mysql
environment:
- "MYSQL_DATABASE=homestead"
- "MYSQL_USER=homestead"
- "MYSQL_PASSWORD=secret"
- "MYSQL_ROOT_PASSWORD=secret"
volumes:
dbdata:
Each part of the application is defined by a service in the docker-compose file. E.g.
frontend
backend
mysql
Docker-compose will create a default network and add each container to it. The hostname for each container will be the service name defined in the yml file.
For example, the backend container accesses the mysql server using the name mysql. You can see this in the service definition itself:
backend:
  ...
  environment:
    - "DB_PORT=3306"
    - "DB_HOST=mysql" <-- The hostname for the mysql container is the name of the service
With this, in the React app, I can set up the proxy configuration in package.json as follows:
"proxy": "http://backend:8000",
One last thing, as mentioned by David Maze in the comments: add backend to your hosts file so the browser can resolve that name.
E.g. /etc/hosts on Ubuntu:
127.0.1.1 backend
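For completeness, since the layout above references frontend.dockerfile, here is a minimal sketch of what it could contain; the node version and the start script are assumptions:

FROM node:16-alpine
# Install dependencies one level above the app directory, so the bind
# mount of ./frontend onto /opt/app does not hide node_modules
WORKDIR /opt
COPY ./frontend/package.json /opt/package.json
RUN npm install
WORKDIR /opt/app
COPY ./frontend /opt/app
EXPOSE 3000
CMD ["npm", "start"]

Node resolves modules by walking up the directory tree, so code in /opt/app finds /opt/node_modules even when the source directory is bind-mounted over it.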
I have two docker containers, one with a hazelcast java application (the core of the web application, a jar package) and one with a rest service for the web application (a war package). I'm using docker-compose to build up the whole project in docker, which looks like this:
version: "3"
services:
escomled_datagrid:
image: escomled/escomled_datagrid
build:
context: ./sh_scripts/escomled_data_grid
tomcat:
image: escomled/tomcat
build:
context: ./tomcat/app
ports:
- 8585:8080
depends_on:
- escomled_datagrid
links:
- escomled_datagrid:escomled_datagrid
I also have a Dockerfile for each container:
-escomled_datagrid:
FROM openjdk:8-jdk-alpine as build
WORKDIR /EscomledML
COPY ./. ./
COPY ./escomled.properties /home/escomled/escomled_server/config/escomled.properties
CMD ["sh","/EscomledML/escomled_data_grid.sh","start"]
EXPOSE 8085
-tomcat
FROM tomcat:8.5-alpine
COPY ./sample.war /usr/local/tomcat/webapps/
COPY ./escomled-rest.war /usr/local/tomcat/webapps/
COPY ./escomled.properties /home/escomled/escomled_server/config/escomled.properties
RUN sh -c 'touch /usr/local/tomcat/webapps/sample.war'
RUN sh -c 'touch /usr/local/tomcat/webapps/escomled-rest.war'
EXPOSE 8080
The first container runs an sh script at runtime.
This way everything works fine: the containers start and stay active.
The only problem is that they don't see each other. The hazelcast server starts and waits for a "member" to connect; the war file (the hazelcast member) also starts, but they don't "see" each other and won't connect. I put the "links" and "depends_on" tags in the docker-compose file, but that doesn't help.
The code for the project works fine when I start it locally: first I start the data grid server as a java application, then I start the tomcat containing the rest service, and the connection is established in no time.
So my question is, how do I link these two containers so they can see each other and work together?
Try putting the containers in the same network by specifying a network that uses the bridge driver:
version: "3"
services:
escomled_datagrid:
image: escomled/escomled_datagrid
build:
context: ./sh_scripts/escomled_data_grid
networks:
- networknamename
tomcat:
image: escomled/tomcat
build:
context: ./tomcat/app
ports:
- 8585:8080
depends_on:
- escomled_datagrid
links:
- escomled_datagrid:escomled_datagrid
networks:
- networknamename
networks:
networknamename:
driver: bridge
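Note that services in a single docker-compose file already share a default network, so name resolution can be verified before changing anything; the check below uses the service names from the file above:

# From the host: can the tomcat container resolve and reach the datagrid container?
docker-compose exec tomcat ping -c 1 escomled_datagrid

If resolution works, the remaining suspect is the application configuration itself: the Hazelcast member must be pointed at the hostname escomled_datagrid rather than localhost, since each container has its own loopback interface.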
Tl;Dr; Trying to get a WordPress docker-compose container to talk to another docker-compose container.
On my Mac I have WordPress & MySQL containers which I have built and configured with a linked MySQL server. In production I plan to use a Google Cloud MySQL storage instance, so I plan on removing the MySQL container from the docker-compose file (unlinking it) and running a separate shared container that I can use from multiple docker containers.
The issue I'm having is that I can't connect the WordPress container to the separate MySQL container. Would anyone be able to shed any light on how I might go about this?
I have tried unsuccessfully to create a network, and I have also tried creating a fixed IP that the local box has a reference to via the /etc/hosts file (my preferred configuration, as I can update the file according to the ENV).
WP:
version: '2'
services:
  wordpress:
    container_name: spmfrontend
    hostname: spmfrontend
    domainname: spmfrontend.local
    image: wordpress:latest
    restart: always
    ports:
      - 8080:80
    # creates an entry in /etc/hosts
    extra_hosts:
      - "ic-mysql.local:172.20.0.1"
    # Sets up the env, passwords etc
    environment:
      WORDPRESS_DB_HOST: ic-mysql.local:9306
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: root
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_TABLE_PREFIX: spm
    # sets the working directory
    working_dir: /var/www/html
    # creates a link to the volume local to the file
    volumes:
      - ./wp-content:/var/www/html/wp-content
# Any networks the container should be associated with
networks:
  default:
    external:
      name: ic-network
MySQL:
version: '2'
services:
  mysql:
    container_name: ic-mysql
    hostname: ic-mysql
    domainname: ic-mysql.local
    restart: always
    image: mysql:5.7
    ports:
      - 9306:3306
    # Create a static IP for the container
    networks:
      ipv4_address: 172.20.0.1
    # Sets up the env, passwords etc
    environment:
      MYSQL_ROOT_PASSWORD: root # TODO: Change this
      MYSQL_USER: root
      MYSQL_PASS: root
      MYSQL_DATABASE: wordpress
    # saves /var/lib/mysql to a persistent volume
    volumes:
      - perstvol:/var/lib/mysql
      - backups:/backups
# creates a volume to persist data
volumes:
  perstvol:
  backups:
# Any networks the container should be associated with
networks:
  default:
    external:
      name: ic-network
What you probably want to do is create a shared Docker network for the two containers to use, and point them both to it. You can create a network using docker network create <name>. I will use sharednet as an example below, but you can use any name you like.
Once the network is there, you can point both containers to it. When you're using docker-compose, you would do this at the bottom of your YAML file. This would go at the top level of the file, i.e. all the way to the left, like volumes:.
networks:
  default:
    external:
      name: sharednet
To do the same thing on a normal container (outside compose), you can pass the --network argument.
docker run --network sharednet [ ... ]
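As a usage sketch, the full sequence could look like this; the compose file paths are assumptions about the project layout:

# Create the shared network once
docker network create sharednet
# Bring up each stack; both compose files declare the network as external
docker-compose -f mysql/docker-compose.yml up -d
docker-compose -f wordpress/docker-compose.yml up -d

Once both containers are on the same user-defined network, Docker's embedded DNS resolves container names, so WORDPRESS_DB_HOST can simply be set to ic-mysql:3306 (the container port, not the host-mapped 9306) instead of relying on a fixed IP and extra_hosts.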