I am trying to figure out how to set up a simple stack for development and, later, deployment. I want to use Docker to run Traefik in a container as the public-facing reverse proxy, which then routes as needed to an Nginx container used only to serve static frontend files (HTML, CSS, JS) and to a backend PHP container running Laravel (I'm intentionally decoupling the frontend and the API for this project).
I am trying my best to learn from all of the video and written tutorials out there, but things become complicated very quickly (at least for my uninitiated brain) and it's a bit overwhelming. I have a one-week deadline to complete this project, and I'm strongly considering dropping Docker altogether for the time being, out of fear that I'll spend the whole week messing around with configuration instead of actually coding!
To get started, I have a simple docker-compose file with the following configuration, which I've verified at least runs correctly:
version: '3'
services:
  reverse-proxy:
    image: traefik
    command: --api --docker # Enables the web UI and tells Traefik to listen to Docker.
    ports:
      - "80:80"     # HTTP port
      - "8080:8080" # Web UI
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock # So that Traefik can listen to Docker events.
Now, I need to figure out how to connect Nginx and PHP/Laravel effectively.
First of all, don't put yourself under stress to learn new stuff, because under pressure, learning won't feel comfortable anymore. Use the technology you already know and get the job done. If you finish and realize you still have a day or two before your deadline, try to overdeliver by adding the new technology then. This way you won't blow your deadline, and you won't be under stress while figuring out new technology or configuration.
The configuration you see below is neither complete nor functionally tested. I copied most of it out of three of my main projects to give you a starting point. Traefik as-is can be complicated to set up properly.
version: '3'

# Instantiate your own configuration with a Dockerfile!
# This way you can build somewhere and just deploy your container
# anywhere without the need to copy files around.
services:
  # Traefik as reverse proxy
  traefik:
    build:
      context: .
      dockerfile: ./Dockerfile-for-traefik # including traefik.toml
    command: --docker
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      # you'll have to create this file manually: `touch acme.json && chmod 600 acme.json`
      - /home/docker/volumes/traefik/acme.json:/opt/traefik/acme.json
    networks:
      - overlay
    ports:
      - 80:80
      - 443:443

  nginx:
    build:
      context: .
      dockerfile: ./Dockerfile-for-nginx
    networks:
      - overlay
    depends_on:
      - laravel
    volumes:
      # you can copy your assets to production with
      # `tar -c -C ./myassets . | docker cp - myfolder_nginx_1:/var/www/assets`
      # there are many other ways to achieve this!
      - assets:/var/www/assets

  # define your application + whatever it needs to run
  # important:
  # - "build:" will search for a Dockerfile in the directory you're specifying
  laravel:
    build: ./path/to/laravel/app
    environment:
      MYSQL_ROOT_PASSWORD: password
      ENVIRONMENT: development
      MYSQL_DATABASE: your_database
      MYSQL_USER: your_database_user
    networks:
      - overlay
    links:
      - mysql
    volumes:
      # this path is for development
      - ./path/to/laravel/app:/app

  # you need a database, right?
  mysql:
    image: mysql:5
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: your_database
      MYSQL_USER: your_database_user
    networks:
      - overlay
    volumes:
      - mysql-data:/var/lib/mysql

volumes:
  mysql-data:
  assets:

networks:
  overlay:
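One thing to watch out for: with the Docker backend, Traefik only routes to containers it discovers through labels, so you'll likely also want routing labels on the nginx and laravel services. A minimal, untested sketch in Traefik v1 syntax (matching the --docker flag above); the hostnames are placeholders for your own domains:

  nginx:
    # ...as above, plus:
    labels:
      - "traefik.enable=true"
      - "traefik.frontend.rule=Host:example.test"     # static frontend
      - "traefik.port=80"
  laravel:
    # ...as above, plus:
    labels:
      - "traefik.enable=true"
      - "traefik.frontend.rule=Host:api.example.test" # decoupled API
      - "traefik.port=80"                             # the port your app server listens on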
I am building a project for local development with dockerized apps. I have 3 different domains at my company, and each domain has one docker-compose file with 5 services (15 projects in total).
If a user of my project wants to deploy only 1 service of their domain and/or 2 of the other domains' projects, I have to comment out the services in the other docker-compose files that should not be deployed.
So my question is: how can I comment out a block of a docker-compose (YAML) file with a bash script? I want to select the lines by their context. For example, in the file below I want to comment out the ap2-php-fpm section. I can't get by with a one-off workaround, because more projects are incoming; I have to drive this with a bash script.
Demonstration
version: '3.3'
services:
  app-php-fpm:
    container_name: app
    build:
      context: ${src}/
    volumes:
      - $path:path
    networks:
      general-nt:
        aliases:
          - app
    expose:
      - "9000"
  ap2-php-fpm:
    container_name: app
    build:
      context: ${src}/
    volumes:
      - $path:path
    networks:
      general-nt:
        aliases:
          - app
    expose:
      - "9000"
networks:
  general-nt:
    external: true
I want to turn that file into the one below with a bash script.
version: '3.3'
services:
  app-php-fpm:
    container_name: app
    build:
      context: ${src}/
    volumes:
      - $path:path
    networks:
      general-nt:
        aliases:
          - app
    expose:
      - "9000"
  # ap2-php-fpm:
  #   container_name: app
  #   build:
  #     context: ${src}/
  #   volumes:
  #     - $path:path
  #   networks:
  #     general-nt:
  #       aliases:
  #         - app
  #   expose:
  #     - "9000"
networks:
  general-nt:
    external: true
For many practical purposes, it may be enough to run docker-compose up with specific service names. If you run
docker-compose up -d app-php-fpm
it will start the service(s) named on the command line, plus anything they depends_on:, but nothing else. That would avoid the need to comment out parts of the YAML file. You could otherwise interact with the containers normally.
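If you still want to script the selection, a thin wrapper around that command is usually enough. A minimal sketch (the file and service names are hypothetical):

#!/bin/sh
# up.sh -- start only the requested services from one compose file
# usage: ./up.sh <compose-file> <service> [<service> ...]
# e.g.:  ./up.sh domain1/docker-compose.yml app-php-fpm
file="$1"; shift
docker-compose -f "$file" up -d "$@"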
I've been developing for Laravel using Homestead (VirtualBox and Vagrant) on Windows 10. Recently I wanted to switch to Docker and the Windows Subsystem for Linux (WSL2).
Under Homestead I had been running my app under my-domain.test. In my docker-compose file I use localhost on port 8008. I can access the website under localhost:8008, but I get a 404 on every single page I try to open. Inspecting the links, Laravel seems to use my old domain my-domain.test for every generated link. So instead of creating links like localhost:8008/xyz, it generates links like https://my-domain.test/xyz.
Of course I've updated my .env file, cleared the (config) cache, cloned a completely fresh copy of my repository, and set up the project in a completely new directory within the subsystem. I've also uninstalled every piece of Vagrant, VirtualBox, and Homestead.
I've searched the complete project for references to the old domain. I haven't found anything.
On another system it works. Somehow my current system seems to be stuck on the old domain.
How can I get this working without resetting my whole computer?
This is my docker-compose:
version: '3.3'
services:
  pdbv1-db:
    image: mariadb
    container_name: pdbv1-db
    restart: unless-stopped
    ports:
      - "3306:3306"
    tty: true
    environment:
      MYSQL_ROOT_PASSWORD: pdb
      MYSQL_DATABASE: pdb
      MYSQL_USER: pdb
      MYSQL_PASSWORD: pdb
    networks:
      - app-network
    volumes:
      - ./docker-db:/var/lib/mysql

  pdbv1-backend:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        - WITH_XDEBUG=true
        - USERID=$UID
    env_file:
      - .env
    user: $UID:$GID
    container_name: pdbv1-backend
    restart: unless-stopped
    tty: true
    environment:
      SERVICE_NAME: pdbv1-backend
      SERVICE_TAGS: dev
      PHP_IDE_CONFIG: serverName=Docker
    working_dir: /var/www
    ports:
      - "8008:8080"
    volumes:
      - ./:/var/www
    networks:
      - app-network

# Docker networks
networks:
  app-network:
    driver: bridge
There are two ways to look at this:
1. Go with my-domain.test: add that domain to your Windows hosts file and point it to 127.0.0.1. Also check the Dockerfile of your nginx image, and check your nginx conf file for the domain.
2. Fix it in the Laravel code: check whether the URL in your .env file is localhost or my-domain.test. Then search the entire source code for my-domain.test, and of course check the database itself as well. (Edit: I see that you've already done that, but it would be the only explanation.)
Frankly, I would go with option 1: you'll have your my-domain.test back, and you can use multiple domains for multiple projects.
I only use localhost for quick stuff and for managing my database and my Redis.
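For option 1, the two pieces would look roughly like this (assuming the standard Windows hosts file location, and that your Laravel app builds links from APP_URL, which is the default):

# C:\Windows\System32\drivers\etc\hosts
127.0.0.1   my-domain.test

# .env of the Laravel app; generated links are based on APP_URL
APP_URL=http://my-domain.test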
I have an app that works locally, but I am having problems getting it to run on Azure.
This is my docker-compose:
version: "3.6"
services:
nginx:
image: nginx:alpine
volumes:
- ./:/var/www/
- ./setup/azure/nginx/conf.d/:/etc/nginx/template
environment:
PORT: ${PORT}
command: /bin/sh -c "envsubst '$${PORT}' < /etc/nginx/template/nginx.conf.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
networks:
- mynet
depends_on:
- app
- worker
app:
image: myimage:latest
build:
context: .
dockerfile: ./setup/azure/Dockerfile
restart: unless-stopped
tty: true
expose:
- 9000
volumes:
- uploads:/var/www/simple/public/uploads
- logos:/var/www/simple/public/logos
networks:
- mynet
worker:
image: my_image:latest
command: bash -c "/usr/local/bin/php artisan queue:work --timeout=0"
depends_on:
- app
networks:
- mynet
volumes:
uploads:
logos:
networks:
mynet:
I am unsure whether the volumes on the nginx service are OK. I think perhaps I should write a new Dockerfile that copies the files into the image; however, that would increase the size of the project a lot.
When using App Service on Azure, the app is assigned a random port, which is why I have the envsubst instruction in the command. I'd appreciate any other suggestions for getting this project to run on Azure.
I'm assuming you're trying to persist your app's storage in a volume. Check out this doc issue. Now, I don't think you need
volumes:
  - ./:/var/www/
  - ./setup/azure/nginx/conf.d/:/etc/nginx/template
but for
volumes:
  - uploads:/var/www/simple/public/uploads
  - logos:/var/www/simple/public/logos
you can create a storage account, mount it to your Linux app plan (this isn't available for Windows app plans yet), and map the storage container's file share to the path /var/www/simple/public/uploads.
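As a hedged sketch with the Azure CLI (resource group, app name, storage account, share, and key below are all placeholders; double-check the flags against your CLI version):

# mount an Azure Files share into the Linux App Service at the uploads path
az webapp config storage-account add \
  --resource-group my-rg \
  --name my-app \
  --custom-id uploads \
  --storage-type AzureFiles \
  --account-name mystorageaccount \
  --share-name uploads \
  --access-key "<storage-account-key>" \
  --mount-path /var/www/simple/public/uploads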
I have developed and dockerised two applications, web (React) and api (Laravel, MySQL); they have separate codebases and separate directories.
Could somebody please explain how I can get my web application talking to my api while both are running under Docker?
Update: Ultimately, what I want to achieve is to have both my frontend and backend reachable on port 80 without running two web-server containers, so that my Docker development environment works the same as Valet or MAMP etc.
For development you could make use of docker-compose.
Key benefits:
Configure your app's services in YAML.
A single command creates and starts all the services defined in that configuration.
Compose creates a default network for your app. Each container joins this default network, and they can all see each other.
I use the following structure for a project.
projectFolder
|_backend (laravel app)
|_frontend (react app)
|_docker-compose.yml
|_backend.dockerfile
|_frontend.dockerfile
My docker-compose.yml
version: "3.3"
services:
frontend:
build:
context: ./
dockerfile: frontend.dockerfile
args:
- NODE_ENV=development
ports:
- "3000:3000"
volumes:
- ./frontend:/opt/app
- ./frontend/package.json:/opt/package.json
environment:
- NODE_ENV=development
backend:
build:
context: ./
dockerfile: backend.dockerfile
working_dir: /var/www/html/actas
volumes:
- ./backend:/var/www/html/actas
environment:
- "DB_PORT=3306"
- "DB_HOST=mysql"
ports:
- "8000:8000"
mysql:
image: mysql:5.6
ports:
- "3306:3306"
volumes:
- dbdata:/var/lib/mysql
environment:
- "MYSQL_DATABASE=homestead"
- "MYSQL_USER=homestead"
- "MYSQL_PASSWORD=secret"
- "MYSQL_ROOT_PASSWORD=secret"
volumes:
dbdata:
Each part of the application is defined by a service in the docker-compose file, e.g.
frontend
backend
mysql
docker-compose will create a default network and add each container to it. The hostname of each container will be the service name defined in the yml file.
For example, the backend container accesses the mysql server under the name mysql. You can see this in the service definition itself:
backend:
  ...
  environment:
    - "DB_PORT=3306"
    - "DB_HOST=mysql" # <-- the hostname of the mysql container is the name of the service
With this, in the React app, I can set up the proxy configuration in package.json as follows:
"proxy": "http://backend:8000",
One last thing, as mentioned by David Maze in the comments: add backend to your hosts file, so the browser can resolve that name.
E.g. in /etc/hosts on Ubuntu:
127.0.1.1 backend
TL;DR: Trying to get a WordPress docker-compose container to talk to another docker-compose container.
On my Mac I have a WordPress & MySQL setup which I have built and configured with a linked MySQL server. In production I plan to use a Google Cloud MySQL instance, so I plan on removing the MySQL container from the docker-compose file (unlinking it) and running a separate, shared MySQL container that I can use from multiple docker containers.
The issue I'm having is that I can't connect the WordPress container to the separate MySQL container. Would anyone be able to shed any light on how I might go about this?
I have tried, unsuccessfully, to create a network, and I have also tried creating a fixed IP that the local box references via the /etc/hosts file (my preferred configuration, since I can update that file according to the environment).
WP:
version: '2'
services:
  wordpress:
    container_name: spmfrontend
    hostname: spmfrontend
    domainname: spmfrontend.local
    image: wordpress:latest
    restart: always
    ports:
      - 8080:80
    # creates an entry in /etc/hosts
    extra_hosts:
      - "ic-mysql.local:172.20.0.1"
    # sets up the env, passwords etc
    environment:
      WORDPRESS_DB_HOST: ic-mysql.local:9306
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: root
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_TABLE_PREFIX: spm
    # sets the working directory
    working_dir: /var/www/html
    # creates a link to the volume local to the file
    volumes:
      - ./wp-content:/var/www/html/wp-content

# any networks the container should be associated with
networks:
  default:
    external:
      name: ic-network
MySQL:
version: '2'
services:
  mysql:
    container_name: ic-mysql
    hostname: ic-mysql
    domainname: ic-mysql.local
    restart: always
    image: mysql:5.7
    ports:
      - 9306:3306
    # create a static IP for the container
    networks:
      ipv4_address: 172.20.0.1
    # sets up the env, passwords etc
    environment:
      MYSQL_ROOT_PASSWORD: root # TODO: Change this
      MYSQL_USER: root
      MYSQL_PASS: root
      MYSQL_DATABASE: wordpress
    # saves /var/lib/mysql to a persistent volume
    volumes:
      - perstvol:/var/lib/mysql
      - backups:/backups

# creates volumes to persist data
volumes:
  perstvol:
  backups:

# any networks the container should be associated with
networks:
  default:
    external:
      name: ic-network
What you probably want to do is create a shared Docker network for the two containers to use, and point them both to it. You can create a network using docker network create <name>. I will use sharednet as an example below, but you can use any name you like.
Once the network is there, you can point both containers to it. When you're using docker-compose, you would do this at the bottom of your YAML file. This would go at the top level of the file, i.e. all the way to the left, like volumes:.
networks:
  default:
    external:
      name: sharednet
To do the same thing on a normal container (outside compose), you can pass the --network argument.
docker run --network sharednet [ ... ]
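Applied to your two compose files: once both containers are on the shared network, you can drop the static IP and the extra_hosts entry and let Docker's DNS resolve the container by name. A rough sketch, assuming the network was created as above and reusing the names from your files (note the container port 3306, not the host-published 9306):

# in the WordPress compose file
services:
  wordpress:
    environment:
      WORDPRESS_DB_HOST: ic-mysql:3306 # resolved by Docker's DNS on the shared network
networks:
  default:
    external:
      name: sharednet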