How to configure traefik to correctly route traffic from a specific domain to a specific nginx container - laravel

I have created two containerized Laravel web apps (project1 and project2), each with its own nginx/php-fpm, and linked them to a Traefik container. Each project has its own folder and a docker-compose.yaml properly configured with Traefik labels.
With Traefik in front, what I expect is that when I visit project1.laravel.test I see the contents of project1, and when I visit project2.laravel.test I see the contents of project2.
The issue is that when I visit project1.laravel.test, sometimes the content of project1 is shown and other times the content of project2 is shown. If I shut down the project2 containers, project1 works fine. It seems as though Traefik is load balancing across both projects, but I don't understand where the issue is.
How to replicate my issue?
1. git clone https://github.com/gtoto007/traefik-laravel-docker
2. cd traefik-laravel-docker
3. docker-compose -f traefik/docker-compose.yaml up -d
4. docker-compose -f project1/docker-compose.yaml up -d
5. docker-compose -f project2/docker-compose.yaml up -d
6. Add these entries to your hosts file:
127.0.0.1 traefik.laravel.test
127.0.0.1 project1.laravel.test
127.0.0.1 project2.laravel.test
MY DOCKER-COMPOSE FILES
Below, for simplicity, are the three docker-compose files of traefik, project1 and project2:
./traefik/docker-compose.yaml
version: "3.3"
networks:
  my-network:
    external: true
services:
  traefik:
    image: "traefik:v2.9"
    container_name: "traefik"
    networks:
      - my-network
    command:
      - "--api"
      - "--providers.docker.exposedbydefault=false"
      - "--api.insecure=true"
      - "--accesslog.filepath=/data/access.log"
      # entrypoints
      - "--entrypoints.http.address=:80"
      - "--entrypoints.https.address=:443"
      - "--entrypoints.traefik.address=:8888"
    ports:
      - "80:80"
      - "443:443"
      - "8888:8888"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.traefik.rule=Host(`traefik.laravel.test`)"
      - "traefik.http.routers.traefik.entrypoints=traefik"
./project1/docker-compose.yaml
version: '3.8'
networks:
  my-network:
    external: true
services:
  nginx:
    build:
      dockerfile: docker/nginx/Dockerfile
      context: ./
    image: my-nginx
    volumes:
      - ./docker/nginx/conf.d:/etc/nginx/conf.d/
      - my-data_project1:/var/www
    networks:
      - my-network
    depends_on:
      - php-fpm
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.project1.rule=Host(`project1.laravel.test`)"
      - "traefik.docker.network=my-network"
  php-fpm:
    build:
      dockerfile: docker/php/Dockerfile
      context: ./
    image: my-php-fpm
    ports:
      - "5173:5173"
    volumes:
      - ./docker/php/local.ini:/usr/local/etc/php/conf.d/local.ini
      - my-data_project1:/var/www
    networks:
      - my-network
  project1:
    build:
      dockerfile: docker/project1/Dockerfile
      context: ./
    image: project1:1.0
    volumes:
      - my-data_project1:/var/www
    networks:
      - my-network
volumes:
  my-data_project1:
./project2/docker-compose.yaml
version: '3.8'
networks:
  my-network:
    external: true
services:
  nginx:
    build:
      dockerfile: docker/nginx/Dockerfile
      context: ./
    image: my-nginx
    volumes:
      - ./docker/nginx/conf.d:/etc/nginx/conf.d/
      - my-data_project2:/var/www
    networks:
      - my-network
    depends_on:
      - php-fpm
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.project2.rule=Host(`project2.laravel.test`)"
      - "traefik.docker.network=my-network"
  php-fpm:
    build:
      dockerfile: docker/php/Dockerfile
      context: ./
    image: my-php-fpm
    ports:
      - "5174:5173"
    volumes:
      - ./docker/php/local.ini:/usr/local/etc/php/conf.d/local.ini
      - my-data_project2:/var/www
    networks:
      - my-network
  project2:
    build:
      dockerfile: docker/project2/Dockerfile
      context: ./
    image: project2:1.0
    volumes:
      - my-data_project2:/var/www
      # - ./:/var/www
    networks:
      - my-network
volumes:
  my-data_project2:

I resolved the issue.
The problem is caused by a hostname conflict: the projects share the same Docker network and some containers end up with the same default hostname.
For example, if you run ping nginx, sometimes you get the IP of the nginx service of project1 and other times the IP of the nginx service of project2, because both services use the same hostname.
I fixed it by overriding the hostname of each container with a unique value via the hostname property of docker-compose and by correcting the nginx configuration of each project accordingly (see the sketch after the commit link below).
You can see my fix in this commit:
https://github.com/gtoto007/traefik-laravel-docker/commit/707925465b979168448128b8b307660bde2b5aeb
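As a minimal sketch of the idea (the hostnames here are illustrative; the real values are in the commit above), each project assigns unique hostnames to its services and the nginx vhost forwards PHP requests to the project-specific php-fpm host:

# Excerpt from ./project1/docker-compose.yaml (illustrative hostnames)
services:
  nginx:
    hostname: project1-nginx      # unique, instead of the default hostname "nginx"
  php-fpm:
    hostname: project1-php-fpm    # unique, instead of the default "php-fpm"

# The nginx configuration under ./project1/docker/nginx/conf.d then has to
# reference that unique php-fpm hostname, e.g.:
#   fastcgi_pass project1-php-fpm:9000;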

Related

Can't connect to my kafka running on docker from spring boot application running via intellij

I have a docker-compose file with Kafka, Zookeeper, and a Spring Boot application.
When I run the entire file, everything works fine.
When I run it without my Spring Boot application, in order to debug the app via IntelliJ, it cannot connect to Kafka and doesn't work properly.
my docker-compose file:
version: "3.5"
services:
  # Install Zookeeper.
  zookeeper:
    container_name: zookeeper
    image: debezium/zookeeper:1.2
    networks:
      - mynetwork
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
  # Install Kafka.
  kafka:
    container_name: kafka
    image: debezium/kafka:1.2
    depends_on:
      - zookeeper
    ports:
      - 9092:9092
      - 29092:29092
    networks:
      - mynetwork
    extra_hosts:
      - "host.docker.internal:host-gateway"
    environment:
      - ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:PLAINTEXT,PLAINTEXT:PLAINTEXT,EXTERNAL_SAME_HOST:PLAINTEXT
      - KAFKA_ADVERTISED_LISTENERS=INTERNAL://kafka:9092,EXTERNAL_SAME_HOST://localhost:29092
      - KAFKA_LISTENERS=EXTERNAL_SAME_HOST://:29092,INTERNAL://:9092
      - KAFKA_INTER_BROKER_LISTENER_NAME=PLAINTEXT
  # Install Postgres.
  postgres:
    container_name: postgres
    image: debezium/postgres:12
    volumes:
      - ./sql/init.sql:/docker-entrypoint-initdb.d/init.sql
    ports:
      - 5432:5432
    networks:
      - mynetwork
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=postgres
  kafka-ui:
    container_name: kafka-ui
    image: provectuslabs/kafka-ui:0.2.1
    ports:
      - 8080:8080
    networks:
      - mynetwork
    environment:
      - KAFKA_CLUSTERS_0_NAME=local
      - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092
  # Deploy a Consumer.
  consumer:
    build:
      context: .
    container_name: pledge-consumer
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://postgres:5432/postgres
    ports:
      - 8101:8080
    networks:
      - mynetwork
    image: isber/ssm-pledgeservice:v1
    depends_on:
      - zookeeper
      - kafka
      - postgres
networks:
  mynetwork:
    external: true
In the application I tried:
spring.kafka.bootstrap-servers=kafka:9092
which works when I run it via Docker but not from IntelliJ.
I also tried, when running from IntelliJ:
spring.kafka.bootstrap-servers=localhost:9092
spring.kafka.bootstrap-servers=localhost:29092
I found the problem: the image I used,
image: debezium/kafka:1.2
had a problem and didn't read any of the environment variables I added.
I upgraded to:
image: debezium/kafka:1.4
and everything works.
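For reference, given the listener setup in the compose file above, here is a minimal sketch of the matching Spring settings once the environment variables are actually honored (this is my reading of the advertised listeners, not something stated in the answer):

# application.properties when the app runs inside the compose network (INTERNAL listener)
spring.kafka.bootstrap-servers=kafka:9092

# application.properties when the app is started on the host, e.g. from IntelliJ;
# this matches the EXTERNAL_SAME_HOST listener advertised as localhost:29092
spring.kafka.bootstrap-servers=localhost:29092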

Why docker sync files with map folder extremely slow? (Ubuntu)

On my local machine (Ubuntu 18.04, 8 GB RAM, i5, HDD) I have two docker-compose files for my Laravel project:
docker-compose.yml
version: '3.7'
networks:
  backend-network:
    driver: bridge
  frontend-network:
    driver: bridge
services:
  &app-service app: &app-service-template
    container_name: k4fntr_app
    build:
      context: ./docker/php-fpm
      args:
        UID: ${UID?Use your user ID}
        GID: ${GID?Use your group ID}
        USER: ${USER?Use your user name}
    user: "${UID}:${GID}"
    hostname: *app-service
    volumes:
      - /etc/passwd/:/etc/passwd:ro
      - /etc/group/:/etc/group:ro
      - ./:/var/www/k4fntr
    environment:
      APP_ENV: "${APP_ENV}"
      CONTAINER_ROLE: app
      FPM_PORT: &php-fpm-port 9000
      FPM_USER: "${UID:-1000}"
      FPM_GROUP: "${GID:-1000}"
    networks:
      - backend-network
  &queue-service queue:
    <<: *app-service-template
    container_name: k4fntr_queue
    restart: always
    hostname: *queue-service
    depends_on:
      - app
    environment:
      CONTAINER_ROLE: queue
  &schedule-service schedule:
    <<: *app-service-template
    container_name: k4fntr_schedule
    restart: always
    hostname: *schedule-service
    depends_on:
      - app
    environment:
      CONTAINER_ROLE: scheduler
  &sportlevel-listener sportlevel_listener:
    <<: *app-service-template
    container_name: k4fntr_sl_listener
    restart: always
    hostname: *sportlevel-listener
    ports:
      - "${SPORTLEVEL_LISTEN_PORT}:${SPORTLEVEL_LISTEN_PORT}"
    depends_on:
      - app
    environment:
      CONTAINER_ROLE: sl_listener
  &php-fpm-service php-fpm:
    <<: *app-service-template
    container_name: k4fntr_php-fpm
    user: 'root:root'
    restart: always
    hostname: *php-fpm-service
    ports: [*php-fpm-port]
    entrypoint: /fpm-entrypoint.sh
    command: php-fpm --nodaemonize
    networks:
      - backend-network
      - frontend-network
  echo-server:
    container_name: k4fntr_echo
    image: oanhnn/laravel-echo-server
    volumes:
      - ./:/app
    environment:
      GENERATE_CONFIG: "false"
    depends_on:
      - app
    ports:
      - "6001:6001"
    networks:
      - backend-network
      - frontend-network
  redis:
    container_name: k4fntr_redis
    image: redis
    restart: always
    command: redis-server
    volumes:
      - ./docker/redis/config/redis.conf:/usr/local/etc/redis/redis.conf
      - ./docker/redis/redis-data:/data:rw
    ports:
      - "16379:6379"
    networks:
      - backend-network
and docker-compose-dev.yml
version: '3.7'
volumes:
  redis-data:
  pg-data:
  k4fntr_sync:
    external: true
services:
  &app-service app: &app-service-template
    container_name: k4fntr_app
    build:
      context: ./docker/php-fpm
      args:
        UID: ${UID?Use your user ID}
        GID: ${GID?Use your group ID}
        USER: ${USER?Use your user name}
    user: "${UID}:${GID}"
    hostname: *app-service
    volumes:
      - /etc/passwd/:/etc/passwd:ro
      - /etc/group/:/etc/group:ro
      - k4fntr_sync:/var/www/k4fntr:nocopy
    environment:
      APP_ENV: "${APP_ENV}"
      CONTAINER_ROLE: app
      FPM_PORT: &php-fpm-port 9000
      FPM_USER: "${UID:-1000}"
      FPM_GROUP: "${GID:-1000}"
    networks:
      - backend-network
  &php-fpm-service php-fpm:
    <<: *app-service-template
    container_name: k4fntr_php-fpm
    user: 'root:root'
    restart: always
    hostname: *php-fpm-service
    ports: [*php-fpm-port]
    entrypoint: /fpm-entrypoint.sh
    command: php-fpm --nodaemonize -d "opcache.enable=0" -d "display_startup_errors=On" -d "display_errors=On" -d "error_reporting=E_ALL"
    networks:
      - backend-network
      - frontend-network
  mail:
    container_name: k4fntr_mail
    image: mailhog/mailhog
    ports:
      - "1025:1025"
      - "8025:8025"
    networks:
      - backend-network
  nginx:
    container_name: k4fntr_nginx
    image: nginx
    volumes:
      - ./docker/nginx/config/default:/etc/nginx/conf.d
      - k4fntr_sync:/var/www/k4fntr:nocopy
    depends_on:
      - *php-fpm-service
    ports:
      - "${NGINX_LISTEN_PORT}:80"
    networks:
      - frontend-network
  database:
    container_name: k4fntr_database
    build: ./docker/postgres
    restart: always
    environment:
      ENV: ${APP_ENV}
      TESTING_DB: ${DB_DATABASE_TESTING}
      POSTGRES_DB: ${DB_DATABASE}
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    ports:
      - "15432:5432"
    volumes:
      - ./docker/postgres/prod/:/prod
      - ./docker/postgres/pg-data:/var/lib/postgresql/data:rw
    networks:
      - backend-network
The problem is that when I change some files in my project I have to wait a long time, from 15 to 40 seconds, before the changes show up. That is unworkable for local development. How can I solve this problem?
I found information about similar problems on other operating systems such as macOS or Windows, but I couldn't find the same problem reported for Linux.
The problem was that I assumed the second file (docker-compose-dev.yml) overrode the first one, in particular the php-fpm section. If you look at docker-compose-dev.yml you can see that it contains the command
command: php-fpm --nodaemonize -d "opcache.enable=0" -d "display_startup_errors=On" -d "display_errors=On" -d "error_reporting=E_ALL"
Actually the first file's php-fpm command was the one used (which surprised me at first, but with multiple -f options the file listed last wins, and I ran
docker-compose -f docker-compose-dev.yml -f docker-compose.yml up
so docker-compose.yml overrode the dev file). As a result opcache stayed enabled and my code was cached. This was the main reason I had to wait so long.
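A minimal sketch of the invocation that makes the dev overrides win (assuming that is the intent, i.e. that the php-fpm command with opcache.enable=0 from docker-compose-dev.yml should be the one that runs): list the base file first and the override file last.

# With multiple -f options, the file listed last overrides matching keys of the
# earlier ones, so here the dev php-fpm command (opcache disabled) takes effect.
docker-compose -f docker-compose.yml -f docker-compose-dev.yml up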

Docker Redis unable to connect with laravel and predis

I'm using Docker with a Laravel project, but I'm struggling to get the Laravel container to connect to Redis.
###############################################################################
# Generated on phpdocker.io #
###############################################################################
version: "3.1"
services:
  redis:
    image: redis:alpine
    container_name: my-asset-management-redis
  mysql:
    image: mysql:8.0
    container_name: my-asset-management-mysql
    working_dir: /application
    volumes:
      - .:/application
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=app
      - MYSQL_USER=user
      - MYSQL_PASSWORD=pass
    ports:
      - "8085:3306"
  webserver:
    image: nginx:alpine
    container_name: my-asset-management-webserver
    working_dir: /application
    volumes:
      - .:/application
      - ./phpdocker/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "8083:80"
  php-fpm:
    build: phpdocker/php-fpm
    container_name: my-asset-management-php-fpm
    working_dir: /application
    volumes:
      - .:/application
      - ./phpdocker/php-fpm/php-ini-extras.ini:/etc/php/7.4/fpm/conf.d/99-extras.ini
And I have REDIS_HOST=my-asset-management-redis in my .env.
But I keep getting this: Predis\Connection\ConnectionException : php_network_getaddresses: getaddrinfo failed: No such host is known. [tcp://my-asset-management-redis:6379]
I have the Redis password set to null in my .env as well.
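One thing worth checking (an assumption on my part, not a confirmed fix): on the default network that docker-compose creates for this file, containers reach each other by compose service name, so from the php-fpm container the Redis service should be reachable simply as redis. A minimal .env sketch under that assumption:

# Assumes Laravel runs inside the php-fpm container defined above and addresses
# Redis by its compose service name rather than the container_name.
REDIS_HOST=redis
REDIS_PORT=6379
REDIS_PASSWORD=null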

How to start CosmosDB emulator with docker-compose?

I've got a docker-compose project in Visual Studio which starts three services. One of them uses CosmosDB.
I've followed the instructions on https://hub.docker.com/r/microsoft/azure-cosmosdb-emulator/ to start the emulator in a Docker container and it worked.
But now I want to get it up and running through a docker-compose file. The following is my current configuration.
version: '3.4'
services:
  gateway:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    image: ${DOCKER_REGISTRY-}gateway
    ports:
      - "7000:80"
    depends_on:
      - servicea
      - serviceb
    build:
      context: .\ApiGateways\IAGTO.Fenix.ApiGateway
      dockerfile: Dockerfile
  servicea:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    image: ${DOCKER_REGISTRY-}servicea
    depends_on:
      - email.db
    build:
      context: .\Services\ServiceA
      dockerfile: Dockerfile
  serviceb:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    image: ${DOCKER_REGISTRY-}serviceb
    build:
      context: .\Services\ServiceB
      dockerfile: Dockerfile
  email.db:
    image: microsoft/azure-cosmosdb-emulator
    container_name: cosmosdb-emulator
    ports:
      - "8081:8081"
I can see the container running when I run docker container list
But requests to https://localhost:8081/_explorer/index.html fail.
Any help on this is much appreciated.
I was in the same situation, but with the following docker-compose.yml the container started and became accessible.
I can browse https://localhost:8081/_explorer/index.html:
version: '3.7'
services:
  cosmosdb:
    container_name: cosmosdb
    image: microsoft/azure-cosmosdb-emulator
    tty: true
    restart: always
    ports:
      - "8081:8081"
      - "8900:8900"
      - "8901:8901"
      - "8979:8979"
      - "10250:10250"
      - "10251:10251"
      - "10252:10252"
      - "10253:10253"
      - "10254:10254"
      - "10255:10255"
      - "10256:10256"
      - "10350:10350"
    volumes:
      - vol_cosmos:C:\CosmosDB.Emulator\bind-mount
volumes:
  vol_cosmos:
Probably I needed to set "tty" or "volumes".
Using the Linux Cosmos DB image, I set it up like this:
version: '3.4'
services:
  db:
    container_name: cosmosdb
    image: "mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator"
    tty: true
    restart: always
    mem_limit: 2G
    cpu_count: 2
    environment:
      - AZURE_COSMOS_EMULATOR_PARTITION_COUNT=10
      - AZURE_COSMOS_EMULATOR_ENABLE_DATA_PERSISTENCE=true
    ports:
      - "8081:8081"
      - "8900:8900"
      - "8901:8901"
      - "8979:8979"
      - "10250:10250"
      - "10251:10251"
      - "10252:10252"
      - "10253:10253"
      - "10254:10254"
      - "10255:10255"
      - "10256:10256"
      - "10350:10350"
    volumes:
      - vol_cosmos:/data/db
volumes:
  vol_cosmos:
Part of the problem is that the emulator takes a while to start, and there is a timeout of 2 minutes before it just stops waiting.
I'm trying to hack my way through it, but I haven't had much success.
For now the image only works standalone (via docker run) and that's it.
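One way to cope with the slow startup (a sketch, not something confirmed by the answers above; it assumes a Compose version that supports the long depends_on syntax and that curl is available inside the emulator image) is to give the emulator a healthcheck with a generous start period and make dependent services wait for it:

services:
  db:
    image: "mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator"
    healthcheck:
      # The emulator serves its certificate at this path once it is up.
      test: ["CMD", "curl", "-fk", "https://localhost:8081/_explorer/emulator.pem"]
      interval: 15s
      timeout: 10s
      retries: 20
      start_period: 120s
  servicea:                          # any service that needs the emulator
    depends_on:
      db:
        condition: service_healthy   # wait until the healthcheck passes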

Laravel on Docker: [2002] Connection refused

I am trying to put a Laravel app up on Docker, but the database container is giving me trouble.
Specifically, I am getting this error when I try to open the app in the browser:
SQLSTATE[HY000] [2002] Connection refused
But, as far as I can see, all the user credentials are correct. Perhaps I am missing something? Please see below.
docker-compose.yml:
version: '3'
services:
  app:
    build:
      context: ./
      dockerfile: app.dockerfile
    working_dir: /var/www
    volumes:
      - ./yoga/:/var/www
    environment:
      - "DB_PORT=33061"
      - "DB_HOST=database"
  web:
    build:
      context: ./
      dockerfile: web.dockerfile
    working_dir: /var/www
    volumes:
      - ./:/var/www
    ports:
      - 8080:80
  database:
    image: mysql:5.7
    container_name: database
    volumes:
      - dbdata:/var/lib/mysql
    environment:
      - "MYSQL_DATABASE=yogadb"
      - "MYSQL_USER=yogi"
      - "MYSQL_PASSWORD=mypasshere"
      - "MYSQL_ROOT_PASSWORD="
    ports:
      - "33061:3306"
volumes:
  dbdata:
.env:
DB_CONNECTION=mysql
DB_HOST=database
DB_PORT=3306
DB_DATABASE=yogadb
DB_USERNAME=yogi
DB_PASSWORD=mypasshere
When I run the app outside Docker everything works correctly; I just replace DB_HOST=database with DB_HOST=127.0.0.1.
What can I do to fix this?
docker ps output is:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2da7283f7a65 docker_app "docker-php-entrypoi…" 19 minutes ago Up 7 seconds 9000/tcp docker_app_1
4801fe3312c1 mysql:5.7 "docker-entrypoint.s…" 2 hours ago Up 7 seconds 33060/tcp, 0.0.0.0:33061->3306/tcp 4801fe3312c1_database
ab370ae1d155 docker_web "nginx -g 'daemon of…" 25 hours ago Up 7 seconds 443/tcp, 0.0.0.0:8080->80/tcp docker_web_1
As #prd mentioned, you need to create a bridged network for the containers [1], then add the containers to that network [2].
The hostname of a container is determined by the name of the service in docker-compose.yml. In your case, the app service will connect to the database service at hostname database and port 3306 (the container port, not the published port 33061).
So docker-compose.yml becomes:
version: '3'
services:
  app:
    build:
      context: ./
      dockerfile: app.dockerfile
    working_dir: /var/www
    volumes:
      - ./yoga/:/var/www
    environment:
      - "DB_PORT=3306" # Port of database container is 3306
      - "DB_HOST=database"
    networks:
      - name_of_network # [2] add container to network
  web:
    build:
      context: ./
      dockerfile: web.dockerfile
    working_dir: /var/www
    volumes:
      - ./:/var/www
    ports:
      - 8080:80
  database: # Name of service, which determines hostname of container
    image: mysql:5.7
    container_name: database
    volumes:
      - dbdata:/var/lib/mysql
    environment:
      - "MYSQL_DATABASE=yogadb"
      - "MYSQL_USER=yogi"
      - "MYSQL_PASSWORD=mypasshere"
      - "MYSQL_ROOT_PASSWORD="
    ports:
      - "33061:3306"
    networks:
      - name_of_network # [2] add container to network
volumes:
  dbdata:
networks:
  name_of_network: # [1] create bridged network
