Docker multiple sites on different ports - macOS

Right now, I have a single static site running in a docker container, being served on port 80. This plays nicely, since 80 is the default for public traffic.
So in my /etc/hosts file I can add an entry like 127.0.0.1 example.dev, and navigating to example.dev automatically uses port 80.
What if I need to add an additional 2-3 dockerized dev sites to my environment? What would be the best course of action to avoid having to access these sites solely by port, i.e. 81, 82, 83, etc.? Also, it seems that under these circumstances I would be limited to mapping only the dev site tied to port 80 to a specific hostname; is there a way to overcome this? What is the best way to manage multiple Docker sites on different ports?
Note: I was hoping to access each container via its IP address, e.g. 172.21.0.4, and then simply add a hostname entry to my hosts file, but accessing containers by IP address doesn't work on macOS.
docker-compose.yml
version: '3'
services:
  mysql:
    container_name: mysql
    build: ./mysql
    environment:
      - MYSQL_DATABASE=example_dev
      - MYSQL_USER=test
      - MYSQL_PASSWORD=test
      - MYSQL_ROOT_PASSWORD=0000
    ports:
      - 3307:3306
  phpmyadmin:
    container_name: myadmin
    image: phpmyadmin/phpmyadmin
    ports:
      - 8080:80
    links:
      - "mysql:db"
  apache_site1:
    container_name: apache_site1
    build: ./php-apache
    ports:
      - 80:80
    volumes:
      - ../:/var/www/html
    links:
      - mysql:db
./php-apache/Dockerfile
FROM php:7-apache
COPY ./config/php.ini /usr/local/etc/php/
EXPOSE 80
Thanks in advance.

Your problem is best handled using a reverse proxy such as nginx. You can run the reverse proxy on port 80 and then configure it to route requests to the specific site. For example:
http://example.dev/site1 routes to site1 at http://example.dev:8080
http://example.dev/site2 routes to site2 at http://example.dev:8081
And thus you run your sites on ports 8080, 8081, and so on.
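A minimal nginx sketch of this path-based routing (my illustration, not from the answer; the ports match the example above, and the trailing slashes make nginx strip the /siteN prefix before forwarding):
server {
    listen 80;
    server_name example.dev;

    # requests to /site1/... go to the container published on port 8080
    location /site1/ {
        proxy_pass http://127.0.0.1:8080/;
        proxy_set_header Host $host;
    }

    # requests to /site2/... go to the container published on port 8081
    location /site2/ {
        proxy_pass http://127.0.0.1:8081/;
        proxy_set_header Host $host;
    }
}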

Specific solution
Based on the docker-compose file in the question, edit docker-compose.yml and add this service:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
Then, change the apache_site1 service in this way:
  apache_site1:
    container_name: apache_site1
    build: ./php-apache
    volumes:
      - ../:/var/www/html
    links:
      - mysql:db
    environment:
      - VIRTUAL_HOST=apache-1.dev
Run docker-compose up and check that your apache-1.dev website is reachable:
curl -H 'Host: apache-1.dev' localhost
Or use a browser extension to set the Host header, as described below.
More websites
When you need to add more websites, just add an apache_site2 entry in the same way and be sure to set a VIRTUAL_HOST environment variable in its definition, as sketched below.
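A sketch of such an entry, mirroring apache_site1 (the apache-2.dev hostname is just an example):
  apache_site2:
    container_name: apache_site2
    build: ./php-apache
    volumes:
      - ../:/var/www/html
    links:
      - mysql:db
    environment:
      - VIRTUAL_HOST=apache-2.dev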
Generic solution
Use a single nginx with multiple server entries
If you don't want to use a reverse proxy with a subpath for each website, you can set up an nginx reverse proxy listening on your host's port 80, with one server entry for each site/container you have:
server {
    listen 80;
    server_name example-1.dev;

    location / {
        proxy_pass http://website-1-container:port;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

server {
    listen 80;
    server_name example-2.dev;

    location / {
        proxy_pass http://website-2-container:port;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
... and so on
Then, you can use the Host header to send requests for different domains to your localhost without changing your /etc/hosts:
curl -H 'Host: example-2.dev' localhost
If you're doing web development and need to see the pages in a browser, you can use a browser extension to customize the Host header on each page request.
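Alternatively (my suggestion, not part of the original answer), you can keep using /etc/hosts as in the question and map each dev hostname to 127.0.0.1, so the browser sends the matching Host header on its own:
# /etc/hosts
127.0.0.1   example-1.dev
127.0.0.1   example-2.dev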
Ready-made solution with nginx and Docker services
Use a docker-compose file with all your services and the jwilder/nginx-proxy image, which will auto-configure an nginx proxy for you using environment variables. This is an example docker-compose.yml file:
version: "3"
services:
website-1:
image: website-1:latest
environment:
- VIRTUAL_HOST=example-1.dev
website-2:
image: website-2:latest
environment:
- VIRTUAL_HOST=example-2.dev
nginx-proxy:
image: jwilder/nginx-proxy
ports:
- "80:80"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
Apache solution
Use Apache virtual hosts to set up multiple websites in the same way described for nginx. Be sure to enable the Apache ProxyPreserveHost directive to forward the Host header to the proxied server; a minimal sketch follows.
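A sketch of such an Apache virtual host (my illustration; it assumes mod_proxy and mod_proxy_http are enabled, and website-1-container:80 stands for whatever container and port you proxy to):
<VirtualHost *:80>
    ServerName example-1.dev

    # forward the original Host header to the proxied container
    ProxyPreserveHost On
    ProxyPass        / http://website-1-container:80/
    ProxyPassReverse / http://website-1-container:80/
</VirtualHost>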

Related

How to connect to my local machine domain from docker container?

I have a local server with the domain mydomain.com; it is just an alias for localhost:80.
I want to make requests to mydomain.com from my running Docker container.
When I try to send a request to it I see:
cURL error 7: Failed to connect to mydomain.com port 80: Connection refused
My docker-compose.yml
version: '3.8'
services:
  nginx:
    container_name: project-nginx
    image: nginx:1.23.1-alpine
    volumes:
      - ./docker/nginx/conf.d/default.conf:/etc/nginx/conf.d/default.conf
      - ./src:/app
    ports:
      - ${NGINX_PORT:-81}:80
    depends_on:
      - project
  server:
    container_name: project
    build:
      context: ./
    environment:
      NODE_MODE: service
      APP_ENV: local
      APP_DEBUG: 1
      ALLOWED_ORIGINS: ${ALLOWED_ORIGINS:-null}
    volumes:
      - ./src:/app
I'm using Docker Desktop for Windows.
What can I do?
I've tried to add
network_mode: "host"
but it breaks my docker-compose startup.
When I try to send a request to host.docker.internal, I see this:
The requested URL was not found on this server. If you entered
the URL manually please check your spelling and try again.
The host network is not supported on Windows. If you are using Linux containers on Windows, make sure you have switched Docker Desktop to Linux containers; that backend uses WSL 2, so you should be able to use host networking inside it.
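If you stay on Docker Desktop, another option (not covered above, and it assumes Docker Engine 20.10 or later) is to map mydomain.com to the special host-gateway address inside the container, so the request leaves the container with the right hostname and reaches the web server on your Windows host. A minimal sketch against the compose file above:
# docker-compose.yml (sketch)
services:
  server:
    container_name: project
    build:
      context: ./
    extra_hosts:
      # host-gateway resolves to the Docker host from inside the container,
      # so requests to mydomain.com hit the host's port 80 with the right Host header
      - "mydomain.com:host-gateway"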

nginx reverse-proxy bad gateway, no idea what I'm doing wrong

This is my first time using an nginx reverse proxy with Docker and I need your help. I have a service written in Go and I'm trying to put nginx in front of it, but I'm getting an error and have no idea what is happening.
This is my yml file:
version: "3.6"
services:
reverse-proxy:
image: nginx:1.17.10
container_name: reverse_proxy
volumes:
- ./reverse_proxy/nginx.conf:/etc/nginx/nginx.conf
ports:
- 80:80
This is my nginx.conf file:
events {
    worker_connections 1024;
}

http {
    server {
        listen 80 default_server;
        server_name localhost 127.0.0.1;

        location /api/ {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://127.0.0.1:8081;
        }
    }
}
After running my service and docker-compose up, when I hit curl localhost/api I get a 502 Bad Gateway, although if I run curl localhost:8081 my service responds without any issue. Any help is appreciated because I'm really stuck here. Thanks in advance.
This is more a Docker problem than an nginx or Go one. When you run containers inside the Docker Engine, each one is isolated from the others, so your nginx service doesn't know about your backend service (127.0.0.1 inside the nginx container points to the nginx container itself, not to your host).
So, to fix this, you should use Docker networks. A network will enable your services to communicate with each other. To do that, first edit your docker-compose.yml file:
# docker-compose.yml
version: "3.8"
services:
  my_server_service:
    image: your_docker_image
    restart: "always"
    networks:
      - "api-network"
  reverse-proxy:
    image: nginx:1.17.10
    container_name: reverse_proxy
    volumes:
      - ./reverse_proxy/nginx.conf:/etc/nginx/nginx.conf
    ports:
      - 80:80
    networks:
      - "api-network"
networks:
  api-network:
After that, you need to change the proxy_pass in your nginx.conf file to use the same name as your backend service, so change the string "127.0.0.1" to "my_server_service":
# nginx.conf
user nginx;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80 default_server;
        server_name localhost 127.0.0.1;

        location /api/ {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://my_server_service:8081;
        }
    }
}
Also, since you are using nginx as a reverse proxy, you don't even need to bind your backend service's port to the host machine; the Docker network resolves it internally between the services.
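In other words, the backend service definition can drop the host port mapping entirely. A sketch under the same names as above (the image name stays the placeholder from the answer):
services:
  my_server_service:
    image: your_docker_image
    restart: "always"
    networks:
      - "api-network"
    # no "ports:" mapping here - nginx reaches this service by name over
    # api-network; "expose" is optional documentation of the internal port
    expose:
      - "8081"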

Laravel Sail (Docker), nginx reverse proxy - HTML renders localhost:8002 instead of site.xyz

I started testing the new Laravel Sail docker-compose environment with an nginx reverse proxy so I can access my website from a real TLD while developing on my local machine.
My setup is:
OS - Ubuntu Desktop 20 with nginx and Docker installed.
Nginx site enabled on the host machine:
server {
    server_name mysite.xyz;
    listen 443 ssl;
    listen [::]:443 ssl;
    ssl_certificate /etc/ssl/certs/mysite.xyz.crt;
    ssl_certificate_key /etc/ssl/private/mysite.xyz.key;
    ssl_protocols TLSv1.2 TLSv1.1 TLSv1;

    location / {
        proxy_pass http://localhost:8002;
    }
}
Also, I have 127.0.0.1 mysite.xyz in my host machine's /etc/hosts file.
And my docker-compose:
# For more information: https://laravel.com/docs/sail
version: '3'
services:
  laravel.test:
    build:
      context: ./vendor/laravel/sail/runtimes/8.0
      dockerfile: Dockerfile
      args:
        WWWGROUP: '${WWWGROUP}'
    image: sail-8.0/app
    ports:
      - '8002:80'
    environment:
      WWWUSER: '${WWWUSER}'
      LARAVEL_SAIL: 1
    volumes:
      - '.:/var/www/html'
    networks:
      - sail
    depends_on:
      - mysql
      - redis
  mysql:
    image: 'mysql:8.0'
    ports:
      - '${FORWARD_DB_PORT:-3306}:3306'
    environment:
      MYSQL_ROOT_PASSWORD: '${DB_PASSWORD}'
      MYSQL_DATABASE: '${DB_DATABASE}'
      MYSQL_USER: '${DB_USERNAME}'
      MYSQL_PASSWORD: '${DB_PASSWORD}'
      MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
    volumes:
      - 'sailmysql:/var/lib/mysql'
    networks:
      - sail
  redis:
    image: 'redis:alpine'
    ports:
      - '${FORWARD_REDIS_PORT:-6379}:6379'
    volumes:
      - 'sailredis:/data'
    networks:
      - sail
networks:
  sail:
    driver: bridge
volumes:
  sailmysql:
    driver: local
  sailredis:
    driver: local
The site loads fine when I access mysite.xyz from my host machine.
The issue I'm having is that on the register page, which I can see from my host machine by accessing https://mysite.xyz/register, the form action is: http://localhost:8002/register
The piece of code that generates the above URL is <form method="POST" action="{{ route('register') }}">
This is a problem because I don't access the site from localhost:XXXX but from mysite.xyz, which goes through the nginx reverse proxy and eventually ends up pointing to http://localhost:8002/register
What I checked:
In my Laravel .env file, the APP_URL is mysite.xyz
If I ssh into the Sail container, start artisan tinker, and run route('register'), it outputs https://mysite.xyz/, so clearly the Laravel app inside the container seems to be behaving correctly.
The funny thing is that when it renders the HTML response, it renders the register route as http://localhost:8002/register.
I tried searching the entire project for localhost:8002 and I can find it in /storage/framework/sessions/asdofhsdfasf8as7dgf8as7ogdf8asgd7
that bit of text says: {s:3:"url";s:27:"http://localhost:8002/login";}
So it seems that the session thinks it's localhost:8002 but tinker thinks it's mysite.xyz
I'm also a docker noob so who knows what I'm missing. I'm lost :)
The "problem" lies within your nginx configuration, not your code:
proxy_pass http://localhost:8002;
Laravel uses APP_URL in the CLI (console) or whenever you use config('app.url') or env('APP_URL') in your code. For all other operations (such as URL construction via the route helper), Laravel fetches the URL from the request:
URL Generation, Laravel 8.x Docs
The url helper may be used to generate arbitrary URLs for your application. The generated URL will automatically use the scheme (HTTP or HTTPS) and host from the current request being handled by the application.
What you need to do is pass the correct host to the proxied app in your nginx configuration, by adding:
proxy_pass http://localhost:8002;
proxy_set_header Host $host;
For additional information on the topic, you may want to have a look at this article: Setting up an Nginx Reverse Proxy
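Putting it together, the host nginx site could look like the sketch below (my illustration; the extra X-Forwarded-* headers are standard practice so the app also knows the original client and scheme, and on the Laravel side they only take effect if the proxy is trusted):
server {
    server_name mysite.xyz;
    listen 443 ssl;
    ssl_certificate /etc/ssl/certs/mysite.xyz.crt;
    ssl_certificate_key /etc/ssl/private/mysite.xyz.key;

    location / {
        proxy_pass http://localhost:8002;
        # forward the original host so route()/url() generate mysite.xyz URLs
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}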

Docker Win 10: localhost 403 Forbidden

It's been one week that I've been trying to display a simple index.html using Docker under Windows 10. Docker is working, my docker-compose creates containers and volumes, and my index.html is copied into the php container.
I added
127.0.0.1 localhost
127.0.0.1 dev.local.fr
into the Windows hosts file, but when I try any URL (localhost, 127.0.0.1, or dev.local.fr), or curl them on the command line, I only get a 403 Forbidden.
This is my docker-compose.yml:
version: "3.2"
services:
php:
image: wodby/drupal-php:7.2-dev-4.8.4
networks:
- backend
volumes:
- ./project/:/var/www/html/
apache:
image: wodby/apache:2.4-4.0.2
depends_on:
- php
- mysql
networks:
- frontend
- backend
ports:
- "8080:80"
volumes:
- ./project/:/var/www/html/
environment:
APACHE_DOCUMENT_ROOT: /var/www/html
VIRTUAL_HOST: "dev.local.fr"
VIRTUAL_PORT: 80
mysql:
image: mysql:5.6.40
networks:
- backend
environment:
- MYSQL_ROOT_PASSWORD=rootpassword
networks:
frontend:
backend:
(but everything seems ok on the Docker side anyway...)
I've read hundreds of posts on the web and cannot find the way to reach my index.html from any browser.
I was thinking that maybe I should add some vhosts in httpd.conf (like I used to do under XAMPP or WAMP), but I didn't find this file in the apache container, and anyway I've got no idea how to add vhost directives to httpd.conf from my docker-compose.yml.
But that's just my personal idea, as nowhere in Docker's docs does it state that we must edit httpd.conf to make it work.
Any help or idea will be greatly appreciated. I really need to get a working server, as I'm a professional Drupal developer...
Regards.
Finally I found it: adding an httpd.conf to the build did the trick. I also needed a Dockerfile to do the copy.
docker-compose.yml:
version: "3.2"
services:
php:
build:
context: ./apache-php
image: php:7.2-apache
working_dir: /var/www/html
volumes:
- ./project:/var/www/html
extra_hosts:
- "pfg.local.fr:127.0.0.1"
hostname: pfg.local.fr
ports:
- 80:80
httpd.conf:
Listen 80

<VirtualHost *:80>
    DocumentRoot /var/www/html
    ServerName dev.local.fr
</VirtualHost>
Dockerfile:
FROM php:7.2.1-apache
COPY httpd.conf /etc/apache2/sites-available/000-default.conf
Directories:
apache-php/
    Dockerfile
    httpd.conf
docker-compose.yml
index.html
(And to be complete, I also previously allowed for 10.0.75.0, 10.0.75.1 and 10.0.75.2 in Norton Firewall.)
So I can try to build up my Drupal now. Let's hope it doesn't take another week!

traefik proxy for docker container running in host network

I am running Traefik as a proxy in a Docker container.
I am using Docker Toolbox on Windows 10.
The Traefik proxy recognizes the service app, but it sees it as running at 127.0.0.1, while the service app is actually running on the Docker host at a 192.168.99.x IP.
version: '3'
services:
  reverse_proxy:
    image: traefik
    command: --api --docker
    ports:
      - "81:80"
      - "8081:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - backend
  whoami:
    image: containous/whoami
    labels:
      - "traefik.frontend.rule=Host:whoami.default"
      - "traefik.enable=true"
      - "traefik.port=80"
    network_mode: host
networks:
  backend:
    driver: bridge
In the Traefik dashboard at http://192.168.99.100:8081,
it shows http://127.0.0.1:80 for the whoami service
instead of http://192.168.99.100:80.
Any help would be appreciated.
I want network_mode: host to pick 192.168.99.100 instead of 127.0.0.1.
As the official Traefik documentation says, when resolving the service IP, it will first
try a lookup of host.docker.internal
and second
if the lookup was unsuccessful, fall back to 127.0.0.1
This means we can just add a host entry to the Traefik container using --add-host host.docker.internal:{docker0_IP} (that's the bridge's IP; you can run docker inspect {NAME_OF_TRAEFIK} and find the Gateway IP, for me it's 172.18.0.1). If you use docker-compose, you can add the following lines to your Traefik definition:
extra_hosts:
  - host.docker.internal:{docker0_IP}
Also, I find that it's OK to use my eth0 IP instead, i.e. the IP on your LAN (for me, it's 192.168.0.20).
Then, recreate Traefik and everything works like a charm.
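For reference, a sketch of the reverse_proxy service from the question with that host entry added ({docker0_IP} stays a placeholder for the Gateway IP from docker inspect):
  reverse_proxy:
    image: traefik
    command: --api --docker
    ports:
      - "81:80"
      - "8081:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    extra_hosts:
      # make host.docker.internal resolve so Traefik reaches the host-networked app
      - "host.docker.internal:{docker0_IP}"
    networks:
      - backend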
