Docker Nginx localhost in Windows 10 host file - windows

I'd like to try an Nginx Docker container, but I don't know how to set up the hosts file on Windows 10, following
http://geekyplatypus.com/dockerise-your-php-application-with-nginx-and-php7-fpm/comment-page-3/#comment-2935
. Can somebody help me, please?
web:
  image: nginx:latest
  ports:
    - 8890:80
  volumes:
    - ./code:/code
    - ./site.conf:/etc/nginx/conf.d/site.conf
and
server {
    index index.php index.html;
    server_name php-docker.local;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /code;
}
The site should be reachable at php-docker.local:8890.
Thanks in advance.
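For reference, the hosts entry the linked article relies on usually looks like this on Windows (the file must be edited with Administrator rights):

```text
# C:\Windows\System32\drivers\etc\hosts
127.0.0.1    php-docker.local
```

Note that the hosts file only maps names to IP addresses, never ports, so the published port still has to appear in the URL: http://php-docker.local:8890.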

Related

nginx reverse-proxy bad gateway, no idea what I'm doing wrong

This is my first time using an nginx reverse proxy with Docker and I need your help. I have a service written in Go, and when I try to put nginx in front of it I get an error and have no idea what is happening.
This is my yml file:
version: "3.6"
services:
  reverse-proxy:
    image: nginx:1.17.10
    container_name: reverse_proxy
    volumes:
      - ./reverse_proxy/nginx.conf:/etc/nginx/nginx.conf
    ports:
      - 80:80
This is my nginx.conf file:
events {
    worker_connections 1024;
}
http {
    server {
        listen 80 default_server;
        server_name localhost 127.0.0.1;
        location /api/ {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://127.0.0.1:8081;
        }
    }
}
After running my service and docker-compose up, hitting curl localhost/api gives me a 502 Bad Gateway, although if I run curl localhost:8081 my service responds without any issue. Any help is appreciated because I'm really stuck here. Thanks in advance.
This is more a Docker problem than an nginx or Go one. When you run containers inside the Docker engine, each one runs isolated from the others, so your nginx service doesn't know about your backend service.
To fix this, you should use Docker networks. A network enables your services to communicate with each other. To do that, first edit your docker-compose.yml file:
# docker-compose.yml
version: "3.8"
services:
  my_server_service:
    image: your_docker_image
    restart: "always"
    networks:
      - "api-network"
  reverse-proxy:
    image: nginx:1.17.10
    container_name: reverse_proxy
    volumes:
      - ./reverse_proxy/nginx.conf:/etc/nginx/nginx.conf
    ports:
      - 80:80
    networks:
      - "api-network"
networks:
  api-network:
After that, you need to change the proxy_pass in your nginx.conf file to use the same name as your backend service, so change the string "127.0.0.1" to "my_server_service":
# nginx.conf
user nginx;
events {
    worker_connections 1024;
}
http {
    server {
        listen 80 default_server;
        server_name localhost 127.0.0.1;
        location /api/ {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://my_server_service:8081;
        }
    }
}
Also, since you are using nginx as a reverse proxy, you don't even need to bind your backend service's port to your host machine; the Docker network resolves it internally between the services.
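As a sketch of that last point (assuming the backend listens on 8081 inside its container, as the proxy_pass above implies), the backend service needs no ports: mapping at all; an expose: entry is purely informational:

```yaml
# docker-compose.yml fragment -- backend reachable only inside api-network
my_server_service:
  image: your_docker_image
  expose:
    - "8081"   # documentation only; the proxy reaches http://my_server_service:8081
  networks:
    - "api-network"
```

Dropping the host binding also avoids port conflicts on the host and keeps the backend unreachable from outside the proxy.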

Trying to dockerize Spring Boot and Laravel apps with Nginx

Basically, I have an app with two backend projects.
The first one is a Spring Boot project with some recommendation algorithms which I want Angular to call.
The second one is a Laravel project which contains some endpoints for the database (users, items, etc.).
Angular communicates with Spring Boot via WebSocket, so I need some routes to get the data.
Angular communicates with Laravel via a REST API, some basic stuff.
This is the docker-compose I have for the app (I don't know if the ports for Nginx are right, but I don't know how else to do it):
version: '3'
networks:
  laravel:
services:
  nginx:
    image: nginx:stable-alpine
    container_name: nginx
    ports:
      - "80:80"
      - "8090:8090"
    volumes:
      - .:/var/www/html
      - ./docker/nginx/default.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - php
      - mysql
    networks:
      - laravel
  mysql:
    image: mysql:5.7.22
    container_name: mysql
    tty: true
    ports:
      - "3306:3306"
    volumes:
      - ./docker/mysql:/var/lib/mysql
    environment:
      MYSQL_DATABASE: ''
      MYSQL_USER: ''
      MYSQL_PASSWORD: ''
      MYSQL_ROOT_PASSWORD: ''
    networks:
      - laravel
  php:
    build:
      context: .
      dockerfile: ./docker/Dockerfile
    container_name: php
    volumes:
      - .:/var/www/html
      - ./machine-learning/RecommendationSystem:/app
      - "./machine-learning/.m2:/.m2"
    ports:
      - "9000:9000"
    networks:
      - laravel
  maven:
    build:
      context: .
      dockerfile: ./docker/java/Dockerfile
    container_name: java
    volumes:
      - ./machine-learning:/usr/src/machine-learning
    depends_on:
      - php
      - mysql
    networks:
      - laravel
Also, here is my Nginx conf file:
upstream springboot {
    keepalive 32;
    server localhost:8090 fail_timeout=0;
}
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name localhost;
    root /var/www/html/public;
    index index.php index.html index.htm;
    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }
    location ~ \.php$ {
        try_files $uri /index.php =404;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        fastcgi_buffers 16 16k;
        fastcgi_read_timeout 300;
        fastcgi_buffer_size 32k;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
    location /machine-learning {
        proxy_pass http://localhost:8090/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log main;
}
When I execute mvn package inside the container and then run curl --verbose http://localhost:8090, it shows the content I expect (a basic hello-world message), so I guess the Nginx conf file is correct. But when I access the same route in my web browser, I get a "The connection was reset" message.
Could anyone help me?
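One thing worth checking here: inside the nginx container, localhost refers to the nginx container itself, so both the upstream entry localhost:8090 and proxy_pass http://localhost:8090/ never reach the Spring Boot container. A sketch of the likely fix, assuming the Spring Boot app runs in the maven service from the compose file on port 8090:

```nginx
# point the upstream at the compose service name instead of localhost;
# containers on the same compose network resolve each other by service name
upstream springboot {
    keepalive 32;
    server maven:8090 fail_timeout=0;
}
```

The proxy_pass in the /machine-learning location would then reference http://springboot/ (the upstream name) rather than http://localhost:8090/.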

How to make an HTTPS server with Nginx in Docker for development machine?

I have tried creating a self-signed certificate and key, mounting them into the Docker container through a volume, and pointing the NGINX config at them, but the server works over HTTP and not over HTTPS.
In host.conf:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;
    ssl_certificate /etc/nginx/certs/localhost.crt;
    ssl_certificate_key /etc/nginx/certs/localhost.key;
    server_name localhost;
    charset utf-8;
    index index.php;
    root /var/www/html/web;
    ...
In docker-compose.yml:
version: "3"
services:
  nginx:
    restart: "always"
    build: ./nginx
    volumes:
      - .:/var/www/html
      - .certs:/etc/nginx/certs
    ports:
      - "${SERVER_PORT:-80}:80"
      - "443:443"
    depends_on:
      - php
    networks:
      - worksite
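For reference, here is one way to generate the self-signed pair this config expects; the .certs path and the localhost.crt/localhost.key names match the volume mount and ssl_certificate directives above, and the rest is a common openssl invocation:

```shell
# create a self-signed certificate for localhost, valid for one year;
# -nodes leaves the key unencrypted so nginx can read it without a passphrase
mkdir -p .certs
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout .certs/localhost.key -out .certs/localhost.crt \
  -subj "/CN=localhost"
```

Browsers will still warn that the certificate is self-signed; for local development the warning can be accepted, or the certificate can be added to the system trust store.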

Docker for Windows. Cannot connect to nginx

Using Windows 10 with Docker, I'm trying to reach my Linux container running nginx. When I try to access localhost (or the container IP) in a web browser, I get "cannot reach this page". Inside my nginx container, if I try to access localhost or the direct IP with curl, I get "Connection refused". I am a complete beginner with Docker on Windows and it's a nightmare to figure out! I have tried localhost:8080 and 172.18.0.4:8080 (which is the IP shown in docker inspect nginx_1).
Here is my docker-compose.yml
version: '2'
volumes:
  database_data:
    driver: local
services:
  nginx:
    image: nginx:latest
    ports:
      - 8080:80
    volumes:
      - ./docker/nginx/default.conf:/etc/nginx/conf.d
    volumes_from:
      - php
  php:
    build: ./docker/php
    expose:
      - 9000
    volumes:
      - .:/var/www/html
  mysql:
    image: mysql:latest
    expose:
      - 3306
    volumes:
      - database_data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: project
      MYSQL_USER: project
      MYSQL_PASSWORD: project
And here is my nginx default.conf file:
server {
    listen 80 default_server;
    root /var/www/html/public;
    index index.php;
    charset utf-8;
    location / {
        try_files $uri $uri/ /index.php;
    }
    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt { access_log off; log_not_found off; }
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    sendfile off;
    client_max_body_size 100m;
    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param APPLICATION_ENV development;
        fastcgi_intercept_errors off;
        fastcgi_buffer_size 16k;
        fastcgi_buffers 4 16k;
    }
    location ~ /\.ht {
        deny all;
    }
}
What is missing from my config that is preventing me access to my index.php file from my host machine?
Many thanks!
Just tested your setup and it worked fine for me.
Since you are already getting "cannot reach this page" from your browser, the issue is in the nginx container, not the php container. When the php container is not working, the page would still load but give you an nginx error like "File not found" (e.g. for index.php) or similar.
Can you check whether the nginx config is correctly loaded into the container? To do so, run "docker exec -it <container_name> sh" and look at the /etc/nginx/conf.d/default.conf file.
Additional info:
You can't access the nginx container via 172.18.0.4:8080. That is the IP of the container, but you map port 8080 only on your host machine. The default port of the nginx container is 80, and since the "normal" container ports are only available inside the Docker network, you can't access the container this way.
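One more detail worth double-checking in the compose file from the question: the nginx volume line mounts a single file onto the whole conf.d directory, which typically either fails or shadows the directory contents. The usual form mounts the file onto a file:

```yaml
# docker-compose.yml fragment -- mount default.conf onto the file,
# not onto the /etc/nginx/conf.d directory
volumes:
  - ./docker/nginx/default.conf:/etc/nginx/conf.d/default.conf
```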

Docker multiple sites on different ports

Right now, I have a single static site running in a Docker container, served on port 80. This plays nicely, since 80 is the default for public traffic.
So in my /etc/hosts file I can add an entry like 127.0.0.1 example.dev, and navigating to example.dev automatically uses port 80.
What if I need to add another 2-3 dockerized dev sites to my environment? What would be the best course of action to avoid having to access these sites solely by port, i.e. 81, 82, 83, etc.? Also, it seems that under these circumstances I would be limited to mapping only the dev site tied to port 80 to a specific hostname; is there a way to overcome this? What is the best way to manage multiple Docker sites on different ports?
Note: I was hoping to access a Docker container via its IP address, i.e. 172.21.0.4, and then simply add a hostname entry to my hosts file, but accessing containers by IP address doesn't work on Mac.
docker-compose.yml
version: '3'
services:
  mysql:
    container_name: mysql
    build: ./mysql
    environment:
      - MYSQL_DATABASE=example_dev
      - MYSQL_USER=test
      - MYSQL_PASSWORD=test
      - MYSQL_ROOT_PASSWORD=0000
    ports:
      - 3307:3306
  phpmyadmin:
    container_name: myadmin
    image: phpmyadmin/phpmyadmin
    ports:
      - 8080:80
    links:
      - "mysql:db"
  apache_site1:
    container_name: apache_site1
    build: ./php-apache
    ports:
      - 80:80
    volumes:
      - ../:/var/www/html
    links:
      - mysql:db
./php-apache/Dockerfile
FROM php:7-apache
COPY ./config/php.ini /usr/local/etc/php/
EXPOSE 80
Thanks in advance.
Your problem is best handled using a reverse proxy such as nginx. You can run the reverse proxy on port 80 and then configure it to route requests to a specific site. For example:
http://example.dev/site1 routes to site1 at http://example.dev:8080
http://example.dev/site2 routes to site2 at http://example.dev:8081
And thus you run your sites on ports 8080, 8081, and so on.
Specific solution
Based on the docker-compose file in the question, edit the docker-compose.yml file, adding this service:
nginx-proxy:
  image: jwilder/nginx-proxy
  ports:
    - "80:80"
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
Then, change the apache_site1 service in this way:
apache_site1:
  container_name: apache_site1
  build: ./php-apache
  volumes:
    - ../:/var/www/html
  links:
    - mysql:db
  environment:
    - VIRTUAL_HOST=apache-1.dev
Run the docker-compose file and check that your apache-1 website is reachable:
curl -H 'Host: apache-1.dev' localhost
Or use the Chrome extension as described below.
More websites
When you need to add more websites, just add an apache_site2 entry in the same way, and be sure to set a VIRTUAL_HOST environment variable in its definition.
Generic solution
Use a single nginx with multiple server entries
If you don't want to use a reverse proxy with a subpath for each website,
you can set up an nginx reverse proxy listening on your host's port 80, with one server entry for each site/container you have.
server {
    listen 80;
    server_name example-1.dev;
    location / {
        proxy_pass http://website-1-container:port;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
server {
    listen 80;
    server_name example-2.dev;
    location / {
        proxy_pass http://website-2-container:port;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
... and so on
Then, you can use the Host header to request different domains to your localhost without changing your /etc/hosts:
curl -H 'Host: example-2.dev' localhost
If you're doing web development and need to see the web pages, you can use a browser extension to customize the Host header on each page request.
Ready-made solution with nginx and Docker services
Use a docker-compose file with all your services and add the jwilder/nginx-proxy image, which auto-configures an nginx proxy for you using environment variables. This is an example docker-compose.yml file:
version: "3"
services:
  website-1:
    image: website-1:latest
    environment:
      - VIRTUAL_HOST=example-1.dev
  website-2:
    image: website-2:latest
    environment:
      - VIRTUAL_HOST=example-2.dev
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
Apache solution
Use Apache virtual hosts to set up multiple websites in the same way described for nginx. Be sure to enable the Apache ProxyPreserveHost directive to forward the Host header to the proxied server.
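A minimal sketch of such an Apache virtual host (the server name and container name/port are placeholders; mod_proxy and mod_proxy_http must be enabled):

```apache
<VirtualHost *:80>
    ServerName example-1.dev
    # forward the original Host header to the proxied container
    ProxyPreserveHost On
    ProxyPass        / http://website-1-container:8080/
    ProxyPassReverse / http://website-1-container:8080/
</VirtualHost>
```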
