Docker on Windows 10: localhost 403 Forbidden

I've been trying for a week to display a simple index.html using Docker on Windows 10. Docker is working, my docker-compose creates the containers and volumes, and my index.html is copied into the php container.
I added
127.0.0.1 localhost
127.0.0.1 dev.local.fr
to the Windows hosts file, but whatever URL I try (localhost, 127.0.0.1 or dev.local.fr), in a browser or with curl on the command line, I only get a 403 Forbidden.
This is my docker-compose.yml:
version: "3.2"
services:
  php:
    image: wodby/drupal-php:7.2-dev-4.8.4
    networks:
      - backend
    volumes:
      - ./project/:/var/www/html/
  apache:
    image: wodby/apache:2.4-4.0.2
    depends_on:
      - php
      - mysql
    networks:
      - frontend
      - backend
    ports:
      - "8080:80"
    volumes:
      - ./project/:/var/www/html/
    environment:
      APACHE_DOCUMENT_ROOT: /var/www/html
      VIRTUAL_HOST: "dev.local.fr"
      VIRTUAL_PORT: 80
  mysql:
    image: mysql:5.6.40
    networks:
      - backend
    environment:
      - MYSQL_ROOT_PASSWORD=rootpassword
networks:
  frontend:
  backend:
(but everything seems ok on the Docker side anyway...)
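A quick sanity check (a hedged sketch, not from the original post): with the compose file above, Apache is only published on host port 8080, so plain http://localhost goes to whatever is listening on port 80 on the host machine, not to the container. Something like this helps confirm which server is actually answering and that the file is mounted:
# check the container on its published port, then compare with plain port 80
curl -i http://localhost:8080/index.html
curl -i http://localhost/
# verify index.html really is inside the php container's volume
docker-compose exec php ls -l /var/www/html/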
I've read hundreds of posts on the web and cannot find a way to reach my index.html from any browser.
I was thinking that maybe I should add some vhosts to httpd.conf (as I used to do under XAMPP or WAMP), but I didn't find that file in the apache container, and in any case I have no idea how to add vhost directives to httpd.conf from my docker-compose.yml.
But that's just my own guess, since nowhere in Docker's docs does it say that httpd.conf must be edited to make this work.
Any help or idea will be greatly appreciated; I really need to get a working server, as I'm a professional Drupal developer...
Regards.

Finally I found it: adding an httpd.conf to the build did the trick. I also needed a Dockerfile to perform the copy.
docker-compose.yml:
version: "3.2"
services:
  php:
    build:
      context: ./apache-php
    image: php:7.2-apache
    working_dir: /var/www/html
    volumes:
      - ./project:/var/www/html
    extra_hosts:
      - "pfg.local.fr:127.0.0.1"
    hostname: pfg.local.fr
    ports:
      - 80:80
httpd.conf:
Listen 80
<VirtualHost *:80>
    DocumentRoot /var/www/html
    ServerName dev.local.fr
</VirtualHost>
Dockerfile:
FROM php:7.2.1-apache
COPY httpd.conf /etc/apache2/sites-available/000-default.conf
Directories:
apache-php
    Dockerfile
    httpd.conf
docker-compose.yml
index.html
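For reference, a quick way to rebuild and check the result (a rough sketch; dev.local.fr relies on the Windows hosts entry mentioned at the top of the question):
# rebuild so the Dockerfile's COPY of httpd.conf is picked up, then test
docker-compose up -d --build
curl -i http://localhost/index.html
curl -i http://dev.local.fr/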
(And to be complete, I also previously allowed 10.0.75.0, 10.0.75.1 and 10.0.75.2 in the Norton firewall.)
So I can try to build up my Drupal site now. Let's hope it doesn't take another week!

Related

How to connect to my local machine's domain from a Docker container?

I have a local server with the domain mydomain.com; it is just an alias for localhost:80.
I want to make requests to mydomain.com from my running Docker container.
When I try to request it, I see:
cURL error 7: Failed to connect to mydomain.com port 80: Connection refused
My docker-compose.yml:
version: '3.8'
services:
  nginx:
    container_name: project-nginx
    image: nginx:1.23.1-alpine
    volumes:
      - ./docker/nginx/conf.d/default.conf:/etc/nginx/conf.d/default.conf
      - ./src:/app
    ports:
      - ${NGINX_PORT:-81}:80
    depends_on:
      - project
  server:
    container_name: project
    build:
      context: ./
    environment:
      NODE_MODE: service
      APP_ENV: local
      APP_DEBUG: 1
      ALLOWED_ORIGINS: ${ALLOWED_ORIGINS:-null}
    volumes:
      - ./src:/app
I'm using Docker Desktop for Windows.
What can I do?
I've tried adding
network_mode: "host"
but it breaks my docker-compose startup.
When I try to send a request to host.docker.internal I see this:
The requested URL was not found on this server. If you entered
the URL manually please check your spelling and try again.
The host network is not supported on Windows. If you are using Linux containers on Windows, make sure you have switched Docker Desktop to Linux containers; that uses WSL2, so you should be able to use it there.
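As a hedged follow-up sketch (not part of the original answer): from inside the container, host.docker.internal does reach the Windows host, so the 404 may simply be the host's web server answering with its default virtual host instead of mydomain.com. Sending the expected Host header, or mapping the name explicitly, is worth trying:
# run from inside the "server" container: ask the host's web server for the
# site by name, so its mydomain.com virtual host (alias of localhost:80) answers
docker-compose exec server sh -c 'curl -sv -H "Host: mydomain.com" http://host.docker.internal/'
# alternatively (assumes Docker Engine 20.10+), let the container resolve the
# real name by adding this to the "server" service in docker-compose.yml:
#   extra_hosts:
#     - "mydomain.com:host-gateway"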

Laravel Sail (Docker), nginx reverse proxy - HTML renders localhost:8002 instead of site.xyz

I started testing the new Laravel Sail docker-compose environment with an nginx reverse proxy so I can access my website from a real TLD while developing on my local machine.
My setup is:
OS: Ubuntu Desktop 20 with nginx and Docker installed.
Nginx site enabled on the host machine:
server {
    server_name mysite.xyz;
    listen 443 ssl;
    listen [::]:443 ssl;
    ssl_certificate /etc/ssl/certs/mysite.xyz.crt;
    ssl_certificate_key /etc/ssl/private/mysite.xyz.key;
    ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
    location / {
        proxy_pass http://localhost:8002;
    }
}
I also have 127.0.0.1 mysite.xyz in my host machine's /etc/hosts file.
And my docker-compose:
# For more information: https://laravel.com/docs/sail
version: '3'
services:
  laravel.test:
    build:
      context: ./vendor/laravel/sail/runtimes/8.0
      dockerfile: Dockerfile
      args:
        WWWGROUP: '${WWWGROUP}'
    image: sail-8.0/app
    ports:
      - '8002:80'
    environment:
      WWWUSER: '${WWWUSER}'
      LARAVEL_SAIL: 1
    volumes:
      - '.:/var/www/html'
    networks:
      - sail
    depends_on:
      - mysql
      - redis
  mysql:
    image: 'mysql:8.0'
    ports:
      - '${FORWARD_DB_PORT:-3306}:3306'
    environment:
      MYSQL_ROOT_PASSWORD: '${DB_PASSWORD}'
      MYSQL_DATABASE: '${DB_DATABASE}'
      MYSQL_USER: '${DB_USERNAME}'
      MYSQL_PASSWORD: '${DB_PASSWORD}'
      MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
    volumes:
      - 'sailmysql:/var/lib/mysql'
    networks:
      - sail
  redis:
    image: 'redis:alpine'
    ports:
      - '${FORWARD_REDIS_PORT:-6379}:6379'
    volumes:
      - 'sailredis:/data'
    networks:
      - sail
networks:
  sail:
    driver: bridge
volumes:
  sailmysql:
    driver: local
  sailredis:
    driver: local
Site is loading fine when I access mysite.xyz from my host machine.
The issue I'm having is that on the register page, which I can see from my host machine by visiting https://mysite.xyz/register, the form action is http://localhost:8002/register.
The piece of code that generates the above URL is <form method="POST" action="{{ route('register') }}">
This is a problem because I don't access the site from localhost:XXXX but from mysite.xyz, which goes through the nginx reverse proxy and eventually ends up pointing to http://localhost:8002/register.
What I checked:
In my Laravel .env file, the APP_URL is mysite.xyz.
If I ssh into the Sail container, start artisan tinker and run route('register'), it outputs https://mysite.xyz/ so clearly the Laravel app inside the container seems to be behaving correctly.
The funny thing is that when it renders the HTML response, it renders the register route as http://localhost:8002/register.
I tried searching the entire project for localhost:8002 and I can find it in /storage/framework/sessions/asdofhsdfasf8as7dgf8as7ogdf8asgd7
That bit of text says: {s:3:"url";s:27:"http://localhost:8002/login";}
So it seems that the session thinks it's localhost:8002 but tinker thinks it's mysite.xyz.
I'm also a docker noob so who knows what I'm missing. I'm lost :)
The "problem" lies within your nginx configuration, not your code:
proxy_pass http://localhost:8002;
Laravel uses APP_URL in the CLI (console) or whenever you use config('app.url') or env('APP_URL') in your code. For all other operations (such as URL construction via the route helper) Laravel fetches the URL from the request:
URL Generation, Laravel 8.x Docs
The url helper may be used to generate arbitrary URLs for your application. The generated URL will automatically use the scheme (HTTP or HTTPS) and host from the current request being handled by the application.
What you need to do is pass the correct host in your nginx configuration, by adding:
proxy_pass http://localhost:8002;
proxy_set_header Host $host;
For additional information on the topic, you may want to have a look at this article: Setting up an Nginx Reverse Proxy
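A hedged way to verify the change (assuming nginx runs directly on the Ubuntu host, as described in the question):
# test the configuration, reload nginx, then check which host the rendered
# register form now points at
sudo nginx -t && sudo systemctl reload nginx
curl -sk https://mysite.xyz/register | grep -o 'action="[^"]*"'
# if links render as http:// instead of https://, forwarding the scheme can
# help too (assumes Laravel's TrustProxies middleware trusts this proxy):
#   proxy_set_header X-Forwarded-Proto $scheme;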

How to use Laravel docker container & MySQL DB with a Vue one?

I have an app which uses Vue CLI as a front-end and Laravel as a back-end. Now I am trying to launch my app on a server using Docker.
My Docker skills only get me one thing: a Vue Docker container. But since I have to use Laravel as a back-end, I have to create a container for that too (plus MySQL, of course).
So here is what I've got. Dockerfile:
FROM node:lts-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
EXPOSE 8080
CMD ["npm", "run", "serve"]
docker-compose.yml:
version: '3'
services:
  web:
    build: .
    stdin_open: true
    tty: true
    ports:
      - "8080:8080"
    volumes:
      - "/app/node_modules"
      - ".:/app"
The problem is that I don't understand how to connect Laravel into the Dockerfile; it just doesn't add up in my mind.
Maybe I should use Ubuntu, not just node? Anyway, I'm asking once again for your support.
According to this article you will need to follow the steps below.
Make your project folder look like this: (d: directory, f: file)
d: backend
d: frontend
d: etc
    d: nginx
        d: conf.d
            f: default.conf.nginx
    d: php
        f: .gitignore
d: dockerize
    d: backend
        f: Dockerfile
f: docker-compose.yml
Add docker-compose.yml
version: '3'
services:
  www:
    image: nginx:alpine
    volumes:
      - ./etc/nginx/conf.d/default.conf.nginx:/etc/nginx/conf.d/default.conf
    ports:
      - 81:80
    depends_on:
      - backend
      - frontend
  frontend:
    image: node:current-alpine
    user: ${UID}:${UID}
    working_dir: /home/node/app
    volumes:
      - ./frontend:/home/node/app
    environment:
      NODE_ENV: development
    command: "npm run serve"
  backend:
    build:
      context: dockerize/backend
    # this way the container interacts with the host on behalf of the current user.
    # !!! NOTE: $UID is a _shell_ variable, not an environment variable!
    # To make it available as a shell var, make sure you have this in your ~/.bashrc (./.zshrc etc):
    # export UID="$UID"
    user: ${UID}:${UID}
    volumes:
      - ./backend:/app
      # custom adjustments to php.ini
      # i.e. "xdebug.remote_host" to debug the dockerized app
      - ./etc/php:/usr/local/etc/php/local.conf.d/
    environment:
      # add our custom config files for php to scan
      PHP_INI_SCAN_DIR: "/usr/local/etc/php/conf.d/:/usr/local/etc/php/local.conf.d/"
    command: "php artisan serve --host=0.0.0.0 --port=8080"
  mysql:
    image: mysql:5.7.22
    container_name: mysql
    restart: unless-stopped
    tty: true
    ports:
      - "4306:3306"
    volumes:
      - ./etc/mysql:/var/lib/mysql
    environment:
      MYSQL_DATABASE: tor
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
Add default.conf.nginx
server {
    listen 81;
    server_name frontend;
    error_log /var/log/nginx/error.log debug;
    location / {
        proxy_pass http://frontend:8080;
    }
    location /sockjs-node {
        proxy_pass http://frontend:8080;
        proxy_set_header Host $host;
        # below lines make ws://localhost/sockjs-node/... URLs work, enabling hot-reload
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }
    location /api/ {
        # on the backend side, the request URI will _NOT_ contain the /api prefix,
        # which is what we want for a pure-api project
        proxy_pass http://backend:8080/;
        proxy_set_header Host localhost;
    }
}
Add Dockerfile
FROM php:fpm-alpine
RUN apk add --no-cache $PHPIZE_DEPS oniguruma-dev libzip-dev curl-dev \
&& docker-php-ext-install pdo_mysql mbstring zip curl \
&& pecl install xdebug redis \
&& docker-php-ext-enable xdebug redis
RUN mkdir /app
VOLUME /app
WORKDIR /app
EXPOSE 8080
CMD php artisan serve --host=0.0.0.0 --port=8080
DON'T FORGET TO ADD vue.config.js to your frontend folder
// vue.config.js
module.exports = {
  // options...
  devServer: {
    disableHostCheck: true,
    host: 'localhost',
    headers: {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Headers': 'Origin, X-Requested-With, Content-Type, Accept'
    },
    watchOptions: {
      poll: true
    },
    proxy: 'http://localhost/api',
  }
}
Run sudo docker-compose up
If you want to run migrations, execute: sudo docker-compose exec backend php artisan migrate
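Once the stack is up, a rough way to check the wiring (a hedged sketch based on the config above):
# nginx ("www") is published on host port 81; the Vue dev server is proxied at /
# and the Laravel app under /api/
curl -s http://localhost:81/
curl -s http://localhost:81/api/
# note: the server block above listens on 81 inside the container while the
# compose file publishes 81:80; if requests don't get through, aligning these
# two (e.g. "listen 80;" in default.conf.nginx) is worth checking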
You will need 4 containers, defined in a docker-compose file:
frontend (your Vue application, which you already have)
backend (Laravel application)
web server (eg. Nginx or Apache)
database (MySQL)
It is possible to combine the 'web-server' and 'backend' containers into one, but this is generally considered bad practice.
Your compose file would look similar to this:
version: '3'
services:
  frontend:
    build: ./frontend
    ports:
      - 8080:8080
    volumes:
      - ./frontend:/app
  backend:
    build: ./backend
    volumes:
      - ./backend:/var/www/my_app
    environment:
      - DB_HOST=db
      - DB_PORT=3306
  webserver:
    image: nginx:alpine
    ports:
      - 8000:8000
    volumes:
      - ./backend:/var/www/my_app
  database:
    image: mariadb:latest
    container_name: db
    ports:
      - 3306:3306
    environment:
      MYSQL_DATABASE: dbname
      MYSQL_ROOT_PASSWORD: dbpass
    volumes:
      - ./sql:/var/lib/mysql
where ./backend contains the Laravel application code, ./frontend contains the Vue application, and both contain a Dockerfile. Refer to Docker Hub for specific instructions on each image needed. This exposes 3 ports to your host system: 8080 (Vue app), 8000 (Laravel app), and 3306 (MySQL).
Alternatively, you can omit the web server if you use the artisan cli's serve command in your Laravel container, similar to what you're already doing in the Dockerfile for your Vue application.
The image would have to include something like CMD php artisan serve --host=0.0.0.0 --port=8000
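Putting it together, a hedged sketch of how this layout would be brought up and checked (assumes ./frontend and ./backend each contain their own Dockerfile, as described above):
# build and start all four services, then probe the published ports
docker-compose up -d --build
curl -s http://localhost:8080/   # Vue app
curl -s http://localhost:8000/   # Laravel, via nginx or "php artisan serve"
# run migrations against the database service (reachable as host "db")
docker-compose exec backend php artisan migrate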

Getting a 404 because of the wrong domain when running Laravel in Docker under WSL2

I've been developing for Laravel using Homestead (VirtualBox and Vagrant) on Windows 10. Recently I wanted to switch to Docker and the Windows Subsystem for Linux (WSL2).
Under Homestead I'd been running my app under my-domain.test. In my docker-compose file I use localhost on port 8008. I can access the website under localhost:8008, but I get a 404 on every single page I want to access. Inspecting the links, Laravel seems to use my old domain my-domain.test for every generated link. So instead of creating links like localhost:8008/xyz it generates links like https://my-domain.test/xyz.
Of course I've updated my .env file, cleared the (config) cache, cloned a completely new copy of my repository and set up the project in a completely new directory within the subsystem. I've also uninstalled every piece of Vagrant, VirtualBox and Homestead.
I've searched the entire project for references to the old domain. I haven't found anything.
On another system it works. Somehow my current system seems to be stuck on the old domain...
How can I fix this without resetting my whole computer?
This is my docker-compose:
version: '3.3'
services:
  pdbv1-db:
    image: mariadb
    container_name: pdbv1-db
    restart: unless-stopped
    ports:
      - "3306:3306"
    tty: true
    environment:
      MYSQL_ROOT_PASSWORD: pdb
      MYSQL_DATABASE: pdb
      MYSQL_USER: pdb
      MYSQL_PASSWORD: pdb
    networks:
      - app-network
    volumes:
      - ./docker-db:/var/lib/mysql
  pdbv1-backend:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        - WITH_XDEBUG=true
        - USERID=$UID
    env_file:
      - .env
    user: $UID:$GID
    container_name: pdbv1-backend
    restart: unless-stopped
    tty: true
    environment:
      SERVICE_NAME: pdbv1-backend
      SERVICE_TAGS: dev
      PHP_IDE_CONFIG: serverName=Docker
    working_dir: /var/www
    ports:
      - "8008:8080"
    volumes:
      - ./:/var/www
    networks:
      - app-network
# Docker Networks
networks:
  app-network:
    driver: bridge
There are two ways to look at this:
1. Go with my-domain.test: add that domain to your Windows hosts file and point it to 127.0.0.1. Also check the Dockerfile of your nginx and check your nginx conf file for your domain.
2. The Laravel code: check the URL in your .env file; is it localhost or my-domain.test? Then look through the entire source code for my-domain.test, and of course in the database itself as well. (Edit: I see that you've already done that, but it would be the only explanation.)
Frankly I would go with option 1: you get your my-domain.test back and you can use multiple domains / multiple projects.
I only use localhost for quick stuff and for managing my database and my redis.
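A hedged sketch of both checks (the hosts path is the standard Windows location; the artisan command assumes a reasonably recent Laravel):
# option 1: as Administrator, add the old domain to
# C:\Windows\System32\drivers\etc\hosts so the generated links resolve again:
#   127.0.0.1  my-domain.test
# option 2: hunt down any cached copy of the old URL on the Laravel side
php artisan optimize:clear   # or config:clear, route:clear, view:clear individually
grep -R "my-domain.test" .env bootstrap/cache storage/framework config 2>/dev/null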

Bitnami Magento site always points to port 80 for all links

I am new to this area. I have a docker-compose.yml file which starts Magento & MariaDB containers. Here is the script:
version: '2'
services:
  mariadb:
    image: 'bitnami/mariadb:latest'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    volumes:
      - 'mariadb_data:/bitnami/mariadb'
  magento:
    image: 'bitnami/magento:latest'
    environment:
      - ENVIRONMENT=Test3
    ports:
      - '89:80' # for Test3
    volumes:
      - 'magento_data:/bitnami/magento'
      - 'apache_data:/bitnami/apache'
      - 'php_data:/bitnami/php'
    depends_on:
      - mariadb
volumes:
  mariadb_data:
    driver: local
  magento_data:
    driver: local
  apache_data:
    driver: local
  php_data:
    driver: local
I tried to use http://127.0.0.1:89 for the site, and it did work at the beginning (i.e. I could open the site at http://127.0.0.1:89). However, when I view the page source I find that the style/JS links still point to http://127.0.0.1 (port 80). Also, I couldn't access other pages like http://127.0.0.1:89/admin.
Then I googled; for example, some posts mention that I need to change the base_url value in the "core_config_data" table, which I did (https://magento.stackexchange.com/questions/39752/how-do-i-fix-my-base-urls-so-i-can-access-my-magento-site). I also cleared the var/cache folder on both the Magento & MariaDB containers, but the result is still the same. (I didn't find the var/session folder that the link mentions. Maybe the Bitnami setup is a little different from others.)
So what could I try now? And is there any way to set base_url with the correct port in MariaDB right from the start, in my docker-compose.yml file?
P.S. Everything works fine when using the default port 80.
Thanks a lot!
You can indicate the port where Apache should be listening in the docker-compose.yml file in this way:
version: '2'
services:
  mariadb:
    image: 'bitnami/mariadb:latest'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    volumes:
      - 'mariadb_data:/bitnami/mariadb'
  magento:
    image: 'bitnami/magento:latest'
    ports:
      - '89:89'
      - '443:443'
    environment:
      - APACHE_HTTP_PORT=89
    volumes:
      - 'magento_data:/bitnami/magento'
      - 'php_data:/bitnami/php'
      - 'apache_data:/bitnami/apache'
    depends_on:
      - mariadb
volumes:
  mariadb_data:
    driver: local
  magento_data:
    driver: local
  apache_data:
    driver: local
  php_data:
    driver: local
Please note the use of the APACHE_HTTP_PORT environment variable on the Magento container. Also note that the port forwarding should be 89:89 in this case.
Take into account that this change should be performed when you launch the containers for the first time. That means that if you already have volumes, this method won't work, because your configuration will be restored from those volumes. So make sure you don't have any volumes. You can check this by executing
docker volume ls
and checking that there aren't any volumes named
local DATE_apache_data
local DATE_magento_data
local DATE_mariadb_data
You can also delete the volumes by executing:
docker-compose down -v
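Putting the whole reset together (a hedged sketch; the down -v step deletes the existing data, which is exactly what forces a clean re-install with the new port):
# remove the containers and their named volumes, relaunch, then check the port
docker-compose down -v
docker-compose up -d
curl -I http://127.0.0.1:89/   # response headers should come from the Magento container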
