I have an app that uses Vue CLI as a front-end and Laravel as a back-end, and I am now trying to deploy it to a server using Docker.
My Docker skills only stretch to one thing so far: a Vue container. But since Laravel is my back-end, I have to create a container for that too (plus MySQL, of course).
So here's what I've got: Dockerfile
FROM node:lts-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
EXPOSE 8080
CMD ["npm", "run", "serve"]
docker-compose.yml
version: '3'
services:
  web:
    build: .
    stdin_open: true
    tty: true
    ports:
      - "8080:8080"
    volumes:
      - "/app/node_modules"
      - ".:/app"
The problem is that I don't understand how to wire Laravel into this Docker setup. It just doesn't add up in my mind.
Maybe I should base the image on Ubuntu rather than node? Anyway, I'm asking once again for your support.
According to this article you will need to follow the steps below.
Make your project folder look like this: (d: directory, f: file)
d: backend
d: frontend
d: etc
  d: nginx
    d: conf.d
      f: default.conf.nginx
  d: php
    f: .gitignore
d: dockerize
  d: backend
    f: Dockerfile
f: docker-compose.yml
Add docker-compose.yml
version: '3'
services:
  www:
    image: nginx:alpine
    volumes:
      - ./etc/nginx/conf.d/default.conf.nginx:/etc/nginx/conf.d/default.conf
    ports:
      - 81:80
    depends_on:
      - backend
      - frontend
  frontend:
    image: node:current-alpine
    user: ${UID}:${UID}
    working_dir: /home/node/app
    volumes:
      - ./frontend:/home/node/app
    environment:
      NODE_ENV: development
    command: "npm run serve"
  backend:
    build:
      context: dockerize/backend
    # this way the container interacts with the host on behalf of the current user.
    # !!! NOTE: $UID is a _shell_ variable, not an environment variable!
    # To make it available as an environment variable, make sure you have this in your ~/.bashrc (~/.zshrc etc.):
    # export UID="$UID"
    user: ${UID}:${UID}
    volumes:
      - ./backend:/app
      # custom adjustments to php.ini
      # e.g. "xdebug.remote_host" to debug the dockerized app
      - ./etc/php:/usr/local/etc/php/local.conf.d/
    environment:
      # add our custom config files for PHP to scan
      PHP_INI_SCAN_DIR: "/usr/local/etc/php/conf.d/:/usr/local/etc/php/local.conf.d/"
    command: "php artisan serve --host=0.0.0.0 --port=8080"
  mysql:
    image: mysql:5.7.22
    container_name: mysql
    restart: unless-stopped
    tty: true
    ports:
      - "4306:3306"
    volumes:
      - ./etc/mysql:/var/lib/mysql
    environment:
      MYSQL_DATABASE: tor
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
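If you prefer not to touch your shell rc files, docker-compose also reads a .env file placed next to docker-compose.yml for ${UID} substitution (the value below is hypothetical; check yours with id -u):
# .env
UID=1000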
Add default.conf.nginx
server {
    listen 80;
    server_name frontend;
    error_log /var/log/nginx/error.log debug;

    location / {
        proxy_pass http://frontend:8080;
    }

    location /sockjs-node {
        proxy_pass http://frontend:8080;
        proxy_set_header Host $host;
        # the lines below make ws://localhost/sockjs-node/... URLs work, enabling hot-reload
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }

    location /api/ {
        # on the backend side, the request URI will _NOT_ contain the /api prefix,
        # which is what we want for a pure-api project
        proxy_pass http://backend:8080/;
        proxy_set_header Host localhost;
    }
}
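Once the stack is up, you can check the prefix-stripping from the host with a hypothetical /users route (port 81 as mapped above):
curl http://localhost:81/api/users   # the Laravel backend receives GET /users, without /api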
Add Dockerfile
FROM php:fpm-alpine
RUN apk add --no-cache $PHPIZE_DEPS oniguruma-dev libzip-dev curl-dev \
    && docker-php-ext-install pdo_mysql mbstring zip curl \
    && pecl install xdebug redis \
    && docker-php-ext-enable xdebug redis
RUN mkdir /app
VOLUME /app
WORKDIR /app
EXPOSE 8080
CMD php artisan serve --host=0.0.0.0 --port=8080
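Note that this image never runs Composer; since ./backend is bind-mounted, one way to install dependencies is a one-off run of the official composer image from the project root (a sketch; the path follows the layout above):
docker run --rm -v "$PWD/backend:/app" -w /app composer:2 install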
DON'T FORGET TO ADD vue.config.js to your frontend folder
// vue.config.js
module.exports = {
  // options...
  devServer: {
    disableHostCheck: true,
    host: 'localhost',
    headers: {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Headers': 'Origin, X-Requested-With, Content-Type, Accept'
    },
    watchOptions: {
      poll: true
    },
    proxy: 'http://localhost/api',
  }
}
Run sudo docker-compose up
If you want to do migrations run this: sudo docker-compose exec backend php artisan migrate
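Other one-off artisan tasks follow the same pattern, for example:
sudo docker-compose exec backend php artisan db:seed
sudo docker-compose exec backend php artisan route:list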
You will need 4 containers, defined in a docker-compose file:
frontend (your Vue application, which you already have)
backend (Laravel application)
web server (e.g. Nginx or Apache)
database (MySQL)
It is possible to combine the 'web server' and 'backend' containers into one, but this is generally considered bad practice.
Your compose file would look similar to this:
version: '3'
services:
  frontend:
    build: ./frontend
    ports:
      - 8080:8080
    volumes:
      - ./frontend:/app
  backend:
    build: ./backend
    volumes:
      - ./backend:/var/www/my_app
    environment:
      DB_HOST: db
      DB_PORT: 3306
  webserver:
    image: nginx:alpine
    ports:
      - 8000:80
    volumes:
      - ./backend:/var/www/my_app
  database:
    image: mariadb:latest
    container_name: db
    ports:
      - 3306:3306
    environment:
      MYSQL_DATABASE: dbname
      MYSQL_ROOT_PASSWORD: dbpass
    volumes:
      - ./sql:/var/lib/mysql
where ./backend contains the Laravel application code, ./frontend contains the Vue application, and both contain a Dockerfile. Refer to Docker Hub for specific instructions on each image needed. This exposes 3 ports to your host system: 8080 (Vue app), 8000 (Laravel app), and 3306 (MySQL).
Alternatively, you can omit the web server if you use the artisan CLI's serve command in your Laravel container, similar to what you're already doing in the Dockerfile for your Vue application.
The image would then have to include something like CMD php artisan serve --host=0.0.0.0 --port=8000.
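For example, a minimal image along those lines might look like this (a sketch, assuming a standard Laravel app in ./backend; extension needs vary by project):
FROM php:8.1-cli-alpine
RUN docker-php-ext-install pdo_mysql
WORKDIR /var/www/my_app
COPY . .
EXPOSE 8000
CMD ["php", "artisan", "serve", "--host=0.0.0.0", "--port=8000"]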
Related
I have two apps I'm trying to host on one machine (Ubuntu 22). The first is a client-side React app that makes API calls to my back-end, a Dockerized Laravel/nginx/MySQL stack run with docker-compose. I went into sudo vim /etc/nginx/sites-available/MY.DOMAIN.COM and added the following:
server {
    listen 80;
    root /var/www/html;
    index index.php index.html index.htm; #index.nginx-debian.html;
    server_name MY.IP.ADDRESS.HERE;

    location / {
        try_files $uri $uri/ =404;
    }
}

server {
    listen 81;
    listen [::]:81;
    root /var/www/html/public;
    index index.php index.html index.htm index.nginx-debian.html;
    server_name MY_DOMAIN_NAME.COM www.MY_DOMAIN_NAME.com;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php8.1-fpm.sock;
    }

    location ~ /\.ht {
        deny all;
    }
}
Port 80 is where my React app lives. When someone visits MY.DOMAIN.COM I'm able to serve it without any issues. I'm not sure why server_name MY.IP.ADDRESS.HERE; works there while the port-81 block uses the domain names, server_name MY_DOMAIN_NAME.COM www.MY_DOMAIN_NAME.com;. Anyway, port 81 is where my Laravel app should live. It's using a docker-compose.yml with this configuration:
version: "3.7"
services:
app:
build:
args:
user: admin
uid: 1000
context: ./
dockerfile: Dockerfile
image: my-laravel
container_name: my-laravel-app
restart: unless-stopped
working_dir: /var/www/
# command: bash -c "php artisan migrate:fresh --seed"
# depends_on:
# - db
# links:
# - db
volumes:
- ./:/var/www
networks:
- my-laravel
db:
image: mysql:8.0.30
container_name: my-laravel-db
restart: unless-stopped
environment:
MYSQL_DATABASE: ${DB_DATABASE}
MYSQL_ROOT_PASSWORD: ${DB_PASSWORD}
MYSQL_PASSWORD: ${DB_PASSWORD}
MYSQL_USER: ${DB_USERNAME}
SERVICE_TAGS: dev
SERVICE_NAME: mysql
volumes:
- ./docker-compose/mysql:/docker-entrypoint-initdb.d
networks:
- my-laravel
nginx:
image: nginx:alpine
container_name: my-laravel-nginx
restart: unless-stopped
ports:
- '8081:81'
volumes:
- ./:/var/www
- ./docker-compose/nginx:/etc/nginx/conf.d/
networks:
- my-laravel
networks:
my-laravel:
driver: bridge
The React app tries to make API calls to const domain = "http://MY.IP.ADDRESS.HERE:8081"; and fails with net::ERR_CONNECTION_REFUSED. This is where I'm not sure what I did wrong.
I have nginx installed on the host, but I'm also running it inside the Docker setup. Is there any conflict there? Do I even need to configure the host nginx instance to serve the containerized app?
Anyway, the funny thing is I had this all set up and working last week, but then I decided to add an SSL cert with certbot, which messed everything up, and now I'm starting from scratch. The problem is I don't remember how I got it working to begin with, lol.
Any help here would be greatly appreciated, thank you in advance.
I'm currently working on a full-stack application using Spring Boot (Kotlin), SvelteKit (run with Vite), and MongoDB, each in its own Docker container. My backend service is forwarded to port 6868 on my localhost. When I run my frontend locally with "npm run dev" (which triggers vite dev --host 0.0.0.0 --port 8080) and remove the service from my docker-compose.yml (see the frontend-svelte service below), I can call my backend at localhost:6868 (see +page.js below).
However, when I run my frontend inside a Docker container, the request to localhost:6868 fails. That sort of makes sense to me, since localhost:6868 refers to the inside of the Docker container when the request is made from the server (the container) rather than from the browser. When I change localhost:6868 to spring-boot:8080 (the backend container), the initial server-side request does succeed (the console.log in /frontend-svelte/routes/+page.js prints), but the subsequent client-side requests still fail in the browser.
It seems the issue is the discrepancy between requests sent client-side vs. server-side, so how can I resolve this? Thanks everyone for your help!
docker-compose.yml
version: "3.8"
services:
mongodb:
image: mongo:5.0.2
restart: unless-stopped
ports:
- 27017:27017
volumes:
- db:/data/db
spring-boot:
image: bracket_backend
build:
context: ./backend
dockerfile: Dev.Dockerfile
depends_on:
- mongodb
ports:
- 6868:8080
stdin_open: true
tty: true
volumes:
- ./backend/src:/app/src
env_file:
- ./.env
frontend-svelte:
image: bracket_frontend
build:
context: ./frontend-svelte
dockerfile: Dev.Dockerfile
ports:
- 1234:8080
stdin_open: true
tty: true
volumes:
- ./frontend-svelte/src:/app/src
depends_on:
- spring-boot
volumes:
db:
/backend/Dev.Dockerfile
FROM maven:3.8.6-openjdk-18-slim
WORKDIR /app
COPY ./.mvn ./mvn
COPY ./mvnw ./
COPY ./pomDev.xml ./
# Note that src is mounted as a volume to allow code update w/o restarting container
ENTRYPOINT mvn spring-boot:run -f pomDev.xml
/frontend-svelte/Dev.Dockerfile
FROM node:16-slim
WORKDIR /app
COPY package.json .
RUN npm install --legacy-peer-deps
COPY svelte.config.js .
COPY vite.config.js .
COPY jsconfig.json .
COPY playwright.config.js .
# Note that src is mounted so changes will occur.
ENTRYPOINT npm run dev
/frontend-svelte/routes/+page.js (this is where the request to the backend is made; it succeeds when not run from the Docker container)
import { getBaseUrl } from '$lib/utils.js';

/** @type {import('./$types').PageLoad} */
export async function load({ params }) {
  console.log("test");
  const response = await fetch(`http://localhost:6868/api/groups`); // THIS LINE MAKES THE REQUEST TO THE BACKEND
  console.log("response is: ");
  console.log(response);
  if (!response.ok) {
    throw new Error(`Error! status: ${response.status}`);
  }
  return response.json();
}
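One way to reconcile the server-side vs client-side split (a sketch, not taken from the question itself): fetch a relative URL such as /api/groups using the fetch passed to load({ fetch }), and in dev let Vite's proxy forward /api to the backend container. The browser then only ever talks to the Vite port, while the forwarding happens server-side; 'spring-boot:8080' is the compose service name from above:
// vite.config.js -- dev-only proxy sketch
import { sveltekit } from '@sveltejs/kit/vite';

export default {
  plugins: [sveltekit()],
  server: {
    proxy: {
      // forward requests for /api/* to the backend container
      '/api': 'http://spring-boot:8080'
    }
  }
};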
Everyone, I am confused. I am new to the DevOps world and I have no idea how to use docker-compose or Swarm in production, i.e. what the best practices are for each. I followed this DigitalOcean article: How To Install and Set Up Laravel with Docker Compose on Ubuntu 20.04.
Everything works like a charm in local, test, and dev environments. I tried to take this to the next step for a production environment, and I noticed some things should change for production mode:
Removing any volume bindings for application code, so that code stays inside the container and can’t be changed from outside.
Binding to different ports on the host (check the "Use Compose in production" docs page for more info).
I don't know how to achieve point #1.
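From what I gather, the usual pattern for point #1 is to bake the code into the image with a COPY and keep the dev bind mount in docker-compose.override.yml, which docker-compose merges automatically in development; production then runs the base file alone. A sketch, assuming the Dockerfile and compose file below:
# In the Dockerfile, after WORKDIR /var/www/html:
COPY --chown=www-data:www-data . /var/www/html

# docker-compose.override.yml (dev only; merged automatically by `docker-compose up`)
services:
  app:
    volumes:
      - ./:/var/www/html

# production start, base file only:
docker-compose -f docker-compose.yml up -d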
Here are my Dockerfile (to build my custom Laravel image) and the docker-compose file for my services:
FROM php:7.4-fpm
# Arguments defined in docker-compose.yml
ARG user
ARG uid
# Install system dependencies
RUN apt-get update && apt-get install -y \
    curl \
    libpng-dev \
    libonig-dev \
    libxml2-dev \
    zip \
    unzip
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Download php extension installer
ADD https://github.com/mlocati/docker-php-extension-installer/releases/latest/download/install-php-extensions /usr/local/bin/
# Give php extension installer a permission
RUN chmod +x /usr/local/bin/install-php-extensions
# Install php extensions via php extension installer
RUN install-php-extensions zip
# Install PHP extensions
RUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd
# Get latest Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
# Create system user to run Composer and Artisan Commands
RUN useradd -G www-data,root -u $uid -d /home/$user $user
RUN mkdir -p /home/$user/.composer && \
chown -R $user:$user /home/$user
# Set working directory
WORKDIR /var/www/html
USER $user
version: "3.7"
services:
app:
build:
args:
user: sammy
uid: 1000
context: ./
dockerfile: Dockerfile
image: app
container_name: app
restart: always
working_dir: /var/www/html
volumes:
- ./:/var/www/html
networks:
- backend
db:
image: mysql:8.0
container_name: db
restart: always
environment:
MYSQL_DATABASE: ROUTE
MYSQL_ROOT_PASSWORD: 2020
MYSQL_PASSWORD: 2020
MYSQL_USER: sqluser
volumes:
- db:/var/lib/mysql
networks:
- backend
nginx:
image: nginx:1.21.6
container_name: nginx
restart: always
ports:
- 8000:80
networks:
- backend
volumes:
- ./:/var/www/html
- ./docker-compose/nginx:/etc/nginx/conf.d/
phpmyadmin:
image: phpmyadmin
container_name: pma
restart: always
ports:
- 8283:80
environment:
PMA_HOSTS: db
PMA_ARBITRARY: 1
PMA_USER: sqluser
PMA_PASSWORD: 2020
networks:
- backend
networks:
backend:
driver: bridge
volumes:
db:
Note:
In the Nginx service I created two shared volumes. The first synchronizes the contents of the current directory to /var/www/html inside the container, so that local changes to the application files are quickly reflected in the application served by Nginx (which is not good for production). The second makes sure my Nginx configuration file, located at docker-compose/nginx/, is copied into the container's Nginx configuration folder.
I tried to remove the first volume but keep the second one, so as to use my custom configuration, but it did not work at all. Why?
My goal is to have a (more or less) one-liner for building and starting my app: I want to run that one command and have it also run several additional commands related to my app (like migrations, init, and starting a websocket server).
But when I try to use the ENTRYPOINT directive, my php service runs the commands, yet the service itself no longer works properly.
docker logs nginx shows me this error:
[error] 31#31: *2 connect() failed (111: Connection refused) while connecting to upstream,
If I comment out the ENTRYPOINT in back.dockerfile and just run
docker-compose build && docker-compose up -d
my app runs fine, but then I have to run those artisan commands manually.
Is there a way to achieve this?
My docker-compose.yml
networks:
  laravel:
services:
  nginx:
    image: nginx:stable-alpine
    container_name: nginx
    volumes:
      - ./app:/var/www/html
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    ports:
      - "8000:80"
    depends_on:
      - php
      - mysql
    networks:
      - laravel
    tty: true
  php:
    build:
      context: .
      dockerfile: ./docker/back.dockerfile
    container_name: php
    ports:
      - "9000:9000"
    depends_on:
      - node
    networks:
      - laravel
  node:
    build:
      context: .
      dockerfile: ./docker/front.dockerfile
    container_name: node
    ports:
      - "3000:3000"
    networks:
      - laravel
  mysql:
    image: mysql:8.0
    container_name: mysql
    restart: unless-stopped
    ports:
      - "3306:3306"
    volumes:
      - ./mysql:/var/lib/mysql
    environment:
      MYSQL_DATABASE: laravel
      MYSQL_USER: laravel
      MYSQL_PASSWORD: laravel
      MYSQL_ROOT_PASSWORD: root
      SERVICE_NAME: mysql
    networks:
      - laravel
back.dockerfile
FROM php:7.4-fpm-alpine
RUN docker-php-ext-install pdo pdo_mysql
RUN php -r "readfile('http://getcomposer.org/installer');" | php -- --install-dir=/usr/bin/ --filename=composer
RUN apk update
RUN apk upgrade
RUN apk add bash
RUN alias composer='php /usr/bin/composer'
ENV COMPOSER_ALLOW_SUPERUSER 1
COPY ./app/composer.json ./app/composer.lock ./
COPY ./app .
RUN composer install --no-scripts --prefer-dist --optimize-autoloader
RUN chown -R www-data:www-data /var/www/html
RUN chmod -R 755 /var/www/html/storage
COPY ./entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
EXPOSE 6001
entrypoint.sh
#!/bin/sh
set -e
php artisan migrate --path 'database/migrations'
php artisan app:init
php artisan websockets:serve
exec "$#"
I started testing the new Laravel Sail docker-compose environment with an nginx reverse proxy, so I can access my website from a real TLD while developing on my local machine.
My setup is:
OS - Ubuntu Desktop 20 with nginx and docker installed.
Nginx site-enabled on the host machine:
server {
    server_name mysite.xyz;
    listen 443 ssl;
    listen [::]:443 ssl;
    ssl_certificate /etc/ssl/certs/mysite.xyz.crt;
    ssl_certificate_key /etc/ssl/private/mysite.xyz.key;
    ssl_protocols TLSv1.2 TLSv1.1 TLSv1;

    location / {
        proxy_pass http://localhost:8002;
    }
}
Also I have 127.0.0.1 mysite.xyz in my host machine /etc/hosts file
And my docker-compose:
# For more information: https://laravel.com/docs/sail
version: '3'
services:
  laravel.test:
    build:
      context: ./vendor/laravel/sail/runtimes/8.0
      dockerfile: Dockerfile
      args:
        WWWGROUP: '${WWWGROUP}'
    image: sail-8.0/app
    ports:
      - '8002:80'
    environment:
      WWWUSER: '${WWWUSER}'
      LARAVEL_SAIL: 1
    volumes:
      - '.:/var/www/html'
    networks:
      - sail
    depends_on:
      - mysql
      - redis
  mysql:
    image: 'mysql:8.0'
    ports:
      - '${FORWARD_DB_PORT:-3306}:3306'
    environment:
      MYSQL_ROOT_PASSWORD: '${DB_PASSWORD}'
      MYSQL_DATABASE: '${DB_DATABASE}'
      MYSQL_USER: '${DB_USERNAME}'
      MYSQL_PASSWORD: '${DB_PASSWORD}'
      MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
    volumes:
      - 'sailmysql:/var/lib/mysql'
    networks:
      - sail
  redis:
    image: 'redis:alpine'
    ports:
      - '${FORWARD_REDIS_PORT:-6379}:6379'
    volumes:
      - 'sailredis:/data'
    networks:
      - sail
networks:
  sail:
    driver: bridge
volumes:
  sailmysql:
    driver: local
  sailredis:
    driver: local
Site is loading fine when I access mysite.xyz from my host machine.
The issue I'm having: on the register page (https://mysite.xyz/register), which I can view from my host machine, the form action is: http://localhost:8002/register
The piece of code that generates the above url is <form method="POST" action="{{ route('register') }}">
This is a problem because I don't access the site via localhost:XXXX; instead I use mysite.xyz, which goes through the nginx reverse proxy and eventually ends up pointing to http://localhost:8002/register.
What I checked:
In my Laravel .env file, the APP_URL is mysite.xyz
If I shell into the Sail container, start artisan tinker, and run route('register'), it outputs https://mysite.xyz/ so clearly the Laravel app inside the container seems to be behaving correctly.
The funny thing is that when it renders the HTML response, it renders the register route as http://localhost:8002/register.
I tried searching the entire project for localhost:8002 and I can find it in /storage/framework/sessions/asdofhsdfasf8as7dgf8as7ogdf8asgd7
That bit of text says: {s:3:"url";s:27:"http://localhost:8002/login";}
So it seems that the session thinks it's localhost:8002 but tinker thinks it's mysite.xyz
I'm also a docker noob so who knows what I'm missing. I'm lost :)
The "problem" lies within your nginx configuration, not your code:
proxy_pass http://localhost:8002;
Laravel uses APP_URL in the CLI (console) or whenever you use config('app.url') or env('APP_URL') in your code. For all other operations (such as URL construction via the route helper) Laravel fetches the URL from the request:
URL Generation, Laravel 8.x Docs
The url helper may be used to generate arbitrary URLs for your application. The generated URL will automatically use the scheme (HTTP or HTTPS) and host from the current request being handled by the application.
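To illustrate with this question's setup (hypothetical output):
// route() builds from the request's Host header, not from APP_URL:
echo route('register'); // without proxy_set_header Host: http://localhost:8002/register
echo route('register'); // with proxy_set_header Host: http://mysite.xyz/register (scheme stays http unless forwarded headers are trusted)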
What you need to do is forward the original host to the application in your nginx configuration, by adding:
proxy_pass http://localhost:8002;
proxy_set_header Host $host;
For additional information on the topic, you may want to have a look at this article: Setting up an Nginx Reverse Proxy
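Applied to the server block from the question, the location ends up as follows; the X-Forwarded-* lines are optional extras, useful if you later want Laravel to generate https URLs via trusted proxies:
location / {
    proxy_pass http://localhost:8002;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}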