How to expose the Vite JS host outside Docker - Laravel

I'm new to Vite JS, having just upgraded from Laravel 8 to Laravel 9.
I'm building a Docker setup for a Laravel 9 project that uses Vite JS. The problem is that I can't expose the Vite host outside the Docker containers, although it still works from inside them.
Is there any advice? Thanks.
This is my docker-compose file:
version: "3.9"
services:
nginx:
image: nginx:1.23-alpine
ports:
- 80:80
mem_limit: "512M"
volumes:
- type: bind
source: ./api
target: /usr/share/nginx/html/api
- type: bind
source: ./docker/nginx/dev/default.conf
target: /etc/nginx/conf.d/default.conf
php:
platform: linux/amd64
build:
context: .
dockerfile: ./docker/php/dev/Dockerfile
mem_limit: "512M"
volumes:
- type: bind
source: ./api
target: /usr/share/nginx/html/api
oracle:
platform: linux/amd64
image: container-registry.oracle.com/database/express:21.3.0-xe
ports:
- 1521:1521
# - 5500:5500
volumes:
- type: volume
source: oracle
target: /opt/oracle/oradata
volumes:
oracle:

I figured out the issue: Vite does not expose its host to the network by default.
My solution:
Edit package.json:
"scripts": {
"dev": "vite --host",
"build": "vite build"
}
Then expose port 5173 in the docker-compose.yml file:
php:
  platform: linux/amd64
  build:
    context: .
    dockerfile: ./docker/php/dev/Dockerfile
  mem_limit: "512M"
  ports:
    - 5173:5173
  volumes:
    - type: bind
      source: ./api
      target: /usr/share/nginx/html/api
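If the --host flag alone is not enough (for instance the page loads but hot-module-replacement requests fail in the browser), the same settings can be pinned down in vite.config.js. This is only a minimal sketch: it assumes the standard laravel-vite-plugin setup that Laravel 9 scaffolds, and the hmr host of localhost assumes the browser reaches the container through a port published on the local machine.

// vite.config.js - minimal sketch, assuming the default Laravel 9 + laravel-vite-plugin setup
import { defineConfig } from 'vite';
import laravel from 'laravel-vite-plugin';

export default defineConfig({
    plugins: [
        laravel(['resources/css/app.css', 'resources/js/app.js']),
    ],
    server: {
        host: '0.0.0.0',       // listen on all interfaces inside the container (same effect as --host)
        port: 5173,            // must match the port published in docker-compose.yml
        hmr: {
            host: 'localhost', // hostname the browser uses to reach the dev server (assumption)
        },
    },
});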

Downsides to named volumes have been voiced by David Maze: since you can't access the contents of a named volume from outside of Docker, they're harder to back up and manage, and a poor match for tasks like injecting config files and reviewing logs.
Would you try changing all the volume types to bind? See Mount volume from host in Dockerfile long format, and the sketch below.
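For reference, a sketch of the oracle service rewritten with a bind mount in the long syntax; the host path ./oradata is a made-up example, and it typically needs to exist and be writable by the user the Oracle container runs as:

oracle:
  platform: linux/amd64
  image: container-registry.oracle.com/database/express:21.3.0-xe
  ports:
    - 1521:1521
  volumes:
    - type: bind
      source: ./oradata            # hypothetical host directory; create it before starting
      target: /opt/oracle/oradata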

Related

Error while deploying Laravel Website with Docker Compose

I tried to deploy my Laravel website with Docker Compose (docker-compose.yml). Once I run my compose file, the website throws this error:
could not find driver (SQL: select * from ...)
I went through many articles and couldn't find a solution. Can anyone please help?
Docker Compose version: 3.2
Docker Engine version: 19.*
Here is my docker-compose.yml file:
version: '3.2'
services:
  wserver:
    image: nginx:latest
    container_name: web-nginx
    ports:
      - 4080:80
    volumes:
      - type: bind
        source: /root/website/files
        target: /var/www/html
      - type: bind
        source: /root/website/nginx/conf.d/
        target: /etc/nginx/conf.d/
  phpweb:
    build:
      context: ./php/
    container_name: web-phpfpm
    ports:
      - 4900:9000
    volumes:
      - type: bind
        source: /root/website/files
        target: /var/www/html
  dbserver:
    image: mariadb:latest
    container_name: web-mariadb
    ports:
      - 4306:3306
    volumes:
      - type: bind
        source: /root/website/mariadb/
        target: /var/lib/mysql/
      - type: bind
        source: /root/website/mysql/mariadb.conf.d/
        target: /etc/mysql/mariadb.conf.d/
    environment:
      TZ: "Asia/Kolkata"
      MYSQL_ROOT_PASSWORD: password
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
Dockerfile:
FROM php:7.4-fpm
RUN docker-php-ext-install mysqli
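For what it's worth, "could not find driver" from Laravel usually points at a missing PDO driver: Laravel's database layer uses PDO, and mysqli alone does not provide it. A hedged sketch of the Dockerfile with pdo_mysql added:

FROM php:7.4-fpm
# Laravel's database layer uses PDO, so install pdo_mysql (mysqli alone is not enough)
RUN docker-php-ext-install pdo_mysql mysqli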

Docker-compose for production running laravel with nginx on azure

I have an app that works, but I'm having problems getting it to run on Azure.
Here is my docker-compose file:
version: "3.6"
services:
nginx:
image: nginx:alpine
volumes:
- ./:/var/www/
- ./setup/azure/nginx/conf.d/:/etc/nginx/template
environment:
PORT: ${PORT}
command: /bin/sh -c "envsubst '$${PORT}' < /etc/nginx/template/nginx.conf.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
networks:
- mynet
depends_on:
- app
- worker
app:
image: myimage:latest
build:
context: .
dockerfile: ./setup/azure/Dockerfile
restart: unless-stopped
tty: true
expose:
- 9000
volumes:
- uploads:/var/www/simple/public/uploads
- logos:/var/www/simple/public/logos
networks:
- mynet
worker:
image: my_image:latest
command: bash -c "/usr/local/bin/php artisan queue:work --timeout=0"
depends_on:
- app
networks:
- mynet
volumes:
uploads:
logos:
networks:
mynet:
I'm not sure the volumes on the nginx service are right; I think perhaps I should create a new Dockerfile that copies the files in, but that would increase the size of the project a lot.
When using App Services on Azure, the deployment is assigned a random port, which is why I have the envsubst instruction in the command. I'd appreciate any other suggestions for getting this project to run on Azure.
I'm assuming you're trying to persist your app's storage to a volume. Check out this doc issue. Now, I don't think you need
volumes:
  - ./:/var/www/
  - ./setup/azure/nginx/conf.d/:/etc/nginx/template
but for
volumes:
  - uploads:/var/www/simple/public/uploads
  - logos:/var/www/simple/public/logos
you can create a storage account, mount it to your Linux app plan (it's not available for Windows app plans yet), and map the relative path /var/www/simple/public/uploads to the file path of the storage container.
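As a rough sketch only (the resource names are placeholders, and the exact flags may differ between Azure CLI versions, so verify against the current docs), mounting an Azure Files share into a Linux App Service looks something like this:

az webapp config storage-account add \
    --resource-group my-rg \
    --name my-app-service \
    --custom-id uploads \
    --storage-type AzureFiles \
    --account-name mystorageaccount \
    --share-name uploads \
    --access-key "<storage-account-key>" \
    --mount-path /var/www/simple/public/uploads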

Volume name issue with docker-compose on windows

I'm trying to start a multi-container application for CodeceptJS using docker-compose. On Linux the docker-compose YAML file works fine, but on Windows it fails, complaining that the "volume name is too short". Why does docker-compose complain on Windows?
Here's the yml file content:
version: '3.7'
services:
  hub:
    image: selenium/hub:latest
    [...]
  chrome:
    image: selenium/node-chrome:latest
    volumes:
      - /dev/shm:/dev/shm
    environment:
      [...]
    networks:
      test_network:
        ipv4_address: 10.2.0.3
  test-acceptance:
    image: test/codeceptjs
    [...]
    volumes:
      - $WORKSPACE:/tests
      - node_modules:/node_modules
    networks:
      test_network:
        ipv4_address: 10.2.0.5
volumes:
  node_modules:
networks:
  test_network:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 10.2.0.0/24
Maybe it's just a typo, but the offending values are probably here:
volumes:
  node_modules:
You need to put something after the colon; see the sketch below.
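If that is indeed the problem, one way to put an explicit value after the colon is to name the driver; a minimal sketch:

volumes:
  node_modules:
    driver: local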

Docker deployment works with MacOs but not with Ubuntu 16.04

I'm trying to dockerise my Laravel app: https://github.com/xoco70/kendozone/tree/docker-local
My dev environment is working; now I'm working on a deployable app in a local environment.
On macOS, everything is OK.
I build it with:
docker build . -f app.dockerfile.local -t kendozone:local-1.0.0
and run it with:
docker-compose -f docker-compose-local.yml up --force-recreate
The problem is with npm run dev, which is a webpack build command. It just compiles Sass, combines JS and CSS, and copies the result to the /var/www/public folder.
But when I run my app on Ubuntu, I can access the login page, but it seems to load without any CSS/JS. On macOS I can see them with no problem.
Here is my docker-compose:
version: '2'
services:
  # The Application
  app:
    image: kendozone:local-1.0.0
    working_dir: /var/www
    volumes:
      - codevolume:/var/www
    environment:
      - "DB_DATABASE=homestead"
      - "DB_USERNAME=homestead"
      - "DB_PASSWORD=secret"
      - "DB_PORT=3306"
      - "DB_HOST=database"
    depends_on:
      - database
  # The Web Server
  web:
    build:
      context: ./
      dockerfile: nginx.dockerfile
    working_dir: /var/www
    volumes:
      - codevolume:/var/www
    ports:
      - 8090:80
    depends_on:
      - app
  # The Database
  database:
    image: mysql:5.7
    volumes:
      - dbdata:/var/lib/mysql
    environment:
      - "MYSQL_DATABASE=homestead"
      - "MYSQL_USER=homestead"
      - "MYSQL_PASSWORD=secret"
      - "MYSQL_ROOT_PASSWORD=root"
    ports:
      - "33061:3306"
volumes:
  dbdata:
  codevolume:
Any idea?
One way to fix this is to make Node available in your Docker base image, and then actually run npm install and npm run production to build a production-ready image of your application.
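A minimal multi-stage sketch of that idea, with image names and paths as assumptions to adapt to app.dockerfile.local: compile the assets with Node in one stage and copy the result into the PHP image, so the image no longer depends on a host-side npm run dev.

# Stage 1: build the front-end assets with Node (base image and paths are assumptions)
FROM node:lts AS assets
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run production

# Stage 2: the PHP application image
FROM php:7.2-fpm
WORKDIR /var/www
COPY . /var/www
# Pull the compiled CSS/JS in from the Node stage
COPY --from=assets /app/public /var/www/public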

Docker compose not using external named volume on Mac

Although I mount the data on an external volume, I lose my data after docker-compose down. I am using docker-compose version 1.14.0, build c7bdf9e, and Docker version 17.06.1-ce, build 874a737.
# docker-compose.yml
version: '3'
services:
  db:
    image: postgres
    volumes:
      - data:/var/lib/postgres/data
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
volumes:
  data:
    external: true
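One detail worth checking, as a guess rather than a confirmed answer: the official postgres image keeps its data in /var/lib/postgresql/data (postgresql, not postgres), so the mount above very likely misses the real data directory and the data disappears with the container. A sketch of the corrected service, assuming the external volume was created beforehand with docker volume create data:

db:
  image: postgres
  volumes:
    - data:/var/lib/postgresql/data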
