I am trying to deploy my Laravel website with Docker Compose (docker-compose.yml). As soon as I run the compose file, the website throws this error:
could not find driver (SQL: select * from ...)
I have gone through many articles and can't find a solution to the issue. Can anyone please help?
Docker Compose Version: 3.2
Docker Engine Version: 19.*
Here is my docker-compose.yml file:
version: '3.2'
services:
  wserver:
    image: nginx:latest
    container_name: web-nginx
    ports:
      - 4080:80
    volumes:
      - type: bind
        source: /root/website/files
        target: /var/www/html
      - type: bind
        source: /root/website/nginx/conf.d/
        target: /etc/nginx/conf.d/
  phpweb:
    build:
      context: ./php/
    container_name: web-phpfpm
    ports:
      - 4900:9000
    volumes:
      - type: bind
        source: /root/website/files
        target: /var/www/html
  dbserver:
    image: mariadb:latest
    container_name: web-mariadb
    ports:
      - 4306:3306
    volumes:
      - type: bind
        source: /root/website/mariadb/
        target: /var/lib/mysql/
      - type: bind
        source: /root/website/mysql/mariadb.conf.d/
        target: /etc/mysql/mariadb.conf.d/
    environment:
      TZ: "Asia/Kolkata"
      MYSQL_ROOT_PASSWORD: password
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
Dockerfile:
FROM php:7.4-fpm
RUN docker-php-ext-install mysqli
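Laravel's database layer talks to MySQL/MariaDB through PDO rather than mysqli, so a "could not find driver" error usually points to a missing pdo_mysql extension. A minimal sketch of the same Dockerfile with that driver installed:
FROM php:7.4-fpm
# Laravel uses PDO, so install the PDO MySQL driver (mysqli alone is not enough)
RUN docker-php-ext-install pdo_mysql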
Related
I'm new to Vite, which I picked up when upgrading from Laravel 8 to 9.
I'm building Docker images for a Laravel 9 project that uses Vite. The problem: I can't expose the Vite dev server outside the Docker containers, even though it works fine inside them.
Any advice? Thanks.
This is my docker-compose file:
version: "3.9"
services:
nginx:
image: nginx:1.23-alpine
ports:
- 80:80
mem_limit: "512M"
volumes:
- type: bind
source: ./api
target: /usr/share/nginx/html/api
- type: bind
source: ./docker/nginx/dev/default.conf
target: /etc/nginx/conf.d/default.conf
php:
platform: linux/amd64
build:
context: .
dockerfile: ./docker/php/dev/Dockerfile
mem_limit: "512M"
volumes:
- type: bind
source: ./api
target: /usr/share/nginx/html/api
oracle:
platform: linux/amd64
image: container-registry.oracle.com/database/express:21.3.0-xe
ports:
- 1521:1521
# - 5500:5500
volumes:
- type: volume
source: oracle
target: /opt/oracle/oradata
volumes:
oracle:
I figured out the issue: by default, Vite does not expose its dev server to the network.
My solution:
Edit package.json:
"scripts": {
"dev": "vite --host",
"build": "vite build"
}
Then expose port 5173 in the docker-compose.yml file:
php:
  platform: linux/amd64
  build:
    context: .
    dockerfile: ./docker/php/dev/Dockerfile
  mem_limit: "512M"
  ports:
    - 5173:5173
  volumes:
    - type: bind
      source: ./api
      target: /usr/share/nginx/html/api
@David Maze has pointed out downsides to named volumes:
Since you can't access the contents of a named volume from outside of Docker, they're harder to back up and manage, and a poor match for tasks like injecting config files and reviewing logs.
You could try changing all of the volume types to bind mounts, as in the sketch below.
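For example, the oracle named volume in the compose file above could be swapped for a bind mount (a sketch, assuming a local ./oradata directory that the database container can write to; the top-level volumes: oracle: entry would then be dropped):
  oracle:
    volumes:
      - type: bind
        source: ./oradata            # hypothetical host directory
        target: /opt/oracle/oradata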
Mount volume from host in Dockerfile long format
Is it possible to use a hostname for a custom service? Currently, I have the following:
Redis service: docker-compose.redis.yml
version: '3.6'
services:
  redis:
    container_name: ddev-${DDEV_SITENAME}-redis
    image: redis:latest
    restart: always
    ports:
      - 6379
    labels:
      com.ddev.site-name: ${DDEV_SITENAME}
      com.ddev.approot: $DDEV_APPROOT
      com.ddev.app-url: $DDEV_URL
    environment:
      - VIRTUAL_HOST=$DDEV_HOSTNAME
      - HTTP_EXPOSE=6379
    volumes: []
  web:
    links:
      - redis:$DDEV_HOSTNAME
Redis Commander Service: docker-compose.commander.yml
version: '3.6'
services:
  redis:
    container_name: ddev-${DDEV_SITENAME}-commander
    image: rediscommander/redis-commander:latest
    restart: always
    ports:
      - 8081
    labels:
      com.ddev.site-name: ${DDEV_SITENAME}
      com.ddev.approot: $DDEV_APPROOT
      com.ddev.app-url: $DDEV_URL
    environment:
      - VIRTUAL_HOST=$DDEV_HOSTNAME
      - HTTP_EXPOSE=8081
      - REDIS_HOSTS=local:redis:6379
    volumes: []
  web:
    links:
      - commander:$DDEV_HOSTNAME
At the moment I can access Redis Commander from the outside at <project-name>.ddev.local:8081/.
What I want to achieve, if possible, is to access Redis Commander from a custom hostname or subdomain such as commander.<project-name>.ddev.local or commander.local.
After a bit of research and a lot of help from Randy Fay, we were able to accomplish it. We had to run the following:
$ sudo ddev hostname commander.local 127.0.0.1
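That command adds an entry to the machine's hosts file, roughly:
# /etc/hosts (entry managed by ddev)
127.0.0.1 commander.local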
The Redis Commander service file (docker-compose.commander.yml) had to be updated to:
version: '3.6'
services:
  commander:
    container_name: ddev-${DDEV_SITENAME}-commander
    image: rediscommander/redis-commander:latest
    restart: always
    ports:
      - 8081
    labels:
      com.ddev.site-name: ${DDEV_SITENAME}
      com.ddev.approot: $DDEV_APPROOT
      com.ddev.app-url: $DDEV_URL
    environment:
      - VIRTUAL_HOST=commander.local
      - HTTP_EXPOSE=80
      - REDIS_HOSTS=local:redis:6379
    volumes: []
  web:
    links:
      - commander:$DDEV_HOSTNAME
      - commander:commander.local
for it to work.
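To check that the hostname resolves and the router picks up the new VIRTUAL_HOST, something like the following should work (assuming the ddev router listens on port 80):
ping -c 1 commander.local          # should resolve to 127.0.0.1
curl -I http://commander.local/    # should be answered by Redis Commander via the ddev router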
I am experimenting with Visual Studio's docker support and want to add a volume mount for C:\inetpub\wwwroot\App_Data.
My Dockerfile looks like this:
FROM microsoft/aspnet:4.7.1-windowsservercore-1709
ARG source
WORKDIR /inetpub/wwwroot
COPY ${source:-obj/Docker/publish} .
My docker-compose.yml file looks like this:
version: '3.4'
services:
  my.app:
    image: ${DOCKER_REGISTRY}myapp
    build:
      context: .\My.App
      dockerfile: Dockerfile
Now I've tried just about every variation of specifying volumes in my docker-compose.override.yml file, including:
version: '3.4'
services:
  my.app:
    volumes:
      - "C:\\inetpub\\wwwroot\\App_Data"
    ports:
      - "80"
networks:
  default:
    external:
      name: nat

services:
  my.app:
    volumes:
      - "C:\\temp\\dockerappdata1":"C:\\inetpub\\wwwroot\\App_Data"
    ports:
      - "80"
networks:
  default:
    external:
      name: nat

services:
  my.app:
    volumes:
      - type: volume
        source: "app_data"
        target: "C:\\inetpub\\wwwroot\\App_Data"
    ports:
      - "80"
networks:
  default:
    external:
      name: nat
volumes:
  app_data:
But in all cases I cannot run the project: it reports either some kind of configuration issue with Compose, or an error when starting the container with the unhelpful message:
encountered an error during Start: failure in a Windows system call: The compute system exited unexpectedly.
What is the right syntax?
The problem here, I think, is that you are trying to mount over a directory that already exists in the image.
If I understand your question correctly, you'd like to mount a host folder onto C:\inetpub\wwwroot\App_Data inside the container, correct?
If that's the case, here's what you should add to the YAML file:
services:
  my.app:
    volumes:
      - C:\temp\dockerappdata1:C:\inetpub\wwwroot\App_Data
      # Syntax is HOST_PATH:CONTAINER_PATH:[ro/rw] (the access mode is optional)
    ports:
      - "80"
More info on the syntax: https://docs.docker.com/compose/compose-file/#short-syntax-3
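Put together as a complete docker-compose.override.yml, keeping the nat network from the question, that might look like the following sketch (the host path C:\temp\dockerappdata1 is just an example):
version: '3.4'
services:
  my.app:
    volumes:
      - C:\temp\dockerappdata1:C:\inetpub\wwwroot\App_Data
    ports:
      - "80"
networks:
  default:
    external:
      name: nat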
I am running a Postgres database generated by the docker-compose file below on Windows. Before running docker-compose up --build, I created a Docker volume with docker volume create --name postgresdata --driver local. The latter is done to avoid mounting a Windows folder into Postgres.
However, when I run docker-compose down followed by docker-compose up --build, the database is empty, which I would not have expected. Any ideas or suggestions?
This is the docker-compose.yml file I am using:
version: '3.0'
services:
  db:
    image: postgres:latest
    restart: always
    ports:
      - 5432:5432
    env_file:
      - env_file
    volumes:
      - postgresdata
    networks:
      - db1
  market_data:
    build: .
    environment:
      PYTHONUNBUFFERED: 'true'
    stdin_open: true
    tty: true
    links:
      - db:db
    container_name: market_data_container
    volumes:
      - '.:/market_data'
    depends_on:
      - db
    networks:
      - db1
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
    networks:
      - db1
    depends_on:
      - db
volumes:
  market_data:
  postgresdata:
    external: true
networks:
  db1:
    driver: bridge
Postgres already persists its data in an anonymous volume, but that volume is not reused once docker-compose down has removed the container. You are declaring a named volume in your compose file, but you don't mount it correctly.
version: '3.0'
services:
  db:
    image: postgres:latest
    restart: always
    ports:
      - 5432:5432
    env_file:
      - env_file
    volumes:
      - postgresdata:/var/lib/postgresql/data
    networks:
      - db1
Add the default Postgres data path to your volume mount, postgresdata:/var/lib/postgresql/data. That should fix it.
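A quick way to sanity-check the behaviour (a hypothetical session, assuming the external volume was created up front as in the question):
docker volume create postgresdata    # named volume lives outside the compose project
docker-compose up --build -d
docker-compose down                  # removes containers but leaves the external volume alone
docker-compose up --build -d         # db reattaches postgresdata, so the data is still there
docker volume inspect postgresdata   # shows the volume's driver and mountpoint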
Even though I mount the data on an external volume, I lose my data after docker-compose down. I am using docker-compose version 1.14.0, build c7bdf9e, and Docker version 17.06.1-ce, build 874a737.
# docker-compose.yml
version: '3'
services:
  db:
    image: postgres
    volumes:
      - data:/var/lib/postgres/data
  web:
    build: .
    command: bundle exec rails s -p 3000 -b '0.0.0.0'
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
volumes:
  data:
    external: true
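Note that the mount target above, /var/lib/postgres/data, is not the default data directory of the official postgres image, which is /var/lib/postgresql/data as used in the previous answer. If that mismatch is the cause, the db service would look like this sketch:
db:
  image: postgres
  volumes:
    - data:/var/lib/postgresql/data   # default PGDATA location in the official postgres image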