I have been trying for the last few days to set up Laravel with Docker on my WSL2-enabled machine. After digging through various bloated stacks, I tried to build my own stack for local development. My issue is that I cannot both mount the uncompiled Laravel folder into the container and install dependencies via Composer. Below is my current set of files. I cannot access the default application because autoload.php has not been created by Composer. If I instead copy the files into the container via the Dockerfile and then run composer install, I end up with a static application that does not reflect changes as I make them in VS Code.
For clarification, my goal is simply to be able to edit my Laravel application without needing to rebuild the image every time.
dockerfile
FROM php:7.4.14-apache
RUN apt-get update -y && apt-get install -y zip unzip
# Install Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
# Install Laravel
# VOLUME ./laravel /var/www/html
# WORKDIR /var/www/html
# RUN composer install
docker-compose.yml
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: dockerfile.httpd
    ports:
      - '80:80'
    volumes:
      - './laravel/public:/var/www/html'
  db:
    image: mariadb
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=my_secure_pwd
      - MARIADB_USER=mydbuser
      - MARIADB_DATABASE=laravel
      - MARIADB_PASSWORD=mydbuserpwd
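One common pattern that avoids baking vendor/ into the image is to run Composer as a one-off service against the same bind mount, so dependencies land on the host and survive rebuilds. A minimal sketch (the service name and paths are assumptions based on the layout above):

```yaml
# Hypothetical addition to the services: block above
composer:
  image: composer:2
  volumes:
    - './laravel:/app'
  command: install
```

Running docker-compose run --rm composer once after cloning (and again whenever composer.json changes) writes vendor/ and autoload.php into ./laravel on the host, so the web container picks them up through its mount without an image rebuild.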
I am very new to the containerized approach to deploying applications. I am trying to deploy my Laravel app to Azure using Docker and ACI, and I couldn't find any well-explained articles matching my deployment requirements.
I am actually trying to set up a proper DevOps pipeline, with the sequence being: push my code to GitHub, run GitHub Actions, build the Docker image, push to ACR, and pull in ACI.
I built the Laravel Docker image in my local environment with Nginx and Supervisor in a single image, and it works well. But now I want to use automated Let's Encrypt SSL on my Nginx server. Rebuilding the image every time I request a new SSL certificate for my server with certbot wouldn't be the right idea, right? So what is the best way to do it?
Here is my current Dockerfile without SSL:
# Use the official PHP 8.1 image as the base image
FROM php:8.1-fpm
# Install necessary packages
RUN apt-get update && apt-get install -y git zip unzip supervisor libpng-dev libonig-dev libxml2-dev libzip-dev nginx
# Fully update the system
RUN apt-get upgrade -y
# Install PHP extensions
RUN docker-php-ext-install pdo_mysql gd mbstring exif pcntl bcmath zip
# Set the working directory to /var/www/html
WORKDIR /var/www/html
# Copy the Laravel application files to the container
COPY . .
# Copy the Nginx configuration file
COPY nginx.conf /etc/nginx/sites-available/default
# Install Composer and run it to install the application dependencies
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN composer install --no-dev --no-interaction
# Copy the environment file
RUN cp .env.example .env
# Generate the application key
RUN php artisan key:generate
# Set the ownership and permissions of the application files
RUN chown -R www-data:www-data /var/www/html
# Copy the Supervisor configuration file
COPY supervisor.conf /etc/supervisor/conf.d/mysupervisor.conf
# Expose port 80 for the Nginx web server
EXPOSE 80
# Start Nginx and Supervisor
CMD ["/bin/sh", "-c" , "service nginx restart && /usr/bin/supervisord -c /etc/supervisor/supervisord.conf"]
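Rather than rebuilding the image for each certificate, a common pattern is to keep /etc/letsencrypt on a shared volume and let a certbot sidecar handle issuance and renewal, so the app image stays certificate-free. A rough compose sketch (the service and volume names are my assumptions, not part of your setup):

```yaml
services:
  app:
    build: .
    volumes:
      - certs:/etc/letsencrypt          # Nginx reads certificates from here
      - certbot-www:/var/www/certbot    # webroot for HTTP-01 challenges
  certbot:
    image: certbot/certbot
    volumes:
      - certs:/etc/letsencrypt
      - certbot-www:/var/www/certbot
    # re-check for renewal twice a day; certbot only renews near expiry
    entrypoint: sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h; done'
volumes:
  certs:
  certbot-www:
```

After a renewal, Nginx only needs a reload (nginx -s reload), not an image rebuild.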
I am using a separate nodejs container as in How to use a separate node container in your ddev setup?, but I'd really like to add the gatsby-cli npm package to it, and maybe add sudo as well. I know how to add a custom Dockerfile to the web service and do these things, but how can I do it with a custom service?
You can do the same thing ddev does to add a custom Dockerfile: add a build stanza and a .ddev/<servicename>-build directory with the needed files.
So for a .ddev/docker-compose.node.yaml file:
version: '3.6'
services:
  node:
    build:
      context: "${DDEV_APPROOT}/.ddev/node-build"
      dockerfile: "${DDEV_APPROOT}/.ddev/node-build/Dockerfile"
      args:
        BASE_IMAGE: "node"
    image: "node-${DDEV_SITENAME}-built"
    user: "node"
    restart: "no"
    labels:
      com.ddev.site-name: ${DDEV_SITENAME}
      com.ddev.platform: ddev
      com.ddev.app-type: php
      com.ddev.approot: $DDEV_APPROOT
    volumes:
      - "../:/var/www/html:cached"
    working_dir: /var/www/html
    command: ["tail", "-f", "/dev/null"]
And then mkdir .ddev/node-build and create .ddev/node-build/Dockerfile with
ARG BASE_IMAGE
FROM $BASE_IMAGE
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y -o Dpkg::Options::="--force-confold" --no-install-recommends --no-install-suggests bash sudo
COPY sudoers.d/ddev /etc/sudoers.d
RUN npm install -g gatsby-cli
and .ddev/node-build/sudoers.d/ddev with
ALL ALL=NOPASSWD: ALL
The result in this case is that you get gatsby-cli installed via npm, plus bash and passwordless sudo. This is just an example; there is plenty more that can be done, of course.
This saves the trouble of creating and maintaining a custom Docker image and pushing it up to hub.docker.com.
I am building a Laravel application and installing Laravel Jetstream in Docker containers. I have separate containers for Composer and Artisan. When I try to install Jetstream with the command:
docker-compose run --rm artisan jetstream:install inertia
I get an error:
Starting mysql ... done
sh: exec: line 1: composer: not found
Unable to locate publishable resources.
Publishing complete.
Inertia scaffolding installed successfully.
Please execute the "npm install && npm run dev" command to build your assets.
The webpage still doesn't work, with the error message Class 'Inertia\Inertia' not found. I assume there is a problem with the connection between the composer and artisan containers, but how can I set up this connection?
docker-compose.yml
composer:
  image: composer:latest
  container_name: composer
  volumes:
    - ./src:/var/www/html
  working_dir: /var/www/html
  depends_on:
    - php
  networks:
    - laravel
artisan:
  build:
    context: .
    dockerfile: Dockerfile
  container_name: artisan
  volumes:
    - ./src:/var/www/html
  depends_on:
    - mysql
  working_dir: /var/www/html
  entrypoint: ['/var/www/html/artisan']
  networks:
    - laravel
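Since the error is composer: not found inside the artisan container, one possible fix (assuming you control the Dockerfile that the artisan service builds from) is to copy the Composer binary in from the official image, so artisan subcommands that shell out to composer can find it:

```dockerfile
# Hypothetical addition to the Dockerfile behind the artisan service:
# make the composer binary available on PATH inside this image.
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
```

With that in place, docker-compose run --rm artisan jetstream:install inertia should no longer fail on the composer step.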
Sadly I can't comment yet (< 50 reputation), but I had a similar issue just now. An alternative to running it in different containers is to make a helper container and execute it all inside:
docker-compose.yml (note: ./code is your Laravel root folder)
version: '3'
services:
  helper:
    build: ./composer-artisan-helper
    volumes:
      - ./code:/app
Make a folder composer-artisan-helper and create a Dockerfile inside:
FROM php:7.4-fpm
# Install git
RUN apt-get update && apt-get install -y git
# Get latest Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
# Keep running
ENTRYPOINT ["tail", "-f", "/dev/null"]
# Set work-directory
WORKDIR /app
Now run it and drop into that container:
docker-compose exec helper bash
Make sure it now has your whole Laravel folder by doing a quick ls. If everything is fine, execute
php artisan jetstream:install inertia
You should eventually be greeted with Inertia scaffolding installed successfully.
Hope that helps! Even though it's not split up into multiple containers (which isn't possible AFAIK anyway).
I was trying to install it with command:
docker-compose run --rm artisan jetstream:install inertia
But actually it works when I try to run it in second container:
docker-compose run --rm composer php artisan jetstream:install inertia
In my configuration the artisan container doesn't have access to composer, but the composer container obviously does have access to artisan.
There is a right answer here already, thank you Jsowa!
But you can also install the related packages separately.
You need to install the inertiajs/inertia-laravel package with the following commands.
docker-compose run --rm composer require inertiajs/inertia-laravel
docker-compose run --rm artisan inertia:middleware
I am new to Docker and docker-compose, and I am developing a Laravel project on Docker with Laradock, following a tutorial (I'm not sure whether that is the correct way to describe this situation, though).
I want to install Composer in this environment so that I can use the composer command.
As a matter of fact, I wanted to run seeding to put data into the DB whose tables I created with php artisan migrate, but this error appeared:
include(/var/www/laravel_practice/vendor/composer/../../database/seeds/AdminsTableSeeder.php): failed to open stream: No such file or directory
So I googled the error to find a solution, and I found one.
It says, "Do composer dump-autoload and try seeding again", so I followed it, and then this error appeared:
bash: composer: command not found
That is because I have not installed Composer into the Docker container.
My Docker setup currently consists of these containers:
・workspace
・mysql
・apache
・php-fpm
Since I have not installed Composer, I have to install it into the Docker container to solve the problem, but I have no idea how to do so.
So could anyone tell me how to install Composer into a Docker container?
Thank you.
Here are the laradock/mysql/Dockerfile and laravelProject/docker-compose.yml.
ARG MYSQL_VERSION=5.7
FROM mysql:${MYSQL_VERSION}
LABEL maintainer="Mahmoud Zalt <mahmoud@zalt.me>"
#####################################
# Set Timezone
#####################################
ARG TZ=UTC
ENV TZ ${TZ}
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone && chown -R mysql:root /var/lib/mysql/
COPY my.cnf /etc/mysql/conf.d/my.cnf
CMD ["mysqld"]
EXPOSE 3306
version: '2'
services:
  db:
    image: mysql:5.7
    ports:
      - "6603:3306"
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=true
      - MYSQL_DATABASE=laravelProject
      - LANG=C.UTF-8
    volumes:
      - db:/var/lib/mysql
    command: mysqld --sql-mode=NO_ENGINE_SUBSTITUTION --character-set-server=utf8 --collation-server=utf8_unicode_ci
  web:
    image: arbiedev/php-nginx:7.1.8
    ports:
      - "8080:80"
    volumes:
      - ./www:/var/www
      - ./nginx.conf:/etc/nginx/sites-enabled/default
volumes:
  db:
You can build your own image and use it in your Docker compose file.
FROM php:7.2-alpine3.8
RUN apk update
RUN apk add bash
RUN apk add curl
# INSTALL COMPOSER
# Install the phar to a directory on PATH; a `RUN alias` would not persist
# beyond its own layer, so an `alias composer=...` line has no effect here.
RUN curl -s https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
# INSTALL NGINX
RUN apk add nginx
I used the PHP Alpine image as my base image because it's lightweight, so you might have to install other dependencies yourself. Then, in your docker-compose file:
web:
  build: path/to/your/Dockerfile/directory
  image: your-image-tag
  ports:
    - "8080:80"
  volumes:
    - ./www:/var/www
    - ./nginx.conf:/etc/nginx/sites-enabled/default
You could do something like this:
FROM php:8.0.2-apache
RUN apt-get update && apt-get upgrade -y
RUN apt-get install -y mariadb-client libxml2-dev
RUN apt-get autoremove -y && apt-get autoclean
RUN docker-php-ext-install mysqli pdo pdo_mysql xml
COPY --from=composer /usr/bin/composer /usr/bin/composer
The COPY --from=composer instruction should solve your problem.
FROM php:7.3-fpm-alpine
RUN docker-php-ext-install pdo pdo_mysql
RUN docker-php-ext-install mysqli && docker-php-ext-enable mysqli
RUN php -r "readfile('https://getcomposer.org/installer');" | php -- --install-dir=/usr/bin --filename=composer
RUN apk update
RUN apk upgrade
RUN apk add bash
# No alias needed: --install-dir/--filename already put `composer` on PATH
# (and a `RUN alias` would not persist beyond its own layer anyway).
I created a container for my Laravel project using this Dockerfile:
FROM composer:1.8
RUN apk add --no-cache libpng libpng-dev libjpeg-turbo-dev libwebp-dev zlib-dev libxpm-dev
RUN docker-php-ext-install pdo mbstring gd
RUN docker-php-ext-enable gd
WORKDIR /app
COPY . /app
RUN composer install
CMD php artisan serve --host=0.0.0.0 --port=8181
EXPOSE 8181
Then I try to use it as a service inside a docker-compose.yml file:
version: '3'
services:
  webservice:
    image: private.repo.com/my_user/webservice
    ports:
      - '80:8181'
    depends_on:
      - mariadb
  mariadb:
    image: mariadb
    volumes:
      - './db:/var/lib/mysql'
    environment:
      - MYSQL_ROOT_PASSWORD=some_password
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    depends_on:
      - mariadb
    ports:
      - '443:80'
    environment:
      - PMA_HOST=mariadb
      - PMA_PORT=3306
Now I am getting error 500 when I call my APIs (the error is [QueryException] could not find driver).
I searched about it, and all I found is that this usually happens because there is something wrong with the .env file.
This is the .env file in my Laravel project:
DB_CONNECTION=mysql
DB_HOST=mariadb
DB_PORT=3306
DB_DATABASE=my_db_name
DB_USERNAME=my_db_user
DB_PASSWORD=my_db_pass
The DB_HOST is the same as in my docker-compose.yml file.
I also tried exposing port 3306 in my mariadb service, but it didn't work either.
Where did I go wrong? Please help.
-----------update-----------
I also checked the IP of the mariadb service and put it in my service container (while both were running, without shutting them down), but the problem remains.
It was actually pretty simple: I had to install pdo_mysql too.
Now my Dockerfile looks like this:
FROM composer:1.8
RUN apk add --no-cache libpng libpng-dev libjpeg-turbo-dev libwebp-dev zlib-dev libxpm-dev
RUN docker-php-ext-install pdo mbstring gd pdo_mysql # this line is changed
RUN docker-php-ext-enable gd
WORKDIR /app
COPY . /app
RUN composer install
CMD php artisan serve --host=0.0.0.0 --port=8181
EXPOSE 8181
I found the solution here: https://github.com/docker-library/php/issues/62#issuecomment-70306737