Error installing Laravel through Composer in Docker

I'm having a problem installing Laravel through a Dockerfile. I'm using docker-compose, which builds from a Dockerfile where I basically have this:
FROM php:7.3-apache-stretch
*some apt-get and install composer*
WORKDIR /var/www
RUN composer create-project --prefer-dist laravel/laravel app
CMD apachectl -D FOREGROUND
But when I go into the container to look at the files that Composer should have created, the directory is empty, even though I saw the command run during the image build.
The container itself works perfectly and I can access it; it's only the files that don't appear.
If I run the Composer command manually after the container is created, the files appear.

In your Dockerfile, you used WORKDIR /var/www and then RUN composer create-project ..., which makes Composer create files under /var/www in the container file system.
In the docker-compose.yml file you used to start your container:
version: '3.7'
services:
  app:
    container_name: "app"
    build:
      context: ./docker
      dockerfile: Dockerfile-app
    ports:
      - "80"
      - "443"
    restart: always
    depends_on:
      - db
    volumes:
      - ".:/var/www"
You are declaring a volume that will be mounted on that same location /var/www in your container.
What happens is that the volume content will take the place of whatever you had on /var/www in the container file system.
I suggest you read carefully the documentation regarding docker volumes, and more specifically the part titled Populate a volume using a container.
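A quick way to see this shadowing in action (a sketch; myimage stands for whatever tag your built image has, and it assumes your current host directory is empty):
# without a mount, the files baked in at build time are visible
docker run --rm myimage ls /var/www/app
# with the bind mount that compose declares, the host directory hides them
docker run --rm -v "$PWD:/var/www" myimage ls /var/www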
Now to move on, ask yourself why you needed that volume in the first place. Is it necessary to change files at run time?
If not, just add your files at build time:
FROM php:7.3-apache-stretch
*some apt-get and install composer*
WORKDIR /var/www
RUN composer create-project --prefer-dist laravel/laravel app
COPY . /var/www
CMD apachectl -D FOREGROUND
and remove the volume for /var/www.
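With the files added at build time, the compose file simply drops the volumes entry:
version: '3.7'
services:
  app:
    container_name: "app"
    build:
      context: ./docker
      dockerfile: Dockerfile-app
    ports:
      - "80"
      - "443"
    restart: always
    depends_on:
      - db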
EDIT
Developing with the help of a Docker container
During development, you change PHP files on your Docker host (assumed to be your development computer) and need to frequently test the result by exercising your app as served by the web server in the Docker container.
It would be cumbersome to have to rebuild a Docker image every time you need to test your app. The solution is to mount a volume so that the container can serve the files from your development computer:
FROM php:7.3-apache-stretch
*some apt-get and install composer*
WORKDIR /var/www
CMD apachectl -D FOREGROUND
and start it with:
version: '3.7'
services:
  app:
    container_name: "app"
    build:
      context: ./docker
      dockerfile: Dockerfile-app
    ports:
      - "80"
      - "443"
    restart: always
    depends_on:
      - db
    volumes:
      - ".:/var/www"
...
When you need to run some commands within that container, just use docker-compose exec:
docker-compose exec app composer create-project --prefer-dist laravel/laravel app
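Other day-to-day commands can be run the same way; for instance (a sketch, assuming the project was created under /var/www/app as above):
docker-compose exec app sh -c "cd app && php artisan migrate"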
Producing project artifacts
Since what you will be deploying is not a zip/tar archive containing your source code and configurations but a docker image, you need to build the docker image you will use at deployment time.
Dockerfile for production
For production use, you want a Docker image that holds all required files and does not need any Docker volume, except for volumes holding data produced by users (uploaded files, database files, etc.):
FROM php:7.3-apache-stretch
*some apt-get and install composer*
WORKDIR /var/www
COPY . /var/www
CMD apachectl -D FOREGROUND
Notice that there is no RUN composer create-project --prefer-dist laravel/laravel app in this Dockerfile. This is because that command initialises your project, and that is a development-time task, not a deployment-time task.
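If you also want the PHP dependencies baked into the production image, a common pattern (my suggestion, not something the original answer shows) is to run composer install after copying the sources:
FROM php:7.3-apache-stretch
*some apt-get and install composer*
WORKDIR /var/www
COPY . /var/www
RUN composer install --no-dev --optimize-autoloader
CMD apachectl -D FOREGROUND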
You will also need a place to host your Docker images (a Docker registry). You can deploy your own registry as a Docker container using the official registry image, or use one provided by a company:
Gitlab.com - Gitlab Registry (free)
Docker.com - hub.docker.com (1 private image free)
Google.com - Google Container Registry
...
So you need to build a Docker image and then push that image to your registry. Best practice is to automate those tasks with the help of continuous integration tools such as Jenkins, GitLab CI, Travis CI, Circle CI, Google Cloud Build ...
Your CI job will run the following commands:
git clone <url of your git repo> my_app
cd my_app
git checkout <some git tag>
docker build -t <registry>/<my_app>:<version> .
docker login <registry> --username=<registry user> --password=<registry password>
docker push <registry>/<my_app>:<version>
Deploying your Docker image
Start your container with:
version: '3.7'
services:
  app:
    container_name: "app"
    image: <registry>/<my_app>:<version>
    ports:
      - "80"
      - "443"
    restart: always
    depends_on:
      - db
...
Notice here that the docker-compose file does not build any image. For production it is a better practice to refer to an already built docker image (which has been deployed earlier on a staging environment for validation).
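At deployment time, the host then only has to pull the prebuilt image and start it, for example:
docker-compose pull app
docker-compose up -d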

Related

Phoenix server doesn't render css and images with Docker

I'm new to Elixir and Phoenix, and since I have to work in a CI/CD environment, I'm trying to figure out how to use Phoenix with Docker.
I've tried various tutorials and videos out there; many of them don't work, but those that do all give the same result:
the Phoenix server doesn't seem to find some resources (the assets folder?).
But inside my Dockerfile I'm copying the entire app folder, and I can confirm that /assets is inside the container by attaching to it.
Dockerfile:
FROM elixir:alpine
RUN apk add --no-cache build-base git
WORKDIR /app
RUN mix local.hex --force && \
    mix local.rebar --force
COPY . .
RUN mix do deps.get, deps.compile
CMD ["mix", "phx.server"]
docker-compose.yml:
version: '3.6'
services:
  db:
    environment:
      PGDATA: /var/lib/postgresql/data/pgdata
      POSTGRES_PASSWORD: postgres
      POSTGRES_USER: postgres
      POSTGRES_HOST_AUTH_METHOD: trust
    image: 'postgres:11-alpine'
    restart: always
    volumes:
      - 'pgdata:/var/lib/postgresql/data'
  web:
    build: .
    depends_on:
      - db
    environment:
      MIX_ENV: dev
    env_file:
      - .env
    ports:
      - '4000:4000'
    volumes:
      - .:/app
volumes:
  pgdata:
Steps I'm taking to create the containers and run the server:
docker-compose build
docker-compose run web mix ecto.create
docker-compose up
The database is created successfully in the db container.
What can be happening here?
Sorry if this is straightforward; I haven't used Docker in a while and I still don't completely understand the Phoenix boilerplate.
If you know of good resources about Docker and CI/CD pipelines with Phoenix, I'd appreciate those too, so I can study them.
You also need to build the assets: npm install --prefix assets. This needs to be done after mix deps.get. The mix deps.compile step isn't really needed, though: you can start the server right after mix deps.get, and it will compile the deps and your app automatically.
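A minimal revision of the Dockerfile from the question along those lines (a sketch: it adds nodejs and npm to the Alpine image, which the original did not install, and drops deps.compile):
FROM elixir:alpine
RUN apk add --no-cache build-base git nodejs npm
WORKDIR /app
RUN mix local.hex --force && \
    mix local.rebar --force
COPY . .
RUN mix deps.get
RUN npm install --prefix assets
CMD ["mix", "phx.server"]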

docker-compose build context dockerfile envar image

I would like to use docker-compose to build and run Dockerfiles that have environment variables in their FROM keyword. The problem I'm hitting is that I seem unable to pass environment variables from my environment through docker-compose into the Dockerfile.
docker-compose.yml
version: "3.2"
services:
api:
build: 'api/'
restart: on-failure
depends_on:
- mysql
networks:
- frontend
- backend
volumes:
- ./api/php/:/var/www/html/
Dockerfile in 'api/'
FROM ${DOCKER_IMAGE_API}
RUN apk update
RUN apk upgrade
RUN docker-php-ext-install mysqli
Why?
I want to do this so that I can run docker-compose from a bash script that detects the host architecture and changes the base image of the underlying dockerfiles in the host application.
FROM instructions support variables that are declared by any ARG instructions that occur before the first FROM. So what you can do is this:
ARG IMAGE
FROM $IMAGE
when you run the build command, you then pass the --build-arg as follows:
docker build -t test --build-arg IMAGE=alpine .
you can also choose to have a default value for the IMAGE variable, to be used if the --build-arg flag isn't used.
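For example, with a default value the image still builds when no --build-arg is passed:
ARG IMAGE=alpine
FROM $IMAGE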
Alternatively, if you use docker compose build rather than docker build (and I think this is your case), you can pass the variable via docker compose build --build-arg:
version: "3.9"
services:
api:
build: .
and then
docker compose build --build-arg IMAGE=alpine
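Compose can also supply the value itself through the build args key, substituting it from the shell environment, which fits the bash-script use case from the question. A sketch using the question's DOCKER_IMAGE_API variable (the Dockerfile still needs the ARG IMAGE line before FROM):
version: "3.9"
services:
  api:
    build:
      context: api/
      args:
        IMAGE: ${DOCKER_IMAGE_API}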

Docker does not create a new container when using docker-compose build

I've set up two Windows containers, one for ASP.NET and one for MSSQL Server. On the first docker-compose build everything works as expected. Then, after making some changes to the custom dockerfile, I run docker-compose build again, but it reuses the old container and doesn't pick up any changes.
I assumed that a build created a new container. Am I misunderstanding how Docker works?
This is the docker-compose.yml
version: '3'
services:
  db:
    image: microsoft/mssql-server-windows-developer
    environment:
      sa_password: "Password1234!"
      ACCEPT_EULA: "Y"
    ports:
      - "8003:1433"
    build:
      context: .
      dockerfile: mssql.dockerfile
  web:
    build:
      context: .
      dockerfile: web.dockerfile
    image: mcr.microsoft.com/dotnet/framework/aspnet:4.8
    #volumes:
    #  - .:C:/inetpub/wwwroot
    ports:
      - "8080:80"
      - "8081:431"
This is the mssql.dockerfile
# escape=`
FROM microsoft/mssql-server-windows-developer
#set shell
SHELL ["powershell.exe", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
#make temp folder
RUN mkdir C:\temp
#copy script to temp folder
COPY DownloadDatabase.ps1 C:\temp
COPY RestoreDatabase.ps1 C:\temp
#run script to retrieve production database
WORKDIR C:\temp
RUN .\DownloadDatabase.ps1 -sourcefile <url> -destinationfile <target>
CMD .\RestoreDatabase.ps1
It is very easy to tell if the image has been re-used because the mkdir C:\temp errors out saying the directory already exists.
EDIT: I've already tried all the relevant options on docker-compose build: --no-cache, --force-rm.
docker-compose build
Only builds images but does not start containers.
That's why your changes in the dockerfile are not applied: you have rebuilt the image but not the container. That is why the previously launched container is still based on the older version of the image.
docker-compose up
From Docker documentation :
If there are existing containers for a service, and the service’s configuration or image was changed after the container’s creation, docker-compose up picks up the changes by stopping and recreating the containers (preserving mounted volumes). To prevent Compose from picking up changes, use the --no-recreate flag.
To make sure that both your image and your container are rebuilt, you have to add these flags:
docker-compose up --force-recreate --build
That way your containers are based on the correct image version.
Explanation on flags from Docker documentation :
--build Build images before starting containers.
--force-recreate Recreate containers even if their configuration and image haven't changed.
If you want to do this for a specific service just add the service name at the end of command line :
docker-compose up --force-recreate --build serviceName
Another useful flag if you want clear output is the -d flag:
-d, --detach Detached mode: Run containers in the background,
print new container names. Incompatible with --abort-on-container-exit.
It turns out I simply had to run docker-compose pull before docker-compose build to refresh the images! Now it builds a fresh image every time!
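Putting the pieces from this thread together, a full refresh would look something like:
docker-compose pull
docker-compose build --no-cache
docker-compose up --force-recreate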

Routes not updating inside my Laravel Container

I've got this docker-compose:
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: .docker/Dockerfile
    image: laravel-docker
    ports:
      - 8080:80
    volumes:
      - ./:/var/www
    links:
      - mysql
      - redis
    environment:
      DB_HOST: mysql
      DB_DATABASE: laravel_docker
      DB_USERNAME: app
      DB_PASSWORD: password
      REDIS_HOST: redis
      SESSION_DRIVER: redis
      CACHE_DRIVER: redis
  mysql:
    image: mysql:5.7
    ports:
      - 13306:3306
    environment:
      MYSQL_DATABASE: laravel_docker
      MYSQL_USER: app
      MYSQL_PASSWORD: password
      MYSQL_ROOT_PASSWORD: password
  redis:
    image: redis:4.0-alpine
    ports:
      - 16379:6379
and this Dockerfile:
FROM php:7.1.8-apache
COPY . /srv/app
COPY .docker/vhost.conf /etc/apache2/sites-available/000-default.conf
WORKDIR /srv/app
RUN docker-php-ext-install mbstring pdo pdo_mysql \
    && chown -R www-data:www-data /srv/app
RUN a2enmod rewrite
which is my configuration to run a Laravel container with MySQL and Redis. Everything works perfectly, but I run into problems when I try to add or update a route: the change doesn't appear until I stop all the containers and restart them with the --build flag.
Is there a way to add and update routes without restarting my containers?
Open a shell in the app container and, from the project directory, run this command:
php artisan route:clear
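If you don't want to open a shell first, the same command can be run from the host in one step (assuming the service is named app, as in your compose file):
docker-compose exec app php artisan route:clear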
Based on the Dockerfile your app lives at /srv/app, yet in the yml file you list /var/www as the mount target. Change that to /srv/app
Explanation:
Building the Dockerfile results in an immutable image. The software inside the image was configured to serve your application from /srv/app. Since COPY . /srv/app added your app to the image at the right location, it could be served from there just fine. But that command adds the files when the image is built, after which they become an immutable part of the image, so the changes you make on the host are not visible inside.
What you want instead is to bind mount your project directory to /srv/app. That will obscure (temporarily "replace") the contents of that directory with the one on your host, and this is what that yml line does. (By the way, the fact that mounts obscure an existing directory is not Docker-specific.)
https://docs.docker.com/storage/bind-mounts/#mounting-into-a-non-empty-directory-on-the-container
The reason why we often both COPY and bind mount our project directories is that this practice allows us to use the same Dockerfile for both development (without frequent image rebuilds) and production.
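Concretely, the only change needed in the yml file from the question is the mount target:
volumes:
  - ./:/srv/app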
I'd be curious to know if your change is properly propagating to your volume. It could be a permissions issue inside the container. What happens if you connect to the container and "cat" the contents of the routes file? Does it match the file outside the container? What OS are you running docker on? How locked down is the OS's file system? Are there any constraints that would make volumes work funky? Also, what file system sync process are you using? Are you just using the default?

deploy docker to heroku without using heroku docker plugin

Say I'm working on a web project that runs a gitlab-ci shell runner on my own CI server to build Docker images and deploy them to Heroku. I've gone through docs from both GitLab and Heroku, like gitlab-ci: using docker build and Heroku: Build and Deploy with Docker. Can I deploy the Docker project without using the heroku-docker plugin, which seems not so flexible to me? However, when I tried, the following approach succeeded in deploying to Heroku, but the app crashes. Heroku logs say a start script is missing in package.json, but since I'm deploying a Docker project, I couldn't put "start": "docker-compose up" there, could I?
# .gitlab-ci.yml
stages:
  - deploy
before_script:
  - npm install
  - bower install
dev:
  stage: deploy
  script:
    - docker-compose run nginx run-test
    - gem install dpl
    - dpl --provider=heroku --app=xixi-web-dev --api-key=$HEROKU_API_KEY
  only:
    - dev
# docker-compose.yml
app:
  build: .
  volumes:
    - .:/code:ro
  expose:
    - "3000"
  working_dir: /code
  command: pm2 start app.dev.json5
nginx:
  build: ./setup/nginx
  restart: always
  volumes:
    - ./setup/nginx/sites-enabled:/etc/nginx/sites-enabled:ro
    - ./dist:/var/app:ro
  ports:
    - "$PORT:80"
  links:
    - app
I don't want to use the heroku docker plugin because it seems less flexible: I can't create an app.json, since I don't want to use an existing Docker image for my app. Instead, I define custom Dockerfiles for the app and nginx services used in docker-compose.yml.
Now it seems that Heroku won't detect my project as a Docker project unless I deploy it using the heroku docker plugin, but as mentioned above, I can't do that. Is there any documentation on Heroku or GitLab I'm missing that could help me out? Or do you have any ideas that might be helpful? Thanks a lot!
OK, it seems that heroku docker:release is required. I ended up installing the Heroku CLI and the heroku docker plugin on my CI server and using heroku docker:release --app app to release my app.
