Docker compose build | Image after pull does not work / is empty - laravel

I have a Laravel app.
I used Laravel Sail to generate the docker-compose.yml file.
Sail also created a Dockerfile in ./vendor/laravel/sail/runtimes/8.0, which is referenced in the .yml file.
When I run docker-compose up on my local machine, it creates an image and runs a container.
This container works fine.
Now I use docker-compose build to create an image which I want to share with a colleague.
Docker builds the image successfully. I push it to Docker Hub via Docker Desktop and my colleague can pull it.
But the pulled image does not work. When I inspect the container through the CLI, the app code is not even there. It looks like an empty image. The same also happens when I pull it on my local machine.
Something gets messed up when I push the image to Docker Hub. Does anyone have an idea what's happening?
As per request, here are the steps to reproduce:
Add Laravel Sail to an existing Laravel project (you do not need to execute this if you create a new Laravel project, since Laravel 8+ ships with Sail):
composer require laravel/sail --dev
Then follow the Laravel Sail guide and call:
php artisan sail:install
The above call generates a docker-compose.yml.
Now you can run a container with docker-compose up.
This will create an image and run a container.
If you want to push this image to Docker Hub, that is where the issues appear.
Here is my docker-compose.yml
version: '3'
services:
  laravel.app:
    build:
      context: ./vendor/laravel/sail/runtimes/8.0
      dockerfile: Dockerfile
      args:
        WWWGROUP: '${WWWGROUP}'
    image: dockeruser/feedback_app:1.5
    extra_hosts:
      - 'host.docker.internal:host-gateway'
    ports:
      - '${APP_PORT:-80}:80'
    environment:
      WWWUSER: '${WWWUSER}'
      LARAVEL_SAIL: 1
      XDEBUG_MODE: '${SAIL_XDEBUG_MODE:-off}'
      XDEBUG_CONFIG: '${SAIL_XDEBUG_CONFIG:-client_host=host.docker.internal}'
    volumes:
      - '.:/var/www/html'
    networks:
      - sail
networks:
  sail:
    driver: bridge
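
Note that in this compose file the application code only reaches the container through the bind mount '.:/var/www/html'; the Sail runtime Dockerfile the image is built from sets up PHP and its extensions but does not copy the project in, so the pushed image contains no app code. As a rough sketch of what a shareable image would need (this is not the Sail-generated Dockerfile; the base image and composer step are assumptions):

# Hypothetical sketch, not the file Sail generates.
# Assumed base image; Sail's own runtime image differs.
FROM php:8.0-apache

WORKDIR /var/www/html

# Bake the project into the image instead of relying on the '.:/var/www/html' bind mount
COPY . /var/www/html

# Install PHP dependencies inside the image as well
COPY --from=composer:2 /usr/bin/composer /usr/bin/composer
RUN composer install --no-dev --optimize-autoloader

Whoever pulls such an image still needs the ports and environment from the compose file (or equivalent docker run flags), but the code itself then travels with the image.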

Related

How to push an entire project's image to Docker Hub?

I have a Spring Boot MVC project with a MySQL database, a Dockerfile and a docker-compose.yml, and I want to push this project to Docker Hub so that any client can run it. I pushed it to Docker Hub successfully with the docker-compose push command, but when I pull the image from Docker Hub it doesn't work; errors occur, for instance "connection refused". On my own machine it works perfectly, i.e. I run the project successfully in a Docker container.
This is my Dockerfile:
FROM maven:3.8.2-jdk-11
WORKDIR /empmanagment-app
COPY . .
RUN mvn clean install
CMD mvn spring-boot:run
and this is my docker-compose.yml file
version: '3'
services:
  mysql-standalone:
    image: 'mysql:5.7'
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_ROOT_USER=root
      - MYSQL_PASSWORD=root
      - MYSQL_DATABASE=elvin_emp_managment
    ports:
      - "3307:3306"
    networks:
      - common-network
    volumes:
      - mysql-standalone:/var/lib/mysql
  springboot-docker-container:
    build: ./
    image: anar1501/emp-managment
    ports:
      - "8080:8080"
    networks:
      - common-network
    environment:
      SPRING_DATASOURCE_URL: jdbc:mysql://mysql-standalone:3306/elvin_emp_managment?autoReconnect=true&useSSL=false
      SPRING_DATASOURCE_USERNAME: "root"
      SPRING_DATASOURCE_PASSWORD: "root"
    depends_on:
      - mysql-standalone
    volumes:
      - .m2:/root/.m2
volumes:
  mysql-standalone:
networks:
  common-network:
    driver: bridge
Can anyone suggest what I am doing wrong?
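
One possible explanation (an assumption, since the exact errors aren't shown): the pushed image anar1501/emp-managment contains only the Spring Boot application. The MySQL service and the network exist only in the docker-compose.yml, so a client who pulls just the image has no mysql-standalone host to reach, which would produce exactly a "connection refused". The client would need a compose file alongside the pulled image, roughly like this sketch (service names taken from the file above, the rest is illustrative):

version: '3'
services:
  mysql-standalone:
    image: 'mysql:5.7'
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=elvin_emp_managment
  springboot-docker-container:
    # Pulled from Docker Hub instead of being built locally
    image: anar1501/emp-managment
    ports:
      - "8080:8080"
    environment:
      SPRING_DATASOURCE_URL: jdbc:mysql://mysql-standalone:3306/elvin_emp_managment?autoReconnect=true&useSSL=false
      SPRING_DATASOURCE_USERNAME: "root"
      SPRING_DATASOURCE_PASSWORD: "root"
    depends_on:
      - mysql-standalone

Compose puts both services on a default network, so the hostname mysql-standalone resolves without declaring networks explicitly.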

Phoenix server doesn't render css and images with Docker

I'm new to Elixir and Phoenix, and since I have to work in a CI/CD environment I'm trying to figure out how to use Phoenix with Docker.
I've tried various tutorials and videos out there; many of them don't work, but those that do all end with the same result:
the Phoenix server doesn't seem to find some resources (the assets folder?).
But inside my Dockerfile I'm copying the entire app folder, and I can confirm that /assets is inside the container by attaching to it.
Dockerfile:
FROM elixir:alpine
RUN apk add --no-cache build-base git
WORKDIR /app
RUN mix local.hex --force && \
mix local.rebar --force
COPY . .
RUN mix do deps.get, deps.compile
CMD ["mix", "phx.server"]
docker-compose.yml:
version: '3.6'
services:
  db:
    environment:
      PGDATA: /var/lib/postgresql/data/pgdata
      POSTGRES_PASSWORD: postgres
      POSTGRES_USER: postgres
      POSTGRES_HOST_AUTH_METHOD: trust
    image: 'postgres:11-alpine'
    restart: always
    volumes:
      - 'pgdata:/var/lib/postgresql/data'
  web:
    build: .
    depends_on:
      - db
    environment:
      MIX_ENV: dev
    env_file:
      - .env
    ports:
      - '4000:4000'
    volumes:
      - .:/app
volumes:
  pgdata:
Steps I'm following to create the containers and run the server:
docker-compose build
docker-compose run web mix ecto.create
docker-compose up
The database is created successfully in the db container.
What can be happening here?
Sorry if this is straightforward; I haven't used Docker for a while and I still don't fully understand the Phoenix boilerplate.
If you know some good resources about Docker and CI/CD pipelines with Phoenix, I'd also appreciate them so I can study.
You also need to build the assets: npm install --prefix assets. This needs to be done after mix deps.get; it can come after mix deps.compile, but that step isn't really needed, since you can start the server right after mix deps.get and it will compile the deps and your app automatically.
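
Applied to the Dockerfile above, that could look roughly like this (a sketch, assuming npm is available from the Alpine package repository and the assets live in the default assets/ directory):

FROM elixir:alpine
# npm is needed inside the image to build the front-end assets
RUN apk add --no-cache build-base git npm
WORKDIR /app
RUN mix local.hex --force && \
    mix local.rebar --force
COPY . .
RUN mix do deps.get, deps.compile
# Build the assets so CSS/JS are actually served
RUN npm install --prefix assets
CMD ["mix", "phx.server"]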

Docker shared volume not working on macOS

I have a docker-compose.yml file. It works fine on Windows 10, but whenever I try to run it on macOS it doesn't work, especially the shared volumes.
Here is the content of my docker-compose.yml file and directory structure:
version: '3'
services:
  database:
    image: mongo
    container_name: pcore-database
    ports:
      - '27017:27017'
  node-server:
    image: node
    container_name: pcore-node-server
    volumes:
      - ./node-services :/usr/app/node-services
    working_dir: /usr/app/node-services
    command: npm run dev
    ports:
      - '3000:3000'
    links:
      - database
      - nginx-server
    depends_on:
      - database
  apache-server:
    image: webdevops/php-apache
    container_name: pcore-apache-server
    working_dir: /app
    volumes:
      - ./php-services :/app
    ports:
      - '8000:80'
Check the node-server service and nginx-server
Now when I run docker-compose up it creates additional directories with the same name and throws an error.
Check the error and the additional directories it created.
I don't know what's going on. It works fine on Windows 10, but on macOS it creates additional directories and does not share the volumes. Can someone guide me?
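
One detail worth checking (an observation on the file as posted, not a verified fix): both volume entries have a space before the colon, e.g. ./node-services :/usr/app/node-services. Compose splits the short volume syntax on the colon, so the host path ends up as ./node-services with a trailing space, and Docker creates that as a new, almost identically named directory instead of mounting the existing one. The cleaned-up entries would be:

  node-server:
    volumes:
      - ./node-services:/usr/app/node-services
  apache-server:
    volumes:
      - ./php-services:/app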

Docker deployment works on macOS but not on Ubuntu 16.04

I'm trying to dockerise my Laravel app: https://github.com/xoco70/kendozone/tree/docker-local
My dev environment is working; now I am working on a deployable app in a local environment.
On macOS, everything is OK.
I build it with:
docker build . -f app.dockerfile.local -t kendozone:local-1.0.0
And run it with:
docker-compose -f docker-compose-local.yml up --force-recreate
The problem is with npm run dev, which is a webpack build command.
It just compiles Sass, combines the JS and CSS, and copies the result to the /var/www/public folder.
But when I run my app on Ubuntu, I can access the login page, but it seems to load without any CSS/JS.
On macOS, I can see them with no problem.
Here is my docker-compose:
version: '2'
services:
  # The Application
  app:
    image: kendozone:local-1.0.0
    working_dir: /var/www
    volumes:
      - codevolume:/var/www
    environment:
      - "DB_DATABASE=homestead"
      - "DB_USERNAME=homestead"
      - "DB_PASSWORD=secret"
      - "DB_PORT=3306"
      - "DB_HOST=database"
    depends_on:
      - database
  # The Web Server
  web:
    build:
      context: ./
      dockerfile: nginx.dockerfile
    working_dir: /var/www
    volumes:
      - codevolume:/var/www
    ports:
      - 8090:80
    depends_on:
      - app
  # The Database
  database:
    image: mysql:5.7
    volumes:
      - dbdata:/var/lib/mysql
    environment:
      - "MYSQL_DATABASE=homestead"
      - "MYSQL_USER=homestead"
      - "MYSQL_PASSWORD=secret"
      - "MYSQL_ROOT_PASSWORD=root"
    ports:
      - "33061:3306"
volumes:
  dbdata:
  codevolume:
Any idea?
One way to fix this is to make node available in your Docker base image, and then actually run npm install and npm run production to build a production-ready image of your application.
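
A rough sketch of what that could look like in the application image (the base image and package setup are assumptions, since app.dockerfile.local isn't shown here):

FROM php:7.2-fpm
WORKDIR /var/www

# Node and npm must exist inside the image to run the webpack build
RUN apt-get update && apt-get install -y nodejs npm && rm -rf /var/lib/apt/lists/*

COPY . /var/www

# Build production assets so CSS/JS end up in /var/www/public inside the image
RUN npm install && npm run production

That way the compiled assets are part of the image itself rather than depending on what was built on the host, so Ubuntu and macOS behave the same.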

Docker compose, link file outside container

I'm working with docker-compose for a Laravel project served by nginx.
This is my docker-compose.yml:
version: '2'
services:
  backend:
    image: my_image
    depends_on:
      - elastic
      - mysql
  mysql:
    image: mysql:8.0.0
  nginx:
    depends_on:
      - backend
    image: my_image
    ports:
      - 81:80
So, my Laravel project is in the backend container, and if I run docker-compose up -d everything is fine: all containers are created and my project is running on port 81.
My problem is that the Laravel project in my backend container has a .env file with the database login, password and other settings.
How can I edit this file after docker-compose up? Editing it directly in the container is not a good idea; is there a way to link a file outside the container with docker-compose?
Thanks
One approach is to use the env_file directive in the docker-compose.yml; it points to a file of key-value pairs that will be exported into the container. For example:
web:
  image: nginx:latest
  env_file:
    - .env
  ports:
    - "8181:80"
  volumes:
    - ./code:/code
Then you can configure your application to use these env values.
One catch with this approach is that you need to recreate the containers if you change any value or add a new one (docker-compose down && docker-compose up -d).
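
If the goal is specifically to keep editing Laravel's .env file from outside the container (rather than exporting environment variables), another approach, sketched here as a separate suggestion with an assumed project path, is to bind-mount the file itself:

  backend:
    image: my_image
    volumes:
      # Assumed location of the Laravel root inside my_image
      - ./.env:/var/www/html/.env

Changes to the host's .env are then visible inside the container without rebuilding the image, though you may still need to restart the container or clear Laravel's config cache for them to take effect.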
