Dockerfile doesn't write files to host, only to container - macOS

What I want is for this Dockerfile to clone the repo onto the host machine (mine) so I can copy it over as a volume, but instead it's cloning it directly into the container.
This is the dockerfile:
FROM php:7.4-apache
RUN docker-php-ext-install mysqli && docker-php-ext-enable mysqli
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get upgrade -y
RUN a2enmod ssl && a2enmod rewrite
RUN a2enmod include
# Install software
RUN apt-get install -y git
WORKDIR /
RUN git clone mygitrepo.git /test
I also have a different dockerfile that used to write to host, but doesn't anymore:
FROM nginx:1.19.1-alpine
RUN apk update && \
apk add --no-cache openssl && \
openssl req -x509 -nodes -days 365 \
-subj "/C=CA/ST=QC/O=Company Inc/CN=example.com" \
-newkey rsa:2048 -keyout /etc/ssl/private/nginx-selfsigned.key \
-out /etc/ssl/certs/nginx-selfsigned.crt;
I'm not sure where the root of the problem is. Here's the docker-compose file that I use to start this:
version: '3.7'
services:
  build:
    build:
      context: .
      dockerfile: build.Dockerfile
    networks:
      - web
  apache:
    container_name: Apache
    build:
      context: ./apache
      dockerfile: apache.Dockerfile
    ports:
      - '127.0.0.1:80:80'
      - '127.0.0.1:443:443'
    networks:
      - web
networks:
  web:
volumes:
  dump:
I removed a lot of extra stuff, so it may look like it wouldn't start at all; the containers and servers actually run fine. I just want it to write to the host and not only to the container, which is what it's doing now. I'm having difficulty googling this.
I'm running macOS.
Thank you in advance! :D

You need to use a volume mapping to do this. Edit your docker-compose file:
volumes:
  - ${PWD}/:/[container_dir]/
Once this volume mapping is in place, the host directory and the container path are the same files: changes made on the host are automatically visible inside the container, and anything the container writes to that path lands on the host, so there is no extra step needed to write back from the container to the host. Keep in mind that a RUN git clone in a Dockerfile runs at image build time, so its output only ever exists in the image; the bind mount is what actually shares files with the host.
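For example, applied to the apache service from the question, a minimal sketch (the host directory ./test and the container path /test are illustrative placeholders, not taken from the original files):
services:
  apache:
    build:
      context: ./apache
      dockerfile: apache.Dockerfile
    volumes:
      - ./test:/test   # host ./test and container /test are the same directory
With the bind mount in place, the clone can just as well be run on the host before docker-compose up, since anything placed in ./test shows up at /test inside the container.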

Related

How to run db migration when Laravel container is up?

I have a problem with running the DB migration when the container is up.
Problems:
can't set the app key because GitLab CI didn't copy the .env file (I get an error in the GitLab CI console), so setting the key needs to happen later
running the migration with wait-for-it, because the container exits with success code 0 once the migrations are done
I will only include the code for my db and web containers.
db:
  container_name: db
  image: mysql:5.7.22
  restart: unless-stopped
  environment:
    MYSQL_DATABASE: ${DB_DATABASE}
    MYSQL_USER: ${DB_USERNAME}
    MYSQL_ROOT_PASSWORD: ${DB_PASSWORD}
    MYSQL_PASSWORD: ${DB_PASSWORD}
  ports:
    - 3306:3306
  volumes:
    - mysql-data:/var/lib/mysql
  networks:
    - my-network
backend:
  image: registry image
  container_name: "backend"
  build:
    context: ./backend
    dockerfile: Dockerfile
  depends_on:
    - db
  ports:
    - 1020:80
  networks:
    - my-network
gitlab-ci:
build-backend:
  tags:
    - vps
  variables:
    GIT_CLEAN_FLAGS: none
  stage: dockerize
  image: docker:latest
  services:
    - docker:dind
  dependencies: []
  script:
    - docker build -t backend backend
    - cp .env ./backend/.env
    - cd backend
    - docker build -t $CI_REGISTRY_IMAGE/backend:$CI_COMMIT_BRANCH .
    - docker tag $CI_REGISTRY_IMAGE/backend:$CI_COMMIT_BRANCH $CI_REGISTRY_IMAGE/frontend:$CI_COMMIT_REF_NAME
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker push $CI_REGISTRY_IMAGE/backend:$CI_COMMIT_REF_NAME
...deploying code
I'm using https://github.com/vishnubob/wait-for-it
Dockerfile:
FROM webdevops/php-nginx:7.4-alpine
# Install Laravel framework system requirements (https://laravel.com/docs/8.x/deployment#optimizing-configuration-loading)
RUN apk add oniguruma-dev postgresql-dev libxml2-dev
RUN docker-php-ext-install \
bcmath \
ctype \
fileinfo \
json \
mbstring \
pdo_mysql \
pdo_pgsql \
tokenizer \
xml
# Copy Composer binary from the Composer official Docker image
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
ENV WEB_DOCUMENT_ROOT /var/www/html/public
ENV APP_ENV production
WORKDIR /var/www/html
COPY . .
RUN composer install --no-interaction --optimize-autoloader
RUN chown -R root:root .
RUN chmod -R ugo+rw storage
RUN chmod 777 wait-for-it.sh
RUN chmod 777 migrate.sh
EXPOSE 80
CMD ["./wait-for-it.sh", "db:3306", "--", "./migrate.sh"]
migrate.sh:
#!/bin/sh
php artisan key:generate
# Optimizing Configuration loading
php artisan config:cache
# Optimizing Route loading
php artisan route:cache
# Optimizing View loading
php artisan view:cache
echo "finished cashes"
php artisan migrate --force &
exec "$#"
So how can I solve the exit code 0 problem, that is, how do I prevent the container from stopping?
Thanks
The solution is to use Supervisor, which keeps all the jobs running in the background and won't close your container while the migrations run.
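As a rough illustration only (not the poster's setup; the program names and paths are assumptions), a supervisord config can keep the web process in the foreground while a one-shot migration job runs next to it, so the container stays alive after the migration finishes:
[supervisord]
; keep supervisord (and therefore the container) in the foreground
nodaemon=true

[program:php-fpm]
; assumption: php-fpm is the long-running web process in this image
command=php-fpm -F
autorestart=true

[program:migrate]
; one-shot migration job; the path is illustrative
command=/var/www/html/migrate.sh
startsecs=0
autorestart=false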
I can't believe the configuration from this repo works so perfectly; it saved me a lot of time. I spent more than 7 days searching for the best solution, and the person who posted it saved me!
Refer to this repo please https://github.com/harshalone/laravel-9-production-ready
I didn't have to change a line of code; hope you don't have to either. It simply works!
For anyone struggling with running migrations when the database isn't up before the Laravel application: I used the wait-for-it script and changed the last line of my Dockerfile like this:
CMD ["/var/www/docker/wait-for-it.sh", "db:3306", "--", "/var/www/docker/run.sh"]
Now my migrations first wait for the database to be up and running.
Just put wait-for-it.sh inside your docker folder, or use it straight from GitHub.
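For reference, a run.sh along these lines fits that CMD; this is only a sketch, and the final exec'd command (php-fpm here) is an assumption you'd swap for whatever your base image actually runs:
#!/bin/sh
set -e
# At this point wait-for-it has already confirmed db:3306 is reachable.
php artisan migrate --force
# Hand PID 1 over to the long-running process so the container stays up.
exec php-fpm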

docker compose and swarm for laravel app in production

Everyone, I am confused. I am new to the DevOps world and I have no idea how to use docker-compose or Swarm in production; I mean, what are the best practices in production for both? I followed this DigitalOcean article: How To Install and Set Up Laravel with Docker Compose on Ubuntu 20.04.
Everything works like a charm in local, test, and dev environments. I tried to take this to the next step for a production environment, and I noticed some things should be changed for production mode, like so:
Removing any volume bindings for application code, so that code stays inside the container and can’t be changed from outside.
Binding to different ports on the host. Check the link for more info: Use Compose in production.
I don't know how to achieve point #1.
Here's my Dockerfile below to build my custom Laravel image, and the docker-compose file for my services:
FROM php:7.4-fpm
# Arguments defined in docker-compose.yml
ARG user
ARG uid
# Install system dependencies
RUN apt-get update && apt-get install -y \
curl \
libpng-dev \
libonig-dev \
libxml2-dev \
zip \
unzip
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Download php extension installer
ADD https://github.com/mlocati/docker-php-extension-installer/releases/latest/download/install-php-extensions /usr/local/bin/
# Give php extension installer a permission
RUN chmod +x /usr/local/bin/install-php-extensions
# Install php extensions via php extension installer
RUN install-php-extensions zip
# Install PHP extensions
RUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd
# Get latest Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
# Create system user to run Composer and Artisan Commands
RUN useradd -G www-data,root -u $uid -d /home/$user $user
RUN mkdir -p /home/$user/.composer && \
chown -R $user:$user /home/$user
# Set working directory
WORKDIR /var/www/html
USER $user
version: "3.7"
services:
app:
build:
args:
user: sammy
uid: 1000
context: ./
dockerfile: Dockerfile
image: app
container_name: app
restart: always
working_dir: /var/www/html
volumes:
- ./:/var/www/html
networks:
- backend
db:
image: mysql:8.0
container_name: db
restart: always
environment:
MYSQL_DATABASE: ROUTE
MYSQL_ROOT_PASSWORD: 2020
MYSQL_PASSWORD: 2020
MYSQL_USER: sqluser
volumes:
- db:/var/lib/mysql
networks:
- backend
nginx:
image: nginx:1.21.6
container_name: nginx
restart: always
ports:
- 8000:80
networks:
- backend
volumes:
- ./:/var/www/html
- ./docker-compose/nginx:/etc/nginx/conf.d/
phpmyadmin:
image: phpmyadmin
container_name: pma
restart: always
ports:
- 8283:80
environment:
PMA_HOSTS: db
PMA_ARBITRARY: 1
PMA_USER: sqluser
PMA_PASSWORD: 2020
networks:
- backend
networks:
backend:
driver: bridge
volumes:
db:
Note:
In the nginx service, I created two shared volumes. The first one synchronizes the contents of the current directory to /var/www/html inside the container, so that local changes to the application files are quickly reflected in the application served by Nginx inside the container (which is not good for production). The second volume makes sure my Nginx configuration file, located at docker-compose/nginx/, is mounted into the container's Nginx configuration folder.
I tried to remove the first volume but keep the second one so I could use my custom configuration, but it did not work at all. Why?
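To illustrate point #1 above, a common approach is to COPY the code into the image at build time and drop the code bind mounts in the production compose file; this is only a sketch under that assumption, not the article's exact setup:
# Dockerfile additions (sketch), after WORKDIR /var/www/html
COPY . /var/www/html
RUN composer install --no-dev --optimize-autoloader

# production docker-compose excerpt (sketch): no ./:/var/www/html bind mount on app
services:
  app:
    build:
      context: ./
      dockerfile: Dockerfile
    image: app
    restart: always
    networks:
      - backend
Note that the nginx container then also needs the public/ assets from somewhere other than the host bind mount, for example its own image build.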

Connecting two containers to each other fails

I have an issue when I try to manage docker-compose and a Dockerfile together.
I researched and I know it's possible to use docker-compose without a Dockerfile, but I think it's better for me to use a Dockerfile too, because I want an environment that is easy to modify.
The problem is that I want a container with Postgres, which is a dependency of another container, named api, that runs the application.
That api container has Java 17 and Maven 3, and docker-compose builds its image from the Dockerfile. The problem is: when I use the Dockerfile by itself, everything is fine, but when I use docker-compose I get this error:
2021-12-08T08:36:37.221247254Z /usr/local/bin/mvn-entrypoint.sh: line 50: exec: mvn test: not found
Configuration files are:
Dockerfile
FROM openjdk:17-jdk-slim
ARG MAVEN_VERSION=3.8.4
ARG USER_HOME_DIR="/root"
ARG SHA=a9b2d825eacf2e771ed5d6b0e01398589ac1bfa4171f36154d1b5787879605507802f699da6f7cfc80732a5282fd31b28e4cd6052338cbef0fa1358b48a5e3c8
ARG BASE_URL=https://apache.osuosl.org/maven/maven-3/${MAVEN_VERSION}/binaries
RUN apt-get update && \
apt-get install -y \
curl procps \
&& rm -rf /var/lib/apt/lists/*
RUN mkdir -p /usr/share/maven /usr/share/maven/ref \
&& curl -fsSL -o /tmp/apache-maven.tar.gz ${BASE_URL}/apache-maven-${MAVEN_VERSION}-bin.tar.gz \
&& echo "${SHA} /tmp/apache-maven.tar.gz" | sha512sum -c - \
&& tar -xzf /tmp/apache-maven.tar.gz -C /usr/share/maven --strip-components=1 \
&& rm -f /tmp/apache-maven.tar.gz \
&& ln -s /usr/share/maven/bin/mvn /usr/bin/mvn
ENV MAVEN_HOME /usr/share/maven
ENV MAVEN_CONFIG "$USER_HOME_DIR/.m2"
COPY mvn-entrypoint.sh /usr/local/bin/mvn-entrypoint.sh
COPY settings-docker.xml /usr/share/maven/ref/
COPY . .
RUN ["chmod", "+x", "/usr/local/bin/mvn-entrypoint.sh"]
ENTRYPOINT ["/usr/local/bin/mvn-entrypoint.sh"]
CMD ["mvn", "test"]
And docker-compose file:
services:
  api_service:
    build:
      context: .
      dockerfile: Dockerfile
    restart: always
    container_name: api_core_backend
    ports:
      - 8080:8080
    depends_on:
      - postgres_db
  postgres_db:
    image: "postgres:latest"
    container_name: postgres_core_backend
    restart: always
    ports:
      - 5432:5432
    environment:
      POSTGRES_DB: postgres
      POSTGRES_PASSWORD: root
Can anyone explain why I get errors when I run with docker-compose, but everything is fine if I use the Dockerfile on its own?
Thank you.
Update: here is the error I get when I try to connect to the other container:
Caused by: org.flywaydb.core.internal.exception.FlywaySqlException:
Unable to obtain connection from database: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL State : 08001
Error Code : 0
Message : Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
The issue
Based on the logs, it looks like the issue is that you're using localhost as the hostname when you connect.
Docker compose creates an internal network where the hostnames are mapped to the service names. So in your case, the hostname is postgres_db.
Please see the docker compose docs for more information.
Solution
Try specifying postgres_db as the hostname :)
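For example, if the datasource is configured through Spring-style properties (an assumption; adjust for however the application actually configures Flyway/JDBC), the URL should point at the compose service name instead of localhost:
# application.properties (sketch)
spring.datasource.url=jdbc:postgresql://postgres_db:5432/postgres
spring.datasource.username=postgres
spring.datasource.password=root
The username/password here simply mirror the POSTGRES_* values from the compose file above.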

Laravel Not Found, Docker, Apache2

I am trying to connect to my container, but I am getting the following error. Before, my container worked without problems; I made a new build, and now it doesn't work.
My Docker file is the following:
FROM php:7.2-apache
LABEL maintainer="christianahvilla@gmail.com"
# Install PHP
RUN apt-get update && apt-get install -y \
curl \
zlib1g-dev \
libzip-dev \
nano
# Add and Enable PHP-PDO Extensions
RUN docker-php-ext-install mysqli pdo pdo_mysql
RUN docker-php-ext-enable pdo_mysql
RUN docker-php-ext-install zip
# Install PHP Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
#set our application folder as an environment variable
ENV APP_HOME /var/www/html
#change uid and gid of apache to docker user uid/gid
RUN usermod -u 1000 www-data && groupmod -g 1000 www-data
COPY --chown=www-data:www-data . $APP_HOME
#Expose Port 8000 since this is our dev environment
EXPOSE 8000
My Docker-Compose:
version: "3.7"
services:
#Laravel App
web:
build:
context: .
dockerfile: Dockerfile
ports:
- 8000:80
volumes:
- ./:/var/www
- ./public:/var/www/html
networks:
- mynet
depends_on:
- db
#MySQL Service
db:
image: mysql:5.7
container_name: db
ports:
- 3306:3306
environment:
MYSQL_DATABASE:
MYSQL_USER:
MYSQL_PASSWORD:
MYSQL_ROOT_PASSWORD:
volumes:
- mysqldata:/var/lib/mysql/
networks:
- mynet
#Docker Networks
networks:
mynet:
driver: bridge
#Volumes
volumes:
mysqldata:
driver: local
When I try to access http://localhost:8000/ it works, but if I try to access any other route I get the error.
You have to configure apache2.conf at /etc/apache2/apache2.conf from the Dockerfile, also run a2enmod rewrite, and finally restart apache2:
RUN sed -i '/<Directory \/var\/www\/>/,/<\/Directory>/ s/AllowOverride None/AllowOverride All/' /etc/apache2/apache2.conf
RUN a2enmod rewrite
RUN service apache2 restart
Then run docker-compose build and docker-compose up -d
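For context, Laravel routes every request through public/index.php using mod_rewrite, which is why AllowOverride All and the rewrite module matter here; the stock public/.htaccess relies on rules along these lines (abridged):
<IfModule mod_rewrite.c>
    RewriteEngine On

    # Send requests for paths that aren't real files or directories to the front controller
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteRule ^ index.php [L]
</IfModule>
Without AllowOverride All, Apache ignores this .htaccess, so only the root route resolves, which matches the symptom described above.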

Laravel docker: adsist-docker_app_1 exited with code 1

I'm trying to set up a Laravel project.
The setup is:
CentOS
Apache
MySQL
I have separate containers for App, Web, and Database,
and the app container fails to run :(
This is my docker-compose.yml:
version: '2'
services:
  # The Application
  app:
    build:
      context: ./
      dockerfile: app.dockerfile
    working_dir: /var/www
    volumes:
      - ./:/var/www
    environment:
      - "DB_PORT=3306"
      - "DB_HOST=database"
  # The Web Server
  web:
    build:
      context: ./
      dockerfile: web.dockerfile
    working_dir: /var/www
    volumes_from:
      - app
    ports:
      - 8080:80
  # The Database
  database:
    image: mysql:5.6
    volumes:
      - dbdata:/var/lib/mysql
    environment:
      - "MYSQL_DATABASE=homestead"
      - "MYSQL_USER=homestead"
      - "MYSQL_PASSWORD=secret"
      - "MYSQL_ROOT_PASSWORD=secret"
    ports:
      - "33061:3306"
volumes:
  dbdata:
This is my app.dockerfile
FROM centos:7.5.1804
RUN yum update -y
RUN yum install epel-release http://rpms.remirepo.net/enterprise/remi-release-7.rpm yum-utils -y && \
yum-config-manager --enable remi-php72 && \
yum update -y
RUN yum install vim wget curl unzip \
php php-json php-gd php-mbstring php-pdo php-xml php-mysqlnd -y
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" && \
php composer-setup.php --install-dir=/usr/local/bin --filename=composer && \
rm -rf composer-setup.php
RUN curl -sL https://rpm.nodesource.com/setup_8.x | bash - && \
yum install nodejs -y
RUN yum clean all
EXPOSE 80
# Keep container active
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
And this is my web.dockerfile:
FROM httpd:2.4
COPY httpd.conf /etc/httpd/conf/httpd.conf
When I run docker-compose up it shows these messages:
app_1 | AH00526: Syntax error on line 119 of /etc/httpd/conf/httpd.conf:
app_1 | DocumentRoot '/var/www/html' is not a directory, or is not readable
adsist-docker_app_1 exited with code 1
/var/www/html as the Apache DocumentRoot is common in most Linux distributions, but that is not how the httpd Docker image is set up. Additionally, even if you modify httpd.conf to use /var/www/html, you should then mount to /var/www/html, not /var/www.
Coming back to the httpd image, you can see the following:
$ docker run -idt httpd:2.4
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
182c63e24082 httpd:2.4 "httpd-foreground" 3 seconds ago Up 1 second 80/tcp clever_bassi
$ docker exec -it clever_bassi /bin/bash
root@182c63e24082:/usr/local/apache2/conf# cat httpd.conf
......
DocumentRoot "/usr/local/apache2/htdocs"
......
So, if you do not change the default settings, you should mount your host's files to /usr/local/apache2/htdocs to have them served.
What's more, COPY httpd.conf /etc/httpd/conf/httpd.conf in your Dockerfile is also not correct; it should be COPY httpd.conf /usr/local/apache2/conf/httpd.conf.
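Putting those two corrections together, a sketch (paths illustrative) might look like:
# web.dockerfile
FROM httpd:2.4
COPY httpd.conf /usr/local/apache2/conf/httpd.conf

# docker-compose.yml, web service excerpt
web:
  build:
    context: ./
    dockerfile: web.dockerfile
  working_dir: /var/www
  ports:
    - 8080:80
  volumes:
    - ./:/usr/local/apache2/htdocs   # serve the project from the image's default DocumentRoot
This assumes the copied httpd.conf keeps the stock DocumentRoot of /usr/local/apache2/htdocs; if it sets /var/www/html instead, mount the code there, as the other answer suggests.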
It seems that your app is deployed in /var/www/ while you configure Apache to use /var/www/html/ as the document root, so when Apache starts it checks for the configured document root, can't find it, fails to start, and generates the mentioned error.
So you have to either change the document root in httpd.conf to point to /var/www/, or configure the docker-compose file to mount the volumes to /var/www/html.
