User in docker can't create folders and files - ruby

I have a problem with creating folders and files within a Docker container. I have a Ruby and Hanami web app deployed using a Dockerfile and a docker-compose YAML file. Here is the content of both files for reference.
Dockerfile:
FROM ruby:2.7.5-bullseye
RUN apt-get update && apt-get install vim -y
RUN bundle config --global frozen 1
RUN adduser --disabled-login app_owner
USER app_owner
WORKDIR /usr/src/app
COPY --chown=app_owner Gemfile Gemfile.lock ./
COPY --chown=app_owner . ./
RUN gem install bundler:1.17.3
RUN bundle install
ENV HANAMI_HOST=0.0.0.0
ENV HANAMI_ENV=production
EXPOSE 2300
docker-compose.yml:
version: '3'
services:
  postgres:
    image: postgres
    restart: unless-stopped
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
    ports:
      - 5432
    volumes:
      - postgres_staging:/var/lib/postgresql/data
  web:
    build: .
    restart: unless-stopped
    command: >
      bash -c "bundle exec hanami db migrate
      && bundle exec rake initial_settings:add_default_language
      && bundle exec rake initial_settings:add_session_validity
      && bundle exec rake import_user:create
      && bundle exec rake super_admin:create
      && bundle exec rake create_parser_rules:start
      && bundle exec hanami assets precompile
      && cp -r apps/myapp/assets/webfonts public/webfonts
      && cp -r apps/myapp/assets/webfonts public/assets/webfonts
      && cp -r apps/myapp/assets/images/sort*.png public/assets
      && cp -r apps/myapp/assets/images/sort*.png public
      && cp -r apps/myapp/assets/images/ui-icons*.png public/assets/wordrocket
      && mkdir public/assets/images
      && cp -r apps/myapp/assets/images/sort*.png public/assets/images
      && bundle exec hanami server"
    volumes:
      - ./hanami_log:/usr/src/app/hanami_log
    links:
      - postgres
    depends_on:
      - postgres
  nginx:
    image: nginx:alpine
    restart: unless-stopped
    tty: true
    ports:
      - "${NGINX_PORT}:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    depends_on:
      - web
volumes:
  postgres_staging:
  web:
  nginx:
The docker-compose file has some initial commands that need to be run, and the ones creating folders and files are failing. For example, bundle exec hanami assets precompile fails with: Permission denied @ dir_s_mkdir - /usr/src/app/public
The command creates the folder and then copies over some files. I've confirmed this error by omitting the command from docker-compose and then running it manually in the running container; I get the same error.
Is my configuration incorrect?
EDIT: Forgot one important thing: the problem occurs on the client's server running Ubuntu 20.04, while it works without issues on my dev laptop.
Thank you.
Seba
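One pattern that commonly produces exactly this error (an assumption about this setup, not a confirmed diagnosis): /usr/src/app itself stays owned by root. The legacy builder creates WORKDIR directories as root even when USER is set, while BuildKit creates them as the current user, which could explain why the same Dockerfile behaves differently on two machines. COPY --chown only changes ownership of the copied files, not of the work directory itself, so app_owner may not be allowed to mkdir public inside it. A minimal sketch of a guard against that, placed before USER app_owner in the Dockerfile above:
# create the app directory (and public/) up front and hand it to the unprivileged user
RUN mkdir -p /usr/src/app/public && chown -R app_owner:app_owner /usr/src/app
USER app_owner
WORKDIR /usr/src/app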

Related

Laravel Docker "Unable to create a directory at /var/www/storage/app/documents"

I use Laravel 9 with Docker. If I want to upload images like this:
$document["file_object"]->store('documents')
I get the following error: Unable to create a directory at /var/www/storage/app/documents
It looks like it is some kind of Docker permission error.
I use the local filesystems disk because none of my files should be public.
If I change 'root' => storage_path('app') to 'root' => storage_path('') inside the filesystems config I don't get any error, but the files are saved in /storage/documents when they should be in /storage/app/documents.
I think I need to modify some Docker user permission, but I'm unsure how, as I'm not the one who made the config and my Docker skills are limited.
Dockerfile:
FROM php:8.1-apache
WORKDIR /var/www
RUN apt-get update && apt-get install -y \
libfreetype6-dev \
libjpeg-dev \
libpng-dev \
libwebp-dev \
--no-install-recommends \
&& docker-php-ext-enable opcache \
&& docker-php-ext-configure gd --with-freetype --with-jpeg \
&& docker-php-ext-install pdo_mysql -j$(nproc) gd \
&& apt-get autoclean -y \
&& rm -rf /var/lib/apt/lists/*
RUN pecl install redis && docker-php-ext-enable redis
# Update apache conf to point to application public directory
ENV APACHE_DOCUMENT_ROOT=/var/www/public
RUN sed -ri -e 's!/var/www/html!${APACHE_DOCUMENT_ROOT}!g' /etc/apache2/sites-available/*.conf
RUN sed -ri -e 's!/var/www/!${APACHE_DOCUMENT_ROOT}!g' /etc/apache2/apache2.conf /etc/apache2/conf-available/*.conf
# Update uploads config
RUN echo "file_uploads = On\n" \
"memory_limit = 1024M\n" \
"upload_max_filesize = 512M\n" \
"post_max_size = 512M\n" \
"max_execution_time = 1200\n" \
> /usr/local/etc/php/conf.d/uploads.ini
# Enable headers module
RUN a2enmod rewrite headers
ADD . /var/www
RUN chown -R www-data:www-data /var/www
docker-compose.yml:
# https://waihein.medium.com/configuring-redis-on-docker-in-laravel-58a39556ff97
# https://medium.com/#chewysalmon/laravel-docker-development-setup-an-updated-guide-72842dfe8bdf
# https://shouts.dev/articles/dockerize-a-laravel-app-with-apache-mariadb
# FIRST Start:
# 1. Run ON WINDOWS: docker run --rm -v ${pwd}:/app composer install
#    or on UNIX: docker run --rm -v "$(pwd)":/app composer install
# 2. Run: npm run setup
#    npm run setup is doing: "docker-compose up -d --build && docker-compose exec app php artisan key:generate && docker-compose exec app php artisan migrate:fresh --seed && npm install && npm run dev"
# NORMAL Start: npm start
#    npm start is doing: "docker-compose up -d && npm install && npm run dev"
# To stop: docker-compose down
version: '3.8'
services:
  # Application & web server
  app:
    build:
      context: .
    working_dir: /var/www
    container_name: immo-app
    volumes:
      - ./:/var/www
    depends_on:
      - "database"
    ports:
      - 80:80
    networks:
      - immonet
  # Database
  database:
    image: 'mariadb:latest'
    container_name: immo-database
    restart: unless-stopped
    expose:
      - 3306
    environment:
      MYSQL_DATABASE: ${DB_DATABASE}
      MYSQL_ROOT_PASSWORD: ${DB_PASSWORD}
      MYSQL_PASSWORD: ${DB_PASSWORD}
      MYSQL_USER: ${DB_USERNAME}
    volumes:
      - dbdata:/var/lib/mysql
    networks:
      - immonet
  # Database management
  pma:
    image: phpmyadmin:5.1
    container_name: immo-phpmyadmin
    environment:
      - PMA_ARBITRARY=1
      - PMA_HOST=${DB_HOST}
      - PMA_USER=${DB_USERNAME}
      - PMA_PASSWORD=${DB_PASSWORD}
      - PMA_PORT=${DB_PORT}
    depends_on:
      - database
    ports:
      - 8888:80
    networks:
      - immonet
  # Redis
  redis:
    image: redis:alpine
    container_name: immo-redis
    volumes:
      - ./data/redis:/data
    expose:
      - 6379
    networks:
      - immonet
volumes:
  dbdata:
networks:
  immonet:
    driver: bridge
You should not be using www-data as the owner and group of your /var/www folder. It should be your normal user (possibly your WSL user?). If the docker container does not work without www-data, then you need to refactor it a bit, but the issue is around the permissions.
Just try having your normal user as the owner and group of /var/www.
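A minimal sketch of that suggestion, assuming the host user has UID/GID 1000 (check with id -u and id -g); the HOST_UID/HOST_GID build args and the appuser name are illustrative, not part of the original setup. Note that Apache still runs as www-data, so storage/ and bootstrap/cache/ usually still need to be writable by that group:
# Dockerfile (sketch)
ARG HOST_UID=1000
ARG HOST_GID=1000
RUN groupadd -g ${HOST_GID} appuser \
 && useradd -u ${HOST_UID} -g appuser -m appuser
ADD . /var/www
# hand the code to the normal user instead of www-data,
# but keep the runtime-writable dirs group-writable for Apache
RUN chown -R appuser:appuser /var/www \
 && chgrp -R www-data /var/www/storage /var/www/bootstrap/cache \
 && chmod -R g+w /var/www/storage /var/www/bootstrap/cache
Build it with docker build --build-arg HOST_UID=$(id -u) --build-arg HOST_GID=$(id -g) . so the IDs match your account.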

docker container failing to start after running install.sh script [duplicate]

This question already has answers here:
Docker-Compose + Command
(2 answers)
Closed 9 months ago.
I am using this docker-compose file:
version: '3.8'
# Services
services:
  # Nginx Service
  nginx:
    image: nginx:1.21
    ports:
      - 80:80
    volumes:
      - ./src:/var/www/php
      - ./.docker/nginx/conf.d:/etc/nginx/conf.d
    depends_on:
      - php
  # PHP Service
  php:
    build: ./.docker/php
    working_dir: /var/www/php
    volumes:
      - ./src:/var/www/php
    command: /bin/bash -c "./install.sh"
    depends_on:
      mysql:
        condition: service_healthy
  # MySQL Service
  mysql:
    image: mysql/mysql-server:8.0
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_ROOT_HOST: "%"
      MYSQL_DATABASE: demo
    volumes:
      - ./.docker/mysql/my.cnf:/etc/mysql/conf.d/my.cnf
      - mysqldata:/var/lib/mysql
    healthcheck:
      test: mysqladmin ping -h 127.0.0.1 -u root --password=$$MYSQL_ROOT_PASSWORD
      interval: 2s
      retries: 10
  # PhpMyAdmin Service
  phpmyadmin:
    image: phpmyadmin/phpmyadmin:5
    ports:
      - 8080:80
    environment:
      PMA_HOST: mysql
    depends_on:
      mysql:
        condition: service_healthy
# Volumes
volumes:
  mysqldata:
I am trying to run a bash script (install.sh) after the container is created, to run apt-get update, install wget, etc., but the php container fails when I try to run it.
My bash script is:
#!/bin/bash
mkdir testdir && apt-get update && apt-get install wget -y
(this file is here: ./src/install.sh)
It creates the folder correctly, and the logs suggest it is trying to install wget (but it never seems to finish), yet the container never starts correctly.
If I remove the command: /bin/bash -c "./install.sh" line, everything works correctly (but wget is not installed).
I have tried moving the command to a Dockerfile as a RUN command, but it never seems to run.
Any ideas why this is happening?
Thanks
As Hans Kilian said in the comments, docker-compose commands replace anything set by CMD or ENTRYPOINT. Those commands are what the container needs to keep running, and thus it never does anything more than install wget.
You appear to be trying to run a file located under "./install.sh", which is not an absolute path. Try running the command using the absolute path of the file; in my experience, Dockerfiles do not carry a directory change over from one command to the next, so:
RUN cd /xyz
RUN /bin/bash -c "./install.sh"
does not have the same result as
RUN /bin/bash -c "/xyz/install.sh"
(where /xyz is the directory where install.sh is located)
Additionally, make sure the file is marked as executable with chmod when it is copied into your container.
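For example (a sketch; the /xyz path just mirrors the placeholder used above):
COPY install.sh /xyz/install.sh
RUN chmod +x /xyz/install.sh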
However, if all you desire to do is create a directory and install wget, I would simply do this in the Dockerfile:
RUN mkdir testdir
RUN apt-get update && apt-get install -y wget
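If the setup step really has to run at container start rather than at build time, another option (a sketch, assuming the php service is built from an official php:*-fpm base image whose default command is php-fpm) is to chain the script with the normal server process in the compose command, so the container keeps running afterwards:
  php:
    build: ./.docker/php
    working_dir: /var/www/php
    volumes:
      - ./src:/var/www/php
    # run the one-off script, then hand control to the usual server process
    command: /bin/bash -c "./install.sh && exec php-fpm"
    depends_on:
      mysql:
        condition: service_healthy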

docker compose and swarm for laravel app in production

Everyone, I am confused. I am new to the DevOps world and I have no idea how to use docker-compose or swarm in production; what are the best practices in production for both? I followed this DigitalOcean article: How To Install and Set Up Laravel with Docker Compose on Ubuntu 20.04.
Everything works like a charm in the local, test, and dev environments. I tried to take this to the next step for a production environment, and I noticed some things should be changed for production mode, like so:
Removing any volume bindings for application code, so that code stays inside the container and can’t be changed from outside.
Binding to different ports on the host. Check the link for more info: use Compose in production.
I don't know how to achieve point #1.
Here's my Dockerfile below to build my custom Laravel image, and the docker-compose file for my services:
FROM php:7.4-fpm
# Arguments defined in docker-compose.yml
ARG user
ARG uid
# Install system dependencies
RUN apt-get update && apt-get install -y \
curl \
libpng-dev \
libonig-dev \
libxml2-dev \
zip \
unzip
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Download php extension installer
ADD https://github.com/mlocati/docker-php-extension-installer/releases/latest/download/install-php-extensions /usr/local/bin/
# Give php extension installer a permission
RUN chmod +x /usr/local/bin/install-php-extensions
# Install php extensions via php extension installer
RUN install-php-extensions zip
# Install PHP extensions
RUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd
# Get latest Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
# Create system user to run Composer and Artisan Commands
RUN useradd -G www-data,root -u $uid -d /home/$user $user
RUN mkdir -p /home/$user/.composer && \
chown -R $user:$user /home/$user
# Set working directory
WORKDIR /var/www/html
USER $user
version: "3.7"
services:
  app:
    build:
      args:
        user: sammy
        uid: 1000
      context: ./
      dockerfile: Dockerfile
    image: app
    container_name: app
    restart: always
    working_dir: /var/www/html
    volumes:
      - ./:/var/www/html
    networks:
      - backend
  db:
    image: mysql:8.0
    container_name: db
    restart: always
    environment:
      MYSQL_DATABASE: ROUTE
      MYSQL_ROOT_PASSWORD: 2020
      MYSQL_PASSWORD: 2020
      MYSQL_USER: sqluser
    volumes:
      - db:/var/lib/mysql
    networks:
      - backend
  nginx:
    image: nginx:1.21.6
    container_name: nginx
    restart: always
    ports:
      - 8000:80
    networks:
      - backend
    volumes:
      - ./:/var/www/html
      - ./docker-compose/nginx:/etc/nginx/conf.d/
  phpmyadmin:
    image: phpmyadmin
    container_name: pma
    restart: always
    ports:
      - 8283:80
    environment:
      PMA_HOSTS: db
      PMA_ARBITRARY: 1
      PMA_USER: sqluser
      PMA_PASSWORD: 2020
    networks:
      - backend
networks:
  backend:
    driver: bridge
volumes:
  db:
Note:
In the Nginx service I created two shared volumes. The first one synchronizes contents from the current directory to /var/www/html inside the container. This way, when you make local changes to the application files, they are quickly reflected in the application being served by Nginx inside the container (which is not good for production). The second volume makes sure our Nginx configuration file, located at docker-compose/nginx/, is copied to the container’s Nginx configuration folder.
I tried to remove the first volume but keep the second one to use my custom configuration, but it did not work at all. Why?
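Not an authoritative recipe, but a sketch of how point #1 is commonly achieved: bake the code into the image and drop the ./:/var/www/html bind mounts from the production compose file (the docker-compose.prod.yml name below is only an illustrative convention).
# Dockerfile (production sketch): add between WORKDIR /var/www/html and USER $user
COPY . /var/www/html
RUN chown -R $user:www-data /var/www/html \
 && composer install --no-dev --optimize-autoloader

# docker-compose.prod.yml (sketch of the app service)
services:
  app:
    build:
      args:
        user: sammy
        uid: 1000
      context: ./
      dockerfile: Dockerfile
    image: app
    restart: always
    # no ./:/var/www/html bind mount here; the code ships inside the image
    networks:
      - backend
The nginx service would also lose its ./:/var/www/html bind mount; in production its static files are usually either copied into a dedicated nginx image or shared through a named volume.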

docker is not found when running a docker command in entrypoint.sh

I'm getting
app_1 | ./entrypoint.sh: line 2: docker: command not found
when running this line of code in entrypoint.sh
docker exec -it fullstacktypescript_database_1 psql -U postgres -c "CREATE DATABASE elitypescript"
How would I properly execute this command?
entrypoint.sh
# entrypoint.sh
docker exec -it fullstacktypescript_database_1 psql -U postgres -c "CREATE DATABASE elitypescript"
npm run seed # my attempt to run seed first before the server kicks in, but it doesn't work
npm run server
docker-compose.yml
# docker-compose.yml
version: "3"
services:
  app:
    build: ./server
    depends_on:
      - database
    ports:
      - 5000:5000
    environment:
      PSQL_HOST: database
      PSQL_PORT: 5430
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-password}
      POSTGRES_USER: ${POSTGRES_USER:-postgres}
      POSTGRES_DB: ${POSTGRES_DB:-elitypescript}
    entrypoint: ["/bin/bash", "./entrypoint.sh"]
  client:
    build: ./client
    image: react_client
    links:
      - app
    working_dir: /home/node/app/client
    volumes:
      - ./:/home/node/app
    ports:
      - 3001:3001
    command: npm run start
    env_file:
      - ./client/.env
  database:
    image: postgres:9.6.8-alpine
    volumes:
      - database:/var/lib/postgresql/data
    ports:
      - 3030:5439
volumes:
  database:
Try this Dockerfile:
FROM node:10.6.0
COPY . /home/app
WORKDIR /home/app
COPY package.json ./
RUN npm install
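# download a standalone Docker CLI binary so the docker command is available inside this container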
ENV DOCKERVERSION=18.03.1-ce
RUN curl -fsSLO https://download.docker.com/linux/static/stable/x86_64/docker-${DOCKERVERSION}.tgz \
&& tar xzvf docker-${DOCKERVERSION}.tgz --strip 1 -C /usr/local/bin docker/docker \
&& rm docker-${DOCKERVERSION}.tgz
EXPOSE 5000
You are trying to run a docker command inside a docker container. In most cases this is a very bad approach and you should avoid it. But if you really need it, and you really understand what you are doing, you have to apply Docker-in-Docker (dind).
As far as I understand, you need to run the script CREATE DATABASE elitypescript. The better option would be to apply the sidecar pattern: run another container with a PostgreSQL client that runs your script.
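A sketch of that sidecar idea (the create-db service name is illustrative): a short-lived container based on the official postgres image that only runs the CREATE DATABASE statement and exits. Note that depends_on does not wait for the database to be ready, so in practice you may need a healthcheck or a retry loop.
  create-db:
    image: postgres:9.6.8-alpine
    depends_on:
      - database
    environment:
      PGPASSWORD: ${POSTGRES_PASSWORD:-password}
    # one-shot client container; exits after the statement runs
    entrypoint: ["psql", "-h", "database", "-U", "postgres", "-c", "CREATE DATABASE elitypescript"]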
Link the containers together and connect using the hostname.
# docker-compose
services:
  app:
    links:
      - database
    ...
then just:
# entrypoint.sh
# the database container is available under the hostname database;
# use the container's internal port 5432 (the 3030:5439 mapping only applies on the host)
psql -h database -p 5432 -U postgres -c "CREATE DATABASE elitypescript"
Links are a legacy option, but easier to use than networks.
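For the psql call above to work, the client binary has to exist in the app image; a minimal sketch, assuming the ./server image is Debian-based like the node:10.6.0 image suggested earlier (the package name is Debian's):
# ./server/Dockerfile (sketch)
FROM node:10.6.0
# provide the psql client used by entrypoint.sh
RUN apt-get update && apt-get install -y postgresql-client \
 && rm -rf /var/lib/apt/lists/*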

Docker compose working_dir issue

I am trying to run a golang app using docker-compose, below is my compose configuration.
version: '2'
services:
  # Application container
  go:
    image: golang:1.8-alpine
    ports:
      - "80:8080"
    links:
      - mongodb
    environment:
      DEBUG: 'true'
      PORT: '8080'
    working_dir: /go/src/simple-golang-app
    command: go run main.go
    volumes:
      - ./simple-golang-app:/go/src/simple-golang-app
  mongodb:
    image: mvertes/alpine-mongo:3.2.3
    restart: unless-stopped
    ports:
      - "27017:27017"
On running the compose using the command "docker-compose up" I get the error "stat main.go: no such file or directory" even when main.go is available in the working directory.
It works fine when your host dir layout is:
oxo#thor ~/Dropbox/Documents/code/docker/golang_working_dir $ find .
.
./docker-compose.yaml
./simple-golang-app
./simple-golang-app/main.go
so here we
cd ~/Dropbox/Documents/code/docker/golang_working_dir
docker-compose up
For a more complex build involving dependencies I use a Dockerfile:
FROM golang:1.8-alpine
RUN mkdir -p /go/src/simple-golang-app/
COPY simple-golang-app/main.go /go/src/simple-golang-app
WORKDIR /go/src/simple-golang-app
RUN apk add --no-cache git mercurial && go get -v -t ./... && apk del git mercurial
RUN go install ./...
RUN go build
ENV PORT 9000
Now update your docker-compose.yaml to use this new image:
old
image: golang:1.8-alpine
new
image: nirmal_golang_alpine:latest
so your commands are:
docker build --tag nirmal_golang_alpine:latest .
docker-compose up
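Putting that together, the go service in docker-compose.yaml would look roughly like this (a sketch; the bind mount still overlays the source baked into the image, which is fine for development):
  go:
    image: nirmal_golang_alpine:latest
    ports:
      - "80:8080"
    links:
      - mongodb
    environment:
      DEBUG: 'true'
      PORT: '8080'
    working_dir: /go/src/simple-golang-app
    command: go run main.go
    volumes:
      - ./simple-golang-app:/go/src/simple-golang-app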
