GitLab Runner CI/CD does not check out and pull the latest commit - continuous-integration

I want to set up CI/CD with gitlab-runner and Docker Swarm. The problem is that when I deploy, the commit is not checked out, or is checked out without my latest changes, and I would like to know whether the problem lies with GitLab, Docker, or the Docker build. My .gitlab-ci.yml looks like this:
stages:
  - build
  - deploy

build_image:
  stage: build
  image: docker:git
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: overlay
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker build -t registry.gitlab.com/insuretech1/backend:${CI_COMMIT_REF_SLUG} ./DockerFiles/Worker
    - docker push registry.gitlab.com/insuretech1/backend:${CI_COMMIT_REF_SLUG}
  only:
    - branches

deploy_staging:
  stage: deploy
  image: rastasheep/ubuntu-sshd:latest
  script:
    # add the server as a known host
    - ssh-keyscan 46.4.151.121 >> ~/.ssh/known_hosts
    - chmod 600 ~/.ssh/known_hosts
    # add the ssh key stored in the SSH_PRIVATE_KEY variable to the agent store
    - eval $(ssh-agent -s)
    - touch key.txt
    - echo "$SSH_PRIVATE_KEY" >> key.txt
    - chmod 600 key.txt
    - ssh-add key.txt
    # log into the Docker registry
    - ssh alireza@46.4.151.121 "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com"
    # stop the container, remove the image
    - ssh alireza@46.4.151.121 "docker stop dockergitlab_${CI_COMMIT_REF_SLUG}" || true
    - ssh alireza@46.4.151.121 "docker rm dockergitlab_${CI_COMMIT_REF_SLUG}" || true
    - ssh alireza@46.4.151.121 "docker rmi registry.gitlab.com/insuretech1/backend:${CI_COMMIT_REF_SLUG}" || true
    # start the new container
    - ssh alireza@46.4.151.121 "docker run --name dockergitlab_${CI_COMMIT_REF_SLUG} -d registry.gitlab.com/insuretech1/backend:${CI_COMMIT_REF_SLUG}"
  only:
    - branches
  except:
    - master
I have also put my pipeline log below, which might help describe the situation further:
$ eval "$CI_PRE_CLONE_SCRIPT"
00:02
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/insuretech1/backend/.git/
Created fresh repository.
From https://gitlab.com/insuretech1/backend
* [new ref] refs/pipelines/124187268 -> refs/pipelines/124187268
* [new branch] develop -> origin/develop
Checking out 735209a2 as develop...
Skipping Git submodules setup
$ docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
03:43
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
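(As the warning in the log notes, the login step could instead pipe the token via --password-stdin; a sketch, using the same $CI_BUILD_TOKEN variable as the original job:
echo "$CI_BUILD_TOKEN" | docker login -u gitlab-ci-token --password-stdin registry.gitlab.com
)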
Here is the content of the Dockerfile I use for my build:
FROM debian:buster
MAINTAINER Alireza Rahmani Khalili "alirezarahmani@live.com"
ENV TERM xterm
RUN apt-get update --fix-missing && apt-get install -y --force-yes curl sudo vim
RUN apt-get install -y --force-yes wget apt-transport-https lsb-release ca-certificates
RUN wget -O /etc/apt/trusted.gpg.d/php.gpg https://packages.sury.org/php/apt.gpg
RUN echo "deb https://packages.sury.org/php/ $(lsb_release -sc) main" > /etc/apt/sources.list.d/php.list
RUN echo "deb http://ftp.uk.debian.org/debian buster-backports main" >> /etc/apt/sources.list
RUN wget -O /etc/apt/trusted.gpg.d/php.gpg https://packages.sury.org/php/apt.gpg
RUN echo "deb https://packages.sury.org/php/ $(lsb_release -sc) main" > /etc/apt/sources.list.d/php.list
RUN DEBIAN_FRONTEND="noninteractive" apt-get update && apt-get install -y --force-yes \
nginx \
php7.3 \
php7.3-cli \
php7.3-fpm \
php7.3-curl \
php7.3-json \
php7.3-mysql \
php7.3-sqlite \
php7.3-xml \
php7.3-intl \
php7.3-mbstring \
php7.3-xdebug \
php-memcached \
git \
openssh-server \
php7.3-gd \
zip \
php7.3-zip
# configure php-fpm
RUN sed -i 's/^;*clear_env = .*/clear_env = no/' /etc/php/7.3/fpm/pool.d/www.conf
RUN curl -sS https://getcomposer.org/installer | php && \
mv composer.phar /usr/local/bin/composer && chmod +x /usr/local/bin/composer
RUN mkdir /var/run/sshd
RUN echo 'root:root' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
RUN echo "UseDNS no" >> /etc/ssh/sshd_config
RUN echo "KexAlgorithms diffie-hellman-group1-sha1" >> /etc/ssh/sshd_config
RUN echo "fastcgi_param PATH_TRANSLATED \$document_root\$fastcgi_script_name;" >> /etc/nginx/fastcgi_params
RUN mkdir /etc/nginx/ssl
RUN openssl ecparam -out /etc/nginx/ssl/nginx.key -name prime256v1 -genkey
RUN openssl req -new -batch -key /etc/nginx/ssl/nginx.key -out /etc/nginx/ssl/csr.pem
RUN openssl req -x509 -nodes -days 365 -key /etc/nginx/ssl/nginx.key -in /etc/nginx/ssl/csr.pem -out /etc/nginx/ssl/nginx.pem
RUN chmod 600 /etc/nginx/ssl/*
RUN ln -sf /dev/stdout /var/log/nginx/access.log
RUN ln -sf /dev/stderr /var/log/nginx/error.log
ADD docker-entrypoint.sh /usr/bin/docker-entrypoint
RUN chmod +x /usr/bin/docker-entrypoint
RUN sed -i 's/^user nginx;/user www-data;/' /etc/nginx/nginx.conf
RUN echo "apc.enable_cli=1" >> /etc/php/7.3/cli/php.ini
RUN echo "apc.shm_size=128M" >> /etc/php/7.3/fpm/conf.d/20-apcu.ini
RUN sed -i "s/\(max_execution_time *= *\).*/\1180/" /etc/php/7.3/fpm/php.ini
RUN sed -i "s/\(upload_max_filesize *= *\).*/\1100M/" /etc/php/7.3/fpm/php.ini
RUN sed -i "s/\(post_max_size *= *\).*/\1100M/" /etc/php/7.3/fpm/php.ini
RUN sed -i "s/\(^.*max_input_vars *= *\).*/max_input_vars = 10000/" /etc/php/7.3/fpm/php.ini
RUN sed -i "s/\(pm.max_children = 5\).*/\pm.max_children = 50/" /etc/php/7.3/fpm/pool.d/www.conf
RUN sed -i "s/\(pm.max_spare_servers = 3\).*/\pm.max_spare_servers = 10/" /etc/php/7.3/fpm/pool.d/www.conf
RUN echo "xdebug.default_enable=1" >> /etc/php/7.3/fpm/conf.d/20-xdebug.ini
RUN echo "xdebug.profiler_output_dir=/var/www/cachegrind/" >> /etc/php/7.3/fpm/conf.d/20-xdebug.ini
RUN echo "xdebug.profiler_enable_trigger=1" >> /etc/php/7.3/fpm/conf.d/20-xdebug.ini
RUN echo "xdebug.profiler_output_name= cachegrind.out" >> /etc/php/7.3/fpm/conf.d/20-xdebug.ini
RUN mkdir /root/.ssh/
ADD default.conf /etc/nginx/conf.d/default.conf
ADD default.conf /etc/nginx/sites-enabled/default
ADD default.conf /etc/nginx/sites-available/default
EXPOSE 22 443 80
WORKDIR /var/www/
ENTRYPOINT ["docker-entrypoint"]
CMD ["nginx", "-g", "daemon off;"]
And here is the content of the Docker Compose file I use in my CI/CD:
version: '3'
services:
  worker:
    image: registry.gitlab.com/insuretech1/backend:develop
    ports:
      - 0.0.0.0:80:80
    depends_on:
      - mysql
    deploy:
      mode: replicated
      replicas: 3
      # service resource management
      resources:
        # Hard limit - Docker does not allow more to be allocated
        limits:
          cpus: '0.25'
          memory: 512M
        # Soft limit - Docker makes best effort to return to it
        reservations:
          cpus: '0.25'
          memory: 256M
      # service restart policy
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s
      # service update configuration
      update_config:
        parallelism: 1
        delay: 10s
        failure_action: continue
        monitor: 60s
        max_failure_ratio: 0.3
    volumes:
      - /var/www/backend:/var/www
  mysql:
    image: mariadb:10.4
    ports:
      - 0.0.0.0:3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - /opt/mysql_data:/var/lib/mysql
    deploy:
      placement:
        constraints: [node.role == manager]
  redis:
    image: redis
    deploy:
      placement:
        constraints: [node.role == manager]
The issue is that I cannot see the latest changes from my commit on my server (I have to git pull manually to fetch the latest changes). Is there anything wrong?

First of all, in your Dockerfile you should copy the content of the project directory into the Docker container; that is what will keep your Git changes in the container, for example:
COPY . /var/www/
The other problem is that in your Docker Compose file you have:
volumes:
- /var/www/backend:/var/www
This bind mount overrides the changes that Git baked into your container, and that is why you are not able to see them.
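As a minimal sketch (assuming the application code lives in the repository root and should end up in /var/www inside the image), the build would bake the code in and the worker service would drop the bind mount so the baked-in files are actually used:
# In the Dockerfile used by the build_image job:
COPY . /var/www/
# In the compose file, remove the bind mount from the worker service:
worker:
  image: registry.gitlab.com/insuretech1/backend:develop
  # volumes:
  #   - /var/www/backend:/var/www    # this mount hides the image's /var/www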

Related

Gitlab remote server push on the deploy step does not push new files

I have the CI/CD script below (I removed the test steps from it, which work fine).
Our repo has two branches, ready_4_release and master (prod); any feature branch gets merged into ready_4_release, and at the end of the week we push the changes into master.
Any feature branch that gets merged into ready_4_release gets deployed to the remote Airflow servers.
Problem:
When a new MR gets merged into the ready_4_release branch, the changes are not deployed to the remote servers, even though the changes from the feature branch do get merged into ready_4_release, and I am not able to figure out why.
When we rerun the deploy step, the changes do appear on the remote server (Airflow),
OR when a new MR is submitted, the previous changes get deployed to the remote servers.
Below is how the branches are set up in GitLab.
This is how the merge request settings are configured on the repo:
stages:
  - test
  - deploy

.ssh-connection: &ssh-connection
  - 'which git || ( apt-get update -y && apt-get install git -y )'
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
  - 'which rsync || ( apt-get update -y && apt-get install rsync -y )'
  - eval $(ssh-agent -s)
  - echo "$LOTBOT_SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  - ssh-keyscan ip-10-0-1-24.ec2.internal >> ~/.ssh/known_hosts
  - chmod 644 ~/.ssh/known_hosts

deploy-dev:
  stage: deploy
  tags:
    - prd-runner
  image: ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/ubuntu
  before_script:
    - echo "Deploying repo dev-data-analytics to Airkube EFS"
    - *ssh-connection
  script:
    - ssh lotbot@10.0.1.24 "sudo rm -r /mnt/airflow_dags/airkube/external_repos/dev-data-analytics/"
    - echo "${PWD}"
    - cd ..
    - mkdir dev-lotlinx-data-analytics && cp -R "${PWD}"/data-analytics/* "${PWD}"/dev-data-analytics
    - cd dev-data-analytics
    # - echo "Git commit id"
    # - git show -s --format=%h
    - echo "${PWD}"
    - ssh lotbot@10.0.1.24 "sudo mkdir -p /mnt/airflow_dags/airkube/external_repos/dev-data-analytics"
    - ssh lotbot@10.0.1.24 "sudo chown -R lotbot:sudo /mnt/airflow_dags/airkube/external_repos/dev-lotlinx-data-analytics/ && sudo chmod -R g+w /mnt/airflow_dags/airkube/external_repos/dev-data-analytics/"
    - rsync -avu --delete --exclude "*__pycache__" "${PWD}"/ lotbot@10.0.1.24:/mnt/airflow_dags/airkube/external_repos/lotlinx-data-analytics;
    - ssh lotbot@10.0.1.24 "sudo sleep 3s"
    - ssh lotbot@10.0.1.24 "sudo chown -R lotbot:sudo /mnt/airflow_dags/airkube/external_repos/dev-lotlinx-data-analytics/ && sudo chmod -R g+w /mnt/airflow_dags/airkube/external_repos/lotlinx-data-analytics/"
  rules:
    - if: '$CI_COMMIT_REF_NAME == "ready_for_release" && $CI_PIPELINE_SOURCE == "push"'

Docker/K8s: OpenSSL SSL_connect: SSL_ERROR_SYSCALL

I am running a k8s CronJob against an endpoint. The test works like a charm locally, even when I sleep infinity at the end of my entrypoint and then curl from inside the container. However, once the cron kicks off I get a funky error:
[ec2-user#ip-10-122-8-121 device-purge]$ kubectl logs appgate-device-cron-job-1618411080-29lgt -n device-purge
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to 52.61.245.214:444
docker-entrypoint.sh
#! /bin/sh
export api_vs_hd=$API_VS_HD
export controller_ip=$CONTROLLER_IP
export password=$PASSWORD
export uuid=$UUID
export token=$TOKEN
# should be logged in after token export
# Test API call: list users
curl -k -H "Content-Type: application/json" \
-H "$api_vs_hd" \
-H "Authorization: Bearer $token" \
-X GET \
https://$controller_ip:444/admin/license/users
# test
# sleep infinity
Dockerfile
FROM harbor/privateop9/python38:latest
# Use root user for packages installation
USER root
# Install packages
RUN yum update -y && yum upgrade -y
# Install curl
RUN yum install curl -y \
&& curl --version
# Install zip/unzip/gunzip
RUN yum install zip unzip -y \
&& yum install gzip -y
# Install wget
RUN yum install wget -y
# Install jq
RUN wget -O jq https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64
RUN chmod +x ./jq
RUN cp jq /usr/bin
# Install aws cli
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
RUN unzip awscliv2.zip
RUN ./aws/install
## set working directory
WORKDIR /home/app
# Add user
RUN groupadd --system user && adduser --system user --no-create-home --gid user
RUN chown -R user:user /home/app && chmod -R 777 /home/app
# Make sure that your shell script file is in the same folder as your dockerfile while running the docker build command as the below command will copy the file to the /home/root/ folder for execution
# COPY . /home/root/
COPY ./docker-entrypoint.sh /home/app
RUN chmod +x docker-entrypoint.sh
# Switch to non-root user
USER user
# Run service
ENTRYPOINT ["/home/app/docker-entrypoint.sh"]
Cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: device-cron-job
  namespace: device-purge
spec:
  # Cron time is set according to server time; check the server time zone and set accordingly.
  schedule: "*/2 * * * *" # test
  jobTemplate:
    spec:
      template:
        spec:
          imagePullSecrets:
            - name: appgate-cron
          containers:
            - name: device-cron-pod
              image: harbor/privateop9/python38:device-purge
              env:
                - name: API_VS_HD
                  value: "Accept:application/vnd.appgate.peer-v13+json"
                - name: CONTROLLER_IP
                  value: "value"
                - name: UUID
                  value: "value"
                - name: TOKEN
                  value: >-
                    curl -H "Content-Type: application/json" -H "${api_vs_hd}" --request POST
                    --data "{\"providerName\":\"local\",\"username\":\"admin\",\"password\":\"$password\",\"deviceId\":\"$uuid\"}"
                    https://$controller_ip:444/admin/login --insecure | jq -r '.token'
                - name: PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: password
                      key: password
              imagePullPolicy: Always
          restartPolicy: OnFailure
      backoffLimit: 3
Please help! I am running out of ideas....
The issue in my post was on the server itself, due to a firewall and IP whitelisting set up on the AWS cloud account. After that problem was addressed by the security team on the account, I was able to get past the blocker.

Phalcon Installation In Docker

I am trying to install Phalcon in Docker and I cannot figure out how to do it.
I have searched the web for solutions and couldn't manage to make it work.
I successfully installed Docker for Windows and it seems to work fine, but I cannot find a way to install Phalcon.
Can anyone help me install Phalcon in Docker?
Thanks in advance
You can create a Dockerfile which compiles Phalcon for you:
FROM php:7.2-fpm
ENV PHALCON_VERSION=3.4.2
RUN curl -sSL "https://codeload.github.com/phalcon/cphalcon/tar.gz/v${PHALCON_VERSION}" | tar -xz \
&& cd cphalcon-${PHALCON_VERSION}/build \
&& ./install \
&& cp ../tests/_ci/phalcon.ini $(php-config --configure-options | grep -o "with-config-file-scan-dir=\([^ ]*\)" | awk -F'=' '{print $2}') \
&& cd ../../ \
&& rm -r cphalcon-${PHALCON_VERSION}
If you are looking for PHP 7 + Apache + MySQL with Phalcon 3.4.2, then here is my solution. For Phalcon 4 there are additional steps, like installing PSR, otherwise it will raise errors about required dependencies.
CONSIDER THE FOLLOWING STRUCTURE AND PUT FILES ACCORDINGLY
docker-compose.yml
www (directory where you will put your code)
  index.php
.docker (directory)
  Dockerfile
dump (directory, just to persist your MySQL data)
Here is the Dockerfile, which you will put in the .docker directory:
FROM php:7.1.2-apache
RUN docker-php-ext-install pdo_mysql
RUN sed -i '/jessie-updates/d' /etc/apt/sources.list # Now archived
RUN apt update
RUN apt install -y \
apt-transport-https \
lsb-release \
ca-certificates \
wget \
curl \
nano \
dialog \
net-tools \
git \
sudo \
openssl \
libpcre3-dev
RUN wget -O /etc/apt/trusted.gpg.d/php.gpg https://packages.sury.org/php/apt.gpg
RUN sh -c 'echo "deb https://packages.sury.org/php/ $(lsb_release -sc) main" > /etc/apt/sources.list.d/php.list'
RUN apt update && apt install -y \
php7.1-curl \
php7.1-mbstring \
php7.1-gettext \
php7.1-gd \
php7.1-fileinfo \
php7.1-json \
php7.1-mcrypt \
php7.1-redis \
php7.1-intl \
php7.1-xml \
php7.1-zip
ARG PHALCON_VERSION=3.4.2
ARG PHALCON_EXT_PATH=php7/64bits
RUN set -xe && \
# Compile Phalcon
curl -LO https://github.com/phalcon/cphalcon/archive/v${PHALCON_VERSION}.tar.gz && \
tar xzf ${PWD}/v${PHALCON_VERSION}.tar.gz && \
docker-php-ext-install -j $(getconf _NPROCESSORS_ONLN) ${PWD}/cphalcon-${PHALCON_VERSION}/build/${PHALCON_EXT_PATH} && \
# Remove all temp files
rm -r \
${PWD}/v${PHALCON_VERSION}.tar.gz \
${PWD}/cphalcon-${PHALCON_VERSION}
RUN a2enmod rewrite
This sets up PHP and Phalcon along with Apache.
You can use Docker Compose to run all the containers required to make up your application.
Here is your docker-compose file:
version: "2"
services:
www:
build: ./.docker
ports:
- "8001:80"
volumes:
- ./www:/var/www/html/
links:
- db
networks:
- default
db:
image: mysql:5.7.13
ports:
- "3306:3306"
environment:
MYSQL_DATABASE: myDb
MYSQL_USER: user
MYSQL_PASSWORD: test
MYSQL_ROOT_PASSWORD: test
volumes:
- ./dump:/var/lib/mysql
phpmyadmin:
image: phpmyadmin/phpmyadmin
links:
- db:db
ports:
- 8000:80
environment:
MYSQL_USER: user
MYSQL_PASSWORD: test
MYSQL_ROOT_PASSWORD: test
After this, just add an index.php in www:
<?php echo phpinfo();?>
After this, localhost:8001 should be accessible. Happy coding, and if there are any improvements to be made here, please let me know; as of now this works great for me as my first Phalcon configuration, and I wasted a lot of time on it.
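For reference, the whole stack from the compose file above can be brought up from the project root with (a sketch):
docker-compose up -d --build
# then http://localhost:8001 should serve the phpinfo() page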
You can install it through a Dockerfile.
Kindly fetch this link according to your operating system:
https://github.com/phalcon/dockerfiles
Plus, you can get it through the repo:
https://hub.docker.com/u/phalconphp
For Phalcon it is important to compile on the same hardware/CPU. Alternatively, you can pass the flags yourself:
phpize \
&& ./configure CFLAGS="-O2 -g" \
&& make -B \
&& make install
Not sure if it is still relevant, but there is a solution here:
https://keepforyourself.com/coding/php/how-to-setup-phalcon-framework-in-a-docker-container-v2/
FROM alpine:latest
RUN apk add --no-cache \
apache2-proxy \
apache2-ssl \
apache2-utils \
curl \
git \
logrotate \
openssl \
git bash php php7-dev apache2 gcc \
libc-dev make php7-pdo php7-json \
php7-session php7-pecl-psr \
php7-apache2
WORKDIR /
RUN git clone --depth=1 "git://github.com/phalcon/cphalcon.git"
WORKDIR /cphalcon/build
RUN ./install
RUN echo "extension=phalcon.so" > /etc/php7/conf.d/phalcon.ini
RUN apk del libc-dev zlib-dev php7-dev libedit-dev musl-dev pcre2-dev ncurses-dev \
expat xz-libs curl musl-utils make libedit zlib ncurses-libs libstdc++ pcre git bash musl argon2-libs
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
RUN rm -rf /cphalcon
WORKDIR /var/www/localhost/htdocs
RUN echo "<?php phpinfo(); ?>" > /var/www/localhost/htdocs/index.php
EXPOSE 80
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]

Dockerfile with entrypoint for executing bash script

I downloaded the Docker files from the official repository (version 2.3), and now I want to build the image and load some local data (test.json) into the container. It is not enough just to run COPY test.json /usr/share/elasticsearch/data/, because in that case the indexing of the data is not done.
What I want to achieve is to be able to run sudo docker run -d -p 9200:9200 -p 9300:9300 -v /home/gosper/tests/tempESData/:/usr/share/elasticsearch/data test/elasticsearch, and after it executes I want to be able to see the mapped data at http://localhost:9200/tests/test/999.
If I use the Dockerfile and *sh script given below, I get the following error: Failed to connect to localhost port 9200: Connection refused
This is the Dockerfile from which I build the image:
FROM java:8-jre
# grab gosu for easy step-down from root
ENV GOSU_VERSION 1.7
RUN set -x \
&& wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$(dpkg --print-architecture)" \
&& wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$(dpkg --print-architecture).asc" \
&& export GNUPGHOME="$(mktemp -d)" \
&& gpg --keyserver ha.pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 \
&& gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu \
&& rm -r "$GNUPGHOME" /usr/local/bin/gosu.asc \
&& chmod +x /usr/local/bin/gosu \
&& gosu nobody true
# https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-repositories.html
# https://packages.elasticsearch.org/GPG-KEY-elasticsearch
RUN apt-key adv --keyserver ha.pool.sks-keyservers.net --recv-keys 46095ACC8548582C1A2699A9D27D666CD88E42B4
ENV ELASTICSEARCH_VERSION 2.3.4
ENV ELASTICSEARCH_REPO_BASE http://packages.elasticsearch.org/elasticsearch/2.x/debian
RUN echo "deb $ELASTICSEARCH_REPO_BASE stable main" > /etc/apt/sources.list.d/elasticsearch.list
RUN set -x \
&& apt-get update \
&& apt-get install -y --no-install-recommends elasticsearch=$ELASTICSEARCH_VERSION \
&& rm -rf /var/lib/apt/lists/*
ENV PATH /usr/share/elasticsearch/bin:$PATH
WORKDIR /usr/share/elasticsearch
RUN set -ex \
&& for path in \
./data \
./logs \
./config \
./config/scripts \
; do \
mkdir -p "$path"; \
chown -R elasticsearch:elasticsearch "$path"; \
done
COPY config ./config
VOLUME /usr/share/elasticsearch/data
COPY docker-entrypoint.sh /
EXPOSE 9200 9300
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["elasticsearch"]
COPY template.json /usr/share/elasticsearch/data/
RUN /bin/bash -c "source /docker-entrypoint.sh"
This is the docker-entrypoint.sh, to which I added the line curl -XPOST http://localhost:9200/uniko-documents/document/978-1-60741-503-9 -d "/usr/share/elasticsearch/data/template.json":
#!/bin/bash
set -e
# Add elasticsearch as command if needed
if [ "${1:0:1}" = '-' ]; then
set -- elasticsearch "$@"
fi
# Drop root privileges if we are running elasticsearch
# allow the container to be started with `--user`
if [ "$1" = 'elasticsearch' -a "$(id -u)" = '0' ]; then
# Change the ownership of /usr/share/elasticsearch/data to elasticsearch
chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/data
set -- gosu elasticsearch "$@"
#exec gosu elasticsearch "$BASH_SOURCE" "$@"
fi
curl -XPOST http://localhost:9200/tests/test/999 -d "/usr/share/elasticsearch/data/test.json"
# As argument is not related to elasticsearch,
# then assume that user wants to run his own process,
# for example a `bash` shell to explore this image
exec "$#"
Remove the following from your docker-entrypoint.sh:
curl -XPOST http://localhost:9200/tests/test/999 -d "/usr/share/elasticsearch/data/test.json"
It's running before you exec the service at the end.
In your Dockerfile, move the following after any commands that modify the directory:
VOLUME /usr/share/elasticsearch/data
Once you create a volume, future changes to the directory are typically ignored.
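In other words, the tail of the Dockerfile would order the steps roughly like this (a sketch built from the lines already in the question):
COPY config ./config
COPY template.json /usr/share/elasticsearch/data/
# declare the volume only after the directory contents are final
VOLUME /usr/share/elasticsearch/data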
Lastly, this line at the end of your Dockerfile likely doesn't do what you think, so I'd remove it:
RUN /bin/bash -c "source /docker-entrypoint.sh"
The entrypoint.sh should run when you start the container, not while you're building the image.
@Klue, in case you still need it: you need to change the -d option on your curl command to --data-binary. -d strips the newlines; that's why you are getting the errors.
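Put together, the indexing call would look something like this (note the @ prefix, which tells curl to read the request body from the file instead of sending the literal path string):
curl -XPOST http://localhost:9200/tests/test/999 --data-binary "@/usr/share/elasticsearch/data/test.json"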

Wrong permissions in volume in Docker container

I run Docker 1.8.1 on OS X 10.11 via a local docker-machine VM.
I have the following docker-compose.yml:
web:
  build: docker/web
  ports:
    - 80:80
    - 8080:8080
  volumes:
    - $PWD/cms:/srv/cms
My Dockerfile looks like this:
FROM alpine
# install nginx and php
RUN apk add --update \
nginx \
php \
php-fpm \
php-pdo \
php-json \
php-openssl \
php-mysql \
php-pdo_mysql \
php-mcrypt \
php-ctype \
php-zlib \
supervisor \
wget \
curl \
&& rm -rf /var/cache/apk/*
RUN mkdir -p /etc/nginx && \
mkdir -p /etc/nginx/sites-enabled && \
mkdir -p /var/run/php-fpm && \
mkdir -p /var/log/supervisor && \
mkdir -p /srv/cms
RUN rm /etc/nginx/nginx.conf
ADD nginx.conf /etc/nginx/nginx.conf
ADD thunder.conf /etc/nginx/sites-enabled/thunder.conf
ADD nginx-supervisor.ini /etc/supervisor.d/nginx-supervisor.ini
WORKDIR "/srv/cms"
VOLUME "/srv/cms"
EXPOSE 80
EXPOSE 8080
EXPOSE 22
CMD ["/usr/bin/supervisord"]
When I run everything with docker-compose up, everything works fine and my volumes are mounted at the correct place.
But the permissions in the mounted folder /srv/cms look wrong. The user is "1000" and the group is "50" in the container, and the web server cannot create any files in this folder because it runs as the user "root".
1) General idea: Docker is not Vagrant. It is wrong to put two different services into one container! Split it into two different images and link them together; don't build a monolithic image like this.
Check and follow https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/
Avoid installing unnecessary packages
Run only one process per container
Minimize the number of layers
If you do this:
you will remove your supervisor
you can decrease the number of layers
It should be something like (example):
FROM alpine
RUN apk add --update \
wget \
curl
RUN apk add --update \
php \
php-fpm \
php-pdo \
php-json \
php-openssl \
php-mysql \
php-pdo_mysql \
php-mcrypt \
php-ctype \
php-zlib
RUN usermod -u 1000 www-data
RUN rm -rf /var/cache/apk/*
EXPOSE 9000
For nginx it is enough to use the default image and mount the configs.
The docker-compose file looks like:
nginx:
  image: nginx
  container_name: site.dev
  volumes:
    - ./myconf1.conf:/etc/nginx/conf.d/myconf1.conf
    - ./myconf2.conf:/etc/nginx/conf.d/myconf2.conf
    - $PWD/cms:/srv/cms
  ports:
    - "80:80"
  links:
    - phpfpm
phpfpm:
  build: ./phpfpm/
  container_name: phpfpm.dev
  command: php5-fpm -F --allow-to-run-as-root
  volumes:
    - $PWD/cms:/srv/cms
2)
Add RUN usermod -u 1000 www-data into the Dockerfile for the php container; it will fix the problem with permissions.
For the Alpine version you need to use:
RUN apk add shadow && usermod -u 1000 www-data && groupmod -g 1000 www-data
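You can then verify the mapping from the host (a sketch, using the container_name values from the compose file above):
docker exec phpfpm.dev id www-data
# expected output: uid=1000(www-data) gid=1000(www-data) ...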
