I'm taking a stab at setting up automatic deployment with GitLab CI/CD.
My project has a couple of dependencies managed through Composer. I read that the vendor directory should ideally be added to the .gitignore file so the dependencies aren't committed to the repository, and that's what I did.
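For reference, the entry in my .gitignore is just:

/vendor/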
When I tested the automatic deployment, the modified files were uploaded, but I got errors about missing vendor files, which I expected. So the question is: how do I install these dependencies on the remote server from the GitLab CI/CD environment?
My .gitlab-ci.yml file looks like this:
staging:
  stage: staging
  before_script:
    - apt-get update -qq && apt-get install -y -qq lftp
  script:
    - lftp -c "set ftp:ssl-allow no; open -u $USERNAME,$PASSWORD $HOST; mirror -Rev . /public_html --ignore-time --parallel=10 --exclude-glob .git* --exclude .git/"
  environment:
    name: staging
    url: http://staging.example.com
  only:
    - staging
If you look at GitLab's documentation for caching PHP dependencies, you'll notice that it installs Composer in the CI job itself. You can leverage the same approach to download the project dependencies before uploading everything through lftp:
staging:
  stage: staging
  before_script:
    # Install git, since Composer usually requires it when installing from source
    - apt-get update -qq && apt-get install -y -qq git
    # Install lftp to upload files to the remote server
    - apt-get update -qq && apt-get install -y -qq lftp
    # Install Composer
    - curl --show-error --silent https://getcomposer.org/installer | php
    # Install project dependencies through Composer (populates the vendor directory)
    - php composer.phar install
  script:
    # Upload files, including the vendor directory
    - lftp -c "set ftp:ssl-allow no; open -u $USERNAME,$PASSWORD $HOST; mirror -Rev . /public_html --ignore-time --parallel=10 --exclude-glob .git* --exclude .git/"
  environment:
    name: staging
    url: http://staging.example.com
  only:
    - staging
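One possible refinement (my assumption, not something from the GitLab docs): a deployment usually doesn't need dev packages, so the install step could be tightened to:

- php composer.phar install --no-dev --optimize-autoloader --no-interaction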
My team and I are familiarizing ourselves with Azure DevOps. I'd like to set up a CI pipeline for the standard Laravel 9 project as a proof of concept, but haven't been successful.
I also haven't been able to find a template.
All that needs to happen in our pipeline is for the new code to be built, tested, and containerised as a Docker image that can then be pushed to a registry for later deployment.
If anyone could help point me in the right direction, I'd greatly appreciate it!
Using the YAML file below, I've continuously run into errors that I don't understand. With this version, the pipeline fails a unit test that only asserts whether true is true.
trigger:
  - main

pool:
  vmImage: 'Ubuntu-Latest'

variables:
  PHP_VERSION: '8.0.2'
  PHPUNIT_VERSION: '9.5.10'

steps:
  - task: NodeTool@0
    inputs:
      versionSpec: '14.x'
  - script: |
      sudo apt-get update
      sudo apt-get install -y php${{ variables.PHP_VERSION }} php${{ variables.PHP_VERSION }}-cli php${{ variables.PHP_VERSION }}-mbstring unzip
      curl -sS https://getcomposer.org/installer -o composer-setup.php
      sudo php composer-setup.php --install-dir=/usr/local/bin --filename=composer
      sudo composer global require "phpunit/phpunit:${{ variables.PHPUNIT_VERSION }}"
    displayName: 'Install PHP and Composer'
  - script: |
      sudo apt-get install -y git
      git clone https://github.com/RadicalRumin/fluffy-fiesta.git
      cd /fluffy-fiesta
      composer install
    displayName: 'Clone and install dependencies'
  - script: |
      phpunit
    displayName: 'Run PHPUnit tests'
I'm running a GitLab CI pipeline with a CentOS image.
The pipeline has a before_script that runs a set of commands.
.gitlab-ci.yml:
variables:
  WORKSPACE_HOME: '$CI_PROJECT_DIR'
  DELIVERY_HOME: delivery
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"

default:
  image: centos:latest
  cache:
    paths:
      - .cache/pip
  before_script:
    - chmod u+x devops/scripts/*.sh
    - devops/scripts/install-ci.sh
    - python3 -m ensurepip --upgrade
    - cp .env.docker.dist .env
    - pip3 install --upgrade pip
    - pip3 install -r requirements.txt
install-ci.sh:
sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-Linux-* &&\
sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-Linux-*
yum -y update
yum -y install gcc gcc-c++ make
yum -y install python3.8
yum -y install python3-setuptools
yum -y groupinstall "Development Tools"
yum -y install python3-devel
yum -y install postgresql-server
yum -y install postgresql-devel
yum -y install postgresql-libs
yum -y install python3-pip
timedatectl set-timezone Europe/Paris
yum -y install sqlite-devel
The issue is that every time I run the CI pipeline, it takes time to install CentOS and all its packages.
Is there a way to avoid this, or to cache this step somewhere?
You could create your own image in which all your dependencies are preinstalled and use that in your job, instead of installing the dependencies all over again. I would create a dedicated project on your GitLab instance, something like "centos-python-postgress", and within this project create a Dockerfile in which you install everything you need. (You can either copy in your install-ci.sh or RUN the commands directly within your Dockerfile):
FROM centos:latest
RUN sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-Linux-* && sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-Linux-*
RUN yum -y update
RUN yum -y install gcc gcc-c++ make
...
You can now either build the Dockerfile on your machine and push the image manually to this project's container registry, or create a CI pipeline that builds and pushes the image automatically.
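For the manual route, a sketch (the registry path is a placeholder for your own instance, group, and project):

docker build -t registry.gitlab.com/<group>/centos-python-postgress:latest .
docker login registry.gitlab.com
docker push registry.gitlab.com/<group>/centos-python-postgress:latest

For the automated route, a pipeline based on kaniko could look like this: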
stages:
  - build

build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"auth\":\"$(printf "%s:%s" "${CI_REGISTRY_USER}" "${CI_REGISTRY_PASSWORD}" | base64 | tr -d '\n')\"}}}" > /kaniko/.docker/config.json
    - >-
      /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:latest"
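If you also want an immutable tag per commit, kaniko accepts --destination more than once (an optional extra flag, using GitLab's predefined CI_COMMIT_SHORT_SHA variable):

      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"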
Now, instead of using centos:latest in your original project/job, you can use your own image:
variables:
  WORKSPACE_HOME: '$CI_PROJECT_DIR'
  DELIVERY_HOME: delivery
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"

default:
  image: registry.gitlab.com/snowfire/centos-python-postgress:latest
  cache:
    paths:
      - .cache/pip
  before_script:
    - ...
So I have written my automated Robot Framework tests and they are in a GitLab repo. I want to run them automatically once a day.
Is this possible?
Do I need a .gitlab-ci.yml file for it? (If yes, what do I put in it?)
Yes, you can absolutely run Robot Framework tests in GitLab CI; in fact, that is how pipeline tests are typically executed. You just need to build a Docker image that has everything required to run the framework. Here's a sample Dockerfile. I would suggest wrapping the robot invocation in a small bash script (something like robot -d reports *.robot), as sketched after the Dockerfile below.
FROM ubuntu:18.04
ENV LANG=C.UTF-8 LC_ALL=C.UTF-8
RUN apt-get update --fix-missing && \
apt-get install -y python3-setuptools wget git bzip2 ca-certificates curl bash chromium-browser chromium-chromedriver firefox python3.8 python3-pip nano && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
RUN wget https://github.com/mozilla/geckodriver/releases/download/v0.27.0/geckodriver-v0.27.0-linux64.tar.gz
RUN tar xvf geckodriver*
RUN chmod +x geckodriver
RUN mv geckodriver /usr/bin
RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.6 1
RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 2
RUN pip3 install --upgrade pip
RUN ln -s /usr/bin/python3 /usr/bin/python
RUN ln -s /usr/bin/pip3 /usr/bin/pip
RUN pip install rpaframework
COPY . /usr/src/
ADD robot.sh /usr/local/bin/robot.sh
RUN chmod +x /usr/local/bin/robot.sh
WORKDIR /usr/src
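The robot.sh wrapper referenced above could be as simple as this (a sketch; writing to ${CI_PROJECT_DIR}/reports is my assumption, chosen to match the artifact path in the pipeline below):

#!/bin/bash
# Run every suite in the working directory and write logs/reports
# where the robot_tests job collects its artifacts
robot -d "${CI_PROJECT_DIR}/reports" *.robot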
Now you need a .gitlab-ci.yml in your repository with content like this:
stages:
  - build
  - run

variables:
  ARTIFACT_REPORT_PATH: "${CI_PROJECT_DIR}/reports"

build_image:
  stage: build
  variables:
    DOCKER_IMAGE_TAG: ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_NAME}
  before_script:
    # Log in so the push below (and the logout in after_script) can work
    - docker login -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD} ${CI_REGISTRY}
  script:
    - docker build -t ${DOCKER_IMAGE_TAG} .
  after_script:
    - docker push ${DOCKER_IMAGE_TAG}
    - docker logout

robot_tests:
  stage: run
  variables:
    # Use the same tag that build_image pushed
    DOCKER_IMAGE_TAG: ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_NAME}
  image: ${DOCKER_IMAGE_TAG}
  script:
    - robot.sh
  artifacts:
    paths:
      - $ARTIFACT_REPORT_PATH
    when: always
That should be it; once the job finishes, you will see the report output in the job artifacts at that path.
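As for running the tests once a day: GitLab has pipeline schedules (CI/CD > Schedules in the project UI), where you can define a daily cron. If the tests should run only from that schedule, add these two lines to the robot_tests job above (a sketch):

  only:
    - schedules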
Goal: to be able to run a Phoenix mix release on an EC2 instance (this machine: https://hub.docker.com/_/amazonlinux/)
Problem: Running my release produces the following error:
/my_app/erts-11.0.3/bin/beam.smp: /lib64/libtinfo.so.6: no version information available (required by /my_app/erts-11.0.3/bin/beam.smp)
2020-09-08 13:17:33.469328
args: [load_failed,"Failed to load NIF library /my_app/lib/crypto-4.7/priv/lib/crypto: 'libcrypto.so.1.1: cannot open shared object file: No such file or directory'","OpenSSL might not be installed on this system.\n"]
format: "Unable to load crypto library. Failed with error:~n\"~p, ~s\"~n~s"
label: {error_logger,error_msg}
but I have openssl installed in each scenario (OpenSSL 1.0.2k-fips 26 Jan 2017).
Setup:
I create a new phoenix project with:
yes | mix phx.new my_app --no-webpack --no-ecto --no-dashboard --no-gettext
cd my_app
and uncomment the config :my_app, MyAppWeb.Endpoint, server: true line in config/prod.secret.exs to start the server when running the app.
I create the following Dockerfile to build my project:
FROM debian:buster
# Install essential build packages
RUN apt-get update
RUN apt-get install -y wget git locales curl gnupg-agent
# Set locale
RUN sed -i -e 's/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen && \
dpkg-reconfigure --frontend=noninteractive locales && \
update-locale LANG=en_US.UTF-8
ENV LANG en_US.UTF-8
ENV HOME=/opt/app
WORKDIR /opt/app
# Install erlang and elixir
RUN wget https://packages.erlang-solutions.com/erlang-solutions_2.0_all.deb
RUN dpkg -i erlang-solutions_2.0_all.deb
RUN apt-get update
RUN apt-get install -y esl-erlang
RUN apt-get install -y elixir
# Install hex and rebar
RUN mix do local.hex --force, local.rebar --force
# Install phoenix
RUN mix archive.install hex phx_new 1.5.4 --force
COPY mix.exs mix.lock ./
COPY config config
COPY priv priv
COPY lib lib
RUN mix deps.get
ENV SECRET_KEY_BASE='secretExampleQrzdplBPdbHHhr2bpELjiGVGVqmjvFl2JEXdkyla8l6+b2CCcvs'
ENV MIX_ENV=prod
RUN mix phx.digest
RUN mix compile
RUN mix release
CMD ["_build/prod/rel/my_app/bin/my_app", "start"]
and build the image with:
docker build . -t my_app
We can check that everything is running as expected with:
docker run -p 4000:4000 -i my_app:latest
and visiting localhost:4000.
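From another shell, a quick smoke test (my addition, assuming the generated routes are intact):

curl -i http://localhost:4000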
I copy the _build/prod/rel/my_app directory from the built docker container (as this is all I'll be transferring across to my ec2 instance).
# list all containers
docker container ls -a
# locate the container with image tag: my_app:latest. It should look like:
# f9c46df97e55 my_app:latest "_build/prod/rel/my_…"
# note the container_id, and copy across the build release
docker cp f9c46df97e55:/opt/app/_build/prod/rel/my_app .
We create an instance.Dockerfile to mimic our EC2 instance:
FROM amazonlinux:latest
COPY my_app my_app
CMD ["my_app/bin/my_app", "start"]
and run it:
docker build . -f instance.Dockerfile -t my_app_instance && docker run -p 4000:4000 -i my_app_instance:latest
This fails to run, with the error:
[load_failed,"Failed to 2020-09-08 13:27:49.980715
args: load NIF library /my_app/lib/crypto-4.7/priv/lib/crypt format: label: 2020-09-08 13:27:49.981847 supervisor_report o: 'libcrypto.so.1.1: cannot open shared object file: No such file or directory'","OpenSSL might not be installed on this system.\n"]
"Unable to load crypto library. Failed with error:~n\"~p, ~s\"~n~s"
{error_logger,error_msg}
Note:
I am able to replicate the error on a debian:buster machine with the above docker build ... && docker run ... command, but with this instance.Dockerfile:
FROM debian:buster
RUN apt-get update
RUN apt-get install -y locales
# Set locale
RUN sed -i -e 's/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen && \
dpkg-reconfigure --frontend=noninteractive locales && \
update-locale LANG=en_US.UTF-8
ENV LANG en_US.UTF-8
COPY my_app my_app
CMD ["my_app/bin/my_app", "start"]
and fix the error by changing RUN apt-get install -y locales to RUN apt-get install -y locales curl (presumably because curl's dependency chain pulls in libssl1.1, which provides libcrypto.so.1.1).
I have tried yum install curl and yum install openssl on the amazonlinux:latest machine, but still experience the same error.
Question:
Where should I look to make progress on this? It seems to be an Erlang/OTP requirement issue, but the above is hardly an SSCCE, so it's difficult to raise.
I have also struggled to pin down which crypto or OpenSSL library the curl apt package installs that makes the error go away.
Any pointers to a particular forum to ask for help, or what to try next would be greatly appreciated.
Thanks in advance for the help.
Thanks to the suggestion from @VenkatakumarSrinivasan to build the release on CentOS, I managed to get it working on an amazonlinux machine with the following Dockerfiles. (The likely explanation: an OTP built on Debian Buster links the crypto NIF against libcrypto.so.1.1, which Amazon Linux doesn't ship, whereas a build on CentOS 7 links against the OpenSSL 1.0.2 ABI that Amazon Linux provides.)
Building the release:
FROM centos:7
RUN yum update -y
RUN yum clean -y all
RUN echo 'LC_ALL="en_US.UTF-8"' >> /etc/locale.conf
ENV LC_ALL="en_US.UTF-8"
RUN yum install -y epel-release
RUN yum install -y gcc gcc-c++ glibc-devel make ncurses-devel openssl-devel \
autoconf java-1.8.0-openjdk-devel git wget wxBase.x86_64
WORKDIR /opt
RUN wget http://packages.erlang-solutions.com/erlang-solutions-1.0-1.noarch.rpm
RUN rpm -Uvh erlang-solutions-1.0-1.noarch.rpm
RUN yum update -y
RUN yum install -y erlang
WORKDIR /opt/elixir
RUN git clone https://github.com/elixir-lang/elixir.git /opt/elixir
RUN make clean test
ENV PATH=/opt/elixir/bin:${PATH}
RUN mix do local.hex --force, local.rebar --force
RUN mix archive.install hex phx_new 1.5.4 --force
WORKDIR /opt/app
COPY mix.exs mix.lock ./
COPY config config
COPY priv priv
COPY lib lib
RUN mix deps.get
ENV SECRET_KEY_BASE='secretExampleQrzdplBPdbHHhr2bpELjiGVGVqmjvFl2JEXdkyla8l6+b2CCcvs'
ENV MIX_ENV=prod
RUN mix phx.digest
RUN mix compile
RUN mix release
CMD ["_build/prod/rel/my_app/bin/my_app", "start"]
Running the release:
FROM amazonlinux:latest
RUN yum -y update
ENV LANG="en_US.UTF-8"
ENV LC_ALL="en_US.UTF-8"
RUN ln -s /usr/lib64/libtinfo.so.{6,5}
COPY my_app my_app
CMD ["my_app/bin/my_app", "start"]
I'm not sure this is a satisfying solution though; I'm open to a more elegant one.
I'm fairly new to Docker, so I'm trying to learn more about it using a Laravel project. I'm following this tutorial:
https://www.digitalocean.com/community/tutorials/how-to-set-up-laravel-nginx-and-mysql-with-docker-compose
I've adjusted the Dockerfile a bit from what the tutorial has, but even the tutorial's file causes the same result.
FROM php:7.3-fpm
# Copy composer.lock and composer.json
COPY composer.lock composer.json /var/www/
# Install dependencies
RUN curl -sL https://deb.nodesource.com/setup_10.x | bash - && \
    apt-get update && apt-get install -y mysql-client nodejs build-essential vim git curl
RUN npm install -g npm
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install extensions
RUN docker-php-ext-install pdo pdo_mysql
# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
# Add user for laravel application
RUN groupadd -g 1000 www
RUN useradd -u 1000 -ms /bin/bash -g www www
# Copy existing application directory contents
COPY . /var/www
# Copy existing application directory permissions
COPY --chown=www:www . /var/www
# Change current user to www
USER www
# Set working directory
WORKDIR /var/www
# Expose port 9000 and start php-fpm server
EXPOSE 9000
CMD ["php-fpm"]
But I keep getting the following error when I run docker-compose up -d:
E: Package 'mysql-client' has no installation candidate
ERROR: Service 'app' failed to build: The command '/bin/sh -c curl -sL https://deb.nodesource.com/setup_10.x | bash - && apt-get update && apt-get install -y mysql-client nodejs build-essential vim git curl' returned a non-zero code: 100
Am I missing something? I expected this to work since I am running apt-get update before installing mysql-client.
Thanks.
php:7.3-fpm now uses Debian 10 (Buster) as its base image, and Buster ships with MariaDB, so replacing mysql-client with mariadb-client should fix it.
If you still want the MySQL-branded client, it's called default-mysql-client now.
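In the Dockerfile above, that means the install line becomes (a sketch of just the changed line):

apt-get update && apt-get install -y mariadb-client nodejs build-essential vim git curl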
php:7.2-apache triggers the error as well, but I resolved it by pinning to php:7.2.18-apache.
It worked for me:
sudo apt-get update && apt-get install -y git curl libmcrypt-dev default-mysql-client
Alternatively, run apt-cache search mysql-server to find out which server packages your distribution offers, then install them:
sudo apt-get install default-mysql-server default-mysql-server-core mariadb-server-10.6 mariadb-server-core-10.6
That's what resolved it in my case.