How to Recreate Travis CI Environment inside GitLab CI

I'm attempting to migrate a project from Travis CI to GitLab CI. I believe the bash scripts shouldn't need to change, aside from swapping a few environment variables that each CI provides by default, but I've been unable to recreate the environment inside the GitLab YAML file. Here is the Travis config:
sudo: required
services:
  - docker
env:
  DOCKER_COMPOSE_VERSION: 1.21.1
before_install:
  - sudo rm /usr/local/bin/docker-compose
  - curl -L https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-`uname -s`-`uname -m` > docker-compose
  - chmod +x docker-compose
  - sudo mv docker-compose /usr/local/bin
script:
  - bash test-ci.sh $TRAVIS_BRANCH
after_success:
  - bash ./docker-push.sh
  - bash ./docker-deploy-stage.sh
  - bash ./docker-deploy-prod.sh
Here's my most recent failed attempt:
image: ubuntu:14.04
services:
  - docker:dind
variables:
  DOCKER_COMPOSE_VERSION: 1.21.1
before_script:
  - apt-get update -qq && apt-get install -y -qq apt-transport-https ca-certificates curl software-properties-common unzip python3 python3-pip docker.io libcgroup1
  - pip3 install awscli
  - curl -L https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-`uname -s`-`uname -m` > docker-compose
  - chmod +x docker-compose
  - sudo mv docker-compose /usr/local/bin
stages:
  - build
build:
  stage: build
  script:
    - bash ./docker-push.sh
    - bash ./docker-deploy-stage.sh
    - bash ./docker-deploy-prod.sh
This is my first attempt at setting up CI. Does anyone know what I'm missing?
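Not a complete answer, but a sketch of where I'd look first: in the attempt above, nothing tells the docker CLI where the docker:dind daemon lives (GitLab wires it up via DOCKER_HOST rather than a local socket), and the ubuntu image has no sudo installed, though the job already runs as root so none is needed. A minimal sketch, assuming a runner that allows privileged containers (which dind requires):
image: ubuntu:18.04
services:
  - docker:dind
variables:
  DOCKER_COMPOSE_VERSION: "1.21.1"
  # the usual missing piece: point the docker CLI at the dind service
  DOCKER_HOST: tcp://docker:2375
  # newer dind images only listen on plain 2375 when TLS is explicitly disabled
  DOCKER_TLS_CERTDIR: ""
before_script:
  # the job runs as root inside the container, so no sudo is needed
  - apt-get update -qq && apt-get install -y -qq curl ca-certificates docker.io
  - curl -L "https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
  - chmod +x /usr/local/bin/docker-compose
test:
  script:
    # CI_COMMIT_REF_NAME is GitLab's closest analogue of TRAVIS_BRANCH
    - bash test-ci.sh "$CI_COMMIT_REF_NAME"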

Related

When Deploying CDK to AWS via Docker, bash: cdk: command not found

There are two files, Dockerfile.infra and docker-compose-infra.yml. First, docker-compose-infra.yml is built via the following command:
docker-compose --file docker-compose-infra.yml build
This results in no errors and finishes as expected.
The problem arises when trying to deploy this to AWS. The following command:
docker-compose --file docker-compose-infra.yml run cdk
Produces this error:
bash: cdk: command not found
This appears to be triggered when docker-compose-infra.yml attempts to run the cdk deploy command.
The command should be available because, within the Dockerfile.infra build, cdk is installed via npm install -g aws-cdk-lib.
Dockerfile.infra file:
FROM node:16-slim
ENV DEBIAN_FRONTEND=noninteractive
RUN npm install -g aws-cdk-lib \
    && apt-get update -y \
    && apt-get upgrade -y \
    && apt-get install -y --no-install-recommends \
    # install Python
    python3-pip \
    # install Poetry via curl
    curl \
    && curl -k https://install.python-poetry.org | python3 - \
    && apt-get remove curl -y \
    && apt-get autoremove -y \
    && rm -rf /var/lib/apt/lists/*
COPY pyproject.toml poetry.lock /
ENV PATH=/root/.local/bin:$PATH
RUN poetry config virtualenvs.create false \
    && poetry install --no-dev
WORKDIR /app/
COPY app.py cdk.json cdk.context.json /app/
COPY stacks/ /app/stacks/
docker-compose-infra.yml:
version: "3"
services:
cdk:
command: bash -c "cdk deploy --require-approval never --all --parameters my-app-${ENVIRONMENT}-service:MyServiceImageTag=${IMAGE_TAG}"
build:
context: ./
dockerfile: Dockerfile.infra
environment:
- AWS_PROFILE=${AWS_PROFILE}
- ENVIRONMENT=${ENVIRONMENT}
- DEPLOY_ACCOUNT=${DEPLOY_ACCOUNT}
volumes:
- ~/.aws/credentials:/root/.aws/credentials
You need to install aws-cdk, not aws-cdk-lib:
RUN npm install -g aws-cdk \
This might be a bit confusing because aws-cdk-lib is also the name of the required Python dependency when writing Python CDK apps, and it is a valid npm package as well; the cdk CLI itself, however, only ships in the aws-cdk npm package.
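As a cheap guard against this class of mistake (my addition, not part of the original answer), the CLI can be asserted in the same Dockerfile layer that installs it:
# aws-cdk is the package that ships the `cdk` binary; `cdk --version`
# makes the image build fail fast if the CLI is somehow missing
RUN npm install -g aws-cdk \
    && cdk --version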

How to avoid installing all CentOS packages every time I run the GitLab CI pipeline?

I'm running a GitLab CI pipeline with a CentOS image.
The pipeline has a before_script that runs a set of commands.
gitlab-ci.yaml
variables:
  WORKSPACE_HOME: '$CI_PROJECT_DIR'
  DELIVERY_HOME: delivery
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"
default:
  image: centos:latest
  cache:
    paths:
      - .cache/pip
  before_script:
    - chmod u+x devops/scripts/*.sh
    - devops/scripts/install-ci.sh
    - python3 -m ensurepip --upgrade
    - cp .env.docker.dist .env
    - pip3 install --upgrade pip
    - pip3 install -r requirements.txt
install-ci.sh
sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-Linux-* &&\
sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-Linux-*
yum -y update
yum -y install gcc gcc-c++ make
yum -y install python3.8
yum -y install python3-setuptools
yum -y groupinstall "Development Tools"
yum -y install python3-devel
yum -y install postgresql-server
yum -y install postgresql-devel
yum -y install postgresql-libs
yum -y install python3-pip
timedatectl set-timezone Europe/Paris
yum -y install sqlite-devel
The issue is that every time I run the CI pipeline it takes time to install CentOS and all its packages.
Is there a way to avoid this, or to cache this operation somewhere?
You could create your own image in which all your dependencies are installed, and use that in your job instead of installing the dependencies all over again. I would create a dedicated project on your GitLab instance, something like "centos-python-postgress", and within this project create a Dockerfile in which you install everything you need (you can either copy in your install-ci.sh or RUN the commands directly within the Dockerfile):
FROM centos:latest
RUN sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-Linux-* && sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-Linux-*
RUN yum -y update
RUN yum -y install gcc gcc-c++ make
...
You can now either build the Dockerfile on your machine and push it manually to the container registry of this project, or create a CI pipeline that builds and pushes that image automatically:
stages:
  - build

build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"auth\":\"$(printf "%s:%s" "${CI_REGISTRY_USER}" "${CI_REGISTRY_PASSWORD}" | base64 | tr -d '\n')\"}}}" > /kaniko/.docker/config.json
    - >-
      /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:latest"
Now, instead of using centos:latest in your original project/job, you can use your own image:
variables:
  WORKSPACE_HOME: '$CI_PROJECT_DIR'
  DELIVERY_HOME: delivery
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"
default:
  image: registry.gitlab.com/snowfire/centos-python-postgress:latest
  cache:
    paths:
      - .cache/pip
  before_script:
    - ...
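A small refinement worth considering (my suggestion, beyond the original answer): kaniko accepts repeated --destination flags, so the same build can also push an immutable tag, letting jobs pin a known-good image instead of a moving latest. CI_COMMIT_SHORT_SHA is a built-in GitLab variable:
    - >-
      /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:latest"
      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"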

The command '/bin/sh -c yum install yum-utils' returned a non-zero code: 1

I am trying to set up a Laravel PHP environment using Docker.
Is something wrong with the Dockerfile or the network configuration?
Dockerfile:
FROM centos:7
# Install some must-haves
RUN yum -y install vim wget sendmail
RUN yum -y install libtool make automake autoconf nasm libpng-static
RUN yum -y install git
RUN git --version
# Install PHP 7.1 on CentOS
RUN rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm \
&& rpm -Uvh http://rpms.remirepo.net/enterprise/remi-release-7.rpm
RUN yum install yum-utils
RUN yum install epel-release
RUN yum-config-manager --enable remi-php73
RUN yum --enablerepo=remi-php73 -y install php php-bcmath php-cli php-common php-gd php-intl php-ldap php-mbstring \
    php-mysqlnd php-pear php-soap php-xml php-xmlrpc php-zip php-fpm
RUN php -v
# Prepare PHP environment
COPY config/php/php-fpm.conf /etc/php-fpm.conf
COPY config/php/www.conf /etc/php-fpm.d/www.conf
COPY config/php/php.ini /usr/local/etc/php/php.ini
COPY config/php/xdebug.ini /usr/local/etc/php/conf.d/xdebug.ini
# Install Composer
RUN curl -sS https://getcomposer.org/installer | php
RUN mv composer.phar /usr/bin/composer
RUN composer --version
# Install Node.js
RUN curl -sL https://rpm.nodesource.com/setup_7.x | bash -
RUN yum -y install nodejs
RUN yum list installed nodejs
RUN node -v
# Final update and clean up
RUN yum -y update --skip-broken
RUN yum clean all
# Define work directory
WORKDIR /var/www/laravel-boilerplate
# Expose ports
EXPOSE 9000
CMD ["php-fpm", "-F", "-O"]
# CMD ["/bin/sh", "-l", "-c", "php-fpm"]
# CMD ["php-fpm", "-F"]
The command I ran to set up the instances:
docker-compose up -d
Any idea what went wrong?
Here is the docker-compose file:
version: '2'
services:
  mysql:
    image: mysql:latest
    volumes:
      - "./data/db:/var/lib/mysql"
    ports:
      - "3306:3306"
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=test
      - MYSQL_DATABASE=laravel_boilerplate
      - MYSQL_USER=root
      - MYSQL_PASSWORD=secret
  laravel-env:
    build: ./dockerfiles
    depends_on:
      - mysql
    volumes:
      - ".:/var/www/laravel-boilerplate"
      - "./dockerfiles/config/php/php-fpm.conf:/etc/php-fpm.conf"
      - "./dockerfiles/config/php/www.conf:/etc/php-fpm.d/www.conf"
      - "./dockerfiles/config/php/php.ini:/usr/local/etc/php/php.ini"
      - "./dockerfiles/config/php/xdebug.ini:/usr/local/etc/php/conf.d/xdebug.ini"
  nginx:
    image: nginx:latest
    depends_on:
      - laravel-env
    volumes:
      - ".:/var/www/laravel-boilerplate"
      - "./dockerfiles/config/nginx/default.conf:/etc/nginx/conf.d/default.conf"
    ports:
      - "80:80"
    restart: always
Let me know if I missed anything! Does something have to be removed while building, or does something have to be deleted as cleanup? I'm pretty new to setting this up, so your help is much appreciated.
Thanks, folks.
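For what it's worth, the error in the title has a common cause that the Dockerfile above exhibits: yum install run without -y waits for an interactive confirmation, and in a docker build (where stdin is closed) the prompt is answered "no", so yum exits with code 1. A sketch of the fix for the two affected lines:
# without -y, yum's "Is this ok [y/d/N]:" prompt defaults to "no" in a
# non-interactive build, and the RUN step fails with exit code 1
RUN yum -y install yum-utils
RUN yum -y install epel-release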

Setup Robot Framework pipeline with GitLab CI / CD

So I have written my automated Robot Framework tests and they are in a GitLab repo. I want to run them automatically once a day.
Is this possible?
Do I need a .gitlab-ci.yml file for it? (If yes, what do I put in it?)
Yes, you can absolutely run Robot Framework tests in GitLab CI; in fact, that is how pipeline tests are typically executed. You just need to build a Docker image that contains everything required to execute the framework. Here's a sample Dockerfile. I would suggest wrapping the .robot execution in a bash script (something like robot -d reports *.robot).
FROM ubuntu:18.04
ENV LANG=C.UTF-8 LC_ALL=C.UTF-8
RUN apt-get update --fix-missing && \
    apt-get install -y python3-setuptools wget git bzip2 ca-certificates curl bash chromium-browser chromium-chromedriver firefox python3.8 python3-pip nano && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
RUN wget https://github.com/mozilla/geckodriver/releases/download/v0.27.0/geckodriver-v0.27.0-linux64.tar.gz
RUN tar xvf geckodriver*
RUN chmod +x geckodriver
RUN mv geckodriver /usr/bin
RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.6 1
RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 2
RUN pip3 install --upgrade pip
RUN ln -s /usr/bin/python3 /usr/bin/python
RUN ln -s /usr/bin/pip3 /usr/bin/pip
RUN pip install rpaframework
COPY . /usr/src/
ADD robot.sh /usr/local/bin/robot.sh
RUN chmod +x /usr/local/bin/robot.sh
WORKDIR /usr/src
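Note that the Dockerfile above copies a robot.sh wrapper while the pipeline below invokes robot-test.sh; the original answer mixes the two names, so treat the wrapper as a placeholder. A minimal sketch of what it might contain, assuming reports should land where the CI job collects artifacts:
#!/usr/bin/env bash
# robot.sh (hypothetical wrapper): run every suite in the working
# directory and write logs/reports to the artifact path
set -euo pipefail
robot -d "${ARTIFACT_REPORT_PATH:-reports}" *.robot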
Now you need a .gitlab-ci.yml in your repository with content like this.
stages:
  - build
  - run

variables:
  ARTIFACT_REPORT_PATH: "${CI_PROJECT_DIR}/reports"

build_image:
  stage: build
  variables:
    DOCKER_IMAGE_TAG: ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_NAME}
  script:
    - docker build -t ${DOCKER_IMAGE_TAG} .
  after_script:
    - docker push ${DOCKER_IMAGE_TAG}
    - docker logout

robot_tests:
  stage: run
  variables:
    DOCKER_IMAGE_TAG: ${CI_REGISTRY_IMAGE}:${CI_COMMIT_TAG}
  image: ${DOCKER_IMAGE_TAG}
  script:
    - robot-test.sh
  artifacts:
    paths:
      - $ARTIFACT_REPORT_PATH
    when: always
That should be it; once the job finishes, you will see the test output attached to the job as artifacts at the report path.
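The once-a-day part isn't covered by the YAML above: for that, GitLab has pipeline schedules (CI/CD > Schedules in the project UI), where you define a cron expression such as 0 6 * * * and GitLab runs the same pipeline on that cadence. If you want the test job to run only from the schedule, a rules sketch (assuming the job name above):
robot_tests:
  rules:
    # GitLab sets CI_PIPELINE_SOURCE to "schedule" for pipelines
    # triggered from CI/CD > Schedules
    - if: '$CI_PIPELINE_SOURCE == "schedule"'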

Docker compose works on Linux environment but not Windows environment

I am using two environments for development: a Linux VM at home and a Windows laptop at the office. The Dockerfile for the Angular environment worked fine until a few days ago, when it started showing the following error as I tried to start the container with docker-compose on the laptop:
ng | /bin/sh: 1: sudo: not found
ng exited with code 127
However, the same issue does not occur on my Linux VM.
Dockerfile:
FROM node:12
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
RUN apt-get update && apt-get install -yq google-chrome-stable
RUN mkdir /app
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json package-lock.json /app/
RUN npm install
#RUN npm install -g @angular/cli
COPY . /app
EXPOSE 4200
CMD ng serve --host 0.0.0.0
docker-compose.yaml:
version: "3"
services:
dj:
container_name: dj
build: Backend
command: python manage.py runserver 0.0.0.0:80
volumes:
- ./Backend:/code
ports:
- "80:80"
ng:
container_name: ng
build: Frontend/SPort
volumes:
- ./Frontend/SPort:/app
ports:
- "4200:4200"
I think you want to fix the shell script used by your Dockerfile.
Add this:
RUN apt-get update && apt-get install -y dos2unix && dos2unix /path/to/the/script
Hope that helps; the error comes from CRLF line-ending characters introduced on Windows.
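An alternative that avoids patching the image (my suggestion, beyond the original answer): tell git not to perform CRLF conversion on shell scripts, so the Windows checkout matches the Linux one. A .gitattributes sketch:
# .gitattributes: force LF on checkout for shell scripts, so a Windows
# clone never introduces the CRLF bytes that /bin/sh trips over
*.sh text eol=lf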
