GitLab CI/CD: Do not destroy a Docker container after building

My use case is the following: I want to deploy a PHP app to docker after a branch is updated and then leave it running for manual testing.
So I would like to access it under a certain URL.
What I did so far is to create a GitLab Docker runner:
gitlab-ci-multi-runner register --url "https://git.example.com/ci" --registration-token xxxx \
--description "dockertest" \
--executor docker \
--docker-image "php:7.0-apache" \
--docker-services mariadb:latest
And I have a .gitlab-ci.yml:
stages:
  - deploy

job-deploy-docker:
  only:
    - docker-ci-test
  stage: deploy
  environment: docker
  tags:
    - docker
  image: php:7.0-apache
  services:
    - mariadb
  script:
    - apt-get update && apt-get --assume-yes install mysql-client
    - mysql --user=root --password="$MYSQL_ROOT_PASSWORD" --host=mariadb -e "create database ci"
The job runs, reports Job succeeded, and then the container is destroyed.
How can I avoid destroying the container? Of course it should be cleaned up or reused when a new push is made to the branch.
How can I make the web app accessible under a certain external URL (branchname.ci.example.com)?
Is that even the right approach, or should I do it differently?
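One possible sketch: run the deploy from a shell runner on the Docker host (or one with the host's /var/run/docker.sock mounted) and start a named, detached container per branch; a reverse proxy such as nginx-proxy with a wildcard *.ci.example.com DNS entry then routes each subdomain. Every name below is illustrative:

deploy-review:
  stage: deploy
  tags:
    - shell-on-host   # assumed tag of a runner on the Docker host
  script:
    # replace any previous container for this branch, then start a fresh one
    - docker rm -f "app-$CI_COMMIT_REF_SLUG" || true
    - docker build -t "app:$CI_COMMIT_REF_SLUG" .
    - docker run -d --restart unless-stopped --name "app-$CI_COMMIT_REF_SLUG" -e "VIRTUAL_HOST=$CI_COMMIT_REF_SLUG.ci.example.com" "app:$CI_COMMIT_REF_SLUG"
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: http://$CI_COMMIT_REF_SLUG.ci.example.com

Because the container is started detached on the host rather than inside the job, it outlives the pipeline; a new push to the same branch simply replaces it.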

Related

Laravel Vapor Docker runtime with GitLab CI does not work

I use Laravel Vapor for deploying our microservices based on Laravel. This works very well so far, as long as the app with its dependencies is not too large. But if it is, things get a little tricky.
Vapor provides a Docker runtime for this case, where you are able to deploy apps up to 10 GB in size.
For local development we usually use Laradock.io because it's easy and flexible.
That means if we deploy from our local environment, it's easy to enter the workspace container and run the vapor deploy commands. After enabling the Docker client for the workspace container, it works properly with the Vapor Docker runtime.
But now we have integrated the deployment process into the GitLab CI pipeline. That works very well for our small services with the Vapor PHP runtime.
But with the Docker runtime I am stuck on the CI deployment.
The Docker runtime needs a running Docker instance where Vapor will be invoked. That means in the .gitlab-ci.yml I have to add an image with Docker and PHP installed to invoke the Vapor scripts.
So I created a Docker image based on the Laradock workspace container, but the GitLab runner always exits with the error message that no Docker daemon is available.
This is the relevant part of my .gitlab-ci.yml (the image is only available locally):
testing:
  image:
    name: lexitaldev/vapor-docker-deploy:latest
    pull_policy: never
  securityContext:
    privileged: true
  environment: testing
  stage: deploy
  only:
    - test
  script:
    - composer install
    - php vendor/bin/vapor deploy test
This is the specific output:
Error Output:
================
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the
docker daemon running?
I've also tried to use the standard 'laravelphp/vapor:php80' image and install Docker in the before_script section:
before_script:
  - apk add docker
  - addgroup root docker
But nothing helped. There seems to be a problem with the docker.sock.
Did anybody manage to add Vapor Docker runtime deployment to CI scripts?
Best,
Michael
You only need to add the service dind, but after you do that it will throw an error related to the image that GitLab creates for your pipelines. So you need to register a runner with volumes, the privileged flag, and tags.
I did it using gitlab-runner on my machine:
sudo gitlab-runner register -n \
  --url {{ your_url }} \
  --registration-token {{ your_token }} \
  --executor docker \
  --description "{{ describe your runner }}" \
  --docker-image "docker:20.10.12-alpine3.15" \
  --docker-privileged \
  --docker-volumes="/certs/client" \
  --docker-volumes="cache" \
  --docker-volumes="/var/run/docker.sock:/var/run/docker.sock" \
  --tag-list {{ a_tag_for_your_pipeline }}
Once you have done that, you need to use a stable Docker version in your .gitlab-ci.yml file. For some reason it didn't work when I tried version 20 or latest:
image: docker:stable

services:
  - name: docker:stable-dind

before_script:
  - echo $CI_JOB_TOKEN | docker login $CI_REGISTRY -u $CI_REGISTRY_USER --password-stdin

build:
  tags:
    - {{ the tag you defined in your runner }}
  variables:
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  script:
    - echo $IMAGE_TAG
    - docker build -t $CI_REGISTRY_IMAGE -f {{ your Dockerfile }} .
    - docker push $CI_REGISTRY_IMAGE
All the variables are predefined by GitLab, so don't worry, you can copy and paste. I also followed some advice that GitLab gives in its documentation for pushing a Docker image to the GitLab Container Registry.
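For what it's worth, when the job talks to the dind service over TCP instead of a mounted socket, the GitLab docs pair that /certs/client volume with variables like these (shown as a sketch):

variables:
  DOCKER_HOST: tcp://docker:2376   # the dind service listens here with TLS enabled
  DOCKER_TLS_CERTDIR: "/certs"     # where dind writes the client certificates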

See Gitlab CI/CD pipeline on my local machine [duplicate]

If a GitLab project is configured on GitLab CI, is there a way to run the build locally?
I don't want to turn my laptop into a build "runner", I just want to take advantage of Docker and .gitlab-ci.yml to run tests locally (i.e. it's all pre-configured). Another advantage of that is that I'm sure that I'm using the same environment locally and on CI.
Here is an example of how to run Travis builds locally using Docker; I'm looking for something similar for GitLab.
As of a few months ago, this is possible using gitlab-runner:
gitlab-runner exec docker my-job-name
Note that you need both docker and gitlab-runner installed on your computer to get this working.
You also need the image key defined in your .gitlab-ci.yml file. Otherwise it won't work.
Here's the line I currently use for testing locally using gitlab-runner:
gitlab-runner exec docker test --docker-volumes "/home/elboletaire/.ssh/id_rsa:/root/.ssh/id_rsa:ro"
Note: You can avoid adding a --docker-volumes with your key setting it by default in /etc/gitlab-runner/config.toml. See the official documentation for more details. Also, use gitlab-runner exec docker --help to see all docker-based runner options (like variables, volumes, networks, etc.).
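For reference, a sketch of what that default volume might look like in /etc/gitlab-runner/config.toml (paths are illustrative):

# /etc/gitlab-runner/config.toml (sketch)
[[runners]]
  name = "local"
  executor = "docker"
  [runners.docker]
    # mounted into every job container by default
    volumes = ["/home/elboletaire/.ssh/id_rsa:/root/.ssh/id_rsa:ro", "/cache"]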
Due to the confusion in the comments, I paste here the gitlab-runner --help result, so you can see that gitlab-runner can make builds locally:
gitlab-runner --help

NAME:
   gitlab-runner - a GitLab Runner

USAGE:
   gitlab-runner [global options] command [command options] [arguments...]

VERSION:
   1.1.0~beta.135.g24365ee (24365ee)

AUTHOR(S):
   Kamil Trzciński <ayufan@ayufan.eu>

COMMANDS:
   exec         execute a build locally
   [...]

GLOBAL OPTIONS:
   --debug      debug mode [$DEBUG]
   [...]
As you can see, the exec command is to execute a build locally.
Even though there was an issue to deprecate the current gitlab-runner exec behavior, it ended up being reconsidered and a new version with greater features will replace the current exec functionality.
Note that this process is to use your own machine to run the tests using Docker containers. This is not how you define custom runners. To do that, just go to your repo's CI/CD settings and read the documentation there. If you want to ensure your runner is executed instead of one from gitlab.com, add a custom and unique tag to your runner, ensure it only runs tagged jobs, and tag all the jobs you want your runner to be responsible for.
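As a minimal sketch, pinning a job to your own runner with a unique tag looks like this (the tag name is illustrative):

my-job:
  tags:
    - my-unique-runner-tag   # only runners carrying this tag will pick the job up
  script:
    - ./run-tests.sh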
I use this docker-based approach:
Edit: 2022-10
docker run --entrypoint bash --rm -w $PWD -v $PWD:$PWD -v /var/run/docker.sock:/var/run/docker.sock gitlab/gitlab-runner:latest -c 'git config --global --add safe.directory "*";gitlab-runner exec docker test'
For all Git versions > 2.35.2 you must add safe.directory within the container to avoid fatal: detected dubious ownership in repository at.... This is also true for patched Git versions < 2.35.2. The old command will not work anymore.
Details
0. Create a git repo to test this answer
mkdir my-git-project
cd my-git-project
git init
git commit --allow-empty -m"Initialize repo to showcase gitlab-runner locally."
1. Go to your git directory
cd my-git-project
2. Create a .gitlab-ci.yml
Example .gitlab-ci.yml
image: alpine

test:
  script:
    - echo "Hello Gitlab-Runner"
3. Create a docker container with your project dir mounted
docker run -d \
--name gitlab-runner \
--restart always \
-v $PWD:$PWD \
-v /var/run/docker.sock:/var/run/docker.sock \
gitlab/gitlab-runner:latest
(-d) run container in background and print container ID
(--restart always) or not?
(-v $PWD:$PWD) Mount the current directory into the same path inside the container. Note: On Windows you could bind your dir to a fixed location, e.g. -v ${PWD}:/opt/myapp. Also, $PWD only works in PowerShell, not in cmd.
(-v /var/run/docker.sock:/var/run/docker.sock) This gives the container access to the docker socket of the host so it can start "sibling containers" (e.g. Alpine).
(gitlab/gitlab-runner:latest) Just the latest available image from dockerhub.
4. Execute with
Avoid fatal: detected dubious ownership in repository at... More info
docker exec -it -w $PWD gitlab-runner git config --global --add safe.directory "*"
Actual execution
docker exec -it -w $PWD gitlab-runner gitlab-runner exec docker test
# ^ ^ ^ ^ ^ ^
# | | | | | |
# (a) (b) (c) (d) (e) (f)
(a) Working dir within the container. Note: On Windows you could use a fixed location, e.g. /opt/myapp.
(b) Name of the docker container
(c) Execute the command "gitlab-runner" within the docker container
(d)(e)(f) run gitlab-runner with the "docker executor" and run a job named "test"
5. Prints
...
Executing "step_script" stage of the job script
$ echo "Hello Gitlab-Runner"
Hello Gitlab-Runner
Job succeeded
...
Note: The runner will only work on the committed state of your code base. Uncommitted changes will be ignored. Exception: the .gitlab-ci.yml itself does not have to be committed to be taken into account.
Note: There are some limitations running locally. Have a look at limitations of gitlab runner locally.
I'm currently working on making a gitlab runner that works locally.
Still in the early phases, but eventually it will become very relevant.
It doesn't seem like GitLab wants to (or has time to) make this, so here you go.
https://github.com/firecow/gitlab-runner-local
If you are running GitLab using the Docker image there: https://hub.docker.com/r/gitlab/gitlab-ce, it's possible to run pipelines by exposing the local docker.sock with a volume option: -v /var/run/docker.sock:/var/run/docker.sock. Adding this option to the GitLab container will allow your workers to access the Docker instance on the host.
The GitLab runner appears to not work on Windows yet and there is an open issue to resolve this.
So, in the meantime I am moving my script code out to a bash script, which I can easily map to a docker container running locally and execute.
In this case I want to build a Docker image in my job, so I create a script 'build':
#!/bin/bash
docker build --pull -t myimage:myversion .
In my .gitlab-ci.yml I execute the script:
image: docker:latest

services:
  - docker:dind

before_script:
  - apk add bash

build:
  stage: build
  script:
    - chmod 755 build
    - build
To run the script locally using PowerShell, I can start the required image and map the volume with the source files:
$containerId = docker run --privileged -d -v ${PWD}:/src docker:dind
Install bash if not present:
docker exec $containerId apk add bash
Set permissions on the bash script:
docker exec -it $containerId chmod 755 /src/build
Execute the script:
docker exec -it --workdir /src $containerId bash -c 'build'
Then stop the container:
docker stop $containerId
And finally clean up the container:
docker container rm $containerId
Another approach is to have a local build tool that is installed on both your PC and your server.
So basically, your .gitlab-ci.yml will just call your preferred build tool.
Here is an example .gitlab-ci.yml that I use with nuke.build:
stages:
  - build
  - test
  - pack

variables:
  TERM: "xterm" # Use Unix ASCII color codes on Nuke

before_script:
  - CHCP 65001 # Set correct code page to avoid charset issues

.job_template: &job_definition
  except:
    - tags

build:
  <<: *job_definition
  stage: build
  script:
    - "./build.ps1"

test:
  <<: *job_definition
  stage: test
  script:
    - "./build.ps1 test"
  variables:
    GIT_CHECKOUT: "false"

pack:
  <<: *job_definition
  stage: pack
  script:
    - "./build.ps1 pack"
  variables:
    GIT_CHECKOUT: "false"
  only:
    - master
  artifacts:
    paths:
      - output/
And in nuke.build I've defined three targets named like the three stages (build, test, pack).
This way you have a reproducible setup (everything else is configured with your build tool) and you can test the different targets of your build tool directly.
(I can call .\build.ps1, .\build.ps1 test and .\build.ps1 pack whenever I want.)
I am on Windows using VSCode with WSL.
I didn't want to register my work PC as a runner, so instead I'm running my YAML stages locally to test them out before I push them:
$ sudo apt-get install gitlab-runner
$ gitlab-runner exec shell build
The .gitlab-ci.yml:
image: node:10.19.0 # https://hub.docker.com/_/node/
# image: node:latest

cache:
  # untracked: true
  key: project-name
  # key: ${CI_COMMIT_REF_SLUG} # per branch
  # key:
  #   files:
  #     - package-lock.json # only update cache when this file changes (not working) #jkr
  paths:
    - .npm/
    - node_modules
    - build

stages:
  - prepare # prepares builds, makes build needed for testing
  - test # uses test:build specifically #jkr
  - build
  - deploy

# before_install:
before_script:
  - npm ci --cache .npm --prefer-offline

prepare:
  stage: prepare
  needs: []
  script:
    - npm install

test:
  stage: test
  needs: [prepare]
  except:
    - schedules
  tags:
    - linux
  script:
    - npm run build:dev
    - npm run test:cicd-deps
    - npm run test:cicd # runs puppeteer tests #jkr
  artifacts:
    reports:
      junit: junit.xml
    paths:
      - coverage/

build-staging:
  stage: build
  needs: [prepare]
  only:
    - schedules
  before_script:
    - apt-get update && apt-get install -y zip
  script:
    - npm run build:stage
    - zip -r build.zip build
  # cache:
  #   paths:
  #     - build
  #   <<: *global_cache
  #   policy: push
  artifacts:
    paths:
      - build.zip

deploy-dev:
  stage: deploy
  needs: [build-staging]
  tags: [linux]
  only:
    - schedules
    # - branches@gitlab-org/gitlab
  before_script:
    - apt-get update && apt-get install -y lftp
  script:
    # temporarily using 'verify-certificate no'
    # for more on verify-certificate #jkr: https://www.versatilewebsolutions.com/blog/2014/04/lftp-ftps-and-certificate-verification.html
    # variables do not work with 'single quotes' unless they are "'surrounded by doubles'"
    - lftp -e "set ssl:verify-certificate no; open mediajackagency.com; user $LFTP_USERNAME $LFTP_PASSWORD; mirror --reverse --verbose build/ /var/www/domains/dev/clients/client/project/build/; bye"
  # environment:
  #   name: staging
  #   url: http://dev.mediajackagency.com/clients/client/build
  #   # url: https://stg2.client.co
  when: manual
  allow_failure: true

build-production:
  stage: build
  needs: [prepare]
  only:
    - schedules
  before_script:
    - apt-get update && apt-get install -y zip
  script:
    - npm run build
    - zip -r build.zip build
  # cache:
  #   paths:
  #     - build
  #   <<: *global_cache
  #   policy: push
  artifacts:
    paths:
      - build.zip

deploy-client:
  stage: deploy
  needs: [build-production]
  tags: [linux]
  only:
    - schedules
    # - master
  before_script:
    - apt-get update && apt-get install -y lftp
  script:
    - sh deploy-prod
  environment:
    name: production
    url: http://www.client.co
  when: manual
  allow_failure: true
The idea is to keep the check commands outside of .gitlab-ci.yml. I use a Makefile to run something like make check, and my .gitlab-ci.yml runs the same make commands that I use locally to check various things before committing.
This way you have one place with all (or most) of your commands (the Makefile), and .gitlab-ci.yml contains only CI-related stuff.
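A minimal sketch of that split, with illustrative target names and tools (recipe lines must be indented with tabs):

# Makefile
check: lint test

lint:
	npx eslint .

test:
	npm test

The matching .gitlab-ci.yml job then just runs make check, so CI and local runs stay identical.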
I have written a tool to run all GitLab CI jobs locally without having to commit or push, simply with the command ci-toolbox my_job_name.
The URL of the project: https://gitlab.com/mbedsys/citbx4gitlab
Years ago I built this simple solution with a Makefile and docker-compose to run the GitLab runner in Docker. You can use it to execute jobs locally as well, and it should work on all systems where Docker works:
https://gitlab.com/1oglop1/gitlab-runner-docker
There are a few things to change in the docker-compose.override.yaml:
version: "3"
services:
runner:
working_dir: <your project dir>
environment:
- REGISTRATION_TOKEN=<token if you want to register>
volumes:
- "<your project dir>:<your project dir>"
Then inside your project you can execute it the same way as mentioned in other answers:
docker exec -it -w $PWD runner gitlab-runner exec <commands>..
I recommend using gitlab-ci-local
https://github.com/firecow/gitlab-ci-local
It's able to run specific jobs as well.
It's a very cool project and I have used it to run simple pipelines on my laptop.
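A quick usage sketch, assuming the npm install path the project documents:

npm install -g gitlab-ci-local   # assumption: Node.js/npm available
gitlab-ci-local --list           # list the jobs found in .gitlab-ci.yml
gitlab-ci-local test             # run only the job named "test"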

Laravel Vapor on AWS CodePipeline

Laravel Vapor is built entirely on the AWS platform, but it does not use AWS CodePipeline to deploy code. Has anyone tried AWS CodePipeline to deploy Vapor code?
I can deploy an Ubuntu server, install the required PHP extensions, and run the vapor deploy staging command in AWS CodeDeploy. I am wondering whether there is a better way to deploy Laravel Vapor.
Finally, I made the changes below in the buildspec.yml file and used an Ubuntu instance:
install:
  commands:
    - nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://127.0.0.1:2375 --storage-driver=overlay2 &
    - timeout 15 sh -c "until docker info; do echo .; sleep 1; done"
    #- command
    #- echo Logging in to Amazon ECR
pre_build:
  commands:
    - docker run --name myvapor -d -e VAPOR_API_TOKEN=MY_VAPOR_API_TOKEN --volume $(pwd):/~ --workdir /~ teamnovu/laravel-vapor
    - docker exec myvapor vapor deploy staging
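If it helps to place that snippet: in a complete buildspec.yml those two sections sit under a top-level phases key. A skeleton sketch of the enclosing structure (command bodies elided to the ones above):

version: 0.2
phases:
  install:
    commands:
      - '...'   # the dockerd startup and wait commands shown above
  pre_build:
    commands:
      - '...'   # the vapor container run/deploy commands shown above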

docker-compose deployment configuration for Circle CI

I am using Circle CI to deploy a microservice to a Digital Ocean droplet and had a few questions about whether my approach is the right one.
My microservice is built using docker-compose, and therefore requires a docker-compose.yml file to pull and start the images that constitute it.
In a nutshell, my deployment approach is:
A merge to master kicks off a CircleCI build.
CircleCI runs the unit tests.
Once all tests pass, docker-compose build and docker-compose push to Docker Hub.
Stop all running containers of that service on the remote server.
Remove dangling images and local networks.
Download the relevant docker-compose.yml, Dockerfile and docker-compose.env files.
Pull using docker-compose pull.
Start containers using docker-compose up.
I am using this configuration in CircleCI:
version: 2.1
jobs:
  build:
    docker:
      - image: "circleci/node:10.16.0"
    steps:
      - checkout
      - run:
          name: Update to latest npm version
          command: "sudo npm install -g npm@latest"
      - restore_cache:
          key: dependency-cache-{{ checksum "package-lock.json" }}
      - run:
          name: Install dependencies
          command: npm install
      - run:
          name: Install `docker-compose`
          command: |
            curl -L https://github.com/docker/compose/releases/download/1.19.0/docker-compose-`uname -s`-`uname -m` > ~/docker-compose
            chmod +x ~/docker-compose
            sudo mv ~/docker-compose /usr/local/bin/docker-compose
      - setup_remote_docker:
          docker_layer_caching: false
      - run:
          name: Build using `docker-compose`
          command: |
            docker-compose build
      - run:
          name: Login for Docker Hub
          command: |
            echo "$DOCKER_PASSWORD" | docker login --username $DOCKER_USERNAME --password-stdin
      - run:
          name: Push to Docker Hub
          command: |
            docker-compose push
      - run: ssh-keyscan $DIGITALOCEAN_HOST >> ~/.ssh/known_hosts
      - add_ssh_keys:
          fingerprints:
            - fo:of:fe:ef:af
      - run:
          name: Remove currently running containers
          command: |
            ssh root@$DIGITALOCEAN_HOST ./deploy_image.sh
I am planning on creating a bash script to handle steps 4 to 8 from my list above.
Is it a good idea to have a script take care of the Docker steps?
Or is there a better way to have a more "native" CircleCI configuration?
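For what it's worth, a sketch of what that deploy_image.sh covering steps 4 to 8 might look like on the droplet; the working directory is an assumption, and step 6 (downloading the compose files) is presumed to have happened already:

#!/bin/bash
# deploy_image.sh (sketch): refresh the service from the latest pushed images
set -euo pipefail
cd /opt/myservice        # assumption: docker-compose.yml and .env live here
docker-compose down      # step 4: stop the running containers of the service
docker image prune -f    # step 5: remove dangling images...
docker network prune -f  # ...and unused local networks
docker-compose pull      # step 7: pull the freshly pushed images
docker-compose up -d     # step 8: start everything detached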

How to auto deploy Docker Image on own server with GitLab?

I have been trying to Google this for a few hours, but can't find it.
I have a Java/Spring application (plus MySQL, if it matters) and I am looking to create CI for it.
I know what to do and how:
I know that I have to move my Git repo to GitLab.
A push to the repo will trigger the CI script.
GitLab will build my Docker image into the GitLab Docker Registry.
The question is:
What do I have to do to force Docker Compose on my VPS to pull the new image from GitLab and restart the server?
I know (correct me if I am wrong) that on my VPS I should run docker-compose pull && docker-compose up inside my app folder, but I have literally no idea how to make this happen automatically with GitLab.
What do I have to do to force Docker Compose on my VPS to pull the new image from GitLab and restart the server?
@m-uu, you don't need to restart the server at all, just do docker-compose up to pull the new image and restart the service.
I know (correct me if I am wrong) that on my VPS I should run docker-compose pull && docker-compose up inside my app folder, but I have literally no idea how to make this happen automatically with GitLab.
Yes, you are on the right track. Look at my GitLab CI configuration file; I think it isn't difficult to adapt it for a Java project. It gives you an idea of how to build, push to your registry, and deploy an image to your server. One thing you need to do is generate SSH keys and push the public key to the server (.ssh/authorized_keys) and the private key to a GitLab pipeline secret variable (https://docs.gitlab.com/ee/ci/variables/#secret-variables):
cache:
  key: "cache"
  paths:
    - junte-api

stages:
  - build
  - build_image
  - deploy

build:
  image: golang:1.7
  stage: build
  script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" > ~/key && chmod 600 ~/key
    - ssh-add ~/key
    - mkdir -p ~/.ssh
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
    - go get -u github.com/kardianos/govendor
    - mkdir -p $GOPATH/src/github.com/junte/junte-api
    - mv * $GOPATH/src/github.com/junte/junte-api
    - cd $GOPATH/src/github.com/junte/junte-api
    - govendor sync
    - go build -o junte-api
    - cd -
    - cp $GOPATH/src/github.com/junte/junte-api .

build_image:
  image: docker:latest
  stage: build_image
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker build -t $CI_REGISTRY_IMAGE .
    - docker push $CI_REGISTRY_IMAGE

deploy-dev:
  stage: deploy
  image: junte/ssh-agent
  variables:
    # should be set up at Gitlab CI env vars
    SSH_PRIVATE_KEY: $SSH_DEV_PRIVATE_KEY
  script:
    # copy docker-compose yml to server
    - scp docker-compose.dev.yml root@SERVER_IP:/home/junte/junte-api/
    # login to gitlab registry
    - ssh root@SERVER_IP docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    # then cd to the folder with docker-compose, run docker-compose pull to update images, and start services with `docker-compose up -d`
    - ssh root@SERVER_IP "cd /home/junte/junte-api/ && docker-compose -f docker-compose.dev.yml pull api-dev && HOME=/home/dev docker-compose -f docker-compose.dev.yml up -d"
  environment:
    name: dev
  only:
    - dev
You also need a GitLab runner with Docker support. For how to install it, please see the GitLab docs.
About the stages:
build - just change it to build whatever you need.
build_image - very simple: just log in to the GitLab registry, build the new image and push it to the registry. Look at the cache part; it is needed to cache files between stages and may be different for you.
deploy-dev - this part is more about what you asked. The first six commands here just install SSH and create your private key file to have access to your VPS. Just copy them and add your SSH_PRIVATE_KEY to the secret variables in the GitLab UI. The last three SSH commands are the most interesting for you.
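As a sketch of that key setup (file names are illustrative):

ssh-keygen -t ed25519 -f deploy_key -N ""       # dedicated deploy key, no passphrase
ssh-copy-id -i deploy_key.pub root@SERVER_IP    # public half onto the VPS (.ssh/authorized_keys)
cat deploy_key                                  # private half: paste into the GitLab secret variable, e.g. SSH_DEV_PRIVATE_KEY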
