Gitlab-CI multi-project-pipeline - continuous-integration

Currently I'm trying to understand GitLab CI multi-project pipelines.
I want to run a pipeline once another pipeline has finished.
Example:
I have one project, nginx, saved in the namespace baseimages, which contains some configuration like fast-cgi-params. The CI file looks like this:
stages:
  - release
  - notify

variables:
  DOCKER_HOST: "tcp://localhost:2375"
  DOCKER_REGISTRY: "registry.mydomain.de"
  SERVICE_NAME: "nginx"
  DOCKER_DRIVER: "overlay2"

release:
  stage: release
  image: docker:git
  services:
    - docker:dind
  script:
    - docker build -t $SERVICE_NAME:latest .
    - docker tag $SERVICE_NAME:latest $DOCKER_REGISTRY/$SERVICE_NAME:latest
    - docker push $DOCKER_REGISTRY/$SERVICE_NAME:latest
  only:
    - master

notify:
  stage: notify
  image: appropriate/curl:latest
  script:
    - curl -X POST -F token=$CI_JOB_TOKEN -F ref=master https://gitlab.mydomain.de/api/v4/projects/1/trigger/pipeline
  only:
    - master
Now I want multiple projects to rely on this image and have them rebuilt when my baseimage changes, e.g. for a new nginx version.

            baseimage
                |
    --------------------------
    |           |            |
project1    project2    project3

If I add a trigger to the other project and insert the generated token at $GITLAB_CI_TOKEN, the foreign pipeline starts, but there is no combined graph as shown in the documentation (https://docs.gitlab.com/ee/ci/multi_project_pipelines.html).
How is it possible to show the full pipeline graph?
Do I have to add every project that relies on my baseimage to the CI file of the baseimage, or is it possible to subscribe to the baseimage pipeline from each project?

Multi-project pipelines were a paid feature introduced in GitLab Premium 9.3, and could only be accessed on GitLab's Premium or Silver tiers.
One way to see this is to the right of the document title:

Well, after some more digging into the documentation I found a short sentence stating that GitLab CE only provides the features marked as Core.

We have 50+ GitLab packages where this is needed. What we used to do was push a commit to a downstream package, wait for the CI to finish, then push another commit to the upstream package, wait for the CI to finish, and so on. This was very time-consuming.
The other thing you can do is trigger builds manually, in which case you can determine the order yourself.
If none of this works for you, or you want a better way, I built a tool to help with this called GitLab Pipes. I used it internally for many months and realized that other people need something like this, so I did the work to make it public.
Basically, it listens to GitLab notifications, and when it sees a commit to a package, it reads the .gitlab-pipes.yml file to determine that project's dependencies. It constructs a dependency graph of your projects and builds the consumer packages on downstream commits.
The documentation is here, and it roughly explains how it works. The primary app website is here.

If you click the version history ... of multi_project_pipelines, it reveals:
Made available in all tiers in GitLab 12.8.
Multi-project pipeline visualization as of 13.10-pre is marked as Premium; however, in my EE version the visualizations for down-/upstream links are functional.
So, per "Triggering a downstream pipeline using a bridge job":
Before GitLab 11.8, it was necessary to implement a pipeline job that was responsible for making the API request to trigger a pipeline in a different project.
In GitLab 11.8, GitLab provides a new CI/CD configuration syntax to make this task easier, and to avoid needing GitLab Runner for triggering cross-project pipelines. The following illustrates configuring a bridge job:
rspec:
  stage: test
  script: bundle exec rspec

staging:
  variables:
    ENVIRONMENT: staging
  stage: deploy
  trigger: my/deployment
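Applied to the original baseimage question, one bridge job per dependent project would be needed; a minimal sketch, assuming the downstream projects live at hypothetical paths like mygroup/project1:

```yaml
# Sketch of bridge jobs in the baseimage's .gitlab-ci.yml.
# The project paths below are placeholders for your own namespaces.
trigger-project1:
  stage: notify
  trigger:
    project: mygroup/project1
    branch: master

trigger-project2:
  stage: notify
  trigger:
    project: mygroup/project2
    branch: master
```

With bridge jobs, the downstream pipelines show up linked in the pipeline graph. Note this also answers the "subscribe" question: with trigger:, each dependent project has to be listed in the baseimage's CI file, not the other way around.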


Monorepo in GitLab - Java/Spring:boot/MVN - CI/CD issue - How to run jobs in order

I'm a DevOps student and I have a project to run, and I'm stuck on something; I hope I can solve it with your help.
On our project we use a Java monorepo, meaning we have multiple services in one repository, each in its own directory, six in total.
I have one main CI/CD scenario file, .gitlab-ci.yml, and a dedicated scenario file for each microservice in its directory.
/
.gitlab-ci.yml
/EurekaServer/.gitlab-ci.yml
/ApiGateway/.gitlab-ci.yml
/UserService/.gitlab-ci.yml
/DepositService/.gitlab-ci.yml
/CreditService/.gitlab-ci.yml
/InfoService/.gitlab-ci.yml
In the root scenario file .gitlab-ci.yml I'm using include to pull in all the microservices' .gitlab-ci.yml files.
stages:
  - docker-compose-start
  - test
  - build
  - deploy

include:
  - EurekaServer/.gitlab-ci.yml
  - ApiGateway/.gitlab-ci.yml
  - UserService/.gitlab-ci.yml
  - DepositService/.gitlab-ci.yml
  - CreditService/.gitlab-ci.yml
  - InfoService/.gitlab-ci.yml

docker-compose-start:
  stage: docker-compose-start
  tags:
    - shell-runner-1
  script:
    - docker-compose -f docker-compose.yml up --force-recreate -d
In those microservice scenario files I'm using needs to make the stages run in order: first the test stage, then build, and deploy at the end.
When the pipeline starts (from the root .gitlab-ci.yml), the microservice scenarios run in a random order, which makes some build and deploy stages fail.
Is there a way to make the separate microservice .gitlab-ci.yml files run in a specific order? First EurekaServer, second ApiGateway, etc.
My fellow students advised me to make one job that builds all six microservices and another that deploys them all, but I'm not sure that is the right way, because I want to see every microservice's job individually to make troubleshooting easier.
Since needs is already used to make the jobs follow the order test, build, then deploy, stages are not needed for that.
Instead, stages can be used to specify the order in which to run the microservices, like so:

stages:
  - docker-compose-start
  - EurekaServer
  - ApiGateway

The jobs can then declare their stage, e.g. stage: ApiGateway.
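Alternatively, the shared test/build/deploy stages can be kept and the cross-service order expressed with needs alone, since needs can reference jobs defined in other included files. A sketch, where the job names and the build command are illustrative, not taken from the actual project:

```yaml
# Hypothetical excerpt from ApiGateway/.gitlab-ci.yml.
# needs may point at jobs defined in any other included CI file.
build-apigateway:
  stage: build
  needs: ["build-eurekaserver"]   # waits for the EurekaServer build job
  script:
    - ./mvnw -pl ApiGateway package   # assumes a Maven multi-module layout
```

This keeps one job per microservice (so troubleshooting stays granular) while still enforcing the desired order.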

What does ubuntu-latest mean for GitHub Actions?

Today I am dealing with the topic of GitHub Actions, and I am not familiar with CI.
At GitHub I want to create an action. For the time being I use the boilerplate provided by GitHub. I don't understand what runs-on: ubuntu-latest under jobs: build: means. In another tutorial I saw self-hosted. The server I want to deploy to also runs Ubuntu, but that has nothing to do with it, right?
Thank you very much for an answer, feedback, comments and ideas.
GitHub workflow yml
name: CI

# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the master branch
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest

    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2

      # Runs a single command using the runner's shell
      - name: Run a one-line script
        run: echo Hello, world!

      # Runs a set of commands using the runner's shell
      - name: Run a multi-line script
        run: |
          echo Add other actions to build,
          echo test, and deploy your project.
The runner is the application that runs a job and its steps from a GitHub Actions workflow.
It is used by GitHub Actions in the hosted virtual environments, or you can self-host the runner in your own environment.
Basically, GitHub-hosted runners offer a quicker, simpler way to run your workflows, while self-hosted runners are a highly configurable way to run workflows in your own custom environment.
Quoting the Github documentation:
GitHub-hosted runners:
- Receive automatic updates for the operating system, preinstalled packages and tools, and the self-hosted runner application.
- Are managed and maintained by GitHub.
- Provide a clean instance for every job execution.
- Use free minutes on your GitHub plan, with per-minute rates applied after surpassing the free minutes.
Self-hosted runners:
- Receive automatic updates for the self-hosted runner application only. You are responsible for updating the operating system and all other software.
- Can use cloud services or local machines that you already pay for.
- Are customizable to your hardware, operating system, software, and security requirements.
- Don't need to have a clean instance for every job execution.
- Are free to use with GitHub Actions, but you are responsible for the cost of maintaining your runner machines.
At the link shared above you can also see a table listing the available GitHub-hosted runners with their associated labels (such as ubuntu-latest).
So when you specified ubuntu-latest in your workflow, you asked GitHub to provide a runner to execute all the steps contained in your job. It is not related to the server you wish to deploy to, but to the pipeline that will perform the deploy operation (in your case).
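For comparison, switching the same job to a self-hosted runner only changes the runs-on line; a sketch, where the extra labels are illustrative and depend on how the runner was registered:

```yaml
jobs:
  build:
    # Runs on a machine you registered yourself as a self-hosted runner
    # and that carries matching labels (labels here are assumptions).
    runs-on: [self-hosted, linux]
    steps:
      - uses: actions/checkout@v2
      - run: echo Hello from my own machine
```

This is the route to take if the job must run inside your own network, e.g. to deploy directly to that Ubuntu server.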

Effective GitLab CI/CD workflow with multiple Terraform configurations?

My team uses AWS for our infrastructure, across 3 different AWS accounts. We'll call them simply sandbox, staging, and production.
I recently set up Terraform against our AWS infrastructure, and its hierarchy maps against our accounts, then by either application, or AWS service itself. The repo structure looks something like this:
staging
  iam
    groups
      main.tf
    users
      main.tf
  s3
    main.tf
sandbox
  iam
    ...
production
  applications
    gitlab
      main.tf
  route53
    main.tf
...
We're using separate configurations per AWS service (e.g., IAM or S3) or application (e.g., GitLab) so we don't end up with huge .tf files per account that would take a long time to apply updates for any one change. Ideally, we'd like to move away from the service-based configuration approach and move towards more application-based configurations, but the problem at hand remains the same either way.
This approach has been working fine when applying updates manually from the command line, but I'd love to move it to GitLab CI/CD to better automate our workflow, and that's where things have broken down.
In my existing setup, if I make a single change to, say, staging/s3/main.tf, GitLab doesn't seem to have a good way out of the box to run terraform plan or terraform apply only for that specific configuration.
If I instead moved everything into a single main.tf file for an entire AWS account (or multiple files but tied to a single state file), I could simply have GitLab trigger a job to do plan or apply to just that configuration. It might take 15 minutes to run based on the number of AWS resources we have in each account, but it's a potential option I suppose.
It seems like my issue might be ultimately related to how GitLab handles "monorepos" than how Terraform handles its workflow (after all, Terraform will happily plan/apply my changes if I simply tell it what has changed), although I'd also be interested in hearing about how people structure their Terraform environments given -- or in order to avoid entirely -- these limitations.
Has anyone solved an issue like this in their environment?
The nice thing about Terraform is that it's idempotent so you can just apply even if nothing has changed and it will be a no-op action anyway.
If for some reason you really only want to run a plan/apply on a specific directory when things change, then you can achieve this by using only:changes, so that GitLab will only run the job if the specified files have changed.
So if you have your existing structure then it's as simple as doing something like this:
stages:
  - terraform plan
  - terraform apply

.terraform_template:
  image: hashicorp/terraform:latest
  before_script:
    - LOCATION=$(echo ${CI_JOB_NAME} | cut -d":" -f2)
    - cd ${LOCATION}
    - terraform init

.terraform_plan_template:
  stage: terraform plan
  extends: .terraform_template
  script:
    - terraform plan -input=false -refresh=true -module-depth=-1 .

.terraform_apply_template:
  stage: terraform apply
  extends: .terraform_template
  script:
    - terraform apply -input=false -refresh=true -auto-approve=true .

terraform-plan:production/applications/gitlab:
  extends: .terraform_plan_template
  only:
    refs:
      - master
    changes:
      - production/applications/gitlab/*
      - modules/gitlab/*

terraform-apply:production/applications/gitlab:
  extends: .terraform_apply_template
  only:
    refs:
      - master
    changes:
      - production/applications/gitlab/*
      - modules/gitlab/*
I've also assumed the existence of modules that are in a shared location to indicate how this pattern can also look for changes elsewhere in the repo than just the directory you are running Terraform against.
If this isn't the case and you have a flatter structure and you're happy to have a single apply job then you can simplify this to something like:
stages:
  - terraform

.terraform_template:
  image: hashicorp/terraform:latest
  stage: terraform
  before_script:
    - LOCATION=$(echo ${CI_JOB_NAME} | cut -d":" -f2)
    - cd ${LOCATION}
    - terraform init
  script:
    - terraform apply -input=false -refresh=true -auto-approve=true .
  only:
    refs:
      - master
    changes:
      - ${CI_JOB_NAME}/*

production/applications/gitlab:
  extends: .terraform_template
In general, though, this can just be avoided by allowing Terraform to run against all of the appropriate directories on every push (probably only applying on a push to master or another appropriate branch) because, as mentioned, Terraform is idempotent, so it won't do anything if nothing has changed. This also has the benefit that if your automation code hasn't changed but something has changed at your provider (such as someone opening up a security group), Terraform will put it back to how it should be the next time it is triggered.
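Newer GitLab versions (12.3+) prefer the rules: keyword over only/except; one of the apply jobs above could be expressed with it roughly as follows (same placeholder paths as in the earlier examples):

```yaml
# Sketch: equivalent job using rules: instead of only:.
terraform-apply:production/applications/gitlab:
  extends: .terraform_apply_template
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'
      changes:
        - production/applications/gitlab/*
        - modules/gitlab/*
```

rules: combines the ref and changes conditions in a single clause, which tends to be easier to read once more conditions accumulate.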

How to deploy web application to AWS instance from GitLab repository

Right now, I deploy my (Spring Boot) application to an EC2 instance like this:

- Build the JAR file on my local machine
- Deploy/upload the JAR via the scp command (Ubuntu) from my local machine

I would like to automate that process, but:

- without using the Jenkins + Rundeck CI/CD tools
- without using the AWS CodeDeploy service, since it does not support GitLab

Question: Is it possible to perform these two simple steps (currently done manually: building and deploying via scp) with GitLab CI/CD tools, and if so, can you outline the steps to do it?
Thanks!
You need to create a .gitlab-ci.yml file in your repository with CI jobs defined to do the two tasks you've defined.
Here's an example to get you started.
stages:
  - build
  - deploy

build:
  stage: build
  image: gradle:jdk
  script:
    - gradle build
  artifacts:
    paths:
      - my_app.jar

deploy:
  stage: deploy
  image: ubuntu:latest
  script:
    - apt-get update
    - apt-get -y install openssh-client
    - scp my_app.jar target.server:/my_app.jar
In this example, the build job runs a gradle container and uses gradle to build the app. GitLab CI artifacts are used to capture the built jar (my_app.jar), which is passed on to the deploy job.
The deploy job runs an ubuntu container, installs openssh-client (for scp), then executes scp to copy my_app.jar (passed from the build job) to the target server.
You have to fill in the actual details of building and copying your app. For secrets like SSH keys, set project-level CI/CD variables that will be passed in to your CI jobs.
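One common way to wire in that secret is to store the private key in a CI/CD variable and write it to a file in the job. A sketch, assuming a project-level variable named SSH_PRIVATE_KEY (the name, host, and user are placeholders):

```yaml
deploy:
  stage: deploy
  image: ubuntu:latest
  script:
    - apt-get update && apt-get -y install openssh-client
    # SSH_PRIVATE_KEY is an assumed project-level CI/CD variable
    - mkdir -p ~/.ssh
    - echo "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    # Disabling host-key checking is a shortcut; a known_hosts entry is safer
    - scp -o StrictHostKeyChecking=no my_app.jar user@target.server:/my_app.jar
```

Mark the variable as protected and masked in the project settings so it is not exposed in job logs or to unprotected branches.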
Create a shell file with the following contents.

#!/bin/bash
# Copy the JAR file to EC2 via SCP, with the PEM in the home directory (usually /home/ec2-user)
scp -i user_key.pem file.txt ec2-user@my.ec2.id.amazonaws.com:/home/ec2-user

# SSH to the EC2 instance
ssh -T -i "bastion_keypair.pem" ec2-user@my.ec2.id.amazonaws.com /bin/bash <<-'END2'
    # The following commands will be executed automatically by bash.
    # Consider this a remote shell script.
    killall java
    java -jar ~/myJar.jar server ~/config.yml &>/dev/null &
    echo 'done'
    # Once completed, the shell will exit.
END2
In 2020, this became easier with GitLab 13.0 (May 2020), building on an older feature, Auto DevOps (introduced in GitLab 11.0, June 2018).
Auto DevOps provides pre-defined CI/CD configuration allowing you to automatically detect, build, test, deploy, and monitor your applications.
Leveraging CI/CD best practices and tools, Auto DevOps aims to simplify the setup and execution of a mature and modern software development lifecycle.
Overview
But now (May 2020):
Auto Deploy to ECS
Until now, there hasn't been a simple way to deploy to Amazon Web Services. As a result, GitLab users had to spend a lot of time figuring out their own configuration.
In GitLab 13.0, Auto DevOps has been extended to support deployment to AWS!
GitLab users who are deploying to AWS Elastic Container Service (ECS) can now take advantage of Auto DevOps, even if they are not using Kubernetes. Auto DevOps simplifies and accelerates delivery and cloud deployment with a complete delivery pipeline out of the box. Simply commit code and GitLab does the rest! With these complexities eliminated, teams can focus on the innovative aspects of software creation!
In order to enable this workflow, users need to:
- define the AWS-typed environment variables AWS_ACCESS_KEY_ID, AWS_ACCOUNT_ID, and AWS_REGION, and
- enable Auto DevOps.
Then your ECS deployment will be built for you automatically, with a complete, automatic delivery pipeline.
See the documentation and the issue.

Is it possible to add CI info in push?

We are using GitLab CE and GitLab Runner for CI/CD on our stage servers. We have a branch, let's say dev1, where we need to do different tasks for different changes.
E.g. for frontend work we need to start a compiler, and for backend we need to run PHPUnit.
Can I decide in the push what kind of pipeline I want to start? I saw tags, but I suppose they mean different things in git (versioning) and GitLab (runners).
Is there a best practice for this use case, or do I have to use two different branches?
You can define two manual jobs for the dev1 branch and decide yourself which one to invoke.

run-php-unit:
  stage: build
  script:
    - echo "Running php unit"
  when: manual
  only:
    - dev1

start-compiler:
  stage: build
  script:
    - echo "Starting compiler"
  when: manual
  only:
    - dev1
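If you would rather decide at push time than click a manual job, newer GitLab versions (12.6+) support push options that set pipeline variables, which jobs can match with rules. A sketch; the variable name BUILD_KIND is a placeholder:

```yaml
run-php-unit:
  stage: build
  script:
    - echo "Running php unit"
  rules:
    # Job only runs when the pusher sets this variable (assumed name)
    - if: '$BUILD_KIND == "backend"'
```

It would then be triggered from the command line with something like git push -o ci.variable="BUILD_KIND=backend".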
