How to create a manual trigger for multiple parallel jobs

I'd like to have a blocking manual action that will trigger multiple parallel jobs in the next stage. How can I achieve this? For example:
1. deploy-dev runs on merge
2. The pipeline waits on a single manual trigger
3. deploy-prd-1 and deploy-prd-2 run in parallel
Here's what I've tried:
Attempt 1:
stages:
  - deploy-dev
  - deploy-prd-1
  - deploy-prd-2

deploy-dev:
  stage: deploy-dev
  script:
    - echo deploy-dev

deploy-prd-1:
  when: manual
  allow_failure: false
  stage: deploy-prd-1
  script:
    - echo deploy-prd-1

deploy-prd-2:
  stage: deploy-prd-2
  script:
    - echo deploy-prd-2
This achieves 1 and 2, but fails on 3: deploy-prd-1 and deploy-prd-2 run in series, not in parallel.
Attempt 2:
stages:
  - deploy-dev
  - deploy-prd

deploy-dev:
  stage: deploy-dev
  script:
    - echo deploy-dev

deploy-prd-1:
  when: manual
  allow_failure: false
  stage: deploy-prd
  script:
    - echo deploy-prd-1

deploy-prd-2:
  stage: deploy-prd
  script:
    - echo deploy-prd-2
This fails on 2: deploy-prd-2 runs automatically, without waiting on the manual trigger.
Attempt 3:
stages:
  - deploy-dev
  - deploy-prd

deploy-dev:
  stage: deploy-dev
  script:
    - echo deploy-dev

deploy-prd-1:
  when: manual
  allow_failure: false
  stage: deploy-prd
  script:
    - echo deploy-prd-1

deploy-prd-2:
  when: manual
  allow_failure: false
  stage: deploy-prd
  script:
    - echo deploy-prd-2
This isn't ideal as it requires manually triggering each parallel job separately.
Attempt 4:
stages:
  - deploy-dev
  - approve
  - deploy-prd

deploy-dev:
  stage: deploy-dev
  script:
    - echo deploy-dev

approve:
  stage: approve
  when: manual
  allow_failure: false

deploy-prd-1:
  stage: deploy-prd
  script:
    - echo deploy-prd-1

deploy-prd-2:
  stage: deploy-prd
  script:
    - echo deploy-prd-2
This is my attempt to set up a manual "gate", but GitLab rejects the file with Error: jobs:approve script can't be blank.
I can set a no-op like script: [":"], but then GitLab will spin up a container to do nothing, which wastes time and resources.

I'm assuming the reason you want a manual gate is that you want to do some testing before you publish your app.
Instead of doing everything in a single pipeline, a change in mindset might work better.
Use the GitLab CI rules keyword to make each of the two groups of jobs run on its own trigger (a scheduled pipeline can have its schedule left inactive, so it effectively becomes a manual trigger that requires you to push a button).
Then break your CI down: build and deploy to your dev environment first, run your tests, and when you are happy trigger the deploy-to-prod jobs.
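As a sketch of that rules approach (assuming the production pipeline is the one you start by hand, so $CI_PIPELINE_SOURCE is "web"; job names follow the question):

```yaml
deploy-prd-1:
  stage: deploy-prd
  rules:
    # run only in pipelines started manually from the web UI
    - if: '$CI_PIPELINE_SOURCE == "web"'
  script:
    - echo deploy-prd-1

deploy-prd-2:
  stage: deploy-prd
  rules:
    - if: '$CI_PIPELINE_SOURCE == "web"'
  script:
    - echo deploy-prd-2
```

Because both jobs share the deploy-prd stage, a manually started pipeline runs them in parallel.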

Related

How to run CI stage conditionally

I have a few stages to run:

stages:
  - build
  - test-sast
  - deploy

In the test-sast stage above I have around 8 jobs, but I want to run this stage only on a particular branch.
So one possible solution is to go to each job of the test-sast stage and add a condition, i.e.

  - if: '$CI_COMMIT_TAG =~ /^release-.*/ || $CI_COMMIT_BRANCH == "master"'
    when: never

But if I do so then I need to make the change in each job, i.e. in 8 places. Instead, is it possible to add a similar condition to the test-sast stage directly, so that I can make the change in a single place and it is easier to maintain?
An option is to use an extends template, add it to each job, and have one central place (the .test-rules:template) to handle the rules.

.test-rules:template:
  rules:
    - if: '$CI_COMMIT_TAG =~ /^release-.*/ || $CI_COMMIT_BRANCH == "master"'
      when: never

test-sast-1:
  stage: test-sast
  extends:
    - .test-rules:template
  script:
    - echo "test"
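Each of the 8 jobs then needs only the extends: line, so changing the rule in the template updates them all; for example, a hypothetical second job:

```yaml
test-sast-2:
  stage: test-sast
  extends:
    - .test-rules:template
  script:
    - echo "another sast job"
```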
You can also use the Merge Key Language-Independent Type from YAML version 1.1, as shown below:

.master_only: &master_only
  rules:
    - if: '$CI_COMMIT_TAG =~ /^release-.*/ || $CI_COMMIT_BRANCH == "master"'
      when: never

test-sast-1:
  stage: test-sast
  # put your relevant statements here
  <<: *master_only

using for-loop in azure pipeline jobs

I want to use a for-loop that scans the files (value-f1.yaml, values-f2.yaml, ...) in a folder and, for each one, uses the filename as a variable to run an Azure pipeline job that deploys the Helm chart based on that values file. The folder is located in the GitHub repository. So I'm thinking of something like this:
pipeline.yaml
stages:
  - stage: Deploy
    variables:
      azureResourceGroup: ''
      kubernetesCluster: ''
      subdomain: ''
    jobs:
      ${{ each filename in /myfolder/*.yaml }}:
        valueFile: $filename
        - template: Templates/deploy-helmchart.yaml@pipelinetemplates
deploy-helmchart.yaml
jobs:
  - job: Deploy
    pool:
      vmImage: 'ubuntu-latest'
    steps:
      - task: HelmInstaller@1
        displayName: 'Installing Helm'
        inputs:
          helmVersionToInstall: '2.15.1'
        condition: and(succeeded(), startsWith(variables['build.sourceBranch'], 'refs/tags/v'))
      - task: HelmDeploy@0
        displayName: 'Initializing Helm'
        inputs:
          connectionType: 'Azure Resource Manager'
          azureSubscription: $(azureSubscription)
          azureResourceGroup: $(azureResourceGroup)
          kubernetesCluster: $(kubernetesCluster)
          command: 'init'
        condition: and(succeeded(), startsWith(variables['build.sourceBranch'], 'refs/tags/v'))
      - task: PowerShell@2
        displayName: 'Fetching GitTag'
        inputs:
          targetType: 'inline'
          script: |
            # Write your PowerShell commands here.
            Write-Host "Fetching the latest GitTag"
            $gt = git describe --abbrev=0
            Write-Host "##vso[task.setvariable variable=gittag]$gt"
        condition: and(succeeded(), startsWith(variables['build.sourceBranch'], 'refs/tags/v'))
      - task: Bash@3
        displayName: 'Fetching repo-tag'
        inputs:
          targetType: 'inline'
          script: |
            echo GitTag=$(gittag)
            echo BuildID=$(Build.BuildId)
            echo SourceBranchName=$(Build.SourceBranchName)
            echo ClusterName=$(kubernetesCluster)
      - task: HelmDeploy@0
        displayName: 'Upgrading helmchart'
        inputs:
          connectionType: 'Azure Resource Manager'
          azureSubscription: $(azureSubscription)
          azureResourceGroup: $(azureResourceGroup)
          kubernetesCluster: $(kubernetesCluster)
          command: 'upgrade'
          chartType: 'FilePath'
          chartPath: $(chartPath)
          install: true
          releaseName: $(releaseName)
          valueFile: $(valueFile)
          arguments: '--set image.tag=$(gittag) --set subdomain=$(subdomain)'
        condition: and(succeeded(), startsWith(variables['build.sourceBranch'], 'refs/tags/v'))
Another question: do the jobs get access to the GitHub repo by default, or do I need to do something at the job level?
Besides, how can I use a for-loop in the job for this case?
Any help would be appreciated.
Updated after getting comments from @Leo
Here is a PowerShell task that I added in deploy-helmchart.yaml for fetching the files from a folder in GitHub.
- task: PowerShell@2
  displayName: 'Fetching Files'
  inputs:
    targetType: 'inline'
    script: |
      Write-Host "Fetching values files"
      cd myfolder
      $a = git ls-files
      foreach ($i in $a) {
        Write-Host "##vso[task.setvariable variable=filename]$i"
        Write-Host "printing" $i
      }
Now the question is: how can I run the HelmDeploy@0 task for each file using parameters?
if the jobs can get access to the GitHub repo by default or do I need to do something in the job level?
The answer is yes.
We could add a command line task in the jobs, like job1, to clone the GitHub repository with a GitHub PAT; then we can access those files (value-f1.yaml, values-f2.yaml, ...) in $(Build.SourcesDirectory):

git clone https://<GithubPAT>@github.com/XXXXX/TestProject.git
Besides how can I use for-loop in the job for this case?
You could create a template which has a set of steps, and pass parameters across during your build, like:
deploy-helmchart.yaml:

parameters:
  param: []

steps:
  - ${{ each filename in parameters.param }}:
      - script: 'echo ${{ filename }}'

pipeline.yaml:

steps:
  - template: deploy-helmchart.yaml
    parameters:
      param: ["filename1", "filename2", "filename3"]
Check the document Solving the looping problem in Azure DevOps Pipelines for some more details.
Command line to get the latest file name in the folder:

FOR /F "delims=|" %%I IN ('DIR "$(Build.SourcesDirectory)\*.txt*" /B /O:D') DO SET NewestFile=%%I
echo "##vso[task.setvariable variable=NewFileName]%NewestFile%"
Update:
Now the question is how can I run the task HelmDeploy@0 for each file using parameters?
It depends on whether your HelmDeploy task has an option to accept the filename parameter.
As I said before, we could use the following YAML to invoke the template with parameters:

- template: deploy-helmchart.yaml
  parameters:
    param: ["filename1", "filename2", "filename3"]

But if the HelmDeploy task has no option to accept the parameter, we cannot run HelmDeploy@0 for each file using parameters.
I then checked HelmDeploy@0 and found there is only one option that can accept Helm command parameters.
So the answer depends on whether your file name can be used in a Helm command: if not, you cannot run HelmDeploy@0 for each file using parameters; if yes, you can.
Please check the official document Templates for some more details.
Hope this helps.

Multiple on_failure events for each job in gitlab?

GitLab CI offers the 'when: on_failure' error handler. What if I want a different handler for each job? on_failure is triggered if any preceding job has failed, so if I set that handler after each job and one job fails, all the handlers will be launched. How do I avoid that?
I don't believe there's any way to do this currently, but there is an open issue on the GitLab repo asking for similar functionality: https://gitlab.com/gitlab-org/gitlab/issues/19400. If implemented, it would let you specify which job should run for which failed job.
Until then, though, the only way you can control the error handling is to do it in the job script like this:

./script_that_fails.sh || FAILED=true
if [ $FAILED ]; then
  ./error_handling_script.sh
fi
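Note that after the handler runs, the job would otherwise be marked as passed; if you want it to still count as failed, the script has to report the failure itself. A runnable sketch, with hypothetical stand-ins for the two scripts above:

```shell
# Hypothetical stand-ins for the real scripts in this answer
script_that_fails() { echo "doing work"; return 1; }
error_handling_script() { echo "handling failure"; }

run_with_handler() {
  FAILED=
  script_that_fails || FAILED=true   # '||' keeps GitLab's 'set -e' from aborting here
  if [ "$FAILED" = true ]; then
    error_handling_script
    return 1                         # still report the failure after handling it
  fi
}

run_with_handler || echo "job would be marked failed"
```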
I just managed to handle a single failure job for a specific job's failure.
It's a little tricky: I create an artifact on failure, and in the on_failure job I try to read the artifact and execute a script to handle the failure. If the artifact doesn't exist, I just echo a simple message. I found this workaround in the last 10 minutes, so it could be optimized, but it does the job.
With this you can have a specific on_failure job to handle the failure of different jobs.
stages:
  - stage1
  - stage1:failure
  - stage2
  - stage2:failure

job1:
  stage: stage1
  script:
    - echo "FAIL" > JOB1
    - ...fail something
  artifacts:
    when: on_failure
    paths:
      - JOB1

job1:failure:
  stage: stage2:failure
  script:
    - cat JOB1 && some_script_to_handle_failure || echo "Not a JOB1 Failure"
  when: on_failure

job2:
  stage: stage2
  script:
    - echo "FAIL" > JOB2
    # Won't execute since job1 fails
  artifacts:
    when: on_failure
    paths:
      - JOB2

job2:failure:
  stage: stage2:failure
  script:
    - cat JOB2 && some_script_to_handle_failure || echo "Not a JOB2 Failure"
    # Will echo "Not a JOB2 Failure" since the JOB2 artifact doesn't exist.
  when: on_failure
Hope it will help.

Can't share global variable value between jobs in gitlab ci yaml file

I'm trying to build an application using GitLab CI.
The name of the generated file depends on the time, in this format:
DEV_APP_yyyyMMddhhmm
(example: DEV_APP_201810221340, corresponding to today's date, 2018/10/22 at 13:40).
How can I store this name in a global variable inside the .gitlab-ci.yml file?
Here is my .gitlab-ci.yml file:
image: docker:latest

services:
  - docker:dind

variables:
  DOCKER_DRIVER: overlay
  SPRING_PROFILES_ACTIVE: gitlab-ci
  # TIME: ""
  # BRANCH: ""
  # REC_BUILD_NAME: ""
  TIME: "timex"
  BRANCH: "branchx"
  DEV_BUILD_NAME: "DEV_APP_x"

stages:
  - preparation
  - build
  - package
  - deploy
  - manual_rec_build
  - manual_rec_package

job_preparation:
  stage: preparation
  script:
    - echo ${TIME}
    - export TIME=$(date +%Y%m%d%H%M)
    - "BRANCH=$(echo $CI_BUILD_REF_SLUG | sed 's/[^[[:alnum:]]/_/g')"
    - "DEV_BUILD_NAME=DEV_APP_${BRANCH}_${TIME}"
    - echo ${TIME}
- echo ${TIME}
maven-build:
  image: maven:3-jdk-8
  stage: build
  script:
    - echo ${TIME}
    - "mvn package -B"
  artifacts:
    paths:
      - target/*.jar
  only:
    - merge-requests
    - /^feature\/sprint.*$/
    - /^DEV_.*$/
  # when: manual

docker-build:
  stage: package
  script:
    - echo ${TIME}
    - docker build -t registry.gitlab.com/mourad.sellam/actuator-simple .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker push registry.gitlab.com/mourad.sellam/actuator-simple
  only:
    - merge-requests
    - /^feature\/sprint.*$/
    - /^DEV_.*$/
  when: manual

k8s-deploy-production:
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo ${TIME}
    - echo "$GOOGLE_KEY" > key.json
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone europe-west1-c
    - gcloud config set project actuator-sample
    - gcloud config set container/use_client_certificate True
    - gcloud container clusters get-credentials actuator-example
    - kubectl delete secret registry.gitlab.com
    - kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com --docker-username=myUserName --docker-password=$REGISTRY_PASSWD --docker-email=myEmail@gmail.com
    - kubectl apply -f deployment.yml --namespace=production
  environment:
    name: production
    url: https://example.production.com
  when: manual

job_manual_rec_build:
  image: maven:3-jdk-8
  stage: manual_rec_build
  script:
    - echo ${TIME}
    - "mvn package -B"
  artifacts:
    paths:
      - target/*.jar
  when: manual
  # allow_failure: false

job_manual_rec_package:
  stage: manual_rec_package
  script:
    - echo ${TIME}
    - echo ${DEV_BUILD_NAME}
    - docker build -t registry.gitlab.com/mourad.sellam/actuator-simple:${DEV_BUILD_NAME} .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker push registry.gitlab.com/mourad.sellam/actuator-simple
  artifacts:
    paths:
      - target/*.jar
  when: on_success
# Test 1
When I call:

echo ${TIME}

it displays "timex": the echo shows the initial value, not the one exported in job_preparation.
Could you tell me how to store a global variable and set it in each job?
Check if GitLab 13.0 (May 2020) could help in your case:
Inherit environment variables from other jobs
Passing environment variables (or other data) between CI jobs is now possible.
By using the dependencies keyword (or needs keyword for DAG pipelines), a job can inherit variables from other jobs if they are sourced with dotenv report artifacts.
This offers a more graceful approach for updating variables between jobs compared to artifacts or passing files.
See documentation and issue.
You can inherit environment variables from dependent jobs.
This feature makes use of the artifacts:reports:dotenv report feature.
Example with the dependencies keyword:

build:
  stage: build
  script:
    - echo "BUILD_VERSION=hello" >> build.env
  artifacts:
    reports:
      dotenv: build.env

deploy:
  stage: deploy
  script:
    - echo $BUILD_VERSION  # => hello
  dependencies:
    - build
Example with the needs keyword:

build:
  stage: build
  script:
    - echo "BUILD_VERSION=hello" >> build.env
  artifacts:
    reports:
      dotenv: build.env

deploy:
  stage: deploy
  script:
    - echo $BUILD_VERSION  # => hello
  needs:
    - job: build
      artifacts: true
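The build.env file consumed by artifacts:reports:dotenv is just plain KEY=VALUE lines, one per variable. A quick local sketch of what the build job writes (BUILD_TIME is an extra illustrative variable, not from the original answer):

```shell
# Emit dotenv-style KEY=VALUE lines, as the build job above does
echo "BUILD_VERSION=hello" > build.env
echo "BUILD_TIME=$(date +%Y%m%d%H%M)" >> build.env   # extra illustrative variable
cat build.env
```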
You can use artifacts for passing data between jobs. Here's an example from Flant that checks a previous pipeline's manual decision:

approve:
  script:
    - mkdir -p .ci_status
    - echo $(date +%s) > .ci_status/approved
  artifacts:
    paths:
      - .ci_status/

NOT approve:
  script:
    - mkdir -p .ci_status
    - echo $(date +%s) > .ci_status/not_approved
  artifacts:
    paths:
      - .ci_status/

deploy to production:
  script:
    - if [[ $(cat .ci_status/not_approved) > $(cat .ci_status/approved) ]]; then echo "Need approve from release engineer!"; exit 1; fi
    - echo "deploy to production!"
There's an open issue, 47517 'Pass variables between jobs', on GitLab CE:
CI/CD often needs to pass information from one job to another and
artifacts can be used for this, although it's a heavy solution with
unintended side effects. Workspaces is another proposal for passing
files between jobs. But sometimes you don't want to pass files at all,
just a small bit of data.
I have faced the same issue, and worked around it by storing the data in a file, then accessing it in the other jobs.
You kind of can... The way I went about it:
You can send a POST request to the project to save a variable:

export VARIABLE=secret
curl --request POST --header "PRIVATE-TOKEN: $CI_ACCESS_TOKEN" "https://gitlab.example.com/api/v4/projects/$CI_PROJECT_ID/variables/" --form "key=VARIABLE" --form "value=$VARIABLE"

and clean up after the work/trigger is finished:

curl --request DELETE --header "PRIVATE-TOKEN: $CI_ACCESS_TOKEN" "https://gitlab.example.com/api/v4/projects/$CI_PROJECT_ID/variables/VARIABLE"

I'm not sure it's supposed to be used this way, but it does the trick: the variable is accessible to all the following jobs (especially useful when you use trigger, and a script in that job is not an option).
Just please make sure you run the cleanup job even if previous ones fail...

Is it possible to change Gitlab CI variable value after pipeline has started?

I'm trying to create a dynamic GitLab pipeline based on its own execution progress. For example, I have 2 environments, and deployment to each of them should be enabled/disabled based on the result of the script in before_script. It doesn't work for me; it seems a pipeline variable's value can't be changed after the pipeline has started. Any suggestions? (Please see my gitlab-ci.yml below.)
variables:
  RELEASE: limited

stages:
  - build
  - deploy

before_script:
  - export RELEASE=${check-release-type-dynamically.sh}

build1:
  stage: build
  script:
    - echo "Do your build here"

## DEPLOYMENT
deploy_production_ga:
  stage: update_prod_env
  script:
    - echo "deploy environment for all customers"
  allow_failure: false
  only:
    - branches
  only:
    variables:
      - $RELEASE == "general_availability"

deploy_production_limited:
  stage: update_prod_env
  script:
    - echo "deploy environment for limited customers"
  allow_failure: false
  only:
    - branches
  only:
    variables:
      - $RELEASE == "limited"
Variables can't be evaluated in the job definition. If you really want to use a shell script to decide what gets deployed, you could use a Bash if clause:
stages:
  - build
  - update_prod_env

build1:
  stage: build
  script:
    - echo "Do your build here"

deploy_production_ga:
  stage: update_prod_env
  script:
    - if [ "$(./check-release-type-dynamically.sh)" == "general_availability" ]; then
        echo "deploy environment for all customers";
      fi
  only:
    - branches

deploy_production_limited:
  stage: update_prod_env
  script:
    - if [ "$(./check-release-type-dynamically.sh)" == "limited" ]; then
        echo "deploy environment for all customers";
      fi
  only:
    - branches
However, this is really bad design. Both jobs will be executed on every commit, but only one will do something. It is better to distinguish them by branch, and only commit things to the branch you want to deploy to:
stages:
  - build
  - update_prod_env

build1:
  stage: build
  script:
    - echo "Do your build here"

deploy_production_ga:
  stage: update_prod_env
  script:
    - echo "deploy environment for all customers"
  only:
    - branches-general_availability

deploy_production_limited:
  stage: update_prod_env
  script:
    - echo "deploy environment for all customers"
  only:
    - branches-limited
This way, only the deploy job you want to run gets executed.
A couple of other things I noticed:
export RELEASE=${check-release-type-dynamically.sh}: use $( ) for command substitution instead of ${ }, and since the shell script lives in the repository you must prefix it with ./. It should look like:

export RELEASE=$(./check-release-type-dynamically.sh)

allow_failure: false is the default in gitlab-ci and not necessary.
Each deploy job also defines only: twice; duplicate keys are not allowed in a YAML mapping, so only one of the two conditions takes effect. Combine them into a single only: block:

only:
  refs:
    - branches
  variables:
    - $RELEASE == "general_availability"

Have a look at https://docs.gitlab.com/ee/ci/yaml/
