Check that gitlab branch has changes before running jobs - bash

In order to stop GitLab's current default behaviour of starting pipelines on branch creation, I am trying to add a check to each job so that only merge requests with changes trigger jobs.
This is what I got so far:
rules:
  - if: '[$CI_PIPELINE_SOURCE == "merge_request_event"] && [! git diff-index --quiet HEAD --]'
I am not quite familiar with bash, which is surely the problem, because I am currently encountering a 'yaml invalid' error :d
PS: Is there maybe a better way to do this than adding the check to each job?

I don't know if it is useful, but GitLab CI provides the only job keyword, which you can combine with changes and a list of file paths; this way you can execute jobs only when there are changes to the code you are interested in.
Example
docker build:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  only:
    refs:
      - branches
    changes:
      - Dockerfile
      - docker/scripts/*
      - dockerfiles/**/*
      - more_scripts/*.{rb,py,sh}
      - "**/*.json"
DOC: https://docs.gitlab.com/ee/ci/yaml/#onlychanges--exceptchanges
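For what it's worth, newer GitLab releases express the same idea with rules:changes; a hedged sketch of an equivalent job (job name and paths reused from the example above, not verified against any particular project):

```yaml
# Sketch: rules:changes equivalent of the only:changes example above.
# Runs the job for merge request pipelines only when listed paths changed.
docker build:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      changes:
        - Dockerfile
        - docker/scripts/*
```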

I am not quite familiar with bash which is surely the problem because
I am currently encountering a 'yaml invalid' error :d
The issue seems to be with
[! git diff-index --quiet HEAD --]
You cannot use bash syntax in GitLab rules, but you can in the script section, as the name implies.
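As a sketch of what that command actually reports: git diff-index --quiet HEAD -- exits 0 when the index matches HEAD and 1 when it does not, which is why it only makes sense inside a script: section. This throwaway-repository demo (paths, file names, and committer identity are made up) shows both exit codes:

```shell
# Demo: exit status of `git diff-index --quiet HEAD --` in a throwaway repo.
# 0 = index matches HEAD (clean), non-zero = there are staged changes.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=ci -c user.email=ci@example.com commit -q --allow-empty -m init

git diff-index --quiet HEAD -- && clean_status=0   # clean tree: exits 0
echo change > file.txt
git add file.txt
git diff-index --quiet HEAD -- || dirty_status=$?  # staged change: exits 1
echo "clean=$clean_status dirty=$dirty_status"
```

A job's script: section could use the same exit status directly in an if or && chain.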
In order to stop the current default Gitlab behaviour of starting
pipelines on branch creation
If this is your goal I would recommend the following rules
workflow:
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_COMMIT_BEFORE_SHA == "0000000000000000000000000000000000000000"
      when: never
    - if: $CI_PIPELINE_SOURCE == "push"
Let's break down the aforementioned rules
The following rule will be true for merge requests
if: $CI_PIPELINE_SOURCE == "merge_request_event"
The predefined variable CI_COMMIT_BEFORE_SHA is set to 0000000000000000000000000000000000000000 when you push a newly created branch or create a merge request.
Therefore, the following rule on its own would stop both branch-creation pipelines and merge request pipelines:
- if: $CI_COMMIT_BEFORE_SHA == "0000000000000000000000000000000000000000"
  when: never
BUT merge requests are still accepted by the previous rule, given how GitLab evaluates rules.
Quoting from https://docs.gitlab.com/ee/ci/jobs/job_control.html#specify-when-jobs-run-with-rules
Rules are evaluated in order until the first match. When a match is
found, the job is either included or excluded from the pipeline
Finally, the following rule will be true for a new pushed commit
- if: $CI_PIPELINE_SOURCE == "push"
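If you ever did want this guard inside a single job instead of workflow rules, the all-zeros check translates to a one-line shell test. A minimal sketch (the variable is set by hand here to simulate a branch-creation pipeline; in a real job GitLab provides it):

```shell
# Sketch: skip work when the pipeline was triggered by branch creation.
# GitLab would set CI_COMMIT_BEFORE_SHA; we simulate the all-zeros value.
CI_COMMIT_BEFORE_SHA="0000000000000000000000000000000000000000"
if [ "$CI_COMMIT_BEFORE_SHA" = "0000000000000000000000000000000000000000" ]; then
  verdict="skip: branch-creation pipeline"
else
  verdict="run"
fi
echo "$verdict"
```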
PS: Is there maybe a better way to do this instead of adding the check
to each task?
The aforementioned rules don't have to be added to each job; instead, they are configured once per pipeline. Take a look at
https://docs.gitlab.com/ee/ci/yaml/workflow.html
Basically, add the workflow rules statement at the start of your .gitlab-ci.yml and you are good to go.

Related

Pass file variable to gitlab job

I am having trouble dynamically passing one of two file-based variables to a job.
I have defined two file variables in my CI/CD settings that contain my helm values for deployments to the development and production clusters. They are typical yaml syntax; their content does not really matter.
baz:
  foo: bar
I have also defined two jobs for the deployment that depend on a general deployment template .deploy.
.deploy:
  variables:
    DEPLOYMENT_NAME: ""
    HELM_CHART_NAME: ""
    HELM_VALUES: ""
  before_script:
    - kubectl ...
  script:
    - helm upgrade $DEPLOYMENT_NAME charts/$HELM_CHART_NAME
      --install
      --atomic
      --debug
      -f $HELM_VALUES
The specialization happens in two jobs, one for dev and one for prod.
deploy:dev:
  extends: .deploy
  variables:
    DEPLOYMENT_NAME: my-deployment
    HELM_CHART_NAME: my-dev-chart
    HELM_VALUES: $DEV_HELM_VALUES # from CI/CD variables

deploy:prod:
  extends: .deploy
  variables:
    DEPLOYMENT_NAME: my-deployment
    HELM_CHART_NAME: my-prod-chart
    HELM_VALUES: $PROD_HELM_VALUES # from CI/CD variables
The command that fails is the one in the script section of .deploy. If I pass in $DEV_HELM_VALUES or $PROD_HELM_VALUES directly, the deployment is triggered. However, if I use $HELM_VALUES as described above, the command fails (Error: "helm upgrade" requires 2 arguments, which is very misleading).
The problem is that $HELM_VALUES, as accessed in the command, is already the resolved content of the file, whereas $DEV_HELM_VALUES and $PROD_HELM_VALUES resolve to file paths that work with the -f syntax.
This can be seen using echo in the job's output:
echo "$DEV_HELM_VALUES"
/builds/my-company/my-deployment.tmp/DEV_HELM_VALUES
echo "$HELM_VALUES"
baz:
  foo: bar
How can I make sure $HELM_VALUES only points to one of the files and does not contain the file's content?
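One hedged idea, not from the question itself: store the name of the file variable in HELM_VALUES and resolve it indirectly inside the script, so helm receives the file's path rather than its content. A minimal shell sketch, where DEV_HELM_VALUES stands in for GitLab's file-type variable (the path is the one echoed in the job output above):

```shell
# Sketch: keep the NAME of the file-type variable and resolve it indirectly.
# DEV_HELM_VALUES stands in for GitLab's file variable, which expands to a
# path on disk rather than to the file's content.
DEV_HELM_VALUES="/builds/my-company/my-deployment.tmp/DEV_HELM_VALUES"
HELM_VALUES="DEV_HELM_VALUES"                      # the name, not the value
resolved=$(eval "printf '%s' \"\$$HELM_VALUES\"")  # POSIX-safe indirection
echo "$resolved"
```

helm upgrade ... -f "$resolved" would then read the values file from disk as intended.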

Jenkins Pipeline throws "syntax error: bad substitution" when Passing in Parameter

I have a Terraform project with which I was trying to use Jenkins's Custom Checkbox plugin (Custom Checkbox Parameter), so that I can build separate applications dynamically using the same IaC. However, I'm getting the following error when passing the plugin's name parameter into the Terraform plan and apply commands:
syntax error: bad substitution
The idea for all this is just to click on "select all" or each individual app and run the build, and this will create the IaC for the given application(s).
I have a terraform plan that I am running as a smoke test to verify the parameters above are being passed in correctly before running the apply. This looks like the following:
sh 'terraform plan -var-file="terraform-dev.tfvars" -var "app_name=[${params[${please-work}]}]" -input=false'
The documentation for the plugin states that you can reference the checked items using this format: "${params['please-work']}", which is what I've done above. That said, one caveat is that I'm having to wrap the values in quotes for this to work, since the variables are defined in Terraform as list(string).
NOTE: I have tested that all of this works if I just hardcode the app names with the escapes, as follows:
sh 'terraform plan -var-file="terraform-dev.tfvars" -var "app_name=[\\"app-1\\",\\"app-2\\"]" -input=false'
Again, what I need is for this to work with the -var "app_name=[${params[${please-work}]}]" without throwing that error.
If needed, here is the setup for the JSON that the plugin is using:
Additionally, I can see the values are being set the way I need them when running echo "${params['please-work']}" in the initial build step; they come back as "app-1", "app-2".
Again, all but that one bit is working, and I've tried various ways to escape the needed strings to get this to work. Any insight on a path forward would be greatly appreciated.
You are casting the script argument of your sh step as a literal (single-quoted) string, so the pipeline variable params, an object within the Groovy pipeline interpreter, is not interpolated. You are also passing the app_name value with [] syntax (an attempted list constructor?), which is not syntactically valid for shell, Terraform, or JSON, though Jenkins Pipeline and Groovy accept it with undesired behavior (it is unclear what is desired here). Finally, please-work is a literal string and not a Jenkins Pipeline or Groovy variable, and since params is technically an object and not a Map, you must use the . accessor syntax rather than []. You must update with:
sh(label: 'Execute Terraform Plan', script: "terraform plan -var-file='terraform-dev.tfvars' -var 'app_name=${params.please-work}' -input=false")
If another issue arises after fixing all of this, it would be recommended to convert the plugin usage to a parameters directive in the pipeline, and probably also to remove unusual characters such as - from the parameter name.
Thanks for helping me think through this, Matt. I was able to resolve the issue with the following shell script in the declarative pipeline:
sh "terraform plan -var-file='terraform-dev.tfvars' -var 'app_name=[${params['please-work']}]' -input=false"
This is working now.
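The underlying pitfall, a single-quoted string suppressing interpolation, has a direct shell analogue that makes the difference easy to see (the variable name below is hypothetical, standing in for the params accessor):

```shell
# Demo: single quotes pass ${...} through literally; double quotes expand it.
# The same distinction applied to the Groovy string handed to the sh step.
app_list='"app-1","app-2"'
literal='app_name=[${app_list}]'    # no expansion inside single quotes
expanded="app_name=[${app_list}]"   # expansion inside double quotes
echo "$literal"
echo "$expanded"
```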

Gitlab CI/CD Trigger only a single stage in gitlab-ci.yml file on a scheduled pipeline

I want to run a single stage in GitLab from a yml file that contains a lot of stages. I don't want to have to add this to every single stage to avoid running all of them:
except:
  refs:
    - schedules
Instead of explicitly defining the except tag for each job, you can define it once as an anchor:
.job_template: &job_definition
  except:
    refs:
      - schedules

test1:
  <<: *job_definition
  script:
    - test1 project
If you don't want to add except to each job, use only instead of except:
https://docs.gitlab.com/ee/ci/yaml/#onlyexcept-advanced
Below is an example with variables:
only_with_variable:
  script: ls -la
  only:
    variables:
      - $VAR == "1234"
After that, when you schedule a pipeline, you have the option to add variables to it.
In this example, you just need to add the VAR variable with the value 1234.
You can use the following to run the stage only on a scheduled job:
build-app:
  stage: build-app
  only:
    - schedules

GitLab-CI: run job only when a branch is created

I want to set up GitLab to run a job when a branch is created whose name matches some criteria.
This is my current yml, and the job runs when a branch is created that ends in '-rc'.
However, it also runs if I create a tag ending in '-rc'. How can I stop the job from running when a tag is created? (I have tried except - tags.)
stages:
  - release_to_qa

qa:
  stage: release_to_qa
  when: manual
  #except:
  #  - tags
  only:
    - branches
    - /^*-rc$/
  tags:
    - pro1
    - shared
  script:
    - echo "hello world"
You can use only and except together:
job:
  # use regexp
  only:
    - /^issue-.*$/
  # use special keyword
  except:
    - branches
https://docs.gitlab.com/ee/ci/yaml/#only-and-except-simplified

GitLab continuous integration to run jobs in several branches

My .gitlab-ci.yml file contains the following job:
job1:
  script:
    - pwd
  only:
    - master
Using only, I make this dummy job job1 run the command pwd only when a push is made to the master branch. From the docs:
only and except are two parameters that set a refs policy to limit
when jobs are built:
only defines the names of branches and tags for which the job will be built.
Now I would like to run this on multiple branches, so following the docs:
only and except allow the use of regular expressions.
I tried to say:
job1:
  script:
    - pwd
  only:
    - (master|my_test_branch)
But it does not work at all: neither in master nor in my_test_branch. What is wrong with the regular expression?
Why not just use multiple values (more readable, in my opinion):
only:
  - master
  - my_test_branch
I did not find any documentation about it, but apparently regular expressions in .gitlab-ci.yml need to be enclosed in / /. Thus, /(master|my_test_branch)/ works.
All together:
job1:
  script:
    - pwd
  only:
    - /(master|my_test_branch)/
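The slash-delimited pattern is matched as an ordinary (unanchored) regular expression against the ref name; a rough grep -E illustration using the branch names from the question plus one made-up non-matching branch:

```shell
# Rough illustration: which ref names the pattern (master|my_test_branch)
# would match; feature/x is a made-up branch that should not match.
matches=""
for ref in master my_test_branch feature/x; do
  if printf '%s' "$ref" | grep -Eq '(master|my_test_branch)'; then
    matches="$matches$ref "
  fi
done
echo "$matches"
```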
