GitLab CI: Export variable in before_script build job - continuous-integration

I am trying to implement conditional versioning depending on whether the CI script runs for a tagged ref or not.
However, the version variable is not resolved; instead it is printed as a literal string.
The relevant jobs of the GitLab CI script:
# build template
.build_base_template: &build_base_template
  image: registry.gitlab.com/xxxxxxx/npm:latest
  tags:
    - docker
  stage: LintBuildTest
  script:
    - export CUR_VERSION='$(cat ./version.txt)$BUILD_VERSION_SUFFIX'
    - npm ci
    - npm run build
  artifacts:
    expire_in: 1 week
    paths:
      - dist/
# default build job
build:
  before_script:
    - export BUILD_VERSION_SUFFIX='-$CI_COMMIT_REF_SLUG-SNAPSHOT-$CI_COMMIT_SHORT_SHA'
  <<: *build_base_template
  except:
    refs:
      - tags
  only:
    variables:
      - $FEATURE_NAME == null

# specific build job for tagged versions
build_tag:
  before_script:
    - export BUILD_VERSION_SUFFIX=''
  <<: *build_base_template
  only:
    refs:
      - tags

Variables which are exported within before_script are visible within script.
before:
  before_script:
    - export HELLOWELT="hi martin"
  script:
    - echo $HELLOWELT # prints "hi martin"

In general you can't export variables from a child process to its parent.
As a workaround, you can write the variable's value to a text file and read it back where you need it. It may also be possible to pass the variable via a YAML template.
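A minimal sketch of the text-file workaround (the job names and the build.env file name are illustrative, not from the original setup): the value is written to a dotenv report in one job, and GitLab then injects it as a variable into jobs of later stages.

build:
  stage: build
  script:
    # write the computed version to a dotenv file instead of exporting it
    - echo "CUR_VERSION=$(cat ./version.txt)$BUILD_VERSION_SUFFIX" > build.env
  artifacts:
    reports:
      dotenv: build.env

deploy:
  stage: deploy
  script:
    - echo "$CUR_VERSION"   # populated from the build job's dotenv report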

Related

Use different Azure Subscription ID per environment in a Gitlab CI pipeline

We have a GitLab pipeline which I am trying to configure to use a different Azure subscription per environment, without much luck.
Basically, what I need to be able to do is set the environment variables ARM_CLIENT_ID, ARM_CLIENT_SECRET, ARM_SUBSCRIPTION_ID, ARM_TENANT_ID to different values depending on the environment being built.
In the CI/CD settings I have variables set for development_ARM_SUBSCRIPTION_ID, test_ARM_SUBSCRIPTION_ID etc., the idea being that I assign the values from these variables to the ARM_CLIENT_ID, ARM_CLIENT_SECRET, ARM_SUBSCRIPTION_ID, ARM_TENANT_ID variables in the pipeline.
This is what my pipeline looks like:
stages:
  - infrastructure-validate
  - infrastructure-deploy
  - infrastructure-destroy

variables:
  DESTROY_INFRA: "false"
  development_ARM_SUBSCRIPTION_ID: $development_ARM_SUBSCRIPTION_ID
  development_ARM_TENANT_ID: $development_ARM_TENANT_ID
  development_ARM_CLIENT_ID: $development_ARM_CLIENT_ID
  development_ARM_CLIENT_SECRET: $development_ARM_CLIENT_SECRET

image:
  name: hashicorp/terraform:light
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

before_script:
  - rm -rf .terraform
  - terraform --version
  - terraform init

.terraform-validate:
  script:
    - export ARM_SUB_ID=${CI_ENVIRONMENT_NAME}_ARM_SUBSCRIPTION_ID
    - export ARM_SUBSCRIPTION_ID=${!ARM_SUB_ID}
    - export ARM_CLI_ID=${CI_ENVIRONMENT_NAME}_ARM_CLIENT_ID
    - export ARM_CLIENT_ID=${!ARM_CLI_ID}
    - export ARM_TEN=${CI_ENVIRONMENT_NAME}_ARM_TENANT_ID
    - export ARM_TENANT_ID=${!ARM_TEN}
    - export ARM_CLI_SECRET=${CI_ENVIRONMENT_NAME}_ARM_CLIENT_SECRET
    - export ARM_CLIENT_SECRET=${!ARM_CLI_SECRET}
    - echo $development_ARM_SUBSCRIPTION_ID
    - echo ${ARM_SUBSCRIPTION_ID}
    - terraform workspace select ${CI_ENVIRONMENT_NAME}
    - terraform validate
    - terraform plan -out "terraform-plan-file"
  only:
    variables:
      - $DESTROY_INFRA != "true"

development-validate-and-plan-terraform:
  stage: infrastructure-validate
  environment: development
  extends: .terraform-validate
  only:
    refs:
      - main
      - develop
  artifacts:
    paths:
      - terraform-plan-file
The variable substitution works fine when I test locally, but in the pipeline it fails with:

$ export ARM_SUBSCRIPTION_ID=${!ARM_SUB_ID}
/bin/sh: eval: line 139: syntax error: bad substitution
I think the problem is that the terraform image does not have bash available, only sh, but I can't for the life of me work out how to do the same substitution in sh. If anyone has any suggestions, or knows a better way of using different Azure subscriptions for different environments in the pipeline, I would really appreciate it.
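For reference, a hedged sketch of how the same indirection could be emulated in plain POSIX sh with eval (the variable naming follows the convention from the question); the answer below sidesteps indirection entirely:

# Build the name of the per-environment variable, then use eval to dereference it.
# Works in POSIX sh (e.g. BusyBox ash), where bash's ${!var} is not available.
ARM_SUB_ID="${CI_ENVIRONMENT_NAME}_ARM_SUBSCRIPTION_ID"
eval "export ARM_SUBSCRIPTION_ID=\"\$${ARM_SUB_ID}\""
echo "${ARM_SUBSCRIPTION_ID}"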
I would define different jobs for each environment that extend your main .terraform-validate job template, and define the environment variables on that job. This way you don't have to do the indirect substitution that seems to be giving you trouble. That would look something like this:
.terraform-validate:
  stage: infrastructure-validate
  script:
    - echo ${ARM_SUBSCRIPTION_ID}
    - terraform workspace select ${CI_ENVIRONMENT_NAME}
    - terraform validate
    - terraform plan -out "terraform-plan-file"
  only:
    variables:
      - $DESTROY_INFRA != "true"
  artifacts:
    paths:
      - terraform-plan-file

development-validate-and-plan-terraform:
  extends: .terraform-validate
  environment: development
  only:
    refs:
      - main
      - develop
  variables:
    ARM_SUBSCRIPTION_ID: $development_ARM_SUBSCRIPTION_ID
    ARM_TENANT_ID: $development_ARM_TENANT_ID
    ARM_CLIENT_ID: $development_ARM_CLIENT_ID
    ARM_CLIENT_SECRET: $development_ARM_CLIENT_SECRET

production-validate-and-plan-terraform:
  extends: .terraform-validate
  environment: production
  only:
    refs:
      - main
  variables:
    ARM_SUBSCRIPTION_ID: $production_ARM_SUBSCRIPTION_ID
    ARM_TENANT_ID: $production_ARM_TENANT_ID
    ARM_CLIENT_ID: $production_ARM_CLIENT_ID
    ARM_CLIENT_SECRET: $production_ARM_CLIENT_SECRET
Then you define all the development_* and production_* vars in the GitLab CI/CD settings.
Note that I also moved the stage: infrastructure-validate and artifacts: ... directives to the template since I'd imagine they're the same for all environments.

GitLab CI/CD: Trigger pipeline only when a specific extension was added to a folder AND Merge Request

On my GitLab repo I have to trigger the pipeline only when a specific folder has changes AND when it's a Merge Request (both conditions). Specifically for the .zip file extension, i.e. add a new zip file to this folder, create a Merge Request, then run the pipeline.
This is my initial pipeline YAML code:
trigger-ci-zip-file-only:
  stage: prebuild
  extends:
    - .prebuild
    - .preprod-tags
  variables:
    PROJECT_FOLDER: "my-specific-folder"
  before_script:
    - echo "Job currently run in $CI_JOB_STAGE"
    - touch ${CI_PROJECT_DIR}/$PROJECT_FOLDER/prebuild.env
    - cd $PROJECT_FOLDER
  only:
    refs:
      - merge_requests
    changes:
      - ${PROJECT_FOLDER}/*.zip
  artifacts:
    reports:
      dotenv: ${CI_PROJECT_DIR}/${PROJECT_FOLDER}/prebuild.env
  allow_failure: false
As you can see, my pipeline should be triggered only when there is a change to .zip files in the specific folder, and only on an MR. But in this state, the pipeline always runs when an MR is created or when I push to an existing MR, even if there are no changes or additions in the specific folder.
I also tried to change the pipeline YAML code like this:
only:
  refs:
    - merge_requests
  changes:
    - ${PROJECT_FOLDER}/**/*.zip
But the pipeline still always runs.
Also I tried this:
trigger-ci-zip-file-only:
  stage: prebuild
  extends:
    - .prebuild
    - .preprod-tags
  variables:
    PROJECT_FOLDER: "my-specific-folder"
  before_script:
    - echo "Job currently run in $CI_JOB_STAGE"
    - touch ${CI_PROJECT_DIR}/$PROJECT_FOLDER/prebuild.env
    - cd $PROJECT_FOLDER
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - changes:
        - ${PROJECT_FOLDER}/**/*.zip
      when: always
  artifacts:
    reports:
      dotenv: ${CI_PROJECT_DIR}/${PROJECT_FOLDER}/prebuild.env
  allow_failure: false
But the pipeline still always runs.
How can I make sure the pipeline only runs on a Merge Request AND only when a .zip file was added to the specific folder?
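A minimal sketch of one way to express both conditions in a single rule, so that the if and changes clauses are ANDed rather than treated as alternative rules (the folder path is written literally here, since variable expansion inside changes patterns depends on the GitLab version):

trigger-ci-zip-file-only:
  stage: prebuild
  rules:
    # both clauses in the SAME rule entry: MR pipeline AND a .zip change in the folder
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      changes:
        - my-specific-folder/**/*.zip
  script:
    - echo "Runs only for merge requests that touch a .zip under my-specific-folder"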

GitLab CI: run when commit message matches the regex

I am trying to only trigger the pipeline when the commit message contains a conditional phrase. I know this has been asked a lot of times and there are helpful answers available. I have also checked the GitLab CI documentation, which also provides the right ways to do it.
Still, the stage is built whether the required phrase is in the commit message or not. Here is the .yml code:
before_script:
  - export LC_ALL=en_US.UTF-8
  - export LANG=en_US.UTF-8
  - export BUILD_TIME=$(date '+%Y-%m-%d %H:%M:%S')
  - echo $branch

stages:
  - build

build_job:
  stage: build
  only:
    variables:
      - $branch
      - $CI_COMMIT_MESSAGE =~ /\[ci build]/
  script:
    - bundle fastlane
    - fastlane build
Does anyone have any idea what is wrong with it?
Maybe you can remove the variable $branch and use only: refs.
Here is an example:
before_script:
  - export LC_ALL=en_US.UTF-8
  - export LANG=en_US.UTF-8
  - export BUILD_TIME=$(date '+%Y-%m-%d %H:%M:%S')

stages:
  - build

build_job:
  stage: build
  script:
    - bundle fastlane
    - fastlane build
  only:
    variables:
      - $CI_COMMIT_MESSAGE =~ /\[ci build]/
    refs:
      - /^develop*.*$/
You can use a regex in refs. In my example this means: when the branch name contains develop and the commit message contains [ci build], then run the stage.
You can modify that regex.
That method is used in my production setup.
Consider the following solution:
before_script:
  - export LC_ALL=en_US.UTF-8
  - export LANG=en_US.UTF-8
  - export BUILD_TIME=$(date '+%Y-%m-%d %H:%M:%S')

stages:
  - build

build_job:
  stage: build
  rules:
    - if: $CI_COMMIT_MESSAGE =~ /\[ci build]/
  script:
    - bundle fastlane
    - fastlane build

Variable substitution (or overriding) when extending jobs from GitLab templates using include

Using GitLab CI, I have a repo where all my templates are created.
For example, I have a sonar scanner job template named .sonar-scanner.yml:
sonar-analysis:
  stage: quality
  image:
    name: sonar-scanner-ci:latest
    entrypoint: [""]
  script:
    - sonar-scanner
      -D"sonar.projectKey=${SONAR_PROJECT_NAME}"
      -D"sonar.login=${SONAR_LOGIN}"
      -D"sonar.host.url=${SONAR_SERVER}"
      -D"sonar.projectVersion=${CI_COMMIT_SHORT_SHA}"
      -D"sonar.projectBaseDir=${CI_PROJECT_DIR}"
I have included this template as a project like this in the main GitLab CI file:
include:
  - project: 'organization/group/ci-template'
    ref: master
    file: '.sonar-scanner.yml'
So, as you can see, I have a repo named ci-templates where all my templates are created, and in another repo I include and extend these templates.
Finally, when a new merge request is created, my sonar job runs from another file in my project, test/quality.yml:
sonar:
  stage: quality
  extends:
    - sonar-analysis
  allow_failure: true
  only:
    refs:
      - merge_requests
All is working well except the substitution (or overriding) of the environment variables from my template. I have many Sonar servers and project names, and I would like to know how to override the variables SONAR_SERVER and SONAR_PROJECT_NAME when I extend the job from a template.
In my main .gitlab-ci.yml file I have a variables section, and when I override these variables there, it works.
But that is not really what I want. With many stages and many microservices, it should be possible to reuse the same extended job in different ways. What I really want is to override these variables directly in the file test/quality.yml.
This, for example, does not work:
sonar:
  stage: quality
  extends:
    - sonar-analysis
  variables:
    SONAR_PROJECT_NAME: foo
    SONAR_SERVER: bar
  allow_failure: true
  only:
    refs:
      - merge_requests
This does not work either:
variables:
  SONAR_PROJECT_NAME: foo
  SONAR_SERVER: bar

sonar:
  stage: quality
  extends:
    - sonar-analysis
  allow_failure: true
  only:
    refs:
      - merge_requests
What is the best way to make this work?
Since this question was asked in February 2020, a new MR, Use non-predefined variables inside CI include blocks, has been merged into GitLab 14.2, which resolves the issue for the overridden job.
The project that does the include can redefine the variables when extending a job:
include:
  - project: 'organization/group/ci-template'
    ref: master
    file: '.sonar-scanner.yml'

sonar:
  stage: quality
  extends:
    - sonar-analysis
  variables:
    SONAR_PROJECT_NAME: foo
    SONAR_SERVER: bar
  allow_failure: true
But in this case you probably want the job in the template to start with a dot .sonar-analysis instead of sonar-analysis to not create a real sonar-analysis job in the template (see hidden jobs).
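A minimal sketch of that variant, reusing only names that already appear above, with the template job hidden behind a leading dot so it never runs on its own:

# .sonar-scanner.yml in the ci-template project (hidden job)
.sonar-analysis:
  stage: quality
  image:
    name: sonar-scanner-ci:latest
    entrypoint: [""]
  script:
    - sonar-scanner
      -D"sonar.projectKey=${SONAR_PROJECT_NAME}"
      -D"sonar.host.url=${SONAR_SERVER}"

# in the including project
sonar:
  extends: .sonar-analysis
  variables:
    SONAR_PROJECT_NAME: foo
    SONAR_SERVER: bar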
Or you can also directly set the variable values (redefining them) on an existing job in the project that does the include:
include:
  - project: 'organization/group/ci-template'
    ref: master
    file: '.sonar-scanner.yml'

sonar-analysis:
  variables:
    SONAR_PROJECT_NAME: foo
    SONAR_SERVER: bar
I verified this with a test project that includes a template from a peer test project and, when it runs, results in two jobs. This is the job output for the overriding job:
$ echo sonar.projectKey=${SONAR_PROJECT_NAME}
sonar.projectKey=foo
$ echo sonar.login=${SONAR_LOGIN}
sonar.login=bob
$ echo sonar.host.url=${SONAR_SERVER}
sonar.host.url=bar

CircleCI - different value for an environment variable according to the branch

I'm trying to set different values for environment variables on CircleCI according to the current $CIRCLE_BRANCH.
I tried setting two different values in the CircleCI settings and exporting them accordingly in the deployment phase, but that doesn't work:
deployment:
  release:
    branch: master
    commands:
      ...
      - export API_URL=$RELEASE_API_URL; npm run build
      ...
  staging:
    branch: develop
    commands:
      ...
      - export API_URL=$STAGING_API_URL; npm run build
      ...
How could I achieve that?
Thanks in advance.
The question is almost 2 years old now, but recently I was looking for a similar solution and I found one.
It relies on CircleCI's feature called Contexts (https://circleci.com/docs/2.0/contexts/).
Thanks to Contexts, you can create multiple sets of environment variables which are available within the entire organisation. Then you can dynamically load one of the sets depending on the workflow's filters property.
Let me demonstrate it with the following example:
Imagine you have two branches and you want each of them to be deployed to a different server. What you have to do is:
create two contexts (e.g. prod-ctx and dev-ctx) and define a SERVER_URL environment variable in each of them. You need to log into the CircleCI dashboard and go to "Settings" -> "Contexts".
in your .circleci/config.yml, define a job template and call it deploy:
deploy: &deploy
  steps:
    - ...
define workflows:
workflows:
  version: 2
  deploy:
    jobs:
      - deploy-dev:
          context: dev-ctx
          filters:
            branches:
              only:
                - develop
      - deploy-prod:
          context: prod-ctx
          filters:
            branches:
              only:
                - master
finally, define two jobs, deploy-prod and deploy-dev, which use the deploy template:
jobs:
  deploy-dev:
    <<: *deploy
  deploy-prod:
    <<: *deploy
The above steps create two jobs and run them depending on the filters condition. Additionally, each job gets a different set of environment variables, while the deployment logic stays the same and is defined once. Thanks to this, we achieve dynamic environment variable values for different branches.
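For illustration only, a hedged sketch of what the deploy template's steps might contain; the docker image and the deploy command are assumptions, and SERVER_URL comes from whichever context the workflow attaches to the job:

deploy: &deploy
  docker:
    - image: cimg/base:stable   # assumed image, not from the original answer
  steps:
    - checkout
    - run:
        name: Deploy using the context-provided SERVER_URL
        command: |
          echo "Deploying branch ${CIRCLE_BRANCH} to ${SERVER_URL}"
          ./deploy.sh "${SERVER_URL}"   # hypothetical deploy script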
Use a bash script
In my projects, I achieve that by using a bash script.
For example, this is my circle.yml:
machine:
  node:
    version: 6.9.5

dependencies:
  override:
    - yarn install

compile:
  override:
    - chmod -x compile.sh
    - bash ./compile.sh
And this is my compile.sh
#!/bin/bash
if [ "${CIRCLE_BRANCH}" == "development" ]
then
export NODE_ENV=development
export MONGODB_URI=${DEVELOPMENT_DB}
npm run build
elif [ "${CIRCLE_BRANCH}" == "staging" ]
then
export NODE_ENV=staging
export MONGODB_URI=${STAGING_DB}
npm run build
elif [ "${CIRCLE_BRANCH}" == "master" ]
then
export NODE_ENV=production
export MONGODB_URI=${PRODUCTION_DB}
npm run build
else
export NODE_ENV=development
export MONGODB_URI=${DEVELOPMENT_DB}
npm run build
fi
echo "Sucessfull build for environment: ${NODE_ENV}"
exit 0
