Can't share global variable value between jobs in gitlab ci yaml file - yaml

I'm trying to build an application using GitLab CI.
The name of the generated file is depending on the time, in this format
DEV_APP_yyyyMMddhhmm
(example: DEV_APP_201810221340, corresponding to the date of today 2018/10/22 13h40).
How can I store this name in a global variable inside the .gitlab-ci.yml file?
Here is my .gitlab-ci.yml file:
image: docker:latest
services:
  - docker:dind

variables:
  DOCKER_DRIVER: overlay
  SPRING_PROFILES_ACTIVE: gitlab-ci
  # TIME: ""
  # BRANCH: ""
  # REC_BUILD_NAME: ""
  TIME: "timex"
  BRANCH: "branchx"
  DEV_BUILD_NAME: "DEV_APP_x"

stages:
  - preparation
  - build
  - package
  - deploy
  - manual_rec_build
  - manual_rec_package

job_preparation:
  stage: preparation
  script:
    - echo ${TIME}
    - export TIME=$(date +%Y%m%d%H%M)
    - "BRANCH=$(echo $CI_BUILD_REF_SLUG | sed 's/[^[:alnum:]]/_/g')"
    - "DEV_BUILD_NAME=DEV_APP_${BRANCH}_${TIME}"
    - echo ${TIME}

maven-build:
  image: maven:3-jdk-8
  stage: build
  script:
    - echo ${TIME}
    - "mvn package -B"
  artifacts:
    paths:
      - target/*.jar
  only:
    - merge-requests
    - /^feature\/sprint.*$/
    - /^DEV_.*$/
  # when: manual

docker-build:
  stage: package
  script:
    - echo ${TIME}
    - docker build -t registry.gitlab.com/mourad.sellam/actuator-simple .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker push registry.gitlab.com/mourad.sellam/actuator-simple
  only:
    - merge-requests
    - /^feature\/sprint.*$/
    - /^DEV_.*$/
  when: manual

k8s-deploy-production:
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo ${TIME}
    - echo "$GOOGLE_KEY" > key.json
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone europe-west1-c
    - gcloud config set project actuator-sample
    - gcloud config set container/use_client_certificate True
    - gcloud container clusters get-credentials actuator-example
    - kubectl delete secret registry.gitlab.com
    - kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com --docker-username=myUserName --docker-password=$REGISTRY_PASSWD --docker-email=myEmail@gmail.com
    - kubectl apply -f deployment.yml --namespace=production
  environment:
    name: production
    url: https://example.production.com
  when: manual

job_manual_rec_build:
  image: maven:3-jdk-8
  stage: manual_rec_build
  script:
    - echo ${TIME}
    - "mvn package -B"
  artifacts:
    paths:
      - target/*.jar
  when: manual
  # allow_failure: false

job_manual_rec_package:
  stage: manual_rec_package
  script:
    - echo ${TIME}
    - echo ${DEV_BUILD_NAME}
    - docker build -t registry.gitlab.com/mourad.sellam/actuator-simple:${DEV_BUILD_NAME} .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker push registry.gitlab.com/mourad.sellam/actuator-simple
  artifacts:
    paths:
      - target/*.jar
  when: on_success
Test 1: when I call
echo ${TIME}
in a later job, it displays "timex" instead of the value exported in job_preparation; the echo fails.
Could you tell me how to store a global variable and set it in each job?

Check if GitLab 13.0 (May 2020) could help in your case:
Inherit environment variables from other jobs
Passing environment variables (or other data) between CI jobs is now possible.
By using the dependencies keyword (or needs keyword for DAG pipelines), a job can inherit variables from other jobs if they are sourced with dotenv report artifacts.
This offers a more graceful approach for updating variables between jobs compared to artifacts or passing files.
See documentation and issue.
You can inherit environment variables from dependent jobs.
This feature makes use of the artifacts:reports:dotenv report feature.
Example with the dependencies keyword:
build:
  stage: build
  script:
    - echo "BUILD_VERSION=hello" >> build.env
  artifacts:
    reports:
      dotenv: build.env

deploy:
  stage: deploy
  script:
    - echo $BUILD_VERSION # => hello
  dependencies:
    - build
Example with the needs keyword:
build:
  stage: build
  script:
    - echo "BUILD_VERSION=hello" >> build.env
  artifacts:
    reports:
      dotenv: build.env

deploy:
  stage: deploy
  script:
    - echo $BUILD_VERSION # => hello
  needs:
    - job: build
      artifacts: true

You can use artifacts for passing data between jobs. Here's an example from Flant for checking a previous pipeline's manual decision:
approve:
  script:
    - mkdir -p .ci_status
    - echo $(date +%s) > .ci_status/approved
  artifacts:
    paths:
      - .ci_status/

NOT approve:
  script:
    - mkdir -p .ci_status
    - echo $(date +%s) > .ci_status/not_approved
  artifacts:
    paths:
      - .ci_status/

deploy to production:
  script:
    - if [[ $(cat .ci_status/not_approved) > $(cat .ci_status/approved) ]]; then echo "Need approve from release engineer!"; exit 1; fi
    - echo "deploy to production!"

There's an open issue 47517, 'Pass variables between jobs', on GitLab CE.
CI/CD often needs to pass information from one job to another and
artifacts can be used for this, although it's a heavy solution with
unintended side effects. Workspaces is another proposal for passing
files between jobs. But sometimes you don't want to pass files at all,
just a small bit of data.
I faced the same issue and worked around it by storing the data in a file, then accessing it in the other jobs.
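A minimal sketch of that file-based workaround, reusing the job names from the question (the file name is illustrative):

```yaml
# Job 1 computes the value once and saves it to a file published as an artifact.
job_preparation:
  stage: preparation
  script:
    - date +%Y%m%d%H%M > time.txt
  artifacts:
    paths:
      - time.txt

# Later jobs receive the artifact automatically and read the value back.
maven-build:
  stage: build
  script:
    - TIME=$(cat time.txt)
    - echo "DEV_APP_${TIME}"
```
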

You kind of can. The way I went about it:
You can send a POST request to the project to save a variable:
export VARIABLE=secret
curl --request POST --header "PRIVATE-TOKEN: $CI_ACCESS_TOKEN" "https://gitlab.example.com/api/v4/projects/$CI_PROJECT_ID/variables/" --form "key=VARIABLE" --form "value=$VARIABLE"
and cleanup after the work/trigger is finished
curl --request DELETE --header "PRIVATE-TOKEN: $CI_ACCESS_TOKEN" "https://gitlab.seznam.net/api/v4/projects/$CI_PROJECT_ID/variables/VARIABLE"
I'm not sure it's supposed to be used this way, but it does the trick: the variable is accessible to all the following jobs (especially useful when you use trigger and a script in that job is not an option).
Just please make sure you run the cleanup job even if previous ones fail...
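A sketch of how those two calls might be wired into a pipeline (the job names, stages, and the $CI_ACCESS_TOKEN variable are assumptions); `when: always` on the cleanup job is one way to make it run even when earlier jobs fail:

```yaml
set-variable:
  stage: prepare
  script:
    # persist the value as a project-level CI/CD variable via the API
    - export VARIABLE=secret
    - curl --request POST --header "PRIVATE-TOKEN: $CI_ACCESS_TOKEN" "https://gitlab.example.com/api/v4/projects/$CI_PROJECT_ID/variables/" --form "key=VARIABLE" --form "value=$VARIABLE"

cleanup:
  stage: cleanup
  when: always   # runs even if a previous job failed
  script:
    - curl --request DELETE --header "PRIVATE-TOKEN: $CI_ACCESS_TOKEN" "https://gitlab.example.com/api/v4/projects/$CI_PROJECT_ID/variables/VARIABLE"
```
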

Related

Setting a GitHub Action environment variable with bash before reusable workflow

I have scoured the forums and couldn't find a solution.
I have a config.yml file which contains a set of key/value pairs. One of those pairs I want to write to GITHUB_ENV to be used in my workflow. But I am running into issues, as the logs say
"reusable workflows should be referenced at the top-level `jobs.*.uses` key, not within steps"
How do I get around this?
name: "Deploy The Kraken"
on:
  push:
    branches:
      - dev
  pull_request:
    branches:
      - dev
  workflow_dispatch:
jobs:
  call-build:
    steps:
      - name: Set env
        run: |
          echo "FEATURE_BRANCH=$(cat config.yml | awk -F: '/^branch:/ { print $2 }' | sed 's/ //g')" >> $GITHUB_ENV
      - name: Test
        run: echo $FEATURE_BRANCH
      - uses: ######/#####/.github/workflows/kraken.yml@$FEATURE_BRANCH
        with:
          environment: #####
          workspace: #####
          contract: #####
          production-ref: #####
I have tried multiple variations of placing the shell command in different parts. But I still can't get it to point to my chosen branch.
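For reference, the error message is pointing at the required shape: a reusable workflow is referenced at the job level, not inside steps. A minimal sketch (the owner/repo path and input name are hypothetical); note also that the ref after `@` in `uses:` must be a literal string, so a branch computed at runtime into `$FEATURE_BRANCH` cannot be interpolated there:

```yaml
jobs:
  call-build:
    # referenced directly under the job, not under steps
    uses: some-owner/some-repo/.github/workflows/kraken.yml@dev  # ref must be a literal
    with:
      environment: dev  # hypothetical input
```
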

Env variables not showing in `printenv`

(Added bash and terminal tags since I'm unsure whether my issue is specific to GitHub Actions or is instead a misunderstanding of how env vars work more generally.)
I'm working on a workflow.yml and in a step "Env substitue in sql script" am trying to set some env vars:
on: [push]
env:
  GAME: "FunGame"
  TRAIN_HORIZON: 7
jobs:
  ssql:
    runs-on: ubuntu-latest
    name: Get data
    steps:
      - name: Checkout cum-rev repo
        uses: actions/checkout@v2 # Defaults to current repo - check out current repo
      - name: Checkout ds-ssql-gh-action
        uses: actions/checkout@v2
        with:
          repository: ourorg/ds-ssql-gh-action
          token: ${{ secrets.cumrev_workflow_token }}
          ref: main
          path: './ds-ssql-gh-action'
      - name: Env substitue in sql script
        run: |
          INSTALL_DATE=$(date -d "`date +%Y%m01` -12 month" +%Y-%m-%d)
          echo "Here is install date $INSTALL_DATE"
          IOS_GAME="${{ env.GAME }}_IOS_PROD"
          ANDROID_GAME="${{ env.GAME }}_ANDROID_PROD"
          envsubst < get-data/training-data.sql
          cat get-data/training-data.sql
          printenv
After pushing this, the job attempts to run. I run printenv at the end of the step, and among the env variables I don't see any of INSTALL_DATE, IOS_GAME or ANDROID_GAME.
Why are those env variables not being set with the lines:
INSTALL_DATE=$(date -d "`date +%Y%m01` -12 month" +%Y-%m-%d)
echo "Here is install date $INSTALL_DATE"
IOS_GAME="${{ env.GAME }}_IOS_PROD"
ANDROID_GAME="${{ env.GAME }}_ANDROID_PROD"
Note that the line echo "Here is install date $INSTALL_DATE" does indeed print out the correct value as expected. But the variable isn't showing when I run printenv?
You have to export the variables you want to see in the environment:
export INSTALL_DATE=$(date -d "`date +%Y%m01` -12 month" +%Y-%m-%d)
...
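A side note, not from the original answer: `export` only makes the value visible to that same step's shell and its child processes (which is what `printenv` and `envsubst` see). If later steps in the job need the variable, GitHub Actions' documented mechanism is appending to the `$GITHUB_ENV` file:

```yaml
steps:
  - name: Set env vars
    run: |
      # visible to child processes of this step only
      export INSTALL_DATE=$(date -d "$(date +%Y%m01) -12 month" +%Y-%m-%d)
      # writing to $GITHUB_ENV persists it into every later step of this job
      echo "INSTALL_DATE=$INSTALL_DATE" >> "$GITHUB_ENV"
  - name: Later step
    run: echo "$INSTALL_DATE"   # still set here
```
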

using for-loop in azure pipeline jobs

I want to use a for-loop that scans the files (value-f1.yaml, values-f2.yaml, ...) in a folder, and each time use the filename as a variable and run the Azure pipeline job to deploy the Helm chart based on that values file. The folder is located in the GitHub repository. So I'm thinking of something like this:
pipeline.yaml
stages:
  - stage: Deploy
    variables:
      azureResourceGroup: ''
      kubernetesCluster: ''
      subdomain: ''
    jobs:
      ${{ each filename in /myfolder/*.yaml }}:
        valueFile: $filename
        - template: Templates/deploy-helmchart.yaml@pipelinetemplates
deploy-helmchart.yaml
jobs:
  - job: Deploy
    pool:
      vmImage: 'ubuntu-latest'
    steps:
      - task: HelmInstaller@1
        displayName: 'Installing Helm'
        inputs:
          helmVersionToInstall: '2.15.1'
        condition: and(succeeded(), startsWith(variables['build.sourceBranch'], 'refs/tags/v'))
      - task: HelmDeploy@0
        displayName: 'Initializing Helm'
        inputs:
          connectionType: 'Azure Resource Manager'
          azureSubscription: $(azureSubscription)
          azureResourceGroup: $(azureResourceGroup)
          kubernetesCluster: $(kubernetesCluster)
          command: 'init'
        condition: and(succeeded(), startsWith(variables['build.sourceBranch'], 'refs/tags/v'))
      - task: PowerShell@2
        displayName: 'Fetching GitTag'
        inputs:
          targetType: 'inline'
          script: |
            # Write your PowerShell commands here.
            Write-Host "Fetching the latest GitTag"
            $gt = git describe --abbrev=0
            Write-Host "##vso[task.setvariable variable=gittag]$gt"
        condition: and(succeeded(), startsWith(variables['build.sourceBranch'], 'refs/tags/v'))
      - task: Bash@3
        displayName: 'Fetching repo-tag'
        inputs:
          targetType: 'inline'
          script: |
            echo GitTag=$(gittag)
            echo BuildID=$(Build.BuildId)
            echo SourceBranchName=$(Build.SourceBranchName)
            echo ClusterName= $(kubernetesCluster)
      - task: HelmDeploy@0
        displayName: 'Upgrading helmchart'
        inputs:
          connectionType: 'Azure Resource Manager'
          azureSubscription: $(azureSubscription)
          azureResourceGroup: $(azureResourceGroup)
          kubernetesCluster: $(kubernetesCluster)
          command: 'upgrade'
          chartType: 'FilePath'
          chartPath: $(chartPath)
          install: true
          releaseName: $(releaseName)
          valueFile: $(valueFile)
          arguments: '--set image.tag=$(gittag) --set subdomain=$(subdomain)'
        condition: and(succeeded(), startsWith(variables['build.sourceBranch'], 'refs/tags/v'))
Another thing: do the jobs get access to the GitHub repo by default, or do I need to do something at the job level?
Besides, how can I use a for-loop in the job for this case?
Any help would be appreciated.
Updated after getting comments from @Leo
Here is a PowerShell task that I added to deploy-helmchart.yaml for fetching the files from a folder in GitHub.
- task: PowerShell@2
  displayName: 'Fetching Files'
  inputs:
    targetType: 'inline'
    script: |
      Write-Host "Fetching values files"
      cd myfolder
      $a=git ls-files
      foreach ($i in $a) {
        Write-Host "##vso[task.setvariable variable=filename]$i"
        Write-Host "printing"$i
      }
Now the question is: how can I run the task HelmDeploy@0 for each file using parameters?
if the jobs can get access to the GitHub repo by default or do I need to do something in the job level?
The answer is yes.
We could add a command line task in the jobs, like job1, to clone the GitHub repository using a GitHub PAT; then we can access those files (value-f1.yaml, values-f2.yaml, ...) in $(Build.SourcesDirectory):
git clone https://<GithubPAT>@github.com/XXXXX/TestProject.git
Besides how can I use for-loop in the job for this case?
You could create a template which will have a set of actions, and pass parameters across during your build, like:
deploy-helmchart.yaml:
parameters:
  param: []
steps:
  - ${{ each filename in parameters.param }}:
    - script: 'echo ${{ filename }}'
pipeline.yaml:
steps:
  - template: deploy-helmchart.yaml
    parameters:
      param: ["filename1","filename2","filename3"]
Check the document Solving the looping problem in Azure DevOps Pipelines for some more details.
A command line snippet to get the latest file name in the folder:
FOR /F "delims=|" %%I IN ('DIR "$(Build.SourcesDirectory)\*.txt*" /B /O:D') DO SET NewestFile=%%I
echo "##vso[task.setvariable variable=NewFileName]%NewestFile%"
Update:
Now the question is how can I run the task HelmDeploy@0 for each file using parameters?
It depends on whether your HelmDeploy task has an option to accept the filename parameter.
As I said before, we could use the following yaml to invoke the template yaml with parameters:
- template: deploy-helmchart.yaml
  parameters:
    param: ["filename1","filename2","filename3"]
But if the task HelmDeploy has no option to accept such a parameter, we cannot run the task HelmDeploy@0 for each file using parameters.
I then checked HelmDeploy@0 and found there is only one option that can accept Helm command parameters:
So the answer to this question depends on whether your file name can be passed as a Helm command parameter; if not, you cannot run the task HelmDeploy@0 for each file using parameters. If yes, you can.
Please check the official document Templates for some more details.
Hope this helps.
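Since HelmDeploy@0 does expose a valueFile input (used in the template above), one possible way to combine it with the template loop is a sketch like the following, assuming the list of values files can be supplied as a template parameter (parameter and file names are illustrative):

```yaml
# deploy-helmchart.yaml - a steps template expanded once per values file
parameters:
  valueFiles: []

steps:
  - ${{ each file in parameters.valueFiles }}:
    - task: HelmDeploy@0
      displayName: 'Upgrading helmchart with ${{ file }}'
      inputs:
        connectionType: 'Azure Resource Manager'
        azureSubscription: $(azureSubscription)
        azureResourceGroup: $(azureResourceGroup)
        kubernetesCluster: $(kubernetesCluster)
        command: 'upgrade'
        chartType: 'FilePath'
        chartPath: $(chartPath)
        install: true
        releaseName: $(releaseName)
        valueFile: ${{ file }}
```

One caveat: template expressions are expanded before the pipeline runs, so the list must be known up front; file names discovered at runtime (e.g. via git ls-files in a script task) cannot feed this loop.
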

How to create a manual trigger for multiple parallel jobs

I'd like to have a blocking manual action that will trigger multiple parallel jobs in the next stage. How can I achieve this? For example:
deploy-int runs on merge
The pipeline waits on a single manual trigger
deploy-prd-1 and deploy-prd-2 run in parallel
Here's what I've tried:
1
stages:
  - deploy-dev
  - deploy-prd-1
  - deploy-prd-2

deploy-dev:
  stage: deploy-dev
  script:
    - echo deploy-dev

deploy-prd-1:
  when: manual
  allow_failure: false
  stage: deploy-prd-1
  script:
    - echo deploy-prd-1

deploy-prd-2:
  stage: deploy-prd-2
  script:
    - echo deploy-prd-2
This achieves 1 and 2, but fails on 3, as deploy-prd-1 and deploy-prd-2 run in series, not in parallel.
2
stages:
  - deploy-dev
  - deploy-prd

deploy-dev:
  stage: deploy-dev
  script:
    - echo deploy-dev

deploy-prd-1:
  when: manual
  allow_failure: false
  stage: deploy-prd
  script:
    - echo deploy-prd-1

deploy-prd-2:
  stage: deploy-prd
  script:
    - echo deploy-prd-2
This fails on 2, as deploy-prd-2 runs automatically without waiting for the manual trigger.
3
stages:
  - deploy-dev
  - deploy-prd

deploy-dev:
  stage: deploy-dev
  script:
    - echo deploy-dev

deploy-prd-1:
  when: manual
  allow_failure: false
  stage: deploy-prd
  script:
    - echo deploy-prd-1

deploy-prd-2:
  when: manual
  allow_failure: false
  stage: deploy-prd
  script:
    - echo deploy-prd-2
This isn't ideal as it requires manually triggering each parallel job separately.
4
stages:
  - deploy-dev
  - approve
  - deploy-prd

deploy-dev:
  stage: deploy-dev
  script:
    - echo deploy-dev

approve:
  when: manual
  allow_failure: false

deploy-prd-1:
  stage: deploy-prd
  script:
    - echo deploy-prd-1

deploy-prd-2:
  stage: deploy-prd
  script:
    - echo deploy-prd-2
This is my attempt to set up a manual "gate", but GitLab rejects the file with Error: jobs:approve script can't be blank.
I can set a no-op like script: [":"], but then GitLab will spin up a container to do nothing, which wastes time and resources.
I'm assuming the reason you want a manual gate is that you want to do some testing before you publish your app.
Instead of doing everything in a single go, a change in mindset might work better.
Use the GitLab CI rules keyword to make each of the two groups of jobs run on its own schedule (the schedules can have their times turned off, so they become a manual trigger requiring you to push a button).
Then break your CI down: build and deploy to your dev environment first, do your tests, and when you are happy, trigger the deploy-to-prod jobs.
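One way to sketch that rules-based split (the `$CI_PIPELINE_SOURCE` checks are an illustrative guess at the "push a button" trigger, not from the answer): the dev job runs on ordinary pushes, while both prod jobs run, in parallel, only in a pipeline someone starts by hand.

```yaml
deploy-dev:
  script:
    - echo deploy-dev
  rules:
    # runs automatically on ordinary pushes
    - if: '$CI_PIPELINE_SOURCE == "push"'

deploy-prd-1:
  script:
    - echo deploy-prd-1
  rules:
    # runs only when a pipeline is started manually from the UI
    - if: '$CI_PIPELINE_SOURCE == "web"'

deploy-prd-2:
  script:
    - echo deploy-prd-2
  rules:
    - if: '$CI_PIPELINE_SOURCE == "web"'
```
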

Is it possible to change Gitlab CI variable value after pipeline has started?

I'm trying to create a dynamic GitLab pipeline based on its own execution progress. For example, I have 2 environments, and deployment to each of them will be enabled/disabled based on the execution of the script in before_script. It doesn't work for me; it seems that a pipeline variable's value can't be changed after the pipeline has started. Any suggestions? (Please see my gitlab-ci.yml below.)
variables:
  RELEASE: limited

stages:
  - build
  - deploy

before_script:
  - export RELEASE=${check-release-type-dynamically.sh}

build1:
  stage: build
  script:
    - echo "Do your build here"

## DEPLOYMENT
deploy_production_ga:
  stage: update_prod_env
  script:
    - echo "deploy environment for all customers"
  allow_failure: false
  only:
    - branches
  only:
    variables:
      - $RELEASE == "general_availability"

deploy_production_limited:
  stage: update_prod_env
  script:
    - echo "deploy environment for limited customers"
  allow_failure: false
  only:
    - branches
  only:
    variables:
      - $RELEASE == "limited"
Variables can't be evaluated in the definition. If you really want to use a shell script to decide what gets deployed, you could use a bash if clause:
stages:
  - build
  - update_prod_env

build1:
  stage: build
  script:
    - echo "Do your build here"

deploy_production_ga:
  stage: update_prod_env
  script:
    - if [ "$(./check-release-type-dynamically.sh)" == "general_availability" ]; then
        echo "deploy environment for all customers";
      fi
  only:
    - branches

deploy_production_limited:
  stage: update_prod_env
  script:
    - if [ "$(./check-release-type-dynamically.sh)" == "limited" ]; then
        echo "deploy environment for limited customers";
      fi
  only:
    - branches
However, this is really bad design. Both jobs will be executed on every commit, but only one will do anything. It is better to distinguish them by branch, and only commit to the branch you want to deploy to:
stages:
  - build
  - update_prod_env

build1:
  stage: build
  script:
    - echo "Do your build here"

deploy_production_ga:
  stage: update_prod_env
  script:
    - echo "deploy environment for all customers"
  only:
    - branches-general_availability

deploy_production_limited:
  stage: update_prod_env
  script:
    - echo "deploy environment for limited customers"
  only:
    - branches-limited
This way, only the deploy job you actually want gets executed.
A couple of other things I noticed:
export RELEASE=${check-release-type-dynamically.sh} uses ${ } where it needs $( ) for command substitution. Also, if the shell script is in the same directory, you must prepend ./. It should look like:
export RELEASE=$(./check-release-type-dynamically.sh)
allow_failure: false is the default in gitlab-ci and not necessary.
variables:
  - $RELEASE == "general_availability"
This is the wrong syntax for variables; use:
variables:
  VARIABLE_NAME: "Value of Variable"
Have a look at https://docs.gitlab.com/ee/ci/yaml/
