A bridge job whose trigger's branch property uses a variable always fails with the reason: downstream pipeline can not be created.
Steps to reproduce
Set up a downstream pipeline with a trigger property as you would normally.
Add a branch property to the trigger property. Write the name of an existing branch on the downstream repository, like master/main or the name of a feature branch.
Run the pipeline and observe that the downstream pipeline is successfully created.
Now change the branch property to use a variable instead, like branch: $CI_TARGET_BRANCH.
Manually run the CI pipeline, setting the variable through the GitLab GUI.
The job will instantly fail with reason: downstream pipeline can not be created.
Code example
The goal is to create a GitLab CI config that runs the pipeline of a specified downstream branch. The bug occurs when attempting to do it with a variable.
This works, creating a downstream pipeline like normal. But the branch name is hardcoded:
stages:
  - deploy

deploy:
  variables:
    environment: dev
  stage: deploy
  trigger:
    project: group/project
    branch: foo
    strategy: depend
This does not work; although TARGET_BRANCH is set successfully, the job fails because the downstream pipeline can not be created:
stages:
  - removeme
  - deploy

before_script:
  - if [ -z "$TARGET_BRANCH" ]; then TARGET_BRANCH="main"; fi
  - echo $TARGET_BRANCH

test_variable:
  stage: removeme
  script:
    - echo $TARGET_BRANCH

deploy:
  variables:
    environment: dev
  stage: deploy
  trigger:
    project: group/project
    branch: $TARGET_BRANCH
    strategy: depend
If you know what I'm doing wrong, or you have something that does work with variable expansion of the branch property, please share it (along with your GitLab version). Alternate solutions are also welcome, but this one seems like it should work.
GitLab Version on which bug occurs
Self-hosted GitLab Community Edition 12.10.7
What is the current bug behavior?
The job always fails with the reason: downstream pipeline can not be created.
What is the expected correct behavior?
The branch property should be set to the value of the variable and the downstream pipeline should be created as normal, just as if you had hardcoded the name of the branch.
More details
The ability to use variable expansion in the trigger branch property was added in v12.4, and it's explicitly mentioned in the docs.
I searched other .gitlab-ci.yml / GitLab config files. Every one that attempted to use variable expansion in the branch property had it commented out, noting that it was bugged for an unknown reason.
I haven't been able to find a repository in which someone claimed to have a working variable expansion for the branch property of the trigger property.
Unfortunately, the alternate solutions are either (a) hardcoding every downstream branch name into the GitLab CI config of the upstream project, or (b) not being able to test changes to the downstream GitLab CI config without first committing them to master/main, or having to use only/except.
TL;DR: How to use the value of a variable for the branch property of a bridge job? My current solution makes it so the job fails and the downstream pipeline isn't created.
This is 'works as designed', and GitLab will improve it in upcoming releases.
A trigger job is pretty weak because it is not a full job that runs on a runner, so most of the trigger configuration needs to be hardcoded.
I use direct API calls to trigger downstream pipelines, passing CI_JOB_TOKEN, which links the upstream job to the downstream pipeline just as trigger does.
API calls give you full control:
curl -X POST \
  -s \
  -F token=${CI_JOB_TOKEN} \
  -F "ref=${REF_NAME}" \
  -F "variables[STAGE]=${STAGE}" \
  "${CI_SERVER_URL}/api/v4/projects/${CI_PROJECT_ID}/trigger/pipeline"
Note that this will not wait for or monitor the downstream pipeline, so you will need to code for that yourself if you need to wait for it to finish.
Moreover, CI_JOB_TOKEN cannot be used to get the status of the downstream pipeline, so you will need another token for that.
- |
  DOWNSTREAM_RESULTS=$( curl --silent -X POST \
    -F token=${CI_JOB_TOKEN} \
    -F "ref=${DOWNSTREAM_PROJECT_REF}" \
    -F "variables[STAGE]=${STAGE}" \
    -F "variables[SLS_PACKAGE_PATH]=.serverless-${STAGE}" \
    -F "variables[INVOKE_SLS_TESTS]=false" \
    -F "variables[UPSTREAM_PROJECT_REF]=${CI_COMMIT_REF_NAME}" \
    -F "variables[INSTALL_SLS_PLUGINS]=${INSTALL_SLS_PLUGINS}" \
    -F "variables[PROJECT_ID]=${CI_PROJECT_ID}" \
    -F "variables[PROJECT_JOB_NAME]=${PROJECT_JOB_NAME}" \
    -F "variables[PROJECT_JOB_ID]=${PROJECT_JOB_ID}" \
    "${CI_SERVER_URL}/api/v4/projects/${DOWNSTREAM_PROJECT_ID}/trigger/pipeline" )
  echo ${DOWNSTREAM_RESULTS} | jq .
  DOWNSTREAM_PIPELINE_ID=$( echo ${DOWNSTREAM_RESULTS} | jq -r .id )
  echo "Monitoring Downstream pipeline ${DOWNSTREAM_PIPELINE_ID} status..."
  DOWNSTREAM_STATUS='running'
  COUNT=0
  PIPELINE_API_URL="${CI_SERVER_URL}/api/v4/projects/${DOWNSTREAM_PROJECT_ID}/pipelines/${DOWNSTREAM_PIPELINE_ID}"
  echo "Pipeline api endpoint => ${PIPELINE_API_URL}"
  while [ ${DOWNSTREAM_STATUS} == "running" ]
  do
    if [ $COUNT -eq 0 ]
    then
      echo "Starting loop"
    fi
    if [ ${COUNT} -ge 350 ]
    then
      echo 'TIMEOUT!'
      DOWNSTREAM_STATUS="TIMEOUT"
      break
    elif [ $(( ${COUNT} % 60 )) -eq 0 ]
    then
      echo "Downstream pipeline status => ${DOWNSTREAM_STATUS}"
      echo "Count => ${COUNT}"
      sleep 10
    else
      sleep 10
    fi
    DOWNSTREAM_CALL=$( curl --silent --header "PRIVATE-TOKEN: ${GITLAB_TOKEN}" ${PIPELINE_API_URL} )
    if [ $COUNT -eq 0 ]
    then
      echo ${DOWNSTREAM_CALL} | jq .
    fi
    DOWNSTREAM_STATUS=$( echo ${DOWNSTREAM_CALL} | jq -r .status )
    COUNT=$(( ${COUNT} + 1 ))
  done
  # pipeline status is running, failed, success, manual
  echo "PIPELINE STATUS => ${DOWNSTREAM_STATUS}"
  if [ ${DOWNSTREAM_STATUS} != "success" ]
  then
    exit 2
  fi
Related
Currently I have this line in my .gitlab-ci.yml file:
if (( $coverage < $MIN_COVERAGE )) ; then echo "$coverage% of code coverage below threshold of $MIN_COVERAGE%" && exit 1 ; else exit 0 ; fi
$coverage is the test coverage of the code, determined with pytest-cov
$MIN_COVERAGE is a specified minimum level of test coverage which $coverage shouldn't drop below
Currently, this causes the pipeline to fail if, for instance, coverage is 70% and min_coverage is 80%. A message is also printed to the terminal: "$coverage% of code coverage below threshold of $MIN_COVERAGE%"
However, this message is only displayed in the log of the GitLab job, so if someone wanted to see why and by how much their pipeline failed, they would need to open the job and look at the output.
Instead of having this echoed only to the job log, is there a way to get this message to show up somewhere in the GitLab UI?
Here's how to create a new Merge Request Note/Comment using the GitLab API.
script:
  # Project -> Settings -> Access Tokens: create a token with the api scope.
  # Project -> Settings -> CI/CD -> Variables: store it as CI_API_TOKEN.
  # GET /merge_requests?scope=all&state=opened&source_branch=:branch_name
  - |
    merge_request_iid=$( \
      curl --request GET \
        --header "PRIVATE-TOKEN: ${CI_API_TOKEN}" \
        "${CI_API_V4_URL}/merge_requests?scope=all&state=opened&source_branch=${CI_COMMIT_REF_NAME}" | \
      jq .[0].iid \
    )
  # POST /projects/:id/merge_requests/:iid/notes
  - json_data='{"body":"Your message, here"}'
  - |
    echo $json_data |
    curl --request POST \
      --header "PRIVATE-TOKEN: ${CI_API_TOKEN}" \
      --header "Content-Type: application/json" \
      --data @- \
      "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/merge_requests/${merge_request_iid}/notes"
If you have a GitLab Premium subscription or higher, you can use metrics reports to expose any metric, including coverage percentage, in the MR UI.
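For illustration, a rough sketch of what such a metrics report could look like, assuming the $coverage value from the question is available in the job; the job name, metric name, and metrics.txt filename are illustrative, not from the original answer:

coverage-metrics:
  stage: test
  script:
    # Write the coverage value in OpenMetrics text format.
    - echo "code_coverage_percent $coverage" > metrics.txt
  artifacts:
    reports:
      metrics: metrics.txt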
In all tiers of GitLab, coverage visualization is also available, but it's unclear to me if this displays the overall coverage percentage.
Alternatively, you can use the API to add comments to the merge request (you can get the MR ID from predefined variables in the job). However, you will need to supply an API token to the CI job -- you cannot use the builtin job token to add comments.
In addition, you can use the GitLab CLI tool (glab) in your CI pipeline:
comment-mr:
  image: registry.gitlab.com/gitlab-org/cli:latest
  variables:
    GIT_STRATEGY: none
    GITLAB_TOKEN: $MR_AUTOMATION_TOKEN # Project -> Settings -> Access Tokens (api, read_api scopes)
  script:
    - if [ -z "$CI_OPEN_MERGE_REQUESTS" ]; then echo "No opened MR"; exit 0; fi
    - glab mr --repo "$CI_PROJECT_PATH" comment $(echo "$CI_OPEN_MERGE_REQUESTS" | cut -d '!' -f2) --unique=true
      --message "🐋 Branch-based docker image - \`$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG-branch\`"
I'm trying to create an output to use later in the job.
However, for some reason, the BRANCH env variable, which I'm setting from GITHUB_REF_NAME, is an empty string, even though according to the docs GITHUB_REF_NAME should be the branch.
Using the variable directly produces the same result.
- name: Set Terraform Environment Variable
  id: set_tf_env
  env:
    BRANCH: ${{ env.GITHUB_REF_NAME }}
  run: |
    if [ "$BRANCH" == "dev" ]; then
      echo "::set-output name=TF_ENV::dev"
    elif [ "$BRANCH" == "prod" ]; then
      echo "::set-output name=TF_ENV::prod"
    else
      echo "Branch has no environment"
      exit 1
    fi
So after a bit more research, and thanks to the comments, I discovered why it wasn't working.
It was because I was triggering the GitHub Action from a pull request, something I failed to mention.
So what I ended up using was:
github.event.pull_request.head.ref
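For illustration, a minimal sketch of how that context value slots into the step from the question; the step name and BRANCH variable are carried over from above, and the run body is simplified:

- name: Set Terraform Environment Variable
  id: set_tf_env
  env:
    # In a pull_request-triggered workflow, take the source branch
    # from the event payload rather than from GITHUB_REF_NAME.
    BRANCH: ${{ github.event.pull_request.head.ref }}
  run: |
    echo "Branch is $BRANCH"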
When running my GitLab CI pipeline I need to check whether a specified SVN directory exists.
I was using this configuration:
variables:
  DIR_CHECK: "default"

stages:
  - setup
  - test
  - otherDebugJob

.csharp:
  only:
    changes:
      - "**/*.cs"
      - "**/*.js"

setup:
  script:
    - $DIR_CHECK = $(svn ls https://server.fsl.local:port/svn/myco/personal/TestNotReal --depth empty)
    - echo $DIR_CHECK

test:
  script:
    - echo "DIR_CHECK is blank"
    - echo $DIR_CHECK
  rules:
    - if: $DIR_CHECK == ''

otherDebugJob:
  script:
    - echo "DIR_CHECK is not blank"
    - echo $DIR_CHECK
  rules:
    - if: $DIR_CHECK != ''
The svn command works and echoes back the correct reply, but $DIR_CHECK never gets set to anything but the original default. It does not store the string returned by the svn command.
How do I store the string returned by an executable in a variable in GitLab CI?
Test run:

Executing "step_script" stage of the job script 00:00
$ $DIR_CHECK = $(svn ls https://server.fsl.local:port/svn/myco/personal/TestNotReal --depth empty)
svn: E170000: Illegal repository URL 'https://server.fsl.local:port/svn/myco/personal/TestNotReal'
$ echo $DIR_CHECK
Cleaning up file based variables 00:01
Job succeeded
Passing variables between jobs
Unfortunately, you cannot use the DIR_CHECK variable the way you described. The list of jobs to be executed is generated before any job actually runs, which means that for all of the jobs DIR_CHECK will be equal to default. First, here are a few ways you can pass variables between jobs:
First way
You can add the desired command to the before_script section of your .csharp template:
.csharp:
  before_script:
    - export DIR_CHECK=$(svn ls https://server.fsl.local:port/svn/myco/personal/TestNotReal --depth empty)
and extend the other jobs with this .csharp template, as sketched below.
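A minimal sketch of what that extension could look like, using the standard extends keyword; the job name is taken from the question, and its rules are omitted here for brevity:

test:
  extends: .csharp   # inherits the before_script that sets DIR_CHECK
  script:
    - echo $DIR_CHECK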
Second way
You can pass variables between jobs with job artifacts:
setup:
  stage: setup
  script:
    - DIR_CHECK=$(svn ls https://server.fsl.local:port/svn/myco/personal/TestNotReal --depth empty)
    - echo "DIR_CHECK=$DIR_CHECK" > dotenv_file
  artifacts:
    reports:
      dotenv:
        - dotenv_file
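For completeness, a rough sketch (not part of the original answer) of how a job in a later stage could consume that variable; dotenv variables become available in the script of subsequent jobs:

test:
  stage: test
  needs: [setup]   # inherits the dotenv variables from setup
  script:
    - echo "DIR_CHECK is '$DIR_CHECK'"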
Third way
You can use trigger jobs or parent/child pipelines to pass variables into downstream pipelines.
staging:
  variables:
    DIR_CHECK: "you are awesome, guys!"
  stage: deploy
  trigger: my/deployment
In the triggered pipeline the variable will exist from the very start, and all the rules will be applied correctly.
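As an illustration (this sketch is not from the original answer), a job in the triggered downstream pipeline could then gate itself on that variable in its rules, since it exists when that pipeline is created; the value is simply the one used above:

downstream-job:
  script:
    - echo "DIR_CHECK is $DIR_CHECK"
  rules:
    - if: '$DIR_CHECK == "you are awesome, guys!"'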
Solution
In your case, if you really don't want to include the otherDebugJob job in your pipeline, you can do the following:
First approach
This is a quite easy way and it will work, but it does not look like best practice. We already know how to pass our DIR_CHECK variable from the setup job, so just add a check to the test job's script block:
script:
  - |
    # Bail out early (successfully) when DIR_CHECK is not blank,
    # so the rest only runs in the blank case this job is meant for.
    if [ -n "$DIR_CHECK" ]; then
      exit 0
    fi
  - echo "DIR_CHECK is blank"
  - echo $DIR_CHECK
Do almost the same thing for otherDebugJob, but exit early when DIR_CHECK is empty, with if [ -z "$DIR_CHECK" ].
This approach is helpful when your pipeline does not contain a lot of jobs, but test and otherDebugJob are followed by a few more jobs.
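For reference, a sketch of the corresponding guard in otherDebugJob (same pattern with the opposite test):

otherDebugJob:
  script:
    - |
      # Skip the job's work when DIR_CHECK is blank.
      if [ -z "$DIR_CHECK" ]; then
        exit 0
      fi
    - echo "DIR_CHECK is not blank"
    - echo $DIR_CHECK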
Second approach
You can fail your setup job and then handle that failure in the otherDebugJob job:
setup:
  script:
    - DIR_CHECK=$(svn ls https://server.fsl.local:port/svn/myco/personal/TestNotReal --depth empty)
    - |
      if [ -z "$DIR_CHECK" ]; then
        exit 1
      fi
otherDebugJob:
  script:
    - echo "DIR_CHECK is not blank"
  when: on_failure
This approach is useful if you only want to do some debugging after the setup job. After any on_failure jobs run, the pipeline will be marked as failed and stopped.
I saw there are several ways to trigger the CI.
Even for merge requests
https://docs.gitlab.com/ee/ci/merge_request_pipelines/
What I want to do is trigger a GitLab CI pipeline, but not for every merge request and not for every commit.
Only when someone comments:
'test please' or 'test gitlab', or some special keyword, maybe defined by a regex?
Is this possible?
That was requested in gitlab-org/gitlab-foss issue 39215, and shipped with 11.0
rspec:
  script: ...
  only:
    variables:
      - $CI_COMMIT_MESSAGE =~ /some-regexp/
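On newer GitLab versions, a roughly equivalent form with the rules: keyword would be the following sketch (not part of the original answer):

rspec:
  script: ...
  rules:
    # Run the job only when the commit message matches the pattern.
    - if: '$CI_COMMIT_MESSAGE =~ /some-regexp/'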
You also have the following workaround for pipelines (Windows cmd shell), which processes the job only if the commit message doesn't contain [CI Release]:
script:
  - git show -s --format=%%B | findstr /C:"[CI Release]" >nul 2>&1 && (exit 0) || (set errorlevel=0)
  - cd beUcb
  - call mvn -N -Pver resources:resources
  - REM ... rest of script ...
I have the following shell script. There is a deploy function that is used later in the script to deploy containers.
function deploy {
  if [[ $4 -eq QA ]]; then
    echo Building a QA Test container...
    docker create \
      --name=$1_temp \
      -e DATABASE=$3 \
      -e SPECIAL_ENV_VARIABLE \
      -p $2:9696 \
      ... # skipping other project specific settings
  else
    docker create \
      --name=$1_temp \
      -e DATABASE=$3 \
      -p $2:9696 \
      ... # skipping some project specific stuff
  fi
}
During the deployment, I have to run some tests on the application (which is in containers). I use different containers for that; however, I need to pass one additional parameter to the deploy function for my QA test container, because I need a different setting in the docker create. Hence the if statement at the beginning, which checks whether the 4th argument equals 'QA': if it does, it creates a specific container with special env variables; otherwise, with just 3 arguments, it creates a 'normal' one. I was able to run the code with two separate deploy functions, but I want to make my code more readable. Anyway, this is how it should go:
Step 1: Normal tests:
deploy container_test 9696 test_database # 3 parameters
run tests... (this is not relevant to the question)
Step 2: QA testing:
deploy container_qa_test 9696 test_database QA # 4 parameters, so I can create a special container
run tests... (again, not relevant to the question)
Step 3: If they are successful, deploy a production-ready container:
deploy production_container 9696 production_database # 3 parameters again
However, according to the log, what happens is:
Step 1: the test container is created, but via the upper if branch, even though there is no 4th parameter equal to QA.
Step 2: this runs normally.
Step 3: the production container is built as a QA container.
It never reaches the else part, even when the condition is not satisfied. Can anyone give me some tips?
Just change [[ $4 -eq QA ]] to:
if [[ "$4" == "QA" ]]; then
-eq is used to compare integers; inside [[ ]] it evaluates both operands as arithmetic expressions, so non-numeric strings like QA (and your container names) evaluate to 0, the test is always true, and the else branch is never reached. Use == (or =) to compare strings.