I would like to save a Gradle-based project's version into a variable in a GitLab CI script. In my build.gradle I have:
tasks.register('version') {
    doLast {
        println(version)
    }
}
It reads the version from gradle.properties (say, version=0.1) and prints it.
I execute it as ./gradlew version -q so I get only the result, with no unnecessary output. But when I capture the command's output Unix-style, i.e. version=$(./gradlew version -q), the runner ends the script. Is it possible to save the output into a variable for the script?
My .gitlab-ci.yml:
image: gradle:jdk11

cache: &wrapper
  paths:
    - .gradle/wrapper
    - .gradle/caches

before_script:
  - export GRADLE_USER_HOME=`pwd`/.gradle
  - chmod a+x gradlew

stages:
  - prepare
  - build
  - deploy

wrapper:
  stage: prepare
  script:
    - gradle wrapper

compile:
  stage: build
  script:
    - ./gradlew assemble
  artifacts:
    paths:
      - build/classes/**
      - build/libs/*.jar
  cache:
    <<: *wrapper
    policy: pull

properties:
  stage: deploy
  script:
    - eval version=$(./gradlew version -q)
    - echo $version # not even called
I also tried omitting eval, leaving plain version=$(./gradlew version -q) in the script, but nothing changes.
CI output:
$ export GRADLE_USER_HOME=`pwd`/.gradle
$ chmod a+x gradlew
$ version=$(./gradlew version -q)
Cleaning up file based variables
OK, I've found the solution: I simply must not use a variable. The command substitution has to be passed directly to another command, inside double quotes, like this:
.gitlab-ci.yml (part):
properties:
  stage: deploy
  script:
    - echo "$(./gradlew version -q)"
It started to work.
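The same pattern works anywhere the version value is needed, as long as the command substitution is passed straight to another command instead of being assigned. A minimal sketch, where deploy.sh is a hypothetical script standing in for whatever consumes the version:

properties:
  stage: deploy
  script:
    # deploy.sh is hypothetical; the substitution is expanded inline,
    # never assigned to a shell variable.
    - ./deploy.sh --version "$(./gradlew version -q)"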
I am trying to trigger the pipeline only when the commit message contains a specific phrase. I know this has been asked many times and there are helpful answers available. I have also checked the GitLab CI documentation, and it provides the right ways to do it.
Still, the stage is built whether or not the required phrase is in the commit message. Here is the .yml code:
before_script:
  - export LC_ALL=en_US.UTF-8
  - export LANG=en_US.UTF-8
  - export BUILD_TIME=$(date '+%Y-%m-%d %H:%M:%S')
  - echo $branch

stages:
  - build

build_job:
  stage: build
  only:
    variables:
      - $branch
      - $CI_COMMIT_MESSAGE =~ /\[ci build]/
  script:
    - bundle fastlane
    - fastlane build
Does anyone have any idea what is wrong with it?
Maybe you can remove the variable $branch and use only: refs instead. (Entries under only: variables are OR'ed, so with $branch in the list the job runs whenever $branch is set, regardless of the commit message.)
Here is an example:
before_script:
  - export LC_ALL=en_US.UTF-8
  - export LANG=en_US.UTF-8
  - export BUILD_TIME=$(date '+%Y-%m-%d %H:%M:%S')

stages:
  - build

build_job:
  stage: build
  script:
    - bundle fastlane
    - fastlane build
  only:
    variables:
      - $CI_COMMIT_MESSAGE =~ /\[ci build]/
    refs:
      - /^develop*.*$/
You can use a regex in refs. In my example this means: when the branch name contains develop and the commit message contains [ci build], the stage runs. You can modify that regex (note that /^develop*.*$/ literally means "develo" followed by zero or more "p"s and then anything; /^develop.*$/ is probably what was intended). This method is used in my production setup.
Consider the following solution, using rules (the newer replacement for only/except):
before_script:
  - export LC_ALL=en_US.UTF-8
  - export LANG=en_US.UTF-8
  - export BUILD_TIME=$(date '+%Y-%m-%d %H:%M:%S')

stages:
  - build

build_job:
  stage: build
  rules:
    - if: $CI_COMMIT_MESSAGE =~ /\[ci build]/
  script:
    - bundle fastlane
    - fastlane build
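Conditions in a rules clause can also be combined with &&, so the branch and commit-message checks from the earlier example fit in a single if. A hedged sketch; the develop prefix is carried over from the example above, not from the original question:

build_job:
  stage: build
  rules:
    # Run only on develop* branches whose commit message contains [ci build].
    - if: '$CI_COMMIT_REF_NAME =~ /^develop/ && $CI_COMMIT_MESSAGE =~ /\[ci build]/'
  script:
    - bundle fastlane
    - fastlane build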
I have many GitLab projects that follow the same CI template. Whenever there is a small change in the CI script, I have to modify it manually in each project. Is there a way to store the CI script in a central location and have each project call that script, with some environment-variable substitution? For instance,
.gitlab-ci.yml in each project:
/bin/bash -c "$(curl -fsSL <link_to_the_central_location>.sh)"
.gitlab-ci.yml in the central location:
stages:
  - build
  - test

build-code-job:
  stage: build
  script:
    - echo "Check the ruby version, then build some Ruby project files:"
    - ruby -v
    - rake

test-code-job1:
  stage: test
  script:
    - echo "If the files are built successfully, test some files with one command:"
    - rake test1

test-code-job2:
  stage: test
  script:
    - echo "If the files are built successfully, test other files with a different command:"
    - rake test2
You do not need curl; GitLab actually supports this via the include directive.
You need a repository where you store your general YAML files (you can choose whether it is a whole CI file or just parts). For this example, let's call this repository ci and assume your GitLab runs at example.com, so the project URL would be example.com/ci. We create two files in there just to show the possibilities.
The first is a whole CI definition, ready to use; let's call the file ci.yml. This approach is not very flexible:
stages:
  - build
  - test

build-code-job:
  stage: build
  script:
    - echo "Check the ruby version, then build some Ruby project files:"
    - ruby -v
    - rake

test-code-job1:
  stage: test
  script:
    - echo "If the files are built successfully, test some files with one command:"
    - rake test1

test-code-job2:
  stage: test
  script:
    - echo "If the files are built successfully, test other files with a different command:"
    - rake test2
The second is a partial CI definition, which is more extensible; let's call the file includes.yml:
.build:
  stage: build
  script:
    - echo "Check the ruby version, then build some Ruby project files:"
    - ruby -v
    - rake

.test:
  stage: test
  script:
    - echo "this script tag will be overwritten"
There is even the option to use YAML template strings; please refer to the GitLab documentation, but it is similar to the second approach.
Then we have our project, which wants to use such definitions. So either, for the whole CI file:
include:
  - project: 'ci'
    ref: master # think about tagging if you need it
    file: 'ci.yml'
As you can see, we are now referencing one YAML file with all the changes.
Or, with partial extends:
include:
  - project: 'ci'
    ref: master # think about tagging if you need it
    file: 'includes.yml'

stages:
  - build
  - test

build-code-job:
  extends: .build

job1:
  extends: .test
  script:
    - rake test1

job2:
  extends: .test
  script:
    - rake test2
As you can see, you can easily use includes to get a much more granular setup. Additionally, you could define variables at job1 and job2, e.g. for the test target, and move the script block into includes.yml, as sketched below.
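A hedged sketch of that variable-based variant; the TEST_TARGET name is an assumption for illustration, not part of the original setup:

# includes.yml: one shared script, driven by a variable
.test:
  stage: test
  script:
    - rake $TEST_TARGET

# project's .gitlab-ci.yml: each job only sets its target
job1:
  extends: .test
  variables:
    TEST_TARGET: test1

job2:
  extends: .test
  variables:
    TEST_TARGET: test2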
Furthermore, you can also use anchors for the script parts, which looks like this:
includes.yml:
.build-script: &build
  - echo "Check the ruby version, then build some Ruby project files:"
  - ruby -v
  - rake

.build:
  stage: build
  script:
    - *build
and you can also reuse the script anchor elsewhere within the same file (note that YAML anchors do not work across files pulled in via include; they only resolve within one file).
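For instance, a second job in the same includes.yml could reuse the anchored list and append its own commands; the .lint job here is a hypothetical example:

.lint:
  stage: build
  script:
    - *build # expands to the anchored echo/ruby/rake lines
    - echo "extra lint step"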
For a deeper explanation you can also take a look at https://docs.gitlab.com/ee/ci/yaml/includes.html
I am trying to implement conditional versioning, depending on whether the CI script runs for a tagged ref or not.
However, the version variable is not resolved; instead it is printed as a literal string.
The relevant jobs of the GitLab CI script:
# build template
.build_base_template: &build_base_template
  image: registry.gitlab.com/xxxxxxx/npm:latest
  tags:
    - docker
  stage: LintBuildTest
  script:
    - export CUR_VERSION='$(cat ./version.txt)$BUILD_VERSION_SUFFIX'
    - npm ci
    - npm run build
  artifacts:
    expire_in: 1 week
    paths:
      - dist/

# default build job
build:
  before_script:
    - export BUILD_VERSION_SUFFIX='-$CI_COMMIT_REF_SLUG-SNAPSHOT-$CI_COMMIT_SHORT_SHA'
  <<: *build_base_template
  except:
    refs:
      - tags
  only:
    variables:
      - $FEATURE_NAME == null

# specific build job for tagged versions
build_tag:
  before_script:
    - export BUILD_VERSION_SUFFIX=''
  <<: *build_base_template
  only:
    refs:
      - tags
Variables that are exported within before_script are visible within script:
before:
  before_script:
    - export HELLOWELT="hi martin"
  script:
    - echo $HELLOWELT # prints "hi martin"
In general, you can't export variables from child to parent processes. As a workaround, you can use a text file to write and read the variable value. It may also be possible to pass the variable via a YAML template.
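Note also that in the snippets above the values are wrapped in single quotes, and the shell never expands $(...) or $VARIABLES inside single quotes, which would explain the literal output. A minimal sketch of the double-quoted form, assuming version.txt exists as in the question:

script:
  # Double quotes let the shell expand both the command substitution
  # and the suffix variable; single quotes keep the literal text.
  - export CUR_VERSION="$(cat ./version.txt)$BUILD_VERSION_SUFFIX"
  - echo "$CUR_VERSION"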
I run e2e tests in a CI environment, but I cannot see the artifacts in Pipelines.
bitbucket-pipelines.yml:
image: cypress/base:10
options:
  max-time: 20

pipelines:
  default:
    - step:
        script:
          - npm install
          - npm run test
        artifacts:
          - /opt/atlassian/pipelines/agent/build/cypress/screenshots/*
          - screenshots/*.png
Maybe I typed the path the wrong way, but I am not sure. Does anyone have any idea what I am doing wrong?
I don't think it's documented anywhere, but artifacts only accepts paths relative to $BITBUCKET_CLONE_DIR. When I run my pipeline it says Cloning into '/opt/atlassian/pipelines/agent/build'..., so I think artifacts paths are relative to that directory. My guess is that if you change it to something like this, it will work:
image: cypress/base:10
options:
  max-time: 20

pipelines:
  default:
    - step:
        script:
          - npm install
          - npm run test
        artifacts:
          - cypress/screenshots/*.png
Edit
From your comment I now understand what the real problem is: Bitbucket Pipelines are configured to stop at any non-zero exit code, which means pipeline execution stops when Cypress fails the tests. Because artifacts are stored after the last step of the pipeline, you won't have any artifacts.
To work around this behavior, you have to make sure the pipeline doesn't stop until the images are saved. One way to do this is to prefix the npm run test part with set +e (for more details about this solution, see this answer: https://community.atlassian.com/t5/Bitbucket-questions/Pipeline-script-continue-even-if-a-script-fails/qaq-p/79469). This prevents the pipeline from stopping, but it also makes your pipeline always pass, which of course is not what you want. Therefore I recommend running the Cypress tests separately and creating a second step in your pipeline to check the output of Cypress. Something like this:
# package.json
...
"scripts": {
  "test": "<your test command>",
  "testcypress": "cypress run ..."
...
# bitbucket-pipelines.yml
image: cypress/base:10
options:
  max-time: 20

pipelines:
  default:
    - step:
        name: Run tests
        script:
          - npm install
          - npm run test
          - set +e; npm run testcypress
        artifacts:
          - cypress/screenshots/*.png
    - step:
        name: Evaluate Cypress
        script:
          - chmod +x check_cypress_output.sh
          - ./check_cypress_output.sh
# check_cypress_output.sh
# Check if the directory exists
if [ -d "./usertest" ]; then
  # If it does, check if it's empty
  if [ -z "$(ls -A ./usertest)" ]; then
    # Return the "all good" signal to BitBucket if the directory is empty
    exit 0
  else
    # Return a fault code to BitBucket if there are any images in the directory
    exit 1
  fi
else
  # Return the "all good" signal to BitBucket
  exit 0
fi
This script will check whether Cypress created any artifacts, and will fail the pipeline if it did. I'm not sure this is exactly what you need, but it's probably a step in the right direction.
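The same idea written against Cypress's default screenshot folder instead of ./usertest; the path is an assumption and should be adjusted to your project:

# check_cypress_output.sh (variant)
# Fail the step if Cypress wrote any screenshots, i.e. if a test failed.
if [ -d "./cypress/screenshots" ] && [ -n "$(ls -A ./cypress/screenshots)" ]; then
  exit 1
fi
exit 0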
Recursive search (/**) worked for me with Cypress 3.1.0, due to the additional folder inside videos and screenshots:
# [...]
pipelines:
  default:
    - step:
        name: Run tests
        # [...]
        artifacts:
          - cypress/videos/** # Double star provides recursive search.
          - cypress/screenshots/**
This is my working solution. cypress:pipeline is the custom npm script, run from my Bitbucket config file, that executes Cypress; please try cypress/screenshots/**/*.png in the artifacts section:
"cypress:pipeline": "cypress run --env user=${E2E_USER_EMAIL},pass=${E2E_USER_PASSWORD} --browser chrome --spec cypress/integration/src/**/*.spec.ts"
pipelines:
  custom:
    healthCheck:
      - step:
          name: Integration and E2E Test
          script:
            - npm install
            - npm run cypress:pipeline
          artifacts:
            # store any generated images as artifacts
            - cypress/screenshots/**/*.png
I'm building a workflow on CircleCI 2.0, and so far the jobs run fine until they get to the android job.
At the build step ./gradlew assembleRelease it fails, stating that an environment variable is not set:
Unzipping /home/circleci/.gradle/wrapper/dists/gradle-2.14.1-all/8bnwg5hd3w55iofp58khbp6yv/gradle-2.14.1-all.zip to /home/circleci/.gradle/wrapper/dists/gradle-2.14.1-all/8bnwg5hd3w55iofp58khbp6yv
Set executable permissions for: /home/circleci/.gradle/wrapper/dists/gradle-2.14.1-all/8bnwg5hd3w55iofp58khbp6yv/gradle-2.14.1/bin/gradle
FAILURE: Build failed with an exception.
* What went wrong:
Could not open terminal for stdout: $TERM not set
What I did try, according to this post, is setting the $TERM variable in a run command prior to the Gradle invocation, but the build still fails looking for this variable.
Question:
How can you resolve $TERM not set on ./gradlew assembleRelease on CircleCI?
I did verify that I'm using the correct Docker image, according to this SO post:
https://stackoverflow.com/a/45744987/1829251
Here is the config.yml gist of the android CI job:
android:
  working_directory: ~/repo/android
  docker:
    - image: circleci/android:api-25-node8-alpha
  steps:
    - checkout:
        path: ~/repo
    - restore_cache:
        key: jars-{{ checksum "build.gradle" }}-{{ checksum "app/build.gradle" }}
    - attach_workspace:
        at: ~/repo
    - run: ./gradlew androidDepedencies
    - run: export TERM=xterm
    - run: sudo chmod +x ./gradlew
    - run: ./gradlew assembleRelease
    - save_cache:
        paths:
          - ~/.gradle
        key: jars-{{ checksum "build.gradle" }}-{{ checksum "app/build.gradle" }}
    - store_test_results:
        path: ~/repo/android/reports
disclaimer: Developer Evangelist at CircleCI
- run: export TERM=xterm
That line sets the variable $TERM only for that specific shell. Each run step starts a brand new shell.
Running gradlew in the same step, as you did, is one possible solution:
- run: export TERM=xterm && ./gradlew androidDepedencies
Another would be to properly export $TERM so that all subsequent steps can see the variable. This would be done like this:
- run: echo 'export TERM=xterm' >> $BASH_ENV
$BASH_ENV contains the path to the Bash file that is sourced at the beginning of every CircleCI step. Here's where this came from: https://circleci.com/docs/2.0/env-vars/#setting-path
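A minimal sketch of how this looks among the android job's steps; the export then survives into every later step of the same job:

steps:
  - run: echo 'export TERM=xterm' >> $BASH_ENV
  - run: ./gradlew assembleRelease # $TERM is set here, sourced from $BASH_ENV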
I was exporting the environment variable incorrectly; using the following fixed the $TERM not set error in the android build:
- run: export TERM=xterm && ./gradlew androidDepedencies