I'm using GitLab CI.
I have 2 jobs in the build stage that build my app in different ways. Both jobs upload a cache for the branch. I use the compiled sources to launch some tests in the test stage.
build:
  stage: build
  script:
    - ./gradlew build --build-cache --quiet
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - "*/build"

build_with_different_conf:
  stage: build
  script:
    - ./gradlew buildDiff --build-cache --quiet
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - "*/build"

Test:
  stage: test
  script:
    - ./gradlew test --build-cache
In my example, the job build_with_different_conf takes more time to finish.
My question is: does the build job that finishes last upload the cache and replace the cache from the first build job, or does it merge its files with those of the preceding job?
Thanks.
From what I understand, you are using a global cache for Gradle dependencies, and on top of that you want some kind of job-to-job cache.
I would do it this way, more or less.
stages:
  - build
  - test

cache:
  paths:
    - <your_gradle_cache>

build_classes:
  stage: build
  script:
    - ./gradlew build --build-cache --quiet
  artifacts:
    expire_in: 1d
    paths:
      - <your_build_dir>

build_war:
  stage: build
  dependencies:
    - build_classes
  script:
    - ./gradlew buildDiff --build-cache --quiet
  artifacts:
    expire_in: 1w
    paths:
      - <path_to_your_war>

test_classes:
  stage: test
  dependencies:
    - build_classes
  script:
    - ./gradlew test --build-cache

test_war:
  stage: test
  dependencies:
    - build_war
  script:
    - test # some kind of test to assure your war is in good condition
In this configuration:
build_classes --[.classes]--> build_war
      |                           |
  [.classes]                   [.war]
      |                           |
      V                           V
test_classes                  test_war
PS. Don't forget you can use the shell (or whatever your runner's OS provides) to debug and understand more about this. You can put ls -la all over the place.
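For instance, a throwaway listing step (purely illustrative, reusing the build_classes job from above; artifacts and cache keys omitted) makes it easy to see what the cache restored and what is about to be uploaded:

build_classes:
  stage: build
  before_script:
    - ls -la            # what did the runner restore from the cache?
  script:
    - ./gradlew build --build-cache --quiet
    - ls -la */build    # what will be cached / uploaded as artifacts?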
build:
  stage: build
Jobs in the same stage run in parallel.
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - "*/build"
Cache files are managed by cache:key. If you use the same cache:key for different jobs, they share the same cache.zip file, even if you define different cache:paths. If you use the same key but different paths, your cache won't be effective, because every job will overwrite the cache.zip file with the contents of its own paths.
In your case you're using the same cache:key across different jobs.
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - "*/build"
This means the last job to finish will overwrite the cache.zip file (not merge into it), and that file will then be used by the next job and by subsequent pipelines that define the same key.
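If instead you want each build job to keep its own cache rather than overwrite the other's, one option is to give each job a distinct key. A minimal sketch (the key suffixes are just illustrative; the predefined ${CI_JOB_NAME} variable would work as well):

build:
  stage: build
  script:
    - ./gradlew build --build-cache --quiet
  cache:
    key: "${CI_COMMIT_REF_SLUG}-build"        # per-job key: no longer shared with the other job
    paths:
      - "*/build"

build_with_different_conf:
  stage: build
  script:
    - ./gradlew buildDiff --build-cache --quiet
  cache:
    key: "${CI_COMMIT_REF_SLUG}-buildDiff"
    paths:
      - "*/build"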
Bonus:
Test:
  stage: test
  script:
    - ./gradlew test --build-cache
Also beware that if this job requires the */build directory contents to exist, you have to be careful, and it's better to use artifacts instead. The cache doesn't always exist; it's provided on a best-effort basis.
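A minimal sketch of that artifacts approach, reusing the job names and paths from the question (the expiry value is just an example):

build:
  stage: build
  script:
    - ./gradlew build --build-cache --quiet
  artifacts:
    expire_in: 1d
    paths:
      - "*/build"

Test:
  stage: test
  dependencies:
    - build              # the */build artifacts are downloaded before the script runs
  script:
    - ./gradlew test --build-cache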
For example, I use GitLab CI's cache like this.
nodejs_test:
  stage: test
  image: node:12.13-alpine
  before_script:
    - npm install
  script:
    - yarn test
  cache:
    key:
      files:
        # A new cache key is computed on each package.json change.
        - package.json
    paths:
      - node_modules/

nodejs_build:
  stage: build
  image: node:12.13-alpine
  before_script:
    # In case we miss the cache, we can simply install the packages again.
    # If the cache is there, npm install won't download them again.
    - npm install
  script:
    - yarn build
  cache:
    policy: pull # totally optional
    key:
      files:
        - package.json
      prefix: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/
Related
I'm trying to run jobs A and B on a merge request. After a successful merge, job C should be executed.
Job A is used to install the Node.js dependencies and push them to the cache so jobs B and C can use that cache. On the GitLab side I'm using shared runners. Please help me solve this task. My gitlab-ci.yml:
image: node:19-alpine

stages:
  - install_dependencies
  - test1
  - test2

cache:
  key: somekeyvalue1234
  paths:
    - node_modules/
  policy: pull

install_dependencies:
  stage: install_dependencies
  cache:
    key: somekeyvalue1234
    paths:
      - node_modules/
    policy: push
  script:
    - npm ci
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'

test1:
  stage: test1
  script:
    - npm run test1
  rules:
    - if: '$CI_COMMIT_BRANCH == "merge-request-test"'

test2:
  stage: test2
  script:
    - npm run test2
  rules:
    - if: '$CI_COMMIT_BRANCH == "branch-foo"'
For some reason the cache can be downloaded successfully by job B while the merge request pipeline runs, but after a successful merge job C can't find that cache. How do I populate the cache while running the pipeline shown above?
Finally solved the issue. It was very simple, but I lost a few days googling.
In the CI/CD settings -> General pipelines I unchecked the setting "Use separate caches for protected branches", and voilà.
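If you'd rather keep that project setting enabled, newer GitLab versions also let you opt a single cache definition out of the separation with cache:unprotect (worth verifying against the docs for your GitLab version); a rough sketch using the key and paths from the question:

cache:
  key: somekeyvalue1234
  unprotect: true    # share this cache between protected and unprotected branches
  paths:
    - node_modules/
  policy: pull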
So I have a job which is triggered by specific rules: creating a new tag app-prod-1.0.0 or app-dev-1.0.0. Whenever a new tag is created I call the job, which in turn extends other jobs.
image: node:lts-alpine

stages:
  - install
  - build
  - deploy

.install-packages:
  stage: install
  script:
    - echo "INSTALL-PACKAGES"
    - yarn install --cache-folder .yarn-cache
  artifacts:
    paths:
      - node_modules
  cache:
    - key:
        files:
          - yarn.lock
      paths:
        - .yarn-cache/

.build-project:
  stage: build
  script:
    - echo "BUILD-PROJECT"
    - echo $ENVIRONMENT
    - yarn build
  artifacts:
    paths:
      - build

.deploy-project:
  stage: deploy
  script:
    - echo "DEPLOY-PROJECT"
    - ls -la build

build_prod:
  variables:
    PACKAGE: '/app/prod'
    ENVIRONMENT: 'prod'
  extends:
    - .install-packages
    - .build-project
    - .deploy-project
  rules:
    - if: '$CI_COMMIT_TAG =~ /^app-prod-[0-9]+\.[0-9]+\.[0-9]+$/'

build_dev:
  variables:
    PACKAGE: '/app/dev'
    ENVIRONMENT: 'dev'
  extends:
    - .install-packages
    - .build-project
    - .deploy-project
  rules:
    - if: '$CI_COMMIT_TAG =~ /^app-dev-[0-9]+\.[0-9]+\.[0-9]+$/'
My thought was that the jobs would run in the order I described inside the job: .install-packages, .build-project, .deploy-project. But that's not happening; it seems to just jump to the last job, .deploy-project, without installing and building, which breaks my pipeline.
How do I run/extend jobs in sequence?
This is behaviour for which I hadn't used multiple extends so far in my work with GitLab.
GitLab attempts to merge the code from the parent jobs.
Now, all your parent jobs define the script tag, and in your job, e.g. build_prod, the extends happens in the order below:
extends:
  - .install-packages
  - .build-project
  - .deploy-project
so the script code from .deploy-project overwrites the other jobs' script tags.
It works differently for variables: all of them are merged, and a value is overwritten only when the same variable name is used.
See your own example updated with variables.
image: node:lts-alpine

stages:
  - install
  - build
  - deploy

.install-packages:
  stage: install
  variables:
    PACKAGE: 'install'
    INSTALL: 'install'
  script:
    - echo "INSTALL-PACKAGES"
    - yarn install --cache-folder .yarn-cache
  artifacts:
    paths:
      - node_modules
  cache:
    - key:
        files:
          - yarn.lock
      paths:
        - .yarn-cache/

.build-project:
  stage: build
  variables:
    PACKAGE: 'build'
    BUILD: 'build'
  script:
    - echo "BUILD-PROJECT"
    - echo $ENVIRONMENT
    - yarn build
  artifacts:
    paths:
      - build

.deploy-project:
  stage: deploy
  variables:
    PACKAGE: 'deploy'
    DEPLOY: 'from deploy'
  script:
    - echo "DEPLOY-PROJECT"
    - ls -la build

build_prod:
  variables:
    PACKAGE: '/app/prod'
    ENVIRONMENT: 'prod'
  extends:
    - .install-packages
    - .build-project
    - .deploy-project
  rules:
    - if: '$CI_COMMIT_TAG =~ /^app-prod-[0-9]+\.[0-9]+\.[0-9]+$/'

build_dev:
  variables:
    PACKAGE: '/app/dev'
    ENVIRONMENT: 'dev'
  extends:
    - .install-packages
    - .build-project
    - .deploy-project
  rules:
    - if: '$CI_COMMIT_TAG =~ /^app-dev-[0-9]+\.[0-9]+\.[0-9]+$/'
Now notice how the PACKAGE variable is overwritten with the final value '/app/prod', which comes from the build_prod job itself. At the same time, the other variables from the individual parent jobs are merged to look like this:
variables:
  PACKAGE: "/app/prod"
  INSTALL: install
  BUILD: build
  DEPLOY: from deploy
  ENVIRONMENT: prod
I really found the View merged YAML feature the best way to understand how my yml file will be evaluated.
It's available under CI/CD -> Editor.
It doesn't actually "jump to the last job"; it simply executes the single job you have provided, that is build_prod or build_dev, depending on the commit tag.
As per the docs, when you call extends you are basically just merging everything inside all the template jobs that you specified, so the last stage keyword, which comes from the .deploy-project template job, wins.
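To illustrate, the effective build_prod job roughly collapses to the sketch below (showing only the keys discussed here; artifacts and cache merge by the same last-one-wins rule):

build_prod:
  stage: deploy                   # last stage wins, from .deploy-project
  variables:
    PACKAGE: '/app/prod'
    ENVIRONMENT: 'prod'
  script:                         # script also comes from .deploy-project
    - echo "DEPLOY-PROJECT"
    - ls -la build
  rules:
    - if: '$CI_COMMIT_TAG =~ /^app-prod-[0-9]+\.[0-9]+\.[0-9]+$/'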
You should specify a separate job for each stage, and maybe even put your rules into a separate template job, i.e.
.dev:
  variables:
    PACKAGE: '/app/dev'
    ENVIRONMENT: 'dev'
  rules:
    - if: '$CI_COMMIT_TAG =~ /^app-dev-[0-9]+\.[0-9]+\.[0-9]+$/'

install-dev:
  extends:
    - .dev
    - .install-packages

build-dev:
  extends:
    - .dev
    - .build-project

deploy-dev:
  extends:
    - .dev
    - .deploy-project
You should create similar jobs for the prod env: define a template job .prod, and create install-prod, build-prod and deploy-prod jobs, as sketched below.
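For completeness, the prod mirror of the dev jobs above could look roughly like this (same pattern, values taken from the question's build_prod job):

.prod:
  variables:
    PACKAGE: '/app/prod'
    ENVIRONMENT: 'prod'
  rules:
    - if: '$CI_COMMIT_TAG =~ /^app-prod-[0-9]+\.[0-9]+\.[0-9]+$/'

install-prod:
  extends:
    - .prod
    - .install-packages

build-prod:
  extends:
    - .prod
    - .build-project

deploy-prod:
  extends:
    - .prod
    - .deploy-project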
I'm configuring CircleCI to try and cache dependencies so I don't have to run yarn install on every single commit.
This is what my config.yml file looks like:
version: 2.1
jobs:
  build-and-test-frontend:
    docker:
      - image: circleci/node:14
    steps:
      - checkout
      - restore_cache:
          name: Restore Yarn Package Cache
          keys:
            - yarn-packages-{{ checksum "yarn.lock" }}
      - run:
          working_directory: ./frontend/tests
          name: Run jest tests
          command: |
            yarn install --frozen-lockfile --cache-folder ~/.
            yarn test
      - save_cache:
          name: Save Yarn Package Cache
          key: yarn-packages-{{ checksum "yarn.lock" }}
          paths:
            - ~/.cache/yarn

workflows:
  sample:
    jobs:
      - build-and-test-frontend
But when either restore_cache or save_cache attempts to run, I get the following error:
error computing cache key: template: cacheKey:1:17: executing "cacheKey" at <checksum "yarn.lock">: error calling checksum: open /home/circleci/project/yarn.lock: no such file or directory
I'm brand new to using CircleCI so I'm not sure how to interpret this. What can I do to fix this?
EDIT:
This is the structure of my directory:
--project_root
  |
  |--frontend
       |-node_modules/
       |-public/
       |-src/
       |-tests/
       |-package.json
       |-yarn.lock
It's hard for me to give a great answer since I can't see the files in your repo, but the config you have now suggests that your yarn.lock file is not in the root of the repo but rather in ./frontend/tests.
If that's where it is and that's where you want to keep it, then I'd suggest moving the working_directory key from the step level to the job level. It will then apply to every step, including the caching steps, so they should find the file they're looking for.
Update:
Thanks for the repo tree. Based on that, you likely want your config to look like this:
version: 2.1

workflows:
  sample:
    jobs:
      - build-and-test-frontend

jobs:
  build-and-test-frontend:
    docker:
      - image: cimg/node:14.17
    working_directory: ./frontend
    steps:
      - checkout
      - restore_cache:
          name: Restore Yarn Package Cache
          keys:
            - yarn-packages-{{ checksum "yarn.lock" }}
      - run:
          name: Run jest tests
          command: |
            yarn install --frozen-lockfile --cache-folder ~/.
            yarn test
      - save_cache:
          name: Save Yarn Package Cache
          key: yarn-packages-{{ checksum "yarn.lock" }}
          paths:
            - ~/.cache/yarn
You'll notice a few things here:
- I moved workflows to the top. That's just a personal stylistic choice, but I believe it helps keep your config readable as it grows.
- I moved working_directory to the job level instead of the step it was on.
- I set working_directory to the frontend directory. Most file paths on CircleCI are relative to working_directory. Since that's where yarn.lock is, that's where I set it.
- I changed the image from circleci/node:14 to cimg/node:14.17. The images in the circleci namespace are deprecated. Going forward, you'll want to use the newer CircleCI images, which live in the cimg namespace.
I am getting the error "Not uploading cache {name of branch} due to policy" from my GitLab runner. My .yaml file looks like this:
stages:
  - test
  - staging
  - publish
  - deploy

# cache using branch name
# https://gitlab.com/help/ci/caching/index.md
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .yarn
    - node_modules/
  policy: pull

before_script:
  - yarn install --cache-folder .yarn

test:
  stage: test
  image: node:14
  script:
    - yarn install
    - yarn test

pages:
  stage: staging
  image: alekzonder/puppeteer
  except:
    - master
  script:
    - yarn install
    - yarn compile
    - yarn build

publish:
  stage: publish
  image: alekzonder/puppeteer
  when: manual
  script:
    - yarn install
    - yarn compile
    - yarn build
  artifacts:
    paths:
      - dist

deploy:
  image: google/cloud-sdk:latest
  stage: deploy
  needs:
    - publish
  script:
    - gcloud auth activate-service-account --account ${GCP_SERVICE_ACCOUNT} --key-file ${GOOGLE_APPLICATION_CREDENTIALS}
    - gsutil -u test rsync -r -d dist/ gs://test.ca
I was wondering why it always fails to upload, and thereby fails to extract the cache. Any tips or corrections are welcome. Here is a screenshot of where it fails:
You have the following set:
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .yarn
    - node_modules/
  policy: pull
This sets a pipeline-global precedent that you only want to pull from the cache (policy: pull).
You'll want to read https://docs.gitlab.com/ee/ci/yaml/#cachepolicy
If you omit the policy: key, the default is pull-push (which will get your cache uploading).
I tend to structure my pipelines a little differently than yours, though. I typically define a "prep" stage and run yarn install once there:
"Installing Dependencies":
image: node:lts
stage: Prep
cache:
paths:
- .yarn
- node_modules/
policy: pull-push
script:
yarn install
...
Note: you can then leave your global policy as pull, since this one job overrides it with pull-push.
Then you can remove the yarn install from all other tasks, as the cache will be restored for them.
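A sketch of what a downstream job could then look like under that setup (the job name, stage name, and script are illustrative); it only pulls the cache and skips the install:

"Run Tests":
  image: node:lts
  stage: Test
  cache:
    paths:
      - .yarn
      - node_modules/
    policy: pull        # restore only; the prep job already pushed the cache
  script:
    - yarn test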
I'm setting up a GitLab pipeline with multiple stages and at least two jobs in each stage. How do I conditionally allow the next stage to continue even if one job in the first stage failed (and the whole stage is therefore marked as failed)? For example: I want to prepare, build and test, and I want to do this on a Windows and a Linux runner. So if my Linux runner fails during preparation but my Windows runner succeeds, the next stage should start without building the Linux package (because that already failed), but the Windows build should start.
My intention is that if one system fails, at least the other can continue.
I added dependencies and thought that this would solve my problem, because if the "build windows" job depends on "prepare windows", it shouldn't matter whether "prepare linux" failed. But this isn't the case :/
image: node:10.6.0

stages:
  - prepare
  - build
  - test

prepare windows:
  stage: prepare
  tags:
    - windows
  script:
    - npm i
    - git submodule foreach 'npm i'

prepare linux:
  stage: prepare
  tags:
    - linux
  script:
    - npm i
    - git submodule foreach 'npm i'

build windows:
  stage: build
  tags:
    - windows
  script:
    - npm run build-PWA
  dependencies:
    - prepare windows

build linux:
  stage: build
  tags:
    - linux
  script:
    - npm run build-PWA
  dependencies:
    - prepare linux

unit windows:
  stage: test
  tags:
    - windows
  script:
    - npm run test
  dependencies:
    - build windows
  artifacts:
    paths:
      - dist/
      - package.json
    expire_in: 5 days

unit linux:
  stage: test
  tags:
    - linux
  script:
    - npm run test
  dependencies:
    - build linux
  artifacts:
    paths:
      - dist/
      - package.json
    expire_in: 5 days
See the allow_failure option:
allow_failure allows a job to fail without impacting the rest of the CI suite.
Example:
job1:
  stage: test
  script:
    - execute_script_that_will_fail
  allow_failure: true
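Applied to the configuration in the question, a sketch could mark each prepare job as allowed to fail so that the build stage still starts even when one platform's preparation fails (the matching build job for that platform will then most likely fail on its own):

prepare linux:
  stage: prepare
  tags:
    - linux
  allow_failure: true    # a failing Linux prepare no longer blocks the build stage
  script:
    - npm i
    - git submodule foreach 'npm i'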