YAML merge level

We have a gitlab-ci YAML file with duplicated parts:
test:client:
  before_script:
    - node -v
    - yarn install
  cache:
    untracked: true
    key: client
    paths:
      - node_modules/
  script:
    - npm test

build:client:
  before_script:
    - node -v
    - yarn install
  cache:
    untracked: true
    key: client
    paths:
      - node_modules/
    policy: pull
  script:
    - npm build
I would like to know whether, with the merge syntax, I can extract the common part and reuse it efficiently in these two jobs.
.node_install_common: &node_install_common
  before_script:
    - node -v
    - yarn install
  cache:
    untracked: true
    key: client
    paths:
      - node_modules/
But the real question is: at which indentation level do I have to merge the block so that policy: pull is applied inside the cache section? I tried this:
test:client:
  <<: *node_install_common
  script:
    - npm test

test:build:
  <<: *node_install_common
    policy: pull
  script:
    - npm build
But I get an invalid YAML error. How should I indent this to get the correct merge behavior?

Note that merge keys are not part of the YAML specification and are therefore not guaranteed to work. They were specified for the obsolete YAML 1.1 version and have not been updated for the current YAML 1.2 version. We intend to explicitly remove merge keys in the upcoming YAML 1.3 (and possibly provide a better alternative).
That being said: there is no special merge syntax. The merge key << must be placed like a normal key in a mapping, which means it must have the same indentation as the mapping's other keys. So this would be valid:
test:client:
  <<: *node_install_common
  script:
    - npm test
While this is not:
test:build:
  <<: *node_install_common
    policy: pull
  script:
    - npm build
Note that, compared to your code, I added the trailing : to the test:client and test:build lines.
Now, merge is specified to place all key-value pairs of the referenced mapping into the current mapping if they do not already exist there. This means that you cannot, as you want to, replace values deeper in the subtree; merge does not support partial replacement of subtrees. However, you can use merge multiple times:
.node_install_common: &node_install_common
  before_script:
    - node -v
    - yarn install
  cache: &cache_common
    untracked: true
    key: client
    paths:
      - node_modules/

test:client:
  <<: *node_install_common
  script:
    - npm test

test:build:
  <<: *node_install_common
  cache:                # define an own cache mapping instead of letting merge place
                        # its version here (which could not be modified)
    <<: *cache_common   # load the common cache content
    policy: pull        # ... and place your additional key-value pair
  script:
    - npm build
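As an aside: GitLab CI also has its own extends keyword, which performs a deep merge of job definitions without relying on YAML anchors at all. A sketch of the same setup using extends (assuming a GitLab version that supports it, 11.3 or later):

.node_install_common:
  before_script:
    - node -v
    - yarn install
  cache:
    untracked: true
    key: client
    paths:
      - node_modules/

test:client:
  extends: .node_install_common
  script:
    - npm test

build:client:
  extends: .node_install_common
  cache:
    policy: pull   # extends merges hashes deeply, so untracked/key/paths are kept
  script:
    - npm build

Because extends merges hash maps recursively, the child's cache: policy: pull is combined with the template's cache settings rather than replacing the whole cache section.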

Related

CircleCI error when attempting to restore/save cache

I'm configuring CircleCI to try and cache dependencies so I don't have to run yarn install on every single commit.
This is what my config.yml file looks like:
version: 2.1
jobs:
  build-and-test-frontend:
    docker:
      - image: circleci/node:14
    steps:
      - checkout
      - restore_cache:
          name: Restore Yarn Package Cache
          keys:
            - yarn-packages-{{ checksum "yarn.lock" }}
      - run:
          working_directory: ./frontend/tests
          name: Run jest tests
          command: |
            yarn install --frozen-lockfile --cache-folder ~/.
            yarn test
      - save_cache:
          name: Save Yarn Package Cache
          key: yarn-packages-{{ checksum "yarn.lock" }}
          paths:
            - ~/.cache/yarn
workflows:
  sample:
    jobs:
      - build-and-test-frontend
But when either restore_cache or save_cache attempts to run, I get the following error:
error computing cache key: template: cacheKey:1:17: executing "cacheKey" at <checksum "yarn.lock">: error calling checksum: open /home/circleci/project/yarn.lock: no such file or directory
I'm brand new to using CircleCI so I'm not sure how to interpret this. What can I do to fix this?
EDIT:
This is the structure of my directory:
--project_root
  |
  |--frontend
      |-node_modules/
      |-public/
      |-src/
      |-tests/
      |-package.json
      |-yarn.lock
It's hard for me to give a great answer since I can't see the files in your repo, but the config you have now suggests that your yarn.lock file is not in the root of the repo but rather in ./frontend/tests.
If that's where it is and that's where you want to keep it, then I'd suggest moving the working_directory key from the step level to the job level. It will then apply to every step, including the caching steps, and they should find the file they are looking for.
Update:
Thanks for the repo tree. According to that, you likely want your config to look like this:
version: 2.1
workflows:
  sample:
    jobs:
      - build-and-test-frontend
jobs:
  build-and-test-frontend:
    docker:
      - image: cimg/node:14.17
    working_directory: ./frontend
    steps:
      - checkout
      - restore_cache:
          name: Restore Yarn Package Cache
          keys:
            - yarn-packages-{{ checksum "yarn.lock" }}
      - run:
          name: Run jest tests
          command: |
            yarn install --frozen-lockfile --cache-folder ~/.
            yarn test
      - save_cache:
          name: Save Yarn Package Cache
          key: yarn-packages-{{ checksum "yarn.lock" }}
          paths:
            - ~/.cache/yarn
You'll notice a few things here:
- I moved workflows to the top. That's just a personal stylistic choice, but I believe it helps keep your config readable as it grows.
- I moved working_directory to the job level instead of the step it was on.
- I set working_directory to the frontend directory. Most filepaths on CircleCI will be relative to the working_directory. Since that's where yarn.lock is, that's where I set it.
- I changed the image from circleci/node:14 to cimg/node:14.17. The images in the circleci namespace are deprecated; going forward, you'll want to use the newer CircleCI images, which are in the cimg namespace.
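One more thing worth knowing here (not part of the original answer): restore_cache tries its keys in order and matches by prefix, so you can add a bare-prefix fallback that restores the most recent cache even when yarn.lock has changed. A sketch:

- restore_cache:
    name: Restore Yarn Package Cache
    keys:
      - yarn-packages-{{ checksum "yarn.lock" }}
      # falls back to the newest cache whose key starts with this prefix
      - yarn-packages-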

Gitlab cache not uploading due to policy

I am getting the error Not uploading cache {name of branch} due to policy in my GitLab runner. My YAML file looks like this:
stages:
  - test
  - staging
  - publish
  - deploy

# cache using branch name
# https://gitlab.com/help/ci/caching/index.md
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .yarn
    - node_modules/
  policy: pull

before_script:
  - yarn install --cache-folder .yarn

test:
  stage: test
  image: node:14
  script:
    - yarn install
    - yarn test

pages:
  stage: staging
  image: alekzonder/puppeteer
  except:
    - master
  script:
    - yarn install
    - yarn compile
    - yarn build

publish:
  stage: publish
  image: alekzonder/puppeteer
  when: manual
  script:
    - yarn install
    - yarn compile
    - yarn build
  artifacts:
    paths:
      - dist

deploy:
  image: google/cloud-sdk:latest
  stage: deploy
  needs:
    - publish
  script:
    - gcloud auth activate-service-account --account ${GCP_SERVICE_ACCOUNT} --key-file ${GOOGLE_APPLICATION_CREDENTIALS}
    - gsutil -u test rsync -r -d dist/ gs://test.ca
I was wondering why it always fails to upload, and thereby fails to extract the cache. Any tips or corrections are welcome.
You have the following set:
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .yarn
    - node_modules/
  policy: pull
This sets a pipeline-global default saying that you only want to pull from the cache (policy: pull).
You'll want to read https://docs.gitlab.com/ee/ci/yaml/#cachepolicy
If you omit the policy: key, the default is pull-push (which will get your cache uploading).
I tend to structure my pipelines a little differently than yours, though. I typically define a "Prep" stage and run yarn install once there:
"Installing Dependencies":
image: node:lts
stage: Prep
cache:
paths:
- .yarn
- node_modules/
policy: pull-push
script:
yarn install
...
Note: you can then keep your global policy of pull, since this one job overrides it with pull-push.
Then, you can remove the yarn install on all other tasks, as the cache will be restored.
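To illustrate, a downstream job could then look something like this (a sketch; the job and stage names here are made up):

"Running Tests":
  image: node:lts
  stage: Test
  cache:
    paths:
      - .yarn
      - node_modules/
    policy: pull   # only download the cache that the prep job uploaded
  script:
    - yarn test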

GitLab CI : merge or replace cache?

I'm using GitLab CI.
I have 2 jobs in the build stage that build my app in different ways. Both jobs upload a cache for the branch. I then use the compiled sources to launch some tests in the test stage.
build:
  stage: build
  script:
    - ./gradlew build --build-cache --quiet
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - "*/build"

build_with_different_conf:
  stage: build
  script:
    - ./gradlew buildDiff --build-cache --quiet
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - "*/build"

Test:
  stage: test
  script:
    - ./gradlew test --build-cache
In my example, the job build_with_different_conf takes more time to finish.
My question is: does the last build job to finish upload its cache and replace the cache from the first build job, or are the files merged with those of the preceding job?
Thanks.
From what I understand, you are using a global cache for Gradle dependencies, and on top of that you want some kind of job-to-job cache. I would do it this way, more or less:
stages:
  - build
  - test

cache:
  paths:
    - <your_gradle_cache>

build_classes:
  stage: build
  script:
    - ./gradlew build --build-cache --quiet
  artifacts:
    expire_in: 1d
    paths:
      - <your_build_dir>

build_war:
  stage: build
  dependencies:
    - build_classes
  script:
    - ./gradlew buildDiff --build-cache --quiet
  artifacts:
    expire_in: 1w
    paths:
      - <path_to_your_war>

test_classes:
  stage: test
  dependencies:
    - build_war
  script:
    - ./gradlew test --build-cache

test_war:
  stage: test
  dependencies:
    - build_war
  script:
    - test # some kind of test to assure your war is in good condition
In this configuration:
build_classes --[.classes]--> build_war --> [.war]
     |                            |
 [.classes]                     [.war]
     |                            |
     V                            V
test_classes                  test_war
PS: Don't forget you can use the shell (or whatever your runner's OS offers) to debug and understand more about this. You can put ls -la all over the place.
build:
  stage: build
Jobs in the same stage run in parallel.
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - "*/build"
Cache files are managed by cache:key. This means that if you use the same cache:key for different jobs, they will share the same cache.zip file across jobs, even if you define different cache:paths. If you use the same key but different paths, your cache won't be effective, because every job will overwrite the cache.zip file with the contents of a different path.
In your case, you're using the same cache:key across different jobs:
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - "*/build"
This means the last job to finish will overwrite the cache.zip file (not merge it), and that file will then be used by the next job and by subsequent pipeline jobs with the same key.
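If you actually want each build job to keep its own cache rather than overwrite a shared one, you can give each job a distinct key. A sketch (the key suffixes are made up; stage/script omitted for brevity):

build:
  cache:
    key: "${CI_COMMIT_REF_SLUG}-build"
    paths:
      - "*/build"

build_with_different_conf:
  cache:
    key: "${CI_COMMIT_REF_SLUG}-diff"
    paths:
      - "*/build"

The trade-off is that a later job then has to declare which of those keys it wants to restore, which is part of why the artifacts approach above is often the better fit.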
Bonus:
Test:
  stage: test
  script:
    - ./gradlew test --build-cache
Also beware that if this job requires the */build directory contents to exist, you have to be careful, and it is better to use artifacts instead: the cache does not always exist and is provided as a best-effort delivery.
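As a sketch of that advice applied to the original jobs (assuming the test job needs the compiled output), the build jobs would publish */build as artifacts and the test job would pull them in:

build:
  stage: build
  script:
    - ./gradlew build --build-cache --quiet
  artifacts:
    expire_in: 1 day
    paths:
      - "*/build"

Test:
  stage: test
  dependencies:
    - build   # build_with_different_conf would need its own artifacts: block too
  script:
    - ./gradlew test --build-cache

Unlike the cache, artifacts are guaranteed to be passed from one stage to the next.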
For example, I use GitLab CI's cache like this:
nodejs_test:
  stage: test
  image: node:12.13-alpine
  before_script:
    - npm install
  script:
    - yarn test
  cache:
    key:
      files:
        # A new cache key will be computed on each package.json change.
        - package.json
    paths:
      - node_modules/

nodejs_build:
  stage: build
  image: node:12.13-alpine
  before_script:
    # In case we miss the cache, we can simply install packages again.
    # If the cache is there, npm install won't download them again.
    - npm install
  script:
    - yarn build
  cache:
    policy: pull # totally optional
    key:
      files:
        - package.json
      prefix: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/

yaml: did not find expected key

Error parsing config file: yaml: line 22: did not find expected key
Cannot find a job named build to run in the jobs: section of your configuration file.
I got those errors, but I'm really new to YAML, so I can't find the reasons why it's not working. Any ideas? Some say there might be extra spaces or something, but I can't really find any.
YAML file:
defaults: &defaults:
  - checkout
  - restore_cache:
      keys:
        - v1-dependencies-{{ checksum "package.json" }}
        - v1-dependencies-
  - run: npm install
  - save_cache:
      paths:
        - node_modules
      key: v1-dependencies-{{ checksum "package.json" }}

version: 2
jobs:
  build:
    docker:
      - image: circleci/node:10.3.0
    working_directory: ~/repo
    steps:
      <<: *defaults # << here
      - run: npm run test
      - run: npm run build
  deploy:
    docker:
      - image: circleci/node:10.3.0
    working_directory: ~/repo
    steps:
      <<: *defaults
      - run:
          name: Deploy app scripts to AWS S3
          command: npm run update-app

workflows:
  version: 2
  build-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: master
What you are trying to do is merge two sequences, i.e. have all elements of defaults merged into steps. That is not supported by the YAML spec: you can only merge maps; sequences can only be nested.
This is invalid:
steps:
  <<: *defaults
  - run:
as <<: is for merging map elements, not sequences
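For contrast, a sketch of where <<: is valid: merging one mapping into another (the keys here are only illustrative):

common: &common
  docker:
    - image: circleci/node:10.3.0
  working_directory: ~/repo

build:
  <<: *common   # fine: both sides are mappings
  parallelism: 2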
If you do this:
step_values: &step_values
  - run ...

steps:
  - *defaults
  - *step_values
You will end up with nested sequences, which is not what you intend.
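Concretely, each alias expands to its whole list as a single element, so the parsed structure would look like this (a sketch of the expansion, with the elisions kept):

steps:
  - - checkout        # *defaults becomes one nested list item
    - run: npm install
  - - run ...         # *step_values becomes another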
It's not possible for now. Unfortunately, the only solution is to repeat the whole list. Many users are requesting the same feature.
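If upgrading the config is an option, CircleCI 2.1 introduced reusable commands, which are the supported way to share a list of steps. A sketch (the command name install_deps is made up):

version: 2.1
commands:
  install_deps:
    steps:
      - checkout
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "package.json" }}
            - v1-dependencies-
      - run: npm install

jobs:
  build:
    docker:
      - image: circleci/node:10.3.0
    steps:
      - install_deps
      - run: npm run test
      - run: npm run build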
It looks like your YAML is not written properly. You can always check the structure of a YAML file with an open-source validator such as http://www.yamllint.com/.
Checking your file, line 22 is where it goes wrong: as explained by Srikanth, you are trying to merge two sequences, i.e. all elements of defaults into steps, which is not supported in YAML at the moment. You can only merge maps, not sequences.

Travis + Matrix deploy to multiple aws buckets

I wonder how to set up a matrix with a dual deployment. Here is the logic:
language: node_js
node_js:
  - 8
cache:
  yarn: true
matrix:
  include:
    - env: MHTD=USER1
    - env: MHTD=USER2
install:
  - if [ "$MHTD" = "USER1" ]; then yarn build-web:"$BUILD_NAME"; fi
  - if [ "$MHTD" = "USER2" ]; then yarn build-web:"$BUILD_NAME1"; fi
So now I have the logic, but I don't know how to set up the deployment step. I want to push two different builds to two different S3 buckets. How can I do that?
Any suggestions?
Matrix expansion per build stage is currently an open feature request for Travis CI. A workaround is to manually specify multiple stages with differing inputs, as in this example:
cache: bundler
jobs:
  include:
    - stage: prepare cache
      script: true
      rvm: 2.3
    - stage: test
      script: bundle show
      rvm: 2.3
    - stage: test
      script: bundle show
      rvm: 2.3
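For the deployment itself, one pattern that should work (a sketch, not from the original answer; the bucket names are placeholders and the AWS credentials are assumed to be set as environment variables) is to give each matrix entry its own deploy section gated on its build:

matrix:
  include:
    - env: MHTD=USER1
      deploy:
        provider: s3
        bucket: my-bucket-one   # placeholder
        skip_cleanup: true
        on:
          branch: master
    - env: MHTD=USER2
      deploy:
        provider: s3
        bucket: my-bucket-two   # placeholder
        skip_cleanup: true
        on:
          branch: master

Each matrix job then deploys only its own build output to its own bucket.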
