Error parsing config file: yaml: line 22: did not find expected key
Cannot find a job named build to run in the jobs: section of your configuration file.
I got those errors, but I'm really new to YAML, so I can't really find the reason why it's not working. Any ideas? Some say it might have extra spaces or something, but I can't find any.
YAML file:
defaults: &defaults
  - checkout
  - restore_cache:
      keys:
        - v1-dependencies-{{ checksum "package.json" }}
        - v1-dependencies-
  - run: npm install
  - save_cache:
      paths:
        - node_modules
      key: v1-dependencies-{{ checksum "package.json" }}
version: 2
jobs:
  build:
    docker:
      - image: circleci/node:10.3.0
    working_directory: ~/repo
    steps:
      <<: *defaults # << here
      - run: npm run test
      - run: npm run build
  deploy:
    docker:
      - image: circleci/node:10.3.0
    working_directory: ~/repo
    steps:
      <<: *defaults
      - run:
          name: Deploy app scripts to AWS S3
          command: npm run update-app
workflows:
  version: 2
  build-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: master
What you are trying to do is merge two sequences, i.e. all elements of defaults are merged into steps. That is not supported by the YAML spec. You can only merge maps and nest sequences.
This is invalid:
steps:
  <<: *defaults
  - run:

as <<: is for merging map elements, not sequences.
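For contrast, a minimal sketch of what <<: is actually for: merging one mapping's keys into another (this reuses the docker/working_directory keys from the question, not its steps list):

defaults: &defaults
  docker:
    - image: circleci/node:10.3.0
  working_directory: ~/repo

build:
  <<: *defaults   # valid: defaults is a mapping, so its keys are merged in
  steps:
    - checkout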
If you do this:
step_values: &step_values
  - run ...

steps:
  - *defaults
  - *step_values
You will end up with nested sequences, which is not what you intend.
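To make the nesting concrete, the two aliases above resolve to roughly this structure:

steps:
  - - checkout            # first item of steps: the entire defaults sequence
    - restore_cache:
        keys:
          - v1-dependencies-{{ checksum "package.json" }}
  - - run ...             # second item: the entire step_values sequence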
It's not possible for now. Unfortunately, the only solution is to repeat the whole list. Many users are requesting the same feature.
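Concretely, repeating the list for the build job from the question would look like this sketch (the deploy job would inline the same shared steps):

jobs:
  build:
    docker:
      - image: circleci/node:10.3.0
    working_directory: ~/repo
    steps:
      # the shared steps, inlined instead of merged
      - checkout
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "package.json" }}
            - v1-dependencies-
      - run: npm install
      - save_cache:
          paths:
            - node_modules
          key: v1-dependencies-{{ checksum "package.json" }}
      # the job-specific steps
      - run: npm run test
      - run: npm run build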
It looks like your YAML is not written properly. You can always check the structure of your YAML with a validation website such as http://www.yamllint.com/.
On checking the YAML file, line 22 is where it goes wrong. As explained by Srikanth, what you are trying to do is merge two sequences, i.e. all elements of defaults are merged into steps, which is not supported in YAML at the moment. You can only merge maps and nest sequences.
If you do this:

step_values: &step_values
  - run ...

steps:
  - *defaults
  - *step_values

You will end up with nested sequences, which is not what you intend.
I have self-hosted agents on multiple environments that I am trying to run the same build/deploy processes on. I would like to be able to deploy the same code from a single repo to multiple systems concurrently. Thus, I have created an "overhead" pipeline, and several "processes" pipeline templates. Everything seems to be going very well, except for when I try to perform checkouts of the same repo twice in the same pipeline execution. I get the following error:
An error occurred while loading the YAML build pipeline. An item with the same key has already been added.
I would really like to be able to just click ONE button to trigger a main pipeline that calls all the templates it requires and passes the parameters needed to get all my jobs done at once. I could of course define this "overhead" pipeline and then queue up as many instances of it as I need per system that I need to deploy to, but I'm lazy, hence why I'm using pipelines!
As soon as I remove the checkout from Common.yml, the validation succeeds without any issues. If I keep the checkout in there but only call the Common.yml once for the entire Overhead pipeline, then it succeeds without any issues as well. But the problem is: I need to pull the contents of the repo to EACH of my agents that are running on completely separate environments that are in no way ever able to talk to each other (can't pull the information to one agent and have it do some sort of a "copy" to all the other agent locations.....).
Any assistance is very much welcomed, thank you!
The following is my "overhead" pipeline:
# azure-pipelines.yml
trigger: none

parameters:
  - name: vLAN
    type: string
    default: 851
    values:
      - 851
      - 1105

stages:
  - stage: vLAN851
    condition: eq('${{ parameters.vLAN }}', '851')
    pool:
      name: xxxxx
      demands:
        - vLAN -equals 851
    jobs:
      - job: Common_851
        steps:
          - template: Procedures/Common.yml
      - job: Export_851
        dependsOn: Common_851
        steps:
          - template: Procedures/Export.yml
            parameters:
              Server: ABTS-01
  - stage: vLAN1105
    condition: eq('${{ parameters.vLAN }}', '1105')
    pool:
      name: xxxxx
      demands:
        - vLAN -equals 1105
    jobs:
      - job: Common_1105
        steps:
          - template: Procedures/Common.yml
      - job: Export_1105
        dependsOn: Common_1105
        steps:
          - template: Procedures/Export.yml
            parameters:
              Server: OTS-01
And here is the "Procedures/Common.yml":
steps:
  - checkout: git://xxxxx/yyyyy#$(Build.SourceBranchName)
    clean: true
    enabled: true
    timeoutInMinutes: 1
  - task: UsePythonVersion@0
    enabled: true
    timeoutInMinutes: 1
    displayName: Select correct version of Python
    inputs:
      versionSpec: '3.8'
      addToPath: true
      architecture: 'x64'
  - task: CmdLine@2
    enabled: true
    timeoutInMinutes: 5
    displayName: Ensure Python Requirements Installed
    inputs:
      script: |
        python -m pip install GitPython
And here is the "Procedures/Export.yml":
parameters:
  - name: Server
    type: string

steps:
  - task: PythonScript@0
    enabled: true
    timeoutInMinutes: 3
    displayName: xxxxx
    inputs:
      arguments: --name "xxxxx" --mode True --Server ${{ parameters.Server }}
      scriptSource: 'filePath'
      scriptPath: 'xxxxx/main.py'
I managed to make checkout work with variable branch names by using template expression variables ${{ ... }} instead of macro syntax $(...) variables.
The difference is that template expressions are processed at compile time, while macros are processed at runtime.
So in my case I have something like:
- checkout: git://xxx/yyy#${{ variables.BRANCH_NAME }}
For more information about variable syntax, see:
Understand variable syntax
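As a short sketch of the contrast (BRANCH_NAME stands in for whatever pipeline variable holds the branch name):

# macro syntax: processed at runtime, which did not work here
- checkout: git://xxx/yyy#$(BRANCH_NAME)

# template expression: processed at compile time, which did work
- checkout: git://xxx/yyy#${{ variables.BRANCH_NAME }}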
I couldn't get it to work with expressions but I was able to get it to work using repository resources following the documentation at: https://learn.microsoft.com/en-us/azure/devops/pipelines/repos/multi-repo-checkout?view=azure-devops
resources:
  repositories:
    - repository: MyGitHubRepo # The name used to reference this repository in the checkout step
      type: git
      name: MyAzureProjectName/MyGitRepo
      ref: $(Build.SourceBranch)

trigger:
  - main

pool:
  vmImage: 'ubuntu-latest'

# some job
steps:
  - checkout: MyGitHubRepo

# some other job
steps:
  - checkout: MyGitHubRepo
  - script: dir $(Build.SourcesDirectory)
I'm configuring CircleCI to try and cache dependencies so I don't have to run yarn install on every single commit.
This is what my config.yml file looks like:
version: 2.1
jobs:
  build-and-test-frontend:
    docker:
      - image: circleci/node:14
    steps:
      - checkout
      - restore_cache:
          name: Restore Yarn Package Cache
          keys:
            - yarn-packages-{{ checksum "yarn.lock" }}
      - run:
          working_directory: ./frontend/tests
          name: Run jest tests
          command: |
            yarn install --frozen-lockfile --cache-folder ~/.
            yarn test
      - save_cache:
          name: Save Yarn Package Cache
          key: yarn-packages-{{ checksum "yarn.lock" }}
          paths:
            - ~/.cache/yarn

workflows:
  sample:
    jobs:
      - build-and-test-frontend
But when either restore_cache or save_cache attempts to run, I get the following error:
error computing cache key: template: cacheKey:1:17: executing "cacheKey" at <checksum "yarn.lock">: error calling checksum: open /home/circleci/project/yarn.lock: no such file or directory
I'm brand new to using CircleCI so I'm not sure how to interpret this. What can I do to fix this?
EDIT:
This is the structure of my directory:
--project_root
  |
  |--frontend
     |-node_modules/
     |-public/
     |-src/
     |-tests/
     |-package.json
     |-yarn.lock
It's hard for me to give a great answer since I can't see the files in your repo, but the config you have now suggests that your yarn.lock file is not in the root of the repo but rather in ./frontend/tests.
If that's where it is and that's where you want to keep it, then I'd suggest moving the working_directory key from the step level to the job level. This will apply it to every step, including the caching steps, and they should then find the file they are looking for.
Update:
Thanks for the repo tree. According to that you likely want to have your config like this:
version: 2.1
workflows:
  sample:
    jobs:
      - build-and-test-frontend

jobs:
  build-and-test-frontend:
    docker:
      - image: cimg/node:14.17
    working_directory: ./frontend
    steps:
      - checkout
      - restore_cache:
          name: Restore Yarn Package Cache
          keys:
            - yarn-packages-{{ checksum "yarn.lock" }}
      - run:
          name: Run jest tests
          command: |
            yarn install --frozen-lockfile --cache-folder ~/.
            yarn test
      - save_cache:
          name: Save Yarn Package Cache
          key: yarn-packages-{{ checksum "yarn.lock" }}
          paths:
            - ~/.cache/yarn
You'll notice a few things here:
I moved workflows to the top. That's just a personal stylistic choice, but I believe it helps keep your config readable as it grows.
I moved working_directory to the job level instead of the step it was on.
I set working_directory to the frontend directory. Most filepaths on CircleCI will be relative to the working_directory. Since that's where yarn.lock is, that's where I set it.
I changed the image from circleci/node:14 to cimg/node:14.17. The images in the circleci namespace are deprecated. Going forward, you'll want to use the newer CircleCI images, which are in the cimg namespace.
Despite using copy and paste, I keep getting the following error when I try to validate my YAML with circleci config validate:
Error: Unable to parse YAML
while parsing a block mapping
 in 'string', line 30, column 5:
        <<: *defaults
        ^
expected <block end>, but found '<block mapping start>'
 in 'string', line 31, column 7:
          steps:
          ^
The yaml causing the error is the deploy-job-canada:
# Deploy the project to google
deploy-job:
  <<: *defaults
  steps:
    - attach_workspace:
        at: ~/repo
    - run:
        name: Firebase set project-id
        command: ./node_modules/.bin/firebase use production --token=$FIREBASE_DEPLOY_TOKEN
    - run:
        name: Deploy Master to Firebase
        command: ./node_modules/.bin/firebase deploy --token=$FIREBASE_DEPLOY_TOKEN

deploy-job-canada:
  <<: *defaults
  steps:
    - attach_workspace:
        at: ~/repo
    - run:
        name: Firebase set project-id
        command: ./node_modules/.bin/firebase use canada --token=$FIREBASE_DEPLOY_TOKEN_CANADA
    - run:
        name: Deploy Master to Firebase
        command: ./node_modules/.bin/firebase deploy --token=$FIREBASE_DEPLOY_TOKEN_CANADA
For reference, if I include just deploy-job it works fine. I replicated this job exactly, so I am very confused about how I ended up with that error.
Hi everyone. I need some help with some issues that I am facing while configuring CircleCI for my Angular project.
The config.yml that I am using for the build and deploy process is detailed below. Currently I have separate jobs for each environment, and each one includes both the build and the deploy. The problem with this approach is that I am repeating myself, and I can't find the correct way to deploy an artifact built in a previous job of the same workflow.
version: 2
jobs:
  build:
    docker:
      - image: circleci/node:8-browsers
    steps:
      - checkout
      - restore_cache:
          key: dependency-cache-{{ checksum "package.json" }}
      - run:
          name: Install dependencies
          command: npm install
      - save_cache:
          key: dependency-cache-{{ checksum "package.json" }}
          paths:
            - .node_modules
      - run:
          name: Build Application (Production mode - aot enabled)
          command: npm run build:prod
      - store_artifacts:
          path: dist
          destination: dist
  deploy_prod:
    docker:
      - image: circleci/node:8-browsers
    environment:
      - FIREBASE_TOKEN: "1/AFF2414141ASdASDAKDA4141421sxscq"
    steps:
      - checkout
      - restore_cache:
          key: dependency-cache-{{ checksum "package.json" }}
      - run:
          name: Install dependencies
          command: npm install
      - save_cache:
          key: dependency-cache-{{ checksum "package.json" }}
          paths:
            - .node_modules
      - run:
          name: Build Application (Production mode - aot enabled)
          command: npm run build:prod
      - store_artifacts:
          path: dist
          destination: dist
      - run:
          command: ./node_modules/.bin/firebase use default
      - deploy:
          command: ./node_modules/.bin/firebase deploy --token=$FIREBASE_TOKEN
  deploy_qa:
    docker:
      - image: circleci/node:8-browsers
    environment:
      - FIREBASE_TOKEN: "1/AFF2414141ASdASDAKDA4141421sxscq"
    steps:
      - checkout
      - restore_cache:
          key: dependency-cache-{{ checksum "package.json" }}
      - run:
          name: Install dependencies
          command: npm install
      - save_cache:
          key: dependency-cache-{{ checksum "package.json" }}
          paths:
            - .node_modules
      - run:
          name: Build Application (Production mode - aot enabled)
          command: npm run build:prod
      - store_artifacts:
          path: dist
          destination: dist
      - run:
          command: ./node_modules/.bin/firebase use qa
      - deploy:
          command: ./node_modules/.bin/firebase deploy --token=$FIREBASE_TOKEN
workflows:
  version: 2
  build-and-deploy:
    jobs:
      - build:
          filters:
            branches:
              only:
                - master
              ignore:
                - /feat-.*/
      - deploy_prod:
          filters:
            branches:
              ignore:
                - /.*/
            tags:
              only:
                - /v[0-9]+(\.[0-9]+){2}/
      - deploy_qa:
          filters:
            branches:
              ignore:
                - /.*/
            tags:
              only:
                - /v[0-9]+(\.[0-9]+){2}-BETA-([0-9]*)/
I understand that each job runs in a different Docker instance, so this prevents me from working in the same workspace.
Q: How can I use the same docker image for different jobs in the same workflow?
I included store_artifacts thinking it could help me, but from what I have read, artifacts are only accessible through the dashboard or the API.
Q: Am I able to recover an artifact on a job that requires a different job that stored the artifact?
I know that I am repeating myself; my goal is to have a build job required by a deploy job per environment, depending on the tag name, so that my deploy_{env} jobs are mainly the firebase commands.
workflows:
  version: 2
  build-and-deploy:
    jobs:
      - build:
          filters:
            branches:
              only:
                - master
              ignore:
                - /feat-.*/
            tags:
              only:
                - /v[0-9]+(\.[0-9]+){2}/
                - /v[0-9]+(\.[0-9]+){2}-BETA-([0-9]*)/
      - deploy_prod:
          requires:
            - build
          filters:
            branches:
              ignore:
                - /.*/
            tags:
              only:
                - /v[0-9]+(\.[0-9]+){2}/
      - deploy_qa:
          requires:
            - build
          filters:
            branches:
              ignore:
                - /.*/
            tags:
              only:
                - /v[0-9]+(\.[0-9]+){2}-BETA-([0-9]*)/
Q: What are the recommended steps (best practices) for this solution?
Q: How can I use the same docker image for different jobs in the same workflow?
There might be two methods of going about this:
1.) Docker Layer Caching (see the sketch after this list): https://circleci.com/docs/2.0/docker-layer-caching/
2.) Caching the .tar file: https://circleci.com/blog/how-to-build-a-docker-image-on-circleci-2-0/
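For option 1, a minimal sketch of enabling Docker Layer Caching in a job that builds an image (the job name and Dockerfile are hypothetical):

version: 2
jobs:
  build_image:
    docker:
      - image: circleci/node:8-browsers
    steps:
      - checkout
      - setup_remote_docker:
          docker_layer_caching: true   # reuses unchanged image layers across builds
      - run: docker build -t myapp:latest .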
Q: Am I able to recover an artifact on a job that requires a different job that stored the artifact?
The persist_to_workspace and attach_workspace keys should be helpful here: https://circleci.com/docs/2.0/workflows/#using-workspaces-to-share-data-among-jobs
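For example, here is a minimal sketch of the build job persisting its dist output and deploy_prod attaching it, adapted from the jobs in the question:

build:
  docker:
    - image: circleci/node:8-browsers
  steps:
    - checkout
    - run: npm install
    - run: npm run build:prod
    - persist_to_workspace:
        root: .
        paths:
          - dist
          - node_modules

deploy_prod:
  docker:
    - image: circleci/node:8-browsers
  steps:
    - attach_workspace:
        at: .
    - run: ./node_modules/.bin/firebase use default
    - deploy:
        command: ./node_modules/.bin/firebase deploy --token=$FIREBASE_TOKEN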
Q: What are the recommended steps (best practices) for this solution?
Not sure here! Whatever works the fastest and cleanest for you. :)
We have a gitlab-ci yaml file with duplicate parts.
test:client:
  before_script:
    - node -v
    - yarn install
  cache:
    untracked: true
    key: client
    paths:
      - node_modules/
  script:
    - npm test

build:client:
  before_script:
    - node -v
    - yarn install
  cache:
    untracked: true
    key: client
    paths:
      - node_modules/
    policy: pull
  script:
    - npm build
I would like to know whether, using the merge syntax, I can extract the common part and reuse it in these two jobs.
.node_install_common: &node_install_common
  before_script:
    - node -v
    - yarn install
  cache:
    untracked: true
    key: client
    paths:
      - node_modules/
But the real question is: at which indentation level do I have to merge the block so that policy: pull is applied to the cache section? I tried this:
test:client
  <<: *node_install_common
  script:
    - npm test

test:build
  <<: *node_install_common
    policy: pull
  script:
    - npm build
But I get an invalid YAML error. How should I indent to get the correct merge behavior?
Note that merge keys are not part of the YAML specification and therefore are not guaranteed to work. They are also specified for the obsolete YAML 1.1 version and have not been updated for the current YAML 1.2 version. We intend to explicitly remove merge keys in upcoming YAML 1.3 (and possibly provide a better alternative).
That being said: there is no merge syntax. The merge key << must be placed like a normal key in a mapping. This means that the key must have the same indentation as the other keys. So this would be valid:
test:client:
  <<: *node_install_common
  script:
    - npm test
While this is not:
test:build:
  <<: *node_install_common
    policy: pull
  script:
    - npm build
Note that compared to your code, I added : to the test:client and test:build lines.
Now merge is specified to place all key-value pairs of the referenced mapping into the current mapping if they do not already exist in it. This means that you cannot, as you want to, replace values deeper in the subtree – merge does not support partial replacement of subtrees. However, you can use merge multiple times:
.node_install_common: &node_install_common
  before_script:
    - node -v
    - yarn install
  cache: &cache_common
    untracked: true
    key: client
    paths:
      - node_modules/

test:client:
  <<: *node_install_common
  script:
    - npm test

test:build:
  <<: *node_install_common
  cache:                # define an own cache mapping instead of letting merge place
                        # its version here (which could not be modified)
    <<: *cache_common   # load the common cache content
    policy: pull        # ... and place your additional key-value pair
  script:
    - npm build
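For reference, once the anchors and both merge keys resolve, test:build is equivalent to writing:

test:build:
  before_script:
    - node -v
    - yarn install
  cache:
    untracked: true
    key: client
    paths:
      - node_modules/
    policy: pull
  script:
    - npm build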