Why is every section in this file indented by 2, except the contents of the step?
image: atlassian/default-image:2
pipelines:
  default:
    - step:
        deployment: production
        script:
          - git submodule update --recursive --init
          - zip -r $FILENAME . -x bitbucket-pipelines.yml *.git*
          - pipe: atlassian/bitbucket-upload-file:0.1.6
            variables:
              BITBUCKET_USERNAME: $BITBUCKET_USERNAME
              BITBUCKET_APP_PASSWORD: $BITBUCKET_APP_PASSWORD
              FILENAME: $FILENAME
And why does it give an error if changed to two?
https://bitbucket-pipelines.prod.public.atl-paas.net/validator
This topic seems to say it is because YAML does not consider `-` to be part of the indentation? But the lines after `pipe:` use the same structure with only two?
https://community.atlassian.com/t5/Bitbucket-questions/Is-it-intentional-that-bitbucket-pipelinese-yml-indentation/qaq-p/582084
You will usually see two things in YAML
Sequences — in other programming languages, these are commonly referred to as arrays, lists, ...:
- apple
- banana
- pear
Mappings — in other programming languages, these are commonly referred to as objects, hashes, dictionaries, associative arrays, ...:
question: how do I foobar?
body: Can you help?
tag: yaml
So if you want to store a mapping of mappings you would do:
pier1:
  boat: yes
  in_activity: yes
  comments: needs some painting in 2023
  ## The mapping above this comment refers to `pier1`,
  ## as it is indented one level below the key `pier1`
pier2:
  boat: no
  in_activity: no
  comments: currently inactive, needs urgent repair
  ## The mapping above this comment refers to `pier2`,
  ## as it is indented one level below the key `pier2`
Whereas, if you store a list in a mapping, you can go without the indent; indeed:
fruits:
- apple
- banana
- pear
Is strictly equal to
fruits:
  - apple
  - banana
  - pear
But, if you try to indent the contents of the first step of your pipeline by only two, like so:
pipelines:
  default:
    - step:
      deployment: production
You end up with valid YAML, but a YAML that does not have the same meaning anymore.
Your original YAML means: I have a default pipeline with a list of actions, the first action being a step that consists of a deployment to production.
The incorrectly indented YAML means: I have a default pipeline with a list of actions, the first action having two properties: the first property being an empty step, and the second property saying it is a deployment to production.
So, here, the deployment key that was a property of the step mapping has become a property of the first element of the list default!
To indent all this as you would like, you will have to go:
pipelines:
  default:
    - step:
        deployment: production
        ## ^-- This property is now further indented too
So, you end up with the YAML:
image: atlassian/default-image:2
pipelines:
  default:
    - step:
        deployment: production
        script:
          - git submodule update --recursive --init
          - zip -r $FILENAME . -x bitbucket-pipelines.yml *.git*
          - pipe: atlassian/bitbucket-upload-file:0.1.6
            variables:
              BITBUCKET_USERNAME: $BITBUCKET_USERNAME
              BITBUCKET_APP_PASSWORD: $BITBUCKET_APP_PASSWORD
              FILENAME: $FILENAME
Related
I think the problem speaks for itself. I have multiple included hidden jobs in my .gitlab-ci.yml, and I would like to use the `variables` section of all of them.
I thought I would find what I need here https://docs.gitlab.com/ee/ci/yaml/yaml_optimization.html but:
Anchors do not work across included files
!reference cannot be used multiple times in the `variables` section
extends does not merge the content but takes the last one.
If anyone has an idea, here is the behavior I am trying to achieve:
hidden_1.yml
.hidden_1:
  variables:
    toto1: toto1
hidden_2.yml
.hidden_2:
  variables:
    toto2: toto2
hidden_3.yml
.hidden_3:
  variables:
    toto3: toto3
result.yml
include:
  - 'hidden_3'
  - 'hidden_2'
  - 'hidden_1'

Job_test:
  stage: test
  variables:
    toto1: toto1
    toto2: toto2
    toto3: toto3
  script: echo '$toto1, $toto2, $toto3'
I am having trouble with dynamically passing one of two file based variables to a job.
I have defined two file variables in my CI/CD settings that contain my helm values for deployments to development and production clusters. They are typical YAML syntax; their exact content does not really matter:
baz:
  foo: bar
I have also defined two jobs for the deployment that depend on a general deployment template .deploy.
.deploy:
  variables:
    DEPLOYMENT_NAME: ""
    HELM_CHART_NAME: ""
    HELM_VALUES: ""
  before_script:
    - kubectl ...
  script:
    - helm upgrade $DEPLOYMENT_NAME charts/$HELM_CHART_NAME
      --install
      --atomic
      --debug
      -f $HELM_VALUES
The specialization happens in two jobs, one for dev and one for prod.
deploy:dev:
  extends: .deploy
  variables:
    DEPLOYMENT_NAME: my-deployment
    HELM_CHART_NAME: my-dev-chart
    HELM_VALUES: $DEV_HELM_VALUES # from CI/CD variables

deploy:prod:
  extends: .deploy
  variables:
    DEPLOYMENT_NAME: my-deployment
    HELM_CHART_NAME: my-prod-chart
    HELM_VALUES: $PROD_HELM_VALUES # from CI/CD variables
The command that fails is the one in the script tag of .deploy. If I pass in the $DEV_HELM_VALUES or $PROD_HELM_VALUES, the deployment is triggered. However if I put in the $HELM_VALUES as described above, the command fails (Error: "helm upgrade" requires 2 arguments, which is very misleading).
The problem is that the $HELM_VALUES that are accessed in the command are already the resolved content of the file, whereas passing the $DEV_HELM_VALUES or the $PROD_HELM_VALUES directly works with the -f syntax.
This can be seen using echo in the job's output:
echo "$DEV_HELM_VALUES"
/builds/my-company/my-deployment.tmp/DEV_HELM_VALUES
echo "$HELM_VALUES"
baz:
  foo: bar
How can I make sure the $HELM_VALUES only point to one of the files, and do not contain the files' content?
(Side note: This is a follow-up on this https://sourceforge.net/p/ruamel-yaml/tickets/313/)
I'm building a GitLab CI pipeline by defining a .gitlab-ci.yml file, see https://docs.gitlab.com/ee/ci/yaml/.
As my CI consists of several very similar build steps, I'm using YAML-Anchors quite heavily. For example to define common cache and before-scripts.
I saw that "the correct way" of merging several YAML anchors, according to the spec, is using:
before-script: &before-script
  ...
cache: &cache
  ...

ci-step:
  image: ABC
  <<: [*before-script, *cache]
  script: ...
However, using this also works fine with GitLab CI and IMHO is much nicer to read:
...
ci-step:
  image: abc
  <<: *before-script
  script: ...
  <<: *cache
This also makes it possible to put different merge keys at different positions.
All is fine so far, because it is working in GitLab CI.
Now we are using https://github.com/pre-commit/pre-commit-hooks to validate YAML files in our repository. pre-commit-hooks uses ruamel.yaml internally for YAML validation.
As a result, the pre-commit hook fails with the following error message:
while constructing a mapping
  in ".gitlab-ci.yml", line xx, column y
found duplicate key "<<"
  in ".gitlab-ci.yml", line zz, column y
How can I prevent this exception from happening in the ruamel-yaml library when the key equals `<<`?
It would also be possible to update pre-commit-hooks to set allow_duplicate_keys = True, see yaml-duplicate-keys.
But this would also allow other duplicate keys, which is not perfect.
The normal way to prevent duplicate keys from throwing an error is by setting .allow_duplicate_keys as
you indicated. If you set that, any values for duplicate keys 'later' in the mapping overwrite previous values.
In PyYAML, from which ruamel.yaml was derived, this is the side effect of a bug in PyYAML.
However, duplicating << is IMO more problematic, as
<<: *a
<<: *b
is undefined and might be expected to work as if the YAML document contained:
<<: [*a, *b]
or contained:
<<: [*b, *a]
or only:
<<: *b
or:
<<: *a
And depending on what key-value pairs a and b refer to, these have all different outcomes for the mapping in which the merge is applied.
To prevent the error from being thrown on merge keys only, you need to adapt the loader, but make sure you don't try to use or dump the result, garbage in means garbage out.
import ruamel.yaml

yaml_str = """\
before-script: &before-script
  x: 1
cache: &cache
  y: 2
ci-step:
  image: ABC
  <<: *before-script
  script: DEF
  <<: *cache
"""

class MyConstructor(ruamel.yaml.SafeConstructor):
    def flatten_mapping(self, node):
        # instead of merging, silently drop every merge key
        index = 0
        while index < len(node.value):
            key_node, value_node = node.value[index]
            if key_node.tag == 'tag:yaml.org,2002:merge':
                del node.value[index]
            else:
                index += 1

yaml = ruamel.yaml.YAML(typ='safe')
yaml.Constructor = MyConstructor
data = yaml.load(yaml_str)
print(list(data['ci-step'].keys()))
which gives:
['image', 'script']
You should complain to GitLab that it allows invalid YAML, which is especially bad because it has no defined loading behaviour. And if they insist on continuing to support that kind of invalid YAML, they should tell you what it means for the mapping in which this happens.
I'm trying to download multiple artifacts onto different servers (like web, db) using environments. Currently I have added the DownloadPipelineArtifact@2 task in a file and use a template to add that task to azure-pipelines.yml. As I have multiple artifacts, I'm trying to use a for loop, and that is where I'm getting issues.
#azure-pipelines.yml
- template: artifacts-download.yml
  parameters:
    pipeline:
      - pipeline1
      - pipeline2
      - pipeline3
    path:
      - path1
      - path2
      - path3
I need to write a loop in YAML so that it downloads the pipeline1 artifacts to path1, and so on. Can someone please help?
Object-type parameters are your friend. They are incredibly powerful. As qBasicBoy answered, you'll want to make sure that you group the multiple properties together. If you're finding that you have a high number of properties per object, though, you can do a multi-line equivalent.
The following is an equivalent parameter structure to what qBasicBoy posted:
parameters:
  - name: pipelines
    type: object
    default:
      - Name: pipeline1
        Path: path1
      - Name: pipeline2
        Path: path2
      - Name: pipeline3
        Path: path3
An example where you can stack many properties to a single object is as follows:
parameters:
  - name: big_honkin_object
    type: object
    default:
      config:
        - appA: this
          appB: is
          appC: a
          appD: really
          appE: long
          appF: set
          appG: of
          appH: properties
        - appA: and
          appB: here
          appC: we
          appD: go
          appE: again
          appF: making
          appG: more
          appH: properties
      settings:
        startuptype: service
        recovery: no
You can, in essence, create an entire dumping ground for everything that you want to do by sticking it in one single object structure and properly segmenting everything. Sure, you could have had "startuptype" and "recovery" as separate string parameters with defaults of "service" and "no" respectively, but this way, we can pass a single large parameter from a high level pipeline to a called template, rather than passing a huge list of parameters AND defining said parameters in the template yaml scripts (remember, that's necessary!).
If you then want to access JUST a single setting, you can do something along the lines of:
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      # Write your PowerShell commands here.
      Write-Host "Apps start as a "${{ parameters.big_honkin_object.settings.startuptype }}
      Write-Host "Do the applications recover? "${{ parameters.big_honkin_object.settings.recovery }}
This will give you the following output:
Apps start as a service
Do the applications recover? no
YAML and Azure Pipelines are incredibly powerful tools. I can't recommend enough going through the entire contents of learn.microsoft.com on the subject. You'll spend a couple hours there, but you'll come out the other end with an incredible knowledge of how these pipelines can be tailored to do everything you could ever NOT want to do yourself!
Notable links that helped me a TON (only learned this a couple months ago):
How to work with the YAML language in Pipelines
https://learn.microsoft.com/en-us/azure/devops/pipelines/yaml-schema?view=azure-devops&tabs=schema%2Cparameter-schema
How to compose expressions (also contains useful functions like convertToJSON for your object parameters!)
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/expressions?view=azure-devops
How to create variables (separate from parameters, still useful)
https://learn.microsoft.com/en-us/azure/devops/pipelines/build/variables?view=azure-devops&tabs=yaml
SLEEPER ALERT!!! Templates are HUGELY helpful!!!
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/templates?view=azure-devops
You could use an object with multiple properties
parameters:
  - name: pipelines
    type: object
    default:
      - { Name: pipeline1, Path: path1 }
      - { Name: pipeline2, Path: path2 }
      - { Name: pipeline3, Path: path3 }
steps:
  - ${{ each pipeline in parameters.pipelines }}:
      # use pipeline.Name or pipeline.Path
I'm trying to use variables in my gitlab-ci.yml file. A variable is passed as a parameter to a batch file that will either only build, or build and deploy, based on the parameter passed in. I've tried many different ways to pass my variable into the batch file, but each time the variable is treated more like a static string instead.
I've read GitLab's docs on variables but can't seem to make it work.
stages:
  - build

variables:
  BUILD_PUBLISH_CONFIG_FALSE: 0
  BUILD_PUBLISH_CONFIG_TRUE: 1

# BUILD ===============================
build: &build
  stage: build
  tags:
    - webdev
  script:
    - ./build.bat %BUILD_CONFIG%
build:branch:
  <<: *build
  variables:
    BUILD_CONFIG: $BUILD_PUBLISH_CONFIG_FALSE
  only:
    - /^(feature|hotfix|release)\/.+$/

build:branch:
  <<: *build
  variables:
    BUILD_CONFIG: $BUILD_PUBLISH_CONFIG_TRUE
  only:
    - /^(stage)\/.+$/

build:branch:
  <<: *build
  variables:
    BUILD_CONFIG: $BUILD_PUBLISH_CONFIG_TRUE
  only:
    - /^(master)\/.+$/
When watching gitlab's ci script execute, I expect ./build.bat 0, or ./build.bat 1.
Each time it prints out as ./build.bat %BUILD_CONFIG%
When you place variables inside a job, that means you want to create a new variable (and that's not the correct way to do it). Do you want to output the content of a variable set at the top? Can you maybe add it to an echo or something like that? I didn't get what you are trying to achieve.
https://docs.gitlab.com/ee/ci/variables/#gitlab-ciyml-defined-variables