Suppose I have:
base_array:
- 1
- 2
how could I do something like:
my_array: << base_array
- 3
so that my_array was [1,2,3]
Update: I should specify that I want the extending to occur inside the YAML itself.
Since the already-mentioned issue #35 is still open, the merge key << doesn't help you: it only merges/inserts referenced keys into a mapping (see the YAML docs on the merge key). Instead, you should work with sequences and use an anchor (&) and an alias (*).
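For contrast, this is all the merge key can do, sketched on mappings (the key names here are made up):

base: &base
  a: 1
  b: 2
merged:
  <<: *base
  c: 3
# merged is {a: 1, b: 2, c: 3}; there is no << equivalent for sequences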
So your example should look like this:
base_list: &base
- 1
- 2
extended: &ext
- 3
extended_list:
  [*base, *ext]
This will give output like the following (shown as JSON):
{
  "base_list": [
    1,
    2
  ],
  "extended": [
    3
  ],
  "extended_list": [
    [
      1,
      2
    ],
    [
      3
    ]
  ]
}
Although this is not exactly what you expected, your parsing/loading environment may be able to flatten the nested list into a simple one.
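For example, if your toolchain includes mikefarah/yq (v4.11 or later, which provides a flatten operator), flattening could be done as a post-processing step; a sketch, assuming the document is saved as file.yaml:

yq '.extended_list | flatten' file.yaml
# - 1
# - 2
# - 3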
You can always test YAML online, for example use:
http://ben-kiki.org/ypaste
Online YAML Parser
I needed to do the same thing, but running on an Azure DevOps pipeline. In particular, I had to update a stage's dependencies dynamically. Here is how I did it:
dependents: [Stage_A, Stage_B]
otherDependents: [Stage_C] # This needed to be added by policy to the pipeline's execution
dependsOn:
- ${{ each dependent in dependents }}:
  - ${{ dependent }}
- ${{ each dependent in otherDependents }}:
  - ${{ dependent }}
Doing so resulted in the required setup:
dependents: [Stage_A, Stage_B]
otherDependents: [Stage_C] # This needed to be added by policy to the pipeline's execution
dependsOn:
- Stage_A
- Stage_B
- Stage_C
I say dynamically because the variable dependents came from a template to which I had to append Stage_C.
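For reference, in a full pipeline these lists would typically be template parameters rather than plain keys; here is a minimal sketch (the stage name Stage_D, the job, and the parameter defaults are assumptions, not from the original pipeline):

parameters:
- name: dependents
  type: object
  default: [Stage_A, Stage_B]
- name: otherDependents
  type: object
  default: [Stage_C]

stages:
- stage: Stage_D
  # expands to dependsOn: [Stage_A, Stage_B, Stage_C] at compile time
  dependsOn:
  - ${{ each dependent in parameters.dependents }}:
    - ${{ dependent }}
  - ${{ each dependent in parameters.otherDependents }}:
    - ${{ dependent }}
  jobs:
  - job: Run
    steps:
    - script: echo "all dependencies satisfied"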
I have a strange problem that I can't seem to get my head around. I am trying to define some variables for use as part of the job that will deploy bicep files via Azure CLI and execute PowerShell tasks.
I get this validation error when I try and execute the pipeline: While parsing a block mapping, did not find expected key
The line that it refers to is: - name: managementResourceDNSPrivateResolverName
From the research I have done on this problem, it sounds like an indentation problem, but on the face of it the file looks fine.
jobs:
- job: 'Deploy_Management_Resources'
  pool:
    vmImage: ${{ parameters.buildAgent }}
  variables:
  - name: managementResourceDNSPrivateResolverName
    value: 'acme-$[ lower(parameters['environmentCode']) ]-$[ lower(variables['resourceLocationShort']) ]-private-dns-resolver'
  - name: managementResourceGroupManagement
    value: 'acme-infrastructure-rg-management'
  - name: managementResourceRouteTableName
    value: 'acme-$[ lower(variables['subscriptionCode']) ]-$[ lower(variables['resourceLocationShort']) ]-route-table'
  - name: managementResourceVirtualNetworkName
    value: 'acme-$[ lower(variables['subscriptionCode']) ]-$[ lower(variables['resourceLocationShort']) ]-vnet-internal-mng'
Thanks!
The error message ...parsing a block mapping, did not find expected key is usually a side effect of malformed YAML. You'll see it often with variables if you have mixed the array and property-element formats:
variables: # an array of objects
# variable group reference object
- group: myvariablegroup
# variable template reference object
- template: my-variables.yml
# variable object
- name: myVariable
  value: 'value1'
# variable shorthand syntax
myVariable: 'value1' # this fails because it's a property instead of an array element
While the sample you've provided doesn't appear to be malformed, I am curious about the use of $[ ], which is runtime-expression syntax. The expression $[ lower(parameters['environmentCode']) ] refers to parameters, which are only available at compile time.
Change:
$[ lower(parameters['environmentCode']) ] to ${{ lower(parameters.environmentCode) }}
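Applied to the first variable from the question, the compile-time form would look like this (a sketch; note that ${{ variables.resourceLocationShort }} only resolves if that variable is defined before template expansion):

variables:
- name: managementResourceDNSPrivateResolverName
  value: 'acme-${{ lower(parameters.environmentCode) }}-${{ lower(variables.resourceLocationShort) }}-private-dns-resolver'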
Hello, I am currently trying to find a tool (I'm pretty sure yq doesn't do the magic for me) to remove some content from a YAML file. My file looks as follows:
paths:
  /entity/{id}:
    get:
      tags: a
      summary: b
      ...
So it's a typical OpenAPI specification. I would like to add a magic property, for example env: prod, so that some endpoints look as follows:
paths:
  /entity/{id}:
    get:
      env: prod
      tags: a
      summary: b
      ...
Is there a solution to remove all endpoints that contain env: prod?
I am also free to change the concept; if there were some transformation with an if/else, I would be very happy.
Using kislyuk/yq:
yq -y '.[][][] |= ({env: "prod"} + .)'
Using mikefarah/yq:
yq '.[][][] |= ({"env": "prod"} + .)'
Both produce:
paths:
  /entity/{id}:
    get:
      env: prod
      tags: a
      summary: b
This adds env: prod to every object that is three levels deep. If you want the criteria to be more sophisticated, you will have to adapt .[][][] accordingly.
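For instance, to restrict the update to operation objects under paths rather than any third-level object, something like this sketch (mikefarah/yq syntax) should work:

yq '.paths[][] |= ({"env": "prod"} + .)'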
yq 'del(.. | select(.env == "prod"))' file.yaml
Explanation:
You want to delete all the nodes that have a child env property set to prod:
- .. recursively matches all nodes
- select(.env == "prod") selects the ones that have an env property equal to "prod"
- del(.. | select(.env == "prod")) deletes those nodes :)
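Applied to the sample above, that should leave something like:

paths:
  /entity/{id}: {}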
Disclaimer: I wrote mikefarah/yq
I need Rundeck to parse an option based on the value selected in another option. I have an option ${option.env} and other options like ${option.id_dev} and ${option.id_qa}.
I want to achieve something like the following for extra-vars, so that the env option's value determines which id (dev or qa) to read:
ansible-playbook /build.yml -e id=${option.id_${option.env.value}}
Is this possible, or could I pass extra-vars as a conditional case based on the env value? I'm using Rundeck 3.0.x.
Update:
To give clearer info: if I select 'dev' for the option 'env', I need to use its value like ${option.id_${option.env.value}}, so that it translates to ${option.id_dev} to get the other option on the command line.
You can use cascading remote options in a tricky way; the explanation is at the end of this answer.
I made a little example to see how to achieve this:
The branches.json file (referenced on the job options as "branches"):
[
{"name":"branch1", "value":"branch1.json"},
{"name":"branch2", "value":"branch2.json"},
{"name":"branch3", "value":"branch3.json"}
]
The branch1.json file provides the level2 values when the first branches value is selected:
[
{"name":"v1", "value":"1"},
{"name":"v2", "value":"2"},
{"name":"v3", "value":"3"}
]
The branch2.json file provides the level2 values when the second branches value is selected:
[
{"name":"v4", "value":"4"},
{"name":"v5", "value":"5"},
{"name":"v6", "value":"6"}
]
The branch3.json file provides the level2 values when the third branches value is selected:
[
{"name":"v7", "value":"7"},
{"name":"v8", "value":"8"},
{"name":"v9", "value":"9"}
]
Full job definition to test:
- defaultTab: summary
  description: ''
  executionEnabled: true
  id: ed0d84fe-135b-41ee-95b6-6daeaa94894b
  loglevel: INFO
  name: CascadeTEST
  nodeFilterEditable: false
  options:
  - enforced: true
    name: branches
    valuesUrl: file:/Users/myuser/branches.json
  - enforced: true
    name: level2
    valuesUrl: file:/Users/myuser/${option.branches.value}
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - fileExtension: .sh
      interpreterArgsQuoted: false
      script: |+
        #!/bin/sh
        # getting the options
        first=#option.branches#
        second=#option.level2#
        # this is just an example
        echo "this is an example: ${first%%.*}.$second"
      scriptInterpreter: /bin/bash
    keepgoing: false
    strategy: node-first
  uuid: ed0d84fe-135b-41ee-95b6-6daeaa94894b
Explanation
As you can see, the value of the first option is always a file name; taking it directly would give you a string like "branch1.json", so the trick is to cut the file extension and keep only the base name as the value, which is what ${first%%.*} does in the script.
So, with the first selection value and the second one, you can do anything later in a script, for example, launch the ansible-playbook command in the inline script.
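As a sketch of that last step (the playbook path is taken from the question; the option names come from the example job), the inline script could end with:

#!/bin/sh
branch=#option.branches#
id=#option.level2#
# strip the .json extension to recover the branch name, then pass both values on
ansible-playbook /build.yml -e "env=${branch%%.*}" -e "id=$id"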
UPDATED ANSWER:
The closest way is just to use two options and concatenate them in the command step; as you say, it isn't possible to reference one option's value inside another. Like this:
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: 4e8df698-c7ca-4a10-9f70-bc68c1007a10
  loglevel: INFO
  name: NewJob
  nodeFilterEditable: false
  options:
  - enforced: true
    name: env
    value: qa
    values:
    - qa
    - prod
    - stage
    valuesListDelimiter: ','
  - enforced: true
    name: id_dev
    value: '1'
    values:
    - '1'
    - '2'
    - '3'
    valuesListDelimiter: ','
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - exec: echo ${option.env}_${option.id_dev}
    keepgoing: false
    strategy: node-first
  uuid: 4e8df698-c7ca-4a10-9f70-bc68c1007a10
(the cascade option is another approach)
When I run puppet agent --test I get no error output, but the user is not created.
My Puppet hiera.yaml configuration is:
---
version: 5
  datadir: "/etc/puppetlabs/code/environments"
  data_hash: yaml_data
hierarchy:
  - name: "Per-node data (yaml version)"
    path: "%{::environment}/nodes/%{::trusted.certname}.yaml"
  - name: "Common YAML hierarchy levels"
    paths:
      - "defaults/common.yaml"
      - "defaults/users.yaml"
users.yaml is:
accounts::user:
  joed:
    locked: false
    comment: System Operator
    uid: '1700'
    gid: '1700'
    groups:
      - admin
      - sudonopw
    sshkeys:
      - ssh-rsa ...Hw== sysop+moduledevkey@puppetlabs.com
I use this module
Nothing in Hiera data itself causes anything to be applied to target nodes. Some kind of declaration is required in a manifest somewhere or in the output of an external node classifier script. Moreover, the puppetlabs/accounts module provides only defined types, not classes. You can store defined-type data in Hiera and read it back, but automated parameter binding via Hiera applies only to classes, not defined types.
In short, then, no user is created (and no error is reported) because no relevant resources are declared into the target node's catalog. You haven't given Puppet anything to do.
If you want to apply the stored user data presented to your nodes, you would want something along these lines:
$user_data = lookup('accounts::user', Hash[String,Hash], 'hash', {})
$user_data.each |$user,$props| {
  accounts::user { $user: * => $props }
}
That would go into the node block matched to your target node, or, better, into a class that is declared by that node block or an equivalent. It's fairly complicated for so few lines, but in brief (a placement sketch follows this list):
- the lookup function looks up the key 'accounts::user' in your Hiera data,
- performing a hash merge of results appearing at different levels of the hierarchy,
- expecting the result to be a hash with string keys and hash values,
- and defaulting to an empty hash if no results are found;
- the mappings in the result hash are iterated, and for each one an instance of the accounts::user defined type is declared,
- using the (outer) hash key as the user name,
- and the value associated with that key as a mapping from parameter names to parameter values.
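A minimal sketch of where that could live (the node name is an assumption):

# site.pp
node 'target.example.com' {
  $user_data = lookup('accounts::user', Hash[String,Hash], 'hash', {})
  $user_data.each |$user, $props| {
    accounts::user { $user: * => $props }
  }
}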
There are a few problems here.
You are missing a line in your hiera.yaml, namely the defaults key. It should be:
---
version: 5
defaults: ## add this line
  datadir: "/etc/puppetlabs/code/environments"
  data_hash: yaml_data
hierarchy:
  - name: "Per-node data (yaml version)"
    path: "%{::environment}/nodes/%{::trusted.certname}.yaml"
  - name: "Common YAML hierarchy levels"
    paths:
      - "defaults/common.yaml"
      - "defaults/users.yaml"
I detected that using the puppet-syntax gem (included if you use PDK, which is recommended):
▶ bundle exec rake validate
Syntax OK
---> syntax:manifests
---> syntax:templates
---> syntax:hiera:yaml
ERROR: Failed to parse hiera.yaml: (hiera.yaml): mapping values are not allowed in this context at line 3 column 10
Also, in addition to what John mentioned, the simplest class to read in your data would be this:
class test (Hash[String,Hash] $users) {
  create_resources(accounts::user, $users)
}
Or if you want to avoid using create_resources*:
class test (Hash[String,Hash] $users) {
  $users.each |$user,$props| {
    accounts::user { $user: * => $props }
  }
}
Note that I have relied on the Automatic Parameter Lookup feature for that. See the link below.
Then, in your Hiera data, you would have a key named test::users to correspond (class name "test", key name "users"):
---
test::users: ## Note that this line changed.
  joed:
    locked: false
    comment: System Operator
    uid: '1700'
    gid: '1700'
    groups:
      - admin
      - sudonopw
    sshkeys:
      - ssh-rsa ...Hw== sysop+moduledevkey@puppetlabs.com
Use of automatic parameter lookup is generally the more idiomatic way of writing Puppet code compared to calling the lookup function explicitly.
For more info:
PDK
Automatic Parameter Lookup
create_resources
(*Note that create_resources is "controversial". Many in the Puppet community prefer not to use it.)
I'm having some issues with passing a hash from hiera through to a resource creation.
vhosts:
  project_1:
    name: project_1
    project_name: project_1
  project_2:
    name: project_2
    project_name: project_2
$vhosts = hiera('vhosts', [])
create_resources(project_vhosts::vhosts, $vhosts)
Ignore the hidden project names :) but you get the gist. My resource looks like this:
define project_vhosts::vhosts(
  $vhosts = []
){
  notice($vhosts)
}
I get these errors after my Puppet run:
Error: Invalid parameter project_name on project_vhosts::Vhosts[project_1] on node *
Wrapped exception:
Invalid parameter project_name
Error: Invalid parameter project_name on project_vhosts::Vhosts[project_1] on *
I get that it wants me to implement the parameters directly into the class. However what I really want is the hash available as a whole to me in the resource. What am I doing wrong here?
First off, please don't use [] to denote an empty hash. It's not. [] is the empty array, and {} is the empty hash.
To do what you want, your data needs extra layers of hashing, so that create_resources sees a resource title and the define's vhosts parameter (the title, my_vhosts here, is arbitrary):

vhost_data:
  my_vhosts:
    vhosts:
      project_1:
        name: project_1
        project_name: project_1
      project_2:
        name: project_2
        project_name: project_2

Then

$data = hiera('vhost_data', {})
create_resources(project_vhosts::vhosts, $data)

This declares a single project_vhosts::vhosts resource titled my_vhosts and passes the whole inner hash to its vhosts parameter.
Of course, there is yet a simpler way to do all of that with your data.
project_vhosts::vhosts { 'meaningless-resource-title':
  vhosts => hiera('vhosts', {}),
}
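On Puppet 4.9 and later, the same thing can be written with lookup() instead of the deprecated hiera() function; a sketch:

project_vhosts::vhosts { 'meaningless-resource-title':
  vhosts => lookup('vhosts', Hash, 'hash', {}),
}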