Is there a way to filter yaml files used in a helmfile?
I have a values.yaml:
component-a:
  key1: value
  key2: value
component-b:
  key1: value
  key3: value
In the helmfile I load the YAML and want to filter it per release in the component template:
templates:
  component: &component
    chart: oci://myregistry/{{`{{ .Release.Name }}`}}
    values:
      - values.yaml | TODO filter for .Release.Name
releases:
  - name: component-a
    <<: *component
  - name: component-b
    <<: *component
I would like to have the values for both components in one file and only the ones for the release name filtered in the component template.
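One possible direction is to keep both components in one file and select the release's sub-tree in a values template. This is an untested sketch: it assumes helmfile renders values files ending in .gotmpl with .Release in scope, and that its readFile, fromYaml, get and toYaml template functions behave as shown (the file name values.yaml.gotmpl is my own choice):

# values.yaml.gotmpl - emit only the sub-map keyed by the release name
{{ readFile "values.yaml" | fromYaml | get .Release.Name | toYaml }}

The template block would then reference the values template instead of the raw file:

templates:
  component: &component
    chart: oci://myregistry/{{`{{ .Release.Name }}`}}
    values:
      - values.yaml.gotmpl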
I'm using the Ansible os_project_facts module to gather the admin project id of OpenStack. This is the ansible_facts log:
ansible_facts:
  openstack_projects:
  - description: Bootstrap project for initializing the cloud.
    domain_id: default
    enabled: true
    id: <PROJECT_ID>
    is_domain: false
    is_enabled: true
    location:
      cloud: envvars
      project:
        domain_id: default
        domain_name: null
        id: default
        name: null
      region_name: null
      zone: null
    name: admin
    options: {}
    parent_id: default
    properties:
      options: {}
      tags: []
    tags: []
Apparently this is not a dictionary, so I can't get openstack_projects.id. How can I retrieve PROJECT_ID and use it in other tasks?
Since the openstack_projects fact contains a single list element holding a dictionary, you can use array indexing to get the id, i.e. openstack_projects[0]['id'].
You can use it directly, or use something like set_fact:
- name: get the project id
  set_fact:
    project_id: "{{ openstack_projects[0]['id'] }}"
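You can then reference it in any later task, for example:

- name: use the project id
  debug:
    msg: "Admin project id is {{ project_id }}"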
This is my first time working with YAML, and I am running into an issue: if I want to include a variable group (i.e., the signing certificate password) along with local pipeline variables, it seems I cannot use the simplified naming convention where the variable's name and value are both defined on the same line.
For example, what I want is something similar to this (I made sure the spacing is correct in the YAML):
variables:
  solutionName: Foo.sln
  projectName: Bar
  buildPlatform: x64
  buildConfiguration: development
  major: '1'
  minor: '0'
  build: '0'
  revision: $[counter('rev', 0)]
  vhdxSize: '200'
  - group: legacy-pipeline
  signingCertPwd: $[variables.SigningCertificatePassword]
But this results in a parsing error. As a result, I have to use the more bloated-looking format:
variables:
- name: solutionName
  value: Foo.sln
- name: projectName
  value: Bar
- name: buildPlatform
  value: x64
- name: buildConfiguration
  value: development
- name: major
  value: '1'
- name: minor
  value: '0'
- name: build
  value: '0'
- name: revision
  value: $[counter('rev', 0)]
- name: vhdxSize
  value: '200'
- group: legacy-pipeline
- name: signingCertPwd
  value: $[variables.SigningCertificatePassword]
It seems the simplified naming format is only available for local variables; as soon as I add a variable group, the simplified format goes away. I have searched the web for a solution but cannot find anything useful. Is what I am trying to achieve possible? If yes, how can it be done?
Unfortunately mixing the styles is not possible, but you can work around that using templates:
# pipeline.yaml
stages:
- stage: declare_vars
  variables:
  - template: templates/vars.yaml
  - group: my-group
  - template: templates/inline-vars.yaml
    parameters:
      vars:
        inline_var: yes!
        and_more: why not
  jobs:
  - job:
    steps:
    - pwsh: |
        echo 'foo=$(foo)'
        echo 'bar=$(bar)'
        echo 'var1=$(var1)'
        echo 'inline_var=$(inline_var)'
# templates/vars.yaml
variables:
  foo: bar
  bar: something else
# templates/inline-vars.yaml
parameters:
- name: vars
  type: object
  default: {}
variables:
  ${{ each var in parameters.vars }}:
    ${{ var.key }}: ${{ var.value }}
templates/vars.yaml simply moves the variables to another file.
templates/inline-vars.yaml lets you define inline variables using the denser syntax together with referencing groups, at the cost of the additional ceremony of writing template:, parameters:, and vars:.
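For the parameters passed in pipeline.yaml above, the each loop in templates/inline-vars.yaml should expand to roughly:

variables:
  inline_var: yes!
  and_more: why not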
ruamel.yaml==0.15.37
Python 3.6.2 :: Continuum Analytics, Inc.
Current code:
from ruamel.yaml import YAML
import sys
yaml = YAML()
kube_context = yaml.load('''
apiVersion: v1
clusters: []
contexts: []
current-context: ''
kind: Config
preferences: {}
users: []
''')
kube_context['users'].append({'name': '{username}/{cluster}'.format(username='test', cluster='test'), 'user': {'token': 'test'}})
kube_context['clusters'].append({'name': 'test', 'cluster': {'server': 'URL:443'}})
kube_context['contexts'].append({'name': 'test', 'context': {'user': 'test', 'cluster': 'test'}})
yaml.dump(kube_context, sys.stdout)
My yaml.dump() is producing output that keeps the appended lists and dicts in flow style, instead of being fully expanded.
Current output:
apiVersion: v1
clusters: [{name: test, cluster: {server: URL:443}}]
contexts: [{name: test, context: {user: test, cluster: test}}]
current-context: ''
kind: Config
preferences: {}
users: [{name: test/test, user: {token: test}}]
What do I need to do in order to have yaml.dump() output fully expanded?
Expected output:
apiVersion: v1
clusters:
- name: test
  cluster:
    server: URL:443
contexts:
- name: test
  context:
    user: test
    cluster: test
current-context: ''
kind: Config
preferences: {}
users:
- name: test/test
  user:
    token: test
ruamel.yaml, when using the default YAML() or YAML(typ='rt'), will preserve the flow or block style of sequences and mappings. There is no way to make a block-style empty sequence or empty mapping, so your [] and {} are tagged as flow style when loaded.
Flow style can only contain flow style (whereas block style can contain block style or flow style) (YAML 1.2 spec 8.2.3):
YAML allows flow nodes to be embedded inside block collections (but not vice-versa).
Because of that, the dict/mapping data that you insert in the (flow-style) list/sequence will also be represented as flow-style.
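For example, a flow mapping is fine as an item of a block sequence, but there is no way to write a block collection inside [...] or {...}:

users:           # a block sequence...
- {name: test}   # ...containing a flow mapping: valid; the reverse is not representable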
If you want everything to be block style (what you call "expanded" mode), you can explicitly set that by calling the .set_block_style() method on the .fa attribute (which is only available on the collections, hence the try/except):
from ruamel.yaml import YAML
import sys
yaml = YAML()
kube_context = yaml.load('''
apiVersion: v1
clusters: []
contexts: []
current-context: ''
kind: Config
preferences: {}
users: []
''')
kube_context['users'].append({'name': '{username}/{cluster}'.format(username='test', cluster='test'), 'user': {'token': 'test'}})
kube_context['clusters'].append({'name': 'test', 'cluster': {'server': 'URL:443'}})
kube_context['contexts'].append({'name': 'test', 'context': {'user': 'test', 'cluster': 'test'}})
for k in kube_context:
    try:
        kube_context[k].fa.set_block_style()
    except AttributeError:
        pass
yaml.dump(kube_context, sys.stdout)
this gives:
apiVersion: v1
clusters:
- name: test
  cluster:
    server: URL:443
contexts:
- name: test
  context:
    user: test
    cluster: test
current-context: ''
kind: Config
preferences: {}
users:
- name: test/test
  user:
    token: test
Please note that it is not necessary to set yaml.default_flow_style = False in the default round-trip mode, and that although block style has been set for the value of the key preferences, it is represented in flow style, as there is no other way to represent an empty mapping.
The output is "pure" YAML. You want the nodes to be presented in block style (indentation-based) as opposed to the current flow style ([]{}-based). Here's how to do that:
yaml = YAML(typ="safe")
yaml.default_flow_style = False
(As Athon noted in a comment on the typ: you need to set it to safe or unsafe so that the RoundTripLoader does not pin the flow style of the empty sequences and mappings.)
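A minimal sketch of the question's program adapted this way; the safe variant returns plain Python containers, so no per-node style survives the load:

from ruamel.yaml import YAML
import sys

yaml = YAML(typ='safe')          # plain dicts/lists, no style information kept
yaml.default_flow_style = False  # dump collections in block style
kube_context = yaml.load('''
apiVersion: v1
clusters: []
contexts: []
current-context: ''
kind: Config
preferences: {}
users: []
''')
kube_context['users'].append({'name': 'test/test', 'user': {'token': 'test'}})
yaml.dump(kube_context, sys.stdout)  # users now comes out fully expanded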
I'm looking for a way to reuse values defined in a YAML list. I have a list with the following sample entries:
workstreams:
  - name: tigers
    service_workstream: tigers-svc
    virtual_user:
      - {name: inbound-tigers, pass: '123', access: inbound, env: app1}
      - {name: outbound-tigers, pass: '123', access: outbound, env: app1}
    email: tigers#my-fqdn.com
    mount_dir: /mnt/tigers
    app_config_dir: /opt/tigers
Using the example above, I want to reuse a defined value, like tigers. The ideal solution would be something like this:
workstreams:
  - name: tigers
    service_workstream: "{{ vars['name'] }}-svc"
    virtual_user:
      - {name: "inbound-{{ vars['name'] }}", pass: '123', access: inbound, env: app1}
      - {name: "outbound-{{ vars['name'] }}", pass: '123', access: outbound, env: app1}
    email: "{{ vars['name'] }}#my-fqdn.com"
    mount_dir: "/mnt/{{ vars['name'] }}"
    app_config_dir: "/opt/{{ vars['name'] }}"
Any pointers on how I can do this in YAML?
You can do:
workstreams:
  - name: &name tigers         # the scalar "tigers" with an anchor &name
    service_workstream: *name  # alias, references the anchored scalar above
However, you cannot do string concatenation or anything like it in YAML 1.2; YAML cannot do any transformations on the input data. An alias is really a reference to the node that holds the corresponding anchor; it is not a variable.
Quite a lot of YAML-using software provides non-YAML solutions to this problem, for example preprocessing the YAML file with Jinja or the like. Depending on context, that may or may not be a viable solution for you.
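For instance, a minimal sketch of that preprocessing idea in Python, assuming the jinja2 and PyYAML packages (the template simply inlines your sample entry):

from jinja2 import Template
import yaml

# Render the Jinja placeholders first, then parse the result as plain YAML.
doc = Template('''
workstreams:
- name: {{ name }}
  service_workstream: {{ name }}-svc
  email: {{ name }}#my-fqdn.com
  mount_dir: /mnt/{{ name }}
  app_config_dir: /opt/{{ name }}
''').render(name='tigers')

data = yaml.safe_load(doc)
print(data['workstreams'][0]['service_workstream'])  # -> tigers-svc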
I have a playbook that creates EC2 instances from a dictionary declared in vars:, then registers the IPs into a group to be used later on.
The dict looks like this:
servers:
  serv1:
    name: tag1
    type: t2.small
    region: us-west-1
    image: ami-****
  serv2:
    name: tag2
    type: t2.medium
    region: us-east-1
    image: ami-****
  serv3:
    [...]
I would like to apply tags to this playbook in the simplest way possible, so I can create just some of the instances. For example, running the playbook with --tags tag1,tag3 would only start the EC2 instances matching serv1 and serv3.
Applying tags to the dictionary doesn't seem possible, and I would like to avoid multiplying tasks like:
Creating EC2
Registering infos
Getting private IP from previously registered infos
Adding host to group
While I already have a working loop for the case where I create all EC2 instances at once, is there any way to achieve that (without relying on --extra-vars, which would need key=value)? For example, filtering the dictionary to keep only what is tagged before running the EC2 loop?
I doubt you can do this out of the box, and I'm not sure it is a good idea at all: tags are used to filter tasks in Ansible, so you will have to mark all tasks with tags: always.
You can accomplish this with a custom filter plugin, for example (./filter_plugins/apply_tags.py):
try:
    # The cli object is only available when running under the ansible-playbook CLI
    from __main__ import cli
except ImportError:
    cli = False

def apply_tags(src):
    # Keep only the entries whose 'name' matches one of the CLI --tags
    if cli:
        tags = cli.options.tags.split(',')
        res = {}
        for k, v in src.items():
            keep = True
            if 'name' in v:
                if v['name'] not in tags:
                    keep = False
            if keep:
                res[k] = v
        return res
    else:
        return src

class FilterModule(object):
    def filters(self):
        return {
            'apply_tags': apply_tags
        }
And in your playbook:
- debug: msg="{{ servers | apply_tags }}"
  tags: always
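You would then run the playbook with the tags to keep, e.g. (the playbook name here is illustrative):

ansible-playbook create-ec2.yml --tags tag1,tag3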
I found a way to meet my needs without touching the rest, so I'm sharing it in case others have a similar need.
I needed to combine dictionaries depending on tags, so my "main" dictionary wouldn't be static.
Variables became:
- serv1:
  - name: tag1
    type: t2.small
    region: us-west-1
    image: ami-****
- serv2:
  - name: tag2
    type: t2.medium
    region: us-east-1
    image: ami-****
- serv3:
  [...]
So instead of duplicating my tasks, I used set_fact with tags like this:
- name: Combined dict
  # Declaring an empty list to append to
  set_fact:
    servers: []
  tags: ['always']

- name: Add Server 1
  set_fact:
    servers: "{{ servers + serv1 }}"
  tags: ['tag1']

- name: Add Server 2
  set_fact:
    servers: "{{ servers + serv2 }}"
  tags: ['tag2']

[..]
Twenty lines instead of multiplying tasks for each server: change the vars from a dictionary to lists, add a few tags, and all good :) Now, if I add a new server, it will only take a few lines.
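With the combined list in place, the single creation loop can stay marked tags: ['always'] and simply iterate over whatever was assembled. A rough sketch (the ec2 module parameters are illustrative; adapt them to your existing creation task):

- name: Create EC2 instances from the combined list
  ec2:
    instance_type: "{{ item.type }}"
    image: "{{ item.image }}"
    region: "{{ item.region }}"
    instance_tags:
      Name: "{{ item.name }}"
  loop: "{{ servers }}"
  tags: ['always']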