Use yq to add a new root - yaml

I am trying to add a new root key to an existing YAML file.
I have a values.yaml file with the following content:
key1: value1
key2: value2
key3:
  key31: value31
  key32: value32
I would like to add a root key to the existing YAML so that it becomes:
mainkey:
  key1: value1
  key2: value2
  key3:
    key31: value31
    key32: value32
I tried yq with +, merge, and so on, but nothing yielded the result I wanted. Is it possible to add a new root key to an existing YAML file?
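For what it's worth, a minimal sketch of how this can be done with the Go-based mikefarah/yq (v4 syntax; wrapped.yaml is just an illustrative output name):
# mikefarah/yq v4: wrap the whole document under a new root key
yq '{"mainkey": .}' values.yaml > wrapped.yaml
With the Python wrapper kislyuk/yq the equivalent jq-style expression would be yq -y '{mainkey: .}' values.yaml. Either way the document is wrapped under the new root key rather than merged into it.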

Ansible/Jinja "from_yaml" filter sorting keys in the original file

I am trying to read a YML file from a remote node in Ansible, make modifications, and update it.
When I use the "from_yaml" Jinja filter, I notice that the keys are sorted, and when I update the file on the remote host this causes issues.
Is there a way to set Sort_Keys=False on the from_yaml filter?
Original file:
key1:
  key5:
    key3: value3
  key2:
    key5: value4
  key6:
    key7: value5
After applying "from_yaml" filter (the keys are sorted):
key1:
  key2:
    key3: value3
  key5:
    key5: value4
  key6:
    key7: value5
The YAML standard says that there is no ordering of keys in a mapping. Python 3 introduced insertion-ordered dictionaries (an implementation detail in 3.6, guaranteed since 3.7; see the release notes). This is what you see as the result of the from_yaml filter. You'll also see the dictionary ordered this way without any filter when Ansible runs on a recent Python 3. For example,
- hosts: localhost
  vars:
    key1:
      key5:
        key3: value3
      key2:
        key5: value4
      key6:
        key7: value5
  tasks:
    - debug:
        var: key1
gives (abridged)
key1:
  key2:
    key5: value4
  key5:
    key3: value3
  key6:
    key7: value5
But Jinja, even when running on Python 3.8, doesn't sort the dictionaries (I don't know why). For example,
- debug:
    msg: |
      {% for k,v in key1.items() %}
      {{ k }}: {{ v }}
      {% endfor %}
gives (abridged)
msg: |-
  key5: {'key3': 'value3'}
  key2: {'key5': 'value4'}
  key6: {'key7': 'value5'}
To answer your question: unfortunately, there is no such option as Sort_Keys=False. Moreover, in YAML there is no guarantee about the order in which the keys of a dictionary are stored. If you want to be sure that you write a dictionary in a specific order, you have to keep a list of the keys. For example,
- debug:
    msg: |
      {% for k in list_of_keys %}
      {{ k }}: {{ key1[k] }}
      {% endfor %}
  vars:
    list_of_keys: [key5, key2, key6]
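Building on that idea, a minimal sketch (not from the original answer; the destination path is illustrative) of writing the mapping back to a file in that explicit key order:
- copy:
    dest: /tmp/ordered.yml
    content: |
      key1:
      {% for k in list_of_keys %}
        {{ k }}: {{ key1[k] | to_json }}
      {% endfor %}
  vars:
    list_of_keys: [key5, key2, key6]
The nested values come out in JSON flow style, which is still valid YAML, and the key order follows list_of_keys instead of any sorting.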
The imported data is organized as a dict. In contrast to an array (or list), whose elements are ordered by position and addressed via an index, the values of a dict are addressed via their keys. You have no way to define or influence the order; Ansible always prints a dict sorted by its keys.
The example you gave is wrong, because the data structure is not changed even though the output is sorted by keys: key1.key5.key3 still returns the value value3. In your example "After applying the from_yaml filter", however, this path no longer exists; in your case value3 is now under key1.key2.key3. Your output does not match what Ansible gives when the from_yaml filter is applied to that input.
Data file mydata.txt:
key1:
  key5:
    key3: value3
  key2:
    key5: value4
  key6:
    key7: value5
Ansible Task:
- debug:
    msg: "{{ lookup('file', 'mydata.txt') | from_yaml }}"
Output:
TASK [debug] *****************************
ok: [localhost] => {
    "msg": {
        "key1": {
            "key2": {
                "key5": "value4"
            },
            "key5": {
                "key3": "value3"
            },
            "key6": {
                "key7": "value5"
            }
        }
    }
}
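To make that point concrete, a quick check (a sketch added here, not part of the original answer) that the sorted display does not change how the data is addressed:
- assert:
    that:
      - mydata.key1.key5.key3 == 'value3'
  vars:
    mydata: "{{ lookup('file', 'mydata.txt') | from_yaml }}"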

How can I convert `key=value` file into Ansible facts?

I've got a file containing a few lines of simple shell-style (key=value, no whitespace or special characters) assignments. How would I go about converting this to a set of top-level facts using ansible.builtin.set_fact? expandvars looks like it might be relevant, but I can't find any examples or even any decent documentation.
For example, given the configuration file
shell> cat conf.ini
key1=alpha=beta=charlie
key2=value2
key3= value3
The variable below
config_vars: "{{ dict(lookup('file', 'conf.ini').split('\n')|
                 map('split', '=', 1)|
                 map('map', 'trim')) }}"
expands to
config_vars:
  key1: alpha=beta=charlie
  key2: value2
  key3: value3
You can remove the last map/trim filter from the pipe if you're sure there are no spaces in the configuration file. But, to be on the safe side, I'd keep it.
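If you then want the pairs as top-level facts, as the question asks, one possible follow-up is a set_fact loop over the parsed dictionary (a sketch; it assumes dynamically templated variable names in set_fact are acceptable in your setup):
- set_fact:
    "{{ item.key }}": "{{ item.value }}"
  loop: "{{ config_vars | dict2items }}"
After this task, key1, key2, and key3 should be addressable as ordinary host variables.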
Config file at the remote host
The solution in the previous section works on the controller only, because lookup plugins execute on localhost. If the files are on the remote hosts, fetch them first. For example, given the configuration files
shell> ssh admin@test_11 cat /tmp/conf.ini
key1=alpha=beta=charlie
key2=value2
key3= value3
shell> ssh admin@test_12 cat /tmp/conf.ini
key1=value1
key2= value2
key3=alpha=beta=charlie
the playbook below
- hosts: test_11,test_12
  vars:
    conf_ini_path: "conf_ini/{{ inventory_hostname }}/tmp/conf.ini"
    config_vars: "{{ dict(lookup('file', conf_ini_path).split('\n')|
                     map('split', '=', 1)|
                     map('map', 'trim')) }}"
  tasks:
    - fetch:
        dest: conf_ini
        src: /tmp/conf.ini
    - debug:
        var: config_vars
gives (abridged)
TASK [debug] ***********************************************
ok: [test_11] =>
  config_vars:
    key1: alpha=beta=charlie
    key2: value2
    key3: value3
ok: [test_12] =>
  config_vars:
    key1: value1
    key2: value2
    key3: alpha=beta=charlie
Allow no value
For example, given the configuration file
shell> cat conf.ini
key1=alpha=beta=charlie
key2=value2
key3= value3
key4
The variable below
config_vars: "{{ dict(lookup('file', 'conf.ini').split('\n')|
                 map('split', '=', 1)|
                 map('map', 'trim')|
                 map('json_query', '[]|[[0], [1]]')) }}"
expands to
config_vars:
  key1: alpha=beta=charlie
  key2: value2
  key3: value3
  key4: null

How to use loop and with_nested together in ansible

I have the vars defined like this:
vars:
  values:
    - key1: value1
      key2:
        - value1.1
        - value1.2
    - key1: value2
      key2:
        - value2.1
        - value2.2
I want to iterate over key1 with the corresponding values in key2.
I am running Ansible 2.7.10 with Python 2.7.10. Here is what I have written in my task, based on some suggestions I found online (using with_subelements):
- name: test loops
  debug:
    msg: "This is key1: {{ item.0.key1 }}, and here is corresponding key2 element {{ item.1 }}"
  with_subelements:
    - values
    - key2
Expected output:
This is key1: value1, and here is corresponding key2 element value1.1
This is key1: value1, and here is corresponding key2 element value1.2
This is key1: value2, and here is corresponding key2 element value2.1
This is key1: value2, and here is corresponding key2 element value2.2
Error I get when I execute the playbook:
fatal: [localhost]: FAILED! => {"msg": "subelements lookup expects a dictionary, got 'values'"}
Any ideas how to achieve this?
The correct syntax is
with_subelements:
  - "{{ values }}"
  - key2
or, following "Migrating from with_X to loop",
loop: "{{ values|subelements('key2') }}"
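As a complete task, the loop form would look like this (a sketch; item.0 and item.1 keep the same meaning as in the with_subelements version):
- debug:
    msg: "{{ item.0.key1 }} - {{ item.1 }}"
  loop: "{{ values | subelements('key2') }}"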
The play below
- hosts: localhost
  vars:
    values:
      - key1: value1
        key2:
          - value1.1
          - value1.2
      - key1: value2
        key2:
          - value2.1
          - value2.2
  tasks:
    - debug:
        msg: "{{ item.0.key1 }} - {{ item.1 }}"
      with_subelements:
        - "{{ values }}"
        - key2
gives (abridged):
"msg": "value1 - value1.1"
"msg": "value1 - value1.2"
"msg": "value2 - value2.1"
"msg": "value2 - value2.2"

Merging nested variables from group_vars in Ansible [duplicate]

This question already has answers here:
Ansible : host in multiple groups (3 answers)
Ansible. override single dictionary key [duplicate] (4 answers)
Ansible: overriding dictionary variables in extra-vars [duplicate] (1 answer)
Closed 4 years ago.
It looks like Ansible is not able to merge nested variables from group_vars. My structure looks like this:
hosts.ini:
[common:children]
frontend
backend
[frontend]
server1
[backend]
server2
In the group_vars directory I have:
common.yaml:
start_of_nested:
  var1: value1
  var2: value2
frontend.yaml:
start_of_nested:
  var3: value3
  var4: value4
backend.yaml:
start_of_nested:
  var5: value5
  var6: value6
When I check server1's variables with
ansible server1 -m debug -a "var=hostvars[inventory_hostname]"
I get variables only from frontend.yaml:
"start_of_nested": {
"var3": "value3",
"var4": "value4"
}
but I was expecting them to be merged with the common.yaml variables, giving something like
"start_of_nested": {
"var1": "value1",
"var2": "value2",
"var3": "value3",
"var4": "value4"
}
Is there any way to make Ansible merge nested variables for a host from all the group_vars groups it belongs to?
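Since the question was closed as a duplicate, no answer follows here. For reference, the commonly cited approach is to enable hash merging in ansible.cfg (a sketch; note that the documentation discourages this setting in favour of explicit merges with the combine filter, e.g. "{{ a | combine(b) }}"):
# ansible.cfg (sketch): merge dictionaries coming from different group_vars
# files instead of replacing them wholesale
[defaults]
hash_behaviour = merge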

external dict in ansible

I'm trying to use an external dictionary with variables that are somehow "mapped":
one_changes:
key1: value1
key2: value2
key3: value3
In my playbook I'm using vars_files, and the file is recognized. Now, how do I do something like this:
name: variables_one
replace:
  dest=/one/file.php
  regexp="{{ one_changes.key }}"
  replace="{{ one_changes.value }}"
with_items:
  - one_changes
For the life of me, I cannot figure it out after a few hours. There are many variables for many files, so I'd like to keep them mapped separately.
There is a chapter about this in the documentation.
First, mind the indentation in your yaml file:
one_changes:
  key1: value1
  key2: value2
  key3: value3
Second, use with_dict:
- name: variables_one
  replace:
    dest: /one/file.php
    regexp: "{{ item.key }}"
    replace: "{{ item.value }}"
  with_dict: "{{ one_changes }}"
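On newer Ansible versions the same iteration can also be written with loop and the dict2items filter (an equivalent sketch of the task above):
- name: variables_one
  replace:
    dest: /one/file.php
    regexp: "{{ item.key }}"
    replace: "{{ item.value }}"
  loop: "{{ one_changes | dict2items }}"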
