How can I configure Ansible to assert that a dict has no duplicate keys?
I don't want to rely on ANSIBLE_DUPLICATE_YAML_DICT_KEY; I'd like to assert this with a task.
Imagine this dict, which contains duplicate keys:
my_dict:
  one:
  one:
I've tried:
- assert:
    that:
      - my_dict | unique == my_dict
But this fails even when there are no duplicate keys.
This is not possible. Duplicate keys are discarded during parsing, so there is no way to detect that they were present once parsing is finished; by the time any task runs, my_dict already contains only a single one key. (The attempted assertion fails regardless, because unique iterates over the dict's keys and yields a sequence of keys, which never compares equal to the dict itself.) Setting DUPLICATE_YAML_DICT_KEY to error is the only way to turn this into a failure.
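For reference, a minimal sketch of how that setting can be applied, assuming an Ansible release that exposes it as the duplicate_dict_key ini option and the ANSIBLE_DUPLICATE_YAML_DICT_KEY environment variable:
# ansible.cfg
[defaults]
duplicate_dict_key = error

# or per run via the environment (playbook name is illustrative):
# ANSIBLE_DUPLICATE_YAML_DICT_KEY=error ansible-playbook site.yml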
Long story short -
Task 1:
Using the uri module, I query a website to obtain JSON data. The output is saved to a variable.
Task 2:
json_search is used to filter the data.
But the resulting value comes wrapped in square brackets, which I need to get rid of because it has to be inserted into a URL.
I am using the not-so-elegant replace filter twice to solve this problem. This works, but I am more than sure you can engage regex_replace (or json_query in a more intelligent way) to achieve the same result.
Expected output: https://site1/api/host/1
Received output before using replace 2x:
https://site1/api/host/[1]
Snippet from playbook:
…
- name: display url
  debug:
    msg: "{{ json_output_var.json | json_search('result[].id') | replace('[','') | replace(']','') }}"
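One possible simplification, sketched under the assumption that json_search('result[].id') returns exactly one id here: take the first element of the list instead of stripping the brackets as text.
- name: display url
  debug:
    msg: "https://site1/api/host/{{ json_output_var.json | json_search('result[].id') | first }}"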
I have to validate a pipeline before triggering it. One criterion of the validation is whether a CI/CD variable has one of the accepted values. Is there a way to check that it matches one of the correct values?
I tried to create an array of values and then check it in the workflow rules, but it is not clear from the other questions how to do that.
So it should look something like this:
#WARNING: invalid yml!
variables:
  ValidValues: ["Value1", "Value2", "SomeOtherValue"]
workflow:
  rules:
    - if: ValidValues contains $GivenValue
      when: always
Searching around this issue, I found that I can add the allowed values to a regex that I can check against in the workflow rules. In the end it looks like this:
workflow:
  rules:
    - if: $GivenValue =~ /\b(Value1|Value2|SomeOtherValue)\b/
      when: always
    - when: never
Unfortunately I did not find a solution for my initial approach (adding the allowed values to an array, then looking them up), but this works as well.
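If the variable must equal one of the values exactly, rather than merely contain one of them as a word, anchoring the pattern is one possible tightening; this is a sketch and has not been verified against a live pipeline:
workflow:
  rules:
    - if: $GivenValue =~ /^(Value1|Value2|SomeOtherValue)$/
      when: always
    - when: never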
I'm trying to use split with Ansible to return 2 different indexes. In the example below (which doesn't work), let's say I want to set my_split to 'ad':
my_string: "a-b-c-d"
my_split: "{{ my_string.split('-')[0,3]|join() }}"
All the documentation I can find only shows examples returning 1 index, and I can't find any references to what I'm trying to achieve.
Q: Set my_split to 'ad'
A: The tasks
- set_fact:
    my_split: "{{ [0,3]|map('extract',my_string.split('-'))|join() }}"
- debug:
    var: my_split
give
"my_split": "ad"
The problem is the selection of the first and fourth elements of the sequence. The expression below
my_string.split('-')[0,3]
fails
The error was: list object has no element (0, 3)
Instead, it's possible to use map and extract. See Extracting values from containers.
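For comparison, a sketch of the same result using plain indexing and Jinja's ~ concatenation operator:
- set_fact:
    my_split: "{{ my_string.split('-')[0] ~ my_string.split('-')[3] }}"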
I have an inventory with a very complicated structure. For my specific installation I want to override only some values. For example, I have this structure:
---
System:
  atr1: 47
  config:
    - nodes:
        - logger:
            id: 'all'
            svr: 'IEW'
        - Database:
            constr: 'login/pass#db'
  atr2: 'some value'
I want to override the severity of the logger, i.e. add statistics information: svr: 'IEWS'. I want to provide the override via the --extra-vars parameter.
In ansible.cfg -> hash_behaviour = merge
I don't want to use something like svr: "{{ svr_custom | default('IEW') }}", because there are too many parameters and it would be difficult to write the entire inventory that way.
I read about the combine filter, but I can't see how to use it when I only have to override one item in the hash.
How can I achieve my goal?
The way you found is the simplest one. It's verbose to write but very easy to debug and to fix.
If you REALLY want to shrink this job, you can write your own lookup plugin. (https://docs.ansible.com/ansible/2.5/plugins/lookup.html).
From my experience, I really want to say that the direct and dumb approach (writing it out verbosely) is much better for overall maintainability. The next person will see a dumb dump (pun intended), which is easy to fix, not some obscure Python snippet.
To make life easier you may want to store this configuration as a separate file (with all the Jinja pieces) and use lookup (straight from the docs):
# Since 2.4, you can pass in variables during evaluation
- debug: msg="{{ lookup('template', './some_template.j2', template_vars=dict(x=42)) }} is evaluated with x=42"
Moreover, you can use the |from_yaml (or |from_json) filter to convert the loaded and processed template into a data structure.
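Putting those two pieces together, a rough sketch (the file name system_config.yml.j2 and the variable svr_custom are made-up names for illustration):
- set_fact:
    system: "{{ lookup('template', 'system_config.yml.j2', template_vars=dict(svr_custom=svr_custom | default('IEW'))) | from_yaml }}"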
I read about the combine filter, but I can't see how to use it when I only have to override one item in the hash.
Why is that? Wouldn't new_svr defined in --extra-vars achieve what you want?
- set_fact:
    System: "{{ System | combine({'config':[{'nodes':[{'logger':{'svr':new_svr }}]}]}, recursive=True) }}"
I am trying to make sense of a variable reference I found in an incomplete Ansible role. The role references a value using
dest: "{{params['box'].t1}}"
In a separate yaml file I have
box:
  t1: "Albany"
  t2: "Albuquerque"
params isn't defined, so obviously this isn't going to work, but I can't figure out the correct way to define it. Can someone tell me where (or how) params must be defined for this variable reference to work in Ansible?
Related question: does the use of square brackets in dest: "{{params['box'].t1}}" indicate that it is a dictionary? If yes, could I also write this as dest: "{{params['box']['t1']}}" or dest: "{{params.box.t1}}"?
params['box'].t1 refers to Albany in:
params:
  box:
    t1: "Albany"
    t2: "Albuquerque"
It is the same as params.box.t1 and params['box']['t1'].
Brackets refer to a key name, so they imply it is a dictionary.
You typically use square-bracket notation when you want to refer to a key via a variable:
vars:
  wanted_key: box
  params:
    box:
      t1: Albany
    other:
      t1: Albuquerque
Then params[wanted_key].t1 refers to Albany.
In your example the value inside square brackets is a string (quoted), so all above examples are equivalent.
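As a quick check, a minimal task (a sketch using the vars above) would print the resolved value:
- debug:
    msg: "{{ params[wanted_key].t1 }}"  # prints Albany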