Ansible - Getting List Values from within a Dictionary (Register Variable)

So I'm working on some audit points using Ansible for many of the servers we support. In most cases, I have had to use the shell module to get the data I want and then write some files based on pass/fail cases. In a lot of situations, this has been the easier way to work with the output data. First, I realize this isn't necessarily Ansible's forte. I guess at some point it was pitched to the company that it could do this pretty easily, and I would agree - it's easier in many ways than writing a custom Python/Bash script to do the same. So I do realize I'm bending the concept of Ansible a bit here for reporting rather than configuration/state management. However, I like the tool and want to show the company we can get a lot of value from it.
While I could do this section easily using the shell module, I would like to learn Ansible a bit better, so I thought I would post this question.
I'm using the yum module to just get a repolist on the target hosts, but I've been running into confusion about how to extract the list values nested in the output dictionary. I have done some checking on the types and, as far as I can tell, the registered variable is a dictionary with the output in a list under 'results'. What I want to do is get the key/values from that list and then perform some other tasks based on the output. But for the life of me, I can't figure out how to do this!
Ideally, I would like to either use some 'when' statements based on the output (when the repo ID is X, do this, for example) or at least be able to store the values in a variable to work with the data. So from this output, I just want to get the repoid and whether it's enabled. How can I get these values from the nested list?
Simple Playbook:
---
- hosts: localhost
  become: yes
  tasks:
    - name: Section 1.1 - Check Yum Repos
      yum:
        list: repos
      register: section1_1
    - name: Debug
      debug:
        var: section1_1
Here is my output from the debug task in this playbook:
TASK [Debug] ****************************************************************************************************************************************************
ok: [localhost] => {
    "section1_1": {
        "changed": false,
        "failed": false,
        "results": [
            {
                "repoid": "ansible",
                "state": "enabled"
            },
            {
                "repoid": "epel",
                "state": "enabled"
            },
            {
                "repoid": "ol7_UEKR6",
                "state": "enabled"
            },
            {
                "repoid": "ol7_latest",
                "state": "enabled"
            }
        ]
    }
}
I suspect this might be easy for someone out there. I've been trying this and that for quite a while now and finally got to the point where I thought I would just ask :)

The output registered in section1_1 contains a list of dictionaries under the results key. We can index into that list directly, or loop through each item to get the keys and values.
Example:
- name: Get the first repo's repoid and state
  debug:
    msg: "Repo ID: {{ section1_1.results[0]['repoid'] }}, is {{ section1_1.results[0]['state'] }}"
  # This will show -- Repo ID: ansible, is enabled
Similarly, we can access the other elements by their index.
Or we can loop through each element of the list:
- name: Loop through the list and conditionally do something
  debug:
    msg: "Repo ID is {{ item.repoid }}, so I am going to write a playbook."
  when: item.repoid == 'ansible'
  loop: "{{ section1_1.results }}"
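If the goal is also to keep the values in a variable for later tasks, here is a minimal sketch (the variable name enabled_repos is just an example, not part of the original playbook):
- name: Store the ids of all enabled repos
  set_fact:
    enabled_repos: "{{ section1_1.results
                       | selectattr('state', 'eq', 'enabled')
                       | map(attribute='repoid') | list }}"

- name: Use the stored list in a condition
  debug:
    msg: "The ansible repo is enabled"
  when: "'ansible' in enabled_repos"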

Q: "Get the key/values from the list."
A: There are multiple options. Given the data below
section1_1:
  changed: false
  failed: false
  results:
    - repoid: ansible
      state: enabled
    - repoid: epel
      state: enabled
    - repoid: ol7_UEKR6
      state: enabled
    - repoid: ol7_latest
      state: enabled
    - repoid: test
      state: disabled
1a) Get the keys and values, and create a dictionary
_keys1: "{{ section1_1.results|map(attribute='repoid')|list }}"
_vals1: "{{ section1_1.results|map(attribute='state')|list }}"
repos1: "{{ dict(_keys1|zip(_vals1)) }}"
gives
_keys1: [ansible, epel, ol7_UEKR6, ol7_latest, test]
_vals1: [enabled, enabled, enabled, enabled, disabled]
repos1:
  ansible: enabled
  epel: enabled
  ol7_UEKR6: enabled
  ol7_latest: enabled
  test: disabled
1b) The items2dict filter gives the same result
repos2: "{{ section1_1.results|
            items2dict(key_name='repoid', value_name='state') }}"
1c) The json_query filter also gives the same result
repos3: "{{ dict(section1_1.results|
                 json_query('[].[repoid, state]')) }}"
Iterate the dictionary
- debug:
    msg: "Repo {{ item.key }} is {{ item.value }}"
  loop: "{{ repos1|dict2items }}"
gives (abridged)
msg: Repo ansible is enabled
msg: Repo epel is enabled
msg: Repo ol7_UEKR6 is enabled
msg: Repo ol7_latest is enabled
msg: Repo test is disabled
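The question also asked for when conditions based on the repo id. With the dictionary above in place, that becomes a simple lookup; a minimal sketch, assuming repos1 from 1a):
- name: Run something only when the ansible repo is enabled
  debug:
    msg: "Repo ansible is enabled, run the audit step here"
  when: repos1['ansible'] | default('') == 'enabled'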
2) The next option is converting the values to boolean
_vals4: "{{ section1_1.results|
            json_query('[].state.contains(@, `enabled`)') }}"
repos4: "{{ dict(_keys1|zip(_vals4)) }}"
gives
_vals4: [true, true, true, true, false]
repos4:
  ansible: true
  epel: true
  ol7_UEKR6: true
  ol7_latest: true
  test: false
Iterate the dictionary
- debug:
    msg: "Repo {{ item.key }} is enabled: {{ item.value }}"
  loop: "{{ repos4|dict2items }}"
gives (abridged)
msg: 'Repo ansible is enabled: True'
msg: 'Repo epel is enabled: True'
msg: 'Repo ol7_UEKR6 is enabled: True'
msg: 'Repo ol7_latest is enabled: True'
msg: 'Repo test is enabled: False'
3a) The list of the enabled repos can be easily selected
- debug:
    msg: "Repo {{ item.key }} is enabled"
  loop: "{{ repos4|dict2items|selectattr('value') }}"
gives (abridged)
msg: Repo ansible is enabled
msg: Repo epel is enabled
msg: Repo ol7_UEKR6 is enabled
msg: Repo ol7_latest is enabled
3b), or rejected
- debug:
    msg: "Repo {{ item.key }} is disabled"
  loop: "{{ repos4|dict2items|rejectattr('value') }}"
gives (abridged)
msg: Repo test is disabled
Example of a complete playbook for testing
- hosts: localhost
  vars:
    section1_1:
      changed: false
      failed: false
      results:
        - {repoid: ansible, state: enabled}
        - {repoid: epel, state: enabled}
        - {repoid: ol7_UEKR6, state: enabled}
        - {repoid: ol7_latest, state: enabled}
        - {repoid: test, state: disabled}
    _keys1: "{{ section1_1.results|map(attribute='repoid')|list }}"
    _vals1: "{{ section1_1.results|map(attribute='state')|list }}"
    repos1: "{{ dict(_keys1|zip(_vals1)) }}"
    repos2: "{{ section1_1.results|
                items2dict(key_name='repoid', value_name='state') }}"
    repos3: "{{ dict(section1_1.results|
                     json_query('[].[repoid, state]')) }}"
    _vals4: "{{ section1_1.results|
                json_query('[].state.contains(@, `enabled`)') }}"
    repos4: "{{ dict(_keys1|zip(_vals4)) }}"
  tasks:
    - debug:
        var: section1_1
    - debug:
        var: _keys1|to_yaml
    - debug:
        var: _vals1|to_yaml
    - debug:
        var: repos1
    - debug:
        var: repos2
    - debug:
        var: repos3
    - debug:
        msg: "Repo {{ item.key }} is {{ item.value }}"
      loop: "{{ repos1|dict2items }}"
    - debug:
        var: _vals4|to_yaml
    - debug:
        var: repos4
    - debug:
        msg: "Repo {{ item.key }} is enabled: {{ item.value }}"
      loop: "{{ repos4|dict2items }}"
    - debug:
        msg: "Repo {{ item.key }} is enabled"
      loop: "{{ repos4|dict2items|selectattr('value') }}"
    - debug:
        msg: "Repo {{ item.key }} is disabled"
      loop: "{{ repos4|dict2items|rejectattr('value') }}"

Related

Check if a file has certain strings

I have some files (file1) on some servers (group: myservers), which should look like this:
search www.mysebsite.com
nameserver 1.2.3.4
nameserver 1.2.3.5
This is an example of what this file should look like:
The first line is mandatory ("search www.mysebsite.com").
The second and third lines are mandatory as well, but the IPs can change (although they should all follow the same pattern: ...).
I've been researching how to implement some tasks using Ansible to check whether the files are properly configured. I don't want to change any file, only check and report whether the files are OK or not.
I know I can use ansible.builtin.lineinfile to check it, but I still haven't managed to find out how to achieve this.
Can you help please?
For example, given the inventory
shell> cat hosts
[myservers]
test_11
test_13
Create a dictionary of what you want to audit
audit:
  files:
    /etc/resolv.conf:
      patterns:
        - '^search example.com$'
        - '^nameserver \d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$'
    /etc/rc.conf:
      patterns:
        - '^sshd_enable="YES"$'
        - '^syslogd_flags="-ss"$'
Declare the directory at the controller where the files will be stored
my_dest: /tmp/ansible/myservers
fetch the files
- fetch:
    src: "{{ item.key }}"
    dest: "{{ my_dest }}"
  loop: "{{ audit.files|dict2items }}"
Take a look at the fetched files
shell> tree /tmp/ansible/myservers
/tmp/ansible/myservers
├── test_11
│   └── etc
│       ├── rc.conf
│       └── resolv.conf
└── test_13
    └── etc
        ├── rc.conf
        └── resolv.conf
4 directories, 4 files
Audit the files. Create the dictionary host_files_results in the loop
- set_fact:
    host_files_results: "{{ host_files_results|default({})|
                            combine(host_file_dict|from_yaml) }}"
  loop: "{{ audit.files|dict2items }}"
  loop_control:
    label: "{{ item.key }}"
  vars:
    host_file_path: "{{ my_dest }}/{{ inventory_hostname }}/{{ item.key }}"
    host_file_lines: "{{ lookup('file', host_file_path).splitlines() }}"
    host_file_result: |
      [{% for pattern in item.value.patterns %}
      {{ host_file_lines[loop.index0] is regex pattern }},
      {% endfor %}]
    host_file_dict: "{ {{ item.key }}: {{ host_file_result|from_yaml is all }} }"
gives
ok: [test_11] =>
  host_files_results:
    /etc/rc.conf: true
    /etc/resolv.conf: true
ok: [test_13] =>
  host_files_results:
    /etc/rc.conf: true
    /etc/resolv.conf: true
Declare the dictionary audit_files that aggregates host_files_results
audit_files: "{{ dict(ansible_play_hosts|
                      zip(ansible_play_hosts|
                          map('extract', hostvars, 'host_files_results'))) }}"
gives
audit_files:
  test_11:
    /etc/rc.conf: true
    /etc/resolv.conf: true
  test_13:
    /etc/rc.conf: true
    /etc/resolv.conf: true
Evaluate the audit results
- block:
    - debug:
        var: audit_files
    - assert:
        that: "{{ audit_files|json_query('*.*')|flatten is all }}"
        fail_msg: "[ERR] Audit of files failed. [TODO: list failed]"
        success_msg: "[OK] Audit of files passed."
  run_once: true
gives
msg: '[OK] Audit of files passed.'
Example of a complete playbook for testing
- hosts: myservers
  vars:
    my_dest: /tmp/ansible/myservers
    audit:
      files:
        /etc/resolv.conf:
          patterns:
            - '^search example.com$'
            - '^nameserver \d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$'
        /etc/rc.conf:
          patterns:
            - '^sshd_enable="YES"$'
            - '^syslogd_flags="-ss"$'
    audit_files: "{{ dict(ansible_play_hosts|
                          zip(ansible_play_hosts|
                              map('extract', hostvars, 'host_files_results'))) }}"
  tasks:
    - fetch:
        src: "{{ item.key }}"
        dest: "{{ my_dest }}"
      loop: "{{ audit.files|dict2items }}"
      loop_control:
        label: "{{ item.key }}"
    - set_fact:
        host_files_results: "{{ host_files_results|default({})|
                                combine(host_file_dict|from_yaml) }}"
      loop: "{{ audit.files|dict2items }}"
      loop_control:
        label: "{{ item.key }}"
      vars:
        host_file_path: "{{ my_dest }}/{{ inventory_hostname }}/{{ item.key }}"
        host_file_lines: "{{ lookup('file', host_file_path).splitlines() }}"
        host_file_result: |
          [{% for pattern in item.value.patterns %}
          {{ host_file_lines[loop.index0] is regex pattern }},
          {% endfor %}]
        host_file_dict: "{ {{ item.key }}: {{ host_file_result|from_yaml is all }} }"
    - debug:
        var: host_files_results
    - block:
        - debug:
            var: audit_files
        - assert:
            that: "{{ audit_files|json_query('*.*')|flatten is all }}"
            fail_msg: "[ERR] Audit of files failed. [TODO: list failed]"
            success_msg: "[OK] Audit of files passed."
      run_once: true
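The fail_msg above leaves listing the failed files as a TODO. One possible way to derive that list from audit_files (a sketch, not part of the original answer):
- debug:
    msg: "Failed files: {{ audit_files | dict2items
                           | map(attribute='value') | map('dict2items') | flatten
                           | rejectattr('value') | map(attribute='key')
                           | unique | list }}"
  run_once: true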
... implement some tasks using Ansible to check if the files are properly configured. I don't want to change any file, only check and report whether the files are OK or not.
Since Ansible is mostly used as a Configuration Management tool, there is no need to check beforehand whether a file is properly configured. Just declare the Desired State and make sure the file is in that state. Since this approach also works with Validating: check_mode, a Configuration Check or Audit can be implemented simply as follows:
resolv.conf as it should be
# Generated by NetworkManager
search example.com
nameserver 192.0.2.1
hosts.ini
[test]
test.example.com NS_IP=192.0.2.1
resolv.conf.j2 template
# Generated by NetworkManager
search {{ DOMAIN }}
nameserver {{ NS_IP }}
A minimal example playbook for Configuration Check in order to audit the config
---
- hosts: test
  become: false
  gather_facts: false
  vars:
    # Ansible v2.9 and later
    DOMAIN: "{{ inventory_hostname.split('.', 1) | last }}"
  tasks:
    - name: Check configuration (file)
      template:
        src: resolv.conf.j2
        dest: resolv.conf
      check_mode: true # will never change existing config
      register: result
    - name: Config change
      debug:
        msg: "{{ result.changed }}"
will result, for no changes, in an output of
TASK [Check configuration (file)] ******
ok: [test.example.com]
TASK [Config change] *******************
ok: [test.example.com] =>
msg: false
or, for changes, in
TASK [Check configuration (file)] ******
changed: [test.example.com]
TASK [Config change] *******************
ok: [test.example.com] =>
msg: true
depending on what is actually in the config file.
If one is interested in a different message text and therefore needs to invert the output, just use msg: "{{ not result.changed }}", which will report false instead of true and vice versa.
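If the audit should actually fail the play when the file has drifted, the registered result can feed an assert as well; a minimal sketch building on the task above:
- name: Fail when the configuration drifted
  assert:
    that: not result.changed
    fail_msg: "[ERR] resolv.conf does not match the template."
    success_msg: "[OK] resolv.conf matches the template."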
Further Reading
Using the Ansible inventory, variables in inventory, the template module to Template a file out to a target host, and Enforcing check_mode on tasks makes it extremely simple to prevent Configuration Drift.
And as a reference for getting the search domain, Ansible: How to get hostname without domain name?.
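As a side note, since the question mentioned ansible.builtin.lineinfile: that module can be combined with check_mode for a pure audit as well. A sketch, assuming the regexp and line are adapted to the real file:
- name: Check the mandatory search line without changing the file
  ansible.builtin.lineinfile:
    path: /etc/resolv.conf
    regexp: '^search\s'
    line: 'search www.mysebsite.com'
  check_mode: true
  register: line_check
  failed_when: line_check.changed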

Ansible when condition registered from csv

I'm using a CSV file as ingest data for my playbooks, but I'm having trouble with my when condition: either both tasks are skipped or both tasks are ok. My objective is that if Ansible sees the string in the when condition, it skips the task for that specific instance.
Here is my playbook:
- name: "Read ingest file from CSV return a list"
community.general.read_csv:
path: sample.csv
register: ingest
- name: debug ingest
debug:
msg: "{{ item.AWS_ACCOUNT }}"
with_items:
- "{{ ingest.list }}"
register: account
- name: debug account
debug:
msg: "{{ account.results | map(attribute='msg') }}"
register: accountlist
- name:
become: yes
become_user: awx
delegate_to: localhost
environment: "{{ proxy_env }}"
block:
- name: "Assume role"
community.aws.sts_assume_role:
role_arn: "{{ item.ROLE_ARN }}"
role_session_name: "pm"
with_items:
- "{{ ingest.list }}"
register: assumed_role
when: "'aws-account-rnd' not in account.results | map(attribute='msg')"
Here is the content of sample.csv:
HOSTNAME,ENVIRONMENT,AWS_ACCOUNT,ROLE_ARN
test1,dev,aws-account-rnd,arn:aws:iam::XXXX1
test2,uat,aws-account-uat,arn:aws:iam::XXXX2
My objective is to skip all items in the CSV file with aws-account-rnd.
Your condition does not mention item so it will have the same result for all loop items.
Nothing you've shown requires the weird abuse of debug + register that you're doing, and it is in fact getting in your way.
- name: Read CSV file
  community.general.read_csv:
    path: sample.csv
  register: ingest

- name: Assume role
  community.aws.sts_assume_role:
    role_arn: "{{ item.ROLE_ARN }}"
    role_session_name: pm
  delegate_to: localhost
  become: true
  become_user: awx
  environment: "{{ proxy_env }}"
  loop: "{{ ingest.list }}"
  when: item.AWS_ACCOUNT != 'aws-account-rnd'
  register: assumed_role
If you'll always only care about one match you can also do this without a loop or condition at all:
- name: Assume role
  community.aws.sts_assume_role:
    role_arn: "{{ ingest.list | rejectattr('AWS_ACCOUNT', '==', 'aws-account-rnd') | map(attribute='ROLE_ARN') | first }}"
    role_session_name: pm
  delegate_to: localhost
  become: true
  become_user: awx
  environment: "{{ proxy_env }}"
  register: assumed_role
my objective is to skip all items in the csv file with aws-account-rnd
The multiple debug tasks you have with register seem to be a long-winded approach, IMHO.
Here is a simple task to debug the Role ARN only if the account does not match aws-account-rnd:
- name: show ROLE_ARN when account not equals aws-account-rnd
  debug:
    var: item['ROLE_ARN']
  loop: "{{ ingest.list }}"
  when: item['AWS_ACCOUNT'] != 'aws-account-rnd'
This results in:
TASK [show ROLE_ARN when account not equals aws-account-rnd] **********************************************************************************************************************
skipping: [localhost] => (item={'HOSTNAME': 'test1', 'ENVIRONMENT': 'dev', 'AWS_ACCOUNT': 'aws-account-rnd', 'ROLE_ARN': 'arn:aws:iam:XXXX1'})
ok: [localhost] => (item={'HOSTNAME': 'test2', 'ENVIRONMENT': 'uat', 'AWS_ACCOUNT': 'aws-account-uat', 'ROLE_ARN': 'arn:aws:iam:XXXX2'}) => {
    "ansible_loop_var": "item",
    "item": {
        "AWS_ACCOUNT": "aws-account-uat",
        "ENVIRONMENT": "uat",
        "HOSTNAME": "test2",
        "ROLE_ARN": "arn:aws:iam:XXXX2"
    },
    "item['ROLE_ARN']": "arn:aws:iam:XXXX2"
}
The same logic can be used to pass the item.ROLE_ARN to community.aws.sts_assume_role task.
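A sketch of what that could look like, essentially the same task as in the previous answer:
- name: Assume role for every account except aws-account-rnd
  community.aws.sts_assume_role:
    role_arn: "{{ item.ROLE_ARN }}"
    role_session_name: pm
  loop: "{{ ingest.list }}"
  when: item['AWS_ACCOUNT'] != 'aws-account-rnd'
  register: assumed_role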

Issue with using Omit option

I am trying to configure an attribute only if my_int_http is defined; otherwise I don't want it. So I coded it like below:
profiles:
- "{{ my_int_L4 }}"
- "{{ my_int_http | default(omit) }}"
However, my execution fails, and when I check the arguments passed by the code to the actual configuration, it shows the following:
"profiles": [
"my_example.internal_tcp",
"__omit_place_holder__ef8c5b99e9707c044ac07fda72fa950565f248a4"
So how do I pass absolutely no value where it is passing __omit_place_holder_****?
Q: "How to pass absolutely no value where it is passing omit_place_holder ?"
A1: Some filters also work with omit as expected. For example, the play
- hosts: localhost
  vars:
    test:
      - "{{ var1|default(false) }}"
      - "{{ var1|default(omit) }}"
  tasks:
    - debug:
        msg: "{{ {'a': item}|combine({'b': true}) }}"
      loop: "{{ test }}"
gives
msg:
  a: false
  b: true
msg:
  b: true
As a side note, default(omit) is defined and its type is a string
- debug:
    msg: "{{ item is defined }}"
  loop: "{{ test }}"
- debug:
    msg: "{{ item|type_debug }}"
  loop: "{{ test }}"
give
TASK [debug] *************************************************************
ok: [localhost] => (item=False) =>
msg: true
ok: [localhost] => (item=__omit_place_holder__6e56f2f992faa6e262507cb77410946ea57dc7ef) =>
msg: true
TASK [debug] *************************************************************
ok: [localhost] => (item=False) =>
msg: bool
ok: [localhost] => (item=__omit_place_holder__6e56f2f992faa6e262507cb77410946ea57dc7ef) =>
msg: str
A2: No value in Ansible is YAML null. Quoting:
This is typically converted into any native null-like value (e.g., undef in Perl, None in Python).
(Given my_int_L4=bob). If the variable my_int_http defaults to null instead of omit
profiles:
- "{{ my_int_L4 }}"
- "{{ my_int_http | default(null) }}"
the list profiles will be undefined
profiles: VARIABLE IS NOT DEFINED!
Use None instead
profiles:
- "{{ my_int_L4 }}"
- "{{ my_int_http | default(None) }}"
The variable my_int_http will default to an empty string
profiles:
- bob
- ''
See also section "YAML tags and Python types" in PyYAML Documentation.
You can try something like this,
profiles:
- "{{ my_int_L4 }}"
- "{{ my_int_http | default(None) }}"
This will give you an empty string. And you can add a check while iterating over the profiles.
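For example, the empty entries could be filtered out before the list is used; a minimal sketch (select() with no test keeps only truthy items, so the empty string is dropped):
- debug:
    msg: "{{ profiles | select() | list }}"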
Please have a look at this GitHub Issue to get more understanding.

jinja2 turning lists into strings even when from_json is being used

I am trying to run a nested for loop in order to retrieve a nested value.
I would like to retrieve some_value_4 when some_value_3 matches a predefined criterion.
{
  "some_dict": {
    "some_key_0": "value_0",
    "some_key_1": "value_1"
  },
  "testval_dict": {
    "test_key_0": "some_value_0",
    "test_key_1": "some_value_1",
    "test_key_2": "some_value_2",
    "test_key_3": "some_value_3",
    "test_key_4": "some_value_4"
  }
}
The playbook:
- hosts: localhost
  tasks:
    - set_fact:
        mydict: "{{ lookup('file', '/tmp/file.json') | from_json }}"
    - debug:
        msg: |
          "{% for item in mydict %}
          {{ item }}
          {% endfor %}"
When run, it already looks like the dict names are treated as strings and nothing more:
ansible-playbook /tmp/test_playbook.yml -c local -i ', localhost'
TASK [debug] ******************************************************************
ok: [localhost] => {}
MSG:
" somee_dict
testval_dict
"
Then when I add an item.key to the debug task, the playbook fails:
MSG:
The task includes an option with an undefined variable. The error was: 'str object' has no attribute 'value'
Thank you.
Edit for clarification:
In the real example, I will not know the names of the dicts, so I cannot use some_dict or testval_dict; that is why I'm trying to go over this data source in an item.key or item.value form.
Q: "{% for item in mydict %} ... dict names are treated as string and nothing more."
A: This is correct. A dictionary, when evaluated as a list, returns the list of its keys. See some examples below
- debug:
    msg: "{{ mydict.keys()|list }}"
- debug:
    msg: "{{ mydict[item] }}"
  loop: "{{ mydict.keys()|list }}"
- debug:
    msg: "{{ mydict|difference(['testval_dict']) }}"
give
msg:
  - some_dict
  - testval_dict
msg:
  some_key_0: value_0
  some_key_1: value_1
msg:
  test_key_0: some_value_0
  test_key_1: some_value_1
  test_key_2: some_value_2
  test_key_3: some_value_3
  test_key_4: some_value_4
msg:
  - some_dict
See How to iterate through a list of dictionaries in Jinja template?
If you need to loop over the dictionary, you can use the with_dict loop. This way, if you loop over mydict and use item.key you will get some_dict and testval_dict.
tasks:
  - set_fact:
      mydict: "{{ lookup('file', '/tmp/file.json')|from_json }}"
  # This will get the top-level dictionaries some_dict and testval_dict
  - debug:
      var: item.key
    with_dict: "{{ mydict }}"
And if you get item.value of mydict you will get the sub-dictionary:
- debug:
    var: item.value
  with_dict: "{{ mydict }}"
Will produce (showing output only for testval_dict):
"item.value": {
"test_key_0": "some_value_0",
"test_key_1": "some_value_1",
"test_key_2": "some_value_2",
"test_key_3": "some_value_3",
"test_key_4": "some_value_4"
}
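If the eventual goal is to find the keys whose value matches some criterion without knowing the dictionary names up front, dict2items and selectattr can be combined inside the loop; a sketch, assuming the match is on the literal value some_value_3:
- debug:
    msg: "{{ item.value | dict2items
             | selectattr('value', 'eq', 'some_value_3')
             | map(attribute='key') | list }}"
  with_dict: "{{ mydict }}"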

Adding field to dict items

Consider the following play. What I am trying to do is add a field, tmp_path, which is basically the key and revision appended together, to each element in the scripts dict.
---
- hosts: localhost
  connection: local
  gather_facts: no
  vars:
    scripts:
      a.pl:
        revision: 123
      b.pl:
        revision: 456
  tasks:
    - with_dict: "{{ scripts }}"
      debug:
        msg: "{{ item.key }}_{{ item.value.revision }}"
    # - with_items: "{{ scripts }}"
    #   set_fact: {{item.value.tmp_path}}="{{item.key}}_{{item.value.revision}}"
    # - with_items: "{{ scripts }}"
    #   debug:
    #     msg: "{{ item.value.tmp_path }}"
...
Obviously the commented code doesn't work. Any idea how I can get this working? Is it possible to alter the scripts dict directly, or should I somehow be creating a new dict to reference instead?
By the way, feel free to correct the terminology for what I am trying to do.
OK, I think I got a solution (below), at least to let me move forward with this. The disadvantages are that it loses the structure of my dict and seems a bit redundant, having to redefine all the fields and use a new variable. If anyone can provide a better solution I will accept that instead.
---
- hosts: localhost
  connection: local
  gather_facts: no
  vars:
    scripts:
      a.pl:
        revision: 123
      b.pl:
        revision: 456
  tasks:
    - with_dict: "{{ scripts }}"
      debug:
        msg: "{{ item.key }}_{{ item.value.revision }}"
    - with_dict: "{{ scripts }}"
      set_fact:
        new_scripts: "{{ (new_scripts | default([])) + [ {'name': item.key, 'revision': item.value.revision, 'tmp_path': item.key ~ '_' ~ item.value.revision} ] }}"
    # - debug:
    #     var: x
    # - with_dict: "{{ scripts }}"
    - with_items: "{{ new_scripts }}"
      debug:
        msg: "{{ item.tmp_path }}"
...
BTW credit to the following question which pointed me in the right direction:
Using Ansible set_fact to create a dictionary from register results
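If keeping the original dict structure is preferable to building a new list, the entries can also be updated in place with the combine filter; a sketch, not taken from the linked question:
- with_dict: "{{ scripts }}"
  set_fact:
    scripts: "{{ scripts | combine({item.key: item.value
                 | combine({'tmp_path': item.key ~ '_' ~ item.value.revision})}) }}"

- with_dict: "{{ scripts }}"
  debug:
    msg: "{{ item.value.tmp_path }}"   # e.g. a.pl_123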
