Digging down an Ansible fact

I have a playbook:
---
- hosts: all
  gather_facts: True
  tasks:
    - action: debug msg="time = {{ ansible_date_time }}"
which returns the full JSON representation for each machine.
How do I further filter that within the playbook so that I only get the iso8601_basic_short part?
[root@pjux playbooks]# ansible --version
ansible 2.1.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
TASK [debug] *******************************************************************
ok: [10.99.97.222] => {
"msg": "time = {u'weekday_number': u'2', u'iso8601_basic_short': u'20160906T182117', u'tz': u'BST', u'weeknumber': u'36', u'hour': u'18', u'year': u'2016', u'minute': u'21', u'tz_offset': u'+0100', u'month': u'09', u'epoch': u'1473182477', u'iso8601_micro': u'2016-09-06T17:21:17.761900Z', u'weekday': u'Tuesday', u'time': u'18:21:17', u'date': u'2016-09-06', u'iso8601': u'2016-09-06T17:21:17Z', u'day': u'06', u'iso8601_basic': u'20160906T182117761843', u'second': u'17'}"
}
ok: [10.99.97.216] => {
"msg": "time = {u'weekday_number': u'2', u'iso8601_basic_short': u'20160906T182117', u'tz': u'BST', u'weeknumber': u'36', u'hour': u'18', u'year': u'2016', u'minute': u'21', u'tz_offset': u'+0100', u'month': u'09', u'epoch': u'1473182477', u'iso8601_micro': u'2016-09-06T17:21:17.938563Z', u'weekday': u'Tuesday', u'time': u'18:21:17', u'date': u'2016-09-06', u'iso8601': u'2016-09-06T17:21:17Z', u'day': u'06', u'iso8601_basic': u'20160906T182117938491', u'second': u'17'}"
}

Have you tried {{ ansible_date_time.iso8601_basic_short }}?
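For example, a minimal playbook using just that sub-key (facts must be gathered for ansible_date_time to exist) would be:

---
- hosts: all
  gather_facts: True
  tasks:
    - debug:
        msg: "time = {{ ansible_date_time.iso8601_basic_short }}"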


Override group_vars by external source

I have a pilot project that keeps many common variables in group_vars:
group_vars/
  group1.yml
  group2.yml
  group3.yml
For different implementations (usually per client), I'd like to maintain a reserved file which overrides the content of group_vars, where that file could have the following format, e.g. client1.yml:
group1:
  var11_to_override: "foo"
  var12_to_override: "bar"
group2:
  var21_to_override: "foo"
  var22_to_override: "bar"
Is there a simple way to tell Ansible that the file client1.yml overrides the group_vars content?
The include_vars module together with set_fact in a loop could certainly be a first step, but it would probably require complicated Jinja2 filter expressions ...
Do I have to write a new module or a filter that updates hostvars?
Finally resolved with a custom filter that recursively updates one dict with another:
filter_plugins/vars_update.py
import copy
from collections.abc import Mapping


class FilterModule(object):

    def update_hostvars(self, _origin, overlay):
        # Recursively merge `overlay` into a deep copy of `_origin`,
        # leaving the original structure untouched.
        origin = copy.deepcopy(_origin)
        for k, v in overlay.items():
            if isinstance(v, Mapping):
                origin[k] = self.update_hostvars(origin.get(k, {}), v)
            else:
                origin[k] = v
        return origin

    def filters(self):
        return {"update_hostvars": self.update_hostvars}
... and using this filter to update all the variables:
- name: Include client file
  include_vars:
    file: "{{ client_file_path }}"
    name: client_overlay

- name: Update group_vars by template client
  set_fact:
    "{{ item.key }}": "{{ hostvars[inventory_hostname][item.key] | update_hostvars(item.value) }}"
  with_dict: "{{ client_overlay }}"
Using the examples given in this thread, I made my own solution:
The "external source" feeds in an inventory item using --extra-vars "#". The file content itself is uploaded as base64-encoded content and then decoded and written to the filesystem.
The external file has a list of overrides per role/group like so:
role_overrides: [{
    "groups": [
      "my-group"
    ],
    "overrides": {
      "foo": "value",
      "bar": "value"
    }
  },
but then jsonified obviously...
The filter module:
#!/usr/bin/env python
class FilterModule(object):

    def filters(self):
        return {
            "filter_hostvars_overrides": self.filter_hostvars_overrides,
        }

    def filter_hostvars_overrides(self, role_overrides, group_names):
        """
        Filter the overrides down to the ones that apply to this host, e.g.

        [
            {
                "groups": [
                    "my-group"
                ],
                "overrides": {
                    "foo": 42,
                }
            },
        ]

        :param group_names: list of groups this host is a member of
        :param role_overrides: document with all overrides; filtered using group_names
        :return: dict of items to be set
        """
        overrides = {}
        for per_group_overrides in role_overrides:
            groups = per_group_overrides.get("groups", [])
            if set(groups).intersection(set(group_names)):
                overrides.update(per_group_overrides.get("overrides", {}))
        return overrides
The play code:
- name: Apply group overrides
  set_fact:
    "{{ item.key }}": "{{ item.value }}"
  with_dict: "{{ role_overrides | filter_hostvars_overrides(group_names) }}"

How to simply access an object in a JSON with keys which are integers?

I have this example:
---
- hosts: localhost
  gather_facts: false
  vars:
    json1: {'disk_info': {'A': {'label': 'Hard disk 1'}, 'B': {'label': 'Hard disk 2'}}}
    json2: {'disk_info': {'0': {'label': 'Hard disk 1'}, '1': {'label': 'Hard disk 2'}}}
  tasks:
    - debug: msg="{{ json1.disk_info.A }}"
    - debug: msg="{{ item.value.label }}"
      loop: "{{ lookup('dict', json2.disk_info) }}"
      when: "'0' in item.key"
Is it possible to access json2.disk_info.0 the same way as json1.disk_info.A, i.e. without a lookup on a dict?
Yes, it is perfectly possible, but you have to make sure the key name is read as a string containing your digit and not interpreted as an integer index of a list.
Because of this, both of the following will raise an error:
json2.disk_info.0
json2.disk_info[0]
Therefore, the correct syntax for your data structure is:
json2.disk_info['0']
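Applied to the example above, a minimal sketch of the task (no dict lookup needed) would be:

- debug: msg="{{ json2.disk_info['0'].label }}"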

Ansible URI module not returning value

I am not sure what I'm doing wrong here. This is the uri task I have:
- name: Waiting for Entity Submission to complete
  uri:
    url: https://{{ endpoint_ip }}:9440/{{ endpoint_api_task }}/{{ intent_status.json.metadata.uuid }}
    url_username: "{{ endpoint_api_username }}"
    url_password: "{{ endpoint_api_password }}"
    validate_certs: false
    force_basic_auth: true
    return_content: true
    follow_redirects: all
    method: 'GET'
  register: v3_progress
  debugger: always
I'm not able to read the return value v3_progress.
In the debugger I see the KeyError below:
[PE] TASK: nat : Waiting for Entity Submission to complete (debug)> p task_vars['v3_progress']
***KeyError:KeyError('v3_progress',)
If I debug the task args - all arguments seem accurate enough and the task itself gets a 200:
[PE] TASK: nat : Waiting for Entity Submission to complete (debug)> p task.args
{'_ansible_check_mode': False,
'_ansible_debug': False,
'_ansible_diff': False,
'_ansible_keep_remote_files': False,
'_ansible_module_name': u'uri',
'_ansible_no_log': False,
'_ansible_remote_tmp': u'~/.ansible/tmp',
'_ansible_selinux_special_fs': ['fuse',
'nfs',
'vboxsf',
'ramfs',
'9p',
'vfat'],
'_ansible_shell_executable': u'/bin/sh',
'_ansible_socket': None,
'_ansible_string_conversion_action': u'warn',
'_ansible_syslog_facility': u'LOG_USER',
'_ansible_tmpdir': u'/home/nutanix/.ansible/tmp/ansible-tmp-1586646620.05-143519614604489/',
'_ansible_verbosity': 0,
'_ansible_version': '2.9.6',
u'follow_redirects': u'all',
u'force_basic_auth': True,
u'method': u'GET',
u'return_content': True,
u'url': u'https://xxxx/v3/images/ce405ddf-91f1-4399-b5b5-c6f12f1da79c',
u'url_password': u'xxxx',
u'url_username': u'admin',
u'validate_certs': False}
The registered variable will not be available in the debugger. The value of the registered variable v3_progress gets populated only when the task returns, and it is available for use only in subsequent tasks.
To view the result in the debugger, use p result._result.
You will be able to read the value of v3_progress in a subsequent task like:
- name: print uri result
  debug: msg="{{ v3_progress }}"

Building custom module Ansible

I am trying to build a custom module for our private cloud Infrastructure.
I followed this doc http://docs.ansible.com/ansible/latest/dev_guide/developing_modules_general.html
I created a my_module.py module file.
When I run ansible-playbook playbook/my_module.yml
Response:
PLAY [Create, Update, Delete VM] *********************************************************************************************************
TASK [Gathering Facts] ************************************************************************************************************************
ok: [localhost]
TASK [VM create] *************************************************************************************************************************
changed: [localhost]
TASK [dump test output] ***********************************************************************************************************************
ok: [localhost] => {
"msg": {
"changed": true,
"failed": false,
"message": "goodbye",
"original_message": "pcp_vm_ansible"
}
}
PLAY RECAP ************************************************************************************************************************************
localhost : ok=3 changed=1 unreachable=0 failed=0
Which means it is working fine as expected.
my_module.py:
from ansible.module_utils.basic import AnsibleModule


def run_module():
    module_args = dict(
        name=dict(type='str', required=True),
        new=dict(type='bool', required=False, default=False)
    )
    result = dict(
        changed=False,
        original_message='',
        message=''
    )
    module = AnsibleModule(
        argument_spec=module_args,
        supports_check_mode=True
    )
    if module.check_mode:
        return result
    result['original_message'] = module.params['name']
    result['message'] = 'goodbye'
    if module.params['new']:
        result['changed'] = True
    if module.params['name'] == 'fail me':
        module.fail_json(msg='You requested this to fail', **result)
    module.exit_json(**result)


def main():
    print("================== Main Called =======================")
    run_module()


if __name__ == '__main__':
    main()
I am trying to print logs to visualize my input data using print() or even logging:
print("================== Main Called =======================")
But nothing is getting printed to the console.
As per Conventions, Best Practices, and Pitfalls, "Modules must output valid JSON only. The top level return type must be a hash (dictionary) although they can be nested. Lists or simple scalar values are not supported, though they can be trivially contained inside a dictionary."
Effectively, the core runtime only communicates with the module via JSON, and the core runtime controls stdout, so standard print statements from the module are suppressed. If you want or need more information out of the execution runtime, then I suggest a Callback Plugin.
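A lightweight workaround (a sketch, not a replacement for a callback plugin): since the module can only report back through its JSON result, stash any diagnostic values in the result dict inside run_module() and dump the registered result from the playbook:

- name: VM create
  my_module:
    name: pcp_vm_ansible
  register: vm_result

- name: Show everything the module returned, including any extra diagnostic keys
  debug:
    var: vm_result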

Inconsistent ordering of parent:child group names in inventory file

I am now experiencing a problem in Ansible v2.1.0. In the case below,
[DEV:children]
DEV8
[DEV8]
thehost ansible_ssh_host=10.2.131.26 ansible_ssh_user=someuser1
when I run
{{ hostvars[inventory_hostname].group_names }}, it outputs:
TASK [debug] ************************************************************
ok: [thehost] => {
"msg": [
"DEV",
"DEV8"
]
}
Now, for another group of machines:
[PRODCTE:children]
CTE3
[CTE3]
thehost1 ansible_ssh_host=10.2.131.30 ansible_ssh_user=someuser2
output:
TASK [debug] *******************************************************************
ok: [thehost] => {
"msg": [
"CTE3",
"PRODCTE"
]
}
PROBLEM:
[PROD:children]
PRODA
[PRODA:children]
PROD1
[PROD1]
thehost2 ansible_ssh_host=10.2.3.33 ansible_ssh_user=someuser3
output:
TASK [debug] *******************************************************************
ok: [thehost] => {
"msg": [
"PROD",
"PROD1"
"PRODA"
]
}
Now, if Ansible orders these alphabetically, then consistency cannot be achieved, yet the output always has to be consistent. I mean, if group_names[0] or group_names[1] gives me different kinds of values for different groups based on alphabetical order, the playbooks cannot be standardized.
Anyway, even if I go with this behavior, I am trying to understand what factors determine how Ansible orders these values.
If alphabetically, then how was PROD1 placed before PRODA? Does Ansible give digits priority over letters here?
Why should it be parent->children?
I guess it is supposed to be alphabetically sorted. From the code:
results['group_names'] = sorted([ g.name for g in self.get_groups() if g.name != 'all'])
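So group_names is simply a lexicographic sort of the group names, which is why PROD1 sorts before PRODA ('1' comes before 'A' in ASCII); parent/child relationships play no part in the order. Rather than relying on positional indexes like group_names[0], an order-independent membership test is more robust, for example:

- debug:
    msg: "This host belongs to PRODA"
  when: "'PRODA' in group_names"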
