How does an Ansible module return facts?

I wrote an Ansible module, my_module, that needs to set some facts.
I defined the code below in the module:
....
response = {
    "hello": "world",
    "ansible_facts": {
        "my_data": "xjfdks"
    }
}
module.exit_json(changed=False, meta=response)
Now, in the playbook, after executing my_module I want to access the new fact, but it is not defined:
- my_module
- debug: msg="My new fact {{ my_data }}"
What is the correct way to do it?

You should set ansible_facts directly in the module's output, not nested inside meta.
To return all of the keys from response in your example:
module.exit_json(changed=False, **response)
Or to return only ansible_facts:
module.exit_json(changed=False, ansible_facts=response['ansible_facts'])
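With ansible_facts returned at the top level, Ansible merges its contents into the host's facts automatically, so the playbook from the question should then work unchanged:
- my_module
- debug:
    msg: "My new fact {{ my_data }}"
For the sample data above, the debug task prints "My new fact xjfdks".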

Related

How to pass ip-address from terraform to ansible [duplicate]

I am trying to create an Ansible inventory file using the local_file resource in Terraform (I am open to suggestions to do it in a different way).
module "vm" config:
resource "azurerm_linux_virtual_machine" "vm" {
for_each = { for edit in local.vm : edit.name => edit }
name = each.value.name
resource_group_name = var.vm_rg
location = var.vm_location
size = each.value.size
admin_username = var.vm_username
admin_password = var.vm_password
disable_password_authentication = false
network_interface_ids = [azurerm_network_interface.edit_seat_nic[each.key].id]
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
output "vm_ips" {
value = toset([
for vm_ips in azurerm_linux_virtual_machine.vm : vm_ips.private_ip_address
])
}
When I run terraform plan with the above configuration I get:
Changes to Outputs:
+ test = [
+ "10.1.0.4",
]
Now, in my main TF I have the configuration for local_file as follows:
resource "local_file" "ansible_inventory" {
filename = "./ansible_inventory/ansible_inventory.ini"
content = <<EOF
[vm]
${module.vm.vm_ips}
EOF
}
This returns the error below:
Error: Invalid template interpolation value
on main.tf line 92, in resource "local_file" "ansible_inventory":
90: content = <<EOF
91: [vm]
92: ${module.vm.vm_ips}
93: EOF
module.vm.vm_ips is set of string with 1 element
Cannot include the given value in a string template: string required.
Any suggestions on how to inject the list of IPs from the output into the local file while also being able to format the rest of the text in the file?
If you want the Ansible inventory to be statically sourced from a file in INI format, then you basically need to render a template in Terraform to produce the desired output.
module/templates/inventory.tmpl:
[vm]
%{ for ip in ips ~}
${ip}
%{ endfor ~}
An alternative suggestion from @mdaniel:
[vm]
${join("\n", ips)}
module/config.tf:
resource "local_file" "ansible_inventory" {
content = templatefile("${path.module}/templates/inventory.tmpl",
{ ips = module.vm.vm_ips }
)
filename = "${path.module}/ansible_inventory/ansible_inventory.ini"
file_permission = "0644"
}
A couple of additional notes though:
You can modify your output to be the entire map of objects of exported attributes like:
output "vms" {
value = azurerm_linux_virtual_machine.vm
}
and then you can access more information about the instances to populate in your inventory. Your templatefile argument would still be the module output, but the for expression(s) in the template would look considerably different depending upon what you want to add.
You can also utilize the YAML or JSON inventory formats for Ansible static inventory. With those, you can then leverage the yamldecode or jsondecode Terraform functions to make the HCL2 data structure transformation much easier. The template file would become a good bit cleaner in that situation for more complex inventories.
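For illustration, a minimal YAML static inventory equivalent to the INI file rendered above might look like this (using the single IP from the plan output in the question; the exact nesting depends on the groups and host variables you want to carry over):
all:
  children:
    vm:
      hosts:
        10.1.0.4: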

Generate list of IP addresses from start and end values with Ansible

Is there a way to generate a list of IP addresses between two arbitrary IPs (not from a subnet/range) with Ansible (v2.9)?
I've searched and the ipaddr filter looks like a good candidate, but from the documentation I couldn't figure out if it supports this.
I'm looking for a solution that allows me to get a list like
[ '10.0.0.123', '10.0.0.124', ... , '10.0.1.23' ]
from a task like
- name: generate IP list
  set_fact:
    ip_list: "{{ '10.0.0.123' | ipaddr_something('10.0.1.23') }}"
Create a filter plugin. For example
shell> cat filter_plugins/netaddr.py
import netaddr


def netaddr_iter_iprange(ip_start, ip_end):
    return [str(ip) for ip in netaddr.iter_iprange(ip_start, ip_end)]


class FilterModule(object):
    ''' Ansible filters. Interface to netaddr methods.
        https://pypi.org/project/netaddr/
    '''
    def filters(self):
        return {
            'netaddr_iter_iprange': netaddr_iter_iprange,
        }
Then, the task below creates the list:
- set_fact:
    ip_list: "{{ '10.0.0.123' | netaddr_iter_iprange('10.0.1.23') }}"

Override group_vars by external source

I have a pilot project keeping many common variables in group_vars.
group_vars/
    group1.yml
    group2.yml
    group3.yml
For different implementations (usually per client), I'd like to maintain a reserved file which overrides the content of group_vars. The content of that file could have the following format, e.g. client1.yml:
group1:
  var11_to_override: "foo"
  var12_to_override: "bar"
group2:
  var21_to_override: "foo"
  var22_to_override: "bar"
Is there a simple way to tell Ansible that the file client1.yml overrides the group_vars content?
The include_vars module could certainly be the first step, together with set_fact in a loop, but that would probably require complicated Jinja2 filter expressions...
Do I have to write a new module or filter to update hostvars?
Finally resolved with a custom filter that updates one dict with another:
filter_plugins/vars_update.py
import copy
# Mapping lives in collections.abc (collections.Mapping was removed in Python 3.10)
from collections.abc import Mapping


class FilterModule(object):

    def update_hostvars(self, _origin, overlay):
        origin = copy.deepcopy(_origin)
        for k, v in overlay.items():
            if isinstance(v, Mapping):
                origin[k] = self.update_hostvars(origin.get(k, {}), v)
            else:
                origin[k] = v
        return origin

    def filters(self):
        return {"update_hostvars": self.update_hostvars}
...and using this filter to update all variables:
- name: Include client file
  include_vars:
    file: "{{ client_file_path }}"
    name: client_overlay

- name: Update group_vars by template client
  set_fact:
    "{{ item.key }}": "{{ hostvars[inventory_hostname][item.key] | update_hostvars(item.value) }}"
  with_dict: "{{ client_overlay }}"
Using the examples given in this thread, I made my own solution:
The "external source" feeds in an inventory item using --extra-vars "#". The file content itself is uploaded as base64-encoded content and then decoded and written to the filesystem.
The external file has a list of overrides per role/group like so:
role_overrides: [{
    "groups": [
      "my-group"
    ],
    "overrides": {
      "foo": "value",
      "bar": "value",
    }
  },
but then jsonified obviously...
The filter module
#!/usr/bin/env python


class FilterModule(object):

    def filters(self):
        return {
            "filter_hostvars_overrides": self.filter_hostvars_overrides,
        }

    def filter_hostvars_overrides(self, role_overrides, group_names):
        """
        Filter the overrides down to the ones that apply to this host, e.g.

        [
            {
                "groups": [
                    "my-group"
                ],
                "overrides": {
                    "foo": 42,
                }
            },
        ]

        :param group_names: list of groups this host is a member of
        :param role_overrides: document with all overrides; to be filtered using group_names
        :return: items to be set
        """
        overrides = {}
        for idx, per_group_overrides in enumerate(role_overrides):
            groups = per_group_overrides.get("groups", [])
            if set(groups).intersection(set(group_names)):
                overrides.update(per_group_overrides.get("overrides", {}))
        return overrides
The play code:
- name: Apply group overrides
  set_fact:
    "{{ item.key }}": "{{ item.value }}"
  with_dict: "{{ role_overrides | filter_hostvars_overrides(group_names) }}"

How do I assure that a path variable ends with slash in Ansible?

I want to ensure that a variable representing a folder, set by the user, has a trailing slash, so I can avoid bugs related to a missing slash or a double slash.
Mainly I am considering a repair task like:
- when: my_path[-1] != '/'
  set_fact:
    my_path: "{{ my_path }}/"
If this condition can be written in pure Jinja2, even better, as I could avoid creating an extra set_fact and put that trick inside a vars block.
Is there any better way to implement that? Apparently there is no built-in Jinja2 filter to format paths.
You can write your own filter.
In ansible.cfg you can specify your filter directory:
[defaults]
filter_plugins=<path/to/your/library/of/filters>
And now you put in <path/to/your/library/of/filters>/path_filter.py:
from ansible.module_utils import basic


def canonical_path(path):
    ''' Verify that path ends with / and add / if not '''
    if path[-1] != '/':
        return path + '/'
    return path


class FilterModule(object):
    ''' Ansible filter to provide canonical_path '''
    def filters(self):
        return {'canonical_path': canonical_path}
That allows you to write in your playbooks
- name: Show canonical_path
  debug:
    msg: "Path is : {{ mypath | canonical_path }}"

Ansible callback plugin: how to get play attribute values with variables expanded?

I have a play below and am trying to get the resolved value of the remote_user attribute inside the callback plugin.
- name: test play
  hosts: "{{ hosts_pattern }}"
  strategy: free
  gather_facts: no
  remote_user: "{{ my_remote_user if my_remote_user is defined else 'default_user' }}"

  tasks:
    - name: a test task
      shell: whoami && hostname
I am currently accessing the play field attribute as follows:
def v2_playbook_on_play_start(self, play):
    self._play_remote_user = play.remote_user
And I also tried saving the remote_user within v2_playbook_on_task_start to see if this does the trick, as this is where the templated task name is made available.
def v2_playbook_on_task_start(self, task, is_conditional):
    self._tasks[task._uuid].remote_user = task.remote_user
    self._tasks[task._uuid].remote_user_2 = task._get_parent_attribute('remote_user')
However, all the cases above give me {{ my_remote_user if my_remote_user is defined else 'default_user' }} instead of the expanded/resolved value.
In general, is there a neat way to get a collection of all play attributes with resolved values as defined in the playbook?
Happily, this is much easier for action plugins.
The ActionBase class already has templar and loader properties.
One can iterate over task_vars and render everything with Templar.template:
from ansible.utils.vars import merge_hash

# start from an empty dict (or from self._task.args if the module args should be preserved)
new_module_args = {}
for k in task_vars:
    new_module_args = merge_hash(
        new_module_args,
        {k: self._templar.template(task_vars.get(k, None))}
    )
and then call the module:
result = self._execute_module(
    module_name='my_module',
    task_vars=task_vars,
    module_args=new_module_args
)
I don't think there is an easy way to achieve this.
PlayContext is templated inside the task_executor here.
And this happens after all callback methods have already been notified.
So you would have to use the Templar class manually (but I'm not sure you can get the correct variable context for it to work correctly).
Credit goes to Konstantin's tip to use the Templar class.
I came up with a solution for Ansible 2.3.1; not entirely sure if it's the optimal one, but it seems to work. Here is some example code:
from ansible.plugins.callback import CallbackBase
from ansible.template import Templar
from ansible.plugins.strategy import SharedPluginLoaderObj


class CallbackModule(CallbackBase):

    CALLBACK_VERSION = 2.0
    CALLBACK_TYPE = 'notification'
    CALLBACK_NAME = 'your_name'

    def __init__(self):
        super(CallbackModule, self).__init__()
        # other shenanigans

    def v2_playbook_on_start(self, playbook):
        self.playbook = playbook

    def v2_playbook_on_play_start(self, play):
        self.play = play

    def _all_vars(self, host=None, task=None):
        # host and task need to be specified in case 'magic variables'
        # (host vars, group vars, etc.) need to be loaded as well
        return self.play.get_variable_manager().get_vars(
            loader=self.playbook.get_loader(),
            play=self.play,
            host=host,
            task=task
        )

    def v2_runner_on_ok(self, result):
        templar = Templar(loader=self.playbook.get_loader(),
                          shared_loader_obj=SharedPluginLoaderObj(),
                          variables=self._all_vars(host=result._host, task=result._task))
        remote_user = templar.template(self.play.remote_user)
        # do something with templated remote_user
