I'm trying to merge two hashes in Ansible, but I'm running into problems. The hash is mostly merged, but the lists inside are not.
I'm using the combine filter for the merge:
config_hash: "{{ inventory_config | combine(host_config, recursive=True, list_merge='append_rp') }}"
Here are the two hashes/dicts I'd like to merge:
inventory_config:
  exoscale:
    inventory_name: test_inventory
    security_groups:
      - name: rancher
        state: present
        rules:
          - port: 22
            cidr: "1.2.3.4/32"
            type: egress
          - port: 80
            cidr: "0.0.0.0/32"
            type: egress
host_config:
  exoscale:
    name: host
    disk_size: 25 GiB
    security_groups:
      - name: rancher
        state: added
        rules:
          - port: 21
            cidr: "1.2.3.4/32"
            type: ingress
          - port: 8080
            cidr: "0.0.0.0/32"
            type: ingress
The result:
TASK [debug] *******************************************************************
ok: [instance] => {
"config_hash": {
"exoscale": {
"disk_size": "25 GiB",
"inventory_name": "test_inventory",
"name": "host",
"security_groups": [
{
"name": "rancher",
"rules": [
{
"cidr": "1.2.3.4/32",
"port": 22,
"type": "egress"
},
{
"cidr": "0.0.0.0/32",
"port": 80,
"type": "egress"
}
],
"state": "present"
},
{
"name": "rancher",
"rules": [
{
"cidr": "1.2.3.4/32",
"port": 21,
"type": "ingress"
},
{
"cidr": "0.0.0.0/32",
"port": 8080,
"type": "ingress"
}
],
"state": "added"
}
]
}
}
}
The result I wanted:
"config_hash": {
"exoscale": {
"disk_size": "25 GiB",
"inventory_name": "test_inventory",
"name": "host",
"security_groups": [
{
"name": "rancher",
"rules": [
{
"cidr": "1.2.3.4/32",
"port": 22,
"type": "egress"
},
{
"cidr": "0.0.0.0/32",
"port": 80,
"type": "egress"
},
{
"cidr": "1.2.3.4/32",
"port": 21,
"type": "ingress"
},
{
"cidr": "0.0.0.0/32",
"port": 8080,
"type": "ingress"
}
],
"state": "added"
}
]
}
}
I tried some other options, and it seems that list_merge='append_rp' can combine lists of simple items but not lists of hashes.
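For example, a minimal sketch (with made-up scalar port lists) merges the way I would expect, because scalar duplicates are removed before appending:

- hosts: localhost
  gather_facts: false
  vars:
    a: { ports: [22, 80] }
    b: { ports: [80, 443] }
  tasks:
    - debug:
        msg: "{{ a | combine(b, recursive=True, list_merge='append_rp') }}"
      # expected: ports: [22, 80, 443]

With lists of hashes, the two "rancher" dicts differ in other keys, so both are kept instead of being merged.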
Any ideas? Thanks for the help!
Your question is complex; I suggest you use a custom filter:
You create a folder filter_plugins in your playbook folder (I have named the file myfilters.py and the filter custom).
myfilters.py in the folder filter_plugins:
#!/usr/bin/python

class FilterModule(object):
    def filters(self):
        return {
            'custom': self.custom
        }

    def custom(self, invent, host):
        isecu = invent['exoscale']['security_groups']
        hsecu = host['exoscale']['security_groups']
        inventory_name = invent['exoscale']['inventory_name']
        name = host['exoscale']['name']
        disk_size = host['exoscale']['disk_size']
        security_groups = []
        inames_present = [elem['name'] for elem in isecu]
        # if you have present and added in inventory_config uncomment next lines
        # inames_present = []
        # inames_added = []
        # for elem in isecu:
        #     if elem['state'] == 'present':
        #         inames_present.append(elem['name'])
        #     else:
        #         inames_added.append(elem['name'])
        for it in hsecu:
            secuname = it['name']
            state = it['state']
            if secuname in inames_present:
                if state == 'present':  # overwrite data
                    rules = it['rules']
                else:  # merge
                    rules = it['rules'] + isecu[inames_present.index(secuname)]['rules']
            else:
                rules = it['rules']
                state = 'added'
            security_groups.append({'name': secuname, 'rules': rules, 'state': state})
        result = {'exoscale': {'disk_size': disk_size, 'name': name,
                               'inventory_name': inventory_name,
                               'security_groups': security_groups}}
        # print(result)
        return result
the playbook:
---
- hosts: localhost
  vars:
    inventory_config:
      exoscale:
        inventory_name: test_inventory
        security_groups:
          - name: rancher
            state: present
            rules:
              - port: 22
                cidr: "1.2.3.4/32"
                type: egress
              - port: 80
                cidr: "0.0.0.0/32"
                type: egress
    host_config:
      exoscale:
        name: host
        disk_size: 25 GiB
        security_groups:
          - name: rancher
            state: added
            rules:
              - port: 21
                cidr: "1.2.3.4/32"
                type: ingress
              - port: 8080
                cidr: "0.0.0.0/32"
                type: ingress
  tasks:
    - name: set variable
      set_fact:
        config_hash: "{{ inventory_config | custom(host_config) }}"

    - name: Display
      debug:
        var: config_hash
result:
ok: [localhost] => {
"config_hash": {
"exoscale": {
"disk_size": "25 GiB",
"inventory_name": "test_inventory",
"name": "host",
"security_groups": [
{
"name": "rancher",
"rules": [
{
"cidr": "1.2.3.4/32",
"port": 21,
"type": "ingress"
},
{
"cidr": "0.0.0.0/32",
"port": 8080,
"type": "ingress"
},
{
"cidr": "1.2.3.4/32",
"port": 22,
"type": "egress"
},
{
"cidr": "0.0.0.0/32",
"port": 80,
"type": "egress"
}
],
"state": "added"
}
]
}
}
}
This gives you an idea of how to use a custom filter in Ansible. You just need to adapt the Python code.
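For completeness, an untested pure-Jinja sketch (assuming each security group name appears at most once per list, and an Ansible version that already supports list_merge) that merges the security_groups entries by name without a plugin; groups that exist only in host_config would need an extra pass:

- name: Merge security groups by name (sketch)
  set_fact:
    merged_groups: "{{ merged_groups | default([]) + [ item | combine(host_group, recursive=True, list_merge='append') ] }}"
  vars:
    host_group: "{{ host_config.exoscale.security_groups | selectattr('name', 'equalto', item.name) | first | default({}) }}"
  loop: "{{ inventory_config.exoscale.security_groups }}"

- name: Assemble the final hash (sketch)
  set_fact:
    config_hash: "{{ inventory_config | combine(host_config, recursive=True) | combine({'exoscale': {'security_groups': merged_groups}}, recursive=True) }}"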
Related
How to read/display the following response (e.g. name, srcintf, dstintf) from FortiGate Firewall using Ansible? Or is there any way I can read this JSON output from a file and display the fields I want?
{
"changed": false,
"meta": {
"http_method": "GET",
"size": 2,
"matched_count": 2,
"next_idx": 1,
"revision": "ac9c4e1d722b74695dee4fb3ce4fcd12",
"results": [
{
"policyid": 1,
"q_origin_key": 1,
"status": "enable",
"name": "test-policy01",
"uuid": "c4de3298-97ce-51ed-ccba-cafc556ba9e0",
"uuid-idx": 14729,
"srcintf": [
{
"name": "port2",
"q_origin_key": "port2"
}
],
"dstintf": [
{
"name": "port1",
"q_origin_key": "port1"
}
],
"action": "accept",
"ztna-status": "disable",
"srcaddr": [
{
"name": "all",
"q_origin_key": "all"
}
],
"dstaddr": [
{
"name": "all",
"q_origin_key": "all"
}
],
"policy-expiry": "disable",
"policy-expiry-date": "0000-00-00 00:00:00",
"service": [
{
"name": "ALL",
"q_origin_key": "ALL"
}
],
"tos": "0x00",
"sgt-check": "disable",
"sgt": []
},
{
"policyid": 2,
"q_origin_key": 2,
"status": "enable",
"name": "test-policy-02",
"uuid": "534b6c9c-97d1-51ed-7aa8-7544c628c7ea",
"uuid-idx": 14730,
"srcintf": [
{
"name": "port1",
"q_origin_key": "port1"
}
],
"dstintf": [
{
"name": "port2",
"q_origin_key": "port2"
}
],
"action": "accept",
"nat64": "disable",
"nat46": "disable",
"ztna-status": "disable",
"srcaddr": [
{
"name": "all",
"q_origin_key": "all"
}
],
"dstaddr": [
{
"name": "login.microsoft.com",
"q_origin_key": "login.microsoft.com"
}
],
"srcaddr6": [],
"reputation-direction6": "destination",
"policy-expiry-date": "0000-00-00 00:00:00",
"service": [
{
"name": "ALL_ICMP6",
"q_origin_key": "ALL_ICMP6"
}
],
"tos": "0x00",
"webcache": "disable",
"webcache-https": "disable",
"sgt-check": "disable",
"sgt": []
}
],
"vdom": "root",
"path": "firewall",
"name": "policy",
"version": "v7.2.3",
"build": 1262
},
"invocation": {
"module_args": {
"vdom": "root",
"access_token": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"selector": "firewall_policy",
"selectors": null
}
},
"_ansible_no_log": false
}
Expected result:
result:
  test-policy-02:
    dstintf:
      - name: port2
        q_origin_key: port2
    srcintf:
      - name: port1
        q_origin_key: port1
  test-policy01:
    dstintf:
      - name: port1
        q_origin_key: port1
    srcintf:
      - name: port2
        q_origin_key: port2
Unfortunately, there is absolutely no description or any further information.
However, regarding
How to read/display the following response (e.g. name, srcintf, dstintf) from FortiGate Firewall using Ansible?
you may have a look at the following simple and lazy approach with loop
---
- hosts: localhost
  become: false
  gather_facts: false

  tasks:

    - name: Include vars of stuff.yaml into the 'stuff' variable (2.2).
      ansible.builtin.include_vars:
        file: stuff.json
        name: stuff

    - name: Show list of dict
      debug:
        msg: "{{ stuff.meta.results }}"

    - name: Print key:value
      debug:
        msg:
          - "name: {{ item.name }}"
          - "{{ item.srcintf }}"
          - "{{ item.dstintf }}"
      loop_control:
        label: "policyid: {{ item.policyid }}"
      loop: "{{ stuff.meta.results }}"
resulting in an output of
TASK [Print key:value] *****************
ok: [localhost] => (item=policyid: 1) =>
msg:
- 'name: test-policy01'
- - name: port2
q_origin_key: port2
- - name: port1
q_origin_key: port1
ok: [localhost] => (item=policyid: 2) =>
msg:
- 'name: test-policy-02'
- - name: port1
q_origin_key: port1
- - name: port2
q_origin_key: port2
Further Documentation
include_vars module – Load variables from files, dynamically within a task
Loops
Or is there any way I can read this JSON output from a file and display the fields I want?
To get familiar with the data structure, respectively the JSON response you've provided in your example, you could start with something like JSONPathFinder. It will result in paths like
x.meta.results[0].name
x.meta.results[0].srcintf
x.meta.results[0].dstintf
x.meta.results[1].name
x.meta.results[1].srcintf
x.meta.results[1].dstintf
for the provided keys.
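Translated into Ansible, those paths can be used directly; a short sketch using the stuff variable from the playbook above:

- name: Show interfaces of the first policy
  debug:
    msg:
      - "{{ stuff.meta.results[0].name }}"
      - "{{ stuff.meta.results[0].srcintf }}"
      - "{{ stuff.meta.results[0].dstintf }}"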
It is also possible to use jq on the CLI
jq keys stuff.json
[
"_ansible_no_log",
"changed",
"invocation",
"meta"
]
and just proceed further with
jq '.meta.results' stuff.json
jq '.meta.results[0]' stuff.json
jq '.meta.results[1]' stuff.json
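Back in Ansible, a hedged sketch of building the structure shown under "Expected result" (a dict keyed by policy name) could look like:

- name: Build result dict keyed by policy name
  set_fact:
    result: "{{ result | default({}) | combine({item.name: {'srcintf': item.srcintf, 'dstintf': item.dstintf}}) }}"
  loop: "{{ stuff.meta.results }}"
  loop_control:
    label: "{{ item.name }}"

- name: Show result
  debug:
    var: result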
Further Q&A
How to get key names from JSON using jq?
Updated with suggestions from larsks.
With the following structure
"intf_output_ios": {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"failed": false,
"gathered": [
{
"name": "GigabitEthernet0/0"
},
{
"mode": "trunk",
"name": "GigabitEthernet0/1",
"trunk": {
"allowed_vlans": [
"10",
"20",
"30",
"99",
"100"
],
"encapsulation": "dot1q"
}
},
{
"mode": "trunk",
"name": "GigabitEthernet0/2",
"trunk": {
"allowed_vlans": [
"10",
"20",
"30",
"99",
"100"
],
"encapsulation": "dot1q"
}
},
{
"access": {
"vlan": 30
},
"mode": "access",
"name": "GigabitEthernet0/3"
},
{
"name": "GigabitEthernet1/0"
},
{
"name": "GigabitEthernet1/1"
},
{
"name": "GigabitEthernet1/2"
},
{
"name": "GigabitEthernet1/3"
},
{
"name": "GigabitEthernet2/0"
},
{
"name": "GigabitEthernet2/1"
},
{
"name": "GigabitEthernet2/2"
},
{
"name": "GigabitEthernet2/3"
},
{
"name": "GigabitEthernet3/0"
},
{
"name": "GigabitEthernet3/1"
},
{
"name": "GigabitEthernet3/2"
},
{
"access": {
"vlan": 99
},
"mode": "access",
"name": "GigabitEthernet3/3"
}
]
}
To print only the ports in VLAN 30, use the following:
- name: "P901T6: Set fact to include only access ports - IOS"
set_fact:
access_ports_ios_2: "{{ intf_output_ios | json_query(query) }}"
vars:
query: >-
gathered[?access.vlan==`30`]
- name: "P901T7: Dump list of access ports - IOS"
debug:
var=access_ports_ios_2
NOTE: It is important to use `30` (with backticks) and not '30' (with quotes).
I have gone through https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#managing-list-variables without really understanding how to fix this. If someone has a good link, that would be very useful.
With a structure like
ok: [access01] => {
"access_ports_ios": [
{
"access": {
"vlan": 30
},
"mode": "access",
"name": "GigabitEthernet0/3"
},
{
"access": {
"vlan": 99
},
"mode": "access",
"name": "GigabitEthernet3/3"
}
]
}
To get the ports in VLAN 30, use:
- debug:
    var: access_ports_ios|json_query(query)
  vars:
    query: >-
      [?access.vlan==`30`]
Note:
If you want to use a variable for the VLAN instead of hard-coding it, I had to do as follows:
- name: Debug 4
  debug:
    var: access_ports_ios|json_query('[?access.vlan==`{{ src_vlan | int }}`]')
You're asking for gathered.access, but gathered is a list and does not have an access attribute. You want "all items from gathered for which access.vlan is 30" (and note that the value of access.vlan is an integer, not a string):
- debug:
    var: intf_output_ios|json_query(query)
  vars:
    query: >-
      gathered[?access.vlan==`30`]
Which, given your example input, produces:
TASK [debug] *******************************************************************
ok: [localhost] => {
"intf_output_ios|json_query(query)": [
{
"access": {
"vlan": 30
},
"mode": "access",
"name": "GigabitEthernet0/3"
}
]
}
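If the VLAN needs to come from a variable rather than being hard-coded, one option (a sketch, assuming a src_vlan variable) is to build the expression in task vars so the moustaches are resolved before json_query sees them:

- debug:
    var: intf_output_ios | json_query(query)
  vars:
    src_vlan: 30
    query: "gathered[?access.vlan==`{{ src_vlan | int }}`]"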
I'm going to reiterate advice I often give for json_query questions: use something like jpterm or the JMESPath website to test JMESPath expressions against your actual data. This makes it much easier to figure out where an expression might be going wrong.
There is a JSON output which I am trying to parse. I registered the output into a variable named instance_ip.
Here is the json output:
{
"msg": {
"instances": [
{
"root_device_type": "ebs",
"private_dns_name": "",
"cpu_options": {
"core_count": 2,
"threads_per_core": 1
},
"security_groups": [],
"state_reason": {
"message": "Client.UserInitiatedShutdown: User initiated shutdown",
"code": "Client.UserInitiatedShutdown"
},
"monitoring": {
"state": "disabled"
},
"ebs_optimized": false,
"state": {
"code": 48,
"name": "terminated"
},
"client_token": "test-Logst-14O6L4IETB05E",
"virtualization_type": "hvm",
"architecture": "x86_64",
"tags": {
"sg:environment": "TST",
"Name": "logstash1",
"aws:cloudformation:logical-id": "Logstash1A1594E87",
"sg:owner": "Platforms#paparapa.com",
"aws:cloudformation:stack-name": "test-three-ec2-instances-elk-demo",
"elastic_role": "logstash",
"sg:function": "Storage"
},
"key_name": "AWS_key",
"image_id": "ami-09f765d333a8ebb4b",
"state_transition_reason": "User initiated (2021-01-31 09:46:23 GMT)",
"hibernation_options": {
"configured": false
},
"capacity_reservation_specification": {
"capacity_reservation_preference": "open"
},
"public_dns_name": "",
"block_device_mappings": [],
"metadata_options": {
"http_endpoint": "enabled",
"state": "pending",
"http_tokens": "optional",
"http_put_response_hop_limit": 1
},
"placement": {
"group_name": "",
"tenancy": "default",
"availability_zone": "ap-southeast-2a"
},
"enclave_options": {
"enabled": false
},
"ami_launch_index": 0,
"ena_support": true,
"network_interfaces": [],
"launch_time": "2021-01-31T09:44:51+00:00",
"instance_id": "i-0fa5dbb869833d7c6",
"instance_type": "t2.medium",
"root_device_name": "/dev/xvda",
"hypervisor": "xen",
"product_codes": []
},
{
"root_device_type": "ebs",
"private_dns_name": "ip-10-x-x-x.ap-southeast-2.compute.internal",
"cpu_options": {
"core_count": 2,
"threads_per_core": 1
},
"source_dest_check": true,
"monitoring": {
"state": "disabled"
},
"subnet_id": "subnet-0d5f856afab8f0eec",
"ebs_optimized": false,
"iam_instance_profile": {
"id": "AIPARWXXVHXJWC2FL4AI6",
"arn": "arn:aws:iam::instance-profile/test-three-ec2-instances-elk-demo-Logstash1InstanceProfileC3035819-1F2LI7JM16FVM"
},
"state": {
"code": 16,
"name": "running"
},
"security_groups": [
{
"group_id": "sg-0e5dffa834a036fab",
"group_name": "Ansible_sec_group"
}
],
"client_token": "test-Logst-8UF6RX33BH06",
"virtualization_type": "hvm",
"architecture": "x86_64",
"public_ip_address": "3.x.x.x",
"tags": {
"Name": "logstash1",
"aws:cloudformation:logical-id": "Logstash1A1594E87",
"srg:environment": "TST",
"aws:cloudformation:stack-id": "arn:aws:cloudformation:ap-southeast-2:117557247443:stack/test-three-ec2-instances-elk-demo/ca8ef2b0-63ad-11eb-805f-02630ffccc8c",
"sg:function": "Storage",
"aws:cloudformation:stack-name": "test-three-ec2-instances-elk-demo",
"elastic_role": "logstash",
"sg:owner": "Platforms#paparapa.com"
},
"key_name": "AWS_SRG_key",
"image_id": "ami-09f765d333a8ebb4b",
"ena_support": true,
"hibernation_options": {
"configured": false
},
"capacity_reservation_specification": {
"capacity_reservation_preference": "open"
},
"public_dns_name": "ec2-3-x-x-x.ap-southeast-2.compute.amazonaws.com",
"block_device_mappings": [
{
"device_name": "/dev/xvda",
"ebs": {
"status": "attached",
"delete_on_termination": true,
"attach_time": "2021-01-31T10:22:21+00:00",
"volume_id": "vol-058662934ffba3a68"
}
}
],
"metadata_options": {
"http_endpoint": "enabled",
"state": "applied",
"http_tokens": "optional",
"http_put_response_hop_limit": 1
},
"placement": {
"group_name": "",
"tenancy": "default",
"availability_zone": "ap-southeast-2a"
},
"enclave_options": {
"enabled": false
},
"ami_launch_index": 0,
"hypervisor": "xen",
"network_interfaces": [
{
"status": "in-use",
"description": "",
"subnet_id": "subnet-0d5f856afab8f0eec",
"source_dest_check": true,
"interface_type": "interface",
"ipv6_addresses": [],
"network_interface_id": "eni-09b045668ac59990c",
"private_dns_name": "ip-10-x-x-x.ap-southeast-2.compute.internal",
"attachment": {
"status": "attached",
"device_index": 0,
"attachment_id": "eni-attach-0700cd11dfb27e2dc",
"delete_on_termination": true,
"attach_time": "2021-01-31T10:22:20+00:00"
},
"private_ip_addresses": [
{
"private_ip_address": "10.x.x.x",
"private_dns_name": "ip-10-x-x-x.ap-southeast-2.compute.internal",
"association": {
"public_ip": "3.x.x.x",
"public_dns_name": "ec2-3-x-x-x.ap-southeast-2.compute.amazonaws.com",
"ip_owner_id": "amazon"
},
"primary": true
}
],
"mac_address": "02:d1:13:01:59:b2",
"private_ip_address": "10.x.x.x",
"vpc_id": "vpc-0016dcdf5abe4fef0",
"groups": [
{
"group_id": "sg-0e5dffa834a036fab",
"group_name": "Ansible_sec_group"
}
],
"association": {
"public_ip": "3.x.x.x",
"public_dns_name": "ec2-3-x-x-x.ap-southeast-2.compute.amazonaws.com",
"ip_owner_id": "amazon"
},
"owner_id": "117557247443"
}
],
"launch_time": "2021-01-31T10:22:20+00:00",
"instance_id": "i-0482bb8ca1bef6006",
"instance_type": "t2.medium",
"root_device_name": "/dev/xvda",
"state_transition_reason": "",
"private_ip_address": "10.x.x.x",
"vpc_id": "vpc-0016dcdf5abe4fef0",
"product_codes": []
}
],
"failed": false,
"changed": false
},
"_ansible_verbose_always": true,
"_ansible_no_log": false,
"changed": false
}
The goal is to get the private IP address and append the port number.
With the following task I got a list with the node IP address ["10.x.x.x"]:
- name: Getting EC2 instance ip address
  set_fact:
    instance_ip: "{{ logstash_instance | json_query('instances[*].network_interfaces[*].private_ip_address') | flatten }}"
With the next task in the play I am trying to append the port number, but I keep getting
"['10.x.x.x:5044']"
- name: Get everything between quotes and append port 5044
  set_fact:
    logstash_hosts: "{{ instance_ip | map('regex_replace', '^(.*)$', '\\1:5044') | list }}"
Here is the template output:
# ------------------------------ Logstash Output -------------------------------
output.logstash:
  hosts: "['10.x.x.x:5044']"
I need to get rid of the double quotes and pass the clean variable ['10.x.x.x:5044'] to my template file.
You can try creating a new list variable with the port number appended to each element, using this approach:
- set_fact:
    logstash_hosts: "{{ logstash_hosts|default([]) + [ item ~ ':5044' ] }}"
  with_items: "{{ instance_ip }}"
Then in template:
output.logstash:
  hosts: {{ logstash_hosts|to_yaml }}
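If you prefer the inline list format from your output, to_json should also render the list without the surrounding quotes, since a JSON array is valid YAML flow syntax (a sketch):

output.logstash:
  hosts: {{ logstash_hosts | to_json }}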
Also, since the Logstash configuration is a YAML-formatted file, you can use YAML list syntax and directly use the instance_ip variable (avoiding set_fact). Then the template will look like this:
output.logstash:
  hosts:
{% for ip in instance_ip %}
    - {{ ip }}:5044
{% endfor %}
I have the following snippet where I install an OS on a virtual machine using Ansible; after it finishes, the VM is stopped so I can continue with the rest of the tasks. I am collecting facts from the Red Hat Virtualization Manager regarding the state of the VM, and I want to keep waiting until the status of the VM changes from up to down so I can proceed. How can I code this?
# I am kickstarting the VM
- name: Installing OS
  ovirt_vms:
    state: running
    name: "{{ vm_name }}"
    initrd_path: iso://initrd.img
    kernel_path: iso://vmlinuz
    kernel_params: initrd=initrd.img inst.stage2=cdrom inst.ks=ftp://10.0.1.2/pub/ks.cfg net.ifnames=0 biosdevname=0 BOOT_IMAGE=vmlinuz
# Getting facts about the VM
- name: Gather VM Status
  ovirt_vms_facts:
    pattern: name={{ vm_name }}

- name: Register VM Status
  debug:
    msg: "{{ ovirt_vms[0].status }}"
  register: vm_status
# Should keep probing the value of vm_status until it changes from up to down.
????????????? --> What should I do here?
# When the status changes, continue the playbook
I tried to parse the ovirt_vms I gathered from ovirt_vms_facts, and I got the following:
{
"_ansible_parsed": true,
"invocation": {
"module_args": {
"all_content": false,
"pattern": "name=as-vm-type1",
"nested_attributes": [],
"case_sensitive": true,
"fetch_nested": false,
"max": null
}
},
"changed": false,
"_ansible_no_log": false,
"ansible_facts": {
"ovirt_vms": [
{
"disk_attachments": [],
"origin": "ovirt",
"sso": {
"methods": []
},
"affinity_labels": [],
"placement_policy": {
"affinity": "migratable"
},
"watchdogs": [],
"creation_time": "2018-07-15 13:54:10.565000+02:00",
"snapshots": [],
"graphics_consoles": [],
"cluster": {
"href": "/ovirt-engine/api/clusters/a5272863-38a8-469d-998e-c1e1f26f4f5a",
"id": "a5272863-38a8-469d-998e-c1e1f26f4f5a"
},
"href": "/ovirt-engine/api/vms/08406dad-5173-4241-8d42-904ddf3d096a",
"migration": {
"auto_converge": "inherit",
"compressed": "inherit"
},
"io": {
"threads": 0
},
"migration_downtime": -1,
"id": "08406dad-5173-4241-8d42-904ddf3d096a",
"high_availability": {
"priority": 0,
"enabled": false
},
"cdroms": [],
"statistics": [],
"usb": {
"enabled": false
},
"display": {
"allow_override": false,
"disconnect_action": "LOCK_SCREEN",
"file_transfer_enabled": true,
"copy_paste_enabled": true,
"secure_port": 5900,
"smartcard_enabled": false,
"single_qxl_pci": false,
"type": "spice",
"monitors": 1,
"address": "10.254.148.74"
},
"nics": [],
"tags": [],
"name": "as-vm-type1",
"bios": {
"boot_menu": {
"enabled": false
}
},
"stop_time": "2018-07-15 13:54:10.569000+02:00",
"template": {
"href": "/ovirt-engine/api/templates/00000000-0000-0000-0000-000000000000",
"id": "00000000-0000-0000-0000-000000000000"
},
"memory": 42949672960,
"type": "server",
"katello_errata": [],
"numa_tune_mode": "interleave",
"status": "up",
"next_run_configuration_exists": false,
"delete_protected": false,
"sessions": [],
"start_time": "2018-07-15 13:54:14.079000+02:00",
"quota": {
"id": "ad014a63-fd76-42da-8369-57dae2dd5979"
},
"applications": [],
"host": {
"href": "/ovirt-engine/api/hosts/56a65d3b-1c0a-4b2a-9c6c-aa96262d9502",
"id": "56a65d3b-1c0a-4b2a-9c6c-aa96262d9502"
},
"memory_policy": {
"max": 171798691840,
"guaranteed": 42949672960
},
"numa_nodes": [],
"permissions": [],
"stateless": false,
"reported_devices": [],
"large_icon": {
"href": "/ovirt-engine/api/icons/2971ddbe-1dbf-4af8-b86a-078cbbe66419",
"id": "2971ddbe-1dbf-4af8-b86a-078cbbe66419"
},
"storage_error_resume_behaviour": "auto_resume",
"cpu_profile": {
"href": "/ovirt-engine/api/cpuprofiles/34000c79-d669-41ef-8d2a-d37d7f925c3c",
"id": "34000c79-d669-41ef-8d2a-d37d7f925c3c"
},
"time_zone": {
"name": "Etc/GMT"
},
"run_once": true,
"original_template": {
"href": "/ovirt-engine/api/templates/00000000-0000-0000-0000-000000000000",
"id": "00000000-0000-0000-0000-000000000000"
},
"start_paused": false,
"host_devices": [],
"small_icon": {
"href": "/ovirt-engine/api/icons/28054380-4723-42db-a8e5-fed8a3778199",
"id": "28054380-4723-42db-a8e5-fed8a3778199"
},
"os": {
"boot": {
"devices": [
"hd",
"cdrom"
]
},
"type": "rhel_7x64"
},
"cpu": {
"architecture": "x86_64",
"topology": {
"cores": 1,
"threads": 1,
"sockets": 8
}
},
"cpu_shares": 1024
}
]
}
}
You can do it as follows:
- name: Wait for VMs to be down
  ovirt_vms_facts:
    auth: "{{ ovirt_auth }}"
    pattern: "name={{ vm_name }}"
  until: "ovirt_vms[0].status == 'down'"
  retries: 5
  delay: 10
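A variation of the same idea (a sketch) registers the result and evaluates the registered variable, which can be easier to read:

- name: Wait for VM to be down
  ovirt_vms_facts:
    auth: "{{ ovirt_auth }}"
    pattern: "name={{ vm_name }}"
  register: vm_info
  until: vm_info.ansible_facts.ovirt_vms[0].status == 'down'
  retries: 5
  delay: 10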
You can do it as below:
- name: Register VM Status
  debug:
    msg: "{{ ovirt_vms[0].status }}"
  register: vm_status
  until: vm_status.stdout.find("down") != -1
  retries: 10
  delay: 5
Here it retries 10 times with a delay of 5 seconds.
How can I merge two lists of hashes based on a key/value pair, using Ansible 2.4.4?
"foo": [
{
"hostname": "web1.example.com",
"guid": "73e85eb2-2ad5-4699-8a09-adf658a11ff2"
},
{
"hostname": "web2.example.com",
"guid": "827025fe-f13c-4fc8-ba51-7ff582596bbd"
},
{
"hostname": "web3.example.com",
"guid": "bba27304-c1bb-4889-aa44-125626be8488"
}
]
"bar": [
{
"ipaddress": "1.1.1.1",
"guid": "73e85eb2-2ad5-4699-8a09-adf658a11ff2"
},
{
"ipaddress": "2.2.2.2",
"guid": "827025fe-f13c-4fc8-ba51-7ff582596bbd"
},
{
"ipaddress": "3.3.3.3",
"guid": "bba27304-c1bb-4889-aa44-125626be8488"
}
]
I want something like:
"foobar" : [
{
"hostname": "web1.example.com",
"guid": "73e85eb2-2ad5-4699-8a09-adf658a11ff2",
"ipaddress": "1.1.1.1"
},
{
"hostname": "web2.example.com",
"guid": "827025fe-f13c-4fc8-ba51-7ff582596bbd",
"ipaddress": "2.2.2.2"
},
{
"hostname": "web3.example.com",
"guid": "bba27304-c1bb-4889-aa44-125626be8488",
"ipaddress": "3.3.3.3"
}
]
I've looked into several Ansible/Jinja2 filters, including combine, union, map, and custom plugins, but haven't had much success. I need to be able to match on the guid.
Not sure if there is a smarter way, but to achieve what you need you can use the nested query plugin to loop over the combinations of the elements from the two list variables, find the combinations whose common field is equal, and then construct a new dictionary element and append it to the "final" list variable.
playbook:
- hosts: localhost
  gather_facts: false
  vars:
    foo:
      - hostname: web1.example.com
        guid: 73e85eb2-2ad5-4699-8a09-adf658a11ff2
      - hostname: web2.example.com
        guid: 827025fe-f13c-4fc8-ba51-7ff582596bbd
      - hostname: web3.example.com
        guid: bba27304-c1bb-4889-aa44-125626be8488
    bar:
      - ipaddress: 1.1.1.1
        guid: 73e85eb2-2ad5-4699-8a09-adf658a11ff2
      - ipaddress: 2.2.2.2
        guid: 827025fe-f13c-4fc8-ba51-7ff582596bbd
      - ipaddress: 3.3.3.3
        guid: bba27304-c1bb-4889-aa44-125626be8488
  tasks:
    - name: merge lists
      set_fact:
        merged_list: "{{ merged_list|default([]) + [{ 'hostname': item[0].hostname, 'guid': item[0].guid, 'ipaddress': item[1].ipaddress }] }}"
      when: "item[0].guid == item[1].guid"
      loop: "{{ query('nested', foo, bar) }}"

    - name: print results
      debug:
        var: merged_list
result:
TASK [print results] ************************************************************************************************************************************************************************************************
ok: [localhost] => {
"merged_list": [
{
"guid": "73e85eb2-2ad5-4699-8a09-adf658a11ff2",
"hostname": "web1.example.com",
"ipaddress": "1.1.1.1"
},
{
"guid": "827025fe-f13c-4fc8-ba51-7ff582596bbd",
"hostname": "web2.example.com",
"ipaddress": "2.2.2.2"
},
{
"guid": "bba27304-c1bb-4889-aa44-125626be8488",
"hostname": "web3.example.com",
"ipaddress": "3.3.3.3"
}
]
}
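On newer Ansible releases with the community.general collection installed (not an option on the 2.4.4 mentioned in the question), the lists_mergeby filter does this merge-by-key in a single expression; a sketch:

- name: merge lists by guid
  set_fact:
    foobar: "{{ foo | community.general.lists_mergeby(bar, 'guid') }}"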