Ansible search and query

Updated with suggestions from larsks.
With the following structure:
"intf_output_ios": {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"failed": false,
"gathered": [
{
"name": "GigabitEthernet0/0"
},
{
"mode": "trunk",
"name": "GigabitEthernet0/1",
"trunk": {
"allowed_vlans": [
"10",
"20",
"30",
"99",
"100"
],
"encapsulation": "dot1q"
}
},
{
"mode": "trunk",
"name": "GigabitEthernet0/2",
"trunk": {
"allowed_vlans": [
"10",
"20",
"30",
"99",
"100"
],
"encapsulation": "dot1q"
}
},
{
"access": {
"vlan": 30
},
"mode": "access",
"name": "GigabitEthernet0/3"
},
{
"name": "GigabitEthernet1/0"
},
{
"name": "GigabitEthernet1/1"
},
{
"name": "GigabitEthernet1/2"
},
{
"name": "GigabitEthernet1/3"
},
{
"name": "GigabitEthernet2/0"
},
{
"name": "GigabitEthernet2/1"
},
{
"name": "GigabitEthernet2/2"
},
{
"name": "GigabitEthernet2/3"
},
{
"name": "GigabitEthernet3/0"
},
{
"name": "GigabitEthernet3/1"
},
{
"name": "GigabitEthernet3/2"
},
{
"access": {
"vlan": 99
},
"mode": "access",
"name": "GigabitEthernet3/3"
}
]
}
To print only the ports in VLAN 30, use the following:
- name: "P901T6: Set fact to include only access ports - IOS"
set_fact:
access_ports_ios_2: "{{ intf_output_ios | json_query(query) }}"
vars:
query: >-
gathered[?access.vlan==`30`]
- name: "P901T7: Dump list of access ports - IOS"
debug:
var: access_ports_ios_2
NOTE: It is important to use `30` (with backticks) and not '30' (with quotes).
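For illustration (an addition, not from the original post): backticks create a JMESPath literal, so `30` matches the integer 30 in the gathered data, while a quoted '30' is a raw string and would not match:
- name: Matches, because access.vlan is the integer 30
  debug:
    msg: "{{ intf_output_ios | json_query(q_int) }}"
  vars:
    q_int: >-
      gathered[?access.vlan==`30`].name
- name: Returns an empty list, because '30' compares as a string
  debug:
    msg: "{{ intf_output_ios | json_query(q_str) }}"
  vars:
    q_str: >-
      gathered[?access.vlan=='30'].name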
I have gone through https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#managing-list-variables without really understanding how to fix this. If someone has a good link, that would be very useful.
With a structure like:
ok: [access01] => {
"access_ports_ios": [
{
"access": {
"vlan": 30
},
"mode": "access",
"name": "GigabitEthernet0/3"
},
{
"access": {
"vlan": 99
},
"mode": "access",
"name": "GigabitEthernet3/3"
}
]
}
To get the ports in VLAN 30, use:
- debug:
var: access_ports_ios|json_query(query)
vars:
query: >-
[?access.vlan==`30`]
Note: If you want to use a variable for the VLAN instead of hard-coding it, I had to do the following:
- name: Debug 4
debug:
var: access_ports_ios|json_query('[?access.vlan==`{{ src_vlan | int}}`]')
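An equivalent form (a sketch, not from the original post) keeps the templating out of the json_query argument by building the expression in a task variable first:
- name: Debug 4 (alternative, assuming src_vlan is defined elsewhere)
  debug:
    var: access_ports_ios | json_query(query)
  vars:
    query: >-
      [?access.vlan==`{{ src_vlan | int }}`]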

You're asking for gathered.access, but gathered is a list and does not have an access attribute. You want "all items from gathered for which access.vlan is 30" (and note that the value of access.vlan is an integer, not a string):
- debug:
var: intf_output_ios|json_query(query)
vars:
query: >-
gathered[?access.vlan==`30`]
Which, given your example input, produces:
TASK [debug] *******************************************************************
ok: [localhost] => {
"intf_output_ios|json_query(query)": [
{
"access": {
"vlan": 30
},
"mode": "access",
"name": "GigabitEthernet0/3"
}
]
}
I'm going to reiterate advice I often give for json_query questions: use something like jpterm or the JMESPath website to test JMESPath expressions against your actual data. This makes it much easier to figure out where an expression might be going wrong.
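If you prefer to stay inside Ansible, a throwaway playbook along these lines (a sketch, not from the original answer; it assumes the registered result has been saved locally as intf_output_ios.json) also works for iterating on an expression:
- hosts: localhost
  gather_facts: false
  vars:
    tmpdata: "{{ lookup('file', 'intf_output_ios.json') | from_json }}"
  tasks:
    - debug:
        msg: "{{ tmpdata | json_query(query) }}"
      vars:
        query: >-
          gathered[?access.vlan==`30`].name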

Related

ansible jinja2 separate certain interfaces from list

I'm new to Jinja and Ansible, and am still trying to wrap my head around how to work with them.
In my case, I'm trying to get a sublist of interface names on which I can execute a certain action if the description key contains a certain value.
I tried an if/map construction, but that did not seem to work out.
Any suggestions on how to do this?
I have the following
- hosts: Router1
gather_facts: false
connection: network_cli
tasks:
- name: get interfaces
arista.eos.eos_interfaces:
state: gathered
register: all_interfaces
- name: check debug
debug:
msg: "{{ all_interfaces.gathered }}"
- name: only configured by Ansible ports
set_fact:
interface: >
{% if 'Configured by Ansible' in all_interfaces.gathered|map(attribute='description')%}
{% endif %}
- name: check debug
debug:
msg: "{{ interface }}"
The list printed out (and which I try to use):
[
{
"description": "Router2",
"enabled": true,
"mode": "layer3",
"name": "Port-Channel1000"
},
{
"description": "Router2",
"enabled": true,
"name": "Ethernet1"
},
{
"description": "",
"enabled": true,
"name": "Ethernet2"
},
{
"description": "",
"enabled": true,
"name": "Ethernet3"
},
{
"description": "",
"enabled": true,
"name": "Ethernet4"
},
{
"description": "Configured by Ansible",
"enabled": false,
"name": "Ethernet5"
},
{
"description": "Configured by Ansible",
"enabled": false,
"name": "Ethernet6"
},
{
"enabled": true,
"name": "Ethernet7"
},
{
"enabled": true,
"name": "Ethernet8"
},
{
"enabled": true,
"name": "Loopback0"
},
{
"enabled": true,
"name": "Management0"
},
{
"enabled": true,
"name": "Management1"
}
]
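One possible approach (a sketch, not from the original thread, assuming the gathered structure shown above) filters the list with selectattr instead of json_query; items without a description key are dropped first, then the remaining ones are matched and reduced to their names:
- name: only configured by Ansible ports
  set_fact:
    interface: >-
      {{ all_interfaces.gathered
         | selectattr('description', 'defined')
         | selectattr('description', 'search', 'Configured by Ansible')
         | map(attribute='name')
         | list }}
With the data above this should yield ['Ethernet5', 'Ethernet6'].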

How to remove null and replace with a string value in ansible

How do I remove null and replace it with the string novalue?
I have the output below stored in the variable out1:
{
"out1": [
[
{
"destination": "dest-a",
"interface": "e1/1",
"metric": "10",
"name": "A"
},
{
"destination": "dest-b",
"interface": "e1/2",
"metric": "10",
"name": "B"
},
{
"destination": "dest-c",
"interface": null,
"metric": "10",
"name": "C"
},
{
"destination": "dest-d",
"interface": null,
"metric": "10",
"name": "B"
}
]
]
}
I have a json_query in my code:
- debug: msg="{{out1 |json_query(myquery)}}"
vars:
myquery: "[].{dest: destination ,int: interface}"
register: out2
The above code will print the following:
{
"msg": [
{
"dest": "dest-a",
"int": "e1/1"
},
{
"dest": "dest-b",
"int": "e/12"
},
{
"dest": "dest-c",
"int": null
},
{
"dest": "dest-d",
"int": null
}
]
}
I want to replace or remove null with the string novalue.
I looked into some posts and found that default("novalue") can do the trick, but in my case it is not working. I tried the following, adding default("novalue") to my debug task, but I am getting an error.
I am sure that the error resides in myquery; the way I interpret/understand default() might be wrong, or I might be using it wrongly.
Can anyone help me here please?
- debug: msg="{{out1 |json_query(myquery)}}"
vars:
myquery: "[].{dest: destination ,int: interface|default("novalue")}"
register: out2
Otherwise, to achieve this with another JMESPath expression, you can use an or expression (||), which will return the value of interface or a string that you are free to define.
So, given your JSON:
[
{
"destination": "dest-a",
"interface": "e1/1",
"metric": "10",
"name": "A"
},
{
"destination": "dest-b",
"interface": "e1/2",
"metric": "10",
"name": "B"
},
{
"destination": "dest-c",
"interface": null,
"metric": "10",
"name": "C"
},
{
"destination": "dest-d",
"interface": null,
"metric": "10",
"name": "B"
}
]
And the JMESPath query
[].{dest: destination ,int: interface || 'novalue'}
This yields
[
{
"dest": "dest-a",
"int": "e1/1"
},
{
"dest": "dest-b",
"int": "e1/2"
},
{
"dest": "dest-c",
"int": "novalue"
},
{
"dest": "dest-d",
"int": "novalue"
}
]
And your task ends up being:
- debug:
msg: "{{ out1 | json_query(_query) }}"
vars:
_query: "[].{dest: destination, int: interface || 'novalue'}"
register: out2
You are using the Jinja2 default filter inside a JMESPath (i.e. json_query) expression. This can't work.
You can use the JMESPath function not_null in this case.
The playbook:
---
- hosts: localhost
gather_facts: false
vars:
"out1": [
[
{
"destination": "dest-a",
"interface": "e1/1",
"metric": "10",
"name": "A"
},
{
"destination": "dest-b",
"interface": "e1/2",
"metric": "10",
"name": "B"
},
{
"destination": "dest-c",
"interface": null,
"metric": "10",
"name": "C",
},
{
"destination": "dest-d",
"interface": null,
"metric": "10",
"name": "B"
}
]
]
tasks:
- debug:
msg: "{{ out1 | json_query(myquery) }}"
vars:
myquery: >-
[].{dest: destination ,int: not_null(interface, 'no value')}
Gives:
PLAY [localhost] **************************************************************************************************************************************************************************************************************
TASK [debug] ******************************************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": [
{
"dest": "dest-a",
"int": "e1/1"
},
{
"dest": "dest-b",
"int": "e1/2"
},
{
"dest": "dest-c",
"int": "no value"
},
{
"dest": "dest-d",
"int": "no value"
}
]
}
PLAY RECAP ********************************************************************************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Q: "How to remove 'null' and replace it with a string 'no value'?"
A: The variable out1 is a list of lists. Let's iterate over it and create out2 with null replaced by the string 'no value'. In each loop, create the list of interface attributes with null replaced, combine this list with the item, and add the result to the new list, e.g.
- set_fact:
out2: "{{ out2|d([]) + [item|zip(_item)|map('combine')] }}"
loop: "{{ out1 }}"
vars:
_item: "{{ item|json_query(_query) }}"
_query: |
[].{interface: not_null(interface, 'no value')}
gives
out2:
- - destination: dest-a
interface: e1/1
metric: '10'
name: A
- destination: dest-b
interface: e1/2
metric: '10'
name: B
- destination: dest-c
interface: no value
metric: '10'
name: C
- destination: dest-d
interface: no value
metric: '10'
name: B

Usage of Ansible JMESPath in queries

I am using the following JSON file:
sample.json:
{
"lldp_output['gathered']": [
{
"mode": "trunk",
"name": "GigabitEthernet0/0",
"trunk": {
"encapsulation": "dot1q"
}
},
{
"access": {
"vlan": 12
},
"mode": "access",
"name": "GigabitEthernet0/1"
},
{
"name": "GigabitEthernet0/2"
}
]
}
And the playbook:
---
- hosts: localhost
gather_facts: no
vars:
tmpdata: "{{ lookup('file','sample.json') | from_json }}"
tasks:
- name: Take 4
debug:
msg: "{{ tmpdata | community.general.json_query(lldp_output['gathered']) }}"
I get the following error:
TASK [Take 4] ********************************************************************************************
task path: /root/scripts/atest.yml:18
fatal: [localhost]: FAILED! => {
"msg": "Error in jmespath.search in json_query filter plugin:\n'lldp_output' is undefined"
}
How do I query the JSON shown so that I get a list of all ports that have mode: trunk?
When I run it in a playbook:
---
- name: Find trunk ports
hosts: ios
tasks:
- name: Collect interface output
cisco.ios.ios_l2_interfaces:
config:
state: gathered
register: intf_output
- debug:
var: intf_output
- name: Take 4
debug:
msg: "{{ intf_output | json_query(query) }}"
vars:
query: >-
"lldp_output['gathered']"[?mode=='trunk']
The structure returned looks like the following:
{
"intf_output": {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"failed": false,
"gathered": [
{
"name": "GigabitEthernet0/0"
},
{
"mode": "trunk",
"name": "GigabitEthernet0/1",
"trunk": {
"allowed_vlans": [
"10",
"20",
"30",
"100"
],
"encapsulation": "dot1q"
}
},
{
"mode": "trunk",
"name": "GigabitEthernet0/2",
"trunk": {
"allowed_vlans": [
"10",
"20",
"30",
"100"
],
"encapsulation": "dot1q"
}
},
{
"mode": "trunk",
"name": "GigabitEthernet0/3",
"trunk": {
"allowed_vlans": [
"10",
"20",
"30",
"100"
],
"encapsulation": "dot1q"
}
},
{
"mode": "trunk",
"name": "GigabitEthernet1/0",
"trunk": {
"allowed_vlans": [
"10",
"20",
"30",
"100"
],
"encapsulation": "dot1q"
}
},
{
"mode": "trunk",
"name": "GigabitEthernet1/1",
"trunk": {
"allowed_vlans": [
"10",
"20",
"30",
"100"
],
"encapsulation": "dot1q"
}
},
{
"name": "GigabitEthernet1/2"
},
{
"name": "GigabitEthernet1/3"
},
{
"name": "GigabitEthernet2/0"
},
{
"name": "GigabitEthernet2/1"
},
{
"name": "GigabitEthernet2/2"
},
{
"name": "GigabitEthernet2/3"
},
{
"name": "GigabitEthernet3/0"
},
{
"name": "GigabitEthernet3/1"
},
{
"name": "GigabitEthernet3/2"
},
{
"name": "GigabitEthernet3/3"
}
]
}
}
This is returned for each host I run the playbook against.
The argument to json_query must be a string. Because you haven't quoted your argument, Ansible is looking for a variable named lldp_output. But you've got additional problems, since you're trying to access a key named lldp_output['gathered'], but [ is a syntactically significant character in JSON (and JMESPath queries), so that needs to be escaped as well.
In order to avoid all sorts of quote escaping contortions, we can place the query itself into a variable, so that we have:
- hosts: localhost
vars:
tmpdata: "{{ lookup('file','sample.json') | from_json }}"
tasks:
- name: Take 4
debug:
msg: "{{ tmpdata | json_query(query) }}"
vars:
query: >-
"lldp_output['gathered']"
Note that we are using the >- block quote operator, which means that the value of query is the literal string "lldp_output['gathered']", including the outer quotes.
That playbook outputs:
TASK [Take 4] *********************************************************************************
ok: [localhost] => {
"msg": [
{
"mode": "trunk",
"name": "GigabitEthernet0/0",
"trunk": {
"encapsulation": "dot1q"
}
},
{
"access": {
"vlan": 12
},
"mode": "access",
"name": "GigabitEthernet0/1"
},
{
"name": "GigabitEthernet0/2"
}
]
}
To get just those interfaces with mode equal to trunk, add that condition to your query:
- hosts: localhost
vars:
tmpdata: "{{ lookup('file','sample.json') | from_json }}"
tasks:
- name: Take 4
debug:
msg: "{{ tmpdata | json_query(query) }}"
vars:
query: >-
"lldp_output['gathered']"[?mode=='trunk']
This will output:
TASK [Take 4] *********************************************************************************
ok: [localhost] => {
"msg": [
{
"mode": "trunk",
"name": "GigabitEthernet0/0",
"trunk": {
"encapsulation": "dot1q"
}
}
]
}
Update
Given the data you've shown in your updated question, things are much simpler, because you don't have the weird quoting you had in the original question. With intf_output defined as shown, you can write:
tasks:
- name: Take 4
debug:
msg: "{{ intf_output | json_query(query) }}"
vars:
query: >-
gathered[?mode=='trunk']
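As a small extension (not part of the original answer), you can add a projection if you only need the interface names:
- name: Trunk port names only
  debug:
    msg: "{{ intf_output | json_query(query) }}"
  vars:
    query: >-
      gathered[?mode=='trunk'].name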

Remove double quotes from Ansible fact

There is a JSON output which I am trying to parse. I registered the output into a variable named instance_ip.
Here is the JSON output:
{
"msg": {
"instances": [
{
"root_device_type": "ebs",
"private_dns_name": "",
"cpu_options": {
"core_count": 2,
"threads_per_core": 1
},
"security_groups": [],
"state_reason": {
"message": "Client.UserInitiatedShutdown: User initiated shutdown",
"code": "Client.UserInitiatedShutdown"
},
"monitoring": {
"state": "disabled"
},
"ebs_optimized": false,
"state": {
"code": 48,
"name": "terminated"
},
"client_token": "test-Logst-14O6L4IETB05E",
"virtualization_type": "hvm",
"architecture": "x86_64",
"tags": {
"sg:environment": "TST",
"Name": "logstash1",
"aws:cloudformation:logical-id": "Logstash1A1594E87",
"sg:owner": "Platforms#paparapa.com",
"aws:cloudformation:stack-name": "test-three-ec2-instances-elk-demo",
"elastic_role": "logstash",
"sg:function": "Storage"
},
"key_name": "AWS_key",
"image_id": "ami-09f765d333a8ebb4b",
"state_transition_reason": "User initiated (2021-01-31 09:46:23 GMT)",
"hibernation_options": {
"configured": false
},
"capacity_reservation_specification": {
"capacity_reservation_preference": "open"
},
"public_dns_name": "",
"block_device_mappings": [],
"metadata_options": {
"http_endpoint": "enabled",
"state": "pending",
"http_tokens": "optional",
"http_put_response_hop_limit": 1
},
"placement": {
"group_name": "",
"tenancy": "default",
"availability_zone": "ap-southeast-2a"
},
"enclave_options": {
"enabled": false
},
"ami_launch_index": 0,
"ena_support": true,
"network_interfaces": [],
"launch_time": "2021-01-31T09:44:51+00:00",
"instance_id": "i-0fa5dbb869833d7c6",
"instance_type": "t2.medium",
"root_device_name": "/dev/xvda",
"hypervisor": "xen",
"product_codes": []
},
{
"root_device_type": "ebs",
"private_dns_name": "ip-10-x-x-x.ap-southeast-2.compute.internal",
"cpu_options": {
"core_count": 2,
"threads_per_core": 1
},
"source_dest_check": true,
"monitoring": {
"state": "disabled"
},
"subnet_id": "subnet-0d5f856afab8f0eec",
"ebs_optimized": false,
"iam_instance_profile": {
"id": "AIPARWXXVHXJWC2FL4AI6",
"arn": "arn:aws:iam::instance-profile/test-three-ec2-instances-elk-demo-Logstash1InstanceProfileC3035819-1F2LI7JM16FVM"
},
"state": {
"code": 16,
"name": "running"
},
"security_groups": [
{
"group_id": "sg-0e5dffa834a036fab",
"group_name": "Ansible_sec_group"
}
],
"client_token": "test-Logst-8UF6RX33BH06",
"virtualization_type": "hvm",
"architecture": "x86_64",
"public_ip_address": "3.x.x.x",
"tags": {
"Name": "logstash1",
"aws:cloudformation:logical-id": "Logstash1A1594E87",
"srg:environment": "TST",
"aws:cloudformation:stack-id": "arn:aws:cloudformation:ap-southeast-2:117557247443:stack/test-three-ec2-instances-elk-demo/ca8ef2b0-63ad-11eb-805f-02630ffccc8c",
"sg:function": "Storage",
"aws:cloudformation:stack-name": "test-three-ec2-instances-elk-demo",
"elastic_role": "logstash",
"sg:owner": "Platforms#paparapa.com"
},
"key_name": "AWS_SRG_key",
"image_id": "ami-09f765d333a8ebb4b",
"ena_support": true,
"hibernation_options": {
"configured": false
},
"capacity_reservation_specification": {
"capacity_reservation_preference": "open"
},
"public_dns_name": "ec2-3-x-x-x.ap-southeast-2.compute.amazonaws.com",
"block_device_mappings": [
{
"device_name": "/dev/xvda",
"ebs": {
"status": "attached",
"delete_on_termination": true,
"attach_time": "2021-01-31T10:22:21+00:00",
"volume_id": "vol-058662934ffba3a68"
}
}
],
"metadata_options": {
"http_endpoint": "enabled",
"state": "applied",
"http_tokens": "optional",
"http_put_response_hop_limit": 1
},
"placement": {
"group_name": "",
"tenancy": "default",
"availability_zone": "ap-southeast-2a"
},
"enclave_options": {
"enabled": false
},
"ami_launch_index": 0,
"hypervisor": "xen",
"network_interfaces": [
{
"status": "in-use",
"description": "",
"subnet_id": "subnet-0d5f856afab8f0eec",
"source_dest_check": true,
"interface_type": "interface",
"ipv6_addresses": [],
"network_interface_id": "eni-09b045668ac59990c",
"private_dns_name": "ip-10-x-x-x.ap-southeast-2.compute.internal",
"attachment": {
"status": "attached",
"device_index": 0,
"attachment_id": "eni-attach-0700cd11dfb27e2dc",
"delete_on_termination": true,
"attach_time": "2021-01-31T10:22:20+00:00"
},
"private_ip_addresses": [
{
"private_ip_address": "10.x.x.x",
"private_dns_name": "ip-10-x-x-x.ap-southeast-2.compute.internal",
"association": {
"public_ip": "3.x.x.x",
"public_dns_name": "ec2-3-x-x-x.ap-southeast-2.compute.amazonaws.com",
"ip_owner_id": "amazon"
},
"primary": true
}
],
"mac_address": "02:d1:13:01:59:b2",
"private_ip_address": "10.x.x.x",
"vpc_id": "vpc-0016dcdf5abe4fef0",
"groups": [
{
"group_id": "sg-0e5dffa834a036fab",
"group_name": "Ansible_sec_group"
}
],
"association": {
"public_ip": "3.x.x.x",
"public_dns_name": "ec2-3-x-x-x.ap-southeast-2.compute.amazonaws.com",
"ip_owner_id": "amazon"
},
"owner_id": "117557247443"
}
],
"launch_time": "2021-01-31T10:22:20+00:00",
"instance_id": "i-0482bb8ca1bef6006",
"instance_type": "t2.medium",
"root_device_name": "/dev/xvda",
"state_transition_reason": "",
"private_ip_address": "10.x.x.x",
"vpc_id": "vpc-0016dcdf5abe4fef0",
"product_codes": []
}
],
"failed": false,
"changed": false
},
"_ansible_verbose_always": true,
"_ansible_no_log": false,
"changed": false
}
The goal is to get the private IP address and append the port number.
With the following task I got a list with the node IP address, ["10.x.x.x"]:
- name: Getting EC2 instance ip address
set_fact:
instance_ip: "{{ logstash_instance | json_query('instances[*].network_interfaces[*].private_ip_address') | flatten }}"
With the next task in the play I am trying to append the port number, but I keep getting:
"['10.x.x.x:5044']"
- name: Get everything between quotes and append port 5044
set_fact:
logstash_hosts: "{{ instance_ip | map('regex_replace', '^(.*)$', '\\1:5044') | list }}"
Here is the template output:
# ------------------------------ Logstash Output -------------------------------
output.logstash:
hosts: "['10.x.x.x:5044']"
I need to get rid of the double quotes and pass the clean variable ['10.x.x.x:5044'] to my template file.
You can try creating a new list variable with the port number appended to each element, using this approach:
- set_fact:
logstash_hosts: "{{ logstash_hosts|default([]) + [ item ~ ':5044' ] }}"
with_items: "{{ instance_ip }}"
Then in template:
output.logstash:
hosts: {{ logstash_hosts|to_yaml }}
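As an aside (not from the original answer), the same list can also be built without a loop, for example with the product and join filters:
- set_fact:
    logstash_hosts: "{{ instance_ip | product([':5044']) | map('join') | list }}"
This pairs each address with the ':5044' suffix and joins each pair into a single string.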
Also, since the Logstash configuration is a YAML-formatted file, you can use YAML list syntax and use the instance_ip variable directly (avoiding set_fact). Then the template will look like this:
output.logstash:
hosts:
{% for ip in instance_ip %}
- {{ ip }}:5044
{% endfor %}

How to make Ansible regularly probe a fact value and only proceed when the value changes?

I have the following snippet, where I install an OS on a virtual machine using Ansible. After it finishes, it stops the VM so I can continue with the rest of the tasks. I am collecting facts from the Red Hat Virtualization Manager regarding the state of the VM, and I want to keep waiting until the status of the VM changes from up to down before proceeding. How can I code this?
# I am kickstarting the VM
- name: Installing OS
ovirt_vms:
state: running
name: "{{ vm_name }}"
initrd_path: iso://initrd.img
kernel_path: iso://vmlinuz
kernel_params: initrd=initrd.img inst.stage2=cdrom inst.ks=ftp://10.0.1.2/pub/ks.cfg net.ifnames=0 biosdevname=0 BOOT_IMAGE=vmlinuz
# Getting facts about the VM
- name: Gather VM Status
ovirt_vms_facts:
pattern: name={{ vm_name}}
- name: Register VM Status
debug:
msg: "{{ ovirt_vms[0].status }}"
register: vm_status
# Should keep probing the value of vm_status until it changes from up to down.
# ????????????? --> What should I do here?
# When the status changes, continue the playbook
I tried to parse the ovirt_vms I gathered from ovirt_vms_facts, and I got the following:
{
"_ansible_parsed": true,
"invocation": {
"module_args": {
"all_content": false,
"pattern": "name=as-vm-type1",
"nested_attributes": [],
"case_sensitive": true,
"fetch_nested": false,
"max": null
}
},
"changed": false,
"_ansible_no_log": false,
"ansible_facts": {
"ovirt_vms": [
{
"disk_attachments": [],
"origin": "ovirt",
"sso": {
"methods": []
},
"affinity_labels": [],
"placement_policy": {
"affinity": "migratable"
},
"watchdogs": [],
"creation_time": "2018-07-15 13:54:10.565000+02:00",
"snapshots": [],
"graphics_consoles": [],
"cluster": {
"href": "/ovirt-engine/api/clusters/a5272863-38a8-469d-998e-c1e1f26f4f5a",
"id": "a5272863-38a8-469d-998e-c1e1f26f4f5a"
},
"href": "/ovirt-engine/api/vms/08406dad-5173-4241-8d42-904ddf3d096a",
"migration": {
"auto_converge": "inherit",
"compressed": "inherit"
},
"io": {
"threads": 0
},
"migration_downtime": -1,
"id": "08406dad-5173-4241-8d42-904ddf3d096a",
"high_availability": {
"priority": 0,
"enabled": false
},
"cdroms": [],
"statistics": [],
"usb": {
"enabled": false
},
"display": {
"allow_override": false,
"disconnect_action": "LOCK_SCREEN",
"file_transfer_enabled": true,
"copy_paste_enabled": true,
"secure_port": 5900,
"smartcard_enabled": false,
"single_qxl_pci": false,
"type": "spice",
"monitors": 1,
"address": "10.254.148.74"
},
"nics": [],
"tags": [],
"name": "as-vm-type1",
"bios": {
"boot_menu": {
"enabled": false
}
},
"stop_time": "2018-07-15 13:54:10.569000+02:00",
"template": {
"href": "/ovirt-engine/api/templates/00000000-0000-0000-0000-000000000000",
"id": "00000000-0000-0000-0000-000000000000"
},
"memory": 42949672960,
"type": "server",
"katello_errata": [],
"numa_tune_mode": "interleave",
"status": "up",
"next_run_configuration_exists": false,
"delete_protected": false,
"sessions": [],
"start_time": "2018-07-15 13:54:14.079000+02:00",
"quota": {
"id": "ad014a63-fd76-42da-8369-57dae2dd5979"
},
"applications": [],
"host": {
"href": "/ovirt-engine/api/hosts/56a65d3b-1c0a-4b2a-9c6c-aa96262d9502",
"id": "56a65d3b-1c0a-4b2a-9c6c-aa96262d9502"
},
"memory_policy": {
"max": 171798691840,
"guaranteed": 42949672960
},
"numa_nodes": [],
"permissions": [],
"stateless": false,
"reported_devices": [],
"large_icon": {
"href": "/ovirt-engine/api/icons/2971ddbe-1dbf-4af8-b86a-078cbbe66419",
"id": "2971ddbe-1dbf-4af8-b86a-078cbbe66419"
},
"storage_error_resume_behaviour": "auto_resume",
"cpu_profile": {
"href": "/ovirt-engine/api/cpuprofiles/34000c79-d669-41ef-8d2a-d37d7f925c3c",
"id": "34000c79-d669-41ef-8d2a-d37d7f925c3c"
},
"time_zone": {
"name": "Etc/GMT"
},
"run_once": true,
"original_template": {
"href": "/ovirt-engine/api/templates/00000000-0000-0000-0000-000000000000",
"id": "00000000-0000-0000-0000-000000000000"
},
"start_paused": false,
"host_devices": [],
"small_icon": {
"href": "/ovirt-engine/api/icons/28054380-4723-42db-a8e5-fed8a3778199",
"id": "28054380-4723-42db-a8e5-fed8a3778199"
},
"os": {
"boot": {
"devices": [
"hd",
"cdrom"
]
},
"type": "rhel_7x64"
},
"cpu": {
"architecture": "x86_64",
"topology": {
"cores": 1,
"threads": 1,
"sockets": 8
}
},
"cpu_shares": 1024
}
]
}
}
You can do it as follows:
- name: Wait for VMs to be down
ovirt_vms_facts:
auth: "{{ ovirt_auth }}"
pattern: "name={{ vm_name }}"
until: "ovirt_vms[0].status == 'down'"
retries: 5
delay: 10
You can do it as below:
- name: Register VM Status
debug:
msg: "{{ ovirt_vms[0].status }}"
register: vm_status
until: vm_status.stdout.find("down") != -1
retries: 10
delay: 5
Here it retries 10 times with a delay of 5 seconds.
