Ansible: filter an item from a with_items list

I am using Ansible 2.10.7 and I need to filter a specific item from a with_items list.
Before using with_items, the msg looks like this:
"ansible_facts": {
"cvp_info": {
"version": "2020.2.3"
},
"devices": [
{
"architecture": "",
"bootupTimeStamp": 1615810038.137913,
"bootupTimestamp": 1615810038.137913,
"complianceCode": "",
Playbook:
---
- name: Playbook to demonstrate cv_container module.
  hosts: cvp_servers
  connection: local
  gather_facts: no
  collections:
    - arista.cvp
  vars:
  vars_files:
    - vars.yml
  tasks:
    - name: "collecting facts from CVP {{inventory_hostname}}"
      arista.cvp.cv_facts:
        facts:
          devices
    - name: "Print out facts from CVP"
      debug:
        msg: "{{item.name}}"
      with_items: "{{devices}}"
After adding with_items: "{{devices}}", the big list is split into items and I get the output below, which I now want to filter further:
ok: [hq] => (item={'hostname': 'rd-sw055', 'danzEnabled': False, 'mlagEnabled': False, 'streamingStatus': 'active', 'status': 'Registered','bootupTimeStamp': 1605618537.210405, 'internalBuildId': '8c8dfbf2-a4d1-420a-9c9c-59f6aa67a14e', 'taskIdList': [], 'tempAction': None, 'memTotal': 0, 'memFree': 0, 'sslConfigAvailable': False, 'sslEnabledByCVP': False, 'lastSyncUp': 0, 'type': 'netelement', 'dcaKey': None, 'containerName': 'HQ',
'name': 'rd-sw055','deviceSpecificConfiglets': ['rd-sw055'], 'imageBundle': ''}) => {
"msg": "rd-sw055"
ok: [hq] => (item={'hostname': 'rd-sw01', 'danzEnabled': False, 'mlagEnabled': False, 'streamingStatus': 'active', 'status': 'Registered','bootupTimeStamp': 1605618537.210405, 'internalBuildId': '8c8dfbf2-a4d1-420a-9c9c-59f6aa67a14e', 'taskIdList': [], 'tempAction': None, 'memTotal': 0, 'memFree': 0, 'sslConfigAvailable': False, 'sslEnabledByCVP': False, 'lastSyncUp': 0, 'type': 'netelement', 'dcaKey': None, 'containerName': 'HQ',
'name': 'rd-sw01','deviceSpecificConfiglets': ['rd-sw01'], 'imageBundle': ''}) => {
"msg": "rd-sw01"
I want it to show only the item with 'name': 'rd-sw01'. How can I do that?
I have tried adding
  loop_control:
    label: '{{ item.name }}'
at the end of the task, but that only shows the name value, not the whole item.
End result wanted:
ok: [hq] => (item={'hostname': 'rd-sw01', 'danzEnabled': False, 'mlagEnabled': False, 'streamingStatus': 'active', 'status': 'Registered','bootupTimeStamp': 1605618537.210405, 'internalBuildId': '8c8dfbf2-a4d1-420a-9c9c-59f6aa67a14e', 'taskIdList': [], 'tempAction': None, 'memTotal': 0, 'memFree': 0, 'sslConfigAvailable': False, 'sslEnabledByCVP': False, 'lastSyncUp': 0, 'type': 'netelement', 'dcaKey': None, 'containerName': 'HQ',
'name': 'rd-sw01','deviceSpecificConfiglets': ['rd-sw01'], 'imageBundle': ''}) => {
"msg": "rd-sw01"

You do want a when condition here:
- debug:
    var: item
  loop: "{{ devices }}"
  when: item.name == 'rd-sw01'
  loop_control:
    label: "{{ item.name }}"
Or, even simpler, skip the loop:
- debug:
    var: devices | selectattr("name", "eq", "rd-sw01")
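Note that, depending on the Jinja2 version, selectattr may come back as a generator, and in any case it yields a (one-element) sequence rather than the item itself. If you want the matching device as a single dictionary, a minimal variation (a sketch, assuming the same devices fact as above):
- debug:
    msg: "{{ devices | selectattr('name', 'eq', 'rd-sw01') | list | first }}"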

Related

To collect backing_datastore from the vmware_guest_disk_info module output

I'm using the tasks below in my playbook to find the backing datastore in VMware. I have tried various options, but none of them gather the backing_datastore name. Kindly suggest how to fetch it:
- name: Take Snapshot
  hosts: patching
  serial: 1
  vars_files:
    - 1credentials.yml
  tasks:
    - name: Gather data of the registered virtual machines
      community.vmware.vmware_vm_info:
        hostname: '{{ vcenter_hostname }}'
        username: '{{ vcenter_username }}'
        password: '{{ vcenter_password }}'
        vm_type: vm
        validate_certs: no
      delegate_to: localhost
      register: vminfo
    - debug:
        msg: "{{ item.datacenter }}"
      loop: "{{ vminfo.virtual_machines }}"
      when: item.ip_address == inventory_hostname
    - name: Gather Disk facts for the server - {{ inventory_hostname }}
      vmware_guest_disk_info:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        datacenter: "{{ item.datacenter }}"
        uuid: "{{ item.uuid }}"
        validate_certs: False
      delegate_to: localhost
      register: disk_fact
      loop: "{{ vminfo.virtual_machines }}"
      when: item.ip_address == inventory_hostname
    - debug:
        msg: "{{ item.backing_datastore }}"
      loop: "{{ disk_fact }}"
The raw output of the registered variable disk_fact is mentioned below:
ok: [172.17.92.62] => {
"msg": {
"changed": false,
"msg": "All items completed",
"results": [
{
"ansible_loop_var": "item",
"changed": false,
"item": {
"allocated": {},
"attributes": {},
"cluster": "Training",
"datacenter": "opendc-rookie",
"datastore_url": [
{
"name": "Rookie-Core",
"url": "/vmfs/volumes/60ebc5fb-2e22fef3-ec42-1402ec6fadc8"
},
{
"name": "ENV08",
"url": "/vmfs/volumes/60ebd414-4e6253d6-7e29-1402ec6fadc8"
}
],
"esxi_hostname": "172.17.138.13",
"folder": "/opendc-rookie/vm/ENV08",
"guest_fullname": "Microsoft Windows 8.x (64-bit)",
"guest_name": "win1",
"ip_address": "172.17.92.43",
"mac_address": [
"00:50:56:81:2e:64"
],
"moid": "vm-968",
"power_state": "poweredOn",
"tags": [],
"uuid": "4201e3dd-0cf5-1027-7eaf-d2b8804761d9",
"vm_network": {
"00:50:56:81:2e:64": {
"ipv4": [
"172.17.92.43"
],
"ipv6": []
}
}
},
"skip_reason": "Conditional result was False",
"skipped": true
},
{
"ansible_loop_var": "item",
"changed": false,
"failed": false,
"guest_disk_info": {
"0": {
"backing_datastore": "ENV04",
"backing_disk_mode": "persistent",
"backing_diskmode": "persistent",
"backing_eagerlyscrub": false,
"backing_filename": "[ENV04] ansible_automation_platofrm_RHEL8/ansible_automation_platofrm_RHEL8-000005.vmdk",
"backing_thinprovisioned": false,
"backing_type": "FlatVer2",
"backing_uuid": "6000C294-c77a-56fa-180c-d17ac9693433",
"backing_writethrough": false,
"capacity_in_bytes": 25769803776,
"capacity_in_kb": 25165824,
"controller_bus_number": 0,
"controller_key": 1000,
"controller_type": "paravirtual",
"key": 2000,
"label": "Hard disk 1",
"summary": "25,165,824 KB",
"unit_number": 0
}
},
"invocation": {
"module_args": {
"datacenter": "opendc-rookie",
"folder": null,
"hostname": "172.17.168.212",
"moid": null,
"name": null,
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"port": 443,
"proxy_host": null,
"proxy_port": null,
"use_instance_uuid": false,
"username": "Administrator#vsphere.local",
"uuid": "42013f12-5142-2b83-1ce7-6c334958afd5",
"validate_certs": false
}
},
"item": {
"allocated": {},
"attributes": {},
"cluster": "Training",
"datacenter": "opendc-rookie",
"datastore_url": [
{
"name": "Rookie-Core",
"url": "/vmfs/volumes/60ebc5fb-2e22fef3-ec42-1402ec6fadc8"
},
{
"name": "ENV04",
"url": "/vmfs/volumes/60ebd318-7c586ef0-5838-1402ec6fadc8"
}
],
"esxi_hostname": "172.17.138.13",
"folder": "/opendc-rookie/vm/Pugaz",
"guest_fullname": "Red Hat Enterprise Linux 8 (64-bit)",
"guest_name": "ansible_automation_platofrm_RHEL8",
"ip_address": "172.17.92.62",
"mac_address": [
"00:50:56:81:c6:e3"
],
"moid": "vm-836",
"power_state": "poweredOn",
"tags": [],
"uuid": "42013f12-5142-2b83-1ce7-6c334958afd5",
"vm_network": {
"00:50:56:81:c6:e3": {
"ipv4": [
"172.17.92.62"
],
"ipv6": [
"fe80::250:56ff:fe81:c6e3"
]
}
}
}
}
Kindly suggest how I can fetch the backing_datastore from the above-mentioned output.
The result is a list because there might be more items with the attribute backing_datastore. For example,
backing_datastores: "{{ disk_fact.results|json_query(_query) }}"
_query: '[].guest_disk_info[].*.backing_datastore'
gives
backing_datastores:
- - ENV04
If you want the first one
backing_datastore: "{{ backing_datastores|flatten|first }}"
gives
backing_datastore: ENV04
If you want to iterate results
- debug:
    msg: "{{ item.guest_disk_info['0'].backing_datastore }}"
  loop: "{{ disk_fact.results }}"
  loop_control:
    label: "{{ item.item.ip_address }}"
  when: not item.skipped|d(false)
gives the attribute backing_datastore of the disk "0"
TASK [debug] **************************************************
skipping: [localhost] => (item=172.17.92.43)
ok: [localhost] => (item=172.17.92.62) =>
msg: ENV04
If you want to get the list of backing_datastore attributes of all disks, use json_query in the loop
- debug:
    msg: "{{ item.guest_disk_info|json_query('*.backing_datastore') }}"
  loop: "{{ disk_fact.results }}"
  loop_control:
    label: "{{ item.item.ip_address }}"
  when: not item.skipped|d(false)
gives
TASK [debug] **************************************************
skipping: [localhost] => (item=172.17.92.43)
ok: [localhost] => (item=172.17.92.62) =>
msg:
- ENV04
Example of a complete playbook for testing
- hosts: localhost
  vars_files:
    - disk_fact.json
  vars:
    backing_datastores: "{{ disk_fact.results|json_query(_query) }}"
    _query: '[].guest_disk_info[].*.backing_datastore'
    backing_datastore: "{{ backing_datastores|flatten|first }}"
  tasks:
    - debug:
        var: backing_datastores
    - debug:
        var: backing_datastore
    - debug:
        msg: "{{ item.guest_disk_info['0'].backing_datastore }}"
      loop: "{{ disk_fact.results }}"
      loop_control:
        label: "{{ item.item.ip_address }}"
      when: not item.skipped|d(false)
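json_query requires the jmespath Python library on the controller. If that dependency is not available, roughly the same per-host list can be built with core Jinja2 filters; a minimal sketch, not tested against this exact data:
- debug:
    msg: "{{ item.guest_disk_info | dict2items | map(attribute='value.backing_datastore') | list }}"
  loop: "{{ disk_fact.results }}"
  loop_control:
    label: "{{ item.item.ip_address }}"
  when: not item.skipped|d(false)
Here dict2items turns the numbered disk keys ("0", "1", ...) into a list so that map can pull out each backing_datastore.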

Ansible: query an AWS TG using the community.aws collection to get TG status

I'm trying to query an AWS target group (TG) using the community.aws collection to get the TG status.
I want to query a specific TG every few seconds until I see that the state of that TG is "healthy".
So far I have this task in Ansible:
- name: Gather information about the target group attached to a particular LB
  vars:
    ansible_python_interpreter: /usr/bin/python3.6
  register: target_health
  community.aws.elb_target_group_info:
    region: "{{AWS_REGION}}"
    target_group_arns: "{{TARGET_GROUP_ARN}}"
    collect_targets_health: yes
  delegate_to: 127.0.0.1
- debug: msg="return_target_health ={{target_health}}"
- name: iterate items
  debug:
    msg: "{{ item.targets_health_description }}"
  with_items: "{{ target_health.target_groups }}"
The playbook output:
TASK [service : debug] ****************************************************
ok: [service.devbed-vpc.] => {
"msg": "return_target_health ={'target_groups': [{'target_group_arn': 'arn:aws:elasticloadbalancing:us-east-1:4795703XXXXX:targetgroup/Testbed-Vee-8124-TG/b8b282d82426331c', 'target_group_name': 'Testbed-Vee-8124-TG', 'protocol': 'HTTP', 'port': 8124, 'vpc_id': 'vpc-19333d7f', 'health_check_protocol': 'HTTP', 'health_check_port': '8124', 'health_check_enabled': True, 'health_check_interval_seconds': 10, 'health_check_timeout_seconds': 5, 'healthy_threshold_count': 5, 'unhealthy_threshold_count': 2, 'health_check_path': '/health', 'matcher': {'http_code': '200'}, 'load_balancer_arns': ['arn:aws:elasticloadbalancing:us-east-1:4795703XXXXX:loadbalancer/app/Testbed-Vee-ALB/e2b8546cb7196017'], 'target_type': 'instance', 'protocol_version': 'HTTP1', 'stickiness_enabled': 'false', 'deregistration_delay_timeout_seconds': '300', 'stickiness_type': 'lb_cookie', 'stickiness_lb_cookie_duration_seconds': '86400', 'slow_start_duration_seconds': '0', 'load_balancing_algorithm_type': 'round_robin', 'tags': {'Env': 'Testbed'}, 'targets_health_description': [{'target': {'id': 'i-0b9b6e5a2775bXXXX', 'port': 8124}, 'health_check_port': '8124', 'target_health': {'state': 'healthy'}}, {'target': {'id': 'i-0feb307f8bdf6XXXX', 'port': 8124}, 'health_check_port': '8124', 'target_health': {'state': 'healthy'}}]}], 'failed': False, 'changed': False}"
TASK [service : iterate items] ********************************************
ok: [service.devbed-vpc.] => (item={'target_group_arn': 'arn:aws:elasticloadbalancing:us-east-1:4795703XXXXX:targetgroup/Testbed-Vee-8124-TG/b8b282d82426331c', 'target_group_name': 'Testbed-Vee-8124-TG', 'protocol': 'HTTP', 'port': 8124, 'vpc_id': 'vpc-19333d7f', 'health_check_protocol': 'HTTP', 'health_check_port': '8124', 'health_check_enabled': True, 'health_check_interval_seconds': 10, 'health_check_timeout_seconds': 5, 'healthy_threshold_count': 5, 'unhealthy_threshold_count': 2, 'health_check_path': '/health', 'matcher': {'http_code': '200'}, 'load_balancer_arns': ['arn:aws:elasticloadbalancing:us-east-1:4795703XXXXX:loadbalancer/app/Testbed-Vee-ALB/e2b8546cb7196017'], 'target_type': 'instance', 'protocol_version': 'HTTP1', 'stickiness_enabled': 'false', 'deregistration_delay_timeout_seconds': '300', 'stickiness_type': 'lb_cookie', 'stickiness_lb_cookie_duration_seconds': '86400', 'slow_start_duration_seconds': '0', 'load_balancing_algorithm_type': 'round_robin', 'tags': {'Env': 'Testbed'}, 'targets_health_description': [{'target': {'id': 'i-0b9b6e5a2775bXXXX', 'port': 8124}, 'health_check_port': '8124', 'target_health': {'state': 'healthy'}}, {'target': {'id': 'i-0feb307f8bdf6XXXX', 'port': 8124}, 'health_check_port': '8124', 'target_health': {'state': 'healthy'}}]}) => {
"msg": [
{
"health_check_port": "8124",
"target": {
"id": "i-0b9b6e5a2775bXXXX",
"port": 8124
},
"target_health": {
"state": "UNhealthy"
}
},
{
"health_check_port": "8124",
"target": {
"id": "i-0feb307f8bdf6XXXX",
"port": 8124
},
"target_health": {
"state": "healthy"
}
}
]
}
I want to run this task until both target_health states are healthy.
I wasn't able to do it. I was able to write the output to a file and then run a shell script that checks whether the string "state": "healthy" occurs more than once.
But then I realized the file is static: I write to it only once, and only then run the script in a loop, which doesn't make any sense.
Is there a way to repeat this query until I get the result I want, without writing it to a file?
The retries:, delay:, and until: keywords will interest you:
- name: Gather information about the target group attached to a particular LB
  vars:
    ansible_python_interpreter: /usr/bin/python3.6
  register: target_health
  community.aws.elb_target_group_info:
    region: "{{AWS_REGION}}"
    target_group_arns: "{{TARGET_GROUP_ARN}}"
    collect_targets_health: yes
  retries: 12
  delay: 5
  until: >-
    {{ (target_health.target_groups[0].targets_health_description|length)
    == target_health.target_groups[0].targets_health_description
    | selectattr("target_health.state", "eq", "healthy") | list | length }}
  delegate_to: 127.0.0.1
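An equivalent until: condition that some find easier to read counts the targets that are not yet healthy; a sketch under the same assumptions (same registered variable, a single target group in the result):
  # drop-in replacement for the until: expression in the task above
  until: >-
    target_health.target_groups[0].targets_health_description
    | map(attribute='target_health.state')
    | reject('eq', 'healthy')
    | list | length == 0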
Alternatively, one could use the awscli wait subcommand, if you'd prefer to let awscli stall for you (it's not very Ansible-y, but it produces far less playbook log output than Ansible retrying):
- command: >-
    aws --region {{ AWS_REGION }} elbv2 wait
    target-in-service --target-group-arn {{ TARGET_GROUP_ARN | quote }}

Get a list of IP addresses from multiple Citrix Hypervisor VMs using Ansible

I am using Ansible and the xenserver_guest_info module to create a list of IP addresses from multiple virtual machines on my Citrix Hypervisor server. Currently I just want to print a list of IPs, but storing these IPs in an Ansible list would be even better. I plan to loop through this list of IPs and run commands on all these VMs with Ansible.
Here is my current playbook. It loops through the dictionary output that the xenserver module returns, trying to extract the IP address:
- name: Manage VMs
  connection: local
  hosts: localhost
  # Hypervisor server info from vars file
  vars_files:
    - xen_vars.yml
  tasks:
    - name: Gather facts
      xenserver_guest_info:
        hostname: "{{ xen_address }}"
        username: "{{ admin_username }}"
        password: "{{ admin_password }}"
        name: "{{ item }}"
      loop: "{{ xen_machines }}"
      register: facts
    # - name: Get IP's of VM's
    - debug:
        msg: "{{ item.instance.networks }}"
      loop: "{{ facts.results }}"
It produces the following output given a list of two VMs on my server:
PLAY [Manage VMs] **************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [Gather facts] ************************************************************
ok: [localhost] => (item=Ubuntu 20)
ok: [localhost] => (item=Ubuntu 20 2)
TASK [debug] *******************************************************************
ok: [localhost] => (item={'failed': False, 'changed': False, 'instance': {'state': 'poweredoff', 'name': 'Ubuntu 20', 'name_desc': '', 'uuid': 'cf5db672-67cf-7e8c-6951-f5959ab62e26', 'is_template': False, 'folder': '', 'hardware': {'num_cpus': 1, 'num_cpu_cores_per_socket': 1, 'memory_mb': 1024}, 'disks': [{'size': 21474836480, 'name': 'Ubuntu 20 0', 'name_desc': 'Created by template provisioner', 'sr': 'Local storage', 'sr_uuid': 'd7bb817b-281e-fd9c-33a3-54db8935d596', 'os_device': 'xvda', 'vbd_userdevice': '0'}], 'cdrom': {'type': 'iso', 'iso_name': 'ubuntu-20.04.1-desktop-amd64.iso'}, 'networks': [{'name': 'Pool-wide network associated with eth0', 'mac': 'a2:07:be:29:5f:ad', 'vif_device': '0', 'mtu': '1500', 'ip': '', 'prefix': '', 'netmask': '', 'gateway': '', 'ip6': [], 'prefix6': '', 'gateway6': ''}], 'home_server': 'citrix-mwyqyqaa', 'domid': '-1', 'platform': {'timeoffset': '0', 'videoram': '8', 'hpet': 'true', 'secureboot': 'false', 'device-model': 'qemu-upstream-compat', 'apic': 'true', 'device_id': '0001', 'vga': 'std', 'nx': 'true', 'pae': 'true', 'viridian': 'false', 'acpi': '1'}, 'other_config': {'base_template_name': 'Ubuntu Focal Fossa 20.04', 'import_task': 'OpaqueRef:3b6061a0-a204-4ed4-be20-842a359a70fa', 'mac_seed': 'f6ae87b5-2f00-0717-559e-8b624fe92f35', 'install-methods': 'cdrom,nfs,http,ftp', 'linux_template': 'true'}, 'xenstore_data': {'vm-data': '', 'vm-data/mmio-hole-size': '268435456'}, 'customization_agent': 'custom'}, 'invocation': {'module_args': {'hostname': '192.168.0.187', 'username': 'root', 'password': 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER', 'name': 'Ubuntu 20', 'validate_certs': True, 'uuid': None}}, 'item': 'Ubuntu 20', 'ansible_loop_var': 'item'}) => {
"msg": [
{
"gateway": "",
"gateway6": "",
"ip": "192.168.0.2",
"ip6": [],
"mac": "a2:07:be:29:5f:ad",
"mtu": "1500",
"name": "Pool-wide network associated with eth0",
"netmask": "",
"prefix": "",
"prefix6": "",
"vif_device": "0"
}
]
}
ok: [localhost] => (item={'failed': False, 'changed': False, 'instance': {'state': 'poweredoff', 'name': 'Ubuntu 20 2', 'name_desc': '', 'uuid': 'b087832e-81f1-c091-1363-8b8ba8442c8e', 'is_template': False, 'folder': '', 'hardware': {'num_cpus': 1, 'num_cpu_cores_per_socket': 1, 'memory_mb': 1024}, 'disks': [{'size': 21474836480, 'name': 'Ubuntu 20 0', 'name_desc': 'Created by template provisioner', 'sr': 'Local storage', 'sr_uuid': 'd7bb817b-281e-fd9c-33a3-54db8935d596', 'os_device': 'xvda', 'vbd_userdevice': '0'}], 'cdrom': {'type': 'iso', 'iso_name': 'ubuntu-20.04.1-desktop-amd64.iso'}, 'networks': [{'name': 'Pool-wide network associated with eth0', 'mac': 'e6:cd:ca:ff:c3:e0', 'vif_device': '0', 'mtu': '1500', 'ip': '', 'prefix': '', 'netmask': '', 'gateway': '', 'ip6': [], 'prefix6': '', 'gateway6': ''}], 'home_server': 'citrix-mwyqyqaa', 'domid': '-1', 'platform': {'timeoffset': '0', 'videoram': '8', 'hpet': 'true', 'secureboot': 'false', 'device-model': 'qemu-upstream-compat', 'apic': 'true', 'device_id': '0001', 'vga': 'std', 'nx': 'true', 'pae': 'true', 'viridian': 'false', 'acpi': '1'}, 'other_config': {'base_template_name': 'Ubuntu Focal Fossa 20.04', 'import_task': 'OpaqueRef:3b6061a0-a204-4ed4-be20-842a359a70fa', 'mac_seed': '3c56a628-0f68-34f9-fe98-4bf2214a5891', 'install-methods': 'cdrom,nfs,http,ftp', 'linux_template': 'true'}, 'xenstore_data': {'vm-data': '', 'vm-data/mmio-hole-size': '268435456'}, 'customization_agent': 'custom'}, 'invocation': {'module_args': {'hostname': '192.168.0.187', 'username': 'root', 'password': 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER', 'name': 'Ubuntu 20 2', 'validate_certs': True, 'uuid': None}}, 'item': 'Ubuntu 20 2', 'ansible_loop_var': 'item'}) => {
"msg": [
{
"gateway": "",
"gateway6": "",
"ip": "192.168.0.3",
"ip6": [],
"mac": "e6:cd:ca:ff:c3:e0",
"mtu": "1500",
"name": "Pool-wide network associated with eth0",
"netmask": "",
"prefix": "",
"prefix6": "",
"vif_device": "0"
}
]
}
PLAY RECAP *********************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
I tried accessing just the IP, but it hasn't been working well since it is stored in a list.
Again, my end goal is to have Ansible spit out a list of IPs like so:
192.168.0.2
192.168.0.3
Or, even better, to store these in an Ansible list for later use. Any help would be much appreciated.
If the only real problem you have is that the IPs are in a list, and you know (because of the way you have configured your VMs) that the list always has only one entry, then just get the first item in the list:
item.instance.networks[0].ip
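If a VM could report more than one network, or none with an address assigned yet, a slightly more defensive expression (my own sketch, not part of the original answer) picks the first non-empty ip field:
item.instance.networks | map(attribute='ip') | select | first | default('')
The bare select keeps only truthy values, so empty ip strings are skipped.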
If you want to then use these IP addresses to do some Ansible work on the VMs, I suggest that you use add_host to build a new inventory for a second play in your playbook:
- name: Manage VMs
  connection: local
  hosts: localhost
  # Hypervisor server info from vars file
  vars_files:
    - xen_vars.yml
  tasks:
    - name: Gather facts
      xenserver_guest_info:
        hostname: "{{ xen_address }}"
        username: "{{ admin_username }}"
        password: "{{ admin_password }}"
        name: "{{ item }}"
      loop: "{{ xen_machines }}"
      register: facts
    # - name: Get IP's of VM's
    - debug:
        msg: "{{ item.instance.networks[0].ip }}"
      loop: "{{ facts.results }}"
    - name: Build inventory of VMs
      add_host:
        name: "{{ item.instance.networks[0].ip }}"
        groups: vms
        # You can add other variables per-host here if you want
      loop: "{{ facts.results }}"
      loop_control:
        label: "{{ item.instance.name }}"

- name: Do stuff directly to the VMs
  hosts: vms  # This is the group you just created
  connection: ssh
  tasks:
    - debug:
        msg: "Hello from a VM"
The tasks below
- set_fact:
    ips: "{{ facts.results |
             json_query('[].instance.networks[].ip') }}"
- debug:
    var: ips
should give
ips:
  - 192.168.0.2
  - 192.168.0.3
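If jmespath (required by json_query) is not installed on the controller, the same list can usually be built with core Jinja2 filters instead; a minimal sketch under that assumption:
- set_fact:
    ips: "{{ facts.results
             | map(attribute='instance.networks')
             | flatten
             | map(attribute='ip')
             | select
             | list }}"
The trailing select drops entries whose ip is still an empty string (for example, VMs that are powered off).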

How to extract items with Ansible from stdout using json_query

I am executing a shell script with Ansible which returns JSON output.
- name: Get mlist
  become: no
  shell: "PYTHONPATH=/home/centos/scripts/users/ python /home/centos/scripts/users/team_members.py {{ parameter }}"
  register: account_list
The output looks like this:
ok: [localhost] => {
"msg": {
"changed": true,
"cmd": "PYTHONPATH=/home/centos/scripts/users/ python /home/centos/scripts/users/team_members.py parameter",
"delta": "0:00:00.530377",
"end": "2019-10-09 08:28:20.222480",
"failed": false,
"rc": 0,
"start": "2019-10-09 08:28:19.692103",
"stderr": "2019-10-09 08:28:19,915 INFO",
"stderr_lines": [
"2019-10-09 08:28:19,915 INFO"
],
"stdout": "[{'id': 'XXX=', 'name': 'XXX', 'login': 'xxx'}, {'id': 'YYY', 'name': 'YYY', 'login': 'yyy'}, {'id': 'ZZZ', 'name': 'zzz', 'login': 'zzz'}]",
"stdout_lines": [
"[{'id': 'XXX=', 'name': 'XXX', 'login': 'xxx'}, {'id': 'YYY', 'name': 'YYY', 'login': 'yyy'}, {'id': 'ZZZ', 'name': 'ZZZ', 'login': 'zzz'}]"
]
}
}
What I would like to do is extract all the login items. So far I've tried various approaches:
- debug:
    msg: "{{ account_list.stdout | to_json | json_query('[*].login') }}"
But this is not working. When I put the .stdout into the JSONPath Online Evaluator, .[*].login does what I want; I just can't achieve the same with Ansible's json_query. Does anybody know how to do that?
Thank you very much in advance.
I found the solution. I had to update the Python script to actually output JSON, as previously it didn't produce valid JSON (note the use of ' instead of "), and then
- debug:
    msg: "{{ item }}"
  with_items: "{{ account_list.stdout | from_json | json_query('[*].login') }}"
worked correctly
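For completeness, the same extraction also works without json_query (and therefore without the jmespath dependency on the controller); a small sketch, assuming the script prints valid JSON as described:
- debug:
    msg: "{{ account_list.stdout | from_json | map(attribute='login') | list }}"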

Setting multiple values in sysctl with Ansible

I have a playbook with several tasks setting sysctl values. Instead of having a separate task for each setting, how can I set all the values with one task, using the sysctl module?
Playbook snippet:
- name: Set tcp_keepalive_probes in sysctl
  become: yes
  sysctl:
    name: net.ipv4.tcp_keepalive_probes
    value: 3
    state: present
    reload: yes
- name: Set tcp_keepalive_intvl in sysctl
  become: yes
  sysctl:
    name: net.ipv4.tcp_keepalive_intvl
    value: 10
    state: present
    reload: yes
- name: Set rmem_default in sysctl
  become: yes
  sysctl:
    name: net.core.rmem_default
    value: 16777216
    state: present
    reload: yes
Define all the variables in a vars file, e.g.
sysctl:
  - name: test
    value: test
  ...
  ...
Playbook:
- hosts: "{{ }}"
  tasks:
    - name: update sysctl param
      sysctl:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
        state: present
        reload: yes
      with_items: "{{ sysctl }}"
Simple solution: define the variable as a dict.
Example playbook:
---
- hosts: all
  gather_facts: false
  become: true
  vars:
    ansible_python_interpreter: /usr/bin/python3
    sysctl_config:
      net.ipv4.ip_forward: 1
      net.ipv4.conf.all.forwarding: 1
      net.ipv6.conf.all.forwarding: 1
  tasks:
    - name: Change various sysctl-settings
      sysctl:
        name: '{{ item.key }}'
        value: '{{ item.value }}'
        sysctl_set: yes
        state: present
        reload: yes
        ignoreerrors: yes
      with_dict: '{{ sysctl_config }}'
Output:
TASK [Change various sysctl-settings] **********************************************************************************************************************************************************************
changed: [10.10.10.10] => (item={'key': 'net.ipv4.ip_forward', 'value': 1}) => {
"ansible_loop_var": "item",
"changed": true,
"item": {
"key": "net.ipv4.ip_forward",
"value": 1
}
}
changed: [10.10.10.10] => (item={'key': 'net.ipv4.conf.all.forwarding', 'value': 1}) => {
"ansible_loop_var": "item",
"changed": true,
"item": {
"key": "net.ipv4.conf.all.forwarding",
"value": 1
}
}
changed: [10.10.10.10] => (item={'key': 'net.ipv6.conf.all.forwarding', 'value': 1}) => {
"ansible_loop_var": "item",
"changed": true,
"item": {
"key": "net.ipv6.conf.all.forwarding",
"value": 1
}
}
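On current Ansible versions the same loop is often written with loop plus the dict2items filter and the fully qualified module name from the ansible.posix collection; a sketch assuming that collection is installed:
- name: Change various sysctl-settings
  ansible.posix.sysctl:
    name: "{{ item.key }}"
    value: "{{ item.value }}"
    sysctl_set: yes
    state: present
    reload: yes
  loop: "{{ sysctl_config | dict2items }}"
  loop_control:
    label: "{{ item.key }}"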

Resources