How to calculate ansible_uptime_seconds and output this in os.csv - ansible

I am trying to create a CSV file that can be used to review certain system details. One of these items is the system uptime, which is reported in plain seconds. In the os.csv output file I would like to see it as days, HH:MM:SS instead.
Below is my YAML playbook:
---
- name: playbook query system and output to file
  hosts: OEL7_systems
  vars:
    output_file: os.csv
  tasks:
    - block:
        # For permission setup.
        - name: get current user
          command: whoami
          register: whoami
          run_once: yes
        - name: clean_file
          copy:
            dest: "{{ output_file }}"
            content: 'hostname,distribution,version,release,uptime'
            owner: "{{ whoami.stdout }}"
          run_once: yes
        - name: fill os information
          lineinfile:
            path: "{{ output_file }}"
            line: "{{ ansible_hostname }},\
                   {{ ansible_distribution }},\
                   {{ ansible_distribution_version }},\
                   {{ ansible_distribution_release }},\
                   {{ ansible_uptime_seconds }}"
      # Tries to prevent concurrent writes.
      throttle: 1
      delegate_to: localhost
I have tried several conversions but can't get it to work. Any help is appreciated.

There is actually a (somewhat hard to find) example in the official documentation on complex data manipulations doing exactly what you are looking for (check at the bottom of the page).
Here is a full example playbook to run it on localhost
---
- hosts: localhost
  tasks:
    - name: Show the uptime in days/hours/minutes/seconds
      ansible.builtin.debug:
        msg: Uptime {{ now().replace(microsecond=0) - now().fromtimestamp(now(fmt='%s') | int - ansible_uptime_seconds) }}
which gives on my machine:
PLAY [localhost] ************************************************************************************************************************************************
TASK [Gathering Facts] ******************************************************************************************************************************************
ok: [localhost]
TASK [Show the uptime in days/hours/minutes/seconds] ************************************************************************************************************
ok: [localhost] => {
"msg": "Uptime 1 day, 3:56:34"
}
PLAY RECAP ******************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
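To plug that expression straight into the CSV from the question, you can use it in place of the raw seconds in the fill os information task. One caveat: the rendered value itself contains a comma (e.g. "1 day, 3:56:34"), so strip or quote it to keep the CSV parseable. A minimal sketch based on the task from the question:
- name: fill os information
  lineinfile:
    path: "{{ output_file }}"
    line: "{{ ansible_hostname }},\
           {{ ansible_distribution }},\
           {{ ansible_distribution_version }},\
           {{ ansible_distribution_release }},\
           {{ (now().replace(microsecond=0) - now().fromtimestamp(now(fmt='%s') | int - ansible_uptime_seconds))
              | string | replace(',', '') }}"
  # Same write serialization as in the original playbook.
  throttle: 1
  delegate_to: localhost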

Related

Ansible: How to check multiple servers for a text file value, to decide which servers to run the script on?

I am trying to ask Ansible to check whether a server is passive or active based on the value of a specific file on each server; Ansible will then decide which server it runs the next script on.
For example with 2 servers:
Server1
cat /tmp/currentstate
PASSIVE
Server2
cat /tmp/currentstate
ACTIVE
In Ansible
Trigger next set of jobs on server where the output was ACTIVE.
Once the jobs complete, trigger next set of jobs on server where output was PASSIVE
What I have done so far to grab the state, and output the value to Ansible is
- hosts: "{{ hostname1 | mandatory }}"
gather_facts: no
tasks:
- name: Grab state of first server
shell: |
cat {{ ans_script_path }}currentstate.log
register: state_server1
- debug:
msg: "{{ state_server1.stdout }}"
- hosts: "{{ hostname2 | mandatory }}"
gather_facts: no
tasks:
- name: Grab state of second server
shell: |
cat {{ ans_script_path }}currentstate.log
register: state_server2
- debug:
msg: "{{ state_server2.stdout }}"
What I have done so far to trigger the script
- hosts: "{{ active_hostname | mandatory }}"
tasks:
- name: Run the shutdown on active server first
shell: sh {{ ans_script_path }}stopstart_terracotta_main.sh shutdown
register: run_result
- debug:
msg: "{{ run_result.stdout }}"
- hosts: "{{ passive_hostname | mandatory }}"
tasks:
- name: Run the shutdown on passive server first
shell: sh {{ ans_script_path }}stopstart_terracotta_main.sh shutdown
register: run_result
- debug:
msg: "{{ run_result.stdout }}"
but I don't know how to set the value of active_hostname & passive_hostname based on the value from the script above.
How can I set the Ansible variable of active_hostname & passive_hostname based on the output of the first section?
A better solution that comes to mind is to add the hosts to new groups according to their state.
This also scales better in case there are more than two hosts.
- hosts: all
  gather_facts: no
  vars:
    ans_script_path: /tmp/
  tasks:
    - name: Grab state of server
      shell: |
        cat {{ ans_script_path }}currentstate.log
      register: server_state
    - add_host:
        hostname: "{{ item }}"
        # every host will be added to a new group according to its state
        groups: "{{ 'active' if hostvars[item].server_state.stdout == 'ACTIVE' else 'passive' }}"
        # Shorter, but the new groups will be in capital letters
        # groups: "{{ hostvars[item].server_state.stdout }}"
      loop: "{{ ansible_play_hosts }}"
      changed_when: false
    - name: show the groups the host(s) are in
      debug:
        msg: "{{ group_names }}"

- hosts: active
  gather_facts: no
  tasks:
    - name: Run the shutdown on active server first
      shell: hostname -f # changed that for debugging
      register: run_result
    - debug:
        msg: "{{ run_result.stdout }}"

- hosts: passive
  gather_facts: no
  tasks:
    - name: Run the shutdown on passive server first
      shell: hostname -f
      register: run_result
    - debug:
        msg: "{{ run_result.stdout }}"
test-001 is PASSIVE
test-002 is ACTIVE
PLAY [all] ***************************************************************
TASK [Grab state of server] **********************************************
ok: [test-002]
ok: [test-001]
TASK [add_host] **********************************************************
ok: [test-001] => (item=test-001)
ok: [test-001] => (item=test-002)
TASK [show the groups the host(s) are in] ********************************
ok: [test-001] => {
"msg": [
"passive"
]
}
ok: [test-002] => {
"msg": [
"active"
]
}
PLAY [active] *************************************************************
TASK [Run the shutdown on active server first] ****************************
changed: [test-002]
TASK [debug] **************************************************************
ok: [test-002] => {
"msg": "test-002"
}
PLAY [passive] ************************************************************
TASK [Run the shutdown on passive server first] ****************************
changed: [test-001]
TASK [debug] **************************************************************
ok: [test-001] => {
"msg": "test-001"
}
PLAY RECAP ****************************************************************
test-001 : ok=5 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
test-002 : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
For example, given two remote hosts
shell> ssh admin@test_11 cat /tmp/currentstate.log
ACTIVE
shell> ssh admin@test_13 cat /tmp/currentstate.log
PASSIVE
The playbook below reads the files and runs the commands on active and passive servers
shell> cat pb.yml
- hosts: "{{ host1 }},{{ host2 }}"
gather_facts: false
vars:
server_states: "{{ dict(ansible_play_hosts|
zip(ansible_play_hosts|
map('extract', hostvars, ['server_state', 'stdout'])|
list)) }}"
server_active: "{{ server_states|dict2items|
selectattr('value', 'eq', 'ACTIVE')|
map(attribute='key')|list }}"
server_pasive: "{{ server_states|dict2items|
selectattr('value', 'eq', 'PASSIVE')|
map(attribute='key')|list }}"
tasks:
- command: cat /tmp/currentstate.log
register: server_state
- debug:
var: server_state.stdout
- block:
- debug:
var: server_states
- debug:
var: server_active
- debug:
var: server_pasive
run_once: true
- command: echo 'Shutdown active server'
register: out_active
delegate_to: "{{ server_active.0 }}"
- command: echo 'Shutdown passive server'
register: out_pasive
delegate_to: "{{ server_pasive.0 }}"
- debug:
msg: |
{{ server_active.0 }}: [{{ out_active.stdout }}] {{ out_active.start }}
{{ server_pasive.0 }}: [{{ out_pasive.stdout }}] {{ out_pasive.start }}
run_once: true
shell> ansible-playbook pb.yml -e host1=test_11 -e host2=test_13
PLAY [test_11,test_13] ***********************************************************************
TASK [command] *******************************************************************************
changed: [test_13]
changed: [test_11]
TASK [debug] *********************************************************************************
ok: [test_11] =>
server_state.stdout: ACTIVE
ok: [test_13] =>
server_state.stdout: PASSIVE
TASK [debug] *********************************************************************************
ok: [test_11] =>
server_states:
test_11: ACTIVE
test_13: PASSIVE
TASK [debug] *********************************************************************************
ok: [test_11] =>
server_active:
- test_11
TASK [debug] *********************************************************************************
ok: [test_11] =>
server_pasive:
- test_13
TASK [command] *******************************************************************************
changed: [test_11]
changed: [test_13 -> test_11]
TASK [command] *******************************************************************************
changed: [test_11 -> test_13]
changed: [test_13]
TASK [debug] *********************************************************************************
ok: [test_11] =>
msg: |-
test_11: [Shutdown active server] 2022-10-27 11:16:00.766309
test_13: [Shutdown passive server] 2022-10-27 11:16:02.501907
PLAY RECAP ***********************************************************************************
test_11: ok=8 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
test_13: ok=4 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
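If there is a chance that no host reports ACTIVE (or PASSIVE), server_active.0 and server_pasive.0 would be undefined. A small guard before the delegated commands avoids that; a sketch using the variables defined above:
- name: Make sure both states were found (sketch)
  assert:
    that:
      - server_active | length > 0
      - server_pasive | length > 0
    fail_msg: "Expected at least one ACTIVE and one PASSIVE server in {{ server_states }}"
  run_once: true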
From the description of your use case I understand that you would like to perform tasks on certain servers which have a service role installed (namely Terracotta Server), based on a certain service state.
Therefore, I'd like to recommend an approach with Custom facts.
Depending on whether you have control over where currentstate.log is placed or how it is structured, you could, for example, use something like
cat /tmp/ansible/service/terracotta.fact
[currentstate]
ACTIVE = true
PASSIVE = false
or add dynamic facts by adding executable scripts to facts.d ...
In other words, you can alternatively add the current service state to your host facts by creating and running a script in facts.d which just reads the content of /tmp/currentstate.log.
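For instance, such a dynamic fact could be put in place with a copy task. This is only a sketch: the destination matches the fact_path used below, executable .fact scripts must print JSON, and the result then shows up as ansible_local.terracotta.currentstate (e.g. 'ACTIVE') rather than the boolean keys of the static file above.
- name: Deploy a dynamic fact script that reads the current state (sketch)
  copy:
    dest: /tmp/ansible/service/terracotta.fact
    mode: "0755"
    content: |
      #!/bin/sh
      # Executable fact: print JSON built from the state file
      echo "{\"currentstate\": \"$(cat /tmp/currentstate.log)\"}"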
Then, a sample playbook like
---
- hosts: localhost
  become: false
  gather_facts: true
  fact_path: /tmp/ansible/service
  gather_subset:
    - "!all"
    - "!min"
    - "local"
  tasks:
    - name: Show Gathered Facts
      debug:
        msg: "{{ ansible_facts }}"
      when: ansible_local.terracotta.currentstate.active | bool
will result into an output of
TASK [Show Gathered Facts] ******
ok: [localhost] =>
  msg:
    ansible_local:
      terracotta:
        currentstate:
          active: 'true'
          passive: 'false'
    gather_subset:
    - '!all'
    - '!min'
    - local
    module_setup: true
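Within a play that gathers these local facts, the state can also be turned into dynamic groups so that later plays simply target the active hosts; a minimal sketch assuming the terracotta.fact structure shown above:
- name: Group hosts by their current service state (sketch)
  group_by:
    key: "terracotta_{{ 'active' if ansible_local.terracotta.currentstate.active | bool else 'passive' }}"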
Another approach is to address how the inventory is built and group the hosts
[terracotta:children]
terracotta_active
terracotta_passive
[terracotta_active]
terracotta1.example.com
[terracotta_passive]
terracotta2.example.com
You can then easily define where a playbook or task should run, simply by targeting hosts and groups
ansible-inventory -i hosts --graph
@all:
  |--@terracotta:
  |  |--@terracotta_active:
  |  |  |--terracotta1.example.com
  |  |--@terracotta_passive:
  |  |  |--terracotta2.example.com
  |--@ungrouped:
ansible-inventory -i hosts terracotta_active --graph
@terracotta_active:
  |--terracotta1.example.com
or use a conditional based on group membership, for example
when: "'terracotta_active' in group_names"
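Either way, the shutdown task from the question can then be limited to the active node without detecting the state at runtime; a minimal sketch reusing ans_script_path from the question:
- hosts: terracotta_active
  gather_facts: false
  tasks:
    - name: Run the shutdown on the active server(s)
      shell: sh {{ ans_script_path }}stopstart_terracotta_main.sh shutdown
      # equivalent conditional if the play must keep "hosts: all":
      # when: "'terracotta_active' in group_names"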
... from my understanding, both would be minimal and simple solutions that avoid re-implementing functionality which already seems to be there.

Play recap and ignored=1 in Ansible

I am checking whether all services related to a docker-compose.yml are running on a system. The code snippet shown below.
---
- name:
  shell: docker-compose ps -q "{{ item }}"
  register: result
  ignore_errors: yes
The code above works as expected. I have to ignore errors otherwise Ansible will not complete. The following result shows ignored=1
PLAY RECAP *******************************************************************************************************
192.168.50.219 : ok=38 changed=12 unreachable=0 failed=0 skipped=0 rescued=0 ignored=1
If this completes successfully I want to run a subsequent playbook but don't know how to specify ignored=1 correctly.
---
- name:
  include_tasks: do_other_things.yml
  when: ignored is false
How do I get the result from PLAY RECAP into something I can test with?
A better idea than trying to cope with the error returned by docker-compose ps when the container does not exist would be to use the purpose-built module docker_container_info to achieve the same.
Given the playbook:
- hosts: localhost
  gather_facts: no
  tasks:
    - docker_container_info:
        name: "{{ item }}"
      register: containers
      loop:
        - node1   # exists
        - node404 # does not exist
    - debug:
        msg: "`{{ item.item }}` is not started"
      loop: "{{ containers.results }}"
      loop_control:
        label: "{{ item.item }}"
      when: not item.exists
This would yield:
TASK [docker_container_info] *************************************************
ok: [localhost] => (item=node1)
ok: [localhost] => (item=node404)
TASK [debug] *****************************************************************
skipping: [localhost] => (item=node1)
ok: [localhost] => (item=node404) =>
msg: `node404` is not started
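Coming back to the original goal, the follow-up tasks can then be gated on the registered results instead of on the ignored counter from the recap; a minimal sketch keeping the containers register and the do_other_things.yml name from the question:
- name: Run the next steps only when every container exists (sketch)
  include_tasks: do_other_things.yml
  when: containers.results | rejectattr('exists') | list | length == 0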

How to delegate facts to localhost from a play targeting remote hosts

ansible version: 2.9.16 running on RHEL 7.9, python ver = 2.7.5, targeting Windows 2016 servers (it should behave the same for Linux target servers too).
EDIT: Switched to using host-specific variables in the inventory to avoid the confusion that I am just trying to print the hostnames of a group. Even here it is a gross simplification; pretend that var1 is obtained dynamically for each server instead of being declared in the inventory file.
My playbook has two plays. One targets 3 remote servers (note: serial: 0, i.e. concurrently) and the other just the localhost. In Play1 I am trying to delegate facts obtained from each of these hosts to the localhost using delegate_facts and delegate_to. The intent is to have these facts delegated to a single host (localhost) so I can use them later in Play2 (using hostvars) that targets the localhost. But strangely that's not working: it only has information from the last host from Play1.
Any help will be greatly appreciated.
my inventory file inventory/test.ini looks like this:
[my_servers]
svr1 var1='abc'
svr2 var1='xyz'
svr3 var1='pqr'
My Code:
## Play1
- name: Main play that runs against multiple remote servers and builds a list.
  hosts: 'my_servers' # my inventory group that contains 3 servers svr1,svr2,svr3
  any_errors_fatal: false
  ignore_unreachable: true
  gather_facts: true
  serial: 0
  tasks:
    - name: initialize my_server_list as a list and delegate to localhost
      set_fact:
        my_server_list: []
      delegate_facts: yes
      delegate_to: localhost
    - command: /root/complex_script.sh
      register: result
    - set_fact:
        my_server_list: "{{ my_server_list + hostvars[inventory_hostname]['result.stdout'] }}"
      # run_once: true ## Commented as I need to query the hostvars for each host where this executes.
      delegate_to: localhost
      delegate_facts: true
    - name: "Print list - 1"
      debug:
        msg:
          - "{{ hostvars['localhost']['my_server_list'] | default(['NotFound']) | to_nice_yaml }}"
      # run_once: true
    - name: "Print list - 2"
      debug:
        msg:
          - "{{ my_server_list | default(['NA']) }}"

## Play2
- name: Print my_server_list which was built in Play1
  hosts: localhost
  gather_facts: true
  serial: 0
  tasks:
    - name: "Print my_server_list without hostvars "
      debug:
        msg:
          - "{{ my_server_list | to_nice_json }}"
      # delegate_to: localhost
    - name: "Print my_server_list using hostvars"
      debug:
        msg:
          - "{{ hostvars['localhost']['my_server_list'] | to_nice_yaml }}"
      # delegate_to: localhost
###Output###
$ ansible-playbook -i inventory/test.ini delegate_facts.yml
PLAY [Main playbook that runs against multiple remote servers and builds a list.] ***********************************************************************************************************
TASK [Gathering Facts] **********************************************************************************************************************************************************************
ok: [svr3]
ok: [svr1]
ok: [svr2]
TASK [initialize] ***************************************************************************************************************************************************************************
ok: [svr1]
ok: [svr2]
ok: [svr3]
TASK [Build a list of servers] **************************************************************************************************************************************************************
ok: [svr1]
ok: [svr2]
ok: [svr3]
TASK [Print list - 1] ***********************************************************************************************************************************************************************
ok: [svr1] =>
msg:
- |-
- pqr
ok: [svr2] =>
msg:
- |-
- pqr
ok: [svr3] =>
msg:
- |-
- pqr
TASK [Print list - 2] ***********************************************************************************************************************************************************************
ok: [svr1] =>
msg:
- - NA
ok: [svr2] =>
msg:
- - NA
ok: [svr3] =>
msg:
- - NA
PLAY [Print my_server_list] *****************************************************************************************************************************************************************
TASK [Gathering Facts] **********************************************************************************************************************************************************************
ok: [localhost]
TASK [Print my_server_list without hostvars] ************************************************************************************************************************************************
ok: [localhost] =>
msg:
- |-
[
"pqr"
]
TASK [Print my_server_list using hostvars] **************************************************************************************************************************************************
ok: [localhost] =>
msg:
- |-
- pqr
PLAY RECAP **********************************************************************************************************************************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
svr1 : ok=5 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
svr2 : ok=5 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
svr3 : ok=5 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Playbook run took 0 days, 0 hours, 0 minutes, 13 seconds
###Expected Output###
I was expecting the last two debug statements in Play2 to contain the values of var1 for all the servers, something like this:
TASK [Print my_server_list using hostvars] **************************************************************************************************************************************************
ok: [localhost] =>
msg:
- |-
- abc
- xyz
- pqr
Use Special Variables, e.g.
- hosts: all
  gather_facts: false
  tasks:
    - set_fact:
        my_server_list: "{{ ansible_play_hosts_all }}"
      run_once: true
      delegate_to: localhost
      delegate_facts: true

- hosts: localhost
  gather_facts: false
  tasks:
    - debug:
        var: my_server_list
gives
ok: [localhost] =>
my_server_list:
- svr1
- svr2
- svr3
There are many other ways to create the list, e.g.
- hosts: all
  gather_facts: false
  tasks:
    - debug:
        msg: "{{ groups.my_servers }}"
      run_once: true

- hosts: all
  gather_facts: false
  tasks:
    - debug:
        msg: "{{ hostvars|json_query('*.inventory_hostname') }}"
      run_once: true
Q: "Fill the list with outputs gathered by running complex commands."
A: The last example above shows how to create a list from hostvars. Register the result of the complex command, e.g.
shell> ssh admin@srv1 cat /root/complex_script.sh
#!/bin/sh
ifconfig wlan0 | grep inet | cut -w -f3
The playbook
- hosts: all
  gather_facts: false
  tasks:
    - command: /root/complex_script.sh
      register: result
    - set_fact:
        my_server_list: "{{ hostvars|json_query('*.result.stdout') }}"
      run_once: true
      delegate_to: localhost
      delegate_facts: true

- hosts: localhost
  gather_facts: false
  tasks:
    - debug:
        var: my_server_list
gives
my_server_list:
- 10.1.0.61
- 10.1.0.62
- 10.1.0.63
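If the jmespath Python library required by json_query is not installed on the controller, the same list can be built with core filters only; a sketch reusing the extract pattern shown in the earlier answer:
- set_fact:
    my_server_list: "{{ ansible_play_hosts
                        | map('extract', hostvars, ['result', 'stdout'])
                        | list }}"
  run_once: true
  delegate_to: localhost
  delegate_facts: true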
Q: "Why the logic of delegating facts to localhost and keep appending them to that list does not work?"
A: The code below (simplified) can't work because the right-hand-side msl value still comes from the hostvars of the inventory_host despite the fact delegate_facts: true. This merely puts the created variable msl into the localhost's hostvars
- hosts: my_servers
  tasks:
    - set_fact:
        msl: "{{ msl|default([]) + [inventory_hostname] }}"
      delegate_to: localhost
      delegate_facts: true
Quoting from Delegating facts
To assign gathered facts to the delegated host instead of the current host, set delegate_facts to true
As a result of such code, the variable msl will keep the last assigned value only.

How to store a full host name as a variable in Ansible?

I need to use one of two hosts as a variable. I have inventory_hostname_short for both, but I need the full host name as a variable. Currently, for testing, I am using a hardcoded value. My playbook will run on both hosts at the same time, so how can I identify the right host and store it as a variable?
host_1_full = 123.abc.de.com
host_2_full = 345.abc.de.com
Both of the above are hosts, and this is what I have:
---
- name: Ansible Script
  hosts: all
  vars:
    host1_short: '123'
    host2_short: '345'
  tasks:
    - name: set host
      set_fact:
        host1_full: "{{ inventory_hostname }}"
      when: inventory_hostname_short == host1_short
    - name: print info
      debug:
        msg: "host - {{ host1_full }}"
    - name: block1
      block:
        - name: running PS1 file
          win_shell: "script.ps1"
          register: host1_output
      when: inventory_hostname_short == host1_short
    - name: block2
      block:
        - name: set host
          set_fact:
            IN_PARA: "{{ hostvars[host1_full]['host1_output']['stdout'] }}"
        - name: running PS1 file
          win_shell: "main.ps1 -paramater {{ IN_PARA }}"
          register: output
      when: inventory_hostname_short == host2_short
So, to access any file from a different host, the full hostname is required. How can I get that full host name?
Given the following inventories/test_inventory.yml
---
all:
  hosts:
    123.abc.de.com:
    345.abc.de.com:
ansible will provide the needed result in inventory_hostname automagically as demonstrated by the following test.yml playbook
---
- name: print long and short inventory name
  hosts: all
  gather_facts: false
  tasks:
    - name: print info
      debug:
        msg: "Host full name is {{ inventory_hostname }}. Short name is {{ inventory_hostname_short }}"
which gives:
$ ansible-playbook -i inventories/test_inventory.yml test.yml
PLAY [print long and short inventory name] *********************************************************************************************************************************************************************************************
TASK [print info] **********************************************************************************************************************************************************************************************************************
ok: [345.abc.de.com] => {
"msg": "Host full name is 345.abc.de.com. Short name is 345"
}
ok: [123.abc.de.com] => {
"msg": "Host full name is 123.abc.de.com. Short name is 123"
}
PLAY RECAP *****************************************************************************************************************************************************************************************************************************
123.abc.de.com : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
345.abc.de.com : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
As already suggested, the ansible_hostname and ansible_fqdn are automatic facts about hosts in your inventory. However, your requirement is a little unique. So we might have to apply a unique method to accomplish this. How about something like this?
Consider example inventory as below:
---
all:
  hosts:
    192.168.1.101: # 123.abc.de.com
    192.168.1.102: # 345.abc.de.com
We can have a play like this:
- hosts: all
  vars:
    host1_short: '123'
    host2_short: '345'
  tasks:
    - command: "hostname -f"
      register: result
    - block:
        - set_fact:
            host1_full: "{{ result.stdout }}"
        - debug:
            msg: "Host1 fullname: {{ host1_full }}"
      when: host1_short in result.stdout
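To make host1's full name usable from host2 as well (e.g. for the hostvars lookup in the question), you can build a short-name to full-name map from the play's hosts using only magic variables; a sketch in which host1_short, host2_short and host1_output are the names from the question:
- name: Map short names to full inventory names (sketch)
  set_fact:
    short_to_full: "{{ dict(ansible_play_hosts_all
                            | map('extract', hostvars, 'inventory_hostname_short')
                            | zip(ansible_play_hosts_all)) }}"
- name: Read host1's registered output from host2 via the map
  set_fact:
    IN_PARA: "{{ hostvars[short_to_full[host1_short | string]]['host1_output']['stdout'] }}"
  when: inventory_hostname_short == host2_short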

How to change play_hosts from playbook

I have a file with two plays. The inventory is generated dynamically and there is no possibility to change it before starting the playbook.
Run with the command:
ansible-playbook -b adapter.yml --limit=host_group
adapter.yml
- name: Prepare stage
  hosts: all
  # The problem is that the inventory contains hosts in the format "x.x.x.x", i.e. physical address.
  # I need to run a third-party role.
  # But, it needs hosts in the format "instance-alias", that is, the name of the instance.
  tasks:
    # for this I create a new host group
    - name: Add host in new format
      add_host:
        name: "{{ item.alias }}"
        host: "{{ item.ansible_host }}"
        groups: new_format_hosts
      with_items: "{{ groups.all }}"
    # I create a new play host group that matches the previous one in a new format
    - name: Compose new play_hosts group
      add_host:
        name: "{{ item.alias }}"
        groups: new_play_hosts
      when: item.ansible_host in play_hosts
      with_items: "{{ groups.all }}"

- name: Management stage
  hosts: new_format_hosts
  # in this play I want to change the composition
  # of the target hosts and launch an external role
  vars:
    hostvars: "{{ hostvars }}"
    play_hosts: "{{ groups.new_play_hosts }}" # THIS DONT WORK
  tasks:
    - name: Run external role
      import_role:
        name: role_name
        tasks_from: file_name
But I can't change play_hosts so that the launched role uses only the new hosts.
How can I fix it?
This works for me:
- name: Prepare stage
  hosts: localhost
  tasks:
    - name: Show hostvars
      debug:
        msg: "{{ hostvars[item]['ansible_host'] }}"
      with_items: "{{ groups.all }}"
    # for this I create a new host group
    - name: Add host in new format
      add_host:
        name: "{{ hostvars[item].alias }}"
        ansible_host: "{{ hostvars[item].ansible_host }}"
        groups: new_format_hosts
      with_items: "{{ groups.all }}"

- name: Management stage
  hosts: new_format_hosts
  tasks:
    - name: Ping New Format Hosts
      ping:
    - name: Show ansible_host for each host
      debug:
        var: ansible_host
    - name: Show playhosts
      debug:
        var: play_hosts
      delegate_to: localhost
      run_once: yes
This assumes, of course, that alias and ansible_host are set for all the hosts.
The hosts were:
AnsibleTower ansible_host=192.168.124.8 alias=fred
192.168.124.111 ansible_host=192.168.124.111 alias=barney
jaxsatB ansible_host=192.168.124.111 alias=wilma
The relevant output of the playbook was:
PLAY [Management stage] *****************************************************************************************************************************************************************
TASK [Gathering Facts] ******************************************************************************************************************************************************************
Friday 29 May 2020 18:09:26 -0400 (0:00:00.057) 0:00:01.332 ************
ok: [fred]
ok: [barney]
ok: [wilma]
TASK [New Format Hosts] *****************************************************************************************************************************************************************
Friday 29 May 2020 18:09:30 -0400 (0:00:04.308) 0:00:05.641 ************
ok: [barney]
ok: [wilma]
ok: [fred]
TASK [Show ansible_host] ****************************************************************************************************************************************************************
Friday 29 May 2020 18:09:30 -0400 (0:00:00.450) 0:00:06.091 ************
ok: [fred] => {
"ansible_host": "192.168.124.8"
}
ok: [barney] => {
"ansible_host": "192.168.124.111"
}
ok: [wilma] => {
"ansible_host": "192.168.124.111"
}
TASK [Show playhosts] *******************************************************************************************************************************************************************
Friday 29 May 2020 18:09:30 -0400 (0:00:00.109) 0:00:06.200 ************
ok: [fred -> localhost] => {
"play_hosts": [
"fred",
"barney",
"wilma"
]
}
PLAY RECAP ******************************************************************************************************************************************************************************
barney : ok=3 changed=0 unreachable=0 failed=0
fred : ok=4 changed=0 unreachable=0 failed=0
localhost : ok=3 changed=1 unreachable=0 failed=0
wilma : ok=3 changed=0 unreachable=0 failed=0
Friday 29 May 2020 18:09:30 -0400 (0:00:00.030) 0:00:06.231 ************
===============================================================================
You do not need to set the play_hosts variable. That is set by the line hosts: new_format_hosts in the second play.
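If the second play should cover only the aliases of hosts that were inside the original --limit, the new_play_hosts group the question already builds can be targeted directly; a sketch keeping the role_name/file_name placeholders from the question:
- name: Management stage
  hosts: new_play_hosts   # only aliases of hosts that were part of the first play
  tasks:
    - name: Run external role
      import_role:
        name: role_name
        tasks_from: file_name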
