Ansible, how to set a global fact using roles?

I'm trying to use Ansible to deploy a small k3s cluster with just two server nodes at the moment. Deploying the first server node, which I refer to as "master", is easy to set up with Ansible. However, setting up the second server node, which I refer to as "node", is giving me a challenge because I need to pull the value of the node-token from the master and use it in the k3s install command on the "node" VM.
I'm using Ansible roles, and this is what my playbook looks like:
- hosts: all
  roles:
    - { role: k3sInstall, when: 'server_type is defined' }
    - { role: k3sUnInstall, when: 'server_type is defined' }
This is my main.yml file from the k3sInstall role directory:
- name: Install k3s Server
  import_tasks: k3s_install_server.yml
  tags:
    - k3s_install
This is my k3s_install_server.yml:
---
- name: Install k3s Cluster
  block:
    - name: Install k3s Master Server
      become: yes
      shell: "{{ k3s_master_install_cmd }}"
      when: server_role == "master"

    - name: Get Node-Token file from master server.
      become: yes
      shell: cat {{ node_token_filepath }}
      when: server_role == "master"
      register: nodetoken

    - name: Print Node-Token
      when: server_role == "master"
      debug:
        msg: "{{ nodetoken.stdout }}"
        # msg: "{{ k3s_node_install_cmd }}"

    - name: Set Node-Token fact
      when: server_role == "master"
      set_fact:
        nodeToken: "{{ nodetoken.stdout }}"

    - name: Print Node-Token fact
      when: server_role == "node" or server_role == "master"
      debug:
        msg: "{{ nodeToken }}"

    # - name: Install k3s Node Server
    #   become: yes
    #   shell: "{{ k3s_node_install_cmd }}{{ nodeToken }}"
    #   when: server_role == "node"
I've commented out the Install k3s Node Server task because I'm not able to properly reference the nodeToken variable that I'm setting when server_role == "master".
This is the output of the debug:
TASK [k3sInstall : Print Node-Token fact] ***************************************************************************************************************************************************************************************************************************************************************************
ok: [server1] => {
"msg": "K10cf129cfedafcb083655a1780e4be994621086f780a66d9720e77163d36147051::server:aa2837148e402f675604a56602a5bbf8"
}
ok: [server2] => {
"msg": ""
}
My host file:
[p6dualstackservers]
server1 ansible_ssh_host=10.63.60.220
server2 ansible_ssh_host=10.63.60.221
And I have the following host_vars files assigned:
server1.yml:
server_role: master
server2.yml:
server_role: node
I've tried assigning the nodeToken variable inside of k3sInstall/vars/main.yml as well as one level up from the k3sInstall role inside group_vars/all.yml but that didn't help.
I tried searching for a way to use block-level variables but couldn't find anything.

If you set the variable on the master only, it's not available to the other hosts, e.g.
- hosts: master,node
  tasks:
    - set_fact:
        nodeToken: K10cf129cfedaf
      when: inventory_hostname == 'master'
    - debug:
        var: nodeToken
gives
ok: [master] =>
  nodeToken: K10cf129cfedaf
ok: [node] =>
  nodeToken: VARIABLE IS NOT DEFINED!
If you want to "apply all results and facts to all the hosts in the same batch" use run_once: true, e.g.
- hosts: master,node
  tasks:
    - set_fact:
        nodeToken: K10cf129cfedaf
      when: inventory_hostname == 'master'
      run_once: true
    - debug:
        var: nodeToken
gives
ok: [master] =>
  nodeToken: K10cf129cfedaf
ok: [node] =>
  nodeToken: K10cf129cfedaf
In your case, add 'run_once: true' to the task
- name: Set Node-Token fact
  set_fact:
    nodeToken: "{{ nodetoken.stdout }}"
  when: server_role == "master"
  run_once: true
The above code works because the condition when: server_role == "master" is evaluated before run_once: true takes effect. Quoting from the run_once documentation:
"Boolean that will bypass the host loop, forcing the task to attempt to execute on the first host available and afterward apply any results and facts to all active hosts in the same batch."
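The interplay of when and run_once can be sketched with a toy Python loop (this is a model of the host loop for illustration, not Ansible internals; the token value is the example one from above):

```python
# run_once: the task executes on the first host for which `when` holds,
# and the resulting fact is then applied to every host in the batch.
hosts = [{"name": "master", "server_role": "master"},
         {"name": "node", "server_role": "node"}]

eligible = [h for h in hosts if h["server_role"] == "master"]  # the `when` filter
token = "K10cf129cfedaf"  # what the task on eligible[0] would register
if eligible:
    for h in hosts:       # apply the result to all active hosts
        h["nodeToken"] = token

print([h["nodeToken"] for h in hosts])  # ['K10cf129cfedaf', 'K10cf129cfedaf']
```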
Safer code would be adding a standalone set_fact instead of relying on the interplay of the when: condition and run_once, e.g.
- set_fact:
    nodeToken: "{{ nodetoken.stdout }}"
  when: inventory_hostname == 'master'

- set_fact:
    nodeToken: "{{ hostvars['master'].nodeToken }}"
  run_once: true

Using when in this use case is probably not the best fit; you would probably be better off delegating some tasks to the so-called master server.
To define which server is the master, based on your inventory variable, you can delegate a fact to localhost, for example.
Then, to get the token from the file on the master server, you can delegate that task and its fact to this server only.
Given the playbook:
- hosts: all
  gather_facts: no
  tasks:
    - set_fact:
        master_node: "{{ inventory_hostname }}"
      when: server_role == 'master'
      delegate_to: localhost
      delegate_facts: true

    - set_fact:
        token: 12345678
      run_once: true
      delegate_to: "{{ hostvars.localhost.master_node }}"
      delegate_facts: true

    - debug:
        var: hostvars[hostvars.localhost.master_node].token
      when: server_role != 'master'
This yields the expected:
PLAY [all] ********************************************************************************************************
TASK [set_fact] ***************************************************************************************************
skipping: [node1]
ok: [node2 -> localhost]
skipping: [node3]
TASK [set_fact] ***************************************************************************************************
ok: [node1 -> node2]
TASK [debug] ******************************************************************************************************
skipping: [node2]
ok: [node1] =>
  hostvars[hostvars.localhost.master_node].token: '12345678'
ok: [node3] =>
  hostvars[hostvars.localhost.master_node].token: '12345678'
PLAY RECAP ********************************************************************************************************
node1 : ok=2 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
node2 : ok=1 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
node3 : ok=1 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0

Control ansible play execution inventory hosts

I need assistance making one particular task in a playbook run on all hosts.
I have the below task list inside a role:
---
- name: Task 1
  shell: echo "Must be run on one node"

- name: Task 2
  shell: echo "Must be run on one node"

- name: Task 3
  shell: echo "This must run on all nodes inside inventory"
I am running these tasks by passing --limit, but "Task 3" must run on all hosts in the inventory. I tried using the below block but it's not executing:
- name: Verify limit is set
  debug:
    msg: "Must use --limit"
  when: ansible_limit is not defined
  run_once: true

- name: Print all nodes
  debug:
    msg: "Running on node {{ item }}"
  with_items: "{{ ansible_play_hosts_all }}"
There are many options. For example, create a dictionary that will control the flow
shell> cat group_vars/all
task_hosts:
  1: [host_1]            # task 1 runs on host_1 only
  2: [host_2]            # task 2 runs on host_2 only
  3: "{{ groups.all }}"  # task 3 runs on all hosts
Given the inventory
shell> cat hosts
host_1
host_2
host_3
The playbook
shell> cat pb.yml
- hosts: all
  gather_facts: false
  tasks:
    - name: Task 1
      debug:
        msg: echo "Must be run on one node"
      when: inventory_hostname in task_hosts.1
    - name: Task 2
      debug:
        msg: echo "Must be run on one node"
      when: inventory_hostname in task_hosts.2
    - name: Task 3
      debug:
        msg: echo "This must run on all nodes inside inventory"
      when: inventory_hostname in task_hosts.3
gives
shell> ansible-playbook pb.yml
PLAY [all] ***********************************************************************************
TASK [Task 1] ********************************************************************************
skipping: [host_2]
ok: [host_1] =>
  msg: echo "Must be run on one node"
skipping: [host_3]
TASK [Task 2] ********************************************************************************
skipping: [host_1]
skipping: [host_3]
ok: [host_2] =>
  msg: echo "Must be run on one node"
TASK [Task 3] ********************************************************************************
ok: [host_1] =>
  msg: echo "This must run on all nodes inside inventory"
ok: [host_2] =>
  msg: echo "This must run on all nodes inside inventory"
ok: [host_3] =>
  msg: echo "This must run on all nodes inside inventory"
PLAY RECAP ***********************************************************************************
host_1: ok=2 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
host_2: ok=2 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
host_3: ok=1 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
Notes:
Add one more level if there are more plays in the playbook
shell> cat group_vars/all
play_task_hosts:
  1:                       # play 1
    1: [host_1]            # task 1 runs on host_1 only
    2: [host_2]            # task 2 runs on host_2 only
    3: "{{ groups.all }}"  # task 3 runs on all hosts
You might want to automate the updates if there are many tasks in many plays in many playbooks
shell> cat update_pb.yml
- hosts: localhost
  vars:
    my_dir: "{{ playbook_dir }}"
    my_pb: pb.yml
    my_flow: play_task_hosts
    pb: "{{ lookup('file', my_pb)|from_yaml }}"
    _tasks: "{{ pb|json_query('[].tasks') }}"
    _update_tasks: |
      {% for play in _tasks %}
      -
      {% set play_index = loop.index %}
      {% for task in play %}
      {% set condition = ["inventory_hostname in ",
                          my_flow, ".",
                          play_index, ".",
                          loop.index]|join() %}
        - {{ task|combine({"when": [condition]}, list_merge='append_rp') }}
      {% endfor %}
      {% endfor %}
    update_tasks: "{{ _update_tasks|from_yaml }}"
    _update_pb: |
      {% for play in pb %}
      - {{ play|combine({"tasks": update_tasks[loop.index0]}) }}
      {% endfor %}
    update_pb: "{{ _update_pb|from_yaml }}"
  tasks:
    - block:
        - debug:
            var: _tasks
        - debug:
            var: update_tasks
        - debug:
            var: update_pb
      when: debug|d(false)|bool
    - copy:
        dest: "{{ my_dir }}/{{ my_pb }}.update"
        content: |
          {{ update_pb|to_nice_yaml }}
Generally, if you have different task lists for different hosts, it begs for different roles. The best practice in Ansible is to let plays control where code is executed.
If a role starts to decide which host should get which tasks, it turns Ansible upside down and makes the role drive the inventory and plays. Don't do it. A role should do what it is told and not make cross-host decisions.
If it's a single task and you don't want to have two roles, you may pass a variable from the inventory/play to the role. It can be either a flag (like doo_foo), configured in the inventory and checked with when: doo_foo, or a delegation:
- name: Do foo
  run_once: true
  do: foo
  delegate_to: '{{ foo_delegation_host }}'
Note: the run_once flag forces Ansible to run the task only once, and delegate_to controls where it runs (do: foo above is a placeholder for the real module). The value of foo_delegation_host should come from the play (as a role parameter or a play variable).

How to calculate ansible_uptime_seconds and output this in os.csv

I am trying to create a CSV file that can be used to review certain system details. One of these items is the system uptime, which is reported in seconds. But in the os.csv output file I would like to see it as days, HH:MM:SS.
Below is my YAML script:
---
- name: playbook query system and output to file
  hosts: OEL7_systems
  vars:
    output_file: os.csv
  tasks:
    - block:
        # For permission setup.
        - name: get current user
          command: whoami
          register: whoami
          run_once: yes

        - name: clean_file
          copy:
            dest: "{{ output_file }}"
            content: 'hostname,distribution,version,release,uptime'
            owner: "{{ whoami.stdout }}"
          run_once: yes

        - name: fill os information
          lineinfile:
            path: "{{ output_file }}"
            line: "{{ ansible_hostname }},\
                  {{ ansible_distribution }},\
                  {{ ansible_distribution_version }},\
                  {{ ansible_distribution_release }},\
                  {{ ansible_uptime_seconds }}"
          # Tries to prevent concurrent writes.
          throttle: 1
      delegate_to: localhost
Any help is appreciated.
I tried several conversions but can't get it to work.
There is actually a (somewhat hard to find) example in the official documentation on complex data manipulations doing exactly what you are looking for (check at the bottom of the page).
Here is a full example playbook to run it on localhost
---
- hosts: localhost
  tasks:
    - name: Show the uptime in days/hours/minutes/seconds
      ansible.builtin.debug:
        msg: Uptime {{ now().replace(microsecond=0) - now().fromtimestamp(now(fmt='%s') | int - ansible_uptime_seconds) }}
which gives on my machine:
PLAY [localhost] ************************************************************************************************************************************************
TASK [Gathering Facts] ******************************************************************************************************************************************
ok: [localhost]
TASK [Show the uptime in days/hours/minutes/seconds] ************************************************************************************************************
ok: [localhost] => {
"msg": "Uptime 1 day, 3:56:34"
}
PLAY RECAP ******************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
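For reference, the conversion the Jinja2 expression performs can be reproduced in plain Python: datetime.timedelta renders a number of seconds in the same "days, HH:MM:SS" form. The uptime value below is an illustrative assumption, not taken from the output above.

```python
# Sketch: render an uptime given in seconds the same way the debug task
# above does. timedelta's str() already yields "N day(s), H:MM:SS".
from datetime import timedelta

ansible_uptime_seconds = 100594  # example value only
print(f"Uptime {timedelta(seconds=ansible_uptime_seconds)}")  # prints "Uptime 1 day, 3:56:34"
```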

Ansible: How to check multiple servers for a text file value, to decide which servers to run the script on?

I am trying to have Ansible check whether a server is passive or active based on the value of a specific file on each server; Ansible will then decide which server to run the next script on.
For example with 2 servers:
Server1
cat /tmp/currentstate
PASSIVE
Server2
cat /tmp/currentstate
ACTIVE
In Ansible:
Trigger the next set of jobs on the server where the output was ACTIVE.
Once those jobs complete, trigger the next set of jobs on the server where the output was PASSIVE.
What I have done so far to grab the state, and output the value to Ansible is
- hosts: "{{ hostname1 | mandatory }}"
  gather_facts: no
  tasks:
    - name: Grab state of first server
      shell: |
        cat {{ ans_script_path }}currentstate.log
      register: state_server1
    - debug:
        msg: "{{ state_server1.stdout }}"

- hosts: "{{ hostname2 | mandatory }}"
  gather_facts: no
  tasks:
    - name: Grab state of second server
      shell: |
        cat {{ ans_script_path }}currentstate.log
      register: state_server2
    - debug:
        msg: "{{ state_server2.stdout }}"
What I have done so far to trigger the script
- hosts: "{{ active_hostname | mandatory }}"
  tasks:
    - name: Run the shutdown on active server first
      shell: sh {{ ans_script_path }}stopstart_terracotta_main.sh shutdown
      register: run_result
    - debug:
        msg: "{{ run_result.stdout }}"

- hosts: "{{ passive_hostname | mandatory }}"
  tasks:
    - name: Run the shutdown on passive server first
      shell: sh {{ ans_script_path }}stopstart_terracotta_main.sh shutdown
      register: run_result
    - debug:
        msg: "{{ run_result.stdout }}"
but I don't know how to set the value of active_hostname & passive_hostname based on the value from the script above.
How can I set the Ansible variable of active_hostname & passive_hostname based on the output of the first section?
A better solution that came to my mind is to add the hosts to new groups according to their state.
This is more flexible when there are more than two hosts.
- hosts: all
  gather_facts: no
  vars:
    ans_script_path: /tmp/
  tasks:
    - name: Grab state of server
      shell: |
        cat {{ ans_script_path }}currentstate.log
      register: server_state
    - add_host:
        hostname: "{{ item }}"
        # every host will be added to a new group according to its state
        groups: "{{ 'active' if hostvars[item].server_state.stdout == 'ACTIVE' else 'passive' }}"
        # Shorter, but the new groups will be in capital letters
        # groups: "{{ hostvars[item].server_state.stdout }}"
      loop: "{{ ansible_play_hosts }}"
      changed_when: false
    - name: show the groups the host(s) are in
      debug:
        msg: "{{ group_names }}"

- hosts: active
  gather_facts: no
  tasks:
    - name: Run the shutdown on active server first
      shell: hostname -f  # changed that for debugging
      register: run_result
    - debug:
        msg: "{{ run_result.stdout }}"

- hosts: passive
  gather_facts: no
  tasks:
    - name: Run the shutdown on passive server first
      shell: hostname -f
      register: run_result
    - debug:
        msg: "{{ run_result.stdout }}"
test-001 is PASSIVE
test-002 is ACTIVE
PLAY [all] ***************************************************************
TASK [Grab state of server] **********************************************
ok: [test-002]
ok: [test-001]
TASK [add_host] **********************************************************
ok: [test-001] => (item=test-001)
ok: [test-001] => (item=test-002)
TASK [show the groups the host(s) are in] ********************************
ok: [test-001] => {
"msg": [
"passive"
]
}
ok: [test-002] => {
"msg": [
"active"
]
}
PLAY [active] *************************************************************
TASK [Run the shutdown on active server first] ****************************
changed: [test-002]
TASK [debug] **************************************************************
ok: [test-002] => {
"msg": "test-002"
}
PLAY [passive] ************************************************************
TASK [Run the shutdown on passive server first] ****************************
changed: [test-001]
TASK [debug] **************************************************************
ok: [test-001] => {
"msg": "test-001"
}
PLAY RECAP ****************************************************************
test-001 : ok=5 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
test-002 : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
For example, given two remote hosts
shell> ssh admin@test_11 cat /tmp/currentstate.log
ACTIVE
shell> ssh admin@test_13 cat /tmp/currentstate.log
PASSIVE
The playbook below reads the files and runs the commands on active and passive servers
shell> cat pb.yml
- hosts: "{{ host1 }},{{ host2 }}"
  gather_facts: false
  vars:
    server_states: "{{ dict(ansible_play_hosts|
                            zip(ansible_play_hosts|
                                map('extract', hostvars, ['server_state', 'stdout'])|
                                list)) }}"
    server_active: "{{ server_states|dict2items|
                       selectattr('value', 'eq', 'ACTIVE')|
                       map(attribute='key')|list }}"
    server_pasive: "{{ server_states|dict2items|
                       selectattr('value', 'eq', 'PASSIVE')|
                       map(attribute='key')|list }}"
  tasks:
    - command: cat /tmp/currentstate.log
      register: server_state
    - debug:
        var: server_state.stdout
    - block:
        - debug:
            var: server_states
        - debug:
            var: server_active
        - debug:
            var: server_pasive
      run_once: true
    - command: echo 'Shutdown active server'
      register: out_active
      delegate_to: "{{ server_active.0 }}"
    - command: echo 'Shutdown passive server'
      register: out_pasive
      delegate_to: "{{ server_pasive.0 }}"
    - debug:
        msg: |
          {{ server_active.0 }}: [{{ out_active.stdout }}] {{ out_active.start }}
          {{ server_pasive.0 }}: [{{ out_pasive.stdout }}] {{ out_pasive.start }}
      run_once: true
shell> ansible-playbook pb.yml -e host1=test_11 -e host2=test_13
PLAY [test_11,test_13] ***********************************************************************
TASK [command] *******************************************************************************
changed: [test_13]
changed: [test_11]
TASK [debug] *********************************************************************************
ok: [test_11] =>
  server_state.stdout: ACTIVE
ok: [test_13] =>
  server_state.stdout: PASSIVE
TASK [debug] *********************************************************************************
ok: [test_11] =>
  server_states:
    test_11: ACTIVE
    test_13: PASSIVE
TASK [debug] *********************************************************************************
ok: [test_11] =>
  server_active:
  - test_11
TASK [debug] *********************************************************************************
ok: [test_11] =>
  server_pasive:
  - test_13
TASK [command] *******************************************************************************
changed: [test_11]
changed: [test_13 -> test_11]
TASK [command] *******************************************************************************
changed: [test_11 -> test_13]
changed: [test_13]
TASK [debug] *********************************************************************************
ok: [test_11] =>
  msg: |-
    test_11: [Shutdown active server] 2022-10-27 11:16:00.766309
    test_13: [Shutdown passive server] 2022-10-27 11:16:02.501907
PLAY RECAP ***********************************************************************************
test_11: ok=8 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
test_13: ok=4 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
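The server_states / server_active / server_pasive construction in the vars above can be modeled in plain Python, which may make the zip / extract / selectattr chain easier to follow. The host names and states are the ones from the example output:

```python
# Sketch of the Jinja2 logic: pair each play host with its registered
# stdout, build a dict, then filter the dict items by state.
play_hosts = ["test_11", "test_13"]   # ansible_play_hosts
stdouts = ["ACTIVE", "PASSIVE"]       # map('extract', hostvars, ['server_state', 'stdout'])

server_states = dict(zip(play_hosts, stdouts))
server_active = [h for h, s in server_states.items() if s == "ACTIVE"]
server_pasive = [h for h, s in server_states.items() if s == "PASSIVE"]

print(server_states)  # {'test_11': 'ACTIVE', 'test_13': 'PASSIVE'}
print(server_active)  # ['test_11']
print(server_pasive)  # ['test_13']
```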
From the description of your use case I understand that you'd like to perform tasks on certain servers which have a service role installed (annot.: Terracotta Server) and which are in a certain service state.
Therefore, I'd like to recommend an approach with custom facts.
Depending on whether you have control over where currentstate.log is placed or how it is structured, you could use, for example, something like
cat /tmp/ansible/service/terracotta.fact
[currentstate]
ACTIVE = true
PASSIVE = false
or add dynamic facts by adding executable scripts to facts.d ...
That means, alternatively, you can add the current service state to your host facts by creating and running a script in facts.d which would just read the content of /tmp/currentstate.log.
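A minimal sketch of such a facts.d script; the path /etc/ansible/facts.d/terracotta.fact and the JSON key are assumptions for illustration. The script must be executable, end in .fact, and print JSON (or INI) on stdout; the state file argument here just makes it easy to try out:

```shell
#!/bin/sh
# Sketch of an executable fact script, e.g. /etc/ansible/facts.d/terracotta.fact.
# It reads the service state file and emits it as JSON, so it appears
# under ansible_local.terracotta.currentstate after fact gathering.

statefile="${1:-/tmp/currentstate.log}"             # allow overriding for a demo
state=$(cat "$statefile" 2>/dev/null || echo "UNKNOWN")
printf '{"currentstate": "%s"}\n' "$state"
```

Run against a file containing ACTIVE, it prints {"currentstate": "ACTIVE"}.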
Then, a sample playbook like
---
- hosts: localhost
  become: false
  gather_facts: true
  fact_path: /tmp/ansible/service
  gather_subset:
    - "!all"
    - "!min"
    - "local"
  tasks:
    - name: Show Gathered Facts
      debug:
        msg: "{{ ansible_facts }}"
      when: ansible_local.terracotta.currentstate.active | bool
will result in an output of
TASK [Show Gathered Facts] ******
ok: [localhost] =>
  msg:
    ansible_local:
      terracotta:
        currentstate:
          active: 'true'
          passive: 'false'
    gather_subset:
    - '!all'
    - '!min'
    - local
    module_setup: true
Another approach is to address how the inventory is built and group the hosts
[terracotta:children]
terracotta_active
terracotta_passive
[terracotta_active]
terracotta1.example.com
[terracotta_passive]
terracotta2.example.com
You can then easily define where a playbook or task should run, simply by targeting hosts and groups
ansible-inventory -i hosts --graph
@all:
  |--@terracotta:
  |  |--@terracotta_active:
  |  |  |--terracotta1.example.com
  |  |--@terracotta_passive:
  |  |  |--terracotta2.example.com
  |--@ungrouped:
ansible-inventory -i hosts terracotta_active --graph
@terracotta_active:
  |--terracotta1.example.com
or Conditionals based on ansible_facts, in example
when: "'terracotta_active' in group_names"
... from my understanding, both would be minimal and simple solutions without re-implementing functionality that already seems to be there.

How to delegate facts to localhost from a play targeting remote hosts

Ansible version: 2.9.16 running on RHEL 7.9, Python 2.7.5, targeting Windows 2016 servers (it should behave the same for Linux target servers too).
EDIT: Switched to using host-specific variables in the inventory to avoid the confusion that I am just trying to print the hostnames of a group. Even here it's a gross simplification. Pretend that var1 is obtained dynamically for each server instead of being declared in the inventory file.
My playbook has two plays. One targets 3 remote servers (note: serial: 0, i.e. concurrently) and the other just the localhost. In play 1 I am trying to delegate facts obtained from each of these hosts to the localhost using delegate_facts and delegate_to. The intent is to have these facts delegated to a single host (localhost) so I can use them later in play 2 (using hostvars), which targets the localhost. But strangely that's not working. It only has information from the last host of play 1.
Any help will be greatly appreciated.
my inventory file inventory/test.ini looks like this:
[my_servers]
svr1 var1='abc'
svr2 var1='xyz'
svr3 var1='pqr'
My Code:
## Play1
- name: Main play that runs against multiple remote servers and builds a list.
  hosts: 'my_servers'  # my inventory group that contains 3 servers svr1,svr2,svr3
  any_errors_fatal: false
  ignore_unreachable: true
  gather_facts: true
  serial: 0
  tasks:
    - name: initialize my_server_list as a list and delegate to localhost
      set_fact:
        my_server_list: []
      delegate_facts: yes
      delegate_to: localhost

    - command: /root/complex_script.sh
      register: result

    - set_fact:
        my_server_list: "{{ my_server_list + hostvars[inventory_hostname]['result.stdout'] }}"
      # run_once: true ## Commented as I need to query the hostvars for each host where this executes.
      delegate_to: localhost
      delegate_facts: true

    - name: "Print list - 1"
      debug:
        msg:
          - "{{ hostvars['localhost']['my_server_list'] | default(['NotFound']) | to_nice_yaml }}"
      # run_once: true

    - name: "Print list - 2"
      debug:
        msg:
          - "{{ my_server_list | default(['NA']) }}"

## Play2
- name: Print my_server_list which was built in Play1
  hosts: localhost
  gather_facts: true
  serial: 0
  tasks:
    - name: "Print my_server_list without hostvars"
      debug:
        msg:
          - "{{ my_server_list | to_nice_json }}"
      # delegate_to: localhost

    - name: "Print my_server_list using hostvars"
      debug:
        msg:
          - "{{ hostvars['localhost']['my_server_list'] | to_nice_yaml }}"
      # delegate_to: localhost
###Output###
$ ansible-playbook -i inventory/test.ini delegate_facts.yml
PLAY [Main playbook that runs against multiple remote servers and builds a list.] ***********************************************************************************************************
TASK [Gathering Facts] **********************************************************************************************************************************************************************
ok: [svr3]
ok: [svr1]
ok: [svr2]
TASK [initialize] ***************************************************************************************************************************************************************************
ok: [svr1]
ok: [svr2]
ok: [svr3]
TASK [Build a list of servers] **************************************************************************************************************************************************************
ok: [svr1]
ok: [svr2]
ok: [svr3]
TASK [Print list - 1] ***********************************************************************************************************************************************************************
ok: [svr1] =>
  msg:
  - |-
    - pqr
ok: [svr2] =>
  msg:
  - |-
    - pqr
ok: [svr3] =>
  msg:
  - |-
    - pqr
TASK [Print list - 2] ***********************************************************************************************************************************************************************
ok: [svr1] =>
  msg:
  - - NA
ok: [svr2] =>
  msg:
  - - NA
ok: [svr3] =>
  msg:
  - - NA
PLAY [Print my_server_list] *****************************************************************************************************************************************************************
TASK [Gathering Facts] **********************************************************************************************************************************************************************
ok: [localhost]
TASK [Print my_server_list without hostvars] ************************************************************************************************************************************************
ok: [localhost] =>
  msg:
  - |-
    [
        "pqr"
    ]
TASK [Print my_server_list using hostvars] **************************************************************************************************************************
ok: [localhost] =>
  msg:
  - |-
    - pqr
PLAY RECAP **********************************************************************************************************************************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
svr1 : ok=5 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
svr2 : ok=5 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
svr3 : ok=5 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Playbook run took 0 days, 0 hours, 0 minutes, 13 seconds
###Expected Output###
I was expecting the last two debug statements in Play2 to contain the values of var1 for all the servers something like this:
TASK [Print my_server_list using hostvars] **************************************************************************************************************************************************
ok: [localhost] =>
  msg:
  - |-
    - abc
    - xyz
    - pqr
Use Special Variables, e.g.
- hosts: all
  gather_facts: false
  tasks:
    - set_fact:
        my_server_list: "{{ ansible_play_hosts_all }}"
      run_once: true
      delegate_to: localhost
      delegate_facts: true

- hosts: localhost
  gather_facts: false
  tasks:
    - debug:
        var: my_server_list
gives
ok: [localhost] =>
  my_server_list:
  - svr1
  - svr2
  - svr3
There are many other ways how to create the list, e.g.
- hosts: all
  gather_facts: false
  tasks:
    - debug:
        msg: "{{ groups.my_servers }}"
      run_once: true

- hosts: all
  gather_facts: false
  tasks:
    - debug:
        msg: "{{ hostvars|json_query('*.inventory_hostname') }}"
      run_once: true
Q: "Fill the list with outputs gathered by running complex commands."
A: Last example above shows how to create a list from hostvars. Register the result from the complex command, e.g.
shell> ssh admin@srv1 cat /root/complex_script.sh
#!/bin/sh
ifconfig wlan0 | grep inet | cut -w -f3
The playbook
- hosts: all
  gather_facts: false
  tasks:
    - command: /root/complex_script.sh
      register: result
    - set_fact:
        my_server_list: "{{ hostvars|json_query('*.result.stdout') }}"
      run_once: true
      delegate_to: localhost
      delegate_facts: true

- hosts: localhost
  gather_facts: false
  tasks:
    - debug:
        var: my_server_list
gives
my_server_list:
- 10.1.0.61
- 10.1.0.62
- 10.1.0.63
Q: "Why the logic of delegating facts to localhost and keep appending them to that list does not work?"
A: The code below (simplified) can't work because the right-hand-side msl value still comes from the hostvars of the inventory_hostname, despite delegate_facts: true. That option merely puts the created variable msl into localhost's hostvars:
- hosts: my_servers
  tasks:
    - set_fact:
        msl: "{{ msl|default([]) + [inventory_hostname] }}"
      delegate_to: localhost
      delegate_facts: true
Quoting from Delegating facts
To assign gathered facts to the delegated host instead of the current host, set delegate_facts to true
As a result of such code, the variable msl will keep the last assigned value only.
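This "last value wins" behavior can be illustrated with a toy Python model of per-host variable evaluation (the dictionaries are stand-ins for Ansible's hostvars, not its real API):

```python
# Each host evaluates the right-hand side against its OWN hostvars,
# where msl is never set, while delegate_facts: true writes the result
# into localhost's hostvars. The append therefore never accumulates.
hostvars = {"svr1": {}, "svr2": {}, "svr3": {}, "localhost": {}}

for host in ["svr1", "svr2", "svr3"]:
    rhs = hostvars[host].get("msl", []) + [host]  # RHS: current host's vars
    hostvars["localhost"]["msl"] = rhs            # delegate_facts: true

print(hostvars["localhost"]["msl"])  # ['svr3'] -- only the last host's value
```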

How to change play_hosts from playbook

I have a file with two playbooks. The inventory is generated dynamically and there is no possibility to change it before starting the playbook.
Run with command:
ansible-playbook -b adapter.yml --limit=host_group
adapter.yml
- name: Prepare stage
hosts: all
# The problem is that the inventory contains hosts in the format "x.x.x.x" ie physical address.
# I need to run a third-party role.
# But, it needs hosts in the format "instance-alias", that is, the name of the instance.
tasks:
# for this I create a new host group
- name: Add host in new format
add_host:
name: "{{ item.alias }}"
host: "{{ item.ansible_host }}"
groups: new_format_hosts
with_items: "{{ groups.all }}"
# I create a new play host group that matches the previous one in a new format
- name: Compose new play_hosts group
add_host:
name: "{{ item.alias }}"
groups: new_play_hosts
when: item.ansible_host in play_hosts
with_items: "{{ groups.all }}"
- name: Management stage
hosts: new_format_hosts
# in this playbook I want to change the composition
# of the target hosts and launch an external role
vars:
hostvars: "{{ hostvars }}"
play_hosts: "{{ groups.new_play_hosts }}" # THIS DONT WORK
- name: Run external role
import_role:
name: role_name
tasks_from: file_name
But I can't change play_hosts so that the launched role uses only the new hosts.
How can I fix this?
This works for me:
- name: Prepare stage
  hosts: localhost
  tasks:
    - name: Show hostvars
      debug:
        msg: "{{ hostvars[item]['ansible_host'] }}"
      with_items: "{{ groups.all }}"

    # for this I create a new host group
    - name: Add host in new format
      add_host:
        name: "{{ hostvars[item].alias }}"
        ansible_host: "{{ hostvars[item].ansible_host }}"
        groups: new_format_hosts
      with_items: "{{ groups.all }}"

- name: Management stage
  hosts: new_format_hosts
  tasks:
    - name: Ping New Format Hosts
      ping:

    - name: Show ansible_host for each host
      debug:
        var: ansible_host

    - name: Show playhosts
      debug:
        var: play_hosts
      delegate_to: localhost
      run_once: yes
This assumes, of course, that alias and ansible_host are set for all the hosts.
The hosts were:
AnsibleTower ansible_host=192.168.124.8 alias=fred
192.168.124.111 ansible_host=192.168.124.111 alias=barney
jaxsatB ansible_host=192.168.124.111 alias=wilma
The relevant output of the playbook was:
PLAY [Management stage] *****************************************************************************************************************************************************************
TASK [Gathering Facts] ******************************************************************************************************************************************************************
Friday 29 May 2020 18:09:26 -0400 (0:00:00.057) 0:00:01.332 ************
ok: [fred]
ok: [barney]
ok: [wilma]
TASK [New Format Hosts] *****************************************************************************************************************************************************************
Friday 29 May 2020 18:09:30 -0400 (0:00:04.308) 0:00:05.641 ************
ok: [barney]
ok: [wilma]
ok: [fred]
TASK [Show ansible_host] ****************************************************************************************************************************************************************
Friday 29 May 2020 18:09:30 -0400 (0:00:00.450) 0:00:06.091 ************
ok: [fred] => {
"ansible_host": "192.168.124.8"
}
ok: [barney] => {
"ansible_host": "192.168.124.111"
}
ok: [wilma] => {
"ansible_host": "192.168.124.111"
}
TASK [Show playhosts] *******************************************************************************************************************************************************************
Friday 29 May 2020 18:09:30 -0400 (0:00:00.109) 0:00:06.200 ************
ok: [fred -> localhost] => {
"play_hosts": [
"fred",
"barney",
"wilma"
]
}
PLAY RECAP ******************************************************************************************************************************************************************************
barney : ok=3 changed=0 unreachable=0 failed=0
fred : ok=4 changed=0 unreachable=0 failed=0
localhost : ok=3 changed=1 unreachable=0 failed=0
wilma : ok=3 changed=0 unreachable=0 failed=0
Friday 29 May 2020 18:09:30 -0400 (0:00:00.030) 0:00:06.231 ************
===============================================================================
You do not need to set the play_hosts variable. That is set by the line hosts: new_format_hosts in the second play.
