Control Ansible play execution on inventory hosts

I need assistance making one particular task in a playbook run on all hosts.
I have the below tasks inside a role:
---
- name: Task 1
  shell: echo "Must be run on one node"

- name: Task 2
  shell: echo "Must be run on one node"

- name: Task 3
  shell: echo "This must run on all nodes inside inventory"
I am running this playbook with --limit, but "Task 3" must run on all hosts in the inventory. I tried using the block below, but it's not executing:
- name: Verify limit is set
  debug:
    msg: "Must use --limit"
  when: ansible_limit is not defined
  run_once: true

- name: Print all nodes
  debug:
    msg: "Running on node {{ item }}"
  with_items: "{{ ansible_play_hosts_all }}"

There are many options. For example, create a dictionary that will control the flow
shell> cat group_vars/all
task_hosts:
  1: [host_1]             # task 1 runs on host_1 only
  2: [host_2]             # task 2 runs on host_2 only
  3: "{{ groups.all }}"   # task 3 runs on all hosts
Given the inventory
shell> cat hosts
host_1
host_2
host_3
The playbook
shell> cat pb.yml
- hosts: all
  gather_facts: false
  tasks:
    - name: Task 1
      debug:
        msg: echo "Must be run on one node"
      when: inventory_hostname in task_hosts.1
    - name: Task 2
      debug:
        msg: echo "Must be run on one node"
      when: inventory_hostname in task_hosts.2
    - name: Task 3
      debug:
        msg: echo "This must run on all nodes inside inventory"
      when: inventory_hostname in task_hosts.3
gives
shell> ansible-playbook pb.yml
PLAY [all] ***********************************************************************************
TASK [Task 1] ********************************************************************************
skipping: [host_2]
ok: [host_1] =>
  msg: echo "Must be run on one node"
skipping: [host_3]
TASK [Task 2] ********************************************************************************
skipping: [host_1]
skipping: [host_3]
ok: [host_2] =>
  msg: echo "Must be run on one node"
TASK [Task 3] ********************************************************************************
ok: [host_1] =>
  msg: echo "This must run on all nodes inside inventory"
ok: [host_2] =>
  msg: echo "This must run on all nodes inside inventory"
ok: [host_3] =>
  msg: echo "This must run on all nodes inside inventory"
PLAY RECAP ***********************************************************************************
host_1: ok=2 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
host_2: ok=2 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
host_3: ok=1 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
Notes:
Add one more level if there are more plays in the playbook
shell> cat group_vars/all
play_task_hosts:
  1:                        # play 1
    1: [host_1]             # task 1 runs on host_1 only
    2: [host_2]             # task 2 runs on host_2 only
    3: "{{ groups.all }}"   # task 3 runs on all hosts
You might want to automate the updates if there are many tasks in many plays in many playbooks
shell> cat update_pb.yml
- hosts: localhost
  vars:
    my_dir: "{{ playbook_dir }}"
    my_pb: pb.yml
    my_flow: play_task_hosts
    pb: "{{ lookup('file', my_pb)|from_yaml }}"
    _tasks: "{{ pb|json_query('[].tasks') }}"
    _update_tasks: |
      {% for play in _tasks %}
      -
      {% set play_index = loop.index %}
      {% for task in play %}
      {% set condition = ["inventory_hostname in ",
                          my_flow, ".",
                          play_index, ".",
                          loop.index]|join() %}
        - {{ task|combine({"when": [condition]}, list_merge='append_rp') }}
      {% endfor %}
      {% endfor %}
    update_tasks: "{{ _update_tasks|from_yaml }}"
    _update_pb: |
      {% for play in pb %}
      - {{ play|combine({"tasks": update_tasks[loop.index0]}) }}
      {% endfor %}
    update_pb: "{{ _update_pb|from_yaml }}"
  tasks:
    - block:
        - debug:
            var: _tasks
        - debug:
            var: update_tasks
        - debug:
            var: update_pb
      when: debug|d(false)|bool
    - copy:
        dest: "{{ my_dir }}/{{ my_pb }}.update"
        content: |
          {{ update_pb|to_nice_yaml }}
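With the defaults above, running the playbook should leave pb.yml untouched and write the rewritten copy to pb.yml.update in the playbook directory; setting debug=true also displays the intermediate data structures
shell> ansible-playbook update_pb.yml -e debug=true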

Generally, if you have different task lists for different hosts, it begs for different roles. Basically, the best practice in Ansible is to let plays control where code is executed.
If a role starts to decide which hosts get which tasks, it turns Ansible upside down and makes the role wag the inventory and plays. Don't do it. A role should do what it is told and not make cross-host decisions.
If it's a single task and you don't want two roles, you may pass a variable from the inventory/play into the role. It can either be a flag (like do_foo) configured in the inventory and checked with when: do_foo, or a delegation:
- name: Do foo
  run_once: true
  do: foo
  delegate_to: '{{ foo_delegation_host }}'
Note: the run_once flag forces Ansible to run the task only once, and delegate_to controls where it actually runs. The value of foo_delegation_host should come from the play (as a role parameter or a play variable).
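For illustration, here is a minimal sketch of that wiring; the role name, file layout, and the echo stand-in are hypothetical:
shell> cat site.yml
# The play decides which host gets the one-off task and passes it to the role
- hosts: all
  roles:
    - role: my_role                                   # hypothetical role name
      vars:
        foo_delegation_host: "{{ groups['all'] | first }}"
shell> cat roles/my_role/tasks/main.yml
# Inside the role: run once, on whatever host the play chose
- name: Do foo
  command: echo "doing foo"                           # stands in for the real module
  run_once: true
  delegate_to: "{{ foo_delegation_host }}"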

Related

How to calculate ansible_uptime_seconds and output this in os.csv

I am trying to create a csv file that can be used to review certain system details. One of these items is the system uptime, which is reported in seconds. But in the os.csv output file I would like to see it as days, HH:MM:SS.
Below is my YAML script:
---
- name: playbook query system and output to file
  hosts: OEL7_systems
  vars:
    output_file: os.csv
  tasks:
    - block:
        # For permission setup.
        - name: get current user
          command: whoami
          register: whoami
          run_once: yes

        - name: clean_file
          copy:
            dest: "{{ output_file }}"
            content: 'hostname,distribution,version,release,uptime'
            owner: "{{ whoami.stdout }}"
          run_once: yes

        - name: fill os information
          lineinfile:
            path: "{{ output_file }}"
            line: "{{ ansible_hostname }},\
                  {{ ansible_distribution }},\
                  {{ ansible_distribution_version }},\
                  {{ ansible_distribution_release }},\
                  {{ ansible_uptime_seconds }}"
          # Tries to prevent concurrent writes.
          throttle: 1
      delegate_to: localhost
Any help is appreciated. I tried several conversions but can't get it to work.
There is actually a (somewhat hard to find) example in the official documentation on complex data manipulations doing exactly what you are looking for (check at the bottom of the page).
Here is a full example playbook to run it on localhost
---
- hosts: localhost
  tasks:
    - name: Show the uptime in days/hours/minutes/seconds
      ansible.builtin.debug:
        msg: Uptime {{ now().replace(microsecond=0) - now().fromtimestamp(now(fmt='%s') | int - ansible_uptime_seconds) }}
which gives on my machine:
PLAY [localhost] ************************************************************************************************************************************************
TASK [Gathering Facts] ******************************************************************************************************************************************
ok: [localhost]
TASK [Show the uptime in days/hours/minutes/seconds] ************************************************************************************************************
ok: [localhost] => {
    "msg": "Uptime 1 day, 3:56:34"
}
PLAY RECAP ******************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
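If you prefer plain arithmetic over the now() trick, an equivalent sketch using only standard Jinja2 filters (the task name is mine) would be:
- name: Show the uptime as days, HH:MM:SS (arithmetic variant)
  ansible.builtin.debug:
    msg: >-
      Uptime {{ (ansible_uptime_seconds | int) // 86400 }} days,
      {{ '%02d:%02d:%02d' | format((ansible_uptime_seconds | int) % 86400 // 3600,
                                   (ansible_uptime_seconds | int) % 3600 // 60,
                                   (ansible_uptime_seconds | int) % 60) }}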

Ansible: How to check multiple servers for a text file value, to decide which servers to run the script on?

I am trying to have Ansible check whether a server is passive or active based on the value of a specific file on each server; Ansible should then decide which server to run the next script on.
For example with 2 servers:
Server1
cat /tmp/currentstate
PASSIVE
Server2
cat /tmp/currentstate
ACTIVE
In Ansible:
Trigger the next set of jobs on the server where the output was ACTIVE.
Once those jobs complete, trigger the next set of jobs on the server where the output was PASSIVE.
What I have done so far to grab the state and output the value to Ansible:
- hosts: "{{ hostname1 | mandatory }}"
gather_facts: no
tasks:
- name: Grab state of first server
shell: |
cat {{ ans_script_path }}currentstate.log
register: state_server1
- debug:
msg: "{{ state_server1.stdout }}"
- hosts: "{{ hostname2 | mandatory }}"
gather_facts: no
tasks:
- name: Grab state of second server
shell: |
cat {{ ans_script_path }}currentstate.log
register: state_server2
- debug:
msg: "{{ state_server2.stdout }}"
What I have done so far to trigger the script
- hosts: "{{ active_hostname | mandatory }}"
tasks:
- name: Run the shutdown on active server first
shell: sh {{ ans_script_path }}stopstart_terracotta_main.sh shutdown
register: run_result
- debug:
msg: "{{ run_result.stdout }}"
- hosts: "{{ passive_hostname | mandatory }}"
tasks:
- name: Run the shutdown on passive server first
shell: sh {{ ans_script_path }}stopstart_terracotta_main.sh shutdown
register: run_result
- debug:
msg: "{{ run_result.stdout }}"
but I don't know how to set the value of active_hostname and passive_hostname based on the value from the script above.
How can I set the Ansible variables active_hostname and passive_hostname based on the output of the first section?
A better solution that came to my mind is to add the hosts to new groups according to their state.
This is also more suitable when there are more than two hosts.
- hosts: all
  gather_facts: no
  vars:
    ans_script_path: /tmp/
  tasks:
    - name: Grab state of server
      shell: |
        cat {{ ans_script_path }}currentstate.log
      register: server_state

    - add_host:
        hostname: "{{ item }}"
        # every host will be added to a new group according to its state
        groups: "{{ 'active' if hostvars[item].server_state.stdout == 'ACTIVE' else 'passive' }}"
        # Shorter, but the new groups will be in capital letters
        # groups: "{{ hostvars[item].server_state.stdout }}"
      loop: "{{ ansible_play_hosts }}"
      changed_when: false

    - name: show the groups the host(s) are in
      debug:
        msg: "{{ group_names }}"

- hosts: active
  gather_facts: no
  tasks:
    - name: Run the shutdown on active server first
      shell: hostname -f # changed that for debugging
      register: run_result
    - debug:
        msg: "{{ run_result.stdout }}"

- hosts: passive
  gather_facts: no
  tasks:
    - name: Run the shutdown on passive server first
      shell: hostname -f
      register: run_result
    - debug:
        msg: "{{ run_result.stdout }}"
test-001 is PASSIVE
test-002 is ACTIVE
PLAY [all] ***************************************************************
TASK [Grab state of server] **********************************************
ok: [test-002]
ok: [test-001]
TASK [add_host] **********************************************************
ok: [test-001] => (item=test-001)
ok: [test-001] => (item=test-002)
TASK [show the groups the host(s) are in] ********************************
ok: [test-001] => {
    "msg": [
        "passive"
    ]
}
ok: [test-002] => {
    "msg": [
        "active"
    ]
}
PLAY [active] *************************************************************
TASK [Run the shutdown on active server first] ****************************
changed: [test-002]
TASK [debug] **************************************************************
ok: [test-002] => {
    "msg": "test-002"
}
PLAY [passive] ************************************************************
TASK [Run the shutdown on passive server first] ****************************
changed: [test-001]
TASK [debug] **************************************************************
ok: [test-001] => {
    "msg": "test-001"
}
PLAY RECAP ****************************************************************
test-001 : ok=5 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
test-002 : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
For example, given two remote hosts
shell> ssh admin@test_11 cat /tmp/currentstate.log
ACTIVE
shell> ssh admin@test_13 cat /tmp/currentstate.log
PASSIVE
The playbook below reads the files and runs the commands on active and passive servers
shell> cat pb.yml
- hosts: "{{ host1 }},{{ host2 }}"
  gather_facts: false
  vars:
    server_states: "{{ dict(ansible_play_hosts|
                            zip(ansible_play_hosts|
                                map('extract', hostvars, ['server_state', 'stdout'])|
                                list)) }}"
    server_active: "{{ server_states|dict2items|
                       selectattr('value', 'eq', 'ACTIVE')|
                       map(attribute='key')|list }}"
    server_passive: "{{ server_states|dict2items|
                        selectattr('value', 'eq', 'PASSIVE')|
                        map(attribute='key')|list }}"
  tasks:
    - command: cat /tmp/currentstate.log
      register: server_state

    - debug:
        var: server_state.stdout

    - block:
        - debug:
            var: server_states
        - debug:
            var: server_active
        - debug:
            var: server_passive
      run_once: true

    - command: echo 'Shutdown active server'
      register: out_active
      delegate_to: "{{ server_active.0 }}"

    - command: echo 'Shutdown passive server'
      register: out_passive
      delegate_to: "{{ server_passive.0 }}"

    - debug:
        msg: |
          {{ server_active.0 }}: [{{ out_active.stdout }}] {{ out_active.start }}
          {{ server_passive.0 }}: [{{ out_passive.stdout }}] {{ out_passive.start }}
      run_once: true
shell> ansible-playbook pb.yml -e host1=test_11 -e host2=test_13
PLAY [test_11,test_13] ***********************************************************************
TASK [command] *******************************************************************************
changed: [test_13]
changed: [test_11]
TASK [debug] *********************************************************************************
ok: [test_11] =>
  server_state.stdout: ACTIVE
ok: [test_13] =>
  server_state.stdout: PASSIVE
TASK [debug] *********************************************************************************
ok: [test_11] =>
  server_states:
    test_11: ACTIVE
    test_13: PASSIVE
TASK [debug] *********************************************************************************
ok: [test_11] =>
  server_active:
  - test_11
TASK [debug] *********************************************************************************
ok: [test_11] =>
  server_passive:
  - test_13
TASK [command] *******************************************************************************
changed: [test_11]
changed: [test_13 -> test_11]
TASK [command] *******************************************************************************
changed: [test_11 -> test_13]
changed: [test_13]
TASK [debug] *********************************************************************************
ok: [test_11] =>
  msg: |-
    test_11: [Shutdown active server] 2022-10-27 11:16:00.766309
    test_13: [Shutdown passive server] 2022-10-27 11:16:02.501907
PLAY RECAP ***********************************************************************************
test_11: ok=8 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
test_13: ok=4 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
From the description of your use case, I understand that you'd like to perform tasks on servers which have a certain service role installed (annot.: Terracotta Server), based on a certain service state.
Therefore, I'd like to recommend an approach with Custom facts.
Depending on whether you have control over where currentstate.log is placed and how it is structured, you could for example use something like
cat /tmp/ansible/service/terracotta.fact
[currentstate]
ACTIVE = true
PASSIVE = false
or add dynamic facts by adding executable scripts to facts.d.
That means you can alternatively add the current service state to your host facts by creating and running a script in facts.d which just reads the content of /tmp/currentstate.log.
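A minimal sketch of deploying such a dynamic fact; the file name and JSON shape are my assumptions, and note that an executable .fact must print JSON on stdout, so the resulting fact would be the raw ACTIVE/PASSIVE string rather than the two booleans shown above:
- name: Deploy a dynamic fact script reporting the current service state
  copy:
    dest: /tmp/ansible/service/terracotta.fact       # matches fact_path below
    mode: "0755"
    content: |
      #!/bin/sh
      # Executable facts must emit JSON on stdout
      echo "{\"currentstate\": \"$(cat /tmp/currentstate.log)\"}"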
Then, a sample playbook like
---
- hosts: localhost
  become: false
  gather_facts: true
  fact_path: /tmp/ansible/service
  gather_subset:
    - "!all"
    - "!min"
    - "local"
  tasks:
    - name: Show Gathered Facts
      debug:
        msg: "{{ ansible_facts }}"
      when: ansible_local.terracotta.currentstate.active | bool
will result in an output of
TASK [Show Gathered Facts] ******
ok: [localhost] =>
  msg:
    ansible_local:
      terracotta:
        currentstate:
          active: 'true'
          passive: 'false'
    gather_subset:
    - '!all'
    - '!min'
    - local
    module_setup: true
Another approach is to address how the inventory is built and to group the hosts
[terracotta:children]
terracotta_active
terracotta_passive
[terracotta_active]
terracotta1.example.com
[terracotta_passive]
terracotta2.example.com
You can then easily and simply define where a playbook or task should run, just by Targeting hosts and groups
ansible-inventory -i hosts --graph
@all:
  |--@terracotta:
  |  |--@terracotta_active:
  |  |  |--terracotta1.example.com
  |  |--@terracotta_passive:
  |  |  |--terracotta2.example.com
  |--@ungrouped:
ansible-inventory -i hosts terracotta_active --graph
@terracotta_active:
  |--terracotta1.example.com
or Conditionals based on ansible_facts, for example
when: "'terracotta_active' in group_names"
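For instance, a minimal task guarded by that condition could look like this; the echo is a placeholder for the real shutdown command:
- name: Run the shutdown on the active server first
  command: echo 'Shutdown active server'
  when: "'terracotta_active' in group_names"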
... from my understanding, both would be minimal and simple solutions that avoid re-implementing functionality which already seems to be there.

Display Ansible playbook with lookups interpolated

I have an Ansible playbook that looks, in part, like this:
...
environment:
  F2B_DB_PURGE_AGE: "{{ lookup('env','F2B_DB_PURGE_AGE') }}"
  F2B_LOG_LEVEL: "{{ lookup('env','F2B_LOG_LEVEL') }}"
  SSMTP_HOST: "{{ lookup('env','SSMTP_HOST') }}"
  SSMTP_PORT: "{{ lookup('env','SSMTP_PORT') }}"
  SSMTP_TLS: "{{ lookup('env','SSMTP_TLS') }}"
...
Is there any way to run ansible-playbook so that it will show the results of the YAML file after replacing the lookups with their values? That is, I would like to be able to run something like ansible-playbook file.yaml --dry-run and see on standard output (assuming the environment variables were set appropriately):
...
environment:
  F2B_DB_PURGE_AGE: "20"
  F2B_LOG_LEVEL: "debug"
  SSMTP_HOST: "smtp.example.com"
  SSMTP_PORT: "487"
  SSMTP_TLS: "true"
...
Set the environment for testing
shell> cat env.sh
#!/usr/bin/bash
export F2B_DB_PURGE_AGE="20"
export F2B_LOG_LEVEL="debug"
export SSMTP_HOST="smtp.example.com"
export SSMTP_PORT="487"
export SSMTP_TLS="true"
shell> source env.sh
Given the inventory
shell> cat hosts
localhost ansible_connection=local
Q: "Run something like ansible-playbook file.yaml --dry-run and see environment"
A: The below playbook does the job
shell> cat file.yml
- hosts: all
  vars:
    my_environment:
      F2B_DB_PURGE_AGE: "{{ lookup('env','F2B_DB_PURGE_AGE') }}"
      F2B_LOG_LEVEL: "{{ lookup('env','F2B_LOG_LEVEL') }}"
      SSMTP_HOST: "{{ lookup('env','SSMTP_HOST') }}"
      SSMTP_PORT: "{{ lookup('env','SSMTP_PORT') }}"
      SSMTP_TLS: "{{ lookup('env','SSMTP_TLS') }}"
  tasks:
    - block:
        - debug:
            msg: |
              my_environment:
              {{ my_environment|to_nice_yaml|indent(2) }}
        - meta: end_play
      when: dry_run|d(false)|bool
    - debug:
        msg: Continue ...
Set dry_run=true
shell> ansible-playbook file.yml -e dry_run=true
PLAY [all] ***********************************************************************************
TASK [debug] *********************************************************************************
ok: [localhost] =>
  msg: |-
    my_environment:
      F2B_DB_PURGE_AGE: '20'
      F2B_LOG_LEVEL: debug
      SSMTP_HOST: smtp.example.com
      SSMTP_PORT: '487'
      SSMTP_TLS: 'true'
TASK [meta] **********************************************************************************
PLAY RECAP ***********************************************************************************
localhost: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
By default, the playbook will execute tasks
shell> ansible-playbook file.yml
PLAY [all] ***********************************************************************************
TASK [debug] *********************************************************************************
skipping: [localhost]
TASK [meta] **********************************************************************************
skipping: [localhost]
TASK [debug] *********************************************************************************
ok: [localhost] =>
  msg: Continue ...
PLAY RECAP ***********************************************************************************
localhost: ok=1 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
Optionally, let the playbook gather facts and use the dictionary ansible_env. Use the filter ansible.utils.keep_keys to select your variables
- hosts: all
  gather_facts: true
  vars:
    my_environment_vars:
      - F2B_DB_PURGE_AGE
      - F2B_LOG_LEVEL
      - SSMTP_HOST
      - SSMTP_PORT
      - SSMTP_TLS
    my_environment: "{{ ansible_env|
                        ansible.utils.keep_keys(target=my_environment_vars) }}"
  tasks:
    - block:
        - debug:
            msg: |
              my_environment:
              {{ my_environment|to_nice_yaml|indent(2) }}
        - meta: end_play
      when: dry_run|d(false)|bool
    - debug:
        msg: Continue ...

Ansible, how to set a global fact using roles?

I'm trying to use Ansible to deploy a small k3s cluster with just two server nodes at the moment. Deploying the first server node, which I refer to as "master" is easy to set up with Ansible. However, setting up the second server node, which I refer to as "node" is giving me a challenge because I need to pull the value of the node-token from the master and use it to call the k3s install command on the "node" vm.
I'm using Ansible roles, and this is what my playbook looks like:
- hosts: all
  roles:
    - { role: k3sInstall, when: 'server_type is defined' }
    - { role: k3sUnInstall, when: 'server_type is defined' }
This is my main.yml file from the k3sInstall role directory:
- name: Install k3s Server
  import_tasks: k3s_install_server.yml
  tags:
    - k3s_install
This is my k3s_install_server.yml:
---
- name: Install k3s Cluster
  block:
    - name: Install k3s Master Server
      become: yes
      shell: "{{ k3s_master_install_cmd }}"
      when: server_role == "master"

    - name: Get Node-Token file from master server.
      become: yes
      shell: cat {{ node_token_filepath }}
      when: server_role == "master"
      register: nodetoken

    - name: Print Node-Token
      when: server_role == "master"
      debug:
        msg: "{{ nodetoken.stdout }}"
        # msg: "{{ k3s_node_install_cmd }}"

    - name: Set Node-Token fact
      when: server_role == "master"
      set_fact:
        nodeToken: "{{ nodetoken.stdout }}"

    - name: Print Node-Token fact
      when: server_role == "node" or server_role == "master"
      debug:
        msg: "{{ nodeToken }}"

    # - name: Install k3s Node Server
    #   become: yes
    #   shell: "{{ k3s_node_install_cmd }}{{ nodeToken }}"
    #   when: server_role == "node"
I've commented out the Install k3s Node Server task because I'm not able to properly reference the nodeToken variable that I'm setting when server_role == "master".
This is the output of the debug:
TASK [k3sInstall : Print Node-Token fact] ***************************************************************************************************************************************************************************************************************************************************************************
ok: [server1] => {
    "msg": "K10cf129cfedafcb083655a1780e4be994621086f780a66d9720e77163d36147051::server:aa2837148e402f675604a56602a5bbf8"
}
ok: [server2] => {
    "msg": ""
}
My host file:
[p6dualstackservers]
server1 ansible_ssh_host=10.63.60.220
server2 ansible_ssh_host=10.63.60.221
And I have the following host_vars files assigned:
server1.yml:
server_role: master
server2.yml:
server_role: node
I've tried assigning the nodeToken variable inside of k3sInstall/vars/main.yml as well as one level up from the k3sInstall role inside group_vars/all.yml but that didn't help.
I tried searching for a way to use block-level variables but couldn't find anything.
If you set the variable for master only it's not available for other hosts, e.g.
- hosts: master,node
  tasks:
    - set_fact:
        nodeToken: K10cf129cfedaf
      when: inventory_hostname == 'master'
    - debug:
        var: nodeToken
gives
ok: [master] =>
  nodeToken: K10cf129cfedaf
ok: [node] =>
  nodeToken: VARIABLE IS NOT DEFINED!
If you want to "apply all results and facts to all the hosts in the same batch" use run_once: true, e.g.
- hosts: master,node
  tasks:
    - set_fact:
        nodeToken: K10cf129cfedaf
      when: inventory_hostname == 'master'
      run_once: true
    - debug:
        var: nodeToken
gives
ok: [master] =>
  nodeToken: K10cf129cfedaf
ok: [node] =>
  nodeToken: K10cf129cfedaf
In your case, add 'run_once: true' to the task
- name: Set Node-Token fact
  set_fact:
    nodeToken: "{{ nodetoken.stdout }}"
  when: server_role == "master"
  run_once: true
The above code works because the condition when: server_role == "master" is applied before run_once: true. Quoting from run_once
"Boolean that will bypass the host loop, forcing the task to attempt to execute on the first host available and afterward apply any results and facts to all active hosts in the same batch."
Safer code would be to add a standalone set_fact instead of relying on the order in which the when: condition and run_once are applied, e.g.
- set_fact:
    nodeToken: "{{ nodetoken.stdout }}"
  when: inventory_hostname == 'master'

- set_fact:
    nodeToken: "{{ hostvars['master'].nodeToken }}"
  run_once: true
Using when in this use case is probably not the best fit; you would be better off delegating some tasks to the so-called master server.
What you can do to define which server is the master, based on your inventory variable, is to delegate a fact to localhost, for example.
Then, to get the token from your file on the master server, you can delegate this task and its fact to that server only.
Given the playbook:
- hosts: all
  gather_facts: no
  tasks:
    - set_fact:
        master_node: "{{ inventory_hostname }}"
      when: server_role == 'master'
      delegate_to: localhost
      delegate_facts: true

    - set_fact:
        token: 12345678
      run_once: true
      delegate_to: "{{ hostvars.localhost.master_node }}"
      delegate_facts: true

    - debug:
        var: hostvars[hostvars.localhost.master_node].token
      when: server_role != 'master'
This yields the expected:
PLAY [all] ********************************************************************************************************
TASK [set_fact] ***************************************************************************************************
skipping: [node1]
ok: [node2 -> localhost]
skipping: [node3]
TASK [set_fact] ***************************************************************************************************
ok: [node1 -> node2]
TASK [debug] ******************************************************************************************************
skipping: [node2]
ok: [node1] =>
  hostvars[hostvars.localhost.master_node].token: '12345678'
ok: [node3] =>
  hostvars[hostvars.localhost.master_node].token: '12345678'
PLAY RECAP ********************************************************************************************************
node1 : ok=2 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
node2 : ok=1 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
node3 : ok=1 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0

Is there a way to show registered handlers or listeners in Ansible?

I'm trying to debug a "handler...was not found in either the main handlers list nor in the listening handlers list" issue.
There are --list-hosts, --list-tasks, and --list-tags options, but nothing for listing registered handlers. I've run ansible-playbook with the "debug" strategy and with -vvvv, but neither of those seems to provide any insight. I don't see any "magic" variables that might contain this information.
Is there any way to show/dump these handlers and/or listeners?
Q: "Is there any way to show/dump these handlers and/or listeners?"
A: No. In ansible-playbook, there is no such option (similar to --list-hosts, --list-tasks, ...) to show/dump handlers and/or listeners.
Note: You can write a playbook and list the handlers on your own. For example, the playbook below reads the playbook file from the variable pb_file and prints the list of handlers in each play
shell> cat list-handlers.yml
- name: List handlers
  hosts: localhost
  vars:
    pb_file: "{{ pb_file|default('playbook.yml') }}"
    pb_dict: "{{ lookup('file', pb_file)|from_yaml }}"
    pb_handlers: "{{ dict(pb_dict|json_query('[].[name,handlers[].name]')) }}"
  tasks:
    - debug:
        msg: |
          playbook: {{ pb_file }}
          {% for play,handlers in pb_handlers.items() %}
          play {{ '#' }}{{ loop.index }} {{ play }}
            HANDLERS: {{ handlers|d([], true)|join(', ') }}
          {% endfor %}
      when: pb_file is exists
Given the playbook below for testing
shell> cat pb.yml
- name: Play1 test handlers
  hosts: target1,target2
  tasks:
    - debug:
        msg: Notify handler1
      changed_when: true
      notify: handler1
    - debug:
        msg: Notify handler2
      changed_when: true
      notify: handler2
  handlers:
    - name: handler1
      debug:
        msg: Run handler1
    - name: handler2
      debug:
        msg: Run handler2

- name: Play2 test handlers
  hosts: target1,target2
  tasks:
    - debug:
        msg: Play2
List the handlers
shell> ansible-playbook list-handlers.yml -e pb_file=pb.yml
PLAY [List handlers] *****************************************************************************************
TASK [debug] *************************************************************************************************
ok: [localhost] =>
  msg: |2-
    playbook: pb.yml
    play #1 Play1 test handlers
      HANDLERS: handler1, handler2
    play #2 Play2 test handlers
      HANDLERS:
PLAY RECAP ***************************************************************************************************
localhost: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
