Run an Ansible handler only once for the entire playbook

I would like to run a handler only once in an entire playbook.
I attempted to use an include statement in the playbook file, as shown below, but this resulted in the handler being run multiple times, once for each play:
- name: Configure common config
  hosts: all
  become: true
  vars:
    OE: "{{ ansible_hostname[5] }}"
  roles:
    - { role: common }
  handlers:
    - include: handlers/main.yml

- name: Configure metadata config
  hosts: metadata
  become: true
  vars:
    OE: "{{ ansible_hostname[5] }}"
  roles:
    - { role: metadata }
  handlers:
    - include: handlers/main.yml
Here is the content of handlers/main.yml:
- name: restart autofs
  service:
    name: autofs.service
    state: restarted
Here is an example of one of the tasks that notifies the handler:
- name: Configure automount - /opt/local/xxx in /etc/auto.direct
  lineinfile:
    dest: /etc/auto.direct
    regexp: "^/opt/local/xxx"
    line: "/opt/local/xxx -acdirmin=0,acdirmax=0,rdirplus,rw,hard,intr,bg,retry=2 nfs_server:/vol/xxx"
  notify: restart autofs
How can I get the playbook to only execute the handler once for the entire playbook?

The answer
The literal answer to the question in the title is: no.
A playbook is a list of plays. A playbook has no namespace, no variables, and no state; all the configuration, logic, and tasks are defined in plays.
A handler is a task with a different calling schedule: it runs not sequentially, but conditionally, once at the end of a play, or when triggered by the meta: flush_handlers task.
A handler belongs to a play, not a playbook, and there is no way to trigger it outside of the play (i.e. at the end of the playbook).
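For illustration, a minimal sketch of that in-play scheduling (the task and service names here are hypothetical): a notified handler runs either at the end of its play or wherever meta: flush_handlers appears.

- hosts: all
  tasks:
    - name: Change a config file (notifies the handler on change)
      template:
        src: app.conf.j2
        dest: /etc/app/app.conf
      notify: restart app

    - name: Run any notified handlers right now, mid-play
      meta: flush_handlers

  handlers:
    - name: restart app
      service:
        name: app
        state: restarted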
Solution
The problem can be solved without using handlers at all.
You can use the group_by module at the bottom of each play to create an ad hoc group based on the results of the preceding tasks.
Then you can define a separate play at the end of the playbook that restarts the service on the targets belonging to that ad hoc group.
Refer to the stub below for the idea:
- hosts: all
  roles:
    # roles declaration
  tasks:
    - # an example task modifying Nginx configuration
      register: nginx_configuration

    # ... other tasks ...

    - name: the last task in the play
      group_by:
        key: hosts_to_restart_{{ 'nginx' if nginx_configuration is changed else '' }}

# ... other plays ...

- hosts: hosts_to_restart_nginx
  gather_facts: no
  tasks:
    - service:
        name: nginx
        state: restarted

Possible solution
Use handlers to add hosts to the in-memory inventory, then add a play at the end that restarts the service only on those hosts.
See this example:
If a task reports changed, it notifies the mark to restart handler, which sets a fact recording that the host needs a service restart.
The second handler, add host, is quite special, because the add_host task runs only once for the whole play, even in a handler (see also the documentation). But if notified, it runs after the marking is done, which is implied by the order of the handlers.
This handler loops over the hosts the tasks ran on and checks whether each host needs a service restart; if so, it adds the host to the special hosts_to_restart group.
Because facts are persistent across plays, the third handler, clear mark, is notified to clear the mark on the affected hosts.
You can hide a lot of these lines by moving the handlers into a separate file and including it in each play (see the sketch after the example).
inventory file
10.1.1.[1:10]
[primary]
10.1.1.1
10.1.1.5
test.yml
---
- hosts: all
  gather_facts: no
  tasks:
    - name: Random change to notify trigger
      debug: msg="test"
      changed_when: "1|random == 1"
      notify:
        - mark to restart
        - add host
        - clear mark
  handlers:
    - name: mark to restart
      set_fact: restart_service=true
    - name: add host
      add_host:
        name: "{{item}}"
        groups: "hosts_to_restart"
      when: hostvars[item].restart_service is defined and hostvars[item].restart_service
      with_items: "{{ansible_play_batch}}"
    - name: clear mark
      set_fact: restart_service=false

- hosts: primary
  gather_facts: no
  tasks:
    - name: Change to notify trigger
      debug: msg="test"
      changed_when: true
      notify:
        - mark to restart
        - add host
        - clear mark
  handlers:
    - name: mark to restart
      set_fact: restart_service=true
    - name: add host
      add_host:
        name: "{{item}}"
        groups: "hosts_to_restart"
      when: hostvars[item].restart_service is defined and hostvars[item].restart_service
      with_items: "{{ansible_play_batch}}"
    - name: clear mark
      set_fact: restart_service=false

- hosts: hosts_to_restart
  gather_facts: no
  tasks:
    - name: Restart service
      debug: msg="Service restarted"
      changed_when: true
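As mentioned above, the duplicated handlers can be moved into a shared file and included from each play. A sketch of that idea (the file name handlers/restart_handlers.yml is illustrative):

# handlers/restart_handlers.yml
- name: mark to restart
  set_fact: restart_service=true
- name: add host
  add_host:
    name: "{{item}}"
    groups: "hosts_to_restart"
  when: hostvars[item].restart_service is defined and hostvars[item].restart_service
  with_items: "{{ansible_play_batch}}"
- name: clear mark
  set_fact: restart_service=false

Each play then only needs:

  handlers:
    - include: handlers/restart_handlers.yml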

A handler triggered in post_tasks will run after everything else, and the handler can be set to run_once: true.
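A minimal sketch of that pattern, using the question's restart autofs handler (the run_once setting is an assumption about the desired outcome):

- hosts: all
  become: true
  post_tasks:
    - name: Configure automount - /opt/local/xxx in /etc/auto.direct
      lineinfile:
        dest: /etc/auto.direct
        regexp: "^/opt/local/xxx"
        line: "/opt/local/xxx -acdirmin=0,acdirmax=0,rdirplus,rw,hard,intr,bg,retry=2 nfs_server:/vol/xxx"
      notify: restart autofs
  handlers:
    - name: restart autofs
      service:
        name: autofs.service
        state: restarted
      run_once: true   # fires on a single host only; drop this if every changed host must restart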

It's not clear to me what your handler should do. Anyway, as per the official documentation, handlers
are triggered at the end of each block of tasks in a play,
and will only be triggered once even if notified by multiple different
tasks [...] As of Ansible 2.2, handlers can also “listen” to generic topics, and tasks can notify those topics as follows:
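A minimal sketch of that listen mechanism (the service and topic names here are illustrative):

tasks:
  - name: Template the web server configuration
    template:
      src: nginx.conf.j2
      dest: /etc/nginx/nginx.conf
    notify: "restart web services"

handlers:
  - name: restart nginx
    service:
      name: nginx
      state: restarted
    listen: "restart web services"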
So handlers are notified / executed once for each block of tasks.
Maybe you can reach your goal just by keeping the handlers in a play that targets "all" hosts, but it doesn't seem like a clean use of handlers.

Related

How to run tasks file as a playbook and as an include_file

I have generic tasks, like updating DNS records based on my machine -> server mapping, which I use in my playbooks with include_tasks like so:
- name: (include) Update DNS
  include_tasks: task_update_dns.yml
I have many of these "generic tasks" simply acting as a "shared task" in many playbooks.
But in some cases I just want to run one of these generic tasks directly against a server; running the following gives the error below:
ansible-playbook playbooks/task_update_dns.yml --limit $myserver
# ERROR
ERROR! 'set_fact' is not a valid attribute for a Play
This is simply because it's not a "playbook"; these are just "tasks".
File: playbooks/task_update_dns.yml
---
- name: dig mydomain.com
  set_fact:
    my_hosts: '{{ lookup("dig", "mydomain.com").split(",") }}'
  tags: always

- name: Set entries
  blockinfile:
    ....
I know I could write an empty playbook that only "includes" the tasks file, but I don't want to create a shallow playbook for each task.
Is there a way to structure the tasks file so that I can run it both via include_tasks and as a stand-alone play from the command line?
Have you tried to use custom tags for those tasks?
Create a playbook with all the tasks:
---
- name: Update web servers
  hosts: webservers
  remote_user: root
  tasks:
    - name: dig mydomain.com
      set_fact:
        my_hosts: '{{ lookup("dig", "mydomain.com").split(",") }}'
      tags: always

    - name: Set entries
      blockinfile:
      tags:
        - set_entries

    - name: (include) Update DNS
      include_tasks: task_update_dns_2.yml
      tags:
        - dns
...
When you need to run only that specific task, you just need to add the --tag parameter to the command with the tag name:
ansible-playbook playbooks/task_update_dns.yml --limit $myserver --tag set_entries

Access hosts in play filtered by task

I have a task that checks the redis service status on the host list below
- hosts: 192.168.0.1, 192.168.0.2, 192.168.0.3
  tasks:
    - command:
        cmd: service redis-server status
      register: result

    - debug:
        var: result
After the check, I need to access the hosts where the service does not exist.
They should be accessible as a variable so that I can proceed with them in the next tasks.
Can someone please help?
Similar to Ansible facts, it is also possible to gather service_facts. For example:
- name: Set facts SERVICE
  set_fact:
    SERVICE: "redis-server.service"

- name: Gathering Service Facts
  service_facts:

- name: Show ansible_facts.services
  debug:
    msg:
      - "{{ ansible_facts.services[SERVICE].status }}"
If you want to perform tasks afterwards on a service whose status you don't know, you can check the state with conditionals.
If the service is not installed at that time, the specific variable (key) will not be defined, so you would base your conditionals on whether the variable is defined:
when: ansible_facts.services[SERVICE] is defined
when: ansible_facts.services['redis-server.service'] is defined
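For example, a sketch of a follow-up task guarded by that conditional (the restart task itself is illustrative, not part of the original answer):

- name: Restart redis-server only where the unit is known to the system
  service:
    name: redis-server
    state: restarted
  when: ansible_facts.services['redis-server.service'] is defined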
It is also recommended to use the Ansible service module to perform tasks on the service
- name: Start redis-server, if not started
  service:
    name: redis-server
    state: started
instead of using the command module.
Further service-related Q&A:
How to check service exists and is not installed in the server using service_facts module in an Ansible playbook?
Ansible: How to start stopped services?
Ansible: How to get disabled but running services?
How to list only the running services with ansible_facts?
Finally found a solution that matches perfectly:
- name: Check that redis service exists
  systemd:
    name: "redis"
  register: redis_status
  changed_when: redis_status.status.ActiveState == "inactive"

- set_fact:
    _dict: "{{ dict(ansible_play_hosts|zip(
               ansible_play_hosts|map('extract', hostvars, 'redis_status'))) }}"
  run_once: true

- set_fact:
    _changed: "{{ (_dict|dict2items|json_query('[?value.changed].key'))|join(',') }}"
  run_once: true
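A possible next step (illustrative; the group name redis_changed_hosts is an assumption) is to turn the comma-separated _changed string into an ad hoc group for use in later plays or tasks:

- add_host:
    name: "{{ item }}"
    groups: redis_changed_hosts
  loop: "{{ _changed.split(',') }}"
  when: _changed | length > 0
  run_once: true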

Is it somehow possible to create a template of an Ansible handler?

I am facing the problem that I need to define a set of similar handlers in Ansible that differ only in name.
Let me give an example. Here are some tasks:
# tasks/main.yml
- name: Install config of OpenVPN instance 1
  notify: restart openvpn-1
  ...

- name: Install config of OpenVPN instance 2
  notify: restart openvpn-2
  ...

# Multiple more of that pattern.
You might think that each instance has a slightly different configuration that can be handled here.
All right. Now the handlers:
# handlers/main.yml
- name: restart openvpn-1
  systemd:
    name: openvpn-server@instance1
    state: restarted

- name: restart openvpn-2
  systemd:
    name: openvpn-server@instance2
    state: restarted

# ...
You can see this is quite a lot of duplicated code (not good).
I was thinking of doing something like:
# Handler template or so
- name: restart-openvpn-{{ item }}
  systemd:
    name: openvpn-server@instance{{ item }}
    state: restarted
  loop:
    - "1"
    - "2"
  # ...
This is not working; I tried it.
I found this post, but it does not work so well here, because it assumes the task itself runs in a loop. Instead, I have a set of individual tasks that each trigger a single instance of the handlers. Also, this is already in a role.
So the short question is: How can I create a handler template to avoid code redundancy?
Your code works.
For example, I took two services and told them to restart using your approach:
- hosts: localhost
  connection: local
  become: true
  tasks:
    - name: restart-{{ item }}
      systemd:
        name: "{{ item }}.service"
        state: restarted
      loop:
        - "whoopsie"
        - "wpa_supplicant"
The output is:
PLAY RECAP **********
localhost : ok=2
The main difference from your code: add quotes.
As per Ansible Documentation on Using Variables:
YAML syntax requires that if you start a value with {{ foo }} you quote the whole line, since it wants to be sure you aren’t trying to start a YAML dictionary. This is covered on the YAML Syntax documentation.
This won’t work:
- hosts: app_servers
  vars:
    app_path: {{ base_path }}/22
Do it like this and you’ll be fine:
- hosts: app_servers
  vars:
    app_path: "{{ base_path }}/22"

How to set an Ansible tag from within a playbook?

I would like to skip some of the plays based on a conditional value.
I could have a task at the beginning that would skip some of the other tasks when a condition is met.
I know I could always use set_fact and use that as a condition to run the other plays (roles and tasks), or evaluate said condition directly in each play I want to skip.
However, since the plays already have a tagging system in place, is there something like set_tags in Ansible that I could leverage to avoid having to add conditions all over my plays and bloat my already heavy playbook?
Something such as:
- name: skip terraform tasks if no file is provided
  hosts: localhost
  tasks:
    - set_tags:
        - name: terraform
          state: skip
      when: terraform_files == ""
The two plays I'd like to skip:
- name: bootstrap a testing infrastructure
  hosts: localhost
  roles:
    - role: terraform
      state: present
      tags:
        - create_servers
        - terraform

[...]

- name: flush the testing infrastructure
  hosts: localhost
  roles:
    - role: terraform
      state: absent
      tags:
        - destroy_servers
        - terraform
If I understood correctly, the following should do the job:
- name: bootstrap a testing infrastructure
  hosts: localhost
  roles:
    - role: terraform
      state: present
      when: terraform_files != ""
      tags:
        - create_servers
        - terraform

[...]

- name: flush the testing infrastructure
  hosts: localhost
  roles:
    - role: terraform
      state: absent
      when: terraform_files != ""
      tags:
        - destroy_servers
        - terraform
This is basically the same as setting the when clause on every single task in the role. Each task will still be inspected, but it will be skipped if the condition is false.

How to run an ansible task only once regardless of how many targets there are

Consider the following Ansible task:
- name: stop tomcat
  gather_facts: false
  hosts: pod1
  pre_tasks:
    - include_vars:
        dir: "vars/{{ environment }}"
  vars:
    hipchat_message: "stop tomcat pod1 done."
    hipchat_notify: "yes"
  tasks:
    - include: tasks/stopTomcat8AndClearCache.yml
    - include: tasks/stopHttpd.yml
    - include: tasks/hipchatNotification.yml
This stops tomcat on n servers. What I want it to do is send a HipChat notification when it's done doing this. However, this code sends a separate HipChat message for each server the task runs on, which floods the HipChat window with redundant messages. Is there a way to make the HipChat task happen once, after the stop tomcat/stop httpd tasks have been done on all the targets? I want the play to shut down tomcat on all the servers, then send one HipChat message saying "tomcat stopped on pod 1".
You can conditionally run the hipchat notification task on only one of the pod1 hosts.
- include: tasks/hipchatNotification.yml
  when: inventory_hostname == groups.pod1[0]
Alternatively, you could run it only on localhost if you don't need any of the variables from the previous play.
- name: Run notification
  gather_facts: false
  hosts: localhost
  tasks:
    - include: tasks/hipchatNotification.yml
You also could use the run_once flag on the task itself.
- name: Do a thing on the first host in a group.
  debug:
    msg: "Yay only prints once"
  run_once: true
- name: Run this block only once per host group
  block:
    - name: Do a thing on the first host in a group.
      debug:
        msg: "Yay only prints once"
  run_once: true
Ansible handlers are made for this type of problem where you want to run a task once at the end of an operation even though it may have been triggered multiple times in the play.
You can define a handlers section in your play and notify it from the tasks; the handlers will not run unless notified by a task, and they will only run once regardless of how many times they are notified.
handlers:
  - name: hipchat notify
    hipchat:
      room: someroom
      msg: tomcat stopped on pod 1
In your play's tasks, just include a notify on the tasks that should trigger the handler; if they report changed, the handler will run after all the tasks have executed.
- name: Stop service httpd, if started
  service:
    name: httpd
    state: stopped
  notify:
    - hipchat notify
