Why are my Ansible handlers not firing?

I have a playbook that installs tomcat and then deploys some web applications. The web application deploy task(s) notifies a handler to restart tomcat. But the handler never fires. I am using a handler to manage the tomcat service because I understand from the docs that handlers should only fire once even if called multiple times. Am I missing something obvious?
This is the playbook:
---
- hosts: all
  become: true
  become_user: root
  roles:
    - role: common
    - role: nginx
    - role: tomcat
    - role: launchpad
    - role: manager
    - role: reporting
  handlers:
    - include: /tomcat/handlers/etitomcat_service_ctrl.yml
This is one of the roles that deploys the web app:
---
- name: Remove the current installation of LaunchPad
  file: path={{etitomcat_path}}/webapps/{{launchpad_module}} state=absent

- name: Remove the current war file for {{launchpad_module}}
  file: path={{etitomcat_path}}/webapps/{{launchpad_module}}.war state=absent

- name: Download the latest snapshot of LaunchPad and deploy it to {{etitomcat_path}}
  get_url: url={{launchpad_source_url}} dest={{etitomcat_path}}/webapps/{{launchpad_module}}.war mode=0744 owner={{etitomcat_user}} group={{etitomcat_group}} force=yes
  notify: "restart_eti_tomcat"
This is the handler:
- name: "Restart ETI Tomcat"
service: name=etitomcat state=restarted
become: true
become_user: root
listen: "restart_eti_tomcat"
- name: "Start ETI Tomcat"
service: name=etitomcat state=started
become: true
become_user: root
listen: "start_eti_tomcat"
- name: "Stop ETI Tomcat"
service: name=etitomcat state=stopped
become: true
become_user: root
listen: "stop_eti_tomcat"

Adding static: yes should resolve this issue when using Ansible >= 2.1.
handlers:
  - include: /tomcat/handlers/etitomcat_service_ctrl.yml
    static: yes
Take a look at this GitHub issue; the linked Google Groups thread might contain valuable information as well.
Edit:
As pointed out by @rhythmicdevil, the documentation notes:
You cannot notify a handler that is defined inside of an include. As of Ansible 2.1, this does work, however the include must be static.
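On current Ansible releases the syntax has since changed: include with static: yes was deprecated and later removed, and import_tasks is the static equivalent. A rough sketch of the same handlers section on a modern release (untested, same file path as above):

handlers:
  - import_tasks: /tomcat/handlers/etitomcat_service_ctrl.yml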

This may be beside the point, but I'll add it regardless: the question title is rather broad, this is the question I found when googling, and below is the solution for the particular issue I had.
Keep in mind that handlers only trigger when the corresponding task registers a change. Even if you run the play at the highest verbosity level, there will be NO entry like the following to spell this out:
RUNNING HANDLER [base : somehandler ] ***********************
Unchanged: Skipping
And when handlers are triggered by a change, they run after all the tasks have been performed.
This really did my head in, since tasks show up in the output whether or not they actually changed anything, whereas handlers stay quiet.
For example, say you have a task A that you have run a few times until it works as you expect.
If you then attach a handler B to restart the service, nothing will happen unless you undo whatever task A did or change the task.
As long as task A registers no change, it will not trigger handler B.
This is the behavior for Ansible 2.2.1, at any rate.
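As a minimal sketch of this (the service name and marker path are invented for illustration), the task below reports a change, and therefore notifies its handler, only on the first run; adding changed_when: true to the task would force the notification on every run:

---
- hosts: all
  tasks:
    # Reports "changed" only while the marker file does not exist yet,
    # so the handler is notified only on the first run.
    - name: Create a marker file (idempotent on reruns)
      command: touch /tmp/app.marker
      args:
        creates: /tmp/app.marker
      notify: restart app

  handlers:
    - name: restart app
      service:
        name: app
        state: restarted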

You can use a post_tasks section instead of handlers; post_tasks run after all the roles in the play. Hope it will work for you:
---
- hosts: all
  become: true
  become_user: root
  roles:
    - role: common
    - role: nginx
    - role: tomcat
    - role: launchpad
    - role: manager
    - role: reporting
  post_tasks:
    - include: /tomcat/handlers/etitomcat_service_ctrl.yml

Related

Restart service when service file changes when using Ansible

I am creating a systemd service using the template module:
---
- name: Systemd service
  template:
    src: sonar.unit.j2
    dest: /etc/systemd/system/sonarqube.service
  when: "ansible_service_mgr == 'systemd'"
The contents of the sonarqube.service can change of course. On change I want to restart the service. How can I do this?
There are two solutions.
Register + When changed
You can register the template module's output (which includes its changed status),
register: service_conf
and then use a when clause:
when: service_conf.changed
For example:
---
- name: Systemd service
  template:
    src: sonar.unit.j2
    dest: /etc/systemd/system/sonarqube.service
  when: "ansible_service_mgr == 'systemd'"
  register: service_conf

- name: restart service
  service:
    name: sonarqube
    state: restarted
  when: service_conf.changed
Handler + Notify
You define your restart-service task as a handler, and then in your template task you notify that handler.
tasks:
  - name: Add Sonarqube to Systemd service
    template:
      src: sonar.unit.j2
      dest: /etc/systemd/system/sonarqube.service
    when: "ansible_service_mgr == 'systemd'"
    notify: Restart Sonarqube
  - …

handlers:
  - name: Restart Sonarqube
    service:
      name: sonarqube
      state: restarted
More info can be found in the Ansible documentation.
Difference between the two?
In the first case, the service restarts right after the template task. With the handler, the restart happens at the end of the play.
Another difference: if several tasks' changes require a restart of your service, you simply add the notify to all of them.
The handler will run if any of those tasks reports a changed status. With the first solution, you would have to register several results, and the when clause grows into when: clause_1 or clause_2 or …
The handler will run only once even if notified several times.
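As a rough illustration of that last point (the second template file here is invented for the example), two tasks can notify the same handler, and the service is still restarted only once at the end of the play if either or both report a change:

tasks:
  - name: Add Sonarqube unit file
    template:
      src: sonar.unit.j2
      dest: /etc/systemd/system/sonarqube.service
    notify: Restart Sonarqube

  # Hypothetical second config file that also requires a restart on change
  - name: Add Sonarqube properties
    template:
      src: sonar.properties.j2
      dest: /opt/sonarqube/conf/sonar.properties
    notify: Restart Sonarqube

handlers:
  - name: Restart Sonarqube
    service:
      name: sonarqube
      state: restarted

With the register approach, the restart task would instead need something like when: unit_conf.changed or props_conf.changed (register names invented here).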
This calls for a handler
---
- name: Testplaybook
  hosts: all
  handlers:
    - name: restart_service
      service:
        name: <servicename>
        state: restarted
  tasks:
    - template:
        src: ...
        dest: ...
      notify:
        - restart_service
The handler will automatically get notified by the module when something changes. See the documentation for further information on handlers.
Since you are using systemd, you will also need to execute daemon-reload because you updated the service file.
The task just templates the service file and notifies a handler:
- name: Systemd service
  template:
    src: sonar.unit.j2
    dest: /etc/systemd/system/sonarqube.service
  when: "ansible_service_mgr == 'systemd'"
  notify: restart sonarqube systemd
Based on the presence of your specific when clause above, I'm assuming you might want to specify separate handlers in the case that systemd is not in use. The handler for the systemd case would look like the following:
- name: restart sonarqube systemd
  systemd:
    name: sonarqube
    state: restarted
    daemon_reload: yes
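For the non-systemd branch, a sketch of a fallback handler could use the generic service module (this assumes the service is also registered as sonarqube under the other init system, and that the corresponding template task uses when: "ansible_service_mgr != 'systemd'" and notify: restart sonarqube):

- name: restart sonarqube
  service:
    name: sonarqube
    state: restarted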

How to run an ansible task only once regardless of how many targets there are

Consider the following Ansible play:
- name: stop tomcat
  gather_facts: false
  hosts: pod1
  pre_tasks:
    - include_vars:
        dir: "vars/{{ environment }}"
  vars:
    hipchat_message: "stop tomcat pod1 done."
    hipchat_notify: "yes"
  tasks:
    - include: tasks/stopTomcat8AndClearCache.yml
    - include: tasks/stopHttpd.yml
    - include: tasks/hipchatNotification.yml
This stops tomcat on any number of servers. What I want it to do is send a HipChat notification when it's done. However, this code sends a separate HipChat message for each server the task runs on, which floods the HipChat window with redundant messages. Is there a way to make the HipChat task run once, after the stop tomcat/stop httpd tasks have been done on all the targets? I want the play to shut down tomcat on all the servers, then send one HipChat message saying "tomcat stopped on pod 1".
You can conditionally run the hipchat notification task on only one of the pod1 hosts.
- include: tasks/hipChatNotification.yml
  when: inventory_hostname == groups.pod1[0]
Alternatively, you could run it only on localhost if you don't need any of the variables from the previous play.
- name: Run notification
  gather_facts: false
  hosts: localhost
  tasks:
    - include: tasks/hipchatNotification.yml
You could also use the run_once flag on the task itself.
- name: Do a thing on the first host in a group.
  debug:
    msg: "Yay only prints once"
  run_once: true
- name: Run this block only once per host group
  block:
    - name: Do a thing on the first host in a group.
      debug:
        msg: "Yay only prints once"
  run_once: true
Ansible handlers are made for this type of problem, where you want to run a task once at the end of an operation even though it may have been triggered multiple times in the play.
You can define a handlers section in your playbook and notify it from tasks; handlers will not run unless notified by a task, and will run only once regardless of how many times they are notified.
handlers:
  - name: hipchat notify
    hipchat:
      room: someroom
      msg: tomcat stopped on pod 1
In your play, just add a notify to the tasks that should trigger the handler; if any of them report a change, the handler will run after all tasks have executed.
- name: Stop service httpd, if started
  service:
    name: httpd
    state: stopped
  notify:
    - hipchat notify

Run an Ansible handler only once for the entire playbook

I would like to run a handler only once in an entire playbook.
I attempted to use an include statement in the playbook file as follows, but this resulted in the handler being run multiple times, once for each play:
- name: Configure common config
  hosts: all
  become: true
  vars:
    OE: "{{ ansible_hostname[5] }}"
  roles:
    - { role: common }
  handlers:
    - include: handlers/main.yml

- name: Configure metadata config
  hosts: metadata
  become: true
  vars:
    OE: "{{ ansible_hostname[5] }}"
  roles:
    - { role: metadata }
  handlers:
    - include: handlers/main.yml
Here is the content of handlers/main.yml:
- name: restart autofs
  service:
    name: autofs.service
    state: restarted
Here is an example of one of the tasks that notifies the handler:
- name: Configure automount - /opt/local/xxx in /etc/auto.direct
  lineinfile:
    dest: /etc/auto.direct
    regexp: "^/opt/local/xxx"
    line: "/opt/local/xxx -acdirmin=0,acdirmax=0,rdirplus,rw,hard,intr,bg,retry=2 nfs_server:/vol/xxx"
  notify: restart autofs
How can I get the playbook to only execute the handler once for the entire playbook?
The answer
The literal answer to the question in the title is: no.
A playbook is a list of plays. A playbook has no namespace, no variables, no state. All the configuration, logic, and tasks are defined in plays.
A handler is a task with a different calling schedule (not sequential, but conditional: once at the end of a play, or when triggered by the meta: flush_handlers task).
A handler belongs to a play, not a playbook, and there is no way to trigger it outside of the play (i.e. at the end of the playbook).
Solution
The problem can, however, be solved without resorting to handlers.
You can use the group_by module to create an ad hoc group based on the result of the tasks at the bottom of each play.
Then you can define a separate play at the end of the playbook that restarts the service on the targets belonging to that ad hoc group.
Refer to the below stub for the idea:
- hosts: all
  roles:
    # roles declaration
  tasks:
    - # an example task modifying Nginx configuration
      register: nginx_configuration

    # ... other tasks ...

    - name: the last task in the play
      group_by:
        key: hosts_to_restart_{{ 'nginx' if nginx_configuration is changed else '' }}

# ... other plays ...

- hosts: hosts_to_restart_nginx
  gather_facts: no
  tasks:
    - service:
        name: nginx
        state: restarted
Possible solution
Use handlers to add hosts to the in-memory inventory, then add a play that restarts the service only for those hosts.
See this example:
If a task reports changed, it notifies mark to restart, which sets a fact indicating that the host needs a service restart.
The second handler, add host, is quite special, because the add_host task runs only once for the whole play, even in a handler (see also the documentation). But if notified, it runs after the marking is done, which follows from the handler order.
That handler loops over the hosts the play ran on and checks whether each host needs a service restart; if so, it adds the host to the special hosts_to_restart group.
Because facts persist across plays, a third handler, clear mark, is notified to reset the flag on the affected hosts.
You can hide a lot of these lines by moving the handlers to a separate file and including it.
inventory file
10.1.1.[1:10]
[primary]
10.1.1.1
10.1.1.5
test.yml
---
- hosts: all
  gather_facts: no
  tasks:
    - name: Random change to notify trigger
      debug: msg="test"
      changed_when: "1|random == 1"
      notify:
        - mark to restart
        - add host
        - clear mark
  handlers:
    - name: mark to restart
      set_fact: restart_service=true
    - name: add host
      add_host:
        name: "{{item}}"
        groups: "hosts_to_restart"
      when: hostvars[item].restart_service is defined and hostvars[item].restart_service
      with_items: "{{ansible_play_batch}}"
    - name: clear mark
      set_fact: restart_service=false

- hosts: primary
  gather_facts: no
  tasks:
    - name: Change to notify trigger
      debug: msg="test"
      changed_when: true
      notify:
        - mark to restart
        - add host
        - clear mark
  handlers:
    - name: mark to restart
      set_fact: restart_service=true
    - name: add host
      add_host:
        name: "{{item}}"
        groups: "hosts_to_restart"
      when: hostvars[item].restart_service is defined and hostvars[item].restart_service
      with_items: "{{ansible_play_batch}}"
    - name: clear mark
      set_fact: restart_service=false

- hosts: hosts_to_restart
  gather_facts: no
  tasks:
    - name: Restart service
      debug: msg="Service restarted"
      changed_when: true
A handler notified in post_tasks will run after everything else, and the handler can be set to run_once: true.
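A minimal sketch of that idea, reusing the autofs handler from the question (this is just an outline of the shape, not a tested playbook):

- hosts: all
  become: true
  roles:
    - { role: common }
  post_tasks:
    - name: Configure automount - /opt/local/xxx in /etc/auto.direct
      lineinfile:
        dest: /etc/auto.direct
        regexp: "^/opt/local/xxx"
        line: "/opt/local/xxx -acdirmin=0,acdirmax=0,rdirplus,rw,hard,intr,bg,retry=2 nfs_server:/vol/xxx"
      notify: restart autofs
  handlers:
    - name: restart autofs
      run_once: true    # run the handler only once, on a single host
      service:
        name: autofs.service
        state: restarted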
It's not clear to me what your handler should do. Anyway, as per the official documentation, handlers
are triggered at the end of each block of tasks in a play,
and will only be triggered once even if notified by multiple different
tasks [...] As of Ansible 2.2, handlers can also “listen” to generic topics, and tasks can notify those topics as follows:
So handlers are notified / executed once for each block of tasks.
Maybe you can reach your goal by simply keeping the handlers in a play targeting all hosts, but that doesn't seem like a clean use of handlers.

Ansible handlers and shell module

I have a playbook with the following task and handler section (just a snippet):
tasks:
  - name: 'Run legacy script and power off'
    debug: msg="Preparing for reboot"
    notify: Legacy sysprep

handlers:
  - name: Enable Service1
    service: name=service1 enabled=yes state=restarted
  - name: Legacy sysprep
    shell: /var/scripts/prep-reboot.sh
When I run the playbook, I see the debug message for the task that calls the Legacy sysprep handler, and I see the Enable Service1 handler executed, but the Legacy sysprep handler isn't called (it doesn't show up in the playbook output, nor does it run on the system) and the servers are not rebooted (part of the script).
Yes, I plan to migrate the prep-reboot.sh script to an Ansible playbook, but I was surprised that the shell module apparently doesn't work here. Or is there an error I've overlooked? Running with -vvv doesn't report anything unexpected.
Ansible and Ansible-playbook version 2.1.1.0 running on RHEL 6.8.
Thank you, @techraf, you pointed me in the right direction. I ended up adding changed_when: true to the debug line; that forces the task to register a change, which then triggers the appropriate handler.
Here is my actual test playbook for reference:
---
- name: Testing forced handler
  hosts: testsys_only
  gather_facts: True
  tasks:
    - name: 'Run legacy script and power off'
      debug: msg="Preparing for reboot"
      changed_when: true
      notify: Legacy sysprep
  handlers:
    - name: Enable Service1
      service: name=service1 enabled=yes state=restarted
    - name: Legacy sysprep
      shell: /var/scripts/prep-reboot.sh

How to force handler to run before executing a task in Ansible?

I have a playbook which should configure an app on a specified IP, and then connect to that app to configure things inside it.
The problem: I need to restart the app after changing anything in its config; if I do not restart it, the connection fails, because the app knows nothing about the new config with the new IP address I'm trying to access.
My current playbook:
tasks:
  - name: Configure app
    template: src=app.conf.j2 dest=/etc/app.conf
    notify: restart app

  - name: Change data in app
    configure_app: host={{new_ip}} data={{data}}

handlers:
  - name: restart app
    service: name=app state=restarted
I need to force the handler to run (when "Configure app" reports a change) before executing "Change data in app".
If you want to force the handler to run in between the two tasks instead of at the end of the play, you need to put this between the two tasks:
- meta: flush_handlers
Example taken from the Ansible documentation:
tasks:
  - shell: some tasks go here
  - meta: flush_handlers
  - shell: some other tasks
Note that this will cause all pending handlers to run at that point, not just that specific one.
