Ansible won't see a handler when using group_by

I used to have a simple playbook (something like this) which I ran on all of my machines (RH & Debian based) to update them and, for each machine that was updated, run a script (notify a handler).
Recently I tried to test a new module called group_by. Instead of using a when condition to run yum update when ansible_distribution == "CentOS", I first gather the facts and group the hosts using their ansible_pkg_mgr as the key, and then run yum update on all the hosts whose key is PackageManager_yum. See the playbook example:
---
- hosts: all
  gather_facts: false
  remote_user: root
  tasks:
    - name: Gathering facts
      setup:
    - name: Create a group of all hosts by operating system
      group_by: key=PackageManager_{{ ansible_pkg_mgr }}

- hosts: PackageManager_apt
  gather_facts: false
  tasks:
    - name: Update DEB Family
      apt:
        upgrade: dist
        autoremove: yes
        install_recommends: no
        update_cache: yes
      when: ansible_os_family == "Debian"
      register: update_status
      notify: updateX
      tags:
        - deb
        - apt_update
        - update

- hosts: PackageManager_yum
  gather_facts: false
  tasks:
    - name: Update RPM Family
      yum: name=* state=latest
      when: ansible_os_family == "RedHat"
      register: update_status
      notify: updateX
      tags:
        - rpm
        - yum
        - yum_update
  handlers:
    - name: updateX
      command: /usr/local/bin/update
And this is the error message I get:
PLAY [all] ********************************************************************
TASK [Gathering facts] *********************************************************
Wednesday 21 December 2016 11:26:17 +0200 (0:00:00.031) 0:00:00.031 ****
....
TASK [Create a group of all hosts by operating system] *************************
Wednesday 21 December 2016 11:26:26 +0200 (0:00:01.443) 0:00:09.242 ****
TASK [Update DEB Family] *******************************************************
Wednesday 21 December 2016 11:26:26 +0200 (0:00:00.211) 0:00:09.454 ****
ERROR! The requested handler 'updateX' was not found in either the main handlers list nor in the listening handlers list
Thanks in advance.

You defined handlers in only one of your plays. It's quite clear if you look at the indentation.
The play executed for PackageManager_apt has no handlers at all (it has no access to the updateX handler defined in a separate play), so Ansible complains.
If you don't want to duplicate the code, you can save the handler to a separate file (let's name it handlers.yml) and include it in both plays with:
handlers:
  - name: Include common handlers
    include: handlers.yml
Note: there's a remark in the Handlers: Running Operations On Change section regarding including handlers:
You cannot notify a handler that is defined inside of an include. As of Ansible 2.1, this does work, however the include must be static.
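For illustration, here is a minimal sketch of the two plays from the question sharing the handler via such an include (assuming handlers.yml sits next to the playbook and contains the updateX handler):

- hosts: PackageManager_apt
  gather_facts: false
  tasks:
    - name: Update DEB Family
      apt:
        upgrade: dist
        update_cache: yes
      notify: updateX
  handlers:
    # static include, so notify can resolve updateX by name
    - include: handlers.yml

- hosts: PackageManager_yum
  gather_facts: false
  tasks:
    - name: Update RPM Family
      yum: name=* state=latest
      notify: updateX
  handlers:
    - include: handlers.yml

where handlers.yml holds the handler definition from the question:

- name: updateX
  command: /usr/local/bin/update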
Finally, you should consider converting your playbook to a role.
A common method to achieve what you want is to include the tasks (in tasks/main.yml) using file names with the architecture in their names:
- include: "{{ architecture_specific_tasks_file }}"
  with_first_found:
    - "tasks-for-{{ ansible_distribution }}.yml"
    - "tasks-for-{{ ansible_os_family }}.yml"
  loop_control:
    loop_var: architecture_specific_tasks_file
Handlers are then defined in handlers/main.yml.
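As a sketch, the resulting role layout might look like this (the role name update is made up for illustration):

roles/
  update/
    tasks/
      main.yml                # contains the include with with_first_found above
      tasks-for-Debian.yml    # apt-based update tasks, notify: updateX
      tasks-for-RedHat.yml    # yum-based update tasks, notify: updateX
    handlers/
      main.yml                # holds the updateX handler from the question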

Related

How do I tell Ansible to include localhost on the task?

I have this task:
- name: Install OpenJDK
  become: true
  apt:
    name: openjdk-8-jre-headless
    cache_valid_time: 60
    state: latest
I want to run it in all hosts, including localhost. How can I tell Ansible to include localhost in the hosts for just one play?
You just add localhost to the pattern of targeted hosts in your play. Note that, unless you re-define it in your inventory, localhost is implicit and does not match the all special group.
The general idea:
---
- name: This play will target all hosts in inventory
  hosts: all
  tasks:
    - debug:
        msg: I'm a dummy task

- name: This play will target all inventory hosts AND implicit localhost
  hosts: all:localhost
  tasks:
    - debug:
        msg: Yet another dummy task
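Note that the implicit localhost uses a local connection, so no SSH is involved. If you would rather have localhost matched by all, you can declare it explicitly in the inventory instead; a minimal sketch (web01.example.com is a placeholder):

localhost ansible_connection=local

[webservers]
web01.example.com

Once localhost is defined in the inventory, it becomes a regular host and is matched by all.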

Running tasks on dynamically collected EC2 instances

I'm trying to write a role with tasks that filters out several EC2 instances, adds them to the inventory and then stops a PHP service on them.
This is how far I've gotten, the add_host idea I'm copying from here: http://docs.catalystcloud.io/tutorials/ansible-create-x-servers-using-in-memory-inventory.html
My service task does not appear to be running on the target instances but instead on the host specified in the playbook which runs this role.
---
- name: Collect ec2 data
  connection: local
  ec2_remote_facts:
    region: "us-east-1"
    filters:
      "tag:Name": MY_TAG
  register: ec2_info

- name: "Add the ec2 hosts to a group"
  add_host:
    name: "{{ item.id }}"
    groups: foobar
    ansible_user: root
  with_items: "{{ ec2_info.instances }}"

- name: Stop the service
  hosts: foobar
  become: yes
  gather_facts: false
  service: name=yii-queue#1 state=stopped enabled=yes
UPDATE: when I try baptistemm's suggestion, I get this:
PLAY [MAIN_HOST_NAME] ***************************
TASK [ec2-manage : Collect ec2 data]
*******************************************
ok: [MAIN_HOST_NAME]
TASK [ec2-manage : Add the hosts to a new group]
*******************************************************
PLAY RECAP **********************************************************
MAIN_HOST_NAME : ok=1 changed=0 unreachable=0 failed=0
UPDATE #2 - yes, the ec2_remote_facts filter does return instances (using the real tag value, not the fake one I put in this post). Also, I have seen ec2_instance_facts, but I ran into some issues with that (boto3 required, although I have a workaround there; still, I'm trying to fix the current issue first).
If you want to run tasks on a group of targets, you need to define a new play after you have created the in-memory list of targets with add_host (as mentioned in your link). So you need something like this:
… # omit the code before
- name: "Add the ec2 hosts to a group"
  add_host:
    name: "{{ item.id }}"
    groups: foobar
    ansible_user: root
  with_items: "{{ ec2_info.instances }}"

- hosts: foobar
  gather_facts: false
  become: yes
  tasks:
    - name: Stop the service
      service: name=yii-queue#1 state=stopped enabled=yes

Run an Ansible handler only once for the entire playbook

I would like to run a handler only once in an entire playbook.
I attempted to use an include statement in the playbook file, as in the following, but this resulted in the handler being run multiple times, once for each play:
- name: Configure common config
  hosts: all
  become: true
  vars:
    OE: "{{ ansible_hostname[5] }}"
  roles:
    - { role: common }
  handlers:
    - include: handlers/main.yml

- name: Configure metadata config
  hosts: metadata
  become: true
  vars:
    OE: "{{ ansible_hostname[5] }}"
  roles:
    - { role: metadata }
  handlers:
    - include: handlers/main.yml
Here is the content of handlers/main.yml:
- name: restart autofs
  service:
    name: autofs.service
    state: restarted
Here is an example of one of the tasks that notifies the handler:
- name: Configure automount - /opt/local/xxx in /etc/auto.direct
  lineinfile:
    dest: /etc/auto.direct
    regexp: "^/opt/local/xxx"
    line: "/opt/local/xxx -acdirmin=0,acdirmax=0,rdirplus,rw,hard,intr,bg,retry=2 nfs_server:/vol/xxx"
  notify: restart autofs
How can I get the playbook to only execute the handler once for the entire playbook?
The answer
The literal answer to the question in the title is: no.
A playbook is a list of plays. A playbook has no namespace, no variables, no state. All the configuration, logic, and tasks are defined in plays.
A handler is a task with a different calling schedule: not sequential, but conditional, run once at the end of a play, or triggered by the meta: flush_handlers task.
A handler belongs to a play, not a playbook, and there is no way to trigger it outside of the play (i.e. at the end of the playbook).
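For reference, a minimal sketch of the meta: flush_handlers mechanism mentioned above (the task and handler names are made up for illustration):

- hosts: all
  tasks:
    - name: Change a config file
      copy:
        src: app.conf
        dest: /etc/app.conf
      notify: restart app

    # force all notified handlers to run here, instead of at the end of the play
    - name: Flush handlers mid-play
      meta: flush_handlers

    - name: Later tasks already see the restarted service
      debug:
        msg: handlers have run
  handlers:
    - name: restart app
      service:
        name: app
        state: restarted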
Solution
The solution to the problem is possible without resorting to handlers.
You can use the group_by module to create an ad-hoc group based on the result of the tasks at the bottom of each play.
Then you can define a separate play at the end of the playbook that restarts the service on the targets belonging to that ad-hoc group.
Refer to the below stub for the idea:
- hosts: all
  roles:
    # roles declaration
  tasks:
    - # an example task modifying Nginx configuration
      register: nginx_configuration

    # ... other tasks ...

    - name: the last task in the play
      group_by:
        key: hosts_to_restart_{{ 'nginx' if nginx_configuration is changed else '' }}

# ... other plays ...

- hosts: hosts_to_restart_nginx
  gather_facts: no
  tasks:
    - service:
        name: nginx
        state: restarted
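Note the design choice here: hosts where nothing changed end up in a group named hosts_to_restart_ (with an empty suffix), which no later play targets, so only hosts with an actual configuration change get the restart.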
Possible solution
Use handlers to add hosts to the in-memory inventory, then add a play that restarts the service only for those hosts.
See this example:
If a task reports changed, it notifies mark to restart, which sets a fact recording that the host needs a service restart.
The second handler, add host, is quite special, because an add_host task only runs once for the whole play, even in a handler (see also the documentation). But if notified, it runs after the marking is done, which follows from handler ordering.
That handler loops over the hosts the tasks ran on and checks whether each host's service needs a restart; if yes, the host is added to the special hosts_to_restart group.
Because facts persist across plays, a third notified handler clears the mark for the affected hosts.
You can hide a lot of these lines by moving the handlers to a separate file and including them.
inventory file
10.1.1.[1:10]
[primary]
10.1.1.1
10.1.1.5
test.yml
---
- hosts: all
  gather_facts: no
  tasks:
    - name: Random change to notify trigger
      debug: msg="test"
      changed_when: "1|random == 1"
      notify:
        - mark to restart
        - add host
        - clear mark
  handlers:
    - name: mark to restart
      set_fact: restart_service=true
    - name: add host
      add_host:
        name: "{{ item }}"
        groups: "hosts_to_restart"
      when: hostvars[item].restart_service is defined and hostvars[item].restart_service
      with_items: "{{ ansible_play_batch }}"
    - name: clear mark
      set_fact: restart_service=false

- hosts: primary
  gather_facts: no
  tasks:
    - name: Change to notify trigger
      debug: msg="test"
      changed_when: true
      notify:
        - mark to restart
        - add host
        - clear mark
  handlers:
    - name: mark to restart
      set_fact: restart_service=true
    - name: add host
      add_host:
        name: "{{ item }}"
        groups: "hosts_to_restart"
      when: hostvars[item].restart_service is defined and hostvars[item].restart_service
      with_items: "{{ ansible_play_batch }}"
    - name: clear mark
      set_fact: restart_service=false

- hosts: hosts_to_restart
  gather_facts: no
  tasks:
    - name: Restart service
      debug: msg="Service restarted"
      changed_when: true
A handler triggered in post_tasks will run after everything else. And the handler can be set to run_once: true.
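A minimal sketch of that idea (the service and handler names are illustrative, not from the question):

- hosts: all
  tasks:
    - name: Main configuration work
      debug:
        msg: regular tasks run first
  post_tasks:
    - name: Trigger the one-shot handler
      command: /bin/true
      notify: restart once
  handlers:
    - name: restart once
      # runs on a single host even if every host notified it
      run_once: true
      service:
        name: myservice
        state: restarted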
It's not clear to me what your handler should do. Anyway, per the official documentation, handlers
are triggered at the end of each block of tasks in a play, and will only be triggered once even if notified by multiple different tasks [...] As of Ansible 2.2, handlers can also "listen" to generic topics, and tasks can notify those topics.
So handlers are notified / executed once for each block of tasks.
Maybe you can reach your goal just by keeping the handlers in the play targeting all hosts, but it doesn't seem a clean use of handlers.

Does Ansible delegate_to work with handlers?

I wrote a playbook to modify the IP address of several remote systems. I wrote the playbook to change only a few systems at a time, so I wanted to use delegate_to to change the DNS record on the nameservers as each system was modified, instead of adding a separate play targeted at the nameservers that would change all the host IPs at once.
However, it seems the handler is being run on the primary playbook target, not my delegate_to target. Does anyone have recommendations for working around this?
Here's my playbook:
---
host: hosts-to-modify
serial: 1
tasks:
  - Modify IP for host-to-modify
    //snip//
  - name: Modify DNS entry
    delegate_to: dns-servers
    become: yes
    replace:
      args:
        backup: yes
        regexp: '^{{ inventory_hostname }}\s+IN\s+A\s+[\d\.]+$'
        replace: "{{ inventory_hostname }} IN A {{ new_ip }}"
        dest: /etc/bind/db.my.domain
    notify:
      - reload dns service
handlers:
  - name: reload dns service
    become: yes
    service:
      args:
        name: bind9
        state: reloaded
With an inventory file like the following:
[dns-servers]
ns01
ns02
[hosts-to-modify]
host1 new_ip=10.1.1.10
host2 new_ip=10.1.1.11
host3 new_ip=10.1.1.12
host4 new_ip=10.1.1.13
Output snippet, including error message:
TASK [Modify DNS entry] ********************************************************
Friday 02 September 2016 14:46:09 -0400 (0:00:00.282) 0:00:35.876 ******
changed: [host1 -> ns01]
changed: [host1 -> ns02]
RUNNING HANDLER [reload dns service] *******************************************
Friday 02 September 2016 14:47:00 -0400 (0:00:38.925) 0:01:27.385 ******
fatal: [host1]: FAILED! => {"changed": false, "failed": true, "msg": "no service or tool found for: bind9"}
First of all, your example playbook is invalid in several ways: the play syntax is flawed, and delegate_to can't target a group of hosts.
If you want to delegate to multiple servers, you should iterate over them.
And answering your main question: yes, you can use delegate_to with handlers:
handlers:
  - name: reload dns service
    become: yes
    service:
      args:
        name: bind9
        state: reloaded
    delegate_to: "{{ item }}"
    with_items: "{{ groups['dns-servers'] }}"
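Loop keywords such as with_items work on handlers just as on regular tasks, so when notified this handler runs once per nameserver in the dns-servers group, delegating the reload to each of them in turn.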

How to run a particular task on a specific host in Ansible

My inventory file's contents:
[webservers]
x.x.x.x ansible_ssh_user=ubuntu
[dbservers]
x.x.x.x ansible_ssh_user=ubuntu
In my tasks file, which is in a common role (i.e. it will run on both hosts), I want the following task to run only on the webservers host, not on the dbservers host defined in the inventory file:
- name: Install required packages
  apt: name={{ item }} state=present
  with_items:
    - '{{ programs }}'
  become: yes
  tags: programs
Is the when condition helpful here, or is there another way? How could I do this?
If you want to run your role on all hosts but limit a single task to the webservers group, then - like you already suggested - when is your friend.
You could define a condition like:
when: inventory_hostname in groups['webservers']
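For instance, applied to the task from the question (a sketch reusing the programs variable from there):

- name: Install required packages
  apt: name={{ item }} state=present
  with_items:
    - '{{ programs }}'
  become: yes
  tags: programs
  when: inventory_hostname in groups['webservers']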
Thank you, this helps me too.
hosts file:
[production]
host1.dns.name
[internal]
host2.dns.name
requirements.yml file:
- name: install the sphinx-search rpm from a remote repo on x86_64 - internal host
  when: inventory_hostname in groups['internal']
  yum:
    name: http://sphinxsearch.com/files/sphinx-2.2.11-1.rhel7.x86_64.rpm
    state: present

- name: install the sphinx-search rpm from a remote repo on i386 - Production
  when: inventory_hostname in groups['production']
  yum:
    name: http://sphinxsearch.com/files/sphinx-2.2.11-2.rhel6.i386.rpm
    state: present
An alternative to consider in some scenarios is:
delegate_to: hostname
There is also this example from the Ansible docs on looping over a group: https://docs.ansible.com/ansible/latest/user_guide/playbooks_delegation.html
- hosts: app_servers
  tasks:
    - name: gather facts from db servers
      setup:
      delegate_to: "{{ item }}"
      delegate_facts: True
      loop: "{{ groups['dbservers'] }}"
