Does Ansible delegate_to work with handlers?

I wrote a playbook to modify the IP address of several remote systems. I wrote the playbook to change only a few systems at a time, so I wanted to use delegate_to to change the DNS record on the nameservers as each system was modified, instead of adding a separate play targeted at the nameservers that would change all the host IPs at once.
However, it seems the handler is being run on the primary playbook target, not my delegate_to target. Does anyone have recommendations for working around this?
Here's my playbook:
---
host: hosts-to-modify
serial: 1
tasks:
  - Modify IP for host-to-modify
  //snip//
  - name: Modify DNS entry
    delegate_to: dns-servers
    become: yes
    replace:
    args:
      backup: yes
      regexp: '^{{ inventory_hostname }}\s+IN\s+A\s+[\d\.]+$'
      replace: "{{ inventory_hostname }} IN A {{ new_ip }}"
      dest: /etc/bind/db.my.domain
    notify:
      - reload dns service
handlers:
  - name: reload dns service
    become: yes
    service:
    args:
      name: bind9
      state: reloaded
With an inventory file like the following:
[dns-servers]
ns01
ns02
[hosts-to-modify]
host1 new_ip=10.1.1.10
host2 new_ip=10.1.1.11
host3 new_ip=10.1.1.12
host4 new_ip=10.1.1.13
Output snippet, including error message:
TASK [Modify DNS entry] ********************************************************
Friday 02 September 2016 14:46:09 -0400 (0:00:00.282) 0:00:35.876 ******
changed: [host1 -> ns01]
changed: [host1 -> ns02]
RUNNING HANDLER [reload dns service] *******************************************
Friday 02 September 2016 14:47:00 -0400 (0:00:38.925) 0:01:27.385 ******
fatal: [host1]: FAILED! => {"changed": false, "failed": true, "msg": "no service or tool found for: bind9"}

First of all, your example playbook is invalid in several ways: the play syntax is flawed, and delegate_to can't be targeted at a group of hosts.
If you want to delegate to multiple servers, you should iterate over them.
And answering your main question: yes, you can use delegate_to with handlers:
handlers:
  - name: reload dns service
    become: yes
    service:
    args:
      name: bind9
      state: reloaded
    delegate_to: "{{ item }}"
    with_items: "{{ groups['dns-servers'] }}"
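For reference, a corrected version of the whole play could look roughly like the sketch below (untested): the play header uses hosts, the DNS task and the handler both iterate over the dns-servers group as described above, and the snipped IP-modification tasks are left as a placeholder comment.
---
- hosts: hosts-to-modify
  serial: 1
  tasks:
    # ... tasks that modify the IP of the current host ...
    - name: Modify DNS entry
      become: yes
      replace:
        backup: yes
        regexp: '^{{ inventory_hostname }}\s+IN\s+A\s+[\d\.]+$'
        replace: "{{ inventory_hostname }} IN A {{ new_ip }}"
        dest: /etc/bind/db.my.domain
      delegate_to: "{{ item }}"
      with_items: "{{ groups['dns-servers'] }}"
      notify:
        - reload dns service
  handlers:
    - name: reload dns service
      become: yes
      service:
        name: bind9
        state: reloaded
      delegate_to: "{{ item }}"
      with_items: "{{ groups['dns-servers'] }}"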

Related

Handler not executing when using a when condition in a task

I'm trying to set up an NFS share between two nodes using Ansible. I'm using the role nfs, which is executed by both nodes, and I separated client and server tasks using the when condition.
After templating the NFS /etc/exports on the server node I want to restart the NFS server using a handler, but the notify doesn't seem to work.
This is the task I'm using:
- name: Template NFS /etc/exports
  ansible.builtin.template:
    src: exports
    dest: /etc/exports
  notify:
    - Restart NFS server
  when: inventory_hostname == groups['nfs-servers'][0]
I tried to restart the nfs server using this handler:
- name: Restart NFS server
  ansible.builtin.systemd:
    name: nfs-kernel-server
    state: restarted
However, the handler never executes, even when exports is actually templated.
I think that's because the task always results in changed = false for the client node (node2), due to the when condition. This is the output of the task:
TASK [nfs : Template NFS /etc/exports]
skipping: [node2] => changed=false
  skip_reason: Conditional result was False
changed: [node1] => changed=true
  checksum: db864f9235e5afc0272896c4467e244d3b9c4147
  dest: /etc/exports
  gid: 0
  group: root
  md5sum: 447bf4f5557e3a020c40f4e729c90a62
  mode: '0644'
  owner: root
  size: 94
  src: /home/test/.ansible/tmp/ansible-tmp-1673949188.2970977-15410-64672518997352/source
  state: file
  uid: 0
Any suggestions on how to use when and notify together? Is there a way to make the notify ignore the skipping result?
Everything works fine if I use two roles and I remove the when condition.
This is the expected behaviour: handlers run operations on change, so if there is no change, no handler runs.
So you will have to trigger the handler from a separate task in order to have it react to a change on another host. With this approach, you register the result of the template task so that its change state can be inspected (via hostvars) from the other hosts.
This can be done by forcing a changed state on any task you like, for example a debug task.
From the task you provided, I also see some other possible improvements:
Do not use when to limit a task to a single host; prefer delegation and the use of run_once: true, which also offers the advantage that facts collected with run_once: true are propagated to all hosts, avoiding the need to use hostvars.
Also watch out for the warning: your current group name contains a dash, which Ansible flags as problematic:
[WARNING]: Invalid characters were found in group names but not replaced,
use -vvvv to see details
and in verbose mode:
Not replacing invalid character(s) "{'-'}" in group name (nfs-servers)
Use underscores instead of dashes.
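For example, the inventory could declare the group with an underscore instead (a minimal sketch, assuming node1 and node2 from the output above are the NFS hosts):
[nfs_servers]
node1
node2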
So, with all this:
- hosts: nfs_servers
  gather_facts: no
  tasks:
    - name: Template NFS /etc/exports
      ansible.builtin.template:
        src: exports
        dest: /etc/exports
      delegate_to: "{{ groups.nfs_servers.0 }}"
      run_once: true
      register: templated_exports

    - name: Propagation of the change to notify handler
      ansible.builtin.debug:
        msg: Notify all handlers on /etc/exports change
      notify:
        - Restart NFS server
      changed_when: true
      when: templated_exports is changed

  handlers:
    - name: Restart NFS server
      ansible.builtin.debug:
        msg: Restarting NFS server now
When run with a file change:
PLAY [nfs_servers] *******************************************************
TASK [Template NFS /etc/exports] *****************************************
changed: [node1]
TASK [Propagation of the change to notify handler] ***********************
changed: [node1] =>
msg: Notify all handlers on /etc/exports change
changed: [node2] =>
msg: Notify all handlers on /etc/exports change
RUNNING HANDLER [Restart NFS server] *************************************
ok: [node1] =>
msg: Restarting NFS server now
ok: [node2] =>
msg: Restarting NFS server now
When run with no change:
PLAY [nfs_servers] *******************************************************
TASK [Template NFS /etc/exports] *****************************************
ok: [node1]
TASK [Propagation of the change to notify handler] ***********************
skipping: [node1]
skipping: [node2]

Can I run a task on a specific host or group of hosts in Ansible?

Can I run a task on a specific host or group of hosts in Ansible?
---
- hosts: all
  become: yes
  tasks:
    - name: Disable tuned
      hosts: client1.local
      service:
        name: tuned
        enabled: false
        state: stopped
It does not work, though. Here is the error:
[root@centos7 ansible]# ansible-playbook playbook/demo.yaml
ERROR! conflicting action statements: hosts, service
The error appears to be in '/root/ansible/playbook/demo.yaml': line 24, column 7, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: Disable tuned
^ here
For example
- hosts: test_11,test_12,test_13
  vars:
    client1:
      showroom: [test_11, test_12]
      local: [test_13]
  tasks:
    - debug:
        var: inventory_hostname
    - debug:
        msg: "Local client: {{ inventory_hostname }}"
      when: inventory_hostname in client1.local
gives
TASK [debug] ****************************************************************
ok: [test_11] =>
inventory_hostname: test_11
ok: [test_12] =>
inventory_hostname: test_12
ok: [test_13] =>
inventory_hostname: test_13
TASK [debug] ****************************************************************
skipping: [test_11]
skipping: [test_12]
ok: [test_13] =>
msg: 'Local client: test_13'
Since I had a similar requirement to structure hosts into groups, I found the following approach worked for me.
First, I structured my inventory according to my environment, administrative groups, and tasks.
inventory
[infrastructure:children]
prod
qa
dev
[prod:children]
tuned_hosts
[tuned_hosts]
client1.local
Then I can use it in the
playbook
---
- hosts: all
  become: yes
  tasks:
    - name: Disable tuned
      service:
        name: tuned
        enabled: false
        state: stopped
      when: ('tuned_hosts' in group_names) # or ('prod' in group_names)
as well as something like
when: ("dev" not in group_names)
depending on what I am trying to achieve.
Documentation
How to build your inventory
Special Variables
Playbook Conditionals

Running tasks on dynamically collected EC2 instances

I'm trying to write a role with tasks that filter out several EC2 instances, add them to the inventory, and then stop a PHP service on them.
This is how far I've gotten, the add_host idea I'm copying from here: http://docs.catalystcloud.io/tutorials/ansible-create-x-servers-using-in-memory-inventory.html
My service task does not appear to be running on the target instances but instead on the host specified in the playbook which runs this role.
---
- name: Collect ec2 data
  connection: local
  ec2_remote_facts:
    region: "us-east-1"
    filters:
      "tag:Name": MY_TAG
  register: ec2_info

- name: "Add the ec2 hosts to a group"
  add_host:
    name: "{{ item.id }}"
    groups: foobar
    ansible_user: root
  with_items: "{{ ec2_info.instances }}"

- name: Stop the service
  hosts: foobar
  become: yes
  gather_facts: false
  service: name=yii-queue#1 state=stopped enabled=yes
UPDATE: when I try baptistemm's suggestion, I get this:
PLAY [MAIN_HOST_NAME] ***************************
TASK [ec2-manage : Collect ec2 data]
*******************************************
ok: [MAIN_HOST_NAME]
TASK [ec2-manage : Add the hosts to a new group]
*******************************************************
PLAY RECAP **********************************************************
MAIN_HOST_NAME : ok=1 changed=0 unreachable=0 failed=0
UPDATE #2 - yes, the ec2_remote_facts tag filter does return instances (using the real tag value, not the fake one I put in this post). Also, I have seen ec2_instance_facts, but I ran into some issues with it (boto3 is required, although I have a workaround for that); still, I'm trying to fix the current issue first.
If you want to run tasks on a group of targets, you need to define a new play after you have created the in-memory list of targets with add_host (as mentioned in your link). To do that, you need something like this:
… # omit the code before
- name: "Add the ec2 hosts to a group"
  add_host:
    name: "{{ item.id }}"
    groups: foobar
    ansible_user: root
  with_items: "{{ ec2_info.instances }}"

- hosts: foobar
  gather_facts: false
  become: yes
  tasks:
    - name: Stop the service
      service: name=yii-queue#1 state=stopped enabled=yes

ansible - consul kv: listing keys recursively and comparing key values

I am getting an error while trying to retrieve key values from the Consul KV store.
Our key values are stored under the config/app-name/ folder, and there are many keys. I want to retrieve all of the key values from Consul using Ansible.
But I am getting the following error:
PLAY [Adding host to inventory] **********************************************************************************************************************************************************
TASK [Adding new host to inventory] ******************************************************************************************************************************************************
changed: [localhost]
PLAY [Testing consul kv] *****************************************************************************************************************************************************************
TASK [show the lookups] ******************************************************************************************************************************************************************
fatal: [server1]: FAILED! => {"failed": true, "msg": "{{lookup('consul_kv','config/app-name/')}}: An unhandled exception occurred while running the lookup plugin 'consul_kv'. Error was a <class 'ansible.errors.AnsibleError'>, original message: Error locating 'config/app-name/' in kv store. Error was 500 No known Consul servers"}
PLAY RECAP *******************************************************************************************************************************************************************************
server1 : ok=0 changed=0 unreachable=0 failed=1
localhost : ok=1 changed=1 unreachable=0 failed=0
Here is the code I am trying:
---
- name: Adding host to inventory
  hosts: localhost
  tasks:
    - name: Adding new host to inventory
      add_host:
        name: "{{ target }}"

- name: Testing consul kv
  hosts: "{{ target }}"
  vars:
    kv_info: "{{ lookup('consul_kv', 'config/app-name/') }}"
  become: yes
  tasks:
    - name: show the lookups
      debug: msg="{{ kv_info }}"
However, removing and adding a folder both work well; only retrieving the key values from the Consul cluster throws an error. Please suggest a better way here.
- name: remove folder from the store
  consul_kv:
    key: 'config/app-name/'
    value: 'removing'
    recurse: true
    state: absent

- name: add folder to the store
  consul_kv:
    key: 'config/app-name/'
    value: 'adding'
I tried this, but I still get the same error.
---
- name: Adding host to inventory
  hosts: localhost
  environment:
    ANSIBLE_CONSUL_URL: "http://consul-1.abcaa.com"
  tasks:
    - name: Adding new host to inventory
      add_host:
        name: "{{ target }}"
    - name: show the lookups
      debug: kv_info= "{{lookup('consul_kv','config/app-name/')}}"
All lookup plugins in Ansible are always evaluated on localhost, see docs:
Note:
Lookups occur on the local computer, not on the remote computer.
I guess you expect kv_info to be populated by fetching from Consul on the {{ target }} server.
But this lookup is actually executed on your Ansible control host (localhost), and if ANSIBLE_CONSUL_URL is not set in the control host's environment (the play-level environment: keyword applies to tasks running on the targets, not to lookups), you get the No known Consul servers error.
When you use the consul_kv module (to create/delete folders), it is executed on the {{ target }} host, in contrast to the consul_kv lookup plugin.
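As a hedged sketch of a workaround: since the lookup is evaluated on the control node, point it at the Consul server explicitly, either by exporting ANSIBLE_CONSUL_URL in the shell that launches ansible-playbook, or by passing connection parameters to the lookup. The host and port parameters below follow the consul_kv lookup plugin's documented options (verify them against your Ansible version); the URL is taken from the question.
- name: show the lookups (evaluated on the control node)
  debug:
    msg: "{{ lookup('consul_kv', 'config/app-name/', host='consul-1.abcaa.com', port=8500) }}"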

Ansible won't see a handler when using group_by

I used to have a simple playbook (something like this) which I ran across all of my machines (RH- and Debian-based) to update them, and for each machine that was updated, run a script (via a notified handler).
Recently I tried to test a new module called group_by. Instead of using a when condition to run yum update when ansible_distribution == "CentOS", I first gather the facts and group the hosts using their ansible_pkg_mgr as the key, and then run yum update on all the hosts whose key is PackageManager_yum. See the playbook example:
---
- hosts: all
  gather_facts: false
  remote_user: root
  tasks:
    - name: Gathering facts
      setup:
    - name: Create a group of all hosts by operating system
      group_by: key=PackageManager_{{ansible_pkg_mgr}}

- hosts: PackageManager_apt
  gather_facts: false
  tasks:
    - name: Update DEB Family
      apt:
        upgrade=dist
        autoremove=yes
        install_recommends=no
        update_cache=yes
      when: ansible_os_family == "Debian"
      register: update_status
      notify: updateX
      tags:
        - deb
        - apt_update
        - update

- hosts: PackageManager_yum
  gather_facts: false
  tasks:
    - name: Update RPM Family
      yum: name=* state=latest
      when: ansible_os_family == "RedHat"
      register: update_status
      notify: updateX
      tags:
        - rpm
        - yum
        - yum_update
  handlers:
    - name: updateX
      command: /usr/local/bin/update
And this is the error message I get:
PLAY [all] ********************************************************************
TASK [Gathering facts] *********************************************************
Wednesday 21 December 2016 11:26:17 +0200 (0:00:00.031) 0:00:00.031 ****
....
TASK [Create a group of all hosts by operating system] *************************
Wednesday 21 December 2016 11:26:26 +0200 (0:00:01.443) 0:00:09.242 ****
TASK [Update DEB Family] *******************************************************
Wednesday 21 December 2016 11:26:26 +0200 (0:00:00.211) 0:00:09.454 ****
ERROR! The requested handler 'updateX' was not found in either the main handlers list nor in the listening handlers list
thanks in advance.
You defined handlers only in one of your plays. It's quite clear if you look at the indentation.
The play which you execute for PackageManager_apt does not have the handlers at all (it has no access to the updateX handler defined in a separate play), so Ansible complains.
If you don't want to duplicate the code, you can save the handler to a separate file (let's name it handlers.yml) and include it in both plays with:
handlers:
  - name: Include common handlers
    include: handlers.yml
Note: there's a remark in the Handlers: Running Operations On Change section of the docs regarding including handlers:
You cannot notify a handler that is defined inside of an include. As of Ansible 2.1, this does work; however, the include must be static.
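For completeness, handlers.yml in this case would simply contain the handler definition from the playbook above:
- name: updateX
  command: /usr/local/bin/update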
Finally, you should consider converting your playbook to a role.
A common method to achieve what you want is to include the tasks (from tasks/main.yml) using task file names that contain the target distribution or OS family:
- include: "{{ architecture_specific_tasks_file }}"
  with_first_found:
    - "tasks-for-{{ ansible_distribution }}.yml"
    - "tasks-for-{{ ansible_os_family }}.yml"
  loop_control:
    loop_var: architecture_specific_tasks_file
Handlers are then defined in handlers/main.yml.
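As an illustration of that layout (the role name update is made up here, and the per-platform file names follow the with_first_found pattern above):
roles/
  update/
    tasks/
      main.yml              # contains the with_first_found include shown above
      tasks-for-Debian.yml  # apt-based update tasks
      tasks-for-RedHat.yml  # yum-based update tasks
    handlers/
      main.yml              # contains the updateX handler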
