I'm trying to set up an NFS share between two nodes using Ansible. I'm using a single nfs role that runs on both nodes, and I separate the client and server tasks with when conditions.
After templating /etc/exports on the server node, I want to restart the NFS server using a handler, but the notify doesn't seem to work.
This is the task I'm using:
- name: Template NFS /etc/exports
  ansible.builtin.template:
    src: exports
    dest: /etc/exports
  notify:
    - Restart NFS server
  when: inventory_hostname == groups['nfs-servers'][0]
I tried to restart the nfs server using this handler:
- name: Restart NFS server
  ansible.builtin.systemd:
    name: nfs-kernel-server
    state: restarted
However, the handler never executes, even when exports is actually templated.
I think that's because the task always results in changed = false for the client node (node2), due to the when condition. This is the output of the task:
TASK [nfs : Template NFS /etc/exports]
skipping: [node2] => changed=false
  skip_reason: Conditional result was False
changed: [node1] => changed=true
  checksum: db864f9235e5afc0272896c4467e244d3b9c4147
  dest: /etc/exports
  gid: 0
  group: root
  md5sum: 447bf4f5557e3a020c40f4e729c90a62
  mode: '0644'
  owner: root
  size: 94
  src: /home/test/.ansible/tmp/ansible-tmp-1673949188.2970977-15410-64672518997352/source
  state: file
  uid: 0
Any suggestions on how to use when and notify together? Is there a way to make the notify ignore the skipping result?
Everything works fine if I use two roles and I remove the when condition.
This is the expected behaviour: handlers run on change, and if there is no change on a given host, no handler runs on that host.
So, you will have to trigger the handler from a separate task in order to have it react to a change made on another host. For that, register the result of the template task so that its changed state can be inspected from the other hosts (normally via hostvars).
The notification itself can be forced with a changed state on any task you like, for example a debug task.
From the task you provided, I also see some other possible improvements:
Do not use when to limit a task to a single host; prefer delegation combined with run_once: true. A result registered by a run_once task is propagated to all hosts, which also avoids the need for hostvars.
Watch out for this warning: your current group name contains a dash, which Ansible flags as problematic:
[WARNING]: Invalid characters were found in group names but not replaced,
use -vvvv to see details
and in verbose mode:
Not replacing invalid character(s) "{'-'}" in group name (nfs-servers)
Use underscores instead of dashes.
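For example, a minimal inventory sketch with the renamed group (node1 and node2 are the hosts from your output):

[nfs_servers]
node1
node2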
So, with all this:
- hosts: nfs_servers
  gather_facts: no

  tasks:
    - name: Template NFS /etc/exports
      ansible.builtin.template:
        src: exports
        dest: /etc/exports
      delegate_to: "{{ groups.nfs_servers.0 }}"
      run_once: true
      register: templated_exports

    - name: Propagation of the change to notify handler
      ansible.builtin.debug:
        msg: Notify all handlers on /etc/exports change
      notify:
        - Restart NFS server
      changed_when: true
      when: templated_exports is changed

  handlers:
    - name: Restart NFS server
      ansible.builtin.debug:
        msg: Restarting NFS server now
When run with a file change:
PLAY [nfs_servers] *******************************************************

TASK [Template NFS /etc/exports] *****************************************
changed: [node1]

TASK [Propagation of the change to notify handler] ***********************
changed: [node1] =>
  msg: Notify all handlers on /etc/exports change
changed: [node2] =>
  msg: Notify all handlers on /etc/exports change

RUNNING HANDLER [Restart NFS server] *************************************
ok: [node1] =>
  msg: Restarting NFS server now
ok: [node2] =>
  msg: Restarting NFS server now
When run with no change:
PLAY [nfs_servers] *******************************************************
TASK [Template NFS /etc/exports] *****************************************
ok: [node1]
TASK [Propagation of the change to notify handler] ***********************
skipping: [node1]
skipping: [node2]
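In the real role, the debug handler above would be replaced by the original systemd restart. A minimal sketch, assuming the restart should only happen on the NFS server host (the when condition is my addition):

  handlers:
    - name: Restart NFS server
      ansible.builtin.systemd:
        name: nfs-kernel-server
        state: restarted
      # restart only on the server host; the other notified hosts simply skip it
      when: inventory_hostname == groups.nfs_servers.0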
Related
I am creating an Ansible role to update hosts configured in a cluster. There are 3 'roles' defined in this PSA architecture.
host-1 # primary
host-2 # secondary
host-3 # secondary
host-4 # secondary
host-5 # arbiter
I want to configure Ansible to execute a block of tasks, one host at a time, for each host that is a secondary, but within that same block execute a single task on the primary.
E.g.:
- name: remove host from cluster
  shell: command_to_remove

- name: update
  yum:
    state: latest
    name: application

- name: add host to cluster
  shell: command_to_add

- name: verify cluster
  shell: verify cluster
  when: primary
The use of serial would be perfect here, but that isn't supported for a block.
The result of Ansible should be:
TASK [remove host from cluster]
changed: [host-2]
TASK [update]
changed: [host-2]
TASK [add host to cluster]
changed: [host-2]
TASK [verify cluster]
changed: [host-1]
# rinse and repeat for host-3
TASK [remove host from cluster]
changed: [host-3]
TASK [update]
changed: [host-3]
TASK [add host to cluster]
changed: [host-3]
TASK [verify cluster]
changed: [host-1]
# and so on, until host-4
While it might not be "correct" in some cases, you could set a hostvar declaring a sort of "profile" for each host, then use include_tasks to pull in a profile-specific task file chosen by that hostvar, and put your tasks inside that file.
I do something similar with a playbook I use to configure a machine based on a profile declared at ansible-playbook invocation.
This playbook is invoked on the laptop being configured, so it uses localhost, but you could use it remotely with some tweaks.
hosts.yaml
all:
  hosts:
    localhost:
      ansible_connection: local
  vars:
    profile: all
playbook.yaml
- hosts: all
  tasks:
    ...
    - name: Run install tasks
      include_role:
        name: appinstalls
    ...
roles/appinstalls/tasks/main.yaml
- name: Include tasks for profile "all"
  include_tasks: profile_all.yml
  when: 'profile == "all"'

- name: Include tasks for profile "office"
  include_tasks: profile_office.yml
  when: 'profile == "office"'
roles/appinstalls/tasks/profile_all.yml
- name: Run tasks for profile "all"
  include_tasks: "{{ item }}.yml"
  with_items:
    - app1
    - app2
    - app3
    - app4
roles/appinstalls/tasks/profile_office.yml
- name: Run tasks for profile "office"
  include_tasks: "{{ item }}.yml"
  with_items:
    - app1
    - app2
    - app3
    - app4
    - office_app1
    - office_app2
    - office_app3
    - office_app4
When invoked using:
ansible-playbook playbook.yml -i hosts.yaml -K
It will run using the default profile "all". If invoked using
ansible-playbook playbook.yml -i hosts.yaml -K -e profile=office
It will run using the "office" profile instead
What I like about this method is that all the includes happen in one go, rather than as you go along:
TASK [appinstalls : Run tasks for profile "office"] ******************************************************************************************************************************************
included: /home/blenderfox/machine-setup/files/roles/appinstalls/tasks/app1.yml for localhost => (item=app1)
included: /home/blenderfox/machine-setup/files/roles/appinstalls/tasks/app2.yml for localhost => (item=app2)
included: /home/blenderfox/machine-setup/files/roles/appinstalls/tasks/app3.yml for localhost => (item=app3)
included: /home/blenderfox/machine-setup/files/roles/appinstalls/tasks/app4.yml for localhost => (item=app4)
included: /home/blenderfox/machine-setup/files/roles/appinstalls/tasks/office_app1.yml for localhost => (item=office_app1)
included: /home/blenderfox/machine-setup/files/roles/appinstalls/tasks/office_app2.yml for localhost => (item=office_app2)
included: /home/blenderfox/machine-setup/files/roles/appinstalls/tasks/office_app3.yml for localhost => (item=office_app3)
included: /home/blenderfox/machine-setup/files/roles/appinstalls/tasks/office_app4.yml for localhost => (item=office_app4)
So in your case, you could put your tasks in the equivalent of profile_all.yml or profile_office.yml.
This method involves no fancy stuff, which makes it easier to tack on future changes.
It also allows each profile to evolve independently of the others: just change the list in the corresponding "Run tasks for profile" block.
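For the clustered setup in the question, a rough sketch of the same idea, assuming a hypothetical cluster_role hostvar (e.g. host-2 cluster_role=secondary in the inventory) and hypothetical role and file names:

# roles/cluster_update/tasks/main.yml (hypothetical names)
- name: Include tasks for secondary hosts
  include_tasks: profile_secondary.yml
  when: cluster_role == "secondary"

- name: Include tasks for the primary host
  include_tasks: profile_primary.yml
  when: cluster_role == "primary"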
I want to execute a playbook with tags because I only want to run part of the script, but the variable stored by register is empty.
---
- hosts: node
  vars:
    service_name: apache2
  become: true
  tasks:
    - name: Start if Service Apache is stopped
      shell: service apache2 status | grep Active | awk -v N=2 '{print $N}'
      args:
        warn: false
      register: res
      tags:
        - toto

    - name: Start Apache because service was stopped
      service:
        name: "{{service_name}}"
        state: started
      when: res.stdout == 'inactive'
      tags:
        - toto

    - name: Check for apache status
      service_facts:

    - debug:
        var: ansible_facts.services.apache2.state
      tags:
        - toto2
$ ansible-playbook status.yaml -i hosts --tags="toto,toto2"
PLAY [nodeOne] ***************************************************************************
TASK [Start if Service Apache is stopped] ************************************************
changed: [nodeOne]
TASK [Start Apache because service was stopped] ******************************************
skipping: [nodeOne]
TASK [debug] *****************************************************************************
ok: [nodeOne] => {
    "ansible_facts.services.apache2.state": "VARIABLE IS NOT DEFINED!"
}
At the end of the script, I don't get the output of apache status.
Q: "The variable stored in the register is empty."
A: A tag is missing on the service_facts task. As a result, that task is skipped when you run with --tags, and ansible_facts.services is never defined. Fix it, for example:
- name: Check for apache status
  service_facts:
  tags: toto2
Notes
1) With a single tag, it's not necessary to declare a list.
2) The concept of idempotency makes the condition redundant.
An operation is idempotent if the result of performing it once is exactly the same as the result of performing it repeatedly without any intervening actions.
The module service is idempotent. For example the task below
- name: Start Apache
  service:
    name: "{{ service_name }}"
    state: started
  tags:
    - toto
will make changes only if the service has not been started yet. The result of the task is a started service. Once the service is running, the task will report [ok] and not touch it. In this respect, it does not matter what the previous state of the service was, i.e. there is no reason to run the task conditionally.
3) The module service_facts works as expected. For example
- service_facts:

- debug:
    var: ansible_facts.services.apache2.state
gives
"ansible_facts.services.apache2.state": "running"
I have an Ansible play that changes the configuration files of several services and restarts the services whose configuration changed. I do this by notifying handlers.
For some reason, a program that had no changes also gets restarted by its handler.
Running the play when only Program 1 has changes:
TASK [programs : Configure programs] **********************************************
changed: [127.0.0.1] => (item=program1)
ok: [127.0.0.1] => (item=program2)
ok: [127.0.0.1] => (item=program3)
RUNNING HANDLER [programs : Restart program1] ****************************************
changed: [127.0.0.1]
RUNNING HANDLER [programs : Restart program2] **************************************
changed: [127.0.0.1]
Handler file of role:
- name: Restart program1
  service:
    name: program1
    state: restarted

- name: Restart program2
  service:
    name: program2
    state: restarted

- name: Restart program3
  service:
    name: program3
    state: restarted
Task for changing configuration:
- name: Configure programs
  template:
    src: templates/{{ item }}.conf.j2
    dest: '{{ install_path }}/{{ item }}/{{ item }}.conf'
  notify: 'Restart {{ item }}'
  with_items: '{{ list_of_programs }}'
Why does the Restart program2 handler get notified without any change?
I am using Ansible 2.0.0.2.
This is how it works in current Ansible versions: when a task notifies handlers from a loop, all of the notified handlers will be executed if any of the items changed state.
See this issue and upvote it if you think this is undesired behaviour.
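A common workaround is to skip handlers for this, register the loop result, and restart only the programs whose configuration actually changed. A minimal sketch based on the task from the question (program_configs is a name I introduce here):

- name: Configure programs
  template:
    src: templates/{{ item }}.conf.j2
    dest: '{{ install_path }}/{{ item }}/{{ item }}.conf'
  register: program_configs
  with_items: '{{ list_of_programs }}'

# each entry in .results carries the original loop item and its changed state
- name: Restart only the programs whose configuration changed
  service:
    name: '{{ item.item }}'
    state: restarted
  with_items: '{{ program_configs.results }}'
  when: item.changed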
I am trying to use Ansible to check if SELinux is enabled (set to Enforcing), and if not, enable it. The play to enable SELinux must be invoked only if SELinux is disabled.
The playbook looks like so:
- hosts: all
  # root should execute this.
  remote_user: root
  become: yes
  tasks:
    # Check if SELinux is enabled.
    - name: check if selinux is enabled
      tags: selinuxCheck
      register: selinuxCheckOut
      command: getenforce

    - debug: var=selinuxCheckOut.stdout_lines

    - name: enable selinux if not enabled already
      tags: enableSELinux
      selinux: policy=targeted state=enforcing
      when: selinuxCheckOut.stdout_lines == "Enforcing"

    - debug: var=enableSELinuxOut.stdout_lines
When I run this, the task enableSELinux fails with the reason, "Conditional check failed". The output is:
TASK [debug] *******************************************************************
task path: /root/ansible/playbooks/selinuxConfig.yml:24
ok: [localhost] => {
    "selinuxCheckOut.stdout_lines": [
        "Enforcing"
    ]
}
TASK [enable selinux if not enabled already] ***********************************
task path: /root/ansible/playbooks/selinuxConfig.yml:26
skipping: [localhost] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}
My questions:
1. Is this the correct way to get a play to execute depending on the output from another play?
2. How do I get this to work?
Your playbook is correct. But stdout_lines is a list. You have to compare the first element in that list. Try this:
when: selinuxCheckOut.stdout_lines[0] == "Enforcing"
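Since getenforce prints a single line, comparing the plain stdout should work as well:

when: selinuxCheckOut.stdout == "Enforcing"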
I wrote a playbook to modify the IP address of several remote systems. The playbook changes only a few systems at a time, so I wanted to use delegate_to to update the DNS record on the nameservers as each system is modified, instead of adding a separate play targeting the nameservers that would change all the host IPs at once.
However, it seems the handler is being run on the primary playbook target, not my delegate_to target. Does anyone have recommendations for working around this?
Here's my playbook:
---
host: hosts-to-modify
serial: 1
tasks:
  - Modify IP for host-to-modify
    //snip//
  - name: Modify DNS entry
    delegate_to: dns-servers
    become: yes
    replace:
    args:
      backup: yes
      regexp: '^{{ inventory_hostname }}\s+IN\s+A\s+[\d\.]+$'
      replace: "{{ inventory_hostname }} IN A {{ new_ip }}"
      dest: /etc/bind/db.my.domain
    notify:
      - reload dns service
handlers:
  - name: reload dns service
    become: yes
    service:
    args:
      name: bind9
      state: reloaded
With an inventory file like the following:
[dns-servers]
ns01
ns02
[hosts-to-modify]
host1 new_ip=10.1.1.10
host2 new_ip=10.1.1.11
host3 new_ip=10.1.1.12
host4 new_ip=10.1.1.13
Output snippet, including error message:
TASK [Modify DNS entry] ********************************************************
Friday 02 September 2016 14:46:09 -0400 (0:00:00.282) 0:00:35.876 ******
changed: [host1 -> ns01]
changed: [host1 -> ns02]
RUNNING HANDLER [reload dns service] *******************************************
Friday 02 September 2016 14:47:00 -0400 (0:00:38.925) 0:01:27.385 ******
fatal: [host1]: FAILED! => {"changed": false, "failed": true, "msg": "no service or tool found for: bind9"}
First of all, your example playbook is invalid in several ways: the play syntax is flawed, and delegate_to can't target a group of hosts.
If you want to delegate to multiple servers, you should iterate over them.
And answering your main question: yes, you can use delegate_to with handlers:
handlers:
  - name: reload dns service
    become: yes
    service:
    args:
      name: bind9
      state: reloaded
    delegate_to: "{{ item }}"
    with_items: "{{ groups['dns-servers'] }}"