Rolling execution of a block of handlers on targets in Ansible

I have an Ansible playbook that calls a role containing many tasks and handlers. The handlers are triggered if the desired configuration of my service changes.
What I am looking for is a way to run the handlers on the hosts sequentially. For example, if I have two targets, target1 and target2, and two handlers, handler1 and handler2, the handler execution I want would look like this:
RUNNING HANDLER [myrole : handler 1] ********************************************
changed: [target1]
RUNNING HANDLER [myrole : handler2] *************************************************
changed: [target1]
RUNNING HANDLER [myrole : handler 1] ********************************************
changed: [target2]
RUNNING HANDLER [myrole : handler2] *************************************************
changed: [target2]
But, as is known, the normal execution of handlers on targets is as below:
RUNNING HANDLER [myrole : handler 1] ********************************************
changed: [target1]
changed: [target2]
RUNNING HANDLER [myrole : handler 2] ********************************************
changed: [target1]
changed: [target2]
That is not what I want.
I know that by using the serial option at the playbook level we can restrict parallelism, but that comes at a huge cost in time, because all of my tasks would then be executed serially as well.
I have tried both the throttle option and a block directive on the handlers, but neither was useful.

Run flush_handlers on each host separately: dynamically create and include a file of per-host flush_handlers tasks. For example, the playbook
shell> cat pb.yml
- hosts: target1,target2
  tasks:
    - debug:
        msg: Notify handler1
      changed_when: true
      notify: handler1
    - debug:
        msg: Notify handler2
      changed_when: true
      notify: handler2
    - block:
        - copy:
            dest: "{{ playbook_dir }}/flush_handlers_serial.yml"
            content: |
              {% for host in ansible_play_hosts_all %}
              - meta: flush_handlers
                when: inventory_hostname == '{{ host }}'
              {% endfor %}
          delegate_to: localhost
        - include_tasks: flush_handlers_serial.yml
      run_once: true
      when: flush_handlers_serial|d(false)|bool
  handlers:
    - name: handler1
      debug:
        msg: Run handler1
    - name: handler2
      debug:
        msg: Run handler2
by default runs the handlers in parallel (see the linear strategy)
shell> ansible-playbook pb.yml
PLAY [target1,target2] ***************************************************************************************
TASK [debug] *************************************************************************************************
changed: [target1] =>
msg: Notify handler1
changed: [target2] =>
msg: Notify handler1
TASK [debug] *************************************************************************************************
changed: [target2] =>
msg: Notify handler2
changed: [target1] =>
msg: Notify handler2
TASK [copy] **************************************************************************************************
skipping: [target1]
TASK [include_tasks] *****************************************************************************************
skipping: [target1]
RUNNING HANDLER [handler1] ***********************************************************************************
ok: [target1] =>
msg: Run handler1
ok: [target2] =>
msg: Run handler1
RUNNING HANDLER [handler2] ***********************************************************************************
ok: [target1] =>
msg: Run handler2
ok: [target2] =>
msg: Run handler2
PLAY RECAP ***************************************************************************************************
target1: ok=4 changed=2 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
target2: ok=4 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
When you enable flush_handlers_serial=true, the file below will be created and included
shell> cat flush_handlers_serial.yml
- meta: flush_handlers
  when: inventory_hostname == 'target1'
- meta: flush_handlers
  when: inventory_hostname == 'target2'
This will run the handlers serially, similarly to the host_pinned strategy
shell> ansible-playbook pb.yml -e flush_handlers_serial=true
PLAY [target1,target2] ***************************************************************************************
TASK [debug] *************************************************************************************************
changed: [target1] =>
msg: Notify handler1
changed: [target2] =>
msg: Notify handler1
TASK [debug] *************************************************************************************************
changed: [target1] =>
msg: Notify handler2
changed: [target2] =>
msg: Notify handler2
TASK [copy] **************************************************************************************************
changed: [target1 -> localhost]
TASK [include_tasks] *****************************************************************************************
included: /export/scratch/tmp7/test-172/flush_handlers_serial.yml for target1
TASK [meta] **************************************************************************************************
RUNNING HANDLER [handler1] ***********************************************************************************
ok: [target1] =>
msg: Run handler1
RUNNING HANDLER [handler2] ***********************************************************************************
ok: [target1] =>
msg: Run handler2
RUNNING HANDLER [handler1] ***********************************************************************************
ok: [target2] =>
msg: Run handler1
TASK [meta] **************************************************************************************************
skipping: [target1]
RUNNING HANDLER [handler2] ***********************************************************************************
ok: [target2] =>
msg: Run handler2
PLAY RECAP ***************************************************************************************************
target1: ok=6 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
target2: ok=4 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
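For comparison, the host_pinned strategy mentioned above is set at the play level. Below is a minimal sketch, not part of the solution above; whether handlers fully interleave per host still depends on forks and timing, so treat it as an approximation of the ordering rather than a guarantee.
- hosts: target1,target2
  strategy: host_pinned
  tasks:
    - debug:
        msg: Notify handler1
      changed_when: true
      notify: handler1
  handlers:
    - name: handler1
      debug:
        msg: Run handler1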

I've found a workaround for my problem.
The file handlers/main.yml would be as below:
---
- name: Trigger some handlers sequentially.
  include_tasks: trigger_handlers_sequentialy.yml
  run_once: true
  loop: "{{ ansible_play_hosts_all }}"
  listen: restart_my_service
So the above handler connects to just one target (because of the run_once option) and loops over the play hosts exactly once.
Then all the handlers that should run on the targets serially are placed in the tasks/trigger_handlers_sequentialy.yml file:
---
- name: handler1
  debug:
    msg: run handler1
  delegate_to: "{{ item }}"
- name: handler2
  debug:
    msg: run handler2
  delegate_to: "{{ item }}"
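Any task in the role can then trigger this chain via the listen name restart_my_service. Below is a minimal, hypothetical sketch (the template task and file paths are made up for illustration; only the notify name comes from the handler above):
- name: Deploy service configuration
  template:
    src: my_service.conf.j2
    dest: /etc/my_service/my_service.conf
  notify: restart_my_service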
So, what happens during the playbook execution is something like this:
RUNNING HANDLER [myrole : Trigger some handlers sequentially.] ***********************************************************************************************************************************************
included: ~/ansible/roles/myrole/tasks/trigger_handlers_sequentialy.yml for target1 => (item=target1)
RUNNING HANDLER [myrole : run handler1] ***********************************************************************************************************************************************
ok: [target1]
RUNNING HANDLER [myrole : run handler2] **********************************************************************************************************************************************
ok: [target1]
included: ~/ansible/roles/myrole/tasks/trigger_handlers_sequentialy.yml for target1 => (item=target2)
RUNNING HANDLER [myrole : run handler1] ***********************************************************************************************************************************************
ok: [target1 -> target2(192.168.4.10)]
RUNNING HANDLER [myrole : run handler2] **********************************************************************************************************************************************
ok: [target1 -> target2(192.168.4.10)]

Related

Ansible. Updating the hostvars variable

If, during execution of the playbook, we change the host's file in host_vars (i.e. add a new variable), how can we then get this variable in hostvars during the current execution of the playbook? When you run it again, it appears in hostvars.
UPDATE 01:
Here's an example; it doesn't work.
The task Debug 3 should display test_1 instead of VARIABLE IS NOT DEFINED!
- name: Test
  hosts: mon
  tasks:
    - name: Debug 1
      debug:
        var: hostvars.mon.test_1
    - name: Add vars for host_vars
      delegate_to: 127.0.0.1
      blockinfile:
        path: "{{ inventory_dir }}/host_vars/{{ inventory_hostname }}.yml"
        marker: "# {mark}: {{ item.key }}"
        block: |
          {{ item.key }}: {{ item.value }}
      with_dict:
        - {test_1: "test_1"}
    - name: Debug 2
      debug:
        var: hostvars.mon.test_1
    - name: Clear facts
      meta: clear_facts
    - name: Refresh inventory
      meta: refresh_inventory
    - name: Setup
      setup:
    - name: Debug 3
      debug:
        var: hostvars.mon.test_1
Result:
PLAY [Test] ********************************************************************
TASK [Gathering Facts] *********************************************************
ok: [mon]
TASK [Debug 1] *****************************************************************
ok: [mon] => {
"hostvars.mon.test_1": "VARIABLE IS NOT DEFINED!"
}
TASK [Add vars for host_vars] **************************************************
changed: [mon -> 127.0.0.1] => (item={'key': 'test_1', 'value': 'test_1'})
TASK [Debug 2] *****************************************************************
ok: [mon] => {
"hostvars.mon.test_1": "VARIABLE IS NOT DEFINED!"
}
TASK [Setup] *******************************************************************
ok: [mon]
TASK [Debug 3] *****************************************************************
ok: [mon] => {
"hostvars.mon.test_1": "VARIABLE IS NOT DEFINED!"
}
PLAY RECAP *********************************************************************
mon : ok=6 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
On restart:
PLAY [Test] ********************************************************************
TASK [Gathering Facts] *********************************************************
ok: [mon]
TASK [Debug 1] *****************************************************************
ok: [mon] => {
"hostvars.mon.test_1": "test_1"
}
TASK [Add vars for host_vars] **************************************************
ok: [mon -> 127.0.0.1] => (item={'key': 'test_1', 'value': 'test_1'})
TASK [Debug 2] *****************************************************************
ok: [mon] => {
"hostvars.mon.test_1": "test_1"
}
TASK [Setup] *******************************************************************
ok: [mon]
TASK [Debug 3] *****************************************************************
ok: [mon] => {
"hostvars.mon.test_1": "test_1"
}
PLAY RECAP *********************************************************************
mon : ok=6 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Maybe there is a way to change hostvars manually in the process?
You can ask Ansible to reread the inventory (including host_vars). Generally I'd say that changing the inventory on the fly is a code smell, but there are a few valid cases.
- name: Refreshing inventory, SO copypaste
  meta: refresh_inventory
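If you only need the new value to be visible in hostvars for the remainder of the current run (the blockinfile task above already persists it to host_vars on disk), set_fact is another option. A minimal sketch, reusing the test_1 variable from the example; the task names are hypothetical:
- name: Make test_1 visible in hostvars for the rest of this run
  set_fact:
    test_1: "test_1"
- name: Debug 4
  debug:
    var: hostvars.mon.test_1   # reports "test_1" within the same run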

Using ansible variable inside gathered fact list

I'm stuck trying to get data from a gathered fact, using calculated data as part of the query.
I am using Ansible 2.9, and here is my playbook:
---
- hosts: ios
  connection: network_cli
  gather_facts: true
  tasks:
    - name: CEF OUTPUT
      ios_command:
        commands: sh ip cef 0.0.0.0 0.0.0.0 | i nexthop
      register: cef
    - set_fact:
        reg_result: "{{ cef.stdout |string| regex_search('Tunnel[0-9]+')}}"
    - name: IT WORKS!
      debug:
        msg: "{{ reg_result }}"
    - name: MANUAL LIST
      debug:
        var: ansible_facts.net_interfaces.Tunnel3.description
    - name: AUTO LIST
      debug:
        var: ansible_facts.net_interfaces.[reg_result].description
and here is the output:
PLAY [ios] **********************************************
TASK [Gathering Facts] **********************************
ok: [10.77.3.1]
TASK [CEF OUTPUT] ***************************************
ok: [10.77.3.1]
TASK [set_fact] *****************************************
ok: [10.77.3.1]
TASK [IT WORKS!] ****************************************
ok: [10.77.3.1] => {
"msg": "Tunnel3"
}
TASK [MANUAL LIST] **************************************
ok: [10.77.3.1] => {
"ansible_facts.net_interfaces.Tunnel3.description": "DMVPN via MTS"
}
TASK [AUTO LIST] ****************************************
fatal: [10.77.3.1]: FAILED! => {"msg": "template error while templating string: expected name or number. String: {{ansible_facts.net_interfaces.[reg_result].description}}"}
to retry, use: --limit #/home/user/ansible/retry/ios_find_gw_int.retry
PLAY RECAP **********************************************
10.77.3.1 : ok=5 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
You see, now I know that my default gateway points to "Tunnel3", and it is possible to get some data by placing this "Tunnel3" in {{ ansible_facts.net_interfaces.Tunnel3.description }}, but how do I get this automatically? I feel such a nested variable lookup would be a very handy tool.
Remove the dot when you use indirect addressing:
- name: AUTO LIST
  debug:
    var: ansible_facts.net_interfaces[reg_result].description
See Referencing key:value dictionary variables.
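As a general pattern, bracket notation lets you index a dictionary with the value of another variable. A minimal, self-contained sketch (my_dict and key_name are hypothetical names used only for illustration):
- debug:
    var: my_dict[key_name]
  vars:
    key_name: foo
    my_dict:
      foo: bar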

How to trigger handlers (notify) when using import_tasks?

When using notify on a task that uses import_tasks, the handlers are not fired, and I'm wondering why. tags work as expected.
How can I trigger handlers on imported tasks?
Example:
playbook test.yml:
- hosts: all
  gather_facts: no
  handlers:
    - name: restart service
      debug:
        msg: restart service
  tasks:
    - import_tasks: test_imports.yml
      notify: restart service
      tags: test
test_imports.yml
- name: test
  debug:
    msg: test
  changed_when: yes
- name: test2
  debug:
    msg: test2
  changed_when: yes
Expected:
> ansible-playbook -i localhost, test.yml
PLAY [all] *************************************************************************************************************
TASK [test] ************************************************************************************************************
changed: [localhost] => {
"msg": "test"
}
TASK [test2] ***********************************************************************************************************
changed: [localhost] => {
"msg": "test2"
}
RUNNING HANDLER [restart service] **************************************************************************************
ok: [localhost] => {
"msg": "restart service"
}
...
Actual:
> ansible-playbook -i localhost, test.yml
PLAY [all] *************************************************************************************************************
TASK [test] ************************************************************************************************************
changed: [localhost] => {
"msg": "test"
}
TASK [test2] ***********************************************************************************************************
changed: [localhost] => {
"msg": "test2"
}
...
This question has been partially answered in Ansible's bug tracker here:
import_tasks is processed at parse time and is effectively replaced by the tasks it imports. So when using import_tasks in handlers you would need to notify the task names within.
Source: mkrizek's comment: https://github.com/ansible/ansible/issues/59706#issuecomment-515879321
This has been also further explained here:
imports do not work like tasks, and they do not have an association between the import_tasks task and the tasks within the imported file really. import_tasks is a pre-processing trigger, that is handled during playbook parsing time. When the parser encounters it, the tasks within are retrieved, and inserted where the import_tasks task was located. There is a proposal to "taskify" includes at ansible/proposals#136 but I don't see that being implemented any time soon.
Source: sivel comment: https://github.com/ansible/ansible/issues/64935#issuecomment-573062042
And for the good part of it, it seems it has been recently fixed: https://github.com/ansible/ansible/pull/73572
But, for now, what would work is:
- hosts: all
  gather_facts: no
  handlers:
    - name: restart service
      debug:
        msg: restart service
  tasks:
    - import_tasks: test_imports.yml
      tags: test
And a test_imports.yml file like:
- name: test
  debug:
    msg: test
  changed_when: yes
  notify: restart service
- name: test2
  debug:
    msg: test2
  changed_when: yes
  notify: restart service
This all yields:
PLAY [all] *******************************************************************************************************
TASK [test] ******************************************************************************************************
changed: [localhost] =>
msg: test
TASK [test2] *****************************************************************************************************
changed: [localhost] =>
msg: test2
RUNNING HANDLER [restart service] ********************************************************************************
ok: [localhost] =>
msg: restart service
PLAY RECAP *******************************************************************************************************
localhost : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Then, if you want to import those tasks somewhere this handler is not defined, you can use the ERROR_ON_MISSING_HANDLER setting (the ANSIBLE_ERROR_ON_MISSING_HANDLER environment variable), which turns the error thrown for a missing handler into a "simple" warning.
e.g.
$ ANSIBLE_ERROR_ON_MISSING_HANDLER=false ansible-playbook play.yml -i inventory.yml
PLAY [all] *******************************************************************************************************
TASK [test] ******************************************************************************************************
[WARNING]: The requested handler 'restart service' was not found in either the main handlers list nor in the
listening handlers list
changed: [localhost] =>
msg: test
TASK [test2] *****************************************************************************************************
changed: [localhost] =>
msg: test2
PLAY RECAP *******************************************************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

Ansible loses variable content

I have a role with some tasks that call docker_compose. When I use a variable twice, the second task fails with the error: The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'base_dir'.
It looks to me (testing via copy & paste) as if the second call to docker_compose can't access the variable in the same scope. With other tasks (file, command, shell) it works fine.
The variable is stored in roles/collabora/default/main.yml:
collabora:
  base_dir: /srv/docker/collabora
and this is the inventory:
...
collabora:
  hosts:
    server1:
      collabora_customer:
        customer1:
          port: 2000
        customer2:
          port: 2001
    server2:
      collabora_customer:
        customer1:
          port: 2000
...
This is a part of my role roles/collabora/tasks/main.yml. It iterates over all entries in host var collabora_customer.
- name: Configure Collabora instances
  include_tasks: manage_collabora.yml
  loop: "{{ collabora_customer | default({}) | dict2items }}"
And manage_collabora.yml does the work per instance:
- debug:
    msg: "path: {{collabora.base_dir}}"
- name: Define location
  set_fact:
    instance: "{{ item }}"
    collabora_path: "{{ collabora.base_dir }}/{{ item.key }}"
- name: Create "Collabora" directory
  file:
    path: "{{collabora_path}}"
    state: directory
- name: Copy docker-compose folder "Collabora"
  template:
    src: "docker-compose.yml.j2"
    dest: "{{collabora_path}}/docker-compose.yml"
    owner: "root"
    group: "root"
    mode: "0640"
- name: Run docker-compose pull for "Collabora"
  docker_compose:
    project_src: "{{collabora_path}}"
    state: present
The loop works well for the first pass per host, but the second pass fails for customer2 with the error message quoted above, raised at the debug task.
The funny part is that when I remove the last docker_compose task, everything works as expected. It looks to me as if docker_compose removes the variable from the environment, for whatever reason.
The current result is:
PLAY [Install Collabora]
****************************************************
TASK [Gathering Facts]
****************************************************
ok: [server1]
ok: [server2]
TASK [docker_collabora : Configure Collabora instances]
****************************************************
included: roles/docker_collabora/tasks/manage_collabora.yml for server1
included: roles/docker_collabora/tasks/manage_collabora.yml for server1
included: roles/docker_collabora/tasks/manage_collabora.yml for server2
TASK [docker_collabora : debug]
****************************************************
ok: [server2] => {
"msg": "path: /srv/docker/collabora"
}
TASK [docker_collabora : Define location]
****************************************************
ok: [server2]
TASK [docker_collabora : Create "Collabora" directory]
****************************************************
ok: [server2]
TASK [docker_collabora : Copy docker-compose folder "Collabora"]
****************************************************
ok: [server2]
TASK [docker_collabora : Run docker-compose pull for "Collabora"]
****************************************************
ok: [server2]
TASK [docker_collabora : debug]
****************************************************
ok: [server1] => {
"msg": "path: /srv/docker/collabora"
}
TASK [docker_collabora : Define location]
****************************************************
ok: [server1]
TASK [docker_collabora : Create "Collabora" directory]
****************************************************
ok: [server1]
TASK [docker_collabora : Copy docker-compose folder "Collabora"]
****************************************************
ok: [server1]
TASK [docker_collabora : Run docker-compose pull for "Collabora"]
****************************************************
ok: [server1]
TASK [docker_collabora : debug]
****************************************************
fatal: [server1]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'base_dir'\n\nThe error appears to be in 'roles/docker_collabora/tasks/manage_collabora.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- debug:\n ^ here\n"}
PLAY RECAP
****************************************************
server1 : ok=9 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
server2 : ok=7 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
I would expect this to work.

Execute an entire yaml task file one host after the other in ansible

I want to run test.yaml, which contains a lot of tasks. I expect the first host in my inventory to run all tasks in test.yaml before moving on to the next one. But the actual result is that each task in test.yaml is run on all three hosts at a time. How can I implement this?
This is my inventory:
[host1]
192.168.1.1
192.168.1.2
192.168.1.3
This is my task file
---
# test.yaml
- name: task1
  shell: echo task1
- name: task2
  shell: echo task2
- name: task3
  shell: echo task3
And this is how I include the task file in my playbook
- name: Multiple machine loops include
  include: test.yaml
  delegate_to: "{{item}}"
  loop: "{{ groups['host1'] }}"
The actual result is
TASK [Multiple machine loops include] **********************************************************************************************************************************************************
included: /home/learn/main.yml for 192.168.1.1, 192.168.1.2, 192.168.1.3 => (item=192.168.1.1)
included: /home/learn/main.yml for 192.168.1.1, 192.168.1.2, 192.168.1.3 => (item=192.168.1.2)
included: /home/learn/main.yml for 192.168.1.1, 192.168.1.2, 192.168.1.3 => (item=192.168.1.3)
TASK [task1] *********************************************************************************************************************************************************
ok: [192.168.11.1]
ok: [192.168.11.2]
ok: [192.168.11.3]
TASK [task2] *********************************************************************************************************************************************************
changed: [192.168.11.1]
changed: [192.168.11.2]
changed: [192.168.11.3]
TASK [task3] ******************************************************************************************************************************************************
changed: [192.168.11.1]
changed: [192.168.11.2]
changed: [192.168.11.3]
TASK [task1] *******************************************************************************************************************************************************
ok: [192.168.11.1]
ok: [192.168.11.2]
ok: [192.168.11.3]
TASK [task2] ********************************************************************************************************************************************************
changed: [192.168.11.1]
changed: [192.168.11.2]
changed: [192.168.11.3]
TASK [task3] *********************************************************************************************************************************************************
changed: [192.168.11.1]
changed: [192.168.11.2]
changed: [192.168.11.3]
TASK [task1] *********************************************************************************************************************************************************
ok: [192.168.11.1]
ok: [192.168.11.2]
ok: [192.168.11.3]
TASK [task2] ********************************************************************************************************************************************************
changed: [192.168.11.1]
changed: [192.168.11.2]
changed: [192.168.11.3]
TASK [task3] *********************************************************************************************************************************************************
changed: [192.168.11.1]
changed: [192.168.11.2]
changed: [192.168.11.3]
PLAY RECAP ***********************************************************************************************************************************************************
192.168.11.1 : ok=12 changed=6 unreachable=0 failed=0
192.168.11.2 : ok=12 changed=6 unreachable=0 failed=0
192.168.11.3 : ok=12 changed=6 unreachable=0 failed=0
What I expect is:
TASK [task1] ***********************************************************************************************************************************************************
changed: [192.168.1.1]
TASK [task2] ***********************************************************************************************************************************************************
changed: [192.168.1.1]
TASK [task3] ***********************************************************************************************************************************************************
changed: [192.168.1.1]
TASK [task1] ***********************************************************************************************************************************************************
changed: [192.168.1.2]
TASK [task2] ***********************************************************************************************************************************************************
changed: [192.168.1.2]
TASK [task3] ***********************************************************************************************************************************************************
changed: [192.168.1.2]
TASK [task1] ***********************************************************************************************************************************************************
changed: [192.168.1.3]
TASK [task2] ***********************************************************************************************************************************************************
changed: [192.168.1.3]
TASK [task3] ***********************************************************************************************************************************************************
changed: [192.168.1.3]
What you are trying to do is to play a set of tasks serially on a set of hosts. Vladimir's answer points out why your current implementation cannot achieve your requirement.
You could do that with an include and a loop (see below if you really need that for a particular reason), but the best way IMO is to use serial in your play as described in the documentation for rolling upgrades
For both examples below, I created a "fake" inventory file with 3 declared hosts, all using the local connection type:
[my_group]
host1 ansible_connection=local
host2 ansible_connection=local
host3 ansible_connection=local
Serial run (preferred)
This is the test.yml playbook for the serial run
---
- name: Serial run demo
  hosts: my_group
  serial: 1
  tasks:
    - name: task1
      shell: echo task 1
    - name: task2
      shell: echo task 2
    - name: task3
      shell: echo task 3
And the result
$ ansible-playbook -i inventory test.yml
PLAY [Serial run demo] ******************************************************************************************************
TASK [Gathering Facts] ******************************************************************************************************
ok: [host1]
TASK [task1] ****************************************************************************************************************
changed: [host1]
TASK [task2] ****************************************************************************************************************
changed: [host1]
TASK [task3] ****************************************************************************************************************
changed: [host1]
PLAY [Serial run demo] ******************************************************************************************************
TASK [Gathering Facts] ******************************************************************************************************
ok: [host2]
TASK [task1] ****************************************************************************************************************
changed: [host2]
TASK [task2] ****************************************************************************************************************
changed: [host2]
TASK [task3] ****************************************************************************************************************
changed: [host2]
PLAY [Serial run demo] ******************************************************************************************************
TASK [Gathering Facts] ******************************************************************************************************
ok: [host3]
TASK [task1] ****************************************************************************************************************
changed: [host3]
TASK [task2] ****************************************************************************************************************
changed: [host3]
TASK [task3] ****************************************************************************************************************
changed: [host3]
PLAY RECAP ******************************************************************************************************************
host1 : ok=4 changed=3 unreachable=0 failed=0
host2 : ok=4 changed=3 unreachable=0 failed=0
host3 : ok=4 changed=3 unreachable=0 failed=0
Include run (alternative if really needed)
If you really need to use include and delegate to a set of hosts, this would still be possible but you need:
To change your included file to add delegate_to for each task with a variable.
To target a single host in the play running your include, and not your group of hosts as demonstrated in your question.
Note that in your question you are using include, which is already announced for future deprecation (see the notes in the module documentation). You should prefer the include_* and import_* replacement modules, in your case include_tasks.
This is the test_include.yml file
---
- name: task1
  shell: echo task 1
  delegate_to: "{{ delegate_host }}"
- name: task2
  shell: echo task 2
  delegate_to: "{{ delegate_host }}"
- name: task3
  shell: echo task 3
  delegate_to: "{{ delegate_host }}"
This is the test.yml playbook:
---
- name: Include loop demo
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Include needed tasks for each hosts
      include_tasks: test_include.yml
      loop: "{{ groups['my_group'] }}"
      loop_control:
        loop_var: delegate_host
And the result
$ ansible-playbook -i inventory test.yml
PLAY [Include loop demo] *********************************************************************
TASK [Include needed tasks for each hosts] ***************************************************
included: /tmp/testso/test_include.yml for localhost
included: /tmp/testso/test_include.yml for localhost
included: /tmp/testso/test_include.yml for localhost
TASK [task1] *********************************************************************************
changed: [localhost -> host1]
TASK [task2] *********************************************************************************
changed: [localhost -> host1]
TASK [task3] *********************************************************************************
changed: [localhost -> host1]
TASK [task1] *********************************************************************************
changed: [localhost -> host2]
TASK [task2] *********************************************************************************
changed: [localhost -> host2]
TASK [task3] *********************************************************************************
changed: [localhost -> host2]
TASK [task1] *********************************************************************************
changed: [localhost -> host3]
TASK [task2] *********************************************************************************
changed: [localhost -> host3]
TASK [task3] *********************************************************************************
changed: [localhost -> host3]
PLAY RECAP ***********************************************************************************
localhost : ok=12 changed=9 unreachable=0 failed=0
'include' is not delegated. Quoting from Delegation:
Be aware that it does not make sense to delegate all tasks, debug, add_host, include, etc always get executed on the controller.
To see what's going on, run the simplified playbook below
- hosts: localhost
  tasks:
    - name: Multiple machine loops include
      include: test.yaml
      delegate_to: "{{ item }}"
      loop: "{{ groups['host1'] }}"
with 'test.yaml'
- debug:
    msg: "{{ inventory_hostname }} {{ item }}"
For example, with the inventory
host1:
  hosts:
    test_01:
    test_02:
    test_03:
the abridged output below shows that the included tasks run on localhost despite the fact that the task was delegated to another host.
TASK [debug]
ok: [localhost] => {
"msg": "localhost test_01"
}
TASK [debug]
ok: [localhost] => {
"msg": "localhost test_02"
}
TASK [debug]
ok: [localhost] => {
"msg": "localhost test_03"
}
PLAY RECAP
localhost : ok=6 changed=0 unreachable=0 failed=0
The solution is to move 'delegate_to' to the included task. The play below
- hosts: localhost
  tasks:
    - name: Multiple machine loops include
      include: test.yaml
      loop: "{{ groups['host1'] }}"
with the included task
- command: hostname
  register: result
  delegate_to: "{{ item }}"
- debug: var=result.stdout
gives (abridged):
TASK [command]
changed: [localhost -> test_01]
TASK [debug]
ok: [localhost] => {
"result.stdout": "test_01.example.com"
}
TASK [command]
changed: [localhost -> test_02]
TASK [debug]
ok: [localhost] => {
"result.stdout": "test_02.example.com"
}
TASK [command]
changed: [localhost -> test_03]
TASK [debug]
ok: [localhost] => {
"result.stdout": "test_03.example.com"
}
PLAY RECAP
localhost : ok=9 changed=3 unreachable=0 failed=0

Resources