How to trigger handlers (notify) when using import_tasks? - ansible

When using notify on a task that uses import_tasks, the handlers are not fired, and I'm wondering why. tags work as expected.
How to trigger handlers on imported tasks?
Example:
playbook test.yml:
- hosts: all
  gather_facts: no
  handlers:
    - name: restart service
      debug:
        msg: restart service
  tasks:
    - import_tasks: test_imports.yml
      notify: restart service
      tags: test
test_imports.yml
- name: test
  debug:
    msg: test
  changed_when: yes

- name: test2
  debug:
    msg: test2
  changed_when: yes
Expected:
> ansible-playbook -i localhost, test.yml
PLAY [all] *************************************************************************************************************
TASK [test] ************************************************************************************************************
changed: [localhost] => {
"msg": "test"
}
TASK [test2] ***********************************************************************************************************
changed: [localhost] => {
"msg": "test2"
}
RUNNING HANDLER [restart service] **************************************************************************************
ok: [localhost] => {
"msg": "restart service"
}
...
Actual:
> ansible-playbook -i localhost, test.yml
PLAY [all] *************************************************************************************************************
TASK [test] ************************************************************************************************************
changed: [localhost] => {
"msg": "test"
}
TASK [test2] ***********************************************************************************************************
changed: [localhost] => {
"msg": "test2"
}
...

This question has been partially answered in Ansible's bug tracker here:
import_tasks is processed at parse time and is effectively replaced by the tasks it imports. So when using import_tasks in handlers you would need to notify the task names within.
Source: mkrizek's comment: https://github.com/ansible/ansible/issues/59706#issuecomment-515879321
This has also been further explained here:
imports do not work like tasks, and they do not have an association between the import_tasks task and the tasks within the imported file really. import_tasks is a pre-processing trigger, that is handled during playbook parsing time. When the parser encounters it, the tasks within are retrieved, and inserted where the import_tasks task was located. There is a proposal to "taskify" includes at ansible/proposals#136 but I don't see that being implemented any time soon.
Source: sivel comment: https://github.com/ansible/ansible/issues/64935#issuecomment-573062042
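To make that concrete, here is a minimal sketch of what "notify the task names within" means when the import sits in the handlers section. The file restart_handlers.yml and the task names inside it are hypothetical, used only for illustration:
- hosts: all
  gather_facts: no
  handlers:
    # the import is replaced at parse time by the tasks defined in the file
    - import_tasks: restart_handlers.yml   # hypothetical file containing a task named "restart service"
  tasks:
    - command: /bin/true
      notify: restart service              # notify the task name defined inside the imported file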
And, as good news, it seems this has recently been fixed: https://github.com/ansible/ansible/pull/73572
But, for now, what works is:
- hosts: all
  gather_facts: no
  handlers:
    - name: restart service
      debug:
        msg: restart service
  tasks:
    - import_tasks: test_imports.yml
      tags: test
And a test_imports.yml file like:
- name: test
  debug:
    msg: test
  changed_when: yes
  notify: restart service

- name: test2
  debug:
    msg: test2
  changed_when: yes
  notify: restart service
This all yields:
PLAY [all] *******************************************************************************************************
TASK [test] ******************************************************************************************************
changed: [localhost] =>
msg: test
TASK [test2] *****************************************************************************************************
changed: [localhost] =>
msg: test2
RUNNING HANDLER [restart service] ********************************************************************************
ok: [localhost] =>
msg: restart service
PLAY RECAP *******************************************************************************************************
localhost : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
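If repeating notify on every imported task feels noisy, notify is also a block-level keyword that the tasks inside a block inherit, so, as far as I know, a variant of test_imports.yml like the following sketch should notify the handler just as well:
- block:
    - name: test
      debug:
        msg: test
      changed_when: yes
    - name: test2
      debug:
        msg: test2
      changed_when: yes
  notify: restart service   # inherited by both tasks in the block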
Then, if you want to import those tasks somewhere this handler is not defined, you can use the ANSIBLE_ERROR_ON_MISSING_HANDLER environment variable (the ERROR_ON_MISSING_HANDLER config setting), which turns the error thrown for a missing handler into a "simple" warning.
e.g.
$ ANSIBLE_ERROR_ON_MISSING_HANDLER=false ansible-playbook play.yml -i inventory.yml
PLAY [all] *******************************************************************************************************
TASK [test] ******************************************************************************************************
[WARNING]: The requested handler 'restart service' was not found in either the main handlers list nor in the
listening handlers list
changed: [localhost] =>
msg: test
TASK [test2] *****************************************************************************************************
changed: [localhost] =>
msg: test2
PLAY RECAP *******************************************************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

Related

Ansible rolling execution block of handlers on targets

I have an Ansible playbook that calls a role containing many tasks and handlers. The handlers are triggered if my desired service configuration is changed.
What I am looking for is a way to trigger the handlers on the hosts sequentially. For example, if I have two targets, target1 and target2, and two handlers, handler1 and handler2, what I want handler execution on the targets to look like is something like this:
RUNNING HANDLER [myrole : handler 1] ********************************************
changed: [target1]
RUNNING HANDLER [myrole : handler2] *************************************************
changed: [target1]
RUNNING HANDLER [myrole : handler 1] ********************************************
changed: [target2]
RUNNING HANDLER [myrole : handler2] *************************************************
changed: [target2]
But, as is known, the normal execution of handlers on the targets is as follows:
RUNNING HANDLER [myrole : handler 1] ********************************************
changed: [target1]
changed: [target2]
RUNNING HANDLER [myrole : handler 2] ********************************************
changed: [target1]
changed: [target2]
That is not what I want.
I know that by using the serial option at the playbook level we can restrict parallelism, but this option comes at a huge time cost, because all of my tasks would then be executed serially as well.
I have tried both the throttle option and the block directive on handlers, but neither was useful.
Run flush_handlers on each host separately; dynamically create and include the file that does it. For example, the playbook
shell> cat pb.yml
- hosts: target1,target2
  tasks:
    - debug:
        msg: Notify handler1
      changed_when: true
      notify: handler1
    - debug:
        msg: Notify handler2
      changed_when: true
      notify: handler2
    - block:
        - copy:
            dest: "{{ playbook_dir }}/flush_handlers_serial.yml"
            content: |
              {% for host in ansible_play_hosts_all %}
              - meta: flush_handlers
                when: inventory_hostname == '{{ host }}'
              {% endfor %}
          delegate_to: localhost
        - include_tasks: flush_handlers_serial.yml
      run_once: true
      when: flush_handlers_serial|d(false)|bool
  handlers:
    - name: handler1
      debug:
        msg: Run handler1
    - name: handler2
      debug:
        msg: Run handler2
by default runs the handlers in parallel (see the linear strategy):
shell> ansible-playbook pb.yml
PLAY [target1,target2] ***************************************************************************************
TASK [debug] *************************************************************************************************
changed: [target1] =>
msg: Notify handler1
changed: [target2] =>
msg: Notify handler1
TASK [debug] *************************************************************************************************
changed: [target2] =>
msg: Notify handler2
changed: [target1] =>
msg: Notify handler2
TASK [copy] **************************************************************************************************
skipping: [target1]
TASK [include_tasks] *****************************************************************************************
skipping: [target1]
RUNNING HANDLER [handler1] ***********************************************************************************
ok: [target1] =>
msg: Run handler1
ok: [target2] =>
msg: Run handler1
RUNNING HANDLER [handler2] ***********************************************************************************
ok: [target1] =>
msg: Run handler2
ok: [target2] =>
msg: Run handler2
PLAY RECAP ***************************************************************************************************
target1: ok=4 changed=2 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
target2: ok=4 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
When you enable flush_handlers_serial=true, the file below will be created and included:
shell> cat flush_handlers_serial.yml
- meta: flush_handlers
  when: inventory_hostname == 'target1'
- meta: flush_handlers
  when: inventory_hostname == 'target2'
This will run the handlers serially, similarly to the host_pinned strategy:
shell> ansible-playbook pb.yml -e flush_handlers_serial=true
PLAY [target1,target2] ***************************************************************************************
TASK [debug] *************************************************************************************************
changed: [target1] =>
msg: Notify handler1
changed: [target2] =>
msg: Notify handler1
TASK [debug] *************************************************************************************************
changed: [target1] =>
msg: Notify handler2
changed: [target2] =>
msg: Notify handler2
TASK [copy] **************************************************************************************************
changed: [target1 -> localhost]
TASK [include_tasks] *****************************************************************************************
included: /export/scratch/tmp7/test-172/flush_handlers_serial.yml for target1
TASK [meta] **************************************************************************************************
RUNNING HANDLER [handler1] ***********************************************************************************
ok: [target1] =>
msg: Run handler1
RUNNING HANDLER [handler2] ***********************************************************************************
ok: [target1] =>
msg: Run handler2
RUNNING HANDLER [handler1] ***********************************************************************************
ok: [target2] =>
msg: Run handler1
TASK [meta] **************************************************************************************************
skipping: [target1]
RUNNING HANDLER [handler2] ***********************************************************************************
ok: [target2] =>
msg: Run handler2
PLAY RECAP ***************************************************************************************************
target1: ok=6 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
target2: ok=4 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
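For comparison, the strategy-based alternative mentioned above would look roughly like the sketch below; note that host_pinned changes the scheduling of the whole play, not just of the handlers:
- hosts: target1,target2
  strategy: host_pinned   # each host moves through the play independently and flushes its handlers on its own
  tasks:
    - debug:
        msg: Notify handler1
      changed_when: true
      notify: handler1
  handlers:
    - name: handler1
      debug:
        msg: Run handler1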
I've found a workaround that solves my problem.
The file handlers/main.yml would be as below:
---
- name: Trigger some handlers sequentially.
  include_tasks: trigger_handlers_sequentialy.yml
  run_once: true
  loop: "{{ ansible_play_hosts_all }}"
  listen: restart_my_service
So, the above handler will run on just one target (because of the run_once option), and the loop over the play hosts will be executed once.
Then, all the handlers that are expected to run on the targets serially should be placed in the tasks/trigger_handlers_sequentialy.yml file:
---
- name: handler1
  debug:
    msg: run handler1
  delegate_to: "{{ item }}"

- name: handler2
  debug:
    msg: run handler2
  delegate_to: "{{ item }}"
So, what happens during playbook execution is something like this:
RUNNING HANDLER [myrole : Trigger some handlers sequentially.] ***********************************************************************************************************************************************
included: ~/ansible/roles/myrole/tasks/trigger_handlers_sequentialy.yml for target1 => (item=target1)
RUNNING HANDLER [myrole : run handler1] ***********************************************************************************************************************************************
ok: [target1]
RUNNING HANDLER [myrole : run handler2] **********************************************************************************************************************************************
ok: [target1]
included: ~/ansible/roles/myrole/tasks/trigger_handlers_sequentialy.yml for target1 => (item=target2)
RUNNING HANDLER [myrole : run handler1] ***********************************************************************************************************************************************
ok: [target1 -> target2(192.168.4.10)]
RUNNING HANDLER [myrole : run handler2] **********************************************************************************************************************************************
ok: [target1 -> target2(192.168.4.10)]
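For completeness, a task in the role would then notify the listening handler by its listen topic, roughly like the sketch below; the task name and template file are hypothetical:
- name: Deploy service configuration
  template:
    src: my_service.conf.j2
    dest: /etc/my_service.conf
  notify: restart_my_service   # matches the handler's listen topic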

Using ansible variable inside gathered fact list

I'm stuck trying to get data from gathered facts, using calculated data as part of the query.
I am using Ansible 2.9 and here is my playbook:
---
- hosts: ios
  connection: network_cli
  gather_facts: true
  tasks:
    - name: CEF OUTPUT
      ios_command:
        commands: sh ip cef 0.0.0.0 0.0.0.0 | i nexthop
      register: cef

    - set_fact:
        reg_result: "{{ cef.stdout | string | regex_search('Tunnel[0-9]+') }}"

    - name: IT WORKS!
      debug:
        msg: "{{ reg_result }}"

    - name: MANUAL LIST
      debug:
        var: ansible_facts.net_interfaces.Tunnel3.description

    - name: AUTO LIST
      debug:
        var: ansible_facts.net_interfaces.[reg_result].description
and here is output
PLAY [ios] **********************************************
TASK [Gathering Facts] **********************************
ok: [10.77.3.1]
TASK [CEF OUTPUT] ***************************************
ok: [10.77.3.1]
TASK [set_fact] *****************************************
ok: [10.77.3.1]
TASK [IT WORKS!] ****************************************
ok: [10.77.3.1] => {
"msg": "Tunnel3"
}
TASK [MANUAL LIST] **************************************
ok: [10.77.3.1] => {
"ansible_facts.net_interfaces.Tunnel3.description": "DMVPN via MTS"
}
TASK [AUTO LIST] ****************************************
fatal: [10.77.3.1]: FAILED! => {"msg": "template error while templating string: expected name or number. String: {{ansible_facts.net_interfaces.[reg_result].description}}"}
to retry, use: --limit #/home/user/ansible/retry/ios_find_gw_int.retry
PLAY RECAP **********************************************
10.77.3.1 : ok=5 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
You see, now I know that my default gateway points to "Tunnel3", and it is possible to get some data by placing this "Tunnel3" in {{ ansible_facts.net_interfaces.Tunnel3.description }}, but how do I get it automatically? Such a nested variable lookup would be a very handy tool.
Remove the dot when you use indirect addressing:
- name: AUTO LIST
  debug:
    var: ansible_facts.net_interfaces[reg_result].description
See Referencing key:value dictionary variables.
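Equivalently, pure bracket notation avoids the issue as well (a minimal sketch of the same lookup):
- name: AUTO LIST
  debug:
    var: ansible_facts['net_interfaces'][reg_result]['description']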

Ansible loses variable content

I have a role with some tasks that call docker_compose. When I use a variable twice, the second task fails with the error: The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'base_dir'.
It looks to me (the two calls are copy & paste of each other) as if the second call to docker_compose cannot access the variable in the same scope. With other tasks (file, command, shell) it works fine.
The variable is stored in roles/collabora/defaults/main.yml:
collabora:
  base_dir: /srv/docker/collabora
and this is the inventory:
...
collabora:
  hosts:
    server1:
      collabora_customer:
        customer1:
          port: 2000
        customer2:
          port: 2001
    server2:
      collabora_customer:
        customer1:
          port: 2000
...
This is a part of my role, roles/collabora/tasks/main.yml. It iterates over all entries in the host variable collabora_customer.
- name: Configure Collabora instances
  include_tasks: manage_collabora.yml
  loop: "{{ collabora_customer | default({}) | dict2items }}"
And manage_collabora.yml does the work per instance:
- debug:
    msg: "path: {{ collabora.base_dir }}"

- name: Define location
  set_fact:
    instance: "{{ item }}"
    collabora_path: "{{ collabora.base_dir }}/{{ item.key }}"

- name: Create "Collabora" directory
  file:
    path: "{{ collabora_path }}"
    state: directory

- name: Copy docker-compose folder "Collabora"
  template:
    src: "docker-compose.yml.j2"
    dest: "{{ collabora_path }}/docker-compose.yml"
    owner: "root"
    group: "root"
    mode: "0640"

- name: Run docker-compose pull for "Collabora"
  docker_compose:
    project_src: "{{ collabora_path }}"
    state: present
The loop works well for the first iteration per host, but the second iteration fails for customer2 with the error message above, raised at the debug task.
The funny part is that when I remove the last docker_compose task, everything works as expected. It looks to me as if docker_compose removes the variable from the environment, for whatever reason.
The current result is:
PLAY [Install Collabora]
****************************************************
TASK [Gathering Facts]
****************************************************
ok: [server1]
ok: [server2]
TASK [docker_collabora : Configure Collabora instances]
****************************************************
included: roles/docker_collabora/tasks/manage_collabora.yml for server1
included: roles/docker_collabora/tasks/manage_collabora.yml for server1
included: roles/docker_collabora/tasks/manage_collabora.yml for server2
TASK [docker_collabora : debug]
****************************************************
ok: [server2] => {
"msg": "path: /srv/docker/collabora"
}
TASK [docker_collabora : Define location]
****************************************************
ok: [server2]
TASK [docker_collabora : Create "Collabora" directory]
****************************************************
ok: [server2]
TASK [docker_collabora : Copy docker-compose folder "Collabora"]
****************************************************
ok: [server2]
TASK [docker_collabora : Run docker-compose pull for "Collabora"]
****************************************************
ok: [server2]
TASK [docker_collabora : debug]
****************************************************
ok: [server1] => {
"msg": "path: /srv/docker/collabora"
}
TASK [docker_collabora : Define location]
****************************************************
ok: [server1]
TASK [docker_collabora : Create "Collabora" directory]
****************************************************
ok: [server1]
TASK [docker_collabora : Copy docker-compose folder "Collabora"]
****************************************************
ok: [server1]
TASK [docker_collabora : Run docker-compose pull for "Collabora"]
****************************************************
ok: [server1]
TASK [docker_collabora : debug]
****************************************************
fatal: [server1]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'base_dir'\n\nThe error appears to be in 'roles/docker_collabora/tasks/manage_collabora.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- debug:\n ^ here\n"}
PLAY RECAP
****************************************************
server1 : ok=9 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
server2 : ok=7 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
I would expect this to work.

How do I handle rollback in case of failure using handlers in Ansible?

I wrote the YAML file below, which installs the SSM and CloudWatch agents, but I want to roll back the installation in case of any failures during the installation. I tried using fail but it is not working. Please advise.
---
# tasks file for SSMAgnetInstall
- name: status check
  command: systemctl status amazon-ssm-agent
  register: s_status

- debug:
    msg: "{{ s_status }}"

- name: Get CPU architecture
  command: getconf LONG_BIT
  register: cpu_arch
  changed_when: False
  check_mode: no
  when: s_status.stdout == ""
  ignore_errors: true

- name: Install rpm file for Redhat Family (Amazon Linux, RHEL, and CentOS) 32/64-bit
  yum:
    name: "https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_386/amazon-ssm-agent.rpm"
    state: present
  when: s_status.stdout == ""
  become: yes
  ignore_errors: true

- name: cloud status check
  command: systemctl status amazon-cloudwatch-agent
  register: cld_status
  become: yes

- debug:
    msg: "{{ cld_status }}"

- name: Register to cloud watch service
  become: yes
  become_user: root
  service:
    name: amazon-ssm-agent
    enabled: yes
    state: started

- name: copy the output to a local file
  copy:
    content: "{{ myshell_output.stdout }}"
    dest: "/home/ansible/rama/output.txt"
  delegate_to: localhost
You should have a look at the documentation on blocks, more specifically the error handling part. This is the general idea with an oversimplified example; you will have to adapt it to your specific case.
The test.yml playbook:
---
- hosts: localhost
  gather_facts: false
  tasks:
    - block:
        - name: I am a task that can fail
          debug:
            msg: "I {{ gen_fail | default(false) | bool | ternary('failed', 'succeeded') }}"
          failed_when: gen_fail | default(false) | bool

        - name: I am a task that will never fail
          debug:
            msg: I succeeded

      rescue:
        - name: I am a task in a block played when a failure happens
          debug:
            msg: rescue task

      always:
        - name: I am a task always played whatever happens
          debug:
            msg: always task
Played normally (no fail)
$ ansible-playbook test.yml
PLAY [localhost] ************************************************************************
TASK [I am a task that can fail] ********************************************************
ok: [localhost] => {
"msg": "I succeeded"
}
TASK [I am a task that will never fail] *************************************************
ok: [localhost] => {
"msg": "I succeeded"
}
TASK [I am a task always played whatever happens] ***************************************
ok: [localhost] => {
"msg": "always task"
}
PLAY RECAP ******************************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Played forcing a fail
$ ansible-playbook test.yml -e gen_fail=true
PLAY [localhost] ************************************************************************
TASK [I am a task that can fail] ********************************************************
fatal: [localhost]: FAILED! => {
"msg": "I failed"
}
TASK [I am a task in a block played when a failure happens] *****************************
ok: [localhost] => {
"msg": "rescue task"
}
TASK [I am a task always played whatever happens] ***************************************
ok: [localhost] => {
"msg": "always task"
}
PLAY RECAP ******************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=1 ignored=0
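As a rough sketch of how this could map onto the original playbook, the installation step could be wrapped in a block with a rescue section that removes the package again. The rollback task is an assumption for illustration, not part of the original role:
- block:
    - name: Install rpm file for Redhat Family (Amazon Linux, RHEL, and CentOS) 32/64-bit
      yum:
        name: "https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_386/amazon-ssm-agent.rpm"
        state: present
      become: yes
  rescue:
    - name: Roll back the SSM agent installation on failure
      yum:
        name: amazon-ssm-agent   # assumed package name installed by the rpm above
        state: absent
      become: yes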

Ansible: include_role on a loop running unexpected number of times

I am trying to use include_role with items
---
- hosts: cluster
  tasks:
    - block:
        - name: Execute test role
          include_role:
            name: testrole
          with_items:
            - 'one'
...
My role is
---
- name: Just debugging
  debug:
...
The issue is that the role seems to be run by each host X times per item, where X is the number of hosts.
PLAY [cluster] *****************************************************************
TASK [setup] *******************************************************************
ok: [thisNode]
ok: [dww]
TASK [Execute test role] *******************************************************
TASK [testrole : Just debugging] ***********************************************
ok: [thisNode] => {
"msg": "Hello world!"
}
ok: [dww] => {
"msg": "Hello world!"
}
TASK [testrole : Just debugging] ***********************************************
ok: [thisNode] => {
"msg": "Hello world!"
}
ok: [dww] => {
"msg": "Hello world!"
}
PLAY RECAP *********************************************************************
dww : ok=3 changed=0 unreachable=0 failed=0
thisNode : ok=3 changed=0 unreachable=0 failed=0
Why is this happening and how can I fix it?
Ansible hosts:
[cluster]
thisNode ansible_host=localhost ansible_connection=local
dww
I cannot delegate the task because, in the real role, the task must be executed on each of the hosts.
Using allow_duplicates: no still produces the same output.
---
- hosts: cluster
  tasks:
    - name: Execute test role
      include_role:
        name: testrole
        allow_duplicates: False
      with_items:
        - 'one'
...
As a workaround you can add allow_duplicates: false to prevent Ansible from running the same role twice with the same parameters.
Clearly the module is looped twice: once over the hosts, and again over the specified items. As it is supposed to run the action against all hosts, the outer loop gets executed twice.
It is a new module in preview state, and this behaviour should probably be filed as an issue.
Ansible has an internal parameter, BYPASS_HOST_LOOP, to avoid such situations, and this mechanism should probably be used by this module.
I had the same problem, and allow_duplicates: False did not change anything.
Setting serial: 1 on the play somehow solved it. This is a workaround that will probably only work for a small number of hosts.
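Applied to the playbook above, that workaround would look roughly like this:
- hosts: cluster
  serial: 1   # run the play one host at a time
  tasks:
    - name: Execute test role
      include_role:
        name: testrole
      with_items:
        - 'one'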
