Ansible fails to locate templates in role's /templates directory [duplicate] - ansible

Does ansible pass Role Default variables to the Handlers within the same Role?
Here's a minimal excerpt of the playbook that has the issue:
Role hierarchy
- playbook.yml
- roles/
  - gunicorn/
    - defaults/
      - main.yml
    - handlers/
      - main.yml
  - code-checkout/
    - tasks/
      - main.yml
Here's the file contents
gunicorn/defaults/main.yml
---
gu_log: "/tmp/gunicorn.log"
gunicorn/handlers/main.yml
---
- name: Clear Gunicorn Log
  shell: rm {{ gu_log }}
finalize/tasks/main.yml
---
- name: Test Handlers
  shell: ls
  notify:
    - Restart Gunicorn
playbook.yml
---
- name: Deploy
  hosts: webservers
  tasks:
    - include: roles/finalize/tasks/main.yml
  handlers:
    - include: roles/gunicorn/handlers/main.yml
AFAIK everything looks good. However, I get this error during the playbook execution
FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'gu_log' is undefined\n\nThe error appears to have been in '/roles/gunicorn/handlers/main.yml': line 3, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Restart Gunicorn\n ^ here\n"}
Using Ansible 2.2 on Ubuntu 12.04 LTS
Here's a modified version of techraf's script that creates all the directories and demonstrates my issue
#!/bin/bash
mkdir -p ./rtindru-test/roles/gunicorn
mkdir -p ./rtindru-test/roles/gunicorn/defaults
mkdir -p ./rtindru-test/roles/gunicorn/handlers
mkdir -p ./rtindru-test/roles/finalize/tasks
cat >./rtindru-test/roles/finalize/tasks/main.yml <<HANDLERS_END
---
- name: Test Handlers
  shell: rm {{ gu_log }}
HANDLERS_END
cat >./rtindru-test/roles/gunicorn/handlers/main.yml <<HANDLERS_END
---
- name: Clear Gunicorn Log
  shell: rm {{ gu_log }}
HANDLERS_END
cat >./rtindru-test/roles/gunicorn/defaults/main.yml <<DEFAULTS_END
---
gu_log: "/tmp/gunicorn.log"
DEFAULTS_END
cat >./rtindru-test/playbook.yml <<PLAYBOOK_END
---
- name: Deploy
  hosts: localhost
  tasks:
    - include: roles/finalize/tasks/main.yml
  handlers:
    - include: roles/gunicorn/handlers/main.yml
PLAYBOOK_END
touch /tmp/gunicorn.log
ls -l /tmp/gunicorn.log
ansible-playbook ./rtindru-test/playbook.yml
ls -l /tmp/gunicorn.log
Output
PLAY [Deploy] *******************************************************************

TASK [setup] ********************************************************************
ok: [localhost]

TASK [Test Handlers] ************************************************************
fatal: [localhost]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'gu_log' is undefined\n\nThe error appears to have been in '/rtindru-test/roles/finalize/tasks/main.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: Test Handlers\n ^ here\n"}
        to retry, use: --limit @/rtindru-test/playbook.retry

PLAY RECAP **********************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=1

You are neither defining nor using any roles. With the following task:
- include: roles/finalize/tasks/main.yml
you are only including a tasks file into your playbook. It has nothing to do with roles.
To assign a role, you should specify a list of roles for a play (one or more):
roles:
  - my_role1
  - my_role2
Please have a look at the documentation on roles, and feel free to use the playbook and structure created by the script below.
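For the layout in the question that would look roughly like this (a sketch only; the role names gunicorn and finalize are taken from the files shown in the question, and the play is otherwise untested):
---
- name: Deploy
  hosts: webservers
  roles:
    - gunicorn
    - finalize
With both roles applied by the same play, the handlers defined in gunicorn/handlers/main.yml can be notified by name from tasks in the finalize role, and the gunicorn role's defaults (gu_log) are in scope for them.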
Does ansible pass Role Default variables to the Handlers within the same Role?
Yes it does.
For proof, run the following bash script, which creates and runs a minimal example. It takes the contents of gunicorn/defaults/main.yml and gunicorn/handlers/main.yml from the question intact and adds the missing components: the tasks and the playbook. It creates a file to be removed and runs the playbook.
#!/bin/bash
mkdir -p ./so41285033/roles/gunicorn
mkdir -p ./so41285033/roles/gunicorn/defaults
mkdir -p ./so41285033/roles/gunicorn/handlers
mkdir -p ./so41285033/roles/gunicorn/tasks
cat >./so41285033/roles/gunicorn/tasks/main.yml <<TASKS_END
---
- debug:
  changed_when: true
  notify: Clear Gunicorn Log
TASKS_END
cat >./so41285033/roles/gunicorn/handlers/main.yml <<HANDLERS_END
---
- name: Clear Gunicorn Log
  shell: rm {{ gu_log }}
  when: "'apiservers' not in group_names"
HANDLERS_END
cat >./so41285033/roles/gunicorn/defaults/main.yml <<DEFAULTS_END
---
gu_log: "/tmp/gunicorn.log"
DEFAULTS_END
cat >./so41285033/playbook.yml <<PLAYBOOK_END
---
- hosts: localhost
  gather_facts: no
  connection: local
  roles:
    - gunicorn
PLAYBOOK_END
touch /tmp/gunicorn.log
ls -l /tmp/gunicorn.log
ansible-playbook ./so41285033/playbook.yml
ls -l /tmp/gunicorn.log
The result:
-rw-r--r-- 1 techraf wheel 0 Dec 23 07:57 /tmp/gunicorn.log
[WARNING]: Host file not found: /etc/ansible/hosts
[WARNING]: provided hosts list is empty, only localhost is available
PLAY [localhost] ***************************************************************
TASK [gunicorn : debug] ********************************************************
ok: [localhost] => {
    "msg": "Hello world!"
}
RUNNING HANDLER [gunicorn : Clear Gunicorn Log] ********************************
changed: [localhost]
[WARNING]: Consider using file module with state=absent rather than running rm
PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0
ls: /tmp/gunicorn.log: No such file or directory
Interpretation:
Before running the playbook the file /tmp/gunicorn.log was created and its existence verified:
-rw-r--r-- 1 techraf wheel 0 Dec 23 07:57 /tmp/gunicorn.log
After running the playbook the file /tmp/gunicorn.log does not exist:
ls: /tmp/gunicorn.log: No such file or directory
Ansible correctly passed the variable gu_log value to the Clear Gunicorn Log handler which removed the file.
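As the warning in the output suggests, the same handler could equally be written with the file module instead of shelling out to rm; a minimal equivalent would be:
- name: Clear Gunicorn Log
  file:
    path: "{{ gu_log }}"
    state: absent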
Final remark:
The problem described in the question is impossible to reproduce, because the question does not contain a complete, verifiable example in the sense of an MCVE.

Related

How do I use ansible_become_password on different target in the same playbook

My playbook has a task to copy a file from the local box to the remote box and the last task has to use sudo to root. I am not getting it to work.
In my inventory I am using the password in plain text for now, just to get it working; once it works I can lock it down with ansible vault.
[logserver]
mylogserver ansible_ssh_user=myuser ansible_become_password=mypassword
In my playbook the last play uses the hosts parameter to do the work on the remote box: an earlier task copies the file to the remote server, and then I add a play targeting the remote host to get the work done.
# Cat file and append destination file
- name: cat files to destination file
  hosts: mylogserver
  gather_facts: no
  become_exe: "sudo su - root"
  tasks:
    - name: cat contents and append to destination file
      shell:
        cmd: /usr/bin/cat /var/tmp/test_file.txt >> /etc/some/target_file.txt
For example, the inventory
shell> cat hosts
test_11 ansible_ssh_user=User1 ansible_become_password=mypassword
[logserver]
test_11
and the playbook
shell> cat pb.yml
- hosts: logserver
  gather_facts: false
  become_method: su
  become_exe: sudo su
  become_user: root
  become_flags: -l
  tasks:
    - command: whoami
      become: true
      register: result
    - debug:
        var: result.stdout
works as expected:
shell> ansible-playbook -i hosts pb.yml
PLAY [logserver] ******************************************************************************
TASK [command] ********************************************************************************
changed: [test_11]
TASK [debug] **********************************************************************************
ok: [test_11] =>
result.stdout: root
PLAY RECAP ************************************************************************************
test_11: ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Notes:
- Configuration of the method, e.g. become_method: su, is missing. See DEFAULT_BECOME_METHOD. The default is 'sudo'. For details see Using Become Plugins.
- See details of the plugin from the command line: shell> ansible-doc -t become su
- Use become_flags to specify options to pass to su, e.g. become_flags: -l
- Use become_user to specify the user you 'become' to execute the task, e.g. become_user: root. This example is redundant; 'root' is the default.
- Specify become: true if the task shall use the configured escalation. The default is 'false'. See DEFAULT_BECOME.
- Configure sudoers, e.g.
- command: grep User1 /usr/local/etc/sudoers
  become: true
  register: result
- debug:
    var: result.stdout
gives
TASK [debug] ***************************************************************
ok: [test_11] =>
result.stdout: User1 ALL=(ALL) ALL
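As an aside, these escalation defaults do not have to be repeated in every play; they can also be set in ansible.cfg. A minimal sketch mirroring the play above (the [privilege_escalation] section and keys are standard, but adjust the values to your environment):
[privilege_escalation]
# defaults used by tasks that set become: true
become_method = su
become_user = root
become_flags = -l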
Encrypt the password
Remove the password from inventory hosts and put it into a file, e.g. in host_vars
shell> cat hosts
test_11 ansible_ssh_user=User1
[logserver]
test_11
shell> cat host_vars/test_11/ansible_become_password.yml
ansible_become_password: mypassword
Encrypt the password
shell> ansible-vault encrypt host_vars/test_11/ansible_become_password.yml
Encryption successful
shell> cat host_vars/test_11/ansible_become_password.yml
$ANSIBLE_VAULT;1.1;AES256
35646364306161623262653632393833323662633738323639353539666231373165646238636236
3462386536666463323136396131326333636365663264350a383839333536313937353637373765
...
Test the playbook
shell> ansible-playbook -i hosts pb.yml
PLAY [logserver] ******************************************************************************
TASK [command] ********************************************************************************
changed: [test_11]
TASK [debug] **********************************************************************************
ok: [test_11] =>
result.stdout: root
...
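Once the password file is vault-encrypted, the vault secret has to be supplied at runtime, e.g. interactively:
shell> ansible-playbook -i hosts pb.yml --ask-vault-pass
or non-interactively with --vault-password-file pointing to a file that holds the vault password.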

Why ansible doesn't do the task at first attempt in Gitlab?

I am executing some ansible playbooks via gitlab-ci and what I see is:
- The ansible playbook executes successfully through the pipeline, but it doesn't produce the output it is intended to produce.
- When I retry the gitlab job, it produces the output I need.
This is one of the many playbooks I am executing through gitlab:
1_ca.yaml
---
- hosts: 127.0.0.1
  connection: local
  tasks:
    - name: Create ca-csr.json
      become: true
      copy:
        dest: ca-csr.json
        content: '{"CN":"Kubernetes","key":{"algo":"rsa","size":2048},"names":[{"C":"US","L":"Portland","O":"Kubernetes","OU":"CA","ST":"Oregon"}]}'
    - name: Create ca-config.json
      become: true
      copy:
        dest: ca-config.json
        content: '{"signing":{"default":{"expiry":"8760h"},"profiles":{"kubernetes":{"usages":["signing","key encipherment","server auth","client auth"],"expiry":"8760h"}}}}'
    - name: Create the ca.pem & ca-key.pem
      # become: true
      shell: |
        cfssl gencert -initca ca-csr.json | cfssljson -bare ca
Basically, what this does is create some certs I need.
But on the first attempt, even though the pipeline passes, it doesn't generate these certs. When I restart that particular job in gitlab (running the same job a second time), it generates the certs.
Why is this happening?
This is how my .gitlab-ci.yaml looks like:
Create-Certificates:
  stage: ansible-play-books-create-certs
  retry:
    max: 2
    when:
      - always
  script:
    - echo "Executing ansible playbooks for generating certficates"
    - ansible-playbook ./ansible-playbooks/1_ca/1_ca.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/2_admin.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/3_kubelet.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/4_kube-controller.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/5_kube-proxy.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/6_kube-scheduler.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/7_kube-api-server.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/8_service-account.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/9_distribute-client-server-cert.yaml
  # when: delayed
  # start_in: 1 minutes
  tags:
    - banuka-gcp-k8s-hard-way
PS: These ansible playbooks are executed on the ansible host itself, not on remote servers, so I can log into the ansible master server and check whether these files are created or not.
Running your playbook without the last shell module produces the following output:
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [127.0.0.1] **************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************************************************************************************************
ok: [127.0.0.1]
TASK [Create ca-csr.json] *****************************************************************************************************************************************************************************************
[WARNING]: File './ca-csr.json' created with default permissions '600'. The previous default was '666'. Specify 'mode' to avoid this warning.
changed: [127.0.0.1]
TASK [Create ca-config.json] **************************************************************************************************************************************************************************************
[WARNING]: File './ca-config.json' created with default permissions '600'. The previous default was '666'. Specify 'mode' to avoid this warning.
changed: [127.0.0.1]
PLAY RECAP ********************************************************************************************************************************************************************************************************
127.0.0.1 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
and checking the existence:
$ ls ca* -al
-rw------- 1 root root 155 Aug 17 02:48 ca-config.json
-rw------- 1 root root 129 Aug 17 02:48 ca-csr.json
So although it's quite a dirty way of writing a playbook - it works.
Why is it dirty?
- you're not using any inventory
- you should use local_action rather than connection: local for local tasks (see the sketch at the end of this answer)
- you are misusing ansible, which is a multi-node configuration management tool, to do the job of a bash script
So, in conclusion: there's nothing wrong with your ansible playbook (except maybe the file permissions?), and if it does not produce the files you should look more in the gitlab-ci direction.
You would need to provide more details on the Gitlab-CI setup, but maybe the stage is not correct?
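To illustrate the local_action point, here is one of the copy tasks rewritten as a sketch (not a fix for the missing certs, just the local-task idiom):
- hosts: 127.0.0.1
  tasks:
    - name: Create ca-csr.json
      become: true
      local_action:
        module: copy
        dest: ca-csr.json
        content: '{"CN":"Kubernetes","key":{"algo":"rsa","size":2048},"names":[{"C":"US","L":"Portland","O":"Kubernetes","OU":"CA","ST":"Oregon"}]}'
Note that dest: ca-csr.json is still a relative path, so where the file ends up depends on the working directory of the ansible-playbook process - worth verifying in the CI job as well.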

No such file or directory when using zip command

I want to create a zip to package my function using ansible. This is my playbook:
---
- name: build lambda functions
  hosts: localhost
  - name: Buid Zip Package
    command: zip -r functions/build/build-function.zip .
    args:
      chdir: functions/function-package/
the function I want to package has its code inside functions/function-package/
I get this error:
TASK [Buid Zip Package] ********************************************************
fatal: [localhost]: FAILED! => {"changed": false, "cmd": "zip -r functions/build/build-function.zip", "failed": true, "msg": "[Errno 2] No such file or directory", "rc": 2}
The paths are correct, I don't know what else to check!
The playbook is in a file at the same level as the /functions directory.
This is the structure of the files:
-- playbook.yml
-- /functions
   -- /build
   -- /function-package
      -- script.py
      -- lib
the zip is to be put inside /build
If you're using chdir: functions/function-package on your task, then you're running the zip command inside the functions/function-package directory. That means that the path functions/build/build-function.zip is probably no longer valid, since you're inside a child of the functions/ directory.
Based on the information in your question, the appropriate path is probably ../build/, like this:
- name: build lambda functions
  hosts: localhost
  - name: Buid Zip Package
    command: zip -r ../build/build-function.zip .
    args:
      chdir: functions/function-package/
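In shell terms, that task is roughly equivalent to:
cd functions/function-package/
zip -r ../build/build-function.zip .
i.e. the archive path must be given relative to the chdir directory, not relative to where the playbook lives.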
Update
If I replicate your directory layout:
$ find *
functions
functions/function-package
functions/function-package/script.py
functions/build
playbook.yml
And run this playbook:
---
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Build Zip Package
      command: zip -r ../build/build-function.zip .
      args:
        chdir: functions/function-package
It works just fine:
$ ansible-playbook playbook.yml
PLAY [localhost] ******************************************************************************
TASK [Build Zip Package] **********************************************************************
changed: [localhost]
PLAY RECAP ************************************************************************************
localhost : ok=1 changed=1 unreachable=0 failed=0
And the .zip file exists as expected:
$ ls -l ./functions/build/build-function.zip
-rw-rw-r--. 1 lars lars 209 Jan 3 08:19 ./functions/build/build-function.zip
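As an aside, the same archive can be produced without shelling out to zip at all, using Ansible's archive module (part of community.general on collection-based installs). A rough, untested sketch - note the internal layout of the resulting zip may differ slightly from zip -r .:
- name: Build Zip Package
  archive:
    path: functions/function-package/
    dest: functions/build/build-function.zip
    format: zip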

Ansible when statement with variable from another task

can someone help me please?
I want to create a folder at host "cfme_tester-0".
For this I check the variable "osp_version" on the "undercloud-0" host, and based on the result I should create a folder on the "cfme_tester-0" host.
Here is my playbook:
- name: take openstack version
  hosts: undercloud-0
  become: true
  become_user: stack
  tasks:
    - name: creating flavor
      shell: |
        source /home/stack/stackrc
        cat /etc/rhosp-release | egrep -o '[0-9]+' | head -1
      register: osp_version
      ignore_errors: True
    - debug: msg="{{ osp_version.stdout }}"

- name: set up CFME tester
  hosts: cfme_tester-0
  become: yes
  become_user: root
  tasks:
    - name: Run prepare script for OSP10
      debug:
      shell: |
        cd /tmp/cfme/ && mkdir osp10
      when: "'10' in osp_version.stdout"
    - name: Run prepare script for OSP13
      debug:
      shell: |
        cd /tmp/cfme/ && mkdir osp13
      when: "'13' in osp_version.stdout"
But an error occurs:
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [undercloud-0] => {
"msg": "10"
}
PLAY [set up CFME tester] *****************************************************************************************************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************************************************************************************************
ok: [cfme_tester-0]
TASK [Run prepare script for OSP10] *******************************************************************************************************************************************************************************
fatal: [cfme_tester-0]: FAILED! => {"msg": "The conditional check ''10' in osp_version.stdout' failed. The error was: error while evaluating conditional ('10' in osp_version.stdout): 'osp_version' is undefined\n\nThe error appears to have been in '/root/infrared/rhos-qe-core-installer/playbooks/my_setup.yaml': line 20, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n tasks:\n - name: Run prepare script for OSP10\n ^ here\n"}
        to retry, use: --limit @/root/infrared/rhos-qe-core-installer/playbooks/my_setup.retry
PLAY RECAP ********************************************************************************************************************************************************************************************************
cfme_tester-0 : ok=1 changed=0 unreachable=0 failed=1
undercloud-0 : ok=3 changed=1 unreachable=0 failed=0
Variables are per-host (because otherwise, what happens when you run a task on multiple hosts and register a variable?). In your first task, you are setting the osp_version variable for host undercloud-0.
If you want to use this variable in your second play, which is running on cfme_tester-0, then you should read the Magic Variables, and How To Access Information About Other Hosts section of the Ansible documentation. You'll need to refer to the variable via the hostvars dictionary, so your second play will look like:
- name: set up CFME tester
  hosts: cfme_tester-0
  become: yes
  become_user: root
  tasks:
    - name: Run prepare script for OSP10
      shell: |
        cd /tmp/cfme/ && mkdir osp10
      when: "'10' in hostvars['undercloud-0'].osp_version.stdout"
    - name: Run prepare script for OSP13
      shell: |
        cd /tmp/cfme/ && mkdir osp13
      when: "'13' in hostvars['undercloud-0'].osp_version.stdout"
...but note that if you're just creating a directory, you would be better off just using the file module instead:
- name: Run prepare script for OSP10
  file:
    path: /tmp/cfme/osp10
    state: directory
  when: "'10' in hostvars['undercloud-0'].osp_version.stdout"

Ansible not detecting Role default variables in its handler

