Can someone help me, please?
I want to create a folder on the host "cfme_tester-0".
To do this, I check the variable "osp_version" on the "undercloud-0" host and, based on the result, I should create a folder on the "cfme_tester-0" host.
Here is my playbook:
- name: take openstack version
  hosts: undercloud-0
  become: true
  become_user: stack
  tasks:
    - name: creating flavor
      shell: |
        source /home/stack/stackrc
        cat /etc/rhosp-release | egrep -o '[0-9]+' | head -1
      register: osp_version
      ignore_errors: True
    - debug: msg="{{ osp_version.stdout }}"

- name: set up CFME tester
  hosts: cfme_tester-0
  become: yes
  become_user: root
  tasks:
    - name: Run prepare script for OSP10
      shell: |
        cd /tmp/cfme/ && mkdir osp10
      when: "'10' in osp_version.stdout"
    - name: Run prepare script for OSP13
      shell: |
        cd /tmp/cfme/ && mkdir osp13
      when: "'13' in osp_version.stdout"
But an error occurs:
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [undercloud-0] => {
"msg": "10"
}
PLAY [set up CFME tester] *****************************************************************************************************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************************************************************************************************
ok: [cfme_tester-0]
TASK [Run prepare script for OSP10] *******************************************************************************************************************************************************************************
fatal: [cfme_tester-0]: FAILED! => {"msg": "The conditional check ''10' in osp_version.stdout' failed. The error was: error while evaluating conditional ('10' in osp_version.stdout): 'osp_version' is undefined\n\nThe error appears to have been in '/root/infrared/rhos-qe-core-installer/playbooks/my_setup.yaml': line 20, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n tasks:\n - name: Run prepare script for OSP10\n ^ here\n"}
to retry, use: --limit #/root/infrared/rhos-qe-core-installer/playbooks/my_setup.retry
PLAY RECAP ********************************************************************************************************************************************************************************************************
cfme_tester-0 : ok=1 changed=0 unreachable=0 failed=1
undercloud-0 : ok=3 changed=1 unreachable=0 failed=0
Registered variables are per-host (otherwise, what would happen when you run a task on multiple hosts and register a variable?). In your first play, you are setting the osp_version variable only for the host undercloud-0.
If you want to use this variable in your second play, which runs on cfme_tester-0, then you should read the Magic Variables, and How To Access Information About Other Hosts section of the Ansible documentation. You'll need to refer to the variable via the hostvars dictionary, so your second play will look like:
- name: set up CFME tester
  hosts: cfme_tester-0
  become: yes
  become_user: root
  tasks:
    - name: Run prepare script for OSP10
      shell: |
        cd /tmp/cfme/ && mkdir osp10
      when: "'10' in hostvars['undercloud-0'].osp_version.stdout"
    - name: Run prepare script for OSP13
      shell: |
        cd /tmp/cfme/ && mkdir osp13
      when: "'13' in hostvars['undercloud-0'].osp_version.stdout"
...but note that if you're only creating a directory, you would be better off using the file module instead:
- name: Run prepare script for OSP10
  file:
    path: /tmp/cfme/osp10
    state: directory
  when: "'10' in hostvars['undercloud-0'].osp_version.stdout"
I am using the Ansible code below to get the file system details (NAME, MOUNTPOINT, FSTYPE, SIZE) from the node servers to the control server. I am not getting any issues while running the playbook, but the CSV file is not copied to the control machine.
Can anyone please help me with this?
tasks:
  - name: Fsdetails
    shell: |
      lsblk -o NAME,MOUNTPOINT,FSTYPE,SIZE > $(hostname).csv
    register: fsdetails_files_to_copy
  - name: Fetch the fsdetails
    fetch:
      src: "{{ item }}"
      dest: /data3/deployments/remediation
      flat: yes
    with_items: "{{ fsdetails_files_to_copy.stdout_lines }}"
Output:
PLAY [all] ************************************************************************************************
TASK [Gathering Facts] ************************************************************************************
ok: [10.xxx.xxx.xx]
TASK [Fsdetails] ******************************************************************************************
changed: [10.xxx.xxx.xx]
TASK [Fetch the fsdetails] ********************************************************************************
PLAY RECAP ************************************************************************************************
10.xxx.xxx.xx : ok=2 changed=1 unreachable=0 failed=0
Your shell command is not returning anything, since it writes its output to the CSV file. Because of this, your fetch task has nothing to loop over (stdout_lines is an empty list).
What you could do is make your shell task echo the CSV file name, $(hostname).csv:
- name: Fsdetails
  shell: |
    lsblk -o NAME,MOUNTPOINT,FSTYPE,SIZE > $(hostname).csv && echo $(hostname).csv
  register: fsdetails_files_to_copy
This way, your fetch task will pick the correct filename to download.
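As an alternative sketch (assuming fact gathering is enabled so that ansible_hostname is available, and using /tmp as an assumed location for the CSV), you could skip the echo entirely and build the filename from the same fact on both sides:
- name: Fsdetails
  shell: lsblk -o NAME,MOUNTPOINT,FSTYPE,SIZE > /tmp/{{ ansible_hostname }}.csv

- name: Fetch the fsdetails
  fetch:
    src: "/tmp/{{ ansible_hostname }}.csv"
    dest: /data3/deployments/remediation/
    flat: yes
Note the trailing slash on dest: with flat: yes, that keeps each host's own file name instead of every host overwriting a single file.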
I want to save the host names and the results of a Linux command in dictionary format. The problem is that I cannot get the dictionary format, and each new line stored in the results.txt file replaces the previous lines.
---
- hosts: "{{variable_host | default('lsbxdmss001')}}"
  tasks:
    - name: Check Redhat version for selected servers
      shell:
        cmd: rpm --query redhat-release-server
        warn: False
      register: myshell_output
    - debug: var=myshell_output
    - name: set fact
      set_fact: output = "{{item.0}}:{{item.1}}"
      with_together:
        - groups['{{variable_host}}']
        - "{{myshell_output.stdout}}"
      register: output
    - debug: var=output
    - name: copy the output to results.txt
      copy:
        content: "{{output}}"
        dest: results.txt
      delegate_to: localhost
Looks like you have four issues:
Your dictionary creation seems odd; a dictionary in Python is enclosed in curly braces { ... }.
So your line
"{{item.0}}:{{item.1}}"
should rather be
"{{ {item.0: item.1} }}"
Appending to a file in Ansible is done via the lineinfile module rather than copy.
Your loop doesn't really make sense, as Ansible already executes each task on every host; this is also why you end up with only one entry: you keep overriding the output fact.
For the same reason as above, instead of your with_together loop, you should use the special variable inventory_hostname, which is
"The inventory name for the ‘current’ host being iterated over in the play"
Source: https://docs.ansible.com/ansible/latest/reference_appendices/special_variables.html#special-variables
So, after your shell command, the next task should be:
- lineinfile:
    path: results.txt
    line: "{{ {inventory_hostname: myshell_output.stdout} | string }}"
    create: yes
  delegate_to: localhost
Mind that the string filter was added in order to convert the dictionary into a string to write it into the file without Ansible issuing a warning.
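As a side note, and only as an alternative sketch, the to_json filter also collapses the dictionary onto one line, in JSON form, without needing string:
- lineinfile:
    path: results.txt
    line: "{{ {inventory_hostname: myshell_output.stdout} | to_json }}"
    create: yes
  delegate_to: localhost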
An example playbook would be:
- hosts: all
  gather_facts: no
  tasks:
    - name: Creating a fake shell result
      shell:
        cmd: echo 'Linux Vlersion XYZ {{ inventory_hostname }}'
      register: shell_output
    - lineinfile:
        path: results.txt
        line: "{{ {inventory_hostname: shell_output.stdout} | string }}"
        create: yes
      delegate_to: localhost
Which gives the recap:
PLAY [all] ******************************************************************************************************************************************
TASK [Creating a fake shell result] *****************************************************************************************************************
changed: [host2]
changed: [host1]
TASK [lineinfile] ***********************************************************************************************************************************
changed: [host1 -> localhost]
changed: [host2 -> localhost]
PLAY RECAP ******************************************************************************************************************************************
host1 : ok=2 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host2 : ok=2 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
And fills the results.txt file with:
{'host1': 'Linux Vlersion XYZ host1'}
{'host2': 'Linux Vlersion XYZ host2'}
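If you would rather end up with one dictionary covering every host instead of one line per host, here is an untested sketch that aggregates the registered results from hostvars in a single delegated task (it assumes myshell_output was registered on every host in the play):
- name: Write all results as one dictionary
  copy:
    content: "{{ dict(ansible_play_hosts | zip(ansible_play_hosts | map('extract', hostvars, ['myshell_output', 'stdout']))) | to_nice_json }}"
    dest: results.txt
  run_once: true
  delegate_to: localhost
This would produce a single JSON object in results.txt, such as {"host1": "...", "host2": "..."}.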
In my Ansible playbook I have a task that executes a shell command. One of the parameters of that command is a password. When the shell command fails, Ansible prints the whole JSON object, which includes the command containing the password. If I use no_log: True then I get censored output and cannot get stderr_lines. Is there a way to customize the output when the shell command execution fails?
You can take advantage of Ansible blocks and their error handling feature.
Here is an example playbook:
---
- name: Block demo for shell
  hosts: localhost
  gather_facts: false
  tasks:
    - block:
        - name: my command
          shell: my_command is bad
          register: cmdresult
          no_log: true
      rescue:
        - name: show error
          debug:
            msg: "{{ cmdresult.stderr }}"
        - name: fail the playbook
          fail:
            msg: Error on command. See debug of stderr above
which gives the following result:
PLAY [Block demo for shell] *********************************************************************************************************************************************************************************************************************************************
TASK [my command] *******************************************************************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true}
TASK [show error] *******************************************************************************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "/bin/sh: 1: my_command: not found"
}
TASK [fail the playbook] ************************************************************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Error on command. See debug of stderr above"}
PLAY RECAP **************************************************************************************************************************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=1 ignored=0
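A slightly shorter rescue is also possible for the block above; as a sketch, you can put the stderr straight into the fail message so that a single task both reports the error and fails the play:
  rescue:
    - name: fail the playbook with the command's stderr
      fail:
        msg: "Command failed: {{ cmdresult.stderr }}"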
You can utilize something like this:
- name: Running it
  hosts: localhost
  tasks:
    - name: failing the task
      shell: sh a.sh > output.txt
      ignore_errors: true
      register: abc
    - name: now failing
      command: rm output.txt
      when: abc is succeeded
stdout will be written to a file. If the command fails you can check the file and debug it; if it succeeds, the file will be deleted.
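If you also want the captured output to show up in the playbook run itself rather than only in the file, a hedged addition (reusing the output.txt path from the tasks above) would be:
- name: Read the captured output on failure
  command: cat output.txt
  register: captured
  when: abc is failed

- name: Show the captured output
  debug:
    msg: "{{ captured.stdout_lines }}"
  when: abc is failed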
Does ansible pass Role Default variables to the Handlers within the same Role?
Here's a minimal excerpt of the playbook that has the issue:
Role hierarchy
- playbook.yml
- roles/
  - gunicorn/
    - defaults/
      - main.yml
    - handlers/
      - main.yml
  - code-checkout/
    - tasks/
      - main.yml
Here are the file contents:
gunicorn/defaults/main.yml
---
gu_log: "/tmp/gunicorn.log"
gunicorn/handlers/main.yml
---
- name: Clear Gunicorn Log
  shell: rm {{ gu_log }}
finalize/tasks/main.yml
---
- name: Test Handlers
  shell: ls
  notify:
    - Restart Gunicorn
playbook.yml
---
- name: Deploy
  hosts: webservers
  tasks:
    - include: roles/finalize/tasks/main.yml
  handlers:
    - include: roles/gunicorn/handlers/main.yml
AFAIK everything looks good. However, I get this error during playbook execution:
FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'gu_log' is undefined\n\nThe error appears to have been in '/roles/gunicorn/handlers/main.yml': line 3, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Restart Gunicorn\n ^ here\n"}
Using Ansible 2.2 on Ubuntu 12.04 LTS
Here's a modified version of techraf's script that creates all the directories and demonstrates my issue
#!/bin/bash
mkdir -p ./rtindru-test/roles/gunicorn
mkdir -p ./rtindru-test/roles/gunicorn/defaults
mkdir -p ./rtindru-test/roles/gunicorn/handlers
mkdir -p ./rtindru-test/roles/finalize/tasks
cat >./rtindru-test/roles/finalize/tasks/main.yml <<HANDLERS_END
---
- name: Test Handlers
  shell: rm {{ gu_log }}
HANDLERS_END
cat >./rtindru-test/roles/gunicorn/handlers/main.yml <<HANDLERS_END
---
- name: Clear Gunicorn Log
  shell: rm {{ gu_log }}
HANDLERS_END
cat >./rtindru-test/roles/gunicorn/defaults/main.yml <<DEFAULTS_END
---
gu_log: "/tmp/gunicorn.log"
DEFAULTS_END
cat >./rtindru-test/playbook.yml <<PLAYBOOK_END
---
- name: Deploy
  hosts: localhost
  tasks:
    - include: roles/finalize/tasks/main.yml
  handlers:
    - include: roles/gunicorn/handlers/main.yml
PLAYBOOK_END
touch /tmp/gunicorn.log
ls -l /tmp/gunicorn.log
ansible-playbook ./rtindru-test/playbook.yml
ls -l /tmp/gunicorn.log
Output
PLAY [Deploy]

TASK [setup] *******************************************************************
ok: [localhost]

TASK [Test Handlers] ***********************************************************
fatal: [localhost]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'gu_log' is undefined\n\nThe error appears to have been in '/rtindru-test/roles/finalize/tasks/main.yml': line 2, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: Test Handlers\n ^ here\n"}
to retry, use: --limit #/rtindru-test/playbook.retry

PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1
You are neither defining nor using any roles. With the following task:
- include: roles/finalize/tasks/main.yml
you are only including a tasks file into your playbook. It has nothing to do with roles.
To assign a role you should specify a list of roles for a play (one or more):
roles:
  - my_role1
  - my_role2
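For the playbook in the question, that change would look roughly like the sketch below (the role names are taken from the directory hierarchy shown above, so treat them as an assumption):
---
- name: Deploy
  hosts: webservers
  roles:
    - code-checkout
    - gunicorn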
Please have a look at the documentation on roles, and feel free to use the playbook and structure created by the script below.
Does ansible pass Role Default variables to the Handlers within the same Role?
Yes it does.
For a proof run the following bash script which creates and runs a minimal example. It takes the contents of gunicorn/defaults/main.yml and gunicorn/handlers/main.yml from the question intact and adds missing components: the tasks and the playbook. It creates a file to be removed and runs the playbook.
#!/bin/bash
mkdir -p ./so41285033/roles/gunicorn
mkdir -p ./so41285033/roles/gunicorn/defaults
mkdir -p ./so41285033/roles/gunicorn/handlers
mkdir -p ./so41285033/roles/gunicorn/tasks
cat >./so41285033/roles/gunicorn/tasks/main.yml <<TASKS_END
---
- debug:
  changed_when: true
  notify: Clear Gunicorn Log
TASKS_END
cat >./so41285033/roles/gunicorn/handlers/main.yml <<HANDLERS_END
---
- name: Clear Gunicorn Log
  shell: rm {{ gu_log }}
  when: "'apiservers' not in group_names"
HANDLERS_END
cat >./so41285033/roles/gunicorn/defaults/main.yml <<DEFAULTS_END
---
gu_log: "/tmp/gunicorn.log"
DEFAULTS_END
cat >./so41285033/playbook.yml <<PLAYBOOK_END
---
- hosts: localhost
  gather_facts: no
  connection: local
  roles:
    - gunicorn
PLAYBOOK_END
touch /tmp/gunicorn.log
ls -l /tmp/gunicorn.log
ansible-playbook ./so41285033/playbook.yml
ls -l /tmp/gunicorn.log
The result:
-rw-r--r-- 1 techraf wheel 0 Dec 23 07:57 /tmp/gunicorn.log
[WARNING]: Host file not found: /etc/ansible/hosts
[WARNING]: provided hosts list is empty, only localhost is available
PLAY [localhost] ***************************************************************
TASK [gunicorn : debug] ********************************************************
ok: [localhost] => {
"msg": "Hello world!"
}
RUNNING HANDLER [gunicorn : Clear Gunicorn Log] ********************************
changed: [localhost]
[WARNING]: Consider using file module with state=absent rather than running rm
PLAY RECAP *********************************************************************
localhost : ok=2 changed=2 unreachable=0 failed=0
ls: /tmp/gunicorn.log: No such file or directory
Interpretation:
Before running the playbook the file /tmp/gunicorn.log was created and its existence verified:
-rw-r--r-- 1 techraf wheel 0 Dec 23 07:57 /tmp/gunicorn.log
After running the playbook the file /tmp/gunicorn.log does not exist:
ls: /tmp/gunicorn.log: No such file or directory
Ansible correctly passed the variable gu_log value to the Clear Gunicorn Log handler which removed the file.
Final remark:
The problem described in the question is impossible to reproduce, because the question does not contain a complete and verifiable example in the sense of an MCVE.