Ansible mount module to just check state and not report status

Team,
I am writing a validation task that is supposed to check whether a mount exists and report its state from the output. My task is below, but it fails and I am not sure how to handle it. Any hint on what adjustments I need to make?
- name: "Verify LVP Mounts on CPU Nodes for mount_device"
shell: "mount | grep sdd"
register: lvp_mount
delegate_to: "{{ item }}"
with_items: "{{ groups['kube-cpu-node'] }}"
#failed_when: lvp_mount.rc != 0
#ignore_errors: yes
# - debug:
# var: lvp_mount
- name: "Report status of mounts"
fail:
msg: |
Mounts sdd not found
Output of `mount | grep sdd`:
{{ lvp_mount.stdout }}
{{ lvp_mount.stderr }}
when: lvp_mount | failed
Output:
changed: [localhost -> ] => (item=hostA)
[WARNING]: Consider using the mount module rather than running 'mount'. If you
need to use command because mount is insufficient you can add 'warn: false' to
this command task or set 'command_warnings=False' in ansible.cfg to get rid of
this message.
failed: [localhost -> hostA.test.net] (item=hostA) => {"ansible_loop_var": "item", "changed": true, "cmd": "mount | grep sdd", "delta": "0:00:00.009284", "end": "2019-11-06 18:22:56.138007", "failed_when_result": true, "item": "hostA", "msg": "non-zero return code", "rc": 1, "start": "2019-11-06 18:22:56.128723", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
...ignoring
TASK [services-pre-install-checks : Report status of mounts] ************
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'stdout'\n\nThe error appears to be in '/home/run_ansible_playbook/k8s/baremetal/roles/services-pre-install-checks/tasks/main.yml': line 265, column 9, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: \"Report status of mounts\"\n ^ here\n"}

Your task "Verify LVP Mounts on CPU Nodes for mount_device" is a loop, so the register behavior is modified as described in the documentation.
You can access the individual outputs with lvp_mount.results[X].stdout, where X is the index.
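For illustration, a minimal sketch of a follow-up task that walks that results list (field names are taken from the output you posted; treat it as a starting point rather than a drop-in fix):

- name: "Report mount check result per host"
  debug:
    msg: "{{ item.item }}: rc={{ item.rc }}, stdout={{ item.stdout }}"
  loop: "{{ lvp_mount.results }}"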
There is a cleaner way to write this, however. In particular, using:
delegate_to: "{{ item }}"
with_items: "{{ groups['kube-cpu-node'] }}"
is bad practice. You can accomplish your desired outcome at the play level.
For example:
- hosts: kube-cpu-node   # allows you to iterate over all hosts in the kube-cpu-node group
  tasks:
    - name: "Verify LVP Mounts on CPU Nodes for mount_device"
      shell: "mount | grep sdd"
      register: lvp_mount
      ignore_errors: yes
      # notice there is no loop here

    # no loop, so you can use lvp_mount.stdout and lvp_mount.stderr directly
    - name: "Report status of mounts"
      fail:
        msg: |
          Mounts sdd not found
          Output of `mount | grep sdd`:
          {{ lvp_mount.stdout }}
          {{ lvp_mount.stderr }}
      when: lvp_mount is failed
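If the goal is really just to report the state rather than fail the play, a hedged variant of the second task could branch on the return code instead (this builds on the play-level rewrite above, so lvp_mount is a single result, not a list):

- name: "Report status of mounts without failing"
  debug:
    msg: "{{ 'mount sdd exists' if lvp_mount.rc == 0 else 'mount sdd does not exist' }}"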

Related

Ansible: pvchange with shell does not get executed

I'm not able to execute the shell command pvchange -u /dev/sde with Ansible.
Can you please help me figure out what's wrong?
vars:
  tgt_string_id: TGTDBID
  action_vg: "clone_{{ tgt_string_id }}vg"
  host_vars_dir: /home/abhn02a/ansible/host_vars
  action_host: "{{ tgt_vm | default('localhost') }}"
  tgt_disks: "/dev/sde /dev/sdf"
Tasks:

- name: Display target disks
  debug:
    msg: "{{ item }}"
  loop: "{{ tgt_disks.split(' ') }}"

- name: Change pv uuid
  become: yes
  shell: |
    pvchange -u "{{ item }}"
  loop: "{{ tgt_disks.split(' ') }}"
  #changed_when: false
  delegate_to: "{{ action_host }}"
Output:
......
.......
TASK [Display target disks]
*********************************************************************************************
ok: [localhost] => (item=/dev/sde) => {
    "msg": "/dev/sde"
}
ok: [localhost] => (item=/dev/sdf) => {
    "msg": "/dev/sdf"
}
!! This is not true, uuid stays same !!!
TASK [Change pv uuid]
***************************************************************************************************
changed: [localhost] => (item=/dev/sde)
changed: [localhost] => (item=/dev/sdf)
.......
.....
When I check the uuid of the disk, it has not been changed. But when I type "pvchange -u /dev/sde" myself, it changes the uuid of the disk with no problem!
I have Ansible 2.9 with Python 2.7.5.
Can you help, please?
(Oh, this is my 1st question on SO, sorry for the previous formatting)
Edit
So I have almost reproduced the error at home.
This happens when I try to do a pvchange on a cloned disk (But again, only with Ansible!!)
I have cloned 2 disks (/dev/xvdc and /dev/xvde) to (/dev/xvdf and /dev/xvdg), re-ran the task, and got the following output:
TASK [Change pv uuid]
*************************************************************************
failed: [localhost] (item=/dev/xvdf) => {"ansible_loop_var": "item", "changed": true, "cmd": "pvchange -u \"/dev/xvdf\"\n", "delta": "0:00:00.038053", "end": "2021-03-06 03:46:07.526239", "item": "/dev/xvdf", "msg": "non-zero return code", "rc": 5, "start": "2021-03-06 03:46:07.488186", "stderr": " Failed to find physical volume \"/dev/xvdf\".", "stderr_lines": [" Failed to find physical volume \"/dev/xvdf\"."], "stdout": " 0 physical volumes changed / 0 physical volumes not changed", "stdout_lines": [" 0 physical volumes changed / 0 physical volumes not changed"]}
This seems to be because the cloned disks are not yet PVs in LVM; I do not see /dev/xvdf and /dev/xvdg when I run pvs.
But at work this was different; I could see the cloned disks in the pvs output.
Will check again next week.
Thank you
From the output of the execution, you can see that the command executed is pvchange -u "/dev/xvdf", which raises the error message 'Failed to find physical volume "/dev/xvdf".', so it looks like it doesn't like the quotes.
You can change your task to this; it should work better:
- name: Change pv uuid
  become: yes
  shell: |
    pvchange -u {{ item }}
  loop: "{{ tgt_disks.split(' ') }}"
You may also need to run pvscan --cache before doing pvchange -u <PV>, so that LVM picks up the cloned disks.
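Put together, a rough sketch of both steps under the same variables as the question (untested; pvscan --cache only refreshes LVM's device cache so the cloned disks become visible as PVs):

- name: Refresh LVM device cache so cloned disks are visible
  become: yes
  command: pvscan --cache
  delegate_to: "{{ action_host }}"

- name: Change pv uuid
  become: yes
  command: pvchange -u {{ item }}
  loop: "{{ tgt_disks.split(' ') }}"
  delegate_to: "{{ action_host }}"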

Ansible search for a substring in stdout output

Team,
I am trying to match a version, and that fails when the whole string does not match, so I just want to match the first two octets. I tried several combinations but no luck.
- name: "Validate k8s version"
shell: "kubectl version --short"
register: k8s_version_live
failed_when: k8s_version_live.stdout_lines is not search("{{ k8s_server_version }}")
#failed_when: "'{{ k8s_server_version }}' not in k8s_version_live.stdout_lines"
ignore_errors: yes
- debug:
var: k8s_version_live.stdout_lines
Output:
[WARNING]: conditional statements should not include jinja2 templating delimiters such as {{ }} or {% %}. Found: k8s_version_live.stdout_lines is not search("{{
k8s_server_version }}")
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "kubectl version --short", "delta": "0:00:00.418128", "end": "2019-12-05 02:13:15.108997", "failed_when_result": true, "rc": 0, "start": "2019-12-05 02:13:14.690869", "stderr": "", "stderr_lines": [], "stdout": "Client Version: v1.13.3\nServer Version: v1.13.10", "stdout_lines": ["Client Version: v1.13.3", "Server Version: v1.13.10"]}
...ignoring
TASK [team-services-pre-install-checks : debug] *************************************************************************************************************************
Thursday 05 December 2019 02:13:15 +0000 (0:00:00.902) 0:00:01.039 *****
ok: [localhost] => {
    "k8s_version_live.stdout_lines": [
        "Client Version: v1.13.3",
        "Server Version: v1.13.10"
    ]
}
As the error says:
conditional statements should not include jinja2 templating delimiters such as {{ }} or {% %}.
In a conditional statement, you are already inside a Jinja template context. You can just refer to variables by name:
- name: "Validate k8s version"
shell: "kubectl version --short"
register: k8s_version_live
failed_when: k8s_version_live.stdout_lines is not search(k8s_server_version)
ignore_errors: yes
Although you probably want k8s_version_live.stdout instead of k8s_version_live.stdout_lines.
I would probably write the task as:
- name: "Validate k8s version"
command: "kubectl version --short"
register: k8s_version_live
failed_when: k8s_server_version not in k8s_version_live.stdout
ignore_errors: true
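Another option, if you would rather see an explicit message than rely on failed_when, is the assert module; a sketch assuming k8s_server_version holds the expected substring (for example "v1.13"):

- name: "Validate k8s version"
  command: "kubectl version --short"
  register: k8s_version_live

- name: "Assert the expected server version is present"
  assert:
    that:
      - k8s_server_version in k8s_version_live.stdout
    fail_msg: "Expected {{ k8s_server_version }} in: {{ k8s_version_live.stdout }}"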
Q: "Match a version ... match first two octets"
A: Use Version Comparison. For example, create the variable k8s_server_version from the registered output:
- set_fact:
    k8s_server_version: "{{ k8s_version_live.stdout_lines.1.split(':').1[2:] }}"
Compare the first two numbers of the version
- debug:
    msg: "{{ k8s_server_version }} match 1.13"
  when:
    - k8s_server_version is version('1.13', '>=')
    - k8s_server_version is version('1.14', '<')
gives
localhost | SUCCESS => {
    "msg": "1.13.10 match 1.13"
}
Fail when the version does not match
- fail:
    msg: "{{ k8s_server_version }} does not match 1.12"
  when: not (k8s_server_version is version('1.12', '>=') and
             k8s_server_version is version('1.13', '<'))
gives
localhost | FAILED! => {
    "changed": false,
    "msg": "1.13.10 does not match 1.12"
}

ansible shell task erroring out without proper message

Team,
I am trying to verify whether sdd exists in the mount command output. When it does, I am fine, but when it does not, my task simply fails instead of telling me that no such mount exists. Any hint on how to tackle this? I don't want my task to fail; I want it to report the state.
When the status code is 0 I am good, but when the status code is 1 I just see a failure instead of a helpful message that mount sdd doesn't exist.
"mount | grep sdd"
- name: "Verify LVP Mounts sdd exists on CPU Nodes for mount_device"
shell: "mount | grep sdd"
register: lvp_mount
ignore_errors: yes
failed_when: False
delegate_to: "{{ item }}"
with_items: "{{ groups['kube-cpu-node'] }}"
- name: "Report status of mounts"
fail:
msg: |
Mounts sdd not found
Output of `mount | grep sdd`:
{{ lvp_mount.stdout }}
{{ lvp_mount.stderr }}
when: lvp_mount | failed
Output:
fatal: [localhost]: FAILED! => {"msg": "The conditional check 'lvp_mount | failed' failed. The error was: template error while templating string: no filter named 'failed'. String: {% if lvp_mount | failed %} True {% else %} False {% endif %}\n\nThe error appears to be in '/k8s/baremetal/roles/maglev-services-pre-install-checks/tasks/main.yml': line 111, column 9, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n with_items: \"{{ groups['kube-cpu-node'] }}\"\n - name: \"Report status of mounts\"\n ^ here\n"}
Expected output:
if lvp_mount.rc == 0
    msg: mount sdd exists
if lvp_mount.rc == 1
    msg: mount sdd does not exist
if lvp_mount.rc not in [0, 1]
    msg: mount exec error
The error is telling you that there is no filter named failed. To check for a failed result in a conditional, use this instead:
when: lvp_mount is failed
Alternatively, to check for a successful result, use:
when: lvp_mount is succeeded
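To get the three-way report from your expected output, one hedged approach is to branch on the return code of each registered result (your check task loops with with_items, so the data lives under lvp_mount.results; adjust to taste):

- name: "Report status of mounts"
  debug:
    msg: >-
      {{ 'mount sdd exists' if item.rc == 0
         else 'mount sdd does not exist' if item.rc == 1
         else 'mount exec error' }}
  loop: "{{ lvp_mount.results }}"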

Getting Ansible runtime error 'dict object' has no attribute 'stdout_lines' despite variable not being null

Below is my playbook, which has a variable running_processes that contains a list of pids (one or more).
Next, I read the user ids for each of the pids. All good so far.
I then try to print the list of user ids in the curr_user_ids variable using the debug module, which is when I get the error: 'dict object' has no attribute 'stdout_lines'
I was expecting curr_user_ids to contain one or more entries, as evident from the output shared below.
- name: Get running processes list from remote host
  shell: "ps -few | grep java | grep -v grep | awk '{print $2}'"
  changed_when: false
  register: running_processes

- name: Gather USER IDs from processes id before killing.
  shell: "id -nu `cat /proc/{{ running_processes.stdout }}/loginuid`"
  register: curr_user_ids
  with_items: "{{ running_processes.stdout_lines }}"

- debug: msg="USER ID LIST HERE:{{ curr_user_ids.stdout }}"
  with_items: "{{ curr_user_ids.stdout_lines }}"
TASK [Get running processes list from remote host] **********************************************************************************************************
task path: /app/wls/startstop.yml:22
ok: [10.9.9.111] => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "cmd": "ps -few | grep java | grep -v grep | awk '{print $2}'", "delta": "0:00:00.166049", "end": "2019-11-06 11:49:42.298603", "rc": 0, "start": "2019-11-06 11:49:42.132554", "stderr": "", "stderr_lines": [], "stdout": "24032", "stdout_lines": ["24032"]}
TASK [Gather USER IDS of processes id before killing.] ******************************************************************************************************
task path: /app/wls/startstop.yml:59
changed: [10.9.9.111] => (item=24032) => {"ansible_loop_var": "item", "changed": true, "cmd": "id -nu `cat /proc/24032/loginuid`", "delta": "0:00:00.116639", "end": "2019-11-06 11:46:41.205843", "item": "24032", "rc": 0, "start": "2019-11-06 11:46:41.089204", "stderr": "", "stderr_lines": [], "stdout": "user1", "stdout_lines": ["user1"]}
TASK [debug] ************************************************************************************************************************************************
task path: /app/wls/startstop.yml:68
fatal: [10.9.9.111]: FAILED! => {"msg": "'dict object' has no attribute 'stdout_lines'"}
Can you please suggest why I am getting the error and how I can resolve it?
A few points on why your solution didn't work.
The task Get running processes list from remote host returns a newline-separated (\n) string, so you will need to process this and turn the output into a proper list object first.
The task Gather USER IDs from processes id before killing. returns a dictionary containing the key results, whose value is a list, so you will need to iterate over it and fetch the stdout value of each element.
This is how I solved it.
---
- hosts: "localhost"
  gather_facts: true
  become: true

  tasks:
    - name: Set default values
      set_fact:
        process_ids: []
        user_names: []

    - name: Get running processes list from remote host
      shell: "ps -few | grep java | grep -v grep | awk '{print $2}'"
      changed_when: false
      register: running_processes

    - name: Register a list of Process ids (Split newline from output before)
      set_fact:
        process_ids: "{{ running_processes.stdout.split('\n') }}"

    - name: Gather USER IDs from processes id before killing.
      shell: "id -nu `cat /proc/{{ item }}/loginuid`"
      register: curr_user_ids
      with_items: "{{ process_ids }}"

    - name: Register a list of User names (Out of result from before)
      set_fact:
        user_names: "{{ user_names + [item.stdout] | unique }}"
      when: item.rc == 0
      with_items:
        - "{{ curr_user_ids.results }}"

    - name: Set unique entries in User names list
      set_fact:
        user_names: "{{ user_names | unique }}"

    - name: DEBUG
      debug:
        msg: "{{ user_names }}"
The variable curr_user_ids registers results of each iteration
register: curr_user_ids
with_items: "{{ running_processes.stdout_lines }}"
The list of the results is stored in
curr_user_ids.results
Take a look at the variable
- debug:
    var: curr_user_ids

and loop over the stdout_lines

- debug:
    var: item.stdout_lines
  loop: "{{ curr_user_ids.results }}"

local_action: shell error concatenate files

I get the following error in my playbook. Why, and how do I fix it? (I am getting the list of updates from remote hosts and concatenating the lists into one file.)
- name: Save update_deb_packs in file on ansible-host
  copy:
    content: "{{ update_deb_packs.stdout }}"
    dest: ~/tmp/{{ inventory_hostname }}_update_deb_packs
  delegate_to: 127.0.0.1

- name: merge files from different hosts in once
  local_action: shell cat ~/tmp/* > ~/tmp/list_update

Result:
TASK [Save update_deb_packs in file on ansible-host] *******************************************
changed: [g33-linux -> 127.0.0.1]
changed: [dell-e6410 -> 127.0.0.1]
TASK [merge files from different hosts in once] ********************************
changed: [g33-linux -> localhost]
fatal: [dell-e6410 -> localhost]: FAILED! => {"changed": true, "cmd": "cat ~/tmp/* > ~/tmp/list_update", "delta": "0:00:00.052968", "end": "2017-12-23 13:48:37.029706", "msg": "non-zero return code", "rc": 1, "start": "2017-12-23 13:48:36.976738", "stderr": "cat: /home/alex/tmp/list_update: input and output in one file", "stderr_lines": ["cat: /home/alex/tmp/list_update: input and output in one file"], "stdout": "", "stdout_lines": []}
to retry, use: --limit #/home/alex/.ansible/playbooks/update_deb_list.retry
The directory ~/tmp is empty beforehand. After the shell task, the result file (list_update) exists and its content is correct.
You use a glob ~/tmp/* and one of the files is ~/tmp/list_update, so effectively you order it to read from and write to the same file with a redirection, and the system is refusing to do that.
You also didn't specify run_once, so the merge files from different hosts in once task will run as many times as there are hosts defined for the play.
If you are sure ~/tmp/list_update didn't exist beforehand, adding run_once: true to that task would resolve the problem, but it's far from the best way.
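For completeness, a rough sketch of that stopgap, with the glob narrowed so the output file can never match its own input (the filename pattern is assumed from the copy task above):

- name: merge files from different hosts in once
  local_action: shell cat ~/tmp/*_update_deb_packs > ~/tmp/list_update
  run_once: true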
Long story short, you can achieve what you want in one task, for example
with a Jinja2 template:
- copy:
    content: "{% for host in ansible_play_hosts %}{{ hostvars[host].update_deb_packs.stdout }}\n{% endfor %}"
    dest: ~/tmp/list_update
  delegate_to: localhost
  run_once: true
You can add separators with the host name to indicate which data comes from which host:
content: "{% for host in ansible_play_hosts %}From host: {{ host }}\n{{ hostvars[host].update_deb_packs.stdout }}\n{% endfor %}"
using operations on lists:
content: "{{ ansible_play_hosts | map('extract', hostvars, 'update_deb_packs') | map(attribute='stdout') | list }}"
