I am running a custom command because I haven't found a working module doing what I need, and I want to adjust the changed flag to reflect the actual behaviour:
- name: Remove unused images
  shell: '[ -n "$(docker images -q -f dangling=true)" ] && docker rmi $(docker images -q -f dangling=true) || echo Ignoring failure...'
  register: command_result
  changed_when: "command_result.stdout == 'Ignoring failure...'"
- debug: var="1 {{ command_result.stdout }}"
when: "command_result.stdout != 'Ignoring failure...'"
- debug: var="2 {{ command_result.stdout }}"
when: "command_result.stdout == 'Ignoring failure...'"
(I know the shell command is ugly and could be improved by a more complex script but I don't want to for now)
Running this task on a host where no Docker image can be removed gives the following output:
TASK: [utils.dockercleaner | Remove unused images] ****************************
changed: [cloud-host] => {"changed": true, "cmd": "[ -n \"$(docker images -q -f dangling=true)\" ] && docker rmi $(docker images -q -f dangling=true) || echo Ignoring failure...", "delta": "0:00:00.064451", "end": "2015-07-30 18:37:25.620135", "rc": 0, "start": "2015-07-30 18:37:25.555684", "stderr": "", "stdout": "Ignoring failure...", "stdout_lines": ["Ignoring failure..."], "warnings": []}
TASK: [utils.dockercleaner | debug var="DIFFERENT {{ command_result.stdout }}"] ***
skipping: [cloud-host]
TASK: [utils.dockercleaner | debug var="EQUAL {{ command_result.stdout }}"] ***
ok: [cloud-host] => {
"var": {
"EQUAL Ignoring failure...": "EQUAL Ignoring failure..."
}
}
So, I have this stdout return value "stdout": "Ignoring failure...", and the debug task shows the strings are equal, so why is the task still displayed as "changed"?
I am using ansible 1.9.1.
The documentation I am referring to is this one: http://docs.ansible.com/ansible/playbooks_error_handling.html#overriding-the-changed-result
I think you may have misinterpreted what changed_when does.
changed_when marks the task as changed based on the evaluation of the conditional statement which in your case is:
"command_result.stdout == 'Ignoring failure...'"
So whenever this condition is true, the task will be marked as changed.
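Since your stdout is 'Ignoring failure...' exactly when nothing could be removed, inverting the comparison should give the behaviour you want. A minimal sketch of the adjusted task (same command, only the operator changes):

- name: Remove unused images
  shell: '[ -n "$(docker images -q -f dangling=true)" ] && docker rmi $(docker images -q -f dangling=true) || echo Ignoring failure...'
  register: command_result
  changed_when: "command_result.stdout != 'Ignoring failure...'"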
Related
I have this recipe:
- name: Kill usr/bin/perl -w /usr/share/debconf/frontend
  shell: "{{ item }}"
  with_items:
    - 'pkill -9 -f debconf/frontend || echo'
  changed_when: false # Don't report changed state
When I run this on the command line, it returns a 0 exit code thanks to || echo:
~$ pkill -9 -f debconf/frontend || echo
~$ echo $?
0
However running in Ansible it shows this:
failed: [testhost1] (item=pkill -9 -f debconf/frontend || echo) => {"ansible_loop_var": "item", "changed": false, "cmd": "pkill -9 -f debconf/frontend || echo", "delta": "0:00:00.132296", "end": "2023-01-13 10:34:11.098765", "item": "sudo pkill -9 -f debconf/frontend || echo", "msg": "non-zero return code", "rc": -9, "start": "2023-01-13 10:34:10.966469", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
Any ideas what I'm doing wrong?
After starting a test process via
perl -MPOSIX -e '$0="test"; pause' &
the following minimal example playbook
---
- hosts: localhost
  become: false
  gather_facts: false
  tasks:

    - name: Shell task
      shell:
        cmd: "{{ item }}"
      with_items:
        - 'pkill -9 -f test || echo'
      register: result

    - name: Show result
      debug:
        msg: "{{ result }}"
reproduces the issue. Using instead
with_items:
  - 'pkill -9 -f test'
or
with_items:
  - 'kill -9 $(pgrep test) || echo'
will execute without an error.
Links
How can I start a process with any name which does nothing?
How to kill a running process using Ansible?
Similar Q&A
with the same error message
Ansible: How force kill process ...?
Further Reading
Since it is recommended to use Ansible modules instead of shell commands if possible, see the pids module – Retrieves process IDs list if the process is running, otherwise returns an empty list.
---
- hosts: localhost
  become: false
  gather_facts: false
  tasks:

    - name: Get Process ID (PID)
      pids:
        name: test
      register: PID

    - name: Show PID
      debug:
        msg: "{{ PID.pids | first }}"

    - name: Shell task
      shell:
        cmd: "kill -9 {{ PID.pids | first }}"
      register: result

    - name: Show result
      debug:
        msg: "{{ result }}"
Ah, this is because running pkill from inside of Ansible causes it to kill itself: with -f, the pattern debconf/frontend also matches the command line of the shell that Ansible spawned to run the command, so that shell receives the SIGKILL (hence rc: -9).
In any case, a workaround is to do this:
with_items:
  - 'if pgrep -f "debconf/frontend" | grep -v $$; then sudo pkill -9 -f "debconf/frontend"; fi'
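For reference, a minimal sketch of the original task with that workaround dropped in (unchanged apart from the command; changed_when: false is kept from the question):

- name: Kill usr/bin/perl -w /usr/share/debconf/frontend
  shell: "{{ item }}"
  with_items:
    - 'if pgrep -f "debconf/frontend" | grep -v $$; then sudo pkill -9 -f "debconf/frontend"; fi'
  changed_when: false # Don't report changed state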
I have created the following playbook, to check if a database exists:
- name: Check database exists
  shell: |
    mysql -hmysqlhost -uroot -ppassword -e "show databases" | egrep db
  register: mysql_exist
- name: Show database
  debug:
    msg: "{{ mysql_exist.stdout }}"
My idea is to finish the playbook and show a message if the database does not exist; otherwise it should continue with the next task. I tried the following, but it does not work:
- name: Check database exists
  shell: |
    mysql -hmysqlhost -uroot -ppassword -e "show databases" | egrep db
  register: mysql_exist

- name: End playbook if database does not exist
  meta: end_play
  when: mysql_exist == 0

- name: Show database
  debug:
    msg: "{{ mysql_exist.stdout }}"
## other tasks
How can I write a playbook that checks whether a database exists and, if it does not, displays the message "The database does not exist" and finishes without running the other tasks?
If you want to show a message before the playbook finishes, use a block:
(You don't show the output of your registered variable when the database doesn't exist, so I assume the test in your when condition is correct.)
- block:
    - name: "end play"
      debug:
        msg: "db doesn't exist"
    - meta: end_play
  when: mysql_exist == 0
This way the playbook finishes right after the message is displayed.
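If the when test turns out not to match (mysql_exist is a registered dictionary, so comparing it directly to 0 will never be true), a sketch keyed off the command's return code could look like the following; failed_when: false on the check task and the mysql_exist.rc test are assumptions about what you actually want to evaluate:

- name: Check database exists
  shell: |
    mysql -hmysqlhost -uroot -ppassword -e "show databases" | egrep db
  register: mysql_exist
  failed_when: false   # don't abort here, decide in the next task

- block:
    - name: Report missing database
      debug:
        msg: "The database does not exist"
    - meta: end_play
  when: mysql_exist.rc != 0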
Do you need to show the message? Stopping the playbook already happens automatically if egrep does not find anything, because it exits with a non-0 code.
Playbook:
---
- hosts: srv1
  become: True
  tasks:
    - name: x
      shell: "echo nope | egrep dbname"
    - name: good
      shell: "echo very much"
Output (notice how "good" is not executed):
PLAY [srv1] *******************************************************************************************************************************
TASK [x] **********************************************************************************************************************************
fatal: [srv1]: FAILED! => {"changed": true, "cmd": "echo nope | egrep dbname", "delta": "0:00:00.005942", "end": "2022-02-09 15:56:56.726828", "msg": "non-zero return code", "rc": 1, "start": "2022-02-09 15:56:56.720886", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
PLAY RECAP ********************************************************************************************************************************
srv1 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
It could be approximated with something like this:
- name: x
  shell: "echo nope | egrep dbname || { echo Database not found && false; }"
Which gives:
TASK [x] ***********************************************************************************************************************************
fatal: [srv1]: FAILED! => {"changed": true, "cmd": "echo nope | egrep dbname || { echo Database not found && false; }", "delta": "0:00:00.006159", "end": "2022-02-09 15:59:26.176704", "msg": "non-zero return code", "rc": 1, "start": "2022-02-09 15:59:26.170545", "stderr": "", "stderr_lines": [], "stdout": "Database not found", "stdout_lines": ["Database not found"]}
I'm not able to execute the shell command pvchange -u /dev/sde with ansible.
Can you please help me figure out what's wrong?
vars:
  tgt_string_id: TGTDBID
  action_vg: "clone_{{ tgt_string_id }}vg"
  host_vars_dir: /home/abhn02a/ansible/host_vars
  action_host: "{{ tgt_vm | default('localhost') }}"
  tgt_disks: "/dev/sde /dev/sdf"
Tasks
- name: Display target disks
  debug:
    msg: "{{ item }}"
  loop: "{{ tgt_disks.split(' ') }}"

- name: Change pv uuid
  become: yes
  shell: |
    pvchange -u "{{ item }}"
  loop: "{{ tgt_disks.split(' ') }}"
  #changed_when: false
  delegate_to: "{{ action_host }}"
Output:
......
.......
TASK [Display target disks]
*********************************************************************************************
ok: [localhost] => (item=/dev/sde) => {
"msg": "/dev/sde"
}
ok: [localhost] => (item=/dev/sdf) => {
"msg": "/dev/sdf"
}
!! This is not true, uuid stays same !!!
TASK [Change pv uuid]
***************************************************************************************************
changed: [localhost] => (item=/dev/sde)
changed: [localhost] => (item=/dev/sdf)
.......
.....
When I check the uuid of the disk, it has not been changed. But when I type "pvchange -u /dev/sde" manually, it changes the uuid of the disk with no problem!
I have Ansible 2.9 with Python 2.7.5.
Can you help, please?
(Oh, this is my 1st question on SO, sorry for the previous formatting)
Edit
So I have almost reproduced the error at home.
This happens when I try to do a pvchange on a cloned disk (But again, only with Ansible!!)
I have cloned 2 disks (/dev/xvdc and /dev/xvde) to (/dev/xvdf and /dev/xvdg), and re-run the task, and I have following output:
TASK [Change pv uuid]
*************************************************************************
failed: [localhost] (item=/dev/xvdf) => {"ansible_loop_var": "item", "changed": true, "cmd": "pvchange -u \"/dev/xvdf\"\n", "delta": "0:00:00.038053", "end": "2021-03-06 03:46:07.526239", "item": "/dev/xvdf", "msg": "non-zero return code", "rc": 5, "start": "2021-03-06 03:46:07.488186", "stderr": " Failed to find physical volume \"/dev/xvdf\".", "stderr_lines": [" Failed to find physical volume \"/dev/xvdf\"."], "stdout": " 0 physical volumes changed / 0 physical volumes not changed", "stdout_lines": [" 0 physical volumes changed / 0 physical volumes not changed"]}
This seems to be because the cloned disks are not yet PVs in LVM; I do not see /dev/xvdf and /dev/xvdg when I run pvs.
But at work this was different: I could see the cloned disks in the pvs output.
Will check again next week.
Thank you
From the debug output of the execution, you can see that the command executed is pvchange -u "/dev/xvdf", which raises the error 'Failed to find physical volume "/dev/xvdf".', so it looks like it doesn't like the quotes.
You can change your task to the following; it should work better:
- name: Change pv uuid
  become: yes
  shell: |
    pvchange -u {{ item }}
  loop: "{{ tgt_disks.split(' ') }}"
Also, run pvscan --cache before doing pvchange -u <PV>.
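A sketch of how that could look as an extra task in front of the pvchange loop (the task name and changed_when: false are my own additions):

- name: Rescan devices so LVM sees the cloned disks as PVs
  become: yes
  shell: pvscan --cache
  changed_when: false
  delegate_to: "{{ action_host }}"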
Team,
I am writing a validation task that is supposed to just check whether a mount exists and report its state from the output. My task is below, but it fails and I am not sure how to handle it. Any hints on what adjustments I need to make?
- name: "Verify LVP Mounts on CPU Nodes for mount_device"
shell: "mount | grep sdd"
register: lvp_mount
delegate_to: "{{ item }}"
with_items: "{{ groups['kube-cpu-node'] }}"
#failed_when: lvp_mount.rc != 0
#ignore_errors: yes
# - debug:
# var: lvp_mount
- name: "Report status of mounts"
fail:
msg: |
Mounts sdd not found
Output of `mount | grep sdd`:
{{ lvp_mount.stdout }}
{{ lvp_mount.stderr }}
when: lvp_mount | failed
changed: [localhost -> ] => (item=hostA)
[WARNING]: Consider using the mount module rather than running 'mount'. If you
need to use command because mount is insufficient you can add 'warn: false' to
this command task or set 'command_warnings=False' in ansible.cfg to get rid of
this message.
failed: [localhost -> hostA.test.net] (item=hostA) => {"ansible_loop_var": "item", "changed": true, "cmd": "mount | grep sdd", "delta": "0:00:00.009284", "end": "2019-11-06 18:22:56.138007", "failed_when_result": true, "item": "hostA", "msg": "non-zero return code", "rc": 1, "start": "2019-11-06 18:22:56.128723", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
...ignoring
TASK [services-pre-install-checks : Report status of mounts] ************
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'stdout'\n\nThe error appears to be in '/home/run_ansible_playbook/k8s/baremetal/roles/services-pre-install-checks/tasks/main.yml': line 265, column 9, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: \"Report status of mounts\"\n ^ here\n"}
Your task "Verify LVP Mounts on CPU Nodes for mount_device" is a loop so the register behavior is modified as specified in the documentation.
You can access the various outputs with lvp_mount.results.X.stdout where X is the index.
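For example, a quick way to see each host's output is to loop over that results list (a minimal sketch reusing your registered lvp_mount; item.item is the host the command was delegated to):

- name: Show mount output per host
  debug:
    msg: "{{ item.item }}: {{ item.stdout }}"
  loop: "{{ lvp_mount.results }}"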
There is, however, a cleaner way to write this. Specifically, using:
delegate_to: "{{ item }}"
with_items: "{{ groups['kube-cpu-node'] }}"
is bad practice. You can accomplish your desired outcome at the play level.
For example:
- hosts: kube-cpu-node # allows you to iterate over all hosts in the kube-cpu-node group
  tasks:
    - name: "Verify LVP Mounts on CPU Nodes for mount_device"
      shell: "mount | grep sdd"
      register: lvp_mount
      ignore_errors: yes
      # notice there is no loop here

    - name: "Report status of mounts"
      fail:
        msg: |
          Mounts sdd not found
          Output of `mount | grep sdd`:
          {{ lvp_mount.stdout }}
          {{ lvp_mount.stderr }}
      # no loop, so lvp_mount.stdout / lvp_mount.stderr can be used directly
      when: lvp_mount is failed
I have written the following Ansible playbook to find a disk failure in the RAID:
- name: checking raid status
  shell: "cat /proc/mdstat | grep nvme"
  register: "array_check"

- debug:
    msg: "{{ array_check.stdout_lines }}"
Following is the output I got
"msg": [
"md0 : active raid1 nvme0n1p1[0] nvme1n1p1[1]",
"md2 : active raid1 nvme1n1p3[1](F) nvme0n1p3[0]",
"md1 : active raid1 nvme1n1p2[1] nvme0n1p2[0]"
]
I want to extract the name of the failed disk from the registered variable array_check.
How do I do this in Ansible? Can I use the set_fact module? Can I use grep, awk, or sed on the registered variable array_check?
This is the playbook I am using to check the health status of a drive using smartctl
- name: checking the smartctl logs
  shell: "smartctl -H /dev/{{ item }}"
  with_items:
    - nvme0
    - nvme1
And I am facing the following error
(item=nvme0) => {"changed": true, "cmd": "smartctl -H /dev/nvme0", "delta": "0:00:00.090760", "end": "2019-09-05 11:21:17.035173", "failed": true, "item": "nvme0", "rc": 127, "start": "2019-09-05 11:21:16.944413", "stderr": "/bin/sh: 1: smartctl: not found", "stdout": "", "stdout_lines": [], "warnings": []}
(item=nvme1) => {"changed": true, "cmd": "smartctl -H /dev/nvme1", "delta": "0:00:00.086596", "end": "2019-09-05 11:21:17.654036", "failed": true, "item": "nvme1", "rc": 127, "start": "2019-09-05 11:21:17.567440", "stderr": "/bin/sh: 1: smartctl: not found", "stdout": "", "stdout_lines": [], "warnings": []}
The desired output should be something like this,
=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
Below is the complete playbook, including the logic to execute multiple commands in a single task using with_items:
---
- hosts: raid_host
  remote_user: ansible
  become: yes
  become_method: sudo
  tasks:

    - name: checking raid status
      shell: "cat /proc/mdstat | grep 'F' | cut -d' ' -f6 | cut -d'[' -f1"
      register: "array_check"

    - debug:
        msg: "{{ array_check.stdout_lines }}"

    - name: checking the smartctl logs for the drive
      shell: "/usr/sbin/smartctl -H /dev/{{ item }} | tail -2 | awk -F' ' '{print $6}'"
      with_items:
        - "nvme0"
        - "nvme1"
      register: "smartctl_status"