This is something I often do:
- name: check whether ready
  shell: do_something_complex
  changed_when: false
  ignore_errors: true
  register: result

- name: make it ready
  when: result.rc != 0
It gets messy when there are many tasks after that which perform the same check.
I would instead like to wrap the result somehow and use it like this:
- name: do this if ready
  when: isReady

- name: do that if not ready
  when: not isReady
How can I do that? (Preferably without intermediate dummy tasks that exist just to set variables.)
That's the kind of thing that import_tasks: (or even a custom module) is designed to solve:
# tasks/make_ready.yml
- name: check whether ready
  shell: '{{ check_ready_shell }}'
  changed_when: false
  # you'll want to be **very careful** using this
  ignore_errors: true
  register: isReady

- name: make it ready
  shell: '{{ make_ready_shell }}'
  when: isReady.rc != 0
then, in your main playbook:
- import_tasks: make_ready.yml
  vars:
    check_ready_shell: echo 'hello from checking'
    make_ready_shell: systemctl start-the-thing-or-whatever

- import_tasks: make_ready.yml
  vars:
    check_ready_shell: echo 'hello from other checking'
    make_ready_shell: systemctl start-the-other-thing
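And because the imported file registers isReady, any task after the import in the main playbook can still branch on it, which is close to the usage you asked for (a sketch; the debug tasks are only placeholders):

- name: do this if ready
  debug:
    msg: already ready
  when: isReady.rc == 0

- name: do that if not ready
  debug:
    msg: had to be made ready
  when: isReady.rc != 0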
The real scenario is: I have n hosts in an inventory group, and the playbook has to run a specific command for a specific inventory hostname (done with an Ansible when condition), registering a variable for that command's result whenever the condition is met.
These variables therefore have to be created dynamically and appended to a list, and at the end of the same playbook I have to pass that list to a loop and check each job's async_status.
So could someone help me here?
tasks:
  - name:
    command:
    when: inventory_hostname == x
    async: 360
    poll: 0
    register: "here dynamic variable"

  - name:
    command:
    when: inventory_hostname == x
    async: 360
    poll: 0
    register: "here dynamic variable"

  - name:
    command:
    when: inventory_hostname == x
    async: 360
    poll: 0
    register: "here dynamic variable"  # this will continue based on the requirements

  - name: collect the job ids
    async_status:
      jid: "{{ item }}"
    with_items: "list which has all the dynamically registered variables"
If you can write this as a loop instead of a series of independent tasks, this becomes much easier. E.g.:
tasks:
  - command: "{{ item }}"
    register: results
    loop:
      - "command1 ..."
      - "command2 ..."

  - name: show command output
    debug:
      msg: "{{ item.stdout }}"
    loop: "{{ results.results }}"
The documentation on "Registering variables with a loop" discusses what the structure of results would look like after this task executes.
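Roughly, the registered variable from the loop above ends up looking something like this (a trimmed sketch rather than the full output; each entry also carries keys such as stderr, cmd, and timing information):

results:
  changed: true
  results:
    - item: "command1 ..."
      rc: 0
      stdout: "..."
      changed: true
    - item: "command2 ..."
      rc: 0
      stdout: "..."
      changed: true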
If you really need to write independent tasks instead, you could use the
vars lookup to find the results from all the tasks like this:
tasks:
  - name: task 1
    command: echo task1
    register: task_result_1

  - name: task 2
    command: echo task2
    register: task_result_2

  - name: task 3
    command: echo task3
    register: task_result_3

  - name: show results
    debug:
      msg: "{{ item }}"
    loop: "{{ q('vars', *q('varnames', '^task_result_')) }}"
    loop_control:
      label: "{{ item.cmd }}"
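The inner lookup does the discovery here: varnames returns the names of every variable matching the regex, and vars then resolves those names to their values. A quick way to see what it finds (a hedged illustration; the output assumes only the three registers above exist):

- debug:
    msg: "{{ q('varnames', '^task_result_') }}"
  # prints something like ['task_result_1', 'task_result_2', 'task_result_3']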
You've updated the question to show that you're using async tasks, so
that changes things a bit. In this example, we use an until loop
that waits for each job to complete before checking the status of the
next job. The gather results task won't exit until all the async
tasks have completed.
Here's the solution using a loop:
- hosts: localhost
  gather_facts: false

  tasks:
    - name: run tasks
      command: "{{ item }}"
      async: 360
      poll: 0
      register: task_results
      loop:
        - sleep 1
        - sleep 5
        - sleep 10

    - name: gather results
      async_status:
        jid: "{{ item.ansible_job_id }}"
      register: status
      until: status.finished
      loop: "{{ task_results.results }}"

    - debug:
        var: status
And the same thing using individual tasks:
- hosts: localhost
  gather_facts: false

  tasks:
    - name: task 1
      command: sleep 1
      async: 360
      poll: 0
      register: task_result_1

    - name: task 2
      command: sleep 5
      async: 360
      poll: 0
      register: task_result_2

    - name: task 3
      command: sleep 10
      async: 360
      poll: 0
      register: task_result_3

    - name: gather results
      async_status:
        jid: "{{ item.ansible_job_id }}"
      register: status
      until: status.finished
      loop: "{{ q('vars', *q('varnames', '^task_result_')) }}"

    - debug:
        var: status
I have a playbook (CIS compliance standard) with multiple tasks and I want to produce a "success" or "failed" depending on the ansible return code.
---
- name: 2.2.# Ensure ### Server is not enabled
  block:
    - name: Check if ### exists
      stat: path=/usr/lib/systemd/system/###.service
      register: exists

    - name: Disable if exists
      service:
        name: ###
        state: stopped
        enabled: no
      when: exists.stat.exists
      register: result

- name: To File
  block:
    - name: Success
      lineinfile:
        dest: ./results/{{ customer }}-{{ scan_type }}-{{ inventory_hostname }}.txt
        line: "{{ inventory_hostname }} 2.2.9 success"
        insertafter: EOF
      delegate_to: localhost
      check_mode: False
      when: ((result is skipped) or (result.enabled == false))

    - name: Failed
      lineinfile:
        dest: ./results/{{ customer }}-{{ scan_type }}-{{ inventory_hostname }}.txt
        line: "{{ inventory_hostname }} 2.2.9 failed"
        insertafter: EOF
      delegate_to: localhost
      check_mode: False
      when: ((result is not skipped) or (result.enabled == true))
From my observation, 'result' can have two different outputs depending on if the "Disable if exists" block is triggered.
If it is triggered, it'll give an output based on the "service" module.
If it is skipped, it'll give the generic Ansible output.
I'm fine with that, but what I can't seem to work out is the conditional statement.
when: ((result is not skipped) or (result.enabled == true))
This will always try to resolve both options, so if the module triggers, it will fail because "skipped" is not an attribute of the service module. If it skips, it'll pass, but obviously fail if it ever gets triggered. It's like it wants all conditions to exist before evaluating despite the "or" statement.
What am I doing wrong?
Do you mean result is skipped rather than result is not skipped? In any case, you can solve this using the default filter, which provides a default value if the input expression is undefined. For example:
when: result.enabled|default(false) == true
Of course, since that's a boolean, you can further simplify it to:
when: result.enabled|default(false)
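Applied to both branches of the example in the question, that might look like this (a sketch reusing the question's paths and variables; the Failed condition is simply the negation of the Success one):

- name: Success
  lineinfile:
    dest: ./results/{{ customer }}-{{ scan_type }}-{{ inventory_hostname }}.txt
    line: "{{ inventory_hostname }} 2.2.9 success"
    insertafter: EOF
  delegate_to: localhost
  check_mode: False
  when: not (result.enabled | default(false))

- name: Failed
  lineinfile:
    dest: ./results/{{ customer }}-{{ scan_type }}-{{ inventory_hostname }}.txt
    line: "{{ inventory_hostname }} 2.2.9 failed"
    insertafter: EOF
  delegate_to: localhost
  check_mode: False
  when: result.enabled | default(false)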
I'm sort of trying to build an inventory file from an ansible playbook run.
I'm trying to list all the KVM hosts and the guests running on them, by running service libvirtd status and, if that succeeds, virsh list --all, and to store the values in a file on the Ansible host.
I've tried a few different playbook structures, but none have been successful in writing the file (using local_action wrote the ansible_hostname from just one host).
Please can someone guide me on what I'm doing wrong?
This is what I'm running:
- name: Determine KVM hosts
  hosts: all
  become: yes
  #gather_facts: false
  tasks:
    - name: Check if libvirtd service exists
      shell: "service libvirtd status"
      register: libvirtd_status
      failed_when: not(libvirtd_status.rc == 0)
      ignore_errors: true

    - name: List KVM guests
      shell: "virsh list --all"
      register: list_vms
      when: libvirtd_status.rc == 0
      ignore_errors: true

    - name: Write hostname to file
      lineinfile:
        path: /tmp/libvirtd_hosts
        line: "{{ ansible_hostname }} kvm guests: "
        create: true
      #local_action: copy content="{{ item.value }}" dest="/tmp/libvirtd_hosts"
      with_items:
        - variable: ansible_hostname
          value: "{{ ansible_hostname }}"
        - variable: list_vms
          value: "{{ list_vms }}"
      when: libvirtd_status.rc == 0 or list_vms.rc == 0
I was able to cobble together something that's mostly working:
- name: Check if libvirtd service exists
  shell: "service libvirtd status"
  register: libvirtd_status
  failed_when: libvirtd_status.rc not in [0, 1]

- name: List KVM guests
  #shell: "virsh list --all"
  virt:
    command: list_vms
  register: all_vms
  when: libvirtd_status.rc == 0
---
- name: List all KVM hosts
  hosts: production, admin_hosts, kvm_hosts
  become: yes
  tasks:
    - name: create file
      file:
        dest: /tmp/libvirtd_hosts
        state: touch
      delegate_to: localhost

    - name: Copy VMs list
      include_tasks: run_libvirtd_commands.yaml

    - name: saving cumulative result
      lineinfile:
        line: '{{ ansible_hostname }} has {{ all_vms }}'
        dest: /tmp/libvirtd_hosts
        insertafter: EOF
      delegate_to: localhost
      when: groups["list_vms"] is defined and (groups["list_vms"] | length > 0)
Now if only I could clean up the output to filter out false positives (machines that don't have libvirtd running and have an empty or missing list of VMs), because the above doesn't really do that.
But at least there is output from all the KVM hosts!
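A possible way to filter those out, assuming the virt module returns the guest list under a list_vms key (a hedged sketch, not verified against this setup): only record hosts where the listing task actually ran and returned a non-empty list.

- name: saving cumulative result
  lineinfile:
    line: '{{ ansible_hostname }} has {{ all_vms.list_vms }}'
    dest: /tmp/libvirtd_hosts
    insertafter: EOF
  delegate_to: localhost
  when:
    - all_vms is defined
    - all_vms is not skipped
    - (all_vms.list_vms | default([]) | length) > 0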
How do I check a registered variable when only one of two conditions turns out to be true, with both tasks registering the same variable?
Below is my playbook; it executes only one of the two shell tasks.
- name: Check file
  shell: cat /tmp/front.txt
  register: myresult
  when: Layer == 'front'

- name: Check file
  shell: cat /tmp/back.txt
  register: myresult
  when: Layer == 'back'

- debug:
    msg: data was read from back.txt and print whatever
  when: Layer == 'back' and myresult.rc != 0

- debug:
    msg: data was read from front.txt and print whatever
  when: Layer == 'front' and myresult.rc != 0
Run the above playbook as
ansible-playbook test.yml -e Layer="front"
I get an error that says myresult does not have an attribute rc. What is the best way to print debug statements based on whichever condition was met?
I tried myresult is changed, but that does not help either. Can you please suggest something?
Use ignore_errors: true and change the task order. Try it as shown below.
- name: Check file
  shell: cat /tmp/front.txt
  register: myresult
  when: Layer == 'front'

- debug:
    msg: data was read from front.txt and print whatever
  when: not myresult.rc
  ignore_errors: true

- name: Check file
  shell: cat /tmp/back.txt
  register: myresult
  when: Layer == 'back'

- debug:
    msg: data was read from back.txt and print whatever
  when: not myresult.rc
  ignore_errors: true
I have a playbook with multiple roles, hosts and groups. I am trying to develop rollback functionality that would run over all hosts. My current obstacle is that I see no way to delegate a role, block, or set of tasks to a group of hosts.
I tried looking up delegation to a group without loops, so it would work on a block.
import_role doesn't accept loops
include_role doesn't accept delegate_to
same with import_tasks/include_tasks
Here is what I have now as a playbook file (shortened version):
- hosts: all
  any_errors_fatal: true
  vars_prompt:
    - name: "remote_user_p"
      prompt: "Remote user running the playbook"
      default: "root"
      private: no
    - name: "service_user_p"
      prompt: "Specify user to run non-root tasks"
      default: "user"
      private: no
  tasks:
    - set_fact:
        playbook_type: "upgrade"

    - import_role:
        name: 0_pre_check
      run_once: true
      remote_user: "{{ remote_user_p }}"
      become_user: "{{ service_user_p }}"
      become_method: su
      become: yes

    - block:
        - import_role:
            name: 1_os
        - import_role:
            name: 2_mysql
          when: inventory_hostname in groups['mysql'] | default("")
        - import_role:
            name: 3_web
          when: inventory_hostname in groups['web'] | default("")
        ...
      rescue:
        - block:
            - name: run rollback
              import_tasks: ../common/roles/5_rollback/tasks/rollback.yml
          remote_user: "{{ remote_user }}"
          become_user: "{{ service_user }}"
          become_method: su
          become: yes
This is some example code from rollback.yml:
- block:
    - name: rollback symlinks to config dir
      file:
        src: "{{ current_config_path }}"
        dest: "{{ install_dir }}/static/cfg"
        owner: "{{ service_user }}"
        group: "{{ service_user_primary_group }}"
        state: link
      when: current_new_configs | default("N") == "Y"
      delegate_to: "{{ item }}"
      with_items:
        - "{{ ansible_play_hosts }}"

- block:
    - name: return config files
      shell: test -f '{{ item.1.current_ver_file_path }}' && cp -p {{ item.1.current_ver_file_path }} {{ item.1.old_config_location }}
      args:
        warn: false
      register: return_config_files
      failed_when: return_config_files.rc >= 2
      when:
        - roolback_moved_cfg | default('N') == "Y"
        - inventory_hostname in groups[item.0.group]
        - item.1.old_config_location != ""
        - item.1.current_ver_file_path != ""
      with_subelements:
        - "{{ config_files }}"
        - files
      become_user: root
      become_method: sudo
      become: yes

    - name: systemctl daemon-reload
      shell: systemctl daemon-reload
      failed_when: false
      when: root_rights == "Y"
      args:
        warn: false
      delegate_to: "{{ item }}"
      with_items:
        - "{{ ansible_play_hosts }}"
  when: root_rights == "Y"
  become_user: root
  become_method: sudo
  become: yes

- fail:
    msg: "Upgrade failed. Symbolic links were set to the previous version. Fix the issues and try again. If you wish to cancel the upgrade, restore the database backup manually."
As you can see, for now I use a lame workaround by introducing

delegate_to: "{{ item }}"
with_items:
  - "{{ ansible_play_hosts }}"
after every task.
There are two problems here:
1. I can't use the same approach after the return config files task, because it already uses a loop.
2. This is generally lame code duplication and I hate it.
Why I need it at all: if playbook execution fails somewhere in the mysql role, for example, the rescue block will be executed only over the hosts in that mysql role (and, by the way, execution of tasks from the next role continues while the rescue block runs, the same number of tasks, despite all efforts), while I would like it to run over all hosts instead.
I was finally able to solve this with an ugly, ugly hack: I used plays instead of just roles, so now there are more than 10 plays. Don't judge me, I spent a lot of effort trying to make it nice.
Here is an example play followed by a check, the same as for every other play.
- hosts: mysql
  any_errors_fatal: true
  tasks:
    - block:
        - import_role:
            name: 2_mysql
          when: not rollback | default(false)
      rescue:
        - block:
            - name: set fact for rollback
              set_fact:
                rollback: "yes"
              delegate_to: "{{ item }}"
              delegate_facts: true
              with_items: "{{ groups['all'] }}"

- hosts: all
  any_errors_fatal: true
  tasks:
    - name: run rollback
      import_tasks: ../common/roles/5_rollback/tasks/rollback.yml
      when: rollback | default(false)
include_role doesn't accept delegate_to
Actually, it does.
With Ansible 2.8:
- name: "call my/role with host '{{ansible_hostname}}' for hosts in '{{ansible_play_hosts}}'"
include_role:
name: my/role
apply:
delegate_to: "{{current_host}}"
with_items: "{{ansible_play_hosts}}"
loop_control:
loop_var: current_host
With Ansible 2.5 to 2.7, see "2.5: delegate_to, include_role with loops" from George Shuklin, mentioned in ansible/ansible issue 35398:
- name: "call my/role with host '{{ansible_hostname}}' for items in '{{ansible_play_hosts}}'"
include_tasks: loop.yml
with_items: "{{ansible_play_hosts}}"
loop_control:
loop_var: current_host
With loop.yml being another task file of its own:
- name: "Import my/role for '{{current_host}}'"
import_role: name=my/role
delegate_to: "{{current_host}}"
So, in two files (with Ansible 2.7) or one file (2.8), you can make a whole role and its tasks run on a delegated server.
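Applied to the rollback case from the question, that could look something like this (a sketch using the 2.8+ apply syntax, and assuming 5_rollback is resolvable as a role name on your roles path):

- name: run rollback on every host in the play
  include_role:
    name: 5_rollback
    apply:
      delegate_to: "{{ current_host }}"
  with_items: "{{ ansible_play_hosts }}"
  loop_control:
    loop_var: current_host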