Is it possible to run a task with a loop asynchronously? - ansible

I have a huge Ansible playbook with a task that takes an array (from values) and runs the same command for each element in the array.
For example (this is just an example, I know I could use a one-line command!):
packages:
  vim:
    dev:
      version: x.x
    qa:
      version: x.x
  redis:
    dev:
      version: x.x
    qa:
      version: x.x
  ...
And I have a task that installs these packages:
- name: Install Packages
  include_tasks: tasks/install-packages.yml
  loop: "{{ lookup('dict', packages) }}"
  loop_control:
    loop_var: item
and
- name: "Install {{ item.key }} package"
  command: >
    apt-get install {{ item.key }} ...
  environment:
    ...
  retries: 3
  delay: 5
  register: result
  until: result.rc == 0
My question is: is it possible to run this last step in parallel ("in threads")?
As I understand it, I first need to split the packages array and then run the task in async mode.
In this example, apt-get install vim and apt-get install redis should start installing simultaneously.
Is it possible? Or is there another solution?
The main goal is to speed up package installation (in this example).

I found a solution here:
#####################
# main.yml
#####################
- name: Run items asynchronously in batch of two items
  vars:
    sleep_durations:
      - 1
      - 2
      - 3
      - 4
      - 5
    durations: "{{ item }}"
  include_tasks: execute_batch.yml
  loop: "{{ sleep_durations | batch(2) | list }}"

#####################
# execute_batch.yml
#####################
- name: Async sleeping for batched_items
  ansible.builtin.command: sleep {{ async_item }}
  async: 45
  poll: 0
  loop: "{{ durations }}"
  loop_control:
    loop_var: "async_item"
  register: async_results

- name: Check sync status
  async_status:
    jid: "{{ async_result_item.ansible_job_id }}"
  loop: "{{ async_results.results }}"
  loop_control:
    loop_var: "async_result_item"
  register: async_poll_results
  until: async_poll_results.finished
  retries: 30
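Adapted to the package example above, a minimal untested sketch could launch the installs of each batch asynchronously and then wait for the batch to finish. The file name tasks/install-packages-batch.yml, the loop_var names (pkg_batch, pkg_item, install_job) and the async/retries/delay values are my own assumptions, not from the original playbook:

# main playbook (sketch)
- name: Install packages in batches of two
  include_tasks: tasks/install-packages-batch.yml
  loop: "{{ lookup('dict', packages) | batch(2) | list }}"
  loop_control:
    loop_var: pkg_batch

# tasks/install-packages-batch.yml (sketch)
- name: "Start installing {{ pkg_item.key }} asynchronously"
  ansible.builtin.command: apt-get install -y {{ pkg_item.key }}
  async: 600
  poll: 0
  loop: "{{ pkg_batch }}"
  loop_control:
    loop_var: pkg_item
  register: install_jobs

- name: Wait for the whole batch to finish
  ansible.builtin.async_status:
    jid: "{{ install_job.ansible_job_id }}"
  loop: "{{ install_jobs.results }}"
  loop_control:
    loop_var: install_job
  register: install_status
  until: install_status.finished
  retries: 60
  delay: 10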

Related

Printing the output of an async command in a loop

Below is a snippet from a playbook. How can I print a message after the command is run for each loop item? Something like:
image.pyc is now running for item1
image.pyc is now running for item2
...
and so on
- name: Image Server(s)
  command: python3 image.pyc {{item}} "{{systemVersion}}"
  register: async_out
  async: 7200
  poll: 0
  with_items: "{{hostinfo_input.hosts}}"
In a nutshell, extremely simple, untested, to be adapted to your use case specifically in terms of error/output control:
- name: Image Server(s)
  ansible.builtin.command: python3 image.pyc {{ item }} "{{ systemVersion }}"
  register: async_out
  async: 7200
  poll: 0
  with_items: "{{ hostinfo_input.hosts }}"

- name: Wait for commands to finish
  ansible.builtin.async_status:
    jid: "{{ item.ansible_job_id }}"
  register: job_result
  until: job_result.finished
  retries: 720
  delay: 10
  loop: "{{ async_out.results }}"

# stdout of an async job is only available in the async_status results,
# not in the variable registered when the job was launched with poll: 0
- name: Show async command output
  ansible.builtin.debug:
    msg: "{{ item.stdout }}"
  loop: "{{ job_result.results }}"

- name: Good guests always clean up after themselves
  ansible.builtin.async_status:
    jid: "{{ item.ansible_job_id }}"
    mode: cleanup
  loop: "{{ async_out.results }}"
References:
async in playbooks
async_status module
debug module
Registering variables with a loop
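If you also want the "image.pyc is now running for itemX" message from the question right after the jobs are launched, a small hedged addition (it assumes the async_out register from the snippet above; each entry of async_out.results keeps the original loop value under item.item) could be:

- name: Announce that imaging has started
  ansible.builtin.debug:
    msg: "image.pyc is now running for {{ item.item }}"
  loop: "{{ async_out.results }}"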
Have a look at Ansible documentation for the debug module: https://docs.ansible.com/ansible/latest/collections/ansible/builtin/debug_module.html
E.g. use it like this:
# playbook.yaml
---
- hosts: localhost
  connection: local
  vars:
    ip_addresses:
      - 10.0.0.10
      - 10.0.0.11
  tasks:
    - name: Task
      include_tasks: task.yaml
      loop: "{{ ip_addresses }}"

# task.yaml
---
- name: Command
  ansible.builtin.shell: "echo {{ item }}"
  register: output

- name: Message
  ansible.builtin.debug:
    msg: |
      - ip_address={{ item }}
      - output={{ output }}
Unfortunately this does not work with async. See https://github.com/ansible/ansible/issues/22716.

How to register a variable for each task dynamically and add those variables to a list so that I can use them later in the Ansible playbook?

The real scenario is:
I have n hosts in an inventory group, and the playbook has to run a specific command for a specific inventory hostname (done with an Ansible when condition); whenever the condition is met, I need to register a variable for that command's result.
This variable creation should happen dynamically, the created variables should be appended to a list, and at the end of the same playbook I have to pass that list to a loop and check the jobs with async_status.
Could someone help me here?
tasks:
  - name:
    command:
    when: inventory_hostname == x
    async: 360
    poll: 0
    register: "here dynamic variable"
  - name:
    command:
    when: inventory_hostname == x
    async: 360
    poll: 0
    register: "here dynamic variable"
  - name:
    command:
    when: inventory_hostname == x
    async: 360
    poll: 0
    register: "here dynamic variable" # this will continue based on the requirements
  - name: collect the job ids
    async_status:
      jid: "{{ item }}"
    with_items: "list which has all the dynamically registered variables"
If you can write this as a loop instead of a series of independent tasks this becomes much easier. E.g:
tasks:
  - command: "{{ item }}"
    register: results
    loop:
      - "command1 ..."
      - "command2 ..."

  - name: show command output
    debug:
      msg: "{{ item.stdout }}"
    loop: "{{ results.results }}"
The documentation on "Registering variables with a loop" discusses what the structure of results would look like after this task executes.
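For illustration (trimmed to the fields used here; the real registered variable contains more keys), the registered variable results would look roughly like this after the loop task runs:

results:
  results:
    - item: "command1 ..."
      rc: 0
      stdout: "..."
      changed: true
    - item: "command2 ..."
      rc: 0
      stdout: "..."
      changed: true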
If you really need to write independent tasks instead, you could use the
vars lookup to find the results from all the tasks like this:
tasks:
- name: task 1
command: echo task1
register: task_result_1
- name: task 2
command: echo task2
register: task_result_2
- name: task 3
command: echo task3
register: task_result_3
- name: show results
debug:
msg: "{{ item }}"
loop: "{{ q('vars', *q('varnames', '^task_result_')) }}"
loop_control:
label: "{{ item.cmd }}"
You've updated the question to show that you're using async tasks, so
that changes things a bit. In this example, we use an until loop
that waits for each job to complete before checking the status of the
next job. The gather results task won't exit until all the async
tasks have completed.
Here's the solution using a loop:
- hosts: localhost
gather_facts: false
tasks:
- name: run tasks
command: "{{ item }}"
async: 360
poll: 0
register: task_results
loop:
- sleep 1
- sleep 5
- sleep 10
- name: gather results
async_status:
jid: "{{ item.ansible_job_id }}"
register: status
until: status.finished
loop: "{{ task_results.results }}"
- debug:
var: status
And the same thing using individual tasks:
- hosts: localhost
  gather_facts: false
  tasks:
    - name: task 1
      command: sleep 1
      async: 360
      poll: 0
      register: task_result_1

    - name: task 2
      command: sleep 5
      async: 360
      poll: 0
      register: task_result_2

    - name: task 3
      command: sleep 10
      async: 360
      poll: 0
      register: task_result_3

    - name: gather results
      async_status:
        jid: "{{ item.ansible_job_id }}"
      register: status
      until: status.finished
      loop: "{{ q('vars', *q('varnames', '^task_result_')) }}"

    - debug:
        var: status
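One caveat worth adding (my note, not part of the original answer): an until loop defaults to 3 retries with a 5 second delay, so for longer-running async jobs you would probably want to set these explicitly on the gather results task, e.g. for the loop variant:

- name: gather results
  async_status:
    jid: "{{ item.ansible_job_id }}"
  register: status
  until: status.finished
  retries: 60
  delay: 10
  loop: "{{ task_results.results }}"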

How to parallelize the execution by host group in Ansible

I am dynamically creating the host groups by site, and that works well with the recipe below.
- name: "generate batches from batch_processor"
  batch_processor:
    inventory: "{{ inventory_url }}"
    ignore_applist: "{{ ignore_applist | default('None') }}"
    ignore_podlist: "{{ ignore_podlist | default('None') }}"
  register: batch_output

- name: "Creating batches by site"
  include: batch_formatter.yml hostkey="{{ 'batch_' + item.key }}" hostlist="{{ item.value | list | join(',') }}"
  with_dict: "{{ batch_output.result['assignments'] }}"
In my playbook I have this:
- hosts: batch_*
  serial: 10
  gather_facts: False
  tasks:
    - include: roles/deployment/tasks/dotask.yml
I initially had strategy: free, but for some reason it didn't run in parallel. Currently I am deploying to all batch hosts 10 at a time.
I am thinking of the following:
batch_site1 - 10 in parallel
batch_site2 - 10 in parallel
batch_site3 - 10 in parallel
But in the playbook I don't want to specify the host group execution by site, as the groups are dynamic. Sometimes we will have siteX and sometimes it won't be there. Please suggest the best approach.
- hosts: batch_*
  gather_facts: False
  strategy: free
  tasks:
    - include: roles/deployment/tasks/dotask.yml
      when: "'batch_site1' not in group_names"
      async: 600
      poll: 0
      register: job1

    - name: Wait for asynchronous job to end
      async_status:
        jid: '{{ job1.ansible_job_id }}'
      register: job_result1
      until: job_result1.finished
      retries: 30

    - include: roles/deployment/tasks/dotask.yml
      when: "'batch_site2' not in group_names"
      register: job2
      async: 600
      poll: 0

    - name: Wait for asynchronous job to end
      async_status:
        jid: '{{ job2.ansible_job_id }}'
      register: job_result2
      until: job_result2.finished
      retries: 30

    - include: roles/deployment/tasks/dotask.yml
      when: "'batch_site3' not in group_names"
      register: job3
      async: 600
      poll: 0

    - name: Wait for asynchronous job to end
      async_status:
        jid: '{{ job3.ansible_job_id }}'
      register: job_result3
      until: job_result3.finished
      retries: 30
This config is somewhat working for my purpose; however, I couldn't get the async_status to show better results on screen.
{"msg": "The task includes an option with an undefined variable. The error was: 'job1' is undefined\n\nThe error appears to have been in '/../../../../play.yml': line 12, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Wait for asynchronous job to end\n ^ here\n"}
Not sure why.
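A guess (my own note, not confirmed in the thread): keywords such as async, poll and register do not really apply to an include itself, so job1 may simply never be populated, especially on hosts where the when condition skips the include. A defensive variant of the wait task, as a sketch, would be:

- name: Wait for asynchronous job to end
  async_status:
    jid: "{{ job1.ansible_job_id }}"
  register: job_result1
  until: job_result1.finished
  retries: 30
  when: job1 is defined and job1.ansible_job_id is defined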

Ansible rollback: run a group of tasks over a list of hosts even when one of the hosts failed

I have a playbook with multiple roles, hosts and groups. I am trying to develop rollback functionality that would run over all hosts. My current obstacle is that I see no way to delegate a role, block or set of tasks to a group of hosts.
I tried looking up delegation to a group without loops, so it would work on a block:
import_role doesn't accept loops
include_role doesn't accept delegate_to
same with import_tasks/include_tasks
Here is what I have now as a playbook file (shortened version):
- hosts: all
  any_errors_fatal: true
  vars_prompt:
    - name: "remote_user_p"
      prompt: "Remote user running the playbook"
      default: "root"
      private: no
    - name: "service_user_p"
      prompt: "Specify user to run non-root tasks"
      default: "user"
      private: no
  tasks:
    - set_fact:
        playbook_type: "upgrade"
    - import_role:
        name: 0_pre_check
      run_once: true
      remote_user: "{{ remote_user_p }}"
      become_user: "{{ service_user_p }}"
      become_method: su
      become: yes
    - block:
        - import_role:
            name: 1_os
        - import_role:
            name: 2_mysql
          when: inventory_hostname in groups['mysql'] | default("")
        - import_role:
            name: 3_web
          when: inventory_hostname in groups['web'] | default("")
        ...
      rescue:
        - block:
            - name: run rollback
              import_tasks: ../common/roles/5_rollback/tasks/rollback.yml
              remote_user: "{{ remote_user }}"
              become_user: "{{ service_user }}"
              become_method: su
              become: yes
This is some example code from rollback.yml:
- block:
    - name: rollback symlinks to config dir
      file:
        src: "{{ current_config_path }}"
        dest: "{{ install_dir }}/static/cfg"
        owner: "{{ service_user }}"
        group: "{{ service_user_primary_group }}"
        state: link
      when: current_new_configs | default("N") == "Y"
      delegate_to: "{{ item }}"
      with_items:
        - "{{ ansible_play_hosts }}"

- block:
    - name: return config files
      shell: test -f '{{ item.1.current_ver_file_path }}' && cp -p {{ item.1.current_ver_file_path }} {{ item.1.old_config_location }}
      args:
        warn: false
      register: return_config_files
      failed_when: return_config_files.rc >= 2
      when:
        - roolback_moved_cfg | default('N') == "Y"
        - inventory_hostname in groups[item.0.group]
        - item.1.old_config_location != ""
        - item.1.current_ver_file_path != ""
      with_subelements:
        - "{{ config_files }}"
        - files
      become_user: root
      become_method: sudo
      become: yes

    - name: systemctl daemon-reload
      shell: systemctl daemon-reload
      failed_when: false
      when: root_rights == "Y"
      args:
        warn: false
      delegate_to: "{{ item }}"
      with_items:
        - "{{ ansible_play_hosts }}"
  when: root_rights == "Y"
  become_user: root
  become_method: sudo
  become: yes

- fail:
    msg: "Upgrade failed. Symbolic links were set to the previous version. Fix the issues and try again. If you wish to cancel the upgrade, restore the database backup manually."
As you can see, I now use a lame workaround by introducing
  delegate_to: "{{ item }}"
  with_items:
    - "{{ ansible_play_hosts }}"
after every task.
There are two problems here:
1. I can't use the same approach for the return config files task, because it already uses a loop.
2. This is generally lame duplication of code and I hate it.
Why I need it at all: if playbook execution fails somewhere in the mysql role, for example, the rescue block will be executed only over the hosts of that mysql group (and, by the way, execution of tasks from the next role will continue while the rescue block is running, despite all efforts), while I would like it to run over all hosts instead.
I finally was able to solve this with an ugly, ugly hack: I used plays instead of just roles, so now there are more than 10 plays. Don't judge me, I spent a lot of effort trying to make it nice :(
Here is an example play followed by a check; it is the same for every other play.
- hosts: mysql
  any_errors_fatal: true
  tasks:
    - block:
        - import_role:
            name: 2_mysql
          when: not rollback | default(false)
      rescue:
        - block:
            - name: set fact for rollback
              set_fact:
                rollback: "yes"
              delegate_to: "{{ item }}"
              delegate_facts: true
              with_items: "{{ groups['all'] }}"

- hosts: all
  any_errors_fatal: true
  tasks:
    - name: run rollback
      import_tasks: ../common/roles/5_rollback/tasks/rollback.yml
      when: rollback | default(false)
include_role doesn't accept delegate_to
Actually, it does.
With Ansible 2.8:
- name: "call my/role with host '{{ansible_hostname}}' for hosts in '{{ansible_play_hosts}}'"
  include_role:
    name: my/role
    apply:
      delegate_to: "{{current_host}}"
  with_items: "{{ansible_play_hosts}}"
  loop_control:
    loop_var: current_host
With Ansible 2.5 to 2.7, see "2.5: delegate_to, include_role with loops" from George Shuklin, mentioned in ansible/ansible issue 35398:
- name: "call my/role with host '{{ansible_hostname}}' for items in '{{ansible_play_hosts}}'"
  include_tasks: loop.yml
  with_items: "{{ansible_play_hosts}}"
  loop_control:
    loop_var: current_host
With loop.yml being another tasks file of its own:
- name: "Import my/role for '{{current_host}}'"
  import_role: name=my/role
  delegate_to: "{{current_host}}"
So in two files (with Ansible 2.7) or one file (2.8), you can make a whole role and its tasks run on a delegated server.
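Applied to the rollback case above, a sketch (untested; the role name and the loop_var are assumptions based on the paths in the question) could delegate the whole rollback role to every host in the play:

- name: Run rollback on every host in the play
  include_role:
    name: 5_rollback
    apply:
      delegate_to: "{{ rollback_host }}"
  with_items: "{{ ansible_play_hosts }}"
  loop_control:
    loop_var: rollback_host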

Ansible conditional handlers for a task?

FYI, I'm running my Ansible playbook in "pull mode", not push mode. Therefore, my nodes publish the results of their tasks via Hipchat.
With that said, I have a task that installs RPMs. When the installs are successful, the nodes notify via Hipchat that the task was successfully run. In the event that a task fails, I force it to notify Hipchat with the "--force-handlers" parameter. My question: is there a way to call a particular handler depending on the outcome of the task?
Task
- name: Install Perl modules
  command: sudo rpm -Uvh {{ rpm_repository }}/{{ item.key }}-{{ item.value.svn_tag }}.rpm --force
  with_dict: deploy_modules_perl
  notify: announce_hipchat
Handler
- name: announce_hipchat
  local_action: hipchat
    from="deployment"
    token={{ hipchat_auth_token }}
    room={{ hipchat_room }}
    msg="[{{ ansible_hostname }}] Successfully installed RPMs!"
    validate_certs="no"
In this case, I use multiple handlers. See the following example.
File site.yml:
- hosts: all
  gather_facts: false
  force_handlers: true
  vars:
    - cmds:
        echo: hello
        eccho: hello
  tasks:
    - name: echo
      command: "{{ item.key }} {{ item.value }}"
      register: command_result
      with_dict: "{{ cmds }}"
      changed_when: true
      failed_when: false
      notify:
        - "hipchat"
  handlers:
    - name: "hipchat"
      command: "/bin/true"
      notify:
        - "hipchat_succeeded"
        - "hipchat_failed"
    - name: "hipchat_succeeded"
      debug:
        var: "{{ item }}"
      with_items: "{{ command_result.results | selectattr('rc', 'equalto', 0) | list }}"
    - name: "hipchat_failed"
      debug:
        var: "{{ item }}"
      with_items: "{{ command_result.results | rejectattr('rc', 'equalto', 0) | list }}"
Use this command:
ansible-playbook -c local -i "localhost," site.yml
CAUTION: this uses the 'equalto' test, added in Jinja2 2.8.
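Adapted to the Hipchat notification from the question, a hedged sketch of the two result handlers (untested; it reuses the hipchat parameters shown in the question and the command_result variable registered in the example) could look like:

- name: "hipchat_succeeded"
  local_action: hipchat
    from="deployment"
    token={{ hipchat_auth_token }}
    room={{ hipchat_room }}
    msg="[{{ ansible_hostname }}] Successfully installed RPMs!"
    validate_certs="no"
  when: command_result.results | selectattr('rc', 'equalto', 0) | list | length > 0

- name: "hipchat_failed"
  local_action: hipchat
    from="deployment"
    token={{ hipchat_auth_token }}
    room={{ hipchat_room }}
    msg="[{{ ansible_hostname }}] RPM installation failed!"
    validate_certs="no"
  when: command_result.results | rejectattr('rc', 'equalto', 0) | list | length > 0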
