I'm trying to implement a controlled restart inside an Ansible role. I need a set of tasks to run sequentially on each node in turn. It seems that I can't use serial on a block. Is there another way to do this? throttle: 1 still runs the block task by task (each task across all hosts, one host at a time) rather than running the whole block on one host before moving on, and serial can only be used on a play.
Here is my role:
- name: Task 1
  debug:
    msg: "hello1"

- name: An example block
  block:
    - name: Task 2
      debug:
        msg: "Decommission Node"

    - name: Task 3
      debug:
        msg: "Restart Node"

    - name: Task 4
      debug:
        msg: "Recommission Node"
  throttle: 1
  # serial: 1
You cannot do this at role level; this functionality is limited to playbooks.
Think of a role as some code that runs on a specific host, so looping over hosts should be done only at playbook level.
There are some tricks you could use to do some kind of looping inside a role, but I will not go into the details because they break badly in too many cases.
In Ansible (version 2.9 and later) the throttle argument sets the limit of workers per task, not per role, as far as I can tell from my experience and from the documentation here:
https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_strategies.html#restricting-execution-with-throttle
Check also the relevant GitHub issue from Ansible contributors regarding throttling in Ansible:
https://github.com/ansible/ansible/issues/64659
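The usual workaround is to move the rolling behaviour up to the playbook level: a play with serial: 1 that includes the role runs the whole role on one host before moving to the next. A minimal sketch, assuming a group `cluster_nodes` and a role `node_restart` (both names are placeholders):

```yaml
# site.yml - the play, not the role, provides the per-host rolling window
- hosts: cluster_nodes   # placeholder group name
  serial: 1              # run the entire play (and thus the role) one host at a time
  roles:
    - node_restart       # role containing the decommission/restart/recommission tasks
```

This keeps the role itself simple: it just describes what happens on one node, and the play decides how many nodes are handled at once.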
Related
One can recover failed hosts using rescue. How can I configure Ansible so that the other hosts in the play are aware of the host which will be recovered?
I thought I was smart, and tried using difference between ansible_play_hosts_all and ansible_play_batch, but Ansible doesn't list the failed host, since it's rescued.
---
- hosts:
    - host1
    - host2
  gather_facts: false
  tasks:
    - block:
        - name: fail one host
          shell: /bin/false
          when: inventory_hostname == 'host1'

        # returns an empty list
        - name: list failed hosts
          debug:
            msg: "{{ ansible_play_hosts_all | difference(ansible_play_batch) }}"
      rescue:
        - shell: /bin/true
"How can I configure Ansible so that the other hosts in the play are aware of the host which will be recovered?"
It seems that, according to the documentation Handling errors with blocks
If any tasks in the block return failed, the rescue section executes tasks to recover from the error. ... Ansible provides a couple of variables for tasks in the rescue portion of a block: ansible_failed_task, ansible_failed_result
as well as the source of ansible/playbook/block.py, such functionality isn't implemented yet.
You may need to implement some logic to keep track of the content of ansible_failed_task and of the host it happened on during execution. Maybe it is possible to use the add_host module – Add a host (and alternatively a group) to the ansible-playbook in-memory inventory – with a parameter like groups: has_rescued_tasks.
Or probably do further investigation beginning with default Callback plugin and Ansible Issue #48418 "Add stats on rescued/ignored tasks" since it added statistics about rescued tasks.
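As a sketch of the add_host idea (the group name `has_rescued_tasks` is only an illustration, not an Ansible built-in), the rescue section could register the failing host in an in-memory group that the other hosts can then inspect:

```yaml
      rescue:
        - name: remember which host was rescued
          add_host:
            name: "{{ inventory_hostname }}"
            groups: has_rescued_tasks

    - name: every host can now see who was rescued
      debug:
        msg: "{{ groups['has_rescued_tasks'] | default([]) }}"
```

add_host bypasses the host loop and runs once, so a single rescued host is enough to populate the group for the rest of the play.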
I have two plays, each with one task.
The first play checks if /var/test.dat exists on each target. Only if the first play is successful do I want the second play to run, which executes these scripts in parallel.
If the first play fails, i.e. test.dat does not exist, I wish to terminate the playbook without the second play being executed. For this purpose I have set any_errors_fatal to true.
I need the Ansible play strategy set to free, as each of the scripts takes 30 minutes to complete.
My understanding of Ansible is limited.
I understand that if I have both tasks under a single play and set the strategy to free, both tasks will run in parallel, which is something I do not want.
---
- name: Play 1- check for login and script
  hosts: all_hosts
  any_errors_fatal: true
  strategy: free
  tasks:
    - name: Check script existence
      shell: "ls /var/test.dat"
      register: checkscript

    - name: Fail if the script is missing
      fail:
        msg: "script {{ scriptdet }} missing on {{ inventory_hostname }}"
      when: checkscript.rc != 0

- name: Play 2- Run scripts
  hosts: all_hosts
  user: "{{ USER }}"
  strategy: free
  tasks:
    - name: Execute backup script
      shell: "{{ scriptdet }}"
      args:
        chdir: ~/..
I tried the above playbook, but I see the second play executing despite the first play's task failing.
Can you please suggest how I can get this to work?
Unless any_errors_fatal is true, Ansible will keep running on all hosts even if some other host failed.
However, as you found out, any_errors_fatal does not work with the free strategy.
It's also impossible to create a global variable.
However, you mentioned that you want to use the free strategy to speed up execution.
The free strategy only prevents "blocking new tasks for hosts that have already finished."
If you need to execute the same task on many hosts without waiting for the others to finish, what you need is parallelism.
Strategy has nothing to do with parallelism.
In Ansible you adjust the number of hosts acted on in parallel with forks. The default is very low: only 5.
What you want is to set this number higher, depending on the resources of the host you run Ansible from. In my experience, with a good amount of CPU and memory, this can be set to hundreds.
You set this either in ansible.cfg or in the command line:
ansible-playbook -f 30 my_playbook.yml
This way you can use the default linear strategy with any_errors_fatal set to true.
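For reference, the same setting in ansible.cfg (the value 30 simply mirrors the -f 30 example above; tune it to your controller's resources):

```ini
[defaults]
; number of parallel host connections; default is 5
forks = 30
```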
I want to run some Ansible tasks on 4 servers one by one, i.e. in a serial manner, with a pause in between. So I have added the pause at the end of the playbook, but I want it to be skipped on the last server; otherwise it will wait for no reason. Please let me know how to implement this.
---
- hosts: server1,server2,server3,server4
  serial: 1
  vars_files:
    - ./vars.yml
  tasks:
    - name: Variable test
      pause:
        minutes: 1
Really interesting problem which forced me to look for an actual solution. Here is the quickest one I came up with.
The Ansible special variables documentation defines the ansible_play_hosts_all variable as follows:
List of all the hosts that were targeted by the play
The list of hosts in that var is in the order it was found inside the inventory.
Provided you use the default inventory order for your play, you can set a test that will trigger the task unless the current host is the last one in that list:
when: inventory_hostname != ansible_play_hosts_all[-1]
As reported by @Vladimir in the comments below, if you change the play's order parameter from the default, this approach will break.
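Applied to the pause task from the question, that condition looks like this:

```yaml
- name: Pause between hosts, but skip it on the last one
  pause:
    minutes: 1
  # ansible_play_hosts_all is in inventory order, so [-1] is the last host
  when: inventory_hostname != ansible_play_hosts_all[-1]
```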
The playbook below does the job
- hosts: all
  serial: 1
  vars:
    completed: false
  tasks:
    - set_fact:
        completed: true

    - block:
        - debug:
            msg: All completed. End of play.
        - meta: end_play
      when: "groups['all']|
             map('extract', hostvars, 'completed')|
             list is all"

    - name: Variable test
      pause:
        minutes: 1
Notes
see any/all
see Extracting values from containers
see hostvars
I have an Ansible playbook, myPlayBook.yml. It has two plays, and each play has one or more tasks.
I use the command as follows to run my playbook:
ansible-playbook myPlayBook.yml
So everything is OK and my tasks execute successfully. Now, after the first run, I want to run my playbook again from the first play (similar to the first run, but automatically). Is there a way to do that?
(I saw that it can be done for a specific task or play with include or include_tasks, but what about for a playbook?)
Now, after the first running I want to run my playbook again from the
first play (similar to the first running but automatically).
You can put your tasks into a role and execute it via include_role from the playbook multiple times:
- name: 'Include role'
  include_role:
    name: '{{ app_role }}'

- debug:
    msg: "{{ inventory_hostname }}"
Create a role at the path roles/inventory_role consisting of one task. From the playbook you can execute the role by calling it multiple times:
- name: 'Include role inventory_role'
  include_role:
    name: 'inventory_role'

- name: 'Include role inventory_role'
  include_role:
    name: 'inventory_role'
Or you can use something like this:
- { role: 'inventory_role' }
- { role: 'inventory_role' }
NOTE:
Ansible will only allow a role to execute once, even if defined
multiple times, if the parameters defined on the role are not
different for each definition.
See for more details: Role Duplication and Execution.
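A shorter variant of the repeated include above (a sketch; include_role is a regular task, so it accepts loop, and the loop values here are just dummy iteration counters):

```yaml
- name: Run the role twice
  include_role:
    name: inventory_role
  loop: [1, 2]   # one role execution per loop item
```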
Execute an Ansible Playbook periodically
Ansible Tower has jobs with a scheduled launch type.
I'm trying to spin up an AWS deployment environment in Ansible, and I want to make it so that if something fails along the way, Ansible tears down everything on AWS that has been spun up so far. I can't figure out how to get Ansible to throw an error within the role.
For example:
<main.yml>

- hosts: localhost
  connection: local
  roles:
    - make_ec2_role
    - make_rds_role
    - make_s3_role

2. Then I want it to run some code based on that error here.

<make_rds_role>

- name: "Make it"
  rds:
    params: etc   # <-- 1. Let's say it fails in the middle here
I've tried:
- name: this command prints FAILED when it fails
  command: /usr/bin/example-command -x -y -z
  register: command_result
  failed_when: "'FAILED' in command_result.stderr"
As well as other things within the documentation, but what I really want is a way to use something like the block and rescue directives. As far as I can tell, those only work within the same playbook and on plays, not roles. Does anyone have a good way to do this?
Wrap tasks inside your roles into block/rescue thing.
Make sure that rescue block has at least one task – this way Ansible will not mark the host as failed.
Like this:
- block:
    - name: task 1
      ... # something bad may happen here
    - name: task N
  rescue:
    - assert: # we need a dummy task here to prevent our host from being failed
        that: ansible_failed_task is defined
Recent versions of Ansible register ansible_failed_task and ansible_failed_result when a rescue block is hit.
So you can do some post_tasks in your main.yml playbook like this:
post_tasks:
  - debug:
      msg: "Failed task: {{ ansible_failed_task }}, failed result: {{ ansible_failed_result }}"
    when: ansible_failed_task is defined
But be warned that this trick will NOT prevent other roles from executing.
So in your example, if make_rds_role fails, Ansible will still apply make_s3_role and run your post_tasks afterwards.
If you need to prevent it, add some checking for ansible_failed_task fact in the beginning of each role or something.
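A minimal sketch of such a check, following the same ansible_failed_task convention as the post_tasks example above (whether the fact is still visible here depends on your Ansible version and how the rescue section set it, so treat this as an illustration rather than a guaranteed recipe):

```yaml
# at the top of each subsequent role's tasks/main.yml
- name: Stop the play if an earlier role already failed and was rescued
  meta: end_play
  when: ansible_failed_task is defined
```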