Ansible host reachability should not determine overall playbook success

A playbook that gathers uptimes from hosts supplied by an inventory plugin reports failure if any host is unreachable. I'd like the playbook to stop trying to run tasks against unreachable servers - i.e. the default behaviour - but not to fail as a whole, since with hundreds of servers in the inventory it is likely that some have been torn down before the inventory is updated.
The first task in the playbook is setup (aka gather_facts), which is where I want to put the error handling:
- name: get some info about the host
  setup:
    gather_subset: minimal
  ignore_unreachable: true

...

- name: Do something with the facts
  write_data:
    blah
The intention is that the playbook runs gather_facts, taking note of unreachability, but not allowing that to cause the playbook as a whole to be marked as a failure.
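One pattern that is often suggested for exactly this (a sketch under assumptions, not tested here: it relies on the registered setup result carrying an unreachable flag, and on meta: end_host, which needs Ansible 2.8 or later) is to drop unreachable hosts from the play without failing the run:

- hosts: all
  gather_facts: false
  tasks:
    - name: get some info about the host
      setup:
        gather_subset: minimal
      ignore_unreachable: true
      register: setup_result

    # End the play for this host, without marking it failed, if setup could not reach it.
    - name: Drop unreachable hosts from the play
      meta: end_host
      when: setup_result.unreachable | default(false)

    - name: Do something with the facts
      write_data:
        blah

Whether the final exit status still reflects the unreachable hosts can vary between Ansible versions, so it is worth verifying against the version in use.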


Continuing Ansible on error in a multi-host deployment

In our deployment strategy, our playbooks take on the following structure:
workflow.yml
- hosts: host1
  tasks:
    - name: setup | create virtual machines on host1
      include_tasks: setup.yml

    - name: run | import a playbook that will target new virtual machines
      include: "virtual_machine_playbook/main.yml"

- hosts: host1
  tasks:
    - name: cleanup | destroy virtual machines on host1
      include_tasks: destroy.yml
virtual_machine_playbook/main.yml
- hosts: newCreatedVMs
  roles:
    - install
    - run
Which works great most of the time. However, if for some reason virtual_machine_playbook/main.yml errors out, the last hosts block does not run and we have to destroy our VMs manually. I wanted to know if there is a way to mandate that each hosts block runs, regardless of what happens before it.
Other Notes:
The reason we structure our playbooks this way is that we would like everything to be as contained as possible. Variables created in each hosts block are rather important to the ones that follow. Splitting them out into separate files and invocations is something we have not had much success with.
We have tried the standard Ansible approach for error handling as found here, but most of the options only apply at the task level (blocks, ignore_errors, etc.).
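One sketch that may fit these constraints (my own illustration; the role names follow the question, the vm_play_failed variable is made up): catch failures inside the inner playbook with block/rescue so the play never aborts, record the failure in a fact, let the cleanup play run, and only fail the run at the very end.

virtual_machine_playbook/main.yml

- hosts: newCreatedVMs
  tasks:
    - block:
        - include_role:
            name: install
        - include_role:
            name: run
      rescue:
        # Remember the failure instead of aborting the whole playbook run.
        - set_fact:
            vm_play_failed: true

workflow.yml, after the cleanup hosts block

- hosts: newCreatedVMs
  gather_facts: false
  tasks:
    - name: Surface any failure recorded earlier
      fail:
        msg: "install/run failed on this host"
      when: vm_play_failed | default(false)

Because the rescue swallows the error, the cleanup play on host1 still runs; the final play then restores a failing exit status for any host that was flagged.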

Iterate over inventory facts

I'm sitting in front of a fairly complex Ansible project that we're using to set up our local development environments (multiple VMs). One role uses the facts gathered by Ansible to set up the /etc/hosts file on every VM. Unfortunately, when you want to run the playbook for one host only (using the --limit parameter), the facts from the other hosts are (obviously) missing.
Is there a way to force Ansible to gather facts on all hosts, even if you limit the playbook to one specific host?
We tried adding a play to the playbook to gather facts from all hosts, but of course that also gets limited to the one host given by the --limit parameter. If there were a way to force this play to run on all hosts before the other plays, that would be perfect.
I've googled a bit and found the solution of fact caching with redis, but since our playbook is used locally, I wanted to avoid the need for additional software. I know it's not a big deal, but I was looking for a "cleaner", Ansible-only solution and was wondering whether one exists.
Ansible version 2 introduced a clean, official way to do this using delegated facts (see: http://docs.ansible.com/ansible/latest/playbooks_delegation.html#delegated-facts).
The when: hostvars[item]['ansible_default_ipv4'] is not defined condition ensures you don't gather facts again for a host whose facts you already have.
---
# This play will still work as intended if called with --limit "<host>" or --tags "some_tag"
- name: Hostfile generation
  hosts: all
  become: true
  pre_tasks:
    - name: Gather facts from ALL hosts (regardless of limit or tags)
      setup:
      delegate_to: "{{ item }}"
      delegate_facts: True
      when: hostvars[item]['ansible_default_ipv4'] is not defined
      with_items: "{{ groups['all'] }}"
  tasks:
    - template:
        src: "templates/hosts.j2"
        dest: "/etc/hosts"
      tags:
        - hostfile
...
In general the way to get facts for all hosts even when you don't want to run tasks on all hosts is to do something like this:
- hosts: all
  tasks: [ ]
But as you mentioned, the --limit parameter will limit what hosts this would be applied to.
I don't think there's a way to simply tell Ansible to ignore the --limit parameter on any plays. However there may be another way to do what you want entirely within Ansible.
I haven't used it personally, but as of Ansible 1.8 fact caching is available. In a nutshell, with fact caching enabled Ansible will use a redis server to cache all the facts about hosts it encounters and you'll be able to reference them in subsequent playbooks:
With fact caching enabled, it is possible for machine in one group to reference variables about machines in the other group, despite the fact that they have not been communicated with in the current execution of /usr/bin/ansible-playbook.
This still seems to be an issue without a clean solution here in 2016, but newer versions of Ansible offer a "jsonfile" fact-caching backend, which seems a decent compromise compared to installing Redis locally just to address this need. Now I just fire off ansible all -m setup before running a playbook with the --limit option. Good enough for jazz!
http://docs.ansible.com/ansible/playbooks_variables.html#fact-caching
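For reference, a minimal sketch of what enabling the jsonfile backend could look like in ansible.cfg (the cache path and timeout here are placeholders):

[defaults]
# Prefer cached facts over gathering them on every run
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible_fact_cache
# Expire cached facts after one day (in seconds)
fact_caching_timeout = 86400

With this in place, a one-off ansible all -m setup populates the cache, and a later run with --limit can still read hostvars for the hosts that were skipped.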
You could modify your playbook to:
...
- hosts: "{{ limit_hosts|default('default_group') }}"
tasks:
...
...
And when you run it, if limit_hosts is not defined (the normal state) it will run on the default_group inventory group, BUT if you run it as:
ansible-playbook --extra-vars "limit_hosts=myHost" myplaybook.yml
Then it will run only on myHost, but you could still have other plays with different hosts: declarations, for fact gathering or anything else.

Ansible group variable evaluation with local actions

I have an Ansible playbook that includes a role for creating some Azure cloud resources. Group variables are used to set parameters for the creation of those resources. An inventory file contains multiple groups which reference that play as a descendant node.
The problem is that since the target is localhost for running the cloud actions, all the group variables are picked up at once. Here is the inventory:
[cloud:children]
cloud_instance_a
cloud_instance_b
[cloud_instance_a:children]
azure_infrastructure
[cloud_instance_b:children]
azure_infrastructure
[azure_infrastructure]
127.0.0.1 ansible_connection=local ansible_python_interpreter=python
The playbook contains an azure_infrastructure play that references the actual role to be run.
What happens is that this role runs twice against localhost, but each time the group variables from cloud_instance_a and cloud_instance_b have both been loaded. I want it to run twice, but with the cloud_instance_a variables loaded the first time and the cloud_instance_b variables the second.
Is there any way to do this? In essence, I'm looking for a pseudo-host for localhost that makes Ansible think these are different targets. The only way I've been able to work around this is to create two different inventories.
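For what it's worth, one way to get such pseudo-hosts (a sketch; the alias names are invented) is to give localhost a separate inventory alias in each group, so that each alias carries only its own group's variables:

[cloud_instance_a]
azure_infra_a ansible_host=127.0.0.1 ansible_connection=local ansible_python_interpreter=python

[cloud_instance_b]
azure_infra_b ansible_host=127.0.0.1 ansible_connection=local ansible_python_interpreter=python

A play targeting cloud_instance_a:cloud_instance_b then runs the role once per alias, each run seeing a different set of group variables.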
It's a bit hard to guess what your playbook looks like, anyway...
Keep in mind that inventory host/group variables are host-bound, so any host always has only one set of inventory variables (variables defined in different groups overwrite each other).
If you want to execute some tasks or plays on your control machine, you can use connection: local for plays or local_action: for tasks.
For example, for this hosts file:
[group1]
server1
[group2]
server2
[group1:vars]
testvar=aaa
[group2:vars]
testvar=zzz
You can do this:
- hosts: group1:group2
  connection: local
  tasks:
    - name: provision
      azure: ...

- hosts: group1:group2
  tasks:
    - name: install things
      apk: ...
Or this:
- hosts: group1:group2
  gather_facts: no
  tasks:
    - name: provision
      local_action: azure ...

    - name: gather facts
      setup:

    - name: install things
      apk:
In these examples testvar is aaa for server1 and zzz for server2.
Still, the azure action is executed from the control host.
In the second example you should turn off fact gathering and call setup manually to prevent Ansible from connecting to possibly unprovisioned servers.

Checking which hosts failed a playbook in ansible

I have two playbooks running in Ansible, one after another. After playbook 1 finishes, I want to run the second one only on the hosts for which the first playbook fully succeeded. Looking through the Ansible docs, I can't find any accessible info on which hosts failed a specific playbook. How could this be done?
FYI I need separate playbooks because the second one must be run in serial, which is only available at the playbook level.
Since all hosts minus the hosts that have not failed equals the failed hosts, you can use the following task to take the difference between two special variables: ansible_play_hosts_all (all hosts in the play, including failed ones) and ansible_play_batch (all hosts that have not yet failed). Note that use of serial will affect the result.
- name: Print play hosts that failed
  debug:
    msg: "The hosts that failed are {{ ansible_play_hosts_all | difference(ansible_play_batch) | join('\n') }}"
Source: https://docs.ansible.com/ansible/latest/reference_appendices/special_variables.html
Honestly, the best way is to have some queryable state on each host. A simple method is to check for the existence of a file that is created after your first playbook succeeds. You can then have a task which checks for that state and reacts accordingly, which will get you what you want.
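A sketch of that marker-file idea (the path is a placeholder, and the meta: end_host step is my own addition, which needs Ansible 2.8 or later):

# End of playbook 1: record success on every host that made it this far.
- name: Mark this host as successfully deployed
  copy:
    content: "ok"
    dest: /var/tmp/playbook1_succeeded

# Start of playbook 2: drop hosts that are missing the marker.
- name: Check for the success marker from playbook 1
  stat:
    path: /var/tmp/playbook1_succeeded
  register: marker

- name: Stop processing hosts that did not finish playbook 1
  meta: end_host
  when: not marker.stat.exists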
As an aside, I stopped using Ansible because it wasn't configurable enough; I also had issues getting the parallelism controls I wanted. You might try hitting up the Ansible Project Google Group to put in a feature suggestion or describe your use case.
There is a difference between a play and a playbook. The serial argument is available on the play level. A playbook may contain multiple plays.
---
- name: Play 1
  hosts: some_hosts
  tasks:
    - debug:

- name: Play 2
  hosts: some_hosts
  serial: 1
  tasks:
    - debug:
...
Hosts which failed in play 1 will not be processed in play 2.
If you really want to have separate playbooks, I see two options:
Create a callback plugin. You can register a function which gets fired when a task or host fails, store this info locally, and use it in the next playbook run (a sketch follows below).
Activate Ansible logging. It will log pretty much the same stuff you see as raw output when running ansible-playbook.
The second option is a bit ugly, but easier than creating a callback plugin.
In both cases you then need a dynamic inventory script which checks the previously saved data and returns only the valid hosts. Or return all hosts but set a property to mark the failed ones; you can then use group_by to create an ad-hoc group.
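A minimal sketch of such a callback plugin (untested; the file name and output path are placeholders) that records failed and unreachable hosts for the next run to consume:

# callback_plugins/record_failures.py
from ansible.plugins.callback import CallbackBase


class CallbackModule(CallbackBase):
    CALLBACK_VERSION = 2.0
    CALLBACK_TYPE = 'notification'
    CALLBACK_NAME = 'record_failures'

    def __init__(self):
        super(CallbackModule, self).__init__()
        self.failed = set()

    def v2_runner_on_failed(self, result, ignore_errors=False):
        self.failed.add(result._host.get_name())

    def v2_runner_on_unreachable(self, result):
        self.failed.add(result._host.get_name())

    def v2_playbook_on_stats(self, stats):
        # Persist failed hosts so a dynamic inventory script can filter them out.
        with open('/tmp/failed_hosts.txt', 'w') as f:
            f.write('\n'.join(sorted(self.failed)))

The dynamic inventory script for the second playbook would then read /tmp/failed_hosts.txt and exclude (or tag) the hosts it lists.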

Ansible - How to sequentially execute playbook for each host

I am using ansible to script a deployment for an API. I would like this to work sequentially through each host in my inventory file so that I can fully deploy to one machine at a time.
With the out-of-the-box behaviour, each task in my playbook is executed for each host in the inventory file before moving on to the next task.
How can I change this behaviour to execute all tasks for a host before starting on the next host? Ideally I would like to only have one playbook.
Thanks
Have a closer look at Rolling Updates:
What you are searching for is
- hosts: webservers
  serial: 1
  tasks:
    - name: ...
Alternatively, use --forks=1 to specify the number of parallel processes to use (default=5).
Strategies control how tasks are parallelised across hosts. See https://docs.ansible.com/ansible/latest/user_guide/playbooks_strategies.html
There are several strategies, notably linear (the default) and free (the quickest), where each host runs through its tasks as fast as it can without waiting for the others:
- hosts: all
  strategy: free
  tasks:
    ...
