One can recover failed hosts using rescue. How can I configure Ansible so that the other hosts in the play are aware of the host which will be recovered?
I thought I was smart and tried using the difference between ansible_play_hosts_all and ansible_play_batch, but Ansible doesn't list the failed host, since it's rescued.
---
- hosts:
    - host1
    - host2
  gather_facts: false
  tasks:
    - block:
        - name: fail one host
          shell: /bin/false
          when: inventory_hostname == 'host1'

        # returns an empty list
        - name: list failed hosts
          debug:
            msg: "{{ ansible_play_hosts_all | difference(ansible_play_batch) }}"
      rescue:
        - shell: /bin/true
"How can I configure Ansible so that the other hosts in the play are aware of the host which will be recovered?"
It seems that, according to the documentation Handling errors with blocks,
"If any tasks in the block return failed, the rescue section executes tasks to recover from the error. ... Ansible provides a couple of variables for tasks in the rescue portion of a block: ansible_failed_task, ansible_failed_result"
as well as the source of ansible/playbook/block.py, such functionality isn't implemented yet.
You may need to implement some logic yourself to keep track of the return values of ansible_failed_task and of the host on which the failure happened during execution. One possibility is the add_host module – "Add a host (and alternatively a group) to the ansible-playbook in-memory inventory" – with a parameter such as groups: has_rescued_tasks.
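A minimal sketch of that idea, assuming the group name has_rescued_tasks is purely illustrative: add the rescued host to an in-memory group inside the rescue section, so the other hosts can look it up afterwards.

  tasks:
    - block:
        - name: fail one host
          shell: /bin/false
          when: inventory_hostname == 'host1'
      rescue:
        - name: record the rescued host in a temporary group
          # add_host bypasses the host loop and runs once per batch,
          # so this sketch assumes a single failing host, as in the question
          add_host:
            name: "{{ inventory_hostname }}"
            groups: has_rescued_tasks

    - name: every host still in the play can now see who was rescued
      debug:
        msg: "Rescued hosts: {{ groups['has_rescued_tasks'] | default([]) }}"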
Alternatively, you could investigate further, starting with the default callback plugin and Ansible Issue #48418 "Add stats on rescued/ignored tasks", which added statistics about rescued tasks.
I'm trying to create a playbook which basically consists of two plays on different hosts (don't ask why):
---
- hosts: all
  tasks:
    - name: get the hostname of machine and save it as a variable
      shell: hostname
      register: host_name
      when: ansible_host == "x.x.x.x"   # will be filled in by my application

- hosts: "{{ host_name.stdout }}"
  tasks:
    - name: use the variable as hostname
      shell: whoami
I don't have any hostname information in my application, so I need to trigger my playbook with an IP address, then get the hostname of that machine and save it to a variable that I can use in my other tasks, to avoid a "when" condition on every task.
The problem is that I'm able to use the "host_name" variable in all other fields except "hosts"; when I try to run it, I get this error:
ERROR! The field 'hosts' has an invalid value, which includes an undefined variable. The error was: 'host_name' is undefined
By default, Ansible itself gathers some information about a host. This happens at the beginning of a playbook's execution, right after PLAY, in TASK [Gathering Facts].
This automatic gathering of information about a system can be turned off via gather_facts: no; by default it is active.
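For instance, a sketch of a play header that disables the automatic gathering:

- hosts: all
  gather_facts: no   # skip the automatic "Gathering Facts" task
  tasks:
    - name: your tasks start immediately, without fact gathering
      ping: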
This collected information is called Ansible Facts. An example of the collected facts is shown in the Ansible docs; for your host, you can print out all Ansible Facts:
either in the playbook as a task:
- name: Print all available facts
  debug:
    var: ansible_facts
or via the CLI as an ad-hoc command:
ansible <hostname> -m setup
The Ansible Facts contain values like ansible_hostname, ansible_fqdn, ansible_domain, or even ansible_all_ipv4_addresses. This is the simplest way to work with the hostname of the client.
If you want to output the hostname and IP addresses that Ansible has collected, you can do it with the following tasks for example:
- name: Print hostname
  debug:
    var: ansible_hostname

- name: Print IP addresses
  debug:
    var: ansible_all_ipv4_addresses
If you start your playbook for all hosts, you can check the IP address and stop execution directly for the "wrong" clients.
---
- hosts: all
  tasks:
    - name: terminate execution for wrong hosts
      assert:
        that: '"x.x.x.x" is in ansible_all_ipv4_addresses'
        fail_msg: Terminating because IP did not match
        success_msg: "Host matched. Hostname: {{ ansible_hostname }}"

    # your tasks for the desired host
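If you would rather not repeat that check in every later play, a hedged alternative sketch (the group name matched_by_ip is only illustrative) is to put the matching host into a temporary in-memory group with add_host and target that group in a second play:

---
- hosts: all
  tasks:
    - name: remember every host whose IP matches
      add_host:
        name: "{{ item }}"
        groups: matched_by_ip            # illustrative group name
      loop: "{{ ansible_play_hosts }}"
      when: '"x.x.x.x" in hostvars[item].ansible_all_ipv4_addresses'
      run_once: true                     # add_host bypasses the host loop anyway

- hosts: matched_by_ip
  tasks:
    - name: run the real work only on the matched host
      shell: whoami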
I'm trying to spin up an AWS deployment environment in Ansible, and I want to make it so that if something fails along the way, Ansible tears down everything on AWS that has been spun up so far. I can't figure out how to get Ansible to throw an error within the role.
For example:
<main.yml>
- hosts: localhost
  connection: local
  roles:
    - make_ec2_role
    - make_rds_role
    - make_s3_role
2. Then I want it to run some code based on that error here.
<make_rds_role>
- name: "Make it"
  rds:
    params: etc   # <-- 1. Let's say it fails in the middle here
I've tried:
- name: this command prints FAILED when it fails
  command: /usr/bin/example-command -x -y -z
  register: command_result
  failed_when: "'FAILED' in command_result.stderr"
As well as other things from the documentation, but what I really want is a way to use something like block and rescue, and as far as I can tell those only work within the same playbook and on plays, not roles. Does anyone have a good way to do this?
Wrap the tasks inside your roles in a block/rescue.
Make sure that the rescue block has at least one task – this way Ansible will not mark the host as failed.
Like this:
- block:
    - name: task 1
      ...   # something bad may happen here
    - name: task N
  rescue:
    - assert:   # we need a dummy task here to prevent our host from being failed
        that: ansible_failed_task is defined
Recent versions of Ansible register ansible_failed_task and ansible_failed_result when a rescue block is hit.
So you can do some post_tasks in your main.yml playbook like this:
post_tasks:
  - debug:
      msg: "Failed task: {{ ansible_failed_task }}, failed result: {{ ansible_failed_result }}"
    when: ansible_failed_task is defined
But be warned that this trick will NOT prevent the remaining roles from executing.
So in your example, if make_rds_role fails, Ansible will still apply make_s3_role and run your post_tasks afterwards.
If you need to prevent that, add a check on the ansible_failed_task fact at the beginning of each role, or something similar.
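A hedged sketch of such a guard, assuming you set your own marker fact inside each rescue section (the name deploy_failed is only illustrative) and test it before later roles do any work:

# inside a role, e.g. roles/make_rds_role/tasks/main.yml
- block:
    - name: create the RDS instance
      ...   # the task that may fail
  rescue:
    - name: remember that the deployment is broken
      set_fact:
        deploy_failed: true
    - name: tear down whatever this role created so far
      debug:
        msg: "cleanup tasks for this role go here"

# roles/make_s3_role/tasks/main.yml: wrap the real work so it is skipped
- block:
    - name: create the S3 bucket
      ...   # the rest of this role's tasks
  when: not (deploy_failed | default(false))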
We have a "periodic" tag in our roles that is intended to be run at regular intervals by our Ansible box for file assurance, etc. Would it be possible to have a playbook for periodic runs that calls the other playbooks with the appropriate host groups and tags?
The only way to execute an Ansible playbook "with the appropriate host groups and tags" is to run the ansible-playbook executable. This is the only case in which all the data structures, starting from the inventory, are created in isolation from the currently running playbook.
You can simply call the executable using the command module on the control machine:
- hosts: localhost
  tasks:
    - command: ansible-playbook {{ playbook }} --tags {{ tags }}
You can also use local_action or delegate_to.
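For instance, a sketch of the delegate_to variant (playbook and tags are assumed to be variables you supply):

- hosts: all
  tasks:
    - name: launch another playbook from the control machine
      command: ansible-playbook {{ playbook }} --tags {{ tags }}
      delegate_to: localhost
      run_once: true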
It might be that you want to include plays or use roles instead; however, given the problem description in the question, it's impossible to tell.
Here is what we ended up with. It turns out that tags and variables passed on the command line are inherited all the way down the line, which allowed us to invoke everything like this:
ansible-playbook -t periodic periodic.yml
Which calls a playbook like this:
---
- name: This playbook must be called with the "periodic" tag.
  hosts: 127.0.0.1
  any_errors_fatal: True
  tasks:
    - fail:
      when: not (periodic | default(false) | bool)

- name: Begin periodic runs for type 1 servers
  include: type1-server.yml
  vars:
    servers:
      - host_group1
      - host_group2
      - ...

- name: Begin periodic runs for type 2 servers
  ...
Our "real" playbooks have - hosts: "{{ servers }}" so that they can inherit the value from the parent. The tasks in our roles are tagged with "periodic" for things that need to run on a schedule. We then use systemd to schedule the runs. You can use cron, but systemd is better IMHO. Examples can be provided upon request.
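As a rough illustration of the scheduling half (cron is shown only because it fits in a few lines; a systemd service plus timer does the same job, and the schedule and path below are placeholders):

- name: schedule the periodic playbook on the Ansible box
  cron:
    name: ansible periodic run
    minute: "0"
    hour: "*/4"
    job: ansible-playbook -t periodic /path/to/periodic.yml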
As part of an Ansible playbook, I want to verify whether my servers are in sync.
To do so, I execute a script on each of my servers using the shell module and I register the result into a variable, say result_value.
The tricky part is to verify whether result_value is the same on all the servers. The expected value is not known beforehand.
Is there an idiomatic way to achieve this in Ansible?
One of the possible ways:
---
- hosts: all
  tasks:
    - shell: /usr/bin/my_check.sh
      register: result_value

    - assert:
        that: hostvars[item]['result_value'].stdout == result_value.stdout
      with_items: "{{ play_hosts }}"
      run_once: true
I'm adding a few hosts to the inventory file through a playbook. Now I'm using those newly added hosts in the same playbook. But those newly added hosts are not readable by the same playbook in the same run, it seems, because I get:
skipping: no hosts matched
When I run it separately, i.e. I update the hosts file through one playbook and use the updated hosts through another playbook, it works fine.
I wanted to do something like this recently, using Ansible 1.8.4. I found that add_host needs to use a group name, or the play will be skipped with "no hosts matched". At the same time I wanted play #2 to use facts discovered in play #1. Variables and facts normally remain scoped to each host, so this requires using the magic variables hostvars and groups.
Here's what I came up with. It works, but it's a bit ugly. I'd love to see a cleaner alternative.
# test.yml
#
# The name of the active CFN stack is provided on the command line,
# or is set in the environment variable AWS_STACK_NAME.

# Any host in the active CFN stack can tell us what we need to know.
# In real life the selection is random.
# For a simpler demo, just use the first one.
- hosts: "tag_aws_cloudformation_stack-name_{{ stack|default(lookup('env','AWS_STACK_NAME')) }}[0]"
  gather_facts: no
  tasks:
    # Get some facts about the instance.
    - action: ec2_facts

    # In real life we might have more facts from various sources.
    - set_fact: fubar='baz'

    # This could be any hostname.
    - set_fact: hostname_next='localhost'

    # It's too late for variables set in this play to affect host matching
    # in the next play, but we can add a new host to temporary inventory.
    # Use a well-known group name, so we can set the hosts for the next play.
    # It shouldn't matter if another playbook uses the same name,
    # because this entry is exclusive to the running playbook.
    - name: add new hostname to temporary inventory
      connection: local
      add_host: group=temp_inventory name='{{ hostname_next }}'

# Now proceed with the real work on the designated host.
- hosts: temp_inventory
  gather_facts: no
  tasks:
    # The host has changed, so the facts from play #1 are out of scope.
    # We can still get to them through hostvars, but it isn't easy.
    # In real life we don't know which host ran play #1,
    # so we have to check all of them.
    - set_fact:
        stack='{{ stack|default(lookup("env","AWS_STACK_NAME")) }}'
    - set_fact:
        group_name='{{ "tag_aws_cloudformation_stack-name_" + stack }}'
    - set_fact:
        fubar='{% for h in groups[group_name] %} {{ hostvars[h]["fubar"]|default("") }} {% endfor %}'
    - set_fact:
        instance_id='{% for h in groups[group_name] %} {{ hostvars[h]["ansible_ec2_instance_id"]|default("") }} {% endfor %}'

    # Trim extra leading and trailing whitespace.
    - set_fact: fubar='{{ fubar|replace(" ", "") }}'
    - set_fact: instance_id='{{ instance_id|replace(" ", "") }}'

    # Now we can use the variables instance_id and fubar.
    - debug: var=fubar
    - debug: var=instance_id
# end
It's not entirely clear what you're doing, but from what I gather, you're using the add_host module in a play.
It seems logical that you cannot limit that same play to those hosts, because they don't exist yet... so this can never work:
- name: Play - add a host
  hosts: new_host
  tasks:
    - name: add new host
      add_host: name=new_host
But you're free to add multiple plays to a single playbook file (which you also seem to have figured out):
- name: Play 1 - add a host
  hosts: a_single_host
  tasks:
    - name: add new host
      add_host: name=new_host

- name: Play 2 - do stuff
  hosts: new_host
  tasks:
    - name: do stuff
      ...
It sounds like you are modifying the Ansible inventory file with your playbook, and then wanting to use the new contents of the file. Just modifying the contents of the file on disk, however, won't cause the inventory that Ansible is working with to be updated. The way Ansible works is that it reads that file (and any other inventory source you have) when it first begins and puts the host names it finds into memory. From then on it works only with the inventory that it has stored in memory, the stuff that existed when it first started running. It has no knowledge of any subsequent changes to the file.
But there are ways to do what you want! One option is to add the new host to the inventory file and also load it into memory using the add_host module. That's two separate steps: 1) add the new host to the file's inventory, and then 2) add the same new host to the in-memory inventory with the add_host module:
- name: add a host to in-memory inventory
  add_host:
    name: "{{ new_host_name }}"
    groups: "{{ group_name }}"
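For the first step, writing the host into the inventory file on disk, a rough sketch could use lineinfile; the path and the group header below are placeholders for your own inventory layout:

- name: add the new host to the inventory file on disk
  lineinfile:
    path: /etc/ansible/hosts            # placeholder inventory path
    line: "{{ new_host_name }}"
    insertafter: '^\[webservers\]'      # illustrative group header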
A second option is to tell Ansible to refresh the in-memory inventory from the file. But you have to explicitly tell it to do that. Using this option, you have two related steps: 1) add the new host to the file's inventory, like you already did, and then 2) use the meta module:
- name: Refresh inventory to ensure new instances exist in inventory
  meta: refresh_inventory