I'm writing an Ansible custom module that does the same as the uri module, with some additional features. In particular, I would like the data returned to be available either as a global var or as a host-specific var (where the host is the inventory host running the task), based on a value passed to the module. However, no matter what I do, it seems that I can only create a global variable.
Let's say that I run the playbook for N hosts and I execute the custom module only once (run_once: yes).
[...]
- name: Run the custom module and generate output in custom_var
  run_once: yes
  custom_module:
    field1: ...
    field2: ...
    global_data: yes   # yes: create a global fact called custom_var

- name: DEBUG - Print the content of custom_var
  debug: msg="{{ custom_var }}"
This works fine, all N hosts are able to see the custom_var.
Is there a way that I can have custom_var defined only for the actual host that executes the task?
Strangely enough, I also tried to register the result of the first task as follows:
[...]
- name: Run the custom module and generate output in custom_var
  run_once: yes
  custom_module:
    field1: ...
    field2: ...
    global_data: no   # no: don't create any global fact
  register: register_task

- name: DEBUG - Print the content of register_task
  debug: msg="{{ register_task }}"
but it looks like that was also available to all the hosts in the inventory (with the same content). Is that the expected behaviour?
And any idea about how to make the custom_var available only for the host that actually runs the task?
Thanks in advance!
Yes, this is expected behaviour with run_once: yes, as it overrides the host loop.
With this option set, the task is executed only once (on the first host in a batch), but the result is copied to all hosts in the same batch. So you end up with custom_var/register_task defined for every host.
If you want the opposite, you can replace run_once: yes with a conditional statement like:
when: play_hosts.index(inventory_hostname) == 0
This will execute the task only on the first host but will not override the host loop (the task is still skipped for the other hosts because of the conditional). This way the registered fact will be available only for the first host.
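Put together, a minimal sketch of this conditional approach (custom_module and its fields are placeholders from the question):

```yaml
- name: Run the custom module only on the first host, without run_once
  custom_module:
    field1: ...
    field2: ...
    global_data: no
  register: register_task
  when: play_hosts.index(inventory_hostname) == 0

# The module result now lives only in the first host's hostvars; on the
# other hosts the task is skipped, so register_task there only records
# the skip (skipped: true) rather than the module output.
- name: DEBUG - Print the fact on the first host only
  debug: msg="{{ register_task }}"
  when: play_hosts.index(inventory_hostname) == 0
```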
Related
One can recover failed hosts using rescue. How can I configure Ansible so that the other hosts in the play are aware of the host which will be recovered?
I thought I was smart, and tried using difference between ansible_play_hosts_all and ansible_play_batch, but Ansible doesn't list the failed host, since it's rescued.
---
- hosts:
    - host1
    - host2
  gather_facts: false
  tasks:
    - block:
        - name: fail one host
          shell: /bin/false
          when: inventory_hostname == 'host1'

        # returns an empty list
        - name: list failed hosts
          debug:
            msg: "{{ ansible_play_hosts_all | difference(ansible_play_batch) }}"
      rescue:
        - shell: /bin/true
"How can I configure Ansible so that the other hosts in the play are aware of the host which will be recovered?"
It seems that, according to the documentation Handling errors with blocks
If any tasks in the block return failed, the rescue section executes tasks to recover from the error. ... Ansible provides a couple of variables for tasks in the rescue portion of a block: ansible_failed_task, ansible_failed_result
as well as the source of ansible/playbook/block.py, such functionality isn't implemented yet.
You may need to implement some logic that keeps track of the return values of ansible_failed_task and of the host on which the failure happened during execution. It might be possible to use the add_host module (Add a host (and alternatively a group) to the ansible-playbook in-memory inventory) with a parameter like groups: has_rescued_tasks.
Or do further investigation, beginning with the default Callback plugin and Ansible Issue #48418 "Add stats on rescued/ignored tasks", since it added statistics about rescued tasks.
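A minimal sketch of the add_host idea (the group name has_rescued_tasks is just an example, not something built into Ansible):

```yaml
---
- hosts:
    - host1
    - host2
  gather_facts: false
  tasks:
    - block:
        - name: fail one host
          shell: /bin/false
          when: inventory_hostname == 'host1'
      rescue:
        # Record the recovering host in an in-memory group. Note that
        # add_host bypasses the host loop (it runs once), so with several
        # simultaneously failed hosts you would need a loop over the
        # failed hosts instead of relying on inventory_hostname.
        - name: remember that this host was rescued
          add_host:
            name: "{{ inventory_hostname }}"
            groups: has_rescued_tasks

    - name: every remaining host can now see the rescued host(s)
      debug:
        msg: "{{ groups['has_rescued_tasks'] | default([]) }}"
```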
When using Ansible, the controller is the machine you execute playbooks from.
I want to only run an Ansible task in a playbook when the controller has a file.
Is there a way to write something like this and exists(x.tgz), below?
- name: "copy up and unarchive the x for x if we have it on our controller"
  unarchive:
    src: x.tgz
    dest: /usr/local/lib/x
  when: "{{ { enable_me | bool } and exists(x.tgz) }}"
I want to only run an Ansible task in a playbook when the controller has a file.
By default, lookup plugins execute and are evaluated on the Ansible control machine only.
Is there a way to write something like this and exists(x.tgz), below?
Therefore, instead of using multiple tasks to check for the file's existence (dealing with delegation and running once, registering the result, and running tasks conditionally on the registered result), you could probably write something like
when: ( enable_me | bool ) and lookup('file', './x.tgz', errors='warn')
since it will
- check whether the file exists on the Ansible control node
- return the path to the file found
- evaluate to true if a path was returned
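Applied to the unarchive task from the question, a sketch (the paths and the enable_me flag are the question's placeholders):

```yaml
- name: "copy up and unarchive x if we have it on our controller"
  unarchive:
    src: x.tgz
    dest: /usr/local/lib/x
  # The lookup runs on the controller; errors='warn' makes a missing
  # file yield an empty (falsy) result instead of failing the task.
  when: ( enable_me | bool ) and lookup('file', './x.tgz', errors='warn')
```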
Further Documentation
Lookup plugins
Like all templating, lookups execute and are evaluated on the Ansible control machine.
first_found lookup – return first file found from list
Further Q&A
How to search for ... a file using Ansible when condition
my playbook structure looks like:
- hosts: all
  name: all
  roles:
    - roles1
    - roles2
In the tasks of roles1, I define a variable like this:
---
# tasks for roles1
- name: Get the zookeeper image tag   # rel3.0
  run_once: true
  shell: echo '{{ item.split(":")[-1] }}'   # here the string rel3.0 is extracted normally
  with_items: "{{ ret.stdout.split('\n') }}"
  when: "'zookeeper' in item"
  register: zk_tag
ret.stdout:
Loaded image: test/old/kafka:latest
Loaded image: test/new/mysql:v5.7
Loaded image: test/old/zookeeper:rel3.0
In tasks of roles2, I want to use the zk_tag variable
- name: Test if the variable zk_tag can be used in roles2
  debug: var={{ zk_tag.stdout }}
Error :
The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'stdout'
I think I encountered the following two problems:
1. When a variable is registered with register on a task that also has a when condition, the variable cannot be used by all groups. How can I solve this and make the variable available to all groups?
2. As my title says: how to use variables between different roles in Ansible?
You're most likely starting a new play for a new host, meaning all previously collected vars are lost.
What you can do is pass a var to another host with the add_host module.
- name: Pass variable from this play to the other host in the same play
  add_host:
    name: hostname2
    var_in_play_2: "{{ var_in_play_1 }}"
--- EDIT ---
It's a bit unclear. Why do you use the when statement in the first place if you want the value to be available to every host in the play?
You might want to place vars in the group_vars/all.yml file.
Also, as I read it, using add_host should be the way to go. Can you post your playbook and its output somewhere, e.g. on pastebin?
If there is any chance the var is not defined because of a when condition, you should use a default value to force the var to be defined when using it. While you are at it, use the debug module for your tests rather than echoing something in a shell
- name: Debug my var
  debug:
    msg: "{{ docker_exists | default(false) }}"
Is there an easy way to log output from multiple remote hosts to a single file on the server running ansible-playbook?
I have a variable called validate which stores the output of a command executed on each server. I want to take validate.stdout_lines and drop the lines from each host into one file locally.
Here is one of the snippets I wrote, but it did not work:
- name: Write results to logfile
  blockinfile:
    create: yes
    path: /var/log/ansible/log
    insertafter: BOF
    block: "{{ validate.stdout }}"
  delegate_to: localhost
When I executed my playbook w/ the above, it was only able to capture the output from one of the remote hosts. I want to capture the lines from all hosts in that single /var/log/ansible/log file.
One thing you should do is to add a marker to the blockinfile to wrap the result from each single host in a unique block.
The second problem is that the tasks run in parallel (even with delegate_to: localhost, because the loop here is realised by the Ansible engine), with one task effectively overwriting another's changes to /var/log/ansible/log.
As a quick workaround you can serialise the whole play:
- hosts: ...
  serial: 1
  tasks:
    - name: Write results to logfile
      blockinfile:
        create: yes
        path: /var/log/ansible/log
        insertafter: BOF
        block: "{{ validate.stdout }}"
        marker: "# {{ inventory_hostname }} {mark}"
      delegate_to: localhost
The above produces the intended result, but if serial execution is a problem, you might consider writing your own loop for this single task (for ideas refer to support for "serial" on an individual task #12170).
Speaking of other methods: in two tasks, you can concatenate the results into a single list (no issue with parallel execution then, but pay attention to delegated facts) and then write it to a file with the copy module (see Write variable to a file in Ansible).
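A sketch of that two-task method (assuming each host registered validate earlier in the play; variable names are illustrative):

```yaml
- name: Concatenate the results from all hosts into one string
  run_once: true
  set_fact:
    combined_log: >-
      {{ ansible_play_hosts
         | map('extract', hostvars, ['validate', 'stdout'])
         | join('\n') }}

# A single write, so there is no parallel-overwrite problem.
- name: Write the combined results to a single local file
  run_once: true
  delegate_to: localhost
  copy:
    content: "{{ combined_log }}"
    dest: /var/log/ansible/log
```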
As part of an Ansible playbook, I want to verify whether my servers are in sync.
To do so, I execute a script on each of my servers using the shell module and I register the result into a variable, say result_value.
The tricky part is to verify whether result_value is the same on all the servers. The expected value is not known beforehand.
Is there an idiomatic way to achieve this in Ansible?
One of the possible ways:
---
- hosts: all
  tasks:
    - shell: /usr/bin/my_check.sh
      register: result_value

    - assert:
        that: hostvars[item]['result_value'].stdout == result_value.stdout
      with_items: "{{ play_hosts }}"
      run_once: true
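An equivalent variant (my own sketch, not from the original answer) collects every host's output and asserts there is only one distinct value:

```yaml
- assert:
    that: >-
      play_hosts
      | map('extract', hostvars, ['result_value', 'stdout'])
      | unique | length == 1
    fail_msg: "result_value differs between hosts"
  run_once: true
```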