We have several playbooks that use hostvars[item]['ansible_nodename'], where item is a host alias. It usually works, but sometimes it does not, giving the following error:
'dict object' has no attribute 'ansible_nodename'
I printed the contents of hostvars and did not see this attribute there.
I could not find documentation for hostvars that would show what we can safely use instead of ansible_nodename. So, the questions are:
Can I safely replace ansible_nodename with ansible_host?
Where can I find a description of the contents of hostvars and of the algorithm by which it is created?
But sometimes it does not, giving the following error...
This can happen because facts for those hosts have not been gathered at that moment. You may want to add an empty play that gathers facts for all hosts at the very top of your playbook:
- name: gather facts
  hosts: all

- name: balancer configuration
  hosts: balancer
  tasks:
    - name: generate configs
      template:
        src: conf.j2
        dest: "/conf/{{ hostvars[item]['ansible_nodename'] }}.conf"
      with_inventory_hostnames: nodes
Without the 'gather facts' play, the configuration task for the balancer will fail because ansible_nodename has not been gathered.
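If gathering facts for all hosts up front is too slow, a narrower variant (a sketch, assuming the nodes group from the loop above) gathers facts only for the hosts the template actually references:

# Sketch: gather facts only for the 'nodes' group used by the template.
- name: gather facts for nodes only
  hosts: nodes
  gather_facts: true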
I have this example role where I install some packages based on the OS and package manager:
# THIS IS IN tasks/main.yml
- name: Ensure archlinux-keyring is updated
  community.general.pacman:
    name: archlinux-keyring
    state: latest
    update_cache: yes
  when: ansible_pkg_mgr == "pacman"
If this role is run as part of a playbook, Ansible will gather the facts and everything is just fine. However, if I run it as a standalone role:
ansible localhost -m include_role -a name=examplerole
I get this error
localhost | FAILED! => {
    "msg": "The conditional check 'ansible_pkg_mgr == \"pacman\"' failed. The error was: error while evaluating conditional (ansible_pkg_mgr == \"pacman\"): 'ansible_pkg_mgr' is undefined"
}
I know I can force Ansible to gather these facts inside the role, but gathering facts over and over again would be super slow. This is a recurring problem I have with several roles; I can't always include them in a playbook, and when they ARE in a playbook, gathering inside the role isn't needed.
Is there any way to check if facts were already gathered, and gather them inside of the role only when needed?
Regarding
...Is there any way to check if facts were already gathered ... gather them inside of the role only when needed?
and if you would like to gather facts based on conditions only, you may have a look at the following example:
- hosts: localhost
  become: false
  gather_facts: false
  tasks:

    - name: Show Gathered Facts
      debug:
        msg: "{{ hostvars['localhost'].ansible_facts }}"

    - name: Gather date and time only
      setup:
        gather_subset:
          - "date_time"
          - "!min"
      when: ansible_date_time is not defined

    - name: Show Gathered Facts
      debug:
        msg: "{{ ansible_facts }}"
which gathers date and time only, and only if that fact was not already defined.
This means, regarding
gathering facts over and over again will be super slow
it is recommended to get familiar with the data structure of the gathered facts and with the possible subsets, in order to gather only the information which is actually needed, as well as with caching facts.
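Applied to the role from the question above, a minimal sketch: put a guarded setup call at the top of the role's tasks/main.yml, so facts are gathered only when the fact the role's conditions rely on is missing (the rest of main.yml stays unchanged):

# Gather facts only when ansible_pkg_mgr is not yet known, e.g. when
# the role is run standalone via `ansible -m include_role`.
- name: Gather facts if not already available
  setup:
  when: ansible_pkg_mgr is not defined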
Further Documentation
setup module – Gathers facts about remote hosts
What is the exact list of Ansible setup min?
I'm trying to convert a playbook which deploys vSphere VMs. The current version
of the playbook gets individual IP addresses and source template information
from vars/main.yml in the role (I'm using the best practice directory layout.)
vms:
  - name: demo-server-0
    ip_address: 1.1.1.1
  - name: demo-server-1
    ip_address: 1.1.1.2
Template and other information is stored elsewhere in the vars.yml file, but
it makes more sense to use the standard inventory, so I created these entries
in the inventory file:
test_and_demo:
  hosts:
    demo-server-0:
      ip-address: 1.1.1.1
    demo-server-1:
      ip-address: 1.1.1.2
  vars:
    vc_template: xdr-template-1
The play is pretty much unchanged, apart from this key change:
FROM:
with_items: '{{ vms }}'
TO:
with_items: "{{ query('inventory_hostnames', 'test_and_demo') }}"
But this throws the following error:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ModuleNotFoundError: No module named 'pyVim'
failed: [localhost] (item=demo-deployer) => {"ansible_loop_var": "item", "changed": false, "item": "demo-deployer", "msg": "Failed to import the required Python library (PyVmomi) on lubuntu's Python /usr/bin/python3. Please read the module documentation and install it in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter"}
I don't believe this is a platform issue, as reverting back to with_items: '{{ vms }}'
works just fine (vars/main.yml still exists and duplicates the data).
I'm probably using query incorrectly, but can anyone give me a hint about what I'm doing wrong?
Accessing sub-values
If I can get the VMs to deploy using with_items: + query, I'd then need to access the host and group variables for the various items I need to specify. Can someone advise me here, too?
Many thanks.
Although I was convinced the problem was in the way I was doing the query, it was actually in the root play, which I hadn't noticed. With become: yes, I think it was switching to the root user, which didn't have the Python module installed.
Once that was removed, the problem was resolved:
- hosts: localhost
  roles:
    - deploy_demo_servers
  become: yes
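Regarding the "Accessing sub-values" part of the question: once the loop yields inventory hostnames, host and group variables are reachable through hostvars. A hedged sketch, assuming the test_and_demo inventory above (the hyphenated ip-address key requires bracket notation; vc_template is a group var, but group vars are merged into each host's hostvars):

- name: Show per-host and group values
  debug:
    msg: "{{ item }}: ip={{ hostvars[item]['ip-address'] }}, template={{ hostvars[item]['vc_template'] }}"
  with_items: "{{ query('inventory_hostnames', 'test_and_demo') }}"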
The scenario is: I have several services running on several hosts. There is one special service, the reverseproxy/loadbalancer. Any service needs to configure that special service on the host that runs the rp/lb service. During installation/update/deletion of a random service with an Ansible role, I need to call the reverseproxy role on that specific host to configure the corresponding vhost.
At the moment, from the service's role, I call a specific task file in the reverseproxy role to add or remove a vhost, using include_role with some vars set (a very simple example without service- and inventory-specific vars):
- name: "Configure ReverseProxy"
include_role:
name: reverseproxy
tasks_from: vhost_add
apply:
delegate_to: "{{ groups['reverseproxy'][0] }}"
vars:
reverse_proxy_url: "http://{{ ansible_fqdn }}:{{ service_port }}/"
reverse_proxy_domain: "sub.domain.tld"
I have three problems.
I know it's not a good idea to build such dependencies between roles and different hosts. I don't know a better way, though, especially if you think about the situation where you need to do some extra stuff after creating the vhost (e.g. configure the service via a REST API, which needs the external FQDN). In the case of two separate playbooks with a "backend" service and a "reverseproxy" service, I would need a third playbook for configuring "hanging" services. Also, I'm not sure whether I can retrieve the correct backend URL in the reverseproxy role (just think about the HTTP scheme or paths). That doesn't sound easy, does it?
Earlier I had separate roles for adding/removing vhosts to a reverseproxy. Those roles didn't have dependencies, but I needed to duplicate several defaults, templates, vars etc., which isn't nice either. Then I changed that to a single role. Of course, in my opinion this isn't really what a "role" should be. A role is something like "webserver" or "reverseproxy" (a state), not something like "add_vhost_to_reverseproxy" (a verb). That would be something like a playbook; but is calling a parameterized playbook via a role a good idea, or even possible? The main problem is that the state of the reverseproxy is the sum of all services in the inventory.
In the case of that single included role, including it also starts all dependent roles (custom configuration, firewall, etc.). Nevertheless, in that case I found out that the delegation did not use the facts of the delegated host.
I tested that with the following example - the inventory:
all:
  hosts:
    server1:
      my_var: a
    server2:
      my_var: b
  children:
    service:
      hosts:
        server1:
    reverseproxy:
      hosts:
        server2:
And a playbook which assigns role-a to the group service. role-a has a task like:
- block:
    - setup:

    - name: "Include role b on delegated {{ groups['reverseproxy'][0] }}"
      include_role:
        name: role-b
      delegate_to: "{{ groups['reverseproxy'][0] }}"
      delegate_facts: true # or false or omit - it has no effect on Ansible 2.9 and 2.10
And in role-b, merely outputting my_var from the inventory prints:

TASK [role-b : My_Var on server1] *******************
ok: [server1 -> <ip-of-server2>] =>
  my_var: a
This tells me that role-b, which should run on server2, has the facts of server1. So configuring the "reverseproxy" service is done in the context of the "backend" service, which would have several other issues if you think about firewall dependencies etc. I can avoid that by using tags, but then I need to run the playbook not just with the tag of the service, but with all the tags I want to configure; and I cannot use include_tasks with apply: tags: anymore inside a role that also includes other roles (the tags would be applied to all subtasks...). I miss something like include_role, but restricted to specific tags or ignoring dependencies. This isn't a bug, but it has possible side effects in the case of delegate_to.
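For illustration, one way to read the delegated host's own inventory variables, regardless of whose facts are currently in scope, is to go through hostvars explicitly; a sketch:

# Sketch: read server2's my_var directly via hostvars, instead of
# relying on the delegation to switch the variable context.
- name: "My_Var of the delegated reverseproxy host"
  debug:
    msg: "{{ hostvars[groups['reverseproxy'][0]]['my_var'] }}"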
I'm not really sure what the question is, exactly. Perhaps: what is a good way to handle dependencies between hosts and roles in Ansible, especially when they are not on the same host?
I am sure I do not fully understand your exact problem, but when I was dealing with load balancers I used a template. So this was my disable_workers playbook:
---
- hosts: "{{ ip_list | default( 'jboss' ) }}"
  tasks:
    - name: Tag JBoss service as 'disabled'
      ec2_tag:
        resource: "{{ ec2_id }}"
        region: "{{ region }}"
        state: present
        tags:
          State: 'disabled'
      delegate_to: localhost

    - action: setup

- hosts: httpd
  become: yes
  become_user: root
  vars:
    uriworkermap_file: "{{ httpd_conf_dir }}/uriworkermap.properties"
  tasks:
    - name: Refresh inventory cache
      ec2_remote_facts:
        region: "{{ region }}"
      delegate_to: localhost

    - name: Update uriworkermap.properties
      template:
        backup: yes
        dest: "{{ uriworkermap_file }}"
        mode: 0644
        src: ./J2/uriworkermap.properties.j2
Do not expect this to work as-is. It was v1.8 on AWS hosts, and things may have changed.
But the point is to set user-defined facts, on each host, for that host's desired state (enabled, disabled, stopped), reload the facts, and then run the Jinja template that uses those facts.
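A hedged sketch of that pattern (the fact name worker_state is an assumption for illustration, not the original code): the first play records the desired state as a user-defined fact on each backend host, and the second renders the template on the proxy host, where it can read every backend's state through hostvars:

- hosts: jboss
  tasks:
    # Record this host's desired state as a user-defined fact.
    - set_fact:
        worker_state: disabled

- hosts: httpd
  tasks:
    # Inside the template, iterate over groups['jboss'] and read
    # hostvars[host]['worker_state'] to include or skip each worker.
    - template:
        src: ./J2/uriworkermap.properties.j2
        dest: "{{ uriworkermap_file }}"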
I'm sitting in front of a fairly complex Ansible project that we're using to set up our local development environments (multiple VMs), and there's one role that uses the facts gathered by Ansible to set up the /etc/hosts file on every VM. Unfortunately, when you want to run the playbook for one host only (using the --limit parameter), the facts from the other hosts are (obviously) missing.
Is there a way to force Ansible to gather facts on all hosts, even if you limit the playbook to one specific host?
We tried to add a play to the playbook to gather facts from all hosts, but of course that also gets limited to the one host given by the --limit parameter. If there were a way to force this play to run on all hosts before the other plays, that would be perfect.
I've googled a bit and found a solution using fact caching with Redis, but since our playbook is used locally, I wanted to avoid the need for additional software. I know it's not a big deal, but I was looking for a "cleaner", Ansible-only solution and was wondering if one exists.
Ansible version 2 introduced a clean, official way to do this using delegated facts (see: http://docs.ansible.com/ansible/latest/playbooks_delegation.html#delegated-facts).
The condition when: hostvars[item]['ansible_default_ipv4'] is not defined is a check to ensure you don't gather facts for a host you already know the facts about:
---
# This play will still work as intended if called with --limit "<host>" or --tags "some_tag"
- name: Hostfile generation
  hosts: all
  become: true
  pre_tasks:
    - name: Gather facts from ALL hosts (regardless of limit or tags)
      setup:
      delegate_to: "{{ item }}"
      delegate_facts: True
      when: hostvars[item]['ansible_default_ipv4'] is not defined
      with_items: "{{ groups['all'] }}"
  tasks:
    - template:
        src: "templates/hosts.j2"
        dest: "/etc/hosts"
      tags:
        - hostfile
...
In general the way to get facts for all hosts even when you don't want to run tasks on all hosts is to do something like this:
- hosts: all
  tasks: [ ]
But as you mentioned, the --limit parameter will limit what hosts this would be applied to.
I don't think there's a way to simply tell Ansible to ignore the --limit parameter on any plays. However there may be another way to do what you want entirely within Ansible.
I haven't used it personally, but as of Ansible 1.8 fact caching is available. In a nutshell, with fact caching enabled Ansible will use a redis server to cache all the facts about hosts it encounters and you'll be able to reference them in subsequent playbooks:
With fact caching enabled, it is possible for machine in one group to reference variables about machines in the other group, despite the fact that they have not been communicated with in the current execution of /usr/bin/ansible-playbook.
This still seems to be an issue without a clean solution here in 2016, but newer versions of Ansible offer a "jsonfile" fact caching backend, which seems to be a decent compromise to installing Redis locally just to address this need. Now I just fire off an ansible all -m setup before running a playbook with the --limit option. Good enough for jazz!
http://docs.ansible.com/ansible/playbooks_variables.html#fact-caching
You could modify your playbook to:
...
- hosts: "{{ limit_hosts | default('default_group') }}"
  tasks:
    ...
...
And when you run it, if limit_hosts is not defined (the normal state), then it will run on the default_group inventory group, BUT if you run it as:
ansible-playbook --extra-vars "limit_hosts=myHost" myplaybook.yml
Then it will only run on your myHost, but you could still have other sections with different hosts: declarations, for fact gathering or anything else.
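Combining this with the empty fact-gathering play shown earlier gives a self-contained sketch (the debug task is just a hypothetical placeholder): the first play gathers facts for all hosts, since an empty task list still triggers fact gathering, and the second is narrowed by the variable:

- hosts: all
  tasks: [ ]

- hosts: "{{ limit_hosts | default('default_group') }}"
  tasks:
    # Facts from the first play are available here via hostvars.
    - debug:
        msg: "{{ ansible_default_ipv4.address | default('unknown') }}"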
I have a task file that looks like this:
- name: Drop schemas
  mysql_db: state=import name=mysql target={{ role_path }}/files/schemas/drop-imdb-perf.sql login_user={{ MYSQL_ROOT_USER }} login_password={{ MYSQL_ROOT_PWD }} login_host={{ inventory_hostname }}
I'm invoking it from a playbook that looks like this:
- name: Drop mySQL data
  gather_facts: no
  hosts: imdb
  connection: local
  tags:
    - mysql-data-drop
  tasks:
    - include: ../roles/mysql/tasks/drop-perf.yml
I'm using Ansible version 1.9.4 so I'm thinking role_path should be a valid variable.
But when I run the playbook, I get this output:
TASK: [Drop schemas] **************************************************
fatal: [imdb] => One or more undefined variables: 'role_path' is undefined
I can't figure out why role_path is undefined. According to the Ansible docs, for versions 1.8 and above it should be populated with the directory of the role in question, but I'm clearly mistaken about something or other.
I don't see you using any roles. Without looking into the Ansible code, it seems obvious that role_path is only defined within a role. Including a file of a role does not make it run in the context of a role, though.
If your include is intentional, role_path won't be defined. You could try to set it yourself together with the include, like so:
tasks:
  - include: ../roles/mysql/tasks/drop-perf.yml
    role_path: ../roles/mysql
That might work or not, since role_path is still a magic variable and therefore might not be manually changeable.
If you actually meant to include the role, then you need to define your playbook like this:
- name: Drop mySQL data
  gather_facts: no
  hosts: imdb
  connection: local
  tags:
    - mysql-data-drop
  roles:
    - role: ../roles/mysql
But my guess is that you're trying to run only a single tasks file of that role, not the whole role. What you're trying to do there seems to go against best practice, though. My recommendation would be to move the tag mysql-data-drop into the tasks of the file drop-perf.yml, because that's what tags are for: to trigger a limited set of tasks of roles or playbooks, as sketched below.
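A minimal sketch of that recommendation: the task in drop-perf.yml carries the tag itself, so it can be triggered selectively with --tags mysql-data-drop while the role is included the normal way:

# ../roles/mysql/tasks/drop-perf.yml - the task itself carries the tag.
- name: Drop schemas
  mysql_db: state=import name=mysql target={{ role_path }}/files/schemas/drop-imdb-perf.sql login_user={{ MYSQL_ROOT_USER }} login_password={{ MYSQL_ROOT_PWD }} login_host={{ inventory_hostname }}
  tags:
    - mysql-data-drop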