Ansible not caching facts for later use - vagrant

I'm using Vagrant and Ansible 1.9 to create a QA environment with an HAProxy front-end (in a loadbalancer group) and two Jetty backend servers (in a webservers group). The haproxy.cfg template contains the following lines to show what Ansible is caching:
{% for host in groups['webservers'] %}
server {{ host }} {{ hostvars[host] }}:8080 maxconn 1024
{% endfor %}
The final config would use something like:
server {{ host }} {{ hostvars[host]['ansible_eth1']['ipv4']['address'] }}:8080 maxconn 1024
to configure the ip address of each backend service.
Based on http://docs.ansible.com/guide_rolling_upgrade.html, I'm using the following to try to get Ansible to cache the IP address of each webserver:
- hosts: webservers
  user: vagrant
  name: Gather facts from webservers
  tasks: []
The webservers are built first, and the loadbalancer last. However, the webservers' IP addresses are not being cached. This is all that's there:
server qa01 {'ansible_ssh_host': '127.0.0.1', 'group_names': ['webservers'], 'inventory_hostname': 'qa01', 'ansible_ssh_port': 2222, 'inventory_hostname_short': 'qa01'}:8080 maxconn 1024
What do I need to do for those values to be cached? It seems like Ansible knows the values, since:
- hosts: all
  user: vagrant
  sudo: yes
  tasks:
    - name: Display all variables/facts known for a host
      debug: var=hostvars[inventory_hostname]
will display the full collection of values, including the IP address I need.

I learned from http://jpmens.net/2015/01/29/caching-facts-in-ansible/ that gathered facts are not kept between playbook runs by default. But all the facts, including the info I need from the webservers, can be made available when provisioning the loadbalancer by configuring Ansible to persist the fact cache on disk. Creating this ansible.cfg in the same folder as my Vagrantfile fixes the issue:
[defaults]
fact_caching = jsonfile
fact_caching_connection = /tmp/mycachedir
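With those two lines in place, the facts gathered while provisioning the webservers are persisted to /tmp/mycachedir and read back when the loadbalancer is provisioned, so the lookup from the question resolves as intended. A sketch of the working template loop, assuming eth1 is the private Vagrant interface:
{% for host in groups['webservers'] %}
server {{ host }} {{ hostvars[host]['ansible_eth1']['ipv4']['address'] }}:8080 maxconn 1024
{% endfor %}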
This block was unnecessary and I removed it:
- hosts: webservers
  user: vagrant
  name: Gather facts from webservers
  tasks: []

Related

Ansible - How to apply a different ssh config to a specific host?

Below is the play for applying the ssh config to a specific host:
- name: Add a host in the configuration
  community.general.ssh_config:
    ssh_config_file: "{{ ssh_config_path }}"
    host: "example.com"
    state: present
We have two variants of ssh config: ssh_config1 and ssh_config2.
ssh_config1 needs to be applied to a group of hosts (group1).
ssh_config2 needs to be applied to a single specific host, host1.abc.com (not in group1).
How should the play perform this? The requirement is to apply a specific ssh config depending on the target host or group.
Taking for granted that:
this playbook will target any hosts in your inventory
you already have a valid inventory containing group1, the specific host host1.abc.com, and any others
you want to skip the config task for any host that does not have ssh_config_path defined
you have removed any other definition of the ssh_config_path variable from your environment
In group_vars/group1.yml
---
ssh_config_path: /path/to/ssh_config1
In host_vars/host1.abc.com.yml
---
ssh_config_path: /path/to/ssh_config2
Then the following (untested) playbook should meet your requirement:
---
- hosts: all
  gather_facts: false
  tasks:
    - name: Add a host in the configuration
      community.general.ssh_config:
        ssh_config_file: "{{ ssh_config_path }}"
        host: "example.com"
        state: present
      when: ssh_config_path is defined
Alternatively, if you want to run the above only on the given hosts, with the exact same configuration and inner task, you can simply replace the hosts: stanza above with:
hosts: group1:host1.abc.com
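Put together, that variant would be (again untested, with the same inner task):
---
- hosts: group1:host1.abc.com
  gather_facts: false
  tasks:
    - name: Add a host in the configuration
      community.general.ssh_config:
        ssh_config_file: "{{ ssh_config_path }}"
        host: "example.com"
        state: present
      when: ssh_config_path is defined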

Ansible role dependencies and facts with delegate_to

The scenario is: I have several services running on several hosts. There is one special service, the reverse proxy/load balancer. Every other service needs to configure that special service on the host that runs it. During installation/update/deletion of a random service with an Ansible role, I need to call the ReverseProxy role on that specific host to configure the corresponding vhost.
At the moment I call a specific tasks file in the reverse proxy role via include_role to add or remove a vhost for the service, setting some vars (a very simple example without service- and inventory-specific vars):
- name: "Configure ReverseProxy"
include_role:
name: reverseproxy
tasks_from: vhost_add
apply:
delegate_to: "{{ groups['reverseproxy'][0] }}"
vars:
reverse_proxy_url: "http://{{ ansible_fqdn }}:{{ service_port }}/"
reverse_proxy_domain: "sub.domain.tld"
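The removal path would presumably be the mirror image; a sketch assuming the role also ships a vhost_remove tasks file:
- name: "Remove vhost from ReverseProxy"
  include_role:
    name: reverseproxy
    tasks_from: vhost_remove
    apply:
      delegate_to: "{{ groups['reverseproxy'][0] }}"
  vars:
    reverse_proxy_domain: "sub.domain.tld"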
I have three problems.
I know it's not a good idea to build such dependencies between roles and different hosts, but I don't know a better way, especially when you consider the situation where you need to do extra work after creating the vhost (e.g. configure the service via a REST API, which needs the external FQDN). With two separate playbooks for the "backend" service and the "reverseproxy" service, I would need a third playbook to configure "hanging" services. I'm also not sure whether I can reliably derive the correct backend URL inside the reverse proxy role (think of the HTTP scheme or paths). That doesn't sound easy, does it?
Earlier I had separate roles for adding/removing vhosts to a reverse proxy. Those roles had no dependencies, but I needed to duplicate several defaults, templates, vars, etc., which isn't nice either. So I changed it to a single role. Of course, in my opinion, that isn't really what a "role" should be. A role is something like "webserver" or "reverseproxy" (a state), not something like "add_vhost_to_reverseproxy" (a verb). That would be more like a playbook - but is calling a parameterized playbook via a role a good idea, or even possible? The main problem is that the state of the reverse proxy is the sum of all services in the inventory.
With that single role, including it also starts all of its dependent roles (custom configuration, firewall, etc.). More importantly, in that case I found out that the delegation did not use the facts of the delegated host.
I tested that with the following example - the inventory:
all:
  hosts:
    server1:
      my_var: a
    server2:
      my_var: b
  children:
    service:
      hosts:
        server1:
    reverseproxy:
      hosts:
        server2:
And a playbook which assigns role-a to the hosts of the service group. role-a has a task like:
- block:
    - setup:
    - name: "Include role b on delegated {{ groups['reverseproxy'][0] }}"
      include_role:
        name: role-b
      delegate_to: "{{ groups['reverseproxy'][0] }}"
      delegate_facts: true  # or false or omit - it has no effect on Ansible 2.9 and 2.10
And in role-b, a task that simply prints my_var from the inventory outputs:
TASK [role-b : My_Var on server1] *******************
ok: [server1 -> <ip-of-server2>] =>
my_var: a
This tells me that role-b, which should run on server2, has the facts of server1. So configuring the "reverseproxy" service is done in the context of the "backend" service, which brings several other issues - think of firewall dependencies, etc. I can avoid that by using tags, but then I need to run the playbook not just with the tag of the service but with all the tags I want to configure, and I can no longer use include_tasks with apply/tags inside a role that also includes other roles (the tags get applied to all subtasks). I miss something like include_role limited to specific tags, or one that ignores dependencies. This isn't a bug, but it has surprising side effects in combination with delegate_to.
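One way to work around the fact issue inside role-b is to index hostvars for the delegation target explicitly instead of relying on delegated facts; a sketch, not from the original post:
- name: Read my_var from the delegation target itself
  debug:
    var: hostvars[groups['reverseproxy'][0]]['my_var']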
I'm not really sure how to phrase the question. Essentially: what is a good way to handle dependencies between roles and hosts in Ansible, especially when they do not live on the same host?
I am sure I do not fully understand your exact problem, but when I was dealing with load balancers I used a template. So this was my disable_workers playbook:
---
- hosts: "{{ ip_list | default( 'jboss' ) }}"
  tasks:
    - name: Tag JBoss service as 'disabled'
      ec2_tag:
        resource: "{{ ec2_id }}"
        region: "{{ region }}"
        state: present
        tags:
          State: 'disabled'
      delegate_to: localhost
    - action: setup

- hosts: httpd
  become: yes
  become_user: root
  vars:
    uriworkermap_file: "{{ httpd_conf_dir }}/uriworkermap.properties"
  tasks:
    - name: Refresh inventory cache
      ec2_remote_facts:
        region: "{{ region }}"
      delegate_to: localhost
    - name: Update uriworkermap.properties
      template:
        backup: yes
        dest: "{{ uriworkermap_file }}"
        mode: 0644
        src: ./J2/uriworkermap.properties.j2
Do not expect this to work as-is. It was v1.8 on AWS hosts, and things may have changed.
But the point is to set user-defined facts, on each host, for that host's desired state (enabled, disabled, stopped), reload the facts, and then run the Jinja template that uses those facts.
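The template can then branch on those per-host facts. A generic Jinja sketch of the pattern (the worker_state variable and the line format are illustrative assumptions, not the real uriworkermap.properties.j2):
{% for host in groups['jboss'] %}
{% if hostvars[host]['worker_state'] | default('enabled') == 'enabled' %}
/app/*={{ host }}
{% endif %}
{% endfor %}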

How to get ip or hostname of Ansible server in jinja2 template

How do I get the IP or hostname of the Ansible server in a Jinja2 template? I'm talking about the IP of the Ansible control machine, not the values of the target servers such as {{ ansible_fqdn }}.
Make sure you have gathered facts for localhost. Do this in a separate play in your playbook if needed. The following should be sufficient:
- name: Simply gather facts for localhost
  hosts: localhost
  gather_facts: true
Then use the var from localhost in your template, i.e. {{ hostvars['localhost'].ansible_fqdn }}.
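Combined into one playbook, that could look like the following sketch (the target group and file names are assumptions):
---
- name: Simply gather facts for localhost
  hosts: localhost
  gather_facts: true

- name: Use the control machine's facts on the real targets
  hosts: webservers
  tasks:
    - name: Render a template that embeds the control machine's FQDN
      template:
        src: mytemplate.j2   # contains {{ hostvars['localhost'].ansible_fqdn }}
        dest: /tmp/myconfig.conf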

Use Ansible control master's IP address in Jinja template

I would like to insert an IP address into a J2 template which is used by an Ansible playbook. That IP address is not the address of the host being provisioned, but the IP of the host from which the provisioning is done. Everything I have found so far covers using variables/facts related to the hosts being provisioned.
In other words: the IP I’d like to insert is the one in ['ansible_default_ipv4']['address'] when executing ansible -m setup 127.0.0.1.
I think that I could use a local playbook to write a dynamically generated template file containing the IP, but I was hoping that this might be possible “the Ansible way”.
Just use this (it takes the first field of the target's SSH_CLIENT environment variable, which holds the IP address of the connecting client, i.e. the control machine):
{{ ansible_env["SSH_CLIENT"].split()[0] }}
You can force Ansible to fetch facts about the control host by running the setup module locally by using either a local_action or delegate_to. You can then either register that output and parse it or simply use set_fact to give it a useful name to use later in your template.
An example play might look something like:
tasks:
  - name: setup
    setup:
    delegate_to: 127.0.0.1
  - name: set ansible control host IP fact
    set_fact:
      ansible_control_host_address: "{{ hostvars[inventory_hostname]['ansible_eth0']['ipv4']['address'] }}"
    delegate_to: 127.0.0.1
  - name: template file with ansible control host IP
    template:
      src: /path/to/template.j2
      dest: /path/to/destination/file
And then use the ansible_control_host_address variable in your template as normal:
...
Ansible control host IP: {{ ansible_control_host_address }}
...
This is how I solved the same problem (Ansible 2.7):
- name: Get local IP address
  setup:
  delegate_to: 127.0.0.1
  delegate_facts: yes

- name: Add to hosts file
  become: yes
  lineinfile:
    line: "{{ hostvars['127.0.0.1']['ansible_default_ipv4']['address'] }} myhost"
    path: /etc/hosts
Seems to work like a charm. :-)

Ansible ec2 only provision required servers

I've got a basic Ansible playbook like so:
---
- name: Provision ec2 servers
  hosts: 127.0.0.1
  connection: local
  roles:
    - aws

- name: Configure {{ application_name }} servers
  hosts: webservers
  sudo: yes
  sudo_user: root
  remote_user: ubuntu
  vars:
    - setup_git_repo: no
    - update_apt_cache: yes
  vars_files:
    - env_vars/common.yml
    - env_vars/remote.yml
  roles:
    - common
    - db
    - memcached
    - web
with the following inventory:
[localhost]
127.0.0.1 ansible_python_interpreter=/usr/local/bin/python
The Provision ec2 servers play does what you'd expect. It creates an ec2 instance; it also creates a host group [webservers] and adds the created instance's IP to it.
The Configure {{ application_name }} servers play then configures that server, installing everything I need.
So far so good, this all does exactly what I want and everything seems to work.
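Inside the aws role that registration is typically done with add_host; a sketch, assuming the ec2 task's result was registered as ec2_result:
- name: Add the new instance(s) to the webservers group
  add_host:
    name: "{{ item.public_ip }}"
    groups: webservers
  with_items: "{{ ec2_result.instances }}"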
Here's where I'm stuck. I want to be able to fire up an ec2 instance for different roles. Ideally I'd create a dbserver, a webserver and maybe a memcached server. I'd like to be able to deploy any part(s) of this infrastructure in isolation, e.g. create and provision just the db servers.
The only ways I can think of to make this work... well, they don't work.
I tried simply declaring the host groups without hosts in the inventory:
[webservers]
[dbservers]
[memcachedservers]
but that's a syntax error.
I would be okay with explicitly provisioning each server and declaring the host group it is for, like so:
- name: Provision webservers
  hosts: webservers
  connection: local
  roles:
    - aws

- name: Provision dbservers
  hosts: dbservers
  connection: local
  roles:
    - aws

- name: Provision memcachedservers
  hosts: memcachedservers
  connection: local
  roles:
    - aws
but those groups don't exist until after the respective step is complete, so I don't think that will work either.
I've seen lots about dynamic inventories, but I haven't been able to understand how they would help me. I've also looked through countless examples of Ansible ec2 provisioning projects; they invariably either provision pre-existing ec2 instances, or create a single instance and install everything on it.
In the end I realised it made much more sense to just separate the different parts of the stack into separate playbooks, with a full-stack playbook that called each of them.
My remote hosts file stayed largely the same as above. An example of one of the playbooks for a specific part of the stack is:
---
- name: Provision ec2 apiservers
  hosts: apiservers    #important bit
  connection: local    #important bit
  vars:
    - host_group: apiservers
    - security_group: blah
  roles:
    - aws

- name: Configure {{ application_name }} apiservers
  hosts: apiservers:!127.0.0.1    #important bit
  sudo: yes
  sudo_user: root
  remote_user: ubuntu
  vars_files:
    - env_vars/common.yml
    - env_vars/remote.yml
  vars:
    - setup_git_repo: no
    - update_apt_cache: yes
  roles:
    - common
    - db
    - memcached
    - web
This means the first play of each layer adds the new host to the apiservers group, and the second play (Configure ... apiservers) can then exclude localhost without hitting a "no hosts matched" error.
The wrapping playbook is dead simple, just:
---
- name: Deploy all the {{ application_name }} things!
  hosts: all

- include: webservers.yml
- include: apiservers.yml
I'm very much a beginner with regard to Ansible, so please take this for what it is: some guy's attempt to find something that works. There may be better options, and this could violate best practice all over the place.
The ec2 module supports an exact_count parameter, not just count: it will create (or terminate!) instances until the number of instances matching the specified tags (count_tag, applied via instance_tags) equals exact_count.
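A sketch of how that looks (the module parameters are real; the key, AMI and tag values are placeholders):
- name: Ensure exactly two tagged webserver instances exist
  ec2:
    region: "{{ region }}"
    key_name: mykey
    instance_type: t2.micro
    image: ami-xxxxxxxx
    exact_count: 2
    count_tag:
      Role: webserver
    instance_tags:
      Role: webserver
    wait: yes
  register: ec2_result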
