Ansible dynamic hosts skips

I am trying to set up a very hacky VM deployment environment with Ansible 1.9.4 for my home office.
I am pretty far along, but the last step just won't work. I have written a 10-LOC plugin which generates temporary DNS names for the staging VLAN, and I want to pass that value as a host to the next playbook role.
TASK: [localhost-eval | debug msg="{{ hostvars['127.0.0.1'].dns_record.stdout_lines }}"] ***
ok: [127.0.0.1] => {
    "msg": "['vm103.local', 'vm-tmp103.local']"
}
It's accessible in the global playbook scope via the hostvars:
{{ hostvars['127.0.0.1'].dns_record.stdout_lines[1] }}
and should be passed on to:
- name: configure vm template
  hosts: "{{ hostvars['127.0.0.1'].dns_record.stdout_lines[1] }}"
  gather_facts: no
  roles:
    - template-vm-configure
which results in:
PLAY [configure vm template] **************************************************
skipping: no hosts matched
My inventory looks like this and seems to work; hardcoding 'vm-tmp103.local' gets the role running.
[vm]
vm[001:200].local
[vm-tmp]
vm-tmp[001:200].local
Thank you in advance; hopefully someone can point me in the right direction. Plan B would consist of passing the DNS records to a bash script for finalizing the configuration, since I just want to configure the network interface before adding the VM to the monitoring.
Edit:
Modified a play to use add_host and add the host to a temp group:
- add_host: name={{ hostvars['127.0.0.1'].dns_record.stdout_lines[1] }} groups=just_vm
but it still doesn't match.

That's not intended behaviour, I think.
#9733 led me to a solution.
Adding this task lets me use staging as a group:
tasks:
  - set_fact: hostname_next="{{ hostvars['127.0.0.1'].dns_record.stdout_lines[1] }}"
  - add_host: group=staging name='{{ hostname_next }}'
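Put together, the working pattern looks something like this - a minimal sketch assuming the same hostvars lookup as above (the localhost play stands in for whatever play registers dns_record):

- hosts: localhost
  gather_facts: no
  tasks:
    - set_fact: hostname_next="{{ hostvars['127.0.0.1'].dns_record.stdout_lines[1] }}"
    - add_host: group=staging name='{{ hostname_next }}'

- name: configure vm template
  hosts: staging
  gather_facts: no
  roles:
    - template-vm-configure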

Related

Ansible delegate_to wrong host?

I use Ansible on the machine HostAnsible to configure firewalls named FW1, FW2, and so on.
I want to supervise those firewalls, so I have a task like this:
- name: Configure supervision
  mymodule:
    ...
  delegate_to: "{{ supervision_host }}"
The variable supervision_host is set at the top level of the inventory to HostSupervision.
I check it with a debug:
- debug:
    var: supervision_host
It is correct on all firewalls.
The task shows this:
FW1 -> HostSupervision(HostAnsible): BlahBlah
And the SSH connection is made to HostAnsible and not to HostSupervision.
What is happening?
I had defined a variable ansible_host: HostAnsible in my inventory; this was messing up the delegation.
Problem solved.
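For illustration, an inventory entry of this shape would reproduce the symptom (a hypothetical sketch, not the asker's actual inventory) - when a task is delegated, Ansible resolves the connection address from the delegated host's ansible_host, so the override wins over the host name:

all:
  hosts:
    HostSupervision:
      ansible_host: HostAnsible  # hypothetical override: delegation connects here instead
    FW1:
    FW2: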

Ansible role dependencies and facts with delegate_to

The scenario: I have several services running on several hosts. There is one special service, the reverse proxy/load balancer. Every service needs to configure that special service on the host that runs it. During installation/update/deletion of any service with an Ansible role, I need to call the reverse proxy role on that specific host to configure the corresponding vhost.
At the moment I call a specific task file in the reverse proxy role to add or remove a vhost, using include_role and setting some vars (a very simple example without service- and inventory-specific vars):
- name: "Configure ReverseProxy"
  include_role:
    name: reverseproxy
    tasks_from: vhost_add
    apply:
      delegate_to: "{{ groups['reverseproxy'][0] }}"
  vars:
    reverse_proxy_url: "http://{{ ansible_fqdn }}:{{ service_port }}/"
    reverse_proxy_domain: "sub.domain.tld"
I have three problems.
1. I know it's not a good idea to build such dependencies between roles and different hosts, but I don't know a better way, especially if you consider the situation where you need to do some extra work after creating the vhost (e.g. configure the service via a REST API, which needs the external FQDN). With two separate playbooks for the "backend" service and the "reverseproxy" service, I would need a third playbook for configuring "hanging" services. I'm also not sure whether I can retrieve the correct backend URL in the reverse proxy role (just think about the HTTP scheme or paths). That doesn't sound easy, does it?
2. Earlier I had separate roles for adding/removing vhosts to a reverse proxy. Those roles had no dependencies, but I needed to duplicate several defaults, templates, vars, etc., which isn't nice either. So I merged them into a single role. Of course, in my opinion, this isn't really what a "role" should be. A role is something like "webserver" or "reverseproxy" (a state), not something like "add_vhost_to_reverseproxy" (a verb). That would be something like a playbook - but is calling a parameterized playbook via a role a good idea, or even possible? The main problem is that the state of the reverse proxy is the sum of all services in the inventory.
3. With that single included role, including it also starts all its dependent roles (custom configuration, firewall, etc.). On top of that, I found out that the delegation did not use the facts of the delegated host.
I tested that with the following example - the inventory:
all:
  hosts:
    server1:
      my_var: a
    server2:
      my_var: b
  children:
    service:
      hosts:
        server1:
    reverseproxy:
      hosts:
        server2:
And a playbook which assigns role-a to the webserver group. role-a has a task like:
- block:
    - setup:
    - name: "Include role b on delegated {{ groups['reverseproxy'][0] }}"
      include_role:
        name: role-b
      delegate_to: "{{ groups['reverseproxy'][0] }}"
      delegate_facts: true # or false or omit - it has no effect on Ansible 2.9 and 2.10
And in role-b, simply outputting my_var from the inventory gives:
TASK [role-b : My_Var on server1] *******************
ok: [server1 -> <ip-of-server2>] =>
    my_var: a
This tells me that role-b, which should run on server2, has the facts of server1. So configuring the "reverseproxy" service is done in the context of the "backend" service, which brings several other issues - think about firewall dependencies etc. I can avoid that by using tags, but then I need to run the playbook not just with the tag of the service but with all tags I want to configure, and I cannot use include_tasks with apply: tags anymore inside a role that also includes other roles (the tags get applied to all subtasks...). I miss something like include_role limited to specific tags, or ignoring dependencies. This isn't a bug, but it has possible side effects in combination with delegate_to.
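One hedged way around the fact-context issue in this example is to read the delegated host's variables explicitly via hostvars instead of relying on delegation to switch context - a minimal sketch, not part of the original question:

# inside role-b: read my_var from the delegated host explicitly
- name: "My_Var of the delegated host"
  debug:
    msg: "{{ hostvars[groups['reverseproxy'][0]]['my_var'] }}"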
I'm not really sure what the question is. Perhaps: what is a good way to handle dependencies between hosts and roles in Ansible, especially when they are not on the same host?
I am sure I do not fully understand your exact problem, but when I was dealing with load balancers I used a template. This was my disable_workers playbook:
---
- hosts: "{{ ip_list | default( 'jboss' ) }}"
  tasks:
    - name: Tag JBoss service as 'disabled'
      ec2_tag:
        resource: "{{ ec2_id }}"
        region: "{{ region }}"
        state: present
        tags:
          State: 'disabled'
      delegate_to: localhost
    - action: setup

- hosts: httpd
  become: yes
  become_user: root
  vars:
    uriworkermap_file: "{{ httpd_conf_dir }}/uriworkermap.properties"
  tasks:
    - name: Refresh inventory cache
      ec2_remote_facts:
        region: "{{ region }}"
      delegate_to: localhost
    - name: Update uriworkermap.properties
      template:
        backup: yes
        dest: "{{ uriworkermap_file }}"
        mode: 0644
        src: ./J2/uriworkermap.properties.j2
Do not expect this to work as-is. It was v1.8 on AWS hosts, and things may have changed.
But the point is to set user-defined facts on each host for that host's desired state (enabled, disabled, stopped), reload the facts, and then run the Jinja template that uses those facts.
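The template itself isn't shown; as a rough sketch of the idea (the fact name ec2_tag_State and the worker list are assumptions, not the original template), it builds the active worker list from each host's state facts:

{# hypothetical fragment: keep only workers whose State tag isn't 'disabled' #}
worker.balancer.balance_workers={% for host in groups['jboss'] if hostvars[host].ec2_tag_State | default('enabled') != 'disabled' %}{{ host }}{% if not loop.last %},{% endif %}{% endfor %}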

Ansible: share variables between hosts during run time

I am using Ansible 2.5.4 and I need to share variables between hosts.
I tried many examples that I saw online (sharing with set_fact or using a dummy host) and none of them work.
Maybe I am doing something different; this is my playbook:
---
- hosts: master[0]
  tasks:
    - name: generate kubernetes BootstrapToken
      command: kubeadm token generate
      register: generate_token_result
    - set_fact: token="{{generate_token_result}}"

- hosts: new # requires creating new group in inventory.cfg named new
  tasks:
    - name: include docker-host role
      include_role:
        name: docker-host
      when: not skip_nodes_setup
    - name: include kubernetes-host role
      include_role:
        name: kubernetes-host
      when: not skip_nodes_setup
    - name: include kubernetes-operator role
      include_role:
        name: kubernetes-operator
      when: not skip_nodes_setup
    - name: join node to kubernetes cluster
      command: "kubeadm join --token {{ hostvars['master[0]']['token']['stdout'] }} --discovery-token-unsafe-skip-ca-verification {{ hostvars['kubernetes_machines']['kube_apiserver'] }}"
I am getting the following error:
The task includes an option with an undefined variable. The error was: "hostvars['master[0]']" is undefined
The first task is able to run on master[0], but the second task does not recognize that host.
Please help, thanks.
Adding the inventory.cfg:
[kubernetes_machines:vars]
kube_apiserver=10.82.72.54:6443
[kubernetes_machines:children]
masters
nodes
new
[masters]
srv12
[nodes]
srv13
[new]
prd4
If you ask for "hostvars['master[0]']", you've got the entire master[0] inside quotes, so you're referring to a host with the literal name master[0]. If you mean the first member of the masters host group, you need a variable reference, not a string, and you'll need to use the groups variable (and remember your host group is named masters, not master):
hostvars[groups.masters.0]
You can find relevant documentation here.
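Applied to the join task from the question, that would look something like this (a sketch; note that hostvars['kubernetes_machines'] has the same problem, since kubernetes_machines is a group, not a host - as a group var, kube_apiserver is available directly on the targeted hosts):

- name: join node to kubernetes cluster
  command: "kubeadm join --token {{ hostvars[groups['masters'][0]]['token']['stdout'] }} --discovery-token-unsafe-skip-ca-verification {{ kube_apiserver }}"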
Quoting from Playbook Basics
The hosts line is a list of one or more groups or host patterns
The pattern master[0] doesn't match the hostname master[0]. If the hostname is master0, then the hostvars reference should be
hostvars['master0']
It's not clear why hosts: master[0] appears to work; it shouldn't according to the documentation. hosts: master.0, which should be equivalent, doesn't work.
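For what it's worth, Ansible patterns do support numeric subscripts on group names, so with the inventory above the likely fix on the hosts: line is simply the group name (a sketch):

- hosts: masters[0]  # first host of the 'masters' group; 'master[0]' matches nothing because the group is 'masters'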

Ansible etcd lookup plugin issue

I've got etcd running on the Ansible control machine (local). I can get and put values as shown below, but Ansible won't fetch them - any thoughts?
I can also get the value using curl.
I've got this simple playbook:
#!/usr/bin/env ansible-playbook
---
- name: simple ansible playbook ping
  hosts: all
  gather_facts: false
  tasks:
    - name: look up value in etcd
      debug: msg="{{ lookup('etcd', 'weather') }}"
And running this playbook doesn't fetch values from etcd:
TASK: [look up value in etcd] *************************************************
ok: [app1.test.com] => {
    "msg": ""
}
ok: [app2.test.com] => {
    "msg": ""
}
Currently (31.05.2016) the Ansible etcd lookup plugin only supports calls to the v1 API and is not compatible with newer etcd instances that publish the v2 API endpoint.
Here is the issue.
You can use my quickly patched etcd2.py lookup plugin.
Place it into a lookup_plugins subdirectory next to your playbook (or into Ansible's global lookup_plugins path).
Use lookup('etcd2', 'weather') in your playbook.
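In the playbook from the question, only the plugin name changes:

- name: look up value in etcd (v2 API)
  debug: msg="{{ lookup('etcd2', 'weather') }}"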

How to list all currently targeted hosts in an Ansible play

I am running an Ansible play and would like to list all the hosts targeted by it. The Ansible docs mention that this is possible, but their method doesn't seem to work with a complex targeting pattern (e.g. hosts: web_servers:&data_center_primary).
I'm sure this is doable, but I can't seem to find any further documentation on it. Is there a var with all the currently targeted hosts?
You are looking for the play_hosts variable:
---
- hosts: all
  tasks:
    - name: Create a group of all hosts by app_type
      group_by: key={{app_type}}
    - debug: msg="groups={{groups}}"
      run_once: true

- hosts: web:&some_other_group
  tasks:
    - debug: msg="play_hosts={{play_hosts}}"
      run_once: true
would result in
TASK: [Create a group of all hosts by app_type] *******************************
changed: [web1] => {"changed": true, "groups": {"web": ["web1", "web2"], "load_balancer": ["web3"]}}

TASK: [debug msg="play_hosts={{play_hosts}}"] *********************************
ok: [web1] => {
    "msg": "play_hosts=['web1']"
}
inventory:
[proxy]
web1 app_type=web
web2 app_type=web
web3 app_type=load_balancer
[some_other_group]
web1
web3
You can use the option --list-hosts to only list the hosts a playbook would affect.
Also, there is the dict hostvars, which holds all hosts currently known to Ansible. But I think the setup module has to run on all hosts for that, so you cannot skip that step via gather_facts: no.
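For the asker's exact pattern, a minimal sketch (group names taken from the question; play_hosts itself is populated regardless of fact gathering):

- hosts: web_servers:&data_center_primary
  gather_facts: no
  tasks:
    - debug: msg="targeted hosts = {{ play_hosts }}"
      run_once: true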
