Ansible etcd lookup plugin issue - ansible

I have etcd running on the Ansible control machine (local). I can get and put values as shown below, but Ansible won't fetch them. Any thoughts?
I can also get the value using curl.
I have this simple playbook:
#!/usr/bin/env ansible-playbook
---
- name: simple ansible playbook ping
  hosts: all
  gather_facts: false
  tasks:
    - name: look up value in etcd
      debug: msg="{{ lookup('etcd', 'weather') }}"
Running this playbook doesn't fetch the values from etcd:
TASK: [look up value in etcd] *************************************************
ok: [app1.test.com] => {
    "msg": ""
}
ok: [app2.test.com] => {
    "msg": ""
}

Currently (31.05.2016) the Ansible etcd lookup plugin only supports calls to the v1 API and is not compatible with newer etcd instances that publish the v2 API endpoint.
Here is the issue.
You can use my quickly patched etcd2.py lookup plugin.
Place it into a lookup_plugins subdirectory next to your playbook (or into Ansible's global lookup_plugins path).
Use lookup('etcd2', 'weather') in your playbook.
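For example, a minimal usage sketch, assuming the patched plugin has been saved as lookup_plugins/etcd2.py next to the playbook:
- name: simple etcd2 lookup   # illustrative play, mirrors the playbook from the question
  hosts: all
  gather_facts: false
  tasks:
    - name: look up value in etcd via the v2 plugin
      debug: msg="{{ lookup('etcd2', 'weather') }}"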

Related

Execute multiple tasks one by one for a block in a role

I am creating an Ansible role to update hosts configured in a cluster. There are 3 'roles' defined in this PSA architecture.
host-1 # primary
host-2 # secondary
host-3 # secondary
host-4 # secondary
host-5 # arbiter
I want to configure Ansible to execute a block of tasks, one by one per host, if the host is a secondary, but in that same block execute a single task on the primary.
E.g.:
- name: remove host from cluster
  shell: command_to_remove
- name: update
  yum:
    state: latest
    name: application
- name: add host to cluster
  shell: command_to_add
- name: verify cluster
  shell: verify cluster
  when: primary
The use of serial would be perfect here, but that isn't supported for a block.
The Ansible output should be:
TASK [remove host from cluster]
changed: [host-2]
TASK [update]
changed: [host-2]
TASK [add host from cluster]
changed: [host-2]
TASK [verify cluster]
changed: [host-1]
# rinse and repeat for host-3
TASK [remove host from cluster]
changed: [host-3]
TASK [update]
changed: [host-3]
TASK [add host from cluster]
changed: [host-3]
TASK [verify cluster]
changed: [host-1]
# and so on, until host-4
While it might not be "correct" in some cases, you could set a hostvar that declares a sort of "profile" for each of the hosts, then do an include_tasks of a profile.yaml chosen by that variable, and put your tasks inside that profile.yaml.
I do something similar with a playbook I use to configure a machine based on a profile declared at Ansible invocation.
This playbook is invoked on the laptop being configured, so it uses localhost, but you could use it remotely with some tweaks.
hosts.yaml
all:
  hosts:
    localhost:
      ansible_connection: local
  vars:
    profile: all
playbook.yaml
- hosts: all
  tasks:
    ...
    - name: Run install tasks
      include_role:
        name: appinstalls
    ...
roles/appinstalls/tasks/main.yaml
- name: Include tasks for profile "all"
  include_tasks: profile_all.yml
  when: 'profile == "all"'
- name: Include tasks for profile "office"
  include_tasks: profile_office.yml
  when: 'profile == "office"'
roles/appinstalls/tasks/profile_all.yml
- name: Run tasks for profile "all"
  include_tasks: "{{ item }}.yml"
  with_items:
    - app1
    - app2
    - app3
    - app4
roles/appinstalls/tasks/profile_office.yml
- name: Run tasks for profile "office"
  include_tasks: "{{ item }}.yml"
  with_items:
    - app1
    - app2
    - app3
    - app4
    - office_app1
    - office_app2
    - office_app3
    - office_app4
When invoked using:
ansible-playbook playbook.yaml -i hosts.yaml -K
it will run using the default profile "all". If invoked using:
ansible-playbook playbook.yaml -i hosts.yaml -K -e profile=office
it will run using the "office" profile instead.
What I like about this method is that all the includes happen in one go, rather than as you go along:
TASK [appinstalls : Run tasks for profile "office"] ******************************************************************************************************************************************
included: /home/blenderfox/machine-setup/files/roles/appinstalls/tasks/app1.yml for localhost => (item=app1)
included: /home/blenderfox/machine-setup/files/roles/appinstalls/tasks/app2.yml for localhost => (item=app2)
included: /home/blenderfox/machine-setup/files/roles/appinstalls/tasks/app3.yml for localhost => (item=app3)
included: /home/blenderfox/machine-setup/files/roles/appinstalls/tasks/app4.yml for localhost => (item=app4)
included: /home/blenderfox/machine-setup/files/roles/appinstalls/tasks/office_app1.yml for localhost => (item=office_app1)
included: /home/blenderfox/machine-setup/files/roles/appinstalls/tasks/office_app2.yml for localhost => (item=office_app2)
included: /home/blenderfox/machine-setup/files/roles/appinstalls/tasks/office_app3.yml for localhost => (item=office_app3)
included: /home/blenderfox/machine-setup/files/roles/appinstalls/tasks/office_app4.yml for localhost => (item=office_app4)
So in your case, you could put your tasks in the equivalent of profile_all.yml or profile_office.yml.
This method has no fancy stuff, and it makes it easier to tack on future changes.
It fully allows each profile to evolve independently of the others: just change the list in each "Run tasks for profile" block.
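Applied to the cluster case from the question, a minimal sketch of that idea could look like the following; the role name clusterupdate, the file names, and the per-host profile hostvar are placeholders for illustration, not part of the original setup. Each host would declare profile=primary or profile=secondary as a hostvar, and the role's main.yml would branch on it:
roles/clusterupdate/tasks/main.yml
# hypothetical role layout; the names are placeholders
- name: Include tasks for secondary hosts
  include_tasks: profile_secondary.yml
  when: 'profile == "secondary"'
- name: Include tasks for the primary host
  include_tasks: profile_primary.yml
  when: 'profile == "primary"'
profile_secondary.yml would then hold the remove/update/add tasks from the question, and profile_primary.yml the verify task.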

Ansible setup module's filter is not working when used with a playbook

I am trying to filter the setup module for a specific fact, but I get no output when doing so with a playbook. It works well as an ad-hoc command!
example playbook:
---
- name: facts test
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - ansible.builtin.setup:
        filter:
          - "ansible_all_ipv4_addresses"
    - debug: var=ansible_all_ipv4_addresses
...
Output:
TASK [debug var=ansible_all_ipv4_addresses] ****************************************************************
ok: [localhost] => {
    "ansible_all_ipv4_addresses": "VARIABLE IS NOT DEFINED!"
}
Expected Output:
TASK [debug var=ansible_all_ipv4_addresses] ****************************************************************
ok: [localhost] => {
    "ansible_all_ipv4_addresses": [
        "192.168.1.1"
    ]
}
The ad-hoc command ansible localhost -m setup -a "filter=ansible_all_ipv4_addresses" produces the proper output.
Any idea what is wrong here?
Ansible Version I've tested:
$ ansible --version
ansible 2.9.6
Thanks,
That is almost certainly a problem with your Ansible version. The filter parameter has only accepted a list since ansible-core 2.11; you are using an older one (2.9.6).
You have two possibilities:
Change the filter to filter: "ansible_all_ipv4_addresses"
Update to a newer version of Ansible
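For the first possibility, a minimal sketch of the task from the question with the filter given as a plain string (this form works on Ansible 2.9):
- ansible.builtin.setup:
    filter: "ansible_all_ipv4_addresses"   # plain string instead of a list
- debug: var=ansible_all_ipv4_addresses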

Ansible etcd lookup not working

I'm testing etcd as permanent storage for Ansible's dynamic variables.
Somehow I'm not able to get {{lookup('etcd', '/key')}} to return the value for the key.
After checking etcd.py, the only thing that stands out is the ANSIBLE_ETCD_URL variable, which has been exported: export ANSIBLE_ETCD_URL='http://localhost:2379'
play:
- name: lookup etcd
  debug: {msg: "{{lookup('etcd', '/key')}}"}
etcd value:
$ etcdctl get key
value
What I'm getting in Ansible:
TASK [lookup etcd] *************************************************************************************************************************************************
task path: /home/michal/gits/softcat/platforms-ansible-plays/when_defined.yaml:37
ok: [127.0.0.1] => {
    "msg": ""
}
setup:
etcd Version: 2.2.5
ansible 2.3.2.0
pyetcd (1.7.2)
Question:
How can I get this working? Is there an extra Python library needed for it to work?
Sorted.
I was missing one more variable, ANSIBLE_ETCD_VERSION; once it was set to v2 it started to work:
export ANSIBLE_ETCD_VERSION=v2
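For reference, a minimal sketch of the whole setup; it assumes both environment variables are exported on the control machine before ansible-playbook is invoked, and the localhost play wrapper is only illustrative:
# on the control machine, before running the play:
#   export ANSIBLE_ETCD_URL='http://localhost:2379'
#   export ANSIBLE_ETCD_VERSION=v2
- name: lookup etcd with the v2 API configured
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: read /key via the etcd lookup
      debug: msg="{{ lookup('etcd', '/key') }}"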

Ansible dynamic hosts skips

I am trying to set up a very hacky VM deployment environment using Ansible 1.9.4 for my home office.
I am pretty far along, but the last step just won't work. I have written a 10-LOC plugin which generates temporary DNS names for the staging VLAN, and I want to pass that value as a host to the next playbook role.
TASK: [localhost-eval | debug msg="{{ hostvars['127.0.0.1'].dns_record.stdout_lines }}"] ***
ok: [127.0.0.1] => {
    "msg": "['vm103.local', 'vm-tmp103.local']"
}
It's accessible in the global playbook scope via the hostvars:
{{ hostvars['127.0.0.1'].dns_record.stdout_lines[1] }}
and should be passed on to:
- name: configure vm template
  hosts: "{{ hostvars['127.0.0.1'].dns_record.stdout_lines[1] }}"
  gather_facts: no
  roles:
    - template-vm-configure
which results in:
PLAY [configure vm template] **************************************************
skipping: no hosts matched
My inventory looks like this and seems to work; hardcoding 'vm-tmp103.local' gets the role running.
[vm]
vm[001:200].local
[vm-tmp]
vm-tmp[001:200].local
Thank you in advance; hopefully someone can point me in the right direction. Plan B would consist of passing the DNS records to a bash script for finalizing configuration, since I just want to configure the network interface before adding the VM to the monitoring.
Edit:
I modified the play to use add_host and add the host to a temp group:
- add_host: name={{ hostvars['127.0.0.1'].dns_record.stdout_lines[1] }} groups=just_vm
but it still doesn't match.
That's not intended behaviour I think.
#9733 led me to a solution.
Adding this task lets me use staging as a group:
tasks:
  - set_fact: hostname_next="{{ hostvars['127.0.0.1'].dns_record.stdout_lines[1] }}"
  - add_host: group=staging name='{{ hostname_next }}'
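For context, a minimal sketch of how the two plays could fit together; it assumes dns_record was registered on 127.0.0.1 earlier in the first play, as in the question:
- hosts: 127.0.0.1
  gather_facts: no
  tasks:
    # dns_record is assumed to have been registered by an earlier task
    - set_fact: hostname_next="{{ hostvars['127.0.0.1'].dns_record.stdout_lines[1] }}"
    - add_host: group=staging name='{{ hostname_next }}'

- name: configure vm template
  hosts: staging
  gather_facts: no
  roles:
    - template-vm-configure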

How to list all currently targeted hosts in an Ansible play

I am running an Ansible play and would like to list all the hosts targeted by it. The Ansible docs mention that this is possible, but their method doesn't seem to work with a complex target group (targeting like hosts: web_servers:&data_center_primary).
I'm sure this is doable, but I can't seem to find any further documentation on it. Is there a variable with all the currently targeted hosts?
You are looking for the play_hosts variable (on newer Ansible releases its non-deprecated equivalent is ansible_play_hosts):
---
- hosts: all
  tasks:
    - name: Create a group of all hosts by app_type
      group_by: key={{app_type}}
    - debug: msg="groups={{groups}}"
      run_once: true

- hosts: web:&some_other_group
  tasks:
    - debug: msg="play_hosts={{play_hosts}}"
      run_once: true
would result in
TASK: [Create a group of all hosts by app_type] *******************************
changed: [web1] => {"changed": true, "groups": {"web": ["web1", "web2"], "load_balancer": ["web3"]}}
TASK: [debug msg="play_hosts={{play_hosts}}"] *********************************
ok: [web1] => {
    "msg": "play_hosts=['web1']"
}
inventory:
[proxy]
web1 app_type=web
web2 app_type=web
web3 app_type=load_balancer
[some_other_group]
web1
web3
You can use the --list-hosts option to only list the hosts a playbook would affect.
Also, there is the hostvars dict, which holds all hosts currently known to Ansible. But I think the setup module has to be run on all hosts for that, so you cannot skip that step via gather_facts: no.
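Tying this back to the question, a minimal sketch using its own intersection pattern and the newer ansible_play_hosts name mentioned above (the group names are the ones from the question):
- hosts: web_servers:&data_center_primary
  gather_facts: no
  tasks:
    - name: show all hosts targeted by this play
      debug: msg="play_hosts={{ ansible_play_hosts }}"
      run_once: true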
