Ansible etcd lookup not working

I'm testing etcd as permanent storage for Ansible's dynamic variables.
Somehow I'm not able to get {{ lookup('etcd', '/key') }} to return the value for the key.
After checking etcd.py, the only thing popping out is the ANSIBLE_ETCD_URL variable, which has been exported: export ANSIBLE_ETCD_URL='http://localhost:2379'
play:
- name: lookup etcd
  debug: {msg: "{{ lookup('etcd', '/key') }}"}
etcd value:
$ etcdctl get key
value
What I'm getting in Ansible:
TASK [lookup etcd] *************************************************************************************************************************************************
task path: /home/michal/gits/softcat/platforms-ansible-plays/when_defined.yaml:37
ok: [127.0.0.1] => {
"msg": ""
}
setup:
etcd Version: 2.2.5
ansible 2.3.2.0
pyetcd (1.7.2)
Question:
How can I get this working, is there an extra python library needed for it to work?

Sorted.
I was missing one more variable, ANSIBLE_ETCD_VERSION; once set to v2 it started to work.
export ANSIBLE_ETCD_VERSION=v2
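Putting both pieces together, the environment setup for the stock etcd lookup ends up looking like this (the URL and the v2 flag come from the question and answer above; the v2 value matches etcd 2.x's API):

```shell
# Point the etcd lookup plugin at the local etcd instance
export ANSIBLE_ETCD_URL='http://localhost:2379'
# Pin the API version; without this the lookup silently returns ""
export ANSIBLE_ETCD_VERSION=v2
```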

Related

Access Ansible inventory variables using with_items and vmware_guest for vSphere

I'm trying to convert a playbook which deploys vSphere VMs. The current version
of the playbook gets individual IP addresses and source template information
from vars/main.yml in the role (I'm using the best practice directory layout.)
vms:
  - name: demo-server-0
    ip_address: 1.1.1.1
  - name: demo-server-1
    ip_address: 1.1.1.2
Template and other information is stored elsewhere in the vars.yml file, but
it makes more sense to use the standard inventory, so I created these entries
in the inventory file:
test_and_demo:
  hosts:
    demo-server-0:
      ip-address: 1.1.1.1
    demo-server-1:
      ip-address: 1.1.1.2
  vars:
    vc_template: xdr-template-1
The play is pretty much unchanged, apart from this key change:
FROM:
with_items: '{{ vms }}'
TO:
with_items: "{{ query('inventory_hostnames', 'test_and_demo') }}"
But this throws the following error:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ModuleNotFoundError: No module named 'pyVim'
failed: [localhost] (item=demo-deployer) => {"ansible_loop_var": "item", "changed": false, "item": "demo-deployer", "msg": "Failed to import the required Python library (PyVmomi) on lubuntu's Python /usr/bin/python3. Please read the module documentation and install it in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter"}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ModuleNotFoundError: No module named 'pyVim'
I don't believe this is a platform issue, as reverting back to with_items: '{{ vms }}'
works just fine (vars/main.yml still exists and duplicates the data).
I'm probably using query incorrectly but can anyone give me a hint what I'm
doing wrong?
Accessing sub-values
If I can get the VMs to deploy using with_items: + query I'd then need to
access the variables of the host and group for the various items I need to
specify, can someone advise me here, too.
Many thanks.
Although I was convinced the problem was in the way I was doing the query, it was actually in the root play, which I hadn't noticed. The become: yes was switching to the root user, which didn't have the module installed.
Once it was removed, the problem was resolved.
- hosts: localhost
  roles:
    - deploy_demo_servers
  become: yes
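On the "accessing sub-values" part of the question: once the loop iterates over inventory hostnames, per-host and group variables are reachable through hostvars. A sketch of the idea (the vmware_guest parameters here are abbreviated and illustrative, not a complete module invocation):

```yaml
- name: deploy VMs from inventory
  vmware_guest:
    name: "{{ item }}"
    template: "{{ vc_template }}"                  # group var from test_and_demo
    networks:
      - ip: "{{ hostvars[item]['ip-address'] }}"   # host var from the inventory
  with_items: "{{ query('inventory_hostnames', 'test_and_demo') }}"
```

Because `item` is just a hostname string, any host or group variable defined in the inventory can be pulled out the same way via `hostvars[item]`.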

Hashicorp Vault KV store version 2 inaccessible using hashi_vault Ansible plugin

I'm trying to access some Vault kv secrets in my Ansible playbook...
- hosts: localhost
connection: local
tasks:
- name: Retrieve secret/hello using native hashi_vault plugin
debug: msg="{{ lookup('hashi_vault', 'secret=secret/my-secret:foo token={{ vault_token }} url={{ vault_address }} ')}}"
I'm finding this works fine where secret/my-secret is stored using version one of the kv store engine. However, when using version two I see the following error...
fatal: [localhost]: FAILED! => {"msg": "An unhandled exception occurred while running the lookup plugin 'hashi_vault'. Error was a <class 'ansible.errors.AnsibleError'>, original message: The secret secret/my-secret doesn't seem to exist for hashi_vault lookup"}
Am I doing something wrong, or is version two of the kv engine just not supported by the hashi_vault lookup plugin?
Cheers!
Mark
There is a pull request open for this: https://github.com/ansible/ansible/pull/41132
I use a slight variation of your workaround:
password: "{{ lookup('hashi_vault', 'secret=secret/data/my_secret:data')['password']}}"
That way you only need to modify one line.
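Since the :data selector returns the whole data dictionary of a kv v2 secret, you can also read the secret once and pick out several keys from it. A sketch, assuming the secret holds both a password and a username field (the field names are illustrative):

```yaml
- name: read the kv v2 secret once into a fact
  set_fact:
    my_secret: "{{ lookup('hashi_vault', 'secret=secret/data/my_secret:data') }}"

# Individual fields are then plain dictionary lookups
- debug:
    msg: "user={{ my_secret['username'] }} pass={{ my_secret['password'] }}"
```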

Conditional role inclusion fails in Ansible

I want to run an Ansible role conditionally, i.e. only when some binary does NOT exist (which for me implies absence of some particular app installation).
Something like the pattern used here.
Using the following code in my playbook:
- hosts: my_host
  tasks:
    - name: check app existence
      command: /opt/my_app/somebinary
      register: myapp_exists
      ignore_errors: yes
  roles:
    - { role: myconditional_role, when: myapp_exists|failed }
    - another_role_to_be_included_either_way
Here is the output:
PLAY ***************************************************************************
TASK [setup] *******************************************************************
ok: [my_host]
TASK [ myconditional_role : create temporary installation directory] ***********************
fatal: [my_host]: FAILED! => {"failed": true, "msg": "ERROR! The conditional check 'myapp_exists|failed' failed. The error was: ERROR! |failed expects a dictionary"}
Why is the conditional check failing?
Using ansible 2.0.0.2 on Ubuntu 16.04.01
btw: "create temporary installation directory" is the name of the first main task of the conditionally included role.
Tasks are executed after roles, so myapp_exists is still undefined when the role conditionals are evaluated.
Use pre_tasks instead.
Also keep in mind that when does not make the role itself conditional; it just attaches that when statement to every task in the role.
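Putting both points together, the playbook from the question can be rewritten so the check runs before the roles are evaluated; a sketch that keeps the original 2.0-era |failed filter:

```yaml
- hosts: my_host
  pre_tasks:
    # pre_tasks run before roles, so the register is set in time
    - name: check app existence
      command: /opt/my_app/somebinary
      register: myapp_exists
      ignore_errors: yes
  roles:
    - { role: myconditional_role, when: myapp_exists|failed }
    - another_role_to_be_included_either_way
```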

Ansible etcd lookup plugin issue

I've got etcd running on the Ansible control machine (local). I can get and put values as shown below, but Ansible won't fetch them. Any thoughts?
I can also get the value using curl.
I've got this simple playbook:
#!/usr/bin/env ansible-playbook
---
- name: simple ansible playbook ping
  hosts: all
  gather_facts: false
  tasks:
    - name: look up value in etcd
      debug: msg="{{ lookup('etcd', 'weather') }}"
Running this playbook doesn't fetch values from etcd:
TASK: [look up value in etcd] *************************************************
ok: [app1.test.com] => {
"msg": ""
}
ok: [app2.test.com] => {
"msg": ""
}
Currently (31.05.2016) the Ansible etcd lookup plugin only supports calls to the v1 API and is not compatible with newer etcd instances that publish the v2 API endpoint.
Here is the issue.
You can use my quickly patched etcd2.py lookup plugin.
Place it into a lookup_plugins subdirectory next to your playbook (or into Ansible's global lookup_plugins path).
Use lookup('etcd2', 'weather') in your playbook.
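The resulting layout and task would look roughly like this (filenames are illustrative; only the lookup name changes from the original playbook):

```yaml
# Directory layout — the plugin sits next to the playbook:
#   playbook.yml
#   lookup_plugins/etcd2.py
- name: look up value via the patched plugin
  debug: msg="{{ lookup('etcd2', 'weather') }}"
```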

Ansible dynamic hosts skips

I am trying to set up a very hacky VM deployment environment for my home office using Ansible 1.9.4.
I am pretty far along, but the last step just won't work. I have written a 10-LOC plugin which generates temporary DNS names for the staging VLAN, and I want to pass that value as a host to the next playbook role.
TASK: [localhost-eval | debug msg="{{ hostvars['127.0.0.1'].dns_record.stdout_lines }}"] ***
ok: [127.0.0.1] => {
"msg": "['vm103.local', 'vm-tmp103.local']"
}
It's accessible in the global playbook scope via the hostvars:
{{ hostvars['127.0.0.1'].dns_record.stdout_lines[1] }}
and should be passed on to:
- name: configure vm template
  hosts: "{{ hostvars['127.0.0.1'].dns_record.stdout_lines[1] }}"
  gather_facts: no
  roles:
    - template-vm-configure
which results in:
PLAY [configure vm template] **************************************************
skipping: no hosts matched
My inventory looks like this and seems to work; hardcoding 'vm-tmp103.local'
gets the role running.
[vm]
vm[001:200].local
[vm-tmp]
vm-tmp[001:200].local
Thank you in advance; hopefully someone can point me in the right direction. Plan B would consist of passing the DNS records to a bash script for finalizing configuration, since I just want to configure the network interface before adding the VM to the monitoring.
Edit:
I modified the play to use add_host and add the host to a temp group:
- add_host: name={{ hostvars['127.0.0.1'].dns_record.stdout_lines[1] }} groups=just_vm
but it still doesn't match.
That's not the intended behaviour, I think.
#9733 led me to a solution.
Adding this task lets me use staging as a group.
tasks:
  - set_fact: hostname_next="{{ hostvars['127.0.0.1'].dns_record.stdout_lines[1] }}"
  - add_host: group=staging name='{{ hostname_next }}'
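Stitched into the original playbook, the two plays end up as follows (a sketch assembled from the snippets above):

```yaml
# Play 1: resolve the temporary DNS name and register it as an in-memory host
- hosts: localhost
  tasks:
    - set_fact: hostname_next="{{ hostvars['127.0.0.1'].dns_record.stdout_lines[1] }}"
    - add_host: group=staging name='{{ hostname_next }}'

# Play 2: target the dynamically added group instead of a templated hosts: value
- name: configure vm template
  hosts: staging
  gather_facts: no
  roles:
    - template-vm-configure
```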
