Ansible: recursively listing Consul KV keys and comparing their values

I am getting an error while trying to retrieve key values from the Consul KV store.
Our key/value pairs are stored under the config/app-name/ folder, and there are many keys. I want to retrieve all of them from Consul using Ansible.
But I get the following error:
PLAY [Adding host to inventory] **********************************************************************************************************************************************************
TASK [Adding new host to inventory] ******************************************************************************************************************************************************
changed: [localhost]
PLAY [Testing consul kv] *****************************************************************************************************************************************************************
TASK [show the lookups] ******************************************************************************************************************************************************************
fatal: [server1]: FAILED! => {"failed": true, "msg": "{{lookup('consul_kv','config/app-name/')}}: An unhandled exception occurred while running the lookup plugin 'consul_kv'. Error was a <class 'ansible.errors.AnsibleError'>, original message: Error locating 'config/app-name/' in kv store. Error was 500 No known Consul servers"}
PLAY RECAP *******************************************************************************************************************************************************************************
server1 : ok=0 changed=0 unreachable=0 failed=1
localhost : ok=1 changed=1 unreachable=0 failed=0
Here is the code I am trying:
---
- name: Adding host to inventory
  hosts: localhost
  tasks:
    - name: Adding new host to inventory
      add_host:
        name: "{{ target }}"

- name: Testing consul kv
  hosts: "{{ target }}"
  vars:
    kv_info: "{{ lookup('consul_kv', 'config/app-name/') }}"
  become: yes
  tasks:
    - name: show the lookups
      debug: msg="{{ kv_info }}"
Removing and adding the folder work fine; only reading the key values from the Consul cluster throws the error. Please suggest a better approach here.
    - name: remove folder from the store
      consul_kv:
        key: 'config/app-name/'
        value: 'removing'
        recurse: true
        state: absent

    - name: add folder to the store
      consul_kv:
        key: 'config/app-name/'
        value: 'adding'
I tried this, but I still get the same error.
---
- name: Adding host to inventory
  hosts: localhost
  environment:
    ANSIBLE_CONSUL_URL: "http://consul-1.abcaa.com"
  tasks:
    - name: Adding new host to inventory
      add_host:
        name: "{{ target }}"

    - name: show the lookups
      debug: msg="{{ lookup('consul_kv','config/app-name/') }}"

All lookup plugins in Ansible are always evaluated on localhost, see docs:
Note:
Lookups occur on the local computer, not on the remote computer.
I guess you expect kv_info to be populated by fetching from Consul on the {{ target }} server.
But the lookup is actually executed on your Ansible control host (localhost), and if ANSIBLE_CONSUL_URL is not set there, you get the No known Consul servers error. The play-level environment: keyword from your second attempt does not help either, because it only affects tasks executed on hosts, not lookups evaluated on the controller.
When you use the consul_kv module (to create/delete folders), it is executed on the {{ target }} host, in contrast to the consul_kv lookup plugin.
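A minimal sketch of how to make the lookup work, assuming the Consul host name from your second attempt and Consul's default HTTP port 8500 (the playbook file name below is just a placeholder): export the URL in the shell on the control node before running ansible-playbook.
export ANSIBLE_CONSUL_URL=http://consul-1.abcaa.com:8500
ansible-playbook consul_kv_test.yml -e target=server1
Recent versions of the consul_kv lookup (in community.general) also accept the connection details and recursion as keyword arguments; check the documentation of the plugin version you have installed, but it looks roughly like this:
    kv_info: "{{ lookup('consul_kv', 'config/app-name/', recurse=true, url='http://consul-1.abcaa.com:8500') }}"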

Related

How to fix "Infoblox IPAM is misconfigured"?

I'm calling infoblox from ansible using the following playbook:
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Include infoblox_vault
      include_vars:
        file: 'infoblox_vault.yml'

    - name: Install infoblox-client for DDI
      pip:
        name: infoblox-client
      environment:
        HTTP_PROXY: http://our_internal_proxy.net:8080
        HTTPS_PROXY: http://our_internal_proxy.net:8080
      delegate_to: localhost

    - debug:
        msg: can I decrypt username?--> "{{ vault_infoblox_username }}"

    - name: Check if DNS Record exists
      set_fact:
        miqCreateVM_ddiRecord: "{{ lookup('nios', 'record:a', filter={'name': 'infoblox-devtest.net' }, provider={'host': 'ddi-qa.net', 'username': vault_infoblox_username, 'password': vault_infoblox_password }) }}"

    - debug:
        msg: check var miqCreateVM_ddiRecord "{{ miqCreateVM_ddiRecord }}"

    - debug:
        msg: test to see amazing vm_name! "{{ vm_name }}"
... code snipped
When the job runs, I get:
Vault password:
PLAY [localhost] ***************************************************************
TASK [Include infoblox_vault] **************************************************
ok: [127.0.0.1]
TASK [Install infoblox-client for DDI] *****************************************
ok: [127.0.0.1 -> localhost]
TASK [debug] *******************************************************************
ok: [127.0.0.1] => {
"msg": "can I decrypt username?--> \"manageiq-ddi\""
}
TASK [Check if DNS Record exists] **********************************************
fatal: [127.0.0.1]: FAILED! => {"msg": "An unhandled exception occurred while running the lookup plugin 'nios'. Error was a <type 'exceptions.Exception'>, original message: Infoblox IPAM is misconfigured: infoblox_username and infoblox_password are incorrect."}
PLAY RECAP *********************************************************************
127.0.0.1 : ok=3 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Here's the main part: "An unhandled exception occurred while running the lookup plugin 'nios'. Error was a <type 'exceptions.Exception'>, original message: Infoblox IPAM is misconfigured: infoblox_username and infoblox_password are incorrect."
This playbook used to work in the past; I haven't worked on it for a few months and I'm not sure why it's broken.
I confirmed that I can log into the Infoblox client manually using the credentials. I also tried logging the username to make sure the credentials are decrypted from the ansible-vault file, and that worked fine. So it's not the credentials and not the vault decryption; it's something else.
I found the following three related topics online, but none of them seem to resolve the problem:
One references adding certs to the request (does anyone know how to do this? I can't find instructions).
Another mentions problems caused by upgrading. I showed the versions mentioned in that post to our networking folks, and they said the version numbers didn't correlate at all with what we have in our environment, so it's hard to tell whether it's relevant.
The last one calls for setting a property 'http_request_timeout': None; that doesn't strike me as the problem, and I can't get it to work at all.
Any theories? Thanks!
This might not solve it for others, but it solved it for me:
I got a new password for Ansible to use to log into Infoblox.
I created a new ansible-vault file containing the new Infoblox password, and used a new password for the vault file encryption as well.
I created a new credential object in Ansible so it can read the new vault file.
I updated the playbook to use the new vault.
It works now; something was wrong with the encryption.
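For anyone else hitting this, a minimal sketch of re-creating the vault file with ansible-vault (the vault file name matches the include_vars in the playbook above; the playbook name and the values are placeholders):
ansible-vault create infoblox_vault.yml          # prompts for the new vault password
# contents of the file, for example:
#   vault_infoblox_username: <infoblox user>
#   vault_infoblox_password: <new infoblox password>
ansible-playbook infoblox_playbook.yml --ask-vault-pass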

Using Netbox Ansible Modules

I've been wanting to try out Ansible modules available for Netbox [1].
However, I find myself stuck right in the beginning.
Here's what I've tried:
Add prefix/VLAN to netbox [2]:
cat setup-vlans.yml
---
- hosts: netbox
  tasks:
    - name: Create prefix 192.168.10.0/24 in Netbox
      netbox_prefix:
        netbox_token: "{{ netbox_token }}"
        netbox_url: "{{ netbox_url }}"
        data:
          prefix: 192.168.10.0/24
        state: present
That gives me the following error:
ansible-playbook setup-vlans.yml
PLAY [netbox] *********************************************************************************************************************************
TASK [Gathering Facts] ************************************************************************************************************************
ok: [NETBOX]
TASK [Create prefix 192.168.10.0/24 in Netbox] ************************************************************************************************
fatal: [NETBOX]: FAILED! => {"changed": false, "msg": "Failed to establish connection to Netbox API"}
PLAY RECAP ************************************************************************************************************************************
NETBOX : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Can someone please point me where I am going wrong?
Note: The NetBox URL is an https://url setup with nginx and netbox-docker [3].
Thanks & Regards,
Sana
[1] https://github.com/netbox-community/ansible_modules
[2] https://docs.ansible.com/ansible/latest/modules/netbox_prefix_module.html
[3] https://github.com/netbox-community/netbox-docker
I had the same issue. Apparently the pynetbox API has changed its instantiation (ssl_verify has been replaced by requests session parameters).
I had to force ansible-galaxy to update to the latest netbox collection with:
ansible-galaxy collection install netbox.netbox -f
The force option did the trick for me.
All playbooks using API modules like the netbox ones (the same goes for gcp or aws modules) must target not the managed device but the host that actually executes the module and calls the API. Most of the time this is localhost, but it can also be a dedicated node such as a bastion.
You can see that the example in the documentation you linked uses hosts: localhost.
Hence I think your playbook should be:
---
- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: Create prefix 192.168.10.0/24 in Netbox
      netbox_prefix:
        netbox_token: "{{ netbox_token }}"
        netbox_url: "{{ netbox_url }}"
        data:
          prefix: 192.168.10.0/24
        state: present
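Then run it from the control node, passing the URL and token as extra vars if they are not defined elsewhere (the values below are placeholders):
ansible-playbook setup-vlans.yml -e netbox_url=https://netbox.example.com -e netbox_token=0123456789abcdef0123456789abcdef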

Ansible module bigip_pool_member for BIGIP always returning "Changed" status

I am trying to add pool members to a bigip pool using bigip_pool_member.
Tested on ansible version 2.5 and 2.6
Result: it ALWAYS returns changed, even when it is not making any changes.
Invocation command:
ansible-playbook -i test_inventory add_pool_members.yaml --extra-vars '{"hostgroup": "test-bigip"}'
I am wondering if anyone has insights into what could be going on.
The contents of the playbook are as follows:
---
- hosts: "{{ hostgroup }}"
  gather_facts: no
  tasks:
    - name: Add servers to connection pool
      bigip_pool_member:
        user: username
        password: password
        server: "{{ inventory_hostname }}"
        validate_certs: no
        state: present
        partition: test
        pool: testpool
        host: 14.34.45.X
        name: test-server
        port: 80
        description: test
      delegate_to: localhost
Run Result
PLAY [f5-test] *****************************************************************************
TASK [Add servers to connection pool ] *****************************************************
changed: [f5-test -> localhost]
PLAY RECAP *********************************************************************************
f5-test : ok=1 changed=1 unreachable=0 failed=0
This could be related to this known bug in the module.
When running playbook with bigip_pool_member module with state: present against live device, each run results in change being made when in reality there's no need for a change.
I'm neither an F5 nor a network expert, but from what I understand this happens if you set a monitor on your pool.
There is already a pull request with fixes related to the correct state of a down machine. Check whether it applies to your case; otherwise I would suggest adding a detailed comment on the bug.

Running tasks on dynamically collected EC2 instances

I'm trying to write a role with tasks that filters out several EC2 instances, adds them to the inventory and then stops a PHP service on them.
This is how far I've gotten, the add_host idea I'm copying from here: http://docs.catalystcloud.io/tutorials/ansible-create-x-servers-using-in-memory-inventory.html
My service task does not appear to be running on the target instances but instead on the host specified in the playbook which runs this role.
---
- name: Collect ec2 data
  connection: local
  ec2_remote_facts:
    region: "us-east-1"
    filters:
      "tag:Name": MY_TAG
  register: ec2_info

- name: "Add the ec2 hosts to a group"
  add_host:
    name: "{{ item.id }}"
    groups: foobar
    ansible_user: root
  with_items: "{{ ec2_info.instances }}"

- name: Stop the service
  hosts: foobar
  become: yes
  gather_facts: false
  service: name=yii-queue#1 state=stopped enabled=yes
UPDATE: when I try baptistemm's suggestion, I get this:
PLAY [MAIN_HOST_NAME] ***************************
TASK [ec2-manage : Collect ec2 data]
*******************************************
ok: [MAIN_HOST_NAME]
TASK [ec2-manage : Add the hosts to a new group]
*******************************************************
PLAY RECAP **********************************************************
MAIN_HOST_NAME : ok=1 changed=0 unreachable=0 failed=0
UPDATE #2 - yes, the ec2_remote_facts filter does return instances (using the real tag value, not the fake one I put in this post). Also, I have seen ec2_instance_facts, but I ran into some issues with it (boto3 is required, although I have a workaround for that; still, I'm trying to fix the current issue first).
If you want to run tasks on a group of targets, you need to define a new play after you have created the in-memory list of targets with add_host (as mentioned in your link). So you need something like this:
… # omit the code before
    - name: "Add the ec2 hosts to a group"
      add_host:
        name: "{{ item.id }}"
        groups: foobar
        ansible_user: root
      with_items: "{{ ec2_info.instances }}"

- hosts: foobar
  gather_facts: false
  become: yes
  tasks:
    - name: Stop the service
      service: name=yii-queue#1 state=stopped enabled=yes
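If the second play still reports no hosts, a quick sanity check (an illustrative task, not part of the original suggestion) is to print the in-memory group right after add_host:
    - name: Show the dynamically added hosts
      debug:
        var: groups['foobar']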

Does Ansible delegate_to work with handlers?

I wrote a playbook to modify the IP address of several remote systems. I wrote the playbook to change only a few systems at a time, so I wanted to use delegate_to to change the DNS record on the nameservers as each system was modified, instead of adding a separate play targeted at the nameservers that would change all the host IPs at once.
However, it seems the handler is being run on the primary playbook target, not my delegate_to target. Does anyone have recommendations for working around this?
Here's my playbook:
---
host: hosts-to-modify
serial: 1
tasks:
  - Modify IP for host-to-modify
    //snip//
  - name: Modify DNS entry
    delegate_to: dns-servers
    become: yes
    replace:
      args:
        backup: yes
        regexp: '^{{ inventory_hostname }}\s+IN\s+A\s+[\d\.]+$'
        replace: "{{ inventory_hostname }} IN A {{ new_ip }}"
        dest: /etc/bind/db.my.domain
    notify:
      - reload dns service
handlers:
  - name: reload dns service
    become: yes
    service:
      args:
        name: bind9
        state: reloaded
With an inventory file like the following:
[dns-servers]
ns01
ns02
[hosts-to-modify]
host1 new_ip=10.1.1.10
host2 new_ip=10.1.1.11
host3 new_ip=10.1.1.12
host4 new_ip=10.1.1.13
Output snippet, including error message:
TASK [Modify DNS entry] ********************************************************
Friday 02 September 2016 14:46:09 -0400 (0:00:00.282) 0:00:35.876 ******
changed: [host1 -> ns01]
changed: [host1 -> ns02]
RUNNING HANDLER [reload dns service] *******************************************
Friday 02 September 2016 14:47:00 -0400 (0:00:38.925) 0:01:27.385 ******
fatal: [host1]: FAILED! => {"changed": false, "failed": true, "msg": "no service or tool found for: bind9"}
First of all, your example playbook is invalid in several ways: the play syntax is flawed, and delegate_to can't be targeted at a group of hosts.
If you want to delegate to multiple servers, you should iterate over them.
And to answer your main question: yes, you can use delegate_to with handlers:
handlers:
  - name: reload dns service
    become: yes
    service:
      name: bind9
      state: reloaded
    delegate_to: "{{ item }}"
    with_items: "{{ groups['dns-servers'] }}"
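The "Modify DNS entry" task from the question needs the same treatment, because delegate_to: dns-servers points at a group name; a sketch using the same iteration pattern (assuming the dns-servers group from your inventory):
  - name: Modify DNS entry
    become: yes
    replace:
      backup: yes
      regexp: '^{{ inventory_hostname }}\s+IN\s+A\s+[\d\.]+$'
      replace: "{{ inventory_hostname }} IN A {{ new_ip }}"
      dest: /etc/bind/db.my.domain
    notify: reload dns service
    delegate_to: "{{ item }}"
    with_items: "{{ groups['dns-servers'] }}"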
