Ansible fileglob: unable to find ... in expected paths - ansible

I am trying to use Ansible to delete all of the files within a directory while keeping the directory itself. To that end, I'm using the with_fileglob key on a task to get all of the files in that directory as item variables. I have created a minimal example that shows my issue here:
Vagrantfile:
Vagrant.configure("2") do |config|
config.vm.box = "centos/7"
config.vm.provision :ansible do |ansible|
ansible.limit = "all"
ansible.playbook = "local.yml"
end
end
local.yml:
- name: Test
  hosts: all
  become: true
  tasks:
    - name: Test debug
      debug:
        msg: "{{ item }}"
      with_fileglob:
        - "/vagrant/*"
I expect to get a debug message for each file in the /vagrant directory - since this is the directory synced with the VM via Vagrant, I should get a message for the Vagrantfile, and for local.yml. Instead, I get the following confusing warning:
PLAY [Test] ********************************************************************
TASK [Gathering Facts] *********************************************************
ok: [default]
TASK [Test debug] **************************************************************
[WARNING]: Unable to find '/vagrant' in expected paths (use -vvvvv to see
paths)
PLAY RECAP *********************************************************************
default : ok=1 changed=0 unreachable=0 failed=0
What expected paths are being referred to here? I have tried this with multiple fileglobs, and they all fail in this way. What am I missing?

Ansible lookup plugins are executed on the local machine (where you launch Ansible), not on the remote one(s).
https://docs.ansible.com/ansible/latest/plugins/lookup.html
Lookup plugins allow Ansible to access data from outside sources. This
can include reading the filesystem in addition to contacting external
datastores and services. Like all templating, these plugins are
evaluated on the Ansible control machine, not on the target/remote.
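Since the goal in the question is to delete every file inside a remote directory while keeping the directory itself, one way to do that entirely on the remote host is to combine the find and file modules instead of a lookup. This is only a sketch, with /vagrant taken from the question and everything else an assumption:
- name: Collect all files under /vagrant on the remote host
  find:
    paths: /vagrant
    file_type: file
  register: vagrant_files

- name: Delete the collected files, keeping the directory itself
  file:
    path: "{{ item.path }}"
    state: absent
  with_items: "{{ vagrant_files.files }}"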

Since fileglob only works on the local machine, I tested find as an alternative and want to provide an example here. If we want to locate all *.jar files on the remote host, this could be done like this:
- name: Find jar files
  find:
    paths: "{{ install_jar_path }}"
    patterns: "*.jar"
  register: jar_src_file
  changed_when: False
jar_src_file.files is now a list; every item has a path property.
- set_fact:
    first_jar: "{{ jar_src_file.files[0].path }}"
This will give you the path of the first jar file for testing purposes. You could also loop over the items using with_items if needed, as shown below.
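For example, looping over every match found by the task above could look like this:
- name: Show every jar found
  debug:
    msg: "{{ item.path }}"
  with_items: "{{ jar_src_file.files }}"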

Related

How to fix "Infoblox IPAM is misconfigured?"

I'm calling infoblox from ansible using the following playbook:
- hosts: localhost
gather_facts: false
tasks:
- name: Include infoblox_vault
include_vars:
file: 'infoblox_vault.yml'
- name: Install infoblox-client for DDI
pip:
name: infoblox-client
environment:
HTTP_PROXY: http://our_internal_proxy.net:8080
HTTPS_PROXY: http://our_internal_proxy.net:8080
delegate_to: localhost
- debug:
msg: can I decrypt username?--> "{{ vault_infoblox_username }}"
- name: Check if DNS Record exists
set_fact:
miqCreateVM_ddiRecord: "{{ lookup('nios', 'record:a', filter={'name': 'infoblox-devtest.net' }, provider={'host': 'ddi-qa.net', 'username': vault_infoblox_username, 'password': vault_infoblox_password }) }}"
- debug:
msg: check var miqCreateVM_ddiRecord "{{ miqCreateVM_ddiRecord }}"
- debug:
msg: test to see amazing vm_name! "{{ vm_name }}"
... code snipped
When the job runs, I get:
Vault password:
PLAY [localhost] ***************************************************************
TASK [Include infoblox_vault] **************************************************
ok: [127.0.0.1]
TASK [Install infoblox-client for DDI] *****************************************
ok: [127.0.0.1 -> localhost]
TASK [debug] *******************************************************************
ok: [127.0.0.1] => {
"msg": "can I decrypt username?--> \"manageiq-ddi\""
}
TASK [Check if DNS Record exists] **********************************************
fatal: [127.0.0.1]: FAILED! => {"msg": "An unhandled exception occurred while running the lookup plugin 'nios'. Error was a <type 'exceptions.Exception'>, original message: Infoblox IPAM is misconfigured: infoblox_username and infoblox_password are incorrect."}
PLAY RECAP *********************************************************************
127.0.0.1 : ok=3 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Here's the main part: "An unhandled exception occurred while running the lookup plugin 'nios'. Error was a <type 'exceptions.Exception'>, original message: Infoblox IPAM is misconfigured: infoblox_username and infoblox_password are incorrect."
This playbook used to work in the past. I haven't worked on it for a few months. Not sure why it's broken.
I confirmed that I can log into infoblox client manually using the credentials. I also tried manually logging the username to ensure it's decrypting the creds from the ansible-vault file. That worked fine. So it's not the credentials, not the vault decryption. It's something else.
I found the following three related topics online, but none of them seem to resolve the problem:
This one (which references adding certs to the request. Anyone know how to do this? I can't find instructions)
This one (which mentions problems from upgrading. I showed the versions mentioned in that post to our networking folks and they said the version numbers didn't correlate at all with what we have in our environment, so it's hard to evaluate whether that's relevant.)
Last one (which calls for using a property 'http_request_timeout' : None that doesn't strike me as being the problem as I can't get it to work at all.)
Any theories? Thanks!
This might not solve it for others, but this solved it for me:
I got a new password for Ansible to use to log into Infoblox.
I created a new ansible-vault file containing the new Infoblox password, and made a new password for the vault file encryption as well.
I created a new credential object in Ansible so it could read the new vault file.
I updated the playbook to use the new vault.
It works now. Something was wrong with the encryption.

Using Netbox Ansible Modules

I've been wanting to try out Ansible modules available for Netbox [1].
However, I find myself stuck right at the beginning.
Here's what I've tried:
Add prefix/VLAN to netbox [2]:
cat setup-vlans.yml
---
- hosts: netbox
  tasks:
    - name: Create prefix 192.168.10.0/24 in Netbox
      netbox_prefix:
        netbox_token: "{{ netbox_token }}"
        netbox_url: "{{ netbox_url }}"
        data:
          prefix: 192.168.10.0/24
        state: present
That gives me the following error:
ansible-playbook setup-vlans.yml
PLAY [netbox] *********************************************************************************************************************************
TASK [Gathering Facts] ************************************************************************************************************************
ok: [NETBOX]
TASK [Create prefix 192.168.10.0/24 in Netbox] ************************************************************************************************
fatal: [NETBOX]: FAILED! => {"changed": false, "msg": "Failed to establish connection to Netbox API"}
PLAY RECAP ************************************************************************************************************************************
NETBOX : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Can someone please point out where I am going wrong?
Note: The NetBox URL is an https://url setup with nginx and netbox-docker [3].
Thanks & Regards,
Sana
[1] https://github.com/netbox-community/ansible_modules
[2] https://docs.ansible.com/ansible/latest/modules/netbox_prefix_module.html
[3] https://github.com/netbox-community/netbox-docker
I had the same issue. Apparently the pynetbox API has changed its instantiation (ssl_verify is now replaced by requests session parameters).
I had to force ansible-galaxy to update to the latest netbox collection with:
ansible-galaxy collection install netbox.netbox -f
The force option did the trick for me.
Playbooks using API modules like netbox (and the same applies to gcp or aws) must use as hosts not the target device but the host that will execute the module and call the API. Most of the time this is localhost, but it can also be a dedicated node like a bastion.
You can see in the example on the documentation you linked that it uses hosts: localhost.
Hence I think your playbook should be
---
- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: Create prefix 192.168.10.0/24 in Netbox
      netbox_prefix:
        netbox_token: "{{ netbox_token }}"
        netbox_url: "{{ netbox_url }}"
        data:
          prefix: 192.168.10.0/24
        state: present
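Since the question mentions an https URL behind nginx and netbox-docker, a self-signed certificate can also produce "Failed to establish connection" errors. As a test only, and assuming the collection version in use exposes it, the validate_certs option can be disabled:
- name: Create prefix 192.168.10.0/24 in Netbox
  netbox_prefix:
    netbox_token: "{{ netbox_token }}"
    netbox_url: "{{ netbox_url }}"
    validate_certs: false   # testing only; re-enable once the certificate chain is trusted
    data:
      prefix: 192.168.10.0/24
    state: present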

ansible - consul kv listing recursive and compare the key values

I am getting an error while trying to retrieve key values from the Consul KV store.
Our key values are stored under the config/app-name/ folder, and there are many keys. I want to retrieve all of the key values from Consul using Ansible.
But I get the following error:
PLAY [Adding host to inventory] **********************************************************************************************************************************************************
TASK [Adding new host to inventory] ******************************************************************************************************************************************************
changed: [localhost]
PLAY [Testing consul kv] *****************************************************************************************************************************************************************
TASK [show the lookups] ******************************************************************************************************************************************************************
fatal: [server1]: FAILED! => {"failed": true, "msg": "{{lookup('consul_kv','config/app-name/')}}: An unhandled exception occurred while running the lookup plugin 'consul_kv'. Error was a <class 'ansible.errors.AnsibleError'>, original message: Error locating 'config/app-name/' in kv store. Error was 500 No known Consul servers"}
PLAY RECAP *******************************************************************************************************************************************************************************
server1 : ok=0 changed=0 unreachable=0 failed=1
localhost : ok=1 changed=1 unreachable=0 failed=0
Here is the code I am trying:
---
- name: Adding host to inventory
  hosts: localhost
  tasks:
    - name: Adding new host to inventory
      add_host:
        name: "{{ target }}"

- name: Testing consul kv
  hosts: "{{ target }}"
  vars:
    kv_info: "{{ lookup('consul_kv', 'config/app-name/') }}"
  become: yes
  tasks:
    - name: show the lookups
      debug: msg="{{ kv_info }}"
However, removing and adding folders work well; only reading the key values from the Consul cluster throws an error. Please suggest a better way here.
- name: remove folder from the store
  consul_kv:
    key: 'config/app-name/'
    value: 'removing'
    recurse: true
    state: absent

- name: add folder to the store
  consul_kv:
    key: 'config/app-name/'
    value: 'adding'
I tried this, but I still get the same error.
---
- name: Adding host to inventory
  hosts: localhost
  environment:
    ANSIBLE_CONSUL_URL: "http://consul-1.abcaa.com"
  tasks:
    - name: Adding new host to inventory
      add_host:
        name: "{{ target }}"
    - name: show the lookups
      debug: kv_info= "{{lookup('consul_kv','config/app-name/')}}"
All lookup plugins in Ansible are always evaluated on localhost, see docs:
Note:
Lookups occur on the local computer, not on the remote computer.
I guess you expect kv_info to be populated by executing a Consul fetch from the {{ target }} server.
But this lookup is actually executed on your Ansible control host (localhost), and if you have no ANSIBLE_CONSUL_URL set there, you get the "No known Consul servers" error.
When you use the consul_kv module (to create/delete folders), it is executed on the {{ target }} host, in contrast to the consul_kv lookup plugin.
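A minimal sketch of running the lookup on the control machine, assuming the consul_kv lookup in your Ansible version accepts host/port/recurse arguments (older versions instead read the ANSIBLE_CONSUL_URL environment variable, which must be exported in the shell that runs ansible-playbook; the play's environment keyword does not affect lookups). The Consul hostname is taken from the question:
- name: Testing consul kv on the control machine
  hosts: localhost
  tasks:
    - name: show all key/values under config/app-name/
      debug:
        msg: "{{ lookup('consul_kv', 'config/app-name/', recurse=True, host='consul-1.abcaa.com', port=8500) }}"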

Ansible etcd lookup plugin issue

I have etcd running on the Ansible control machine (local). I can get and put values directly, but Ansible won't fetch them; any thoughts?
I can also get the value using curl.
I have this simple playbook:
#!/usr/bin/env ansible-playbook
---
- name: simple ansible playbook ping
  hosts: all
  gather_facts: false
  tasks:
    - name: look up value in etcd
      debug: msg="{{ lookup('etcd', 'weather') }}"
Running this playbook does not fetch the values from etcd:
TASK: [look up value in etcd] *************************************************
ok: [app1.test.com] => {
"msg": ""
}
ok: [app2.test.com] => {
"msg": ""
}
Currently (as of 31.05.2016) the Ansible etcd lookup plugin only supports calls to the v1 API and is not compatible with newer etcd instances that publish the v2 API endpoint.
Here is the issue.
You can use my quickly patched etcd2.py lookup plugin.
Place it into a lookup_plugins subdirectory next to your playbook (or into Ansible's global lookup_plugins path).
Use lookup('etcd2', 'weather') in your playbook.
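Assuming the patched plugin is saved as lookup_plugins/etcd2.py next to the playbook, the task from the question only needs the lookup name changed:
- name: look up value in etcd (v2 API)
  debug: msg="{{ lookup('etcd2', 'weather') }}"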

Ansible dynamic hosts skips

I am trying to set up a very hacky VM deployment environment utilizing Ansible 1.9.4 for my home office.
I am pretty far along, but the last step just won't work. I have written a 10-LOC plugin which generates temporary DNS names for the staging VLAN, and I want to pass that value as a host to the next playbook role.
TASK: [localhost-eval | debug msg="{{ hostvars['127.0.0.1'].dns_record.stdout_lines }}"] ***
ok: [127.0.0.1] => {
"msg": "['vm103.local', 'vm-tmp103.local']"
}
It's accessible in the global playbook scope via the hostvars:
{{ hostvars['127.0.0.1'].dns_record.stdout_lines[1] }}
and should be passed on to:
- name: configure vm template
  hosts: "{{ hostvars['127.0.0.1'].dns_record.stdout_lines[1] }}"
  gather_facts: no
  roles:
    - template-vm-configure
which results in:
PLAY [configure vm template] **************************************************
skipping: no hosts matched
My inventory looks like this and seems to work; hardcoding 'vm-tmp103.local' gets the role running.
[vm]
vm[001:200].local
[vm-tmp]
vm-tmp[001:200].local
Thank you in advance; hopefully someone can point me in the right direction. Plan B would consist of passing the DNS records to a bash script to finalize the configuration, since I just want to configure the network interface before adding the VM to monitoring.
Edit:
Modified a play to use add_host and add the name to a temporary group:
- add_host: name={{ hostvars['127.0.0.1'].dns_record.stdout_lines[1] }} groups=just_vm
but it still doesn't match.
That's not intended behaviour I think.
#9733 led me to a solution.
Adding these tasks lets me use staging as a group.
tasks:
  - set_fact: hostname_next="{{ hostvars['127.0.0.1'].dns_record.stdout_lines[1] }}"
  - add_host: group=staging name='{{ hostname_next }}'
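A follow-up play can then target that group by name; a minimal sketch reusing the role from the question:
- name: configure vm template
  hosts: staging
  gather_facts: no
  roles:
    - template-vm-configure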
