Accessing Ansible host variables in playbook - error - ansible

I have a simple ansible playbook with the following data
inventory
----------
[admin]
127.0.0.1
[admin:vars]
service_ip=12.1.2.1
[targets]
admin ansible_connection=local
main.yaml
----------
---
- hosts: admin
  roles:
    - admin
  tags: admin
roles/admin/tasks/main.yaml
---------------------------
- debug: msg="{{ service_ip }}"
When I run the playbook with the command ansible-playbook -i inventory main.yaml, I get the following error:
PLAY [admin] ******************************************************************
GATHERING FACTS ***************************************************************
ok: [admin]
TASK: [admin | debug msg="{{ service_ip }}"] **********************************
fatal: [admin] => One or more undefined variables: 'service_ip' is undefined
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit #/Users/karthi/main.yaml.retry
admin : ok=1 changed=0 unreachable=1 failed=0
Any help appreciated. Thanks.

It looks like the issue is the inclusion of
[targets]
admin ansible_connection=local
Because the host alias admin in [targets] is not a member of the admin group, the play ends up running against a bare host named admin that never receives the [admin:vars] group variables, so service_ip is undefined. As the Ansible documentation notes, you can select the connection type and user on a per-host basis. So, changing your inventory file to have
[targets]
localhost ansible_connection=local
should fix the issue.
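Equivalently, a minimal inventory along those lines (a sketch, keeping the same group variable) puts the connection setting directly on the host inside the admin group, so the group var applies to it:

```ini
; the host sits in the admin group, so it picks up [admin:vars]
[admin]
localhost ansible_connection=local

[admin:vars]
service_ip=12.1.2.1
```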

Related

Ansible, module_defaults and package module

So I'm trying to write roles and plays for our environment that are as OS agnostic as possible.
We use RHEL, Debian, Gentoo, and FreeBSD.
Gentoo needs getbinpkg, which I can make the default for all calls to community.general.portage with module_defaults.
But I can, and do, install some packages on the other 3 systems by setting variables and just calling package.
Both of the following plays work for Gentoo, but both fail on Debian and the others: the first because of the unknown option getbinpkg, and the second because it explicitly uses portage on systems that don't have portage.
- hosts: localhost
  module_defaults:
    package:
      getbinpkg: yes
  tasks:
    - package:
        name: "{{ packages }}"
- hosts: localhost
  module_defaults:
    community.general.portage:
      getbinpkg: yes
  tasks:
    - community.general.portage:
        name: "{{ packages }}"
Is there any way to pass module_defaults "through" the package module to the portage module?
I did try when on module_defaults to restrict setting the defaults; interestingly, it was completely ignored.
- hosts: localhost
  module_defaults:
    package:
      getbinpkg: yes
      when: ansible_facts['distribution'] == 'Gentoo'
  tasks:
    - package:
        name: "{{ packages }}"
I really don't want to mess with EMERGE_DEFAULT_OPTS.
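The straightforward fallback (a sketch, assuming packages is defined everywhere) is to condition two tasks on the distribution, but that is exactly the duplication I was hoping module_defaults would avoid:

```yaml
- hosts: localhost
  tasks:
    # Gentoo path: call portage directly so getbinpkg is a valid option
    - community.general.portage:
        name: "{{ packages }}"
        getbinpkg: yes
      when: ansible_facts['distribution'] == 'Gentoo'

    # everything else: the generic package module, no portage-only options
    - package:
        name: "{{ packages }}"
      when: ansible_facts['distribution'] != 'Gentoo'
```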
Q: "How to load distribution-specific module defaults?"
Update
Unfortunately, the solution described below doesn't work anymore. For some reason, the keys of the dictionary module_defaults must be static. The play below fails with the error
ERROR! The field 'module_defaults' is supposed to be a dictionary or list of dictionaries, the keys of which must be static action, module, or group names. Only the values may contain templates. For example: {'ping': "{{ ping_defaults }}"}
See details, workaround, and solution.
A: The variable ansible_distribution, and other setup facts, are not available when a playbook starts. When fact gathering is not disabled, you can see it run as the first task of a play, "Gathering Facts". The solution is to collect the facts and declare a dictionary with module defaults in the first play, then use them in the rest of the playbook. For example
shell> cat pb.yml
- hosts: test_01
  gather_subset: min
  tasks:
    - debug:
        var: ansible_distribution
    - set_fact:
        my_module_defaults:
          Ubuntu:
            debug:
              msg: "Ubuntu"
          FreeBSD:
            debug:
              msg: "FreeBSD"

- hosts: test_01
  gather_facts: false
  module_defaults: "{{ my_module_defaults[ansible_distribution] }}"
  tasks:
    - debug:
gives
shell> ansible-playbook pb.yml
PLAY [test_01] ****
TASK [Gathering Facts] ****
ok: [test_01]
TASK [debug] ****
ok: [test_01] =>
ansible_distribution: FreeBSD
TASK [set_fact] ****
ok: [test_01]
PLAY [test_01] ****
TASK [debug] ****
ok: [test_01] =>
msg: FreeBSD
PLAY RECAP ****
test_01: ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
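Given the error message quoted in the update ("Only the values may contain templates"), a hedged sketch of a variant that keeps the module keys static and templates only the values (assuming the same my_module_defaults fact from the first play) would be:

```yaml
- hosts: test_01
  gather_facts: false
  module_defaults:
    # static module key; only the value is a template, as the error message allows
    debug: "{{ my_module_defaults[ansible_distribution]['debug'] }}"
  tasks:
    - debug:
```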

How to fix "Infoblox IPAM is misconfigured?"

I'm calling infoblox from ansible using the following playbook:
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Include infoblox_vault
      include_vars:
        file: 'infoblox_vault.yml'
    - name: Install infoblox-client for DDI
      pip:
        name: infoblox-client
      environment:
        HTTP_PROXY: http://our_internal_proxy.net:8080
        HTTPS_PROXY: http://our_internal_proxy.net:8080
      delegate_to: localhost
    - debug:
        msg: can I decrypt username?--> "{{ vault_infoblox_username }}"
    - name: Check if DNS Record exists
      set_fact:
        miqCreateVM_ddiRecord: "{{ lookup('nios', 'record:a', filter={'name': 'infoblox-devtest.net' }, provider={'host': 'ddi-qa.net', 'username': vault_infoblox_username, 'password': vault_infoblox_password }) }}"
    - debug:
        msg: check var miqCreateVM_ddiRecord "{{ miqCreateVM_ddiRecord }}"
    - debug:
        msg: test to see amazing vm_name! "{{ vm_name }}"
... code snipped
When the job runs, I get:
Vault password:
PLAY [localhost] ***************************************************************
TASK [Include infoblox_vault] **************************************************
ok: [127.0.0.1]
TASK [Install infoblox-client for DDI] *****************************************
ok: [127.0.0.1 -> localhost]
TASK [debug] *******************************************************************
ok: [127.0.0.1] => {
"msg": "can I decrypt username?--> \"manageiq-ddi\""
}
TASK [Check if DNS Record exists] **********************************************
fatal: [127.0.0.1]: FAILED! => {"msg": "An unhandled exception occurred while running the lookup plugin 'nios'. Error was a <type 'exceptions.Exception'>, original message: Infoblox IPAM is misconfigured: infoblox_username and infoblox_password are incorrect."}
PLAY RECAP *********************************************************************
127.0.0.1 : ok=3 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Here's the main part: "An unhandled exception occurred while running the lookup plugin 'nios'. Error was a <type 'exceptions.Exception'>, original message: Infoblox IPAM is misconfigured: infoblox_username and infoblox_password are incorrect."
This playbook used to work in the past. I haven't worked on it for a few months, and I'm not sure why it's broken.
I confirmed that I can log into infoblox client manually using the credentials. I also tried manually logging the username to ensure it's decrypting the creds from the ansible-vault file. That worked fine. So it's not the credentials, not the vault decryption. It's something else.
I found the following three related topics online, but none of them seem to resolve the problem:
This one (which references adding certs to the request. Anyone know how to do this? I can't find instructions)
This one (which mentions problems from upgrading. I showed the versions mentioned in that post to our networking folks and they said the version numbers didn't correlate at all with what we have in our environment, so it's hard to evaluate whether that's relevant.)
Last one (which calls for using a property 'http_request_timeout' : None that doesn't strike me as being the problem as I can't get it to work at all.)
Any theories? Thanks!
This might not solve it for others, but this solved it for me:
I got a new password for Ansible to use to log into Infoblox.
I created a new Ansible Vault file containing the new Infoblox password (I made a new password for the vault file encryption as well).
I created a new credential object in Ansible to enable it to read the new vault file.
I updated the playbook to use the new vault.
It works now. Something was wrong with the encryption.

Ansible not gathering facts on tags

I'm used to passing --tags to ansible-playbook to filter which tasks will be executed.
I recently switched from Ansible 2.7 to 2.9 (huge gap, eh?).
I was surprised that Ansible did not gather facts when I used --tags. I saw multiple similar cases closed on GitHub, like this one or this one. It seems to affect Ansible since version 2.8, but those issues are shown as resolved. Can anybody confirm this behavior?
ANSIBLE VERSION :
ansible --version
ansible 2.9.9.post0
config file = None
configured module search path = [u'/opt/ansible/ansible/library']
ansible python module location = /opt/ansible/ansible/lib/ansible
executable location = /opt/ansible/ansible/bin/ansible
python version = 2.7.6 (default, Nov 13 2018, 12:45:42) [GCC 4.8.4]
ANSIBLE CONFIG :
ansible-config dump --only-changed
DEFAULT_MODULE_PATH(env: ANSIBLE_LIBRARY) = [u'/opt/ansible/ansible/library']
STEPS TO REPRODUCE :
playbook test.yml :
- name: First test
  hosts: localhost
  connection: local
  gather_facts: yes
  roles:
    - { role: test, tags: test }
  tags: first

- name: Second test
  hosts: localhost
  connection: local
  gather_facts: yes
  roles:
    - { role: test, tags: test }
  tags: second
role : roles/test/tasks/main.yml
- debug:
    msg: System {{ inventory_hostname }} has uuid {{ ansible_product_uuid }}
Results :
ansible-playbook test.yml --check
= No errors.
ansible-playbook test.yml --check --tags "test"
= failed: 1
"The task includes an option with an undefined variable. The error was: 'ansible_product_uuid' is undefined [...]"
And I can see on the output that facts are not gathered.
Well, it seems to be intended behaviour when you have tags at the play level:
This is intended behavior. Tagging a play with tags applies those tags to the gather_facts step and removes the always tag which is applied by default. If the goal is to tag the play, you can add a setup task with tags in order to gather facts.
samdoran commented on 11 Jun 2019
Please note that this is, then, not linked to the usage of roles, as it can be reproduced by simply doing:
- name: First test
  hosts: all
  tags:
    - first
  tasks:
    - debug:
        msg: System {{ inventory_hostname }} has uuid {{ ansible_product_uuid }}
      tags: test
Which yields the failing recap:
$ ANSIBLE_STDOUT_CALLBACK=debug ansible-playbook play.yml --tags "test"
PLAY [First test] *************************************************************************************************
TASK [debug] ******************************************************************************************************
fatal: [localhost]: FAILED! => {}
MSG:
The task includes an option with an undefined variable. The error was: 'ansible_product_uuid' is undefined
The error appears to be in '/ansible/play.yml': line 7, column 7, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
tasks:
- debug:
^ here
PLAY RECAP ********************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
So you'll either have to remove the tags at play level, or use the setup module, as suggested.
This can be done inside your role, so your role stops relying on variables that might not be set.
Given the role roles/test/tasks/main.yml
- setup:
- debug:
    msg: System {{ inventory_hostname }} has uuid {{ ansible_product_uuid }}
And the playbook:
- name: First test
  hosts: all
  tags:
    - first
  roles:
    - role: test
      tags:
        - test

- name: Second test
  hosts: all
  tags:
    - second
  roles:
    - role: test
      tags:
        - test
Here would be the run and recap for it:
$ ansible-playbook play.yml --tags "test"
PLAY [First test] *************************************************************************************************
TASK [test : setup] ***********************************************************************************************
ok: [localhost]
TASK [test : debug] ***********************************************************************************************
ok: [localhost] => {
"msg": "System localhost has uuid 3fc44bc9-0000-0000-b25d-bf9e26ce0762"
}
PLAY [Second test] ************************************************************************************************
TASK [test : setup] ***********************************************************************************************
ok: [localhost]
TASK [test : debug] ***********************************************************************************************
ok: [localhost] => {
"msg": "System localhost has uuid 3fc44bc9-0000-0000-b25d-bf9e26ce0762"
}
PLAY RECAP ********************************************************************************************************
localhost : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
All this run on:
ansible 2.9.9
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.3 (default, May 15 2020, 01:53:50) [GCC 9.3.0]

Using Netbox Ansible Modules

I've been wanting to try out Ansible modules available for Netbox [1].
However, I find myself stuck right in the beginning.
Here's what I've tried:
Add prefix/VLAN to netbox [2]:
cat setup-vlans.yml
---
- hosts: netbox
  tasks:
    - name: Create prefix 192.168.10.0/24 in Netbox
      netbox_prefix:
        netbox_token: "{{ netbox_token }}"
        netbox_url: "{{ netbox_url }}"
        data:
          prefix: 192.168.10.0/24
        state: present
That gives me the following error:
ansible-playbook setup-vlans.yml
PLAY [netbox] *********************************************************************************************************************************
TASK [Gathering Facts] ************************************************************************************************************************
ok: [NETBOX]
TASK [Create prefix 192.168.10.0/24 in Netbox] ************************************************************************************************
fatal: [NETBOX]: FAILED! => {"changed": false, "msg": "Failed to establish connection to Netbox API"}
PLAY RECAP ************************************************************************************************************************************
NETBOX : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Can someone please point me where I am going wrong?
Note: The NetBox URL is an https://url setup with nginx and netbox-docker [3].
Thanks & Regards,
Sana
[1] https://github.com/netbox-community/ansible_modules
[2] https://docs.ansible.com/ansible/latest/modules/netbox_prefix_module.html
[3] https://github.com/netbox-community/netbox-docker
I had the same issue. Apparently the pynetbox API has changed its instantiation (ssl_verify is now replaced by requests session parameters).
I had to force ansible galaxy to update to the latest netbox module with:
ansible-galaxy collection install netbox.netbox -f
The force option did the trick for me.
All playbooks using API modules like netbox (the same goes for gcp or aws) must target not the managed device but the host that will execute the playbook and call the API. Most of the time this is localhost, but it can also be a dedicated node, such as a bastion.
You can see in the example on the documentation you linked that it uses hosts: localhost.
Hence I think your playbook should be
---
- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: Create prefix 192.168.10.0/24 in Netbox
      netbox_prefix:
        netbox_token: "{{ netbox_token }}"
        netbox_url: "{{ netbox_url }}"
        data:
          prefix: 192.168.10.0/24
        state: present

Need ansible inventory file details

Could someone please help me write an Ansible inventory file to connect to Bitbucket, clone a file, and place it on the Ansible machine?
Playbook
---
- hosts: bitbucketURL
  tasks:
    - git:
        repo: https://p-bitbucket.com:5999/projects/VIT/repos/sample-playbooks/browse/hello.txt
        dest: /home/xxx/demo/output/
Inventory file
[bitbucketURL]
p-bitbucket.com:5999
[bitbucketURL:vars]
ansible_connection=winrm
ansible_user=xxx
ansible_pass=<passwd>
I am getting this error when using the playbook and inventory file:
-bash-4.2$ ansible-playbook -i inv demo_draft1.yml
PLAY [bitbucketURL] *****************************************************************************************************************************************
TASK [Gathering Facts] **************************************************************************************************************************************
fatal: [p-bitbucket.nl.eu.abnamro.com]: UNREACHABLE! => {"changed": false, "msg": "ssl: auth method ssl requires a password", "unreachable": true}
to retry, use: --limit #/home/c55016a/demo/demo_draft1.retry
PLAY RECAP **************************************************************************************************************************************************
p-bitbucket.nl.eu.abnamro.com : ok=0 changed=0 unreachable=1 failed=0
Please help me write a proper inventory file with correct parameters
You need no inventory at all. All you need to do is to set the play to execute on localhost:
---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - git:
        repo: https://p-bitbucket.com:5999/projects/VIT/repos/sample-playbooks/browse/hello.txt
        dest: /home/xxx/demo/output/
That said, the repo URL should point to a Git repository, not to a single file (and hello.txt appears to be a single file).
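For example, a git task against the repository itself would look something like this (the /scm/... clone URL below is a guess at Bitbucket's clone path, not taken from the question; copy the real one from the repository's clone dialog):

```yaml
- git:
    # hypothetical clone URL -- replace with the URL Bitbucket shows under "Clone"
    repo: https://p-bitbucket.com:5999/scm/vit/sample-playbooks.git
    dest: /home/xxx/demo/output/sample-playbooks
```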