I've been wanting to try out the Ansible modules available for NetBox [1].
However, I find myself stuck right at the beginning.
Here's what I've tried:
Add prefix/VLAN to netbox [2]:
cat setup-vlans.yml
---
- hosts: netbox
  tasks:
    - name: Create prefix 192.168.10.0/24 in Netbox
      netbox_prefix:
        netbox_token: "{{ netbox_token }}"
        netbox_url: "{{ netbox_url }}"
        data:
          prefix: 192.168.10.0/24
        state: present
That gives me the following error:
ansible-playbook setup-vlans.yml
PLAY [netbox] *********************************************************************************************************************************
TASK [Gathering Facts] ************************************************************************************************************************
ok: [NETBOX]
TASK [Create prefix 192.168.10.0/24 in Netbox] ************************************************************************************************
fatal: [NETBOX]: FAILED! => {"changed": false, "msg": "Failed to establish connection to Netbox API"}
PLAY RECAP ************************************************************************************************************************************
NETBOX : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Can someone please point out where I am going wrong?
Note: the NetBox URL is an https:// URL set up with nginx and netbox-docker [3].
Thanks & Regards,
Sana
[1] https://github.com/netbox-community/ansible_modules
[2] https://docs.ansible.com/ansible/latest/modules/netbox_prefix_module.html
[3] https://github.com/netbox-community/netbox-docker
I had the same issue. Apparently the pynetbox API has changed its instantiation (ssl_verify has been replaced by requests session parameters).
I had to force ansible-galaxy to update to the latest netbox collection with:
ansible-galaxy collection install netbox.netbox -f
The force option did the trick for me.
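If you want to keep the collection from drifting again, you can also pin it in a requirements file; a minimal sketch, with an assumed version floor (use whichever release fixed it for you):
# requirements.yml -- the version constraint below is an assumption
collections:
  - name: netbox.netbox
    version: ">=1.1.0"
Then install it with:
ansible-galaxy collection install -r requirements.yml -f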
All playbooks using API modules like the netbox ones (the same is true for gcp or aws) must use as their host not the target device but the host that will execute the playbook and call the API. Most of the time this is localhost, but it can also be a dedicated node like a bastion.
You can see that the example in the documentation you linked uses hosts: localhost.
Hence I think your playbook should be:
---
- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: Create prefix 192.168.10.0/24 in Netbox
      netbox_prefix:
        netbox_token: "{{ netbox_token }}"
        netbox_url: "{{ netbox_url }}"
        data:
          prefix: 192.168.10.0/24
        state: present
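Since the play now runs on localhost, netbox_url and netbox_token must be defined somewhere the play can see them. One possible sketch is a vars file loaded explicitly (the filename and placeholder values are assumptions, not from the question):
# netbox_vars.yml -- sketch with placeholder values; consider Ansible Vault for the token
netbox_url: "https://netbox.example.com"
netbox_token: "0123456789abcdef0123456789abcdef01234567"
Then run it with:
ansible-playbook setup-vlans.yml -e @netbox_vars.yml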
So I'm trying to write roles and plays for our environment that are as OS-agnostic as possible.
We use RHEL, Debian, Gentoo, and FreeBSD.
Gentoo needs getbinpkg, which I can make the default for all calls to community.general.portage with module_defaults.
But I can, and do, install some packages on the other 3 systems by setting variables and just calling package.
Both of the following plays work for Gentoo, but both fail on Debian and the others: the first because of the unknown option getbinpkg, and the second because it explicitly uses portage on systems that don't have it.
- hosts: localhost
  module_defaults:
    package:
      getbinpkg: yes
  tasks:
    - package:
        name: "{{ packages }}"
- hosts: localhost
  module_defaults:
    community.general.portage:
      getbinpkg: yes
  tasks:
    - community.general.portage:
        name: "{{ packages }}"
Is there any way to pass module_defaults "through" the package module to the portage module?
I did try a when condition on module_defaults to restrict when the defaults are set; interestingly, it was completely ignored.
- hosts: localhost
  module_defaults:
    package:
      getbinpkg: yes
    when: ansible_facts['distribution'] == 'Gentoo'
  tasks:
    - package:
        name: "{{ packages }}"
I really don't want to mess with EMERGE_DEFAULT_OPTS.
Q: "How to load distribution-specific module defaults?"
Update
Unfortunately, the solution described below doesn't work anymore: the keys of the module_defaults dictionary must now be static. The play below fails with the error
ERROR! The field 'module_defaults' is supposed to be a dictionary or list of dictionaries, the keys of which must be static action, module, or group names. Only the values may contain templates. For example: {'ping': "{{ ping_defaults }}"}
See details, workaround, and solution.
A: The variable ansible_distribution, like other setup facts, is not available when a playbook starts. You can see "Gathering Facts" run as the first task of a play when it is not disabled. The solution is to collect the facts and declare a dictionary with module defaults in the first play, then use them in the rest of the playbook. For example
shell> cat pb.yml
- hosts: test_01
  gather_subset: min
  tasks:
    - debug:
        var: ansible_distribution
    - set_fact:
        my_module_defaults:
          Ubuntu:
            debug:
              msg: "Ubuntu"
          FreeBSD:
            debug:
              msg: "FreeBSD"

- hosts: test_01
  gather_facts: false
  module_defaults: "{{ my_module_defaults[ansible_distribution] }}"
  tasks:
    - debug:
gives
shell> ansible-playbook pb.yml
PLAY [test_01] ****
TASK [Gathering Facts] ****
ok: [test_01]
TASK [debug] ****
ok: [test_01] =>
ansible_distribution: FreeBSD
TASK [set_fact] ****
ok: [test_01]
PLAY [test_01] ****
TASK [debug] ****
ok: [test_01] =>
msg: FreeBSD
PLAY RECAP ****
test_01: ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
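For reference, on versions where a templated module_defaults was still accepted, the same pattern could be applied to the original portage question roughly as follows (an untested sketch; packages is assumed to be defined elsewhere):
- hosts: all
  gather_subset: min
  tasks:
    - set_fact:
        my_module_defaults:
          Gentoo:
            community.general.portage:
              getbinpkg: yes

- hosts: all
  gather_facts: false
  module_defaults: "{{ my_module_defaults[ansible_distribution] | default({}) }}"
  tasks:
    - package:
        name: "{{ packages }}"
The default({}) filter keeps the second play working on distributions that have no entry in the dictionary.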
I'm used to using --tags when running ansible-playbook to filter which tasks will be executed.
I recently switched from Ansible 2.7 to 2.9 (huge gap, eh?).
I was surprised that Ansible did not gather facts when I used --tags. I saw multiple similar cases closed on GitHub, like this one or this one. It seems to affect Ansible since version 2.8, even though the issues are shown as resolved. Can anybody confirm this behavior?
ANSIBLE VERSION:
ansible --version
ansible 2.9.9.post0
config file = None
configured module search path = [u'/opt/ansible/ansible/library']
ansible python module location = /opt/ansible/ansible/lib/ansible
executable location = /opt/ansible/ansible/bin/ansible
python version = 2.7.6 (default, Nov 13 2018, 12:45:42) [GCC 4.8.4]
ANSIBLE CONFIG:
ansible-config dump --only-changed
DEFAULT_MODULE_PATH(env: ANSIBLE_LIBRARY) = [u'/opt/ansible/ansible/library']
STEPS TO REPRODUCE:
playbook test.yml:
- name: First test
  hosts: localhost
  connection: local
  gather_facts: yes
  roles:
    - { role: test, tags: test }
  tags: first

- name: Second test
  hosts: localhost
  connection: local
  gather_facts: yes
  roles:
    - { role: test, tags: test }
  tags: second
role roles/test/tasks/main.yml:
- debug:
    msg: System {{ inventory_hostname }} has uuid {{ ansible_product_uuid }}
Results:
ansible-playbook test.yml --check
= No errors.
ansible-playbook test.yml --check --tags "test"
= failed: 1
"The task includes an option with an undefined variable. The error was: 'ansible_product_uuid' is undefined [...]"
And I can see in the output that facts are not gathered.
Well, it seems to be intended behaviour when you have tags at the play level:
This is intended behavior. Tagging a play with tags applies those tags to the gather_facts step and removes the always tag which is applied by default. If the goal is to tag the play, you can add a setup task with tags in order to gather facts.
samdoran commented on 11 Jun 2019
Please note that this is therefore not linked to the usage of roles, as it can be reproduced simply with:
- name: First test
  hosts: all
  tags:
    - first
  tasks:
    - debug:
        msg: System {{ inventory_hostname }} has uuid {{ ansible_product_uuid }}
      tags: test
Which yields the failing recap:
$ ANSIBLE_STDOUT_CALLBACK=debug ansible-playbook play.yml --tags "test"
PLAY [First test] *************************************************************************************************
TASK [debug] ******************************************************************************************************
fatal: [localhost]: FAILED! => {}
MSG:
The task includes an option with an undefined variable. The error was: 'ansible_product_uuid' is undefined
The error appears to be in '/ansible/play.yml': line 7, column 7, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
tasks:
- debug:
^ here
PLAY RECAP ********************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
So you'll either have to remove the tags at your play level, or use the setup module, as suggested.
This can be done inside your role, so your role stops relying on variables that might not be set.
Given the role roles/test/tasks/main.yml:
- setup:
- debug:
    msg: System {{ inventory_hostname }} has uuid {{ ansible_product_uuid }}
And the playbook:
- name: First test
  hosts: all
  tags:
    - first
  roles:
    - role: test
      tags:
        - test

- name: Second test
  hosts: all
  tags:
    - second
  roles:
    - role: test
      tags:
        - test
Here would be the run and recap for it:
$ ansible-playbook play.yml --tags "test"
PLAY [First test] *************************************************************************************************
TASK [test : setup] ***********************************************************************************************
ok: [localhost]
TASK [test : debug] ***********************************************************************************************
ok: [localhost] => {
"msg": "System localhost has uuid 3fc44bc9-0000-0000-b25d-bf9e26ce0762"
}
PLAY [Second test] ************************************************************************************************
TASK [test : setup] ***********************************************************************************************
ok: [localhost]
TASK [test : debug] ***********************************************************************************************
ok: [localhost] => {
"msg": "System localhost has uuid 3fc44bc9-0000-0000-b25d-bf9e26ce0762"
}
PLAY RECAP ********************************************************************************************************
localhost : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
All this run on:
ansible 2.9.9
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.3 (default, May 15 2020, 01:53:50) [GCC 9.3.0]
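A variation on samdoran's suggestion, if you'd rather not add setup to the role itself, is an explicitly always-tagged setup task at the play level; an untested sketch reusing the play layout above:
- name: First test
  hosts: all
  gather_facts: false
  tags:
    - first
  pre_tasks:
    - name: Gather facts whatever tags were selected
      setup:
      tags: always
  roles:
    - role: test
      tags:
        - test
Because the task carries the always tag, it runs for any --tags selection unless always is explicitly skipped.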
I'm trying to run an ansible playbook against multiple hosts that are running containers using the same name. There are 3 hosts each running a container called "web". I'm trying to use the docker connection.
I'm using the typical pattern of a hosts file, which works fine for running Ansible modules on the hosts.
- name: Ping
  ping:

- name: Add web container to inventory
  add_host:
    name: web
    ansible_connection: docker
    ansible_docker_extra_args: "-H=tcp://{{ ansible_host }}:2375"
    ansible_user: root
  changed_when: false

- name: Remove old logging directory
  delegate_to: web
  file:
    state: absent
    path: /var/log/old_logs
It only works against the first host in the hosts file:
PLAY [all]
TASK [Gathering Facts]
ok: [web1]
ok: [web2]
ok: [web3]
TASK [web-playbook : Ping]
ok: [web1]
ok: [web2]
ok: [web3]
TASK [web-playbook : Add sensor container to inventory]
ok: [web1]
PLAY RECAP
web1 : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
web2 : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
web3 : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
I've tried setting name to web_{{ ansible_host }} to make it unique between hosts, but then it tries to connect to web_web1. I've been running the commands with sudo docker exec web rm -rf /var/log/old_logs, which of course works, but I'd like to be able to use the Ansible modules directly against the docker containers.
The result you get is absolutely expected. Quoting the add_host documentation
This module bypasses the play host loop and only runs once for all the hosts in the play, if you need it to iterate use a with-loop construct.
i.e. you cannot rely on the hosts loop for add_host and need to make a loop yourself.
Moreover, you definitely need different names (i.e. inventory_hostname) for your dynamically created hosts; since all your docker containers share the same name, it is their ansible_host that should be the same.
Assuming all your docker host machines are in the group dockerhosts, the following playbook should do the job. I'm currently not in a situation where I can test this myself, so you may have to adjust a bit. Let me know if it helped and whether I need to edit my answer.
Note that even though the add_host task does not naturally loop, I kept the first play targeting your original group so that facts are gathered and correctly populated in the hostvars magic variable.
---
- name: Create dynamic inventory for docker containers
  hosts: dockerhosts
  tasks:
    - name: Add web container to inventory
      add_host:
        name: "web_{{ item }}"
        groups:
          - dockercontainers
        ansible_connection: docker
        ansible_host: web
        ansible_docker_extra_args: "-H=tcp://{{ hostvars[item].ansible_host }}:2375"
        ansible_user: root
      loop: "{{ groups['dockerhosts'] }}"

- name: Play needed commands on containers
  hosts: dockercontainers
  tasks:
    - name: Remove old logging directory
      file:
        state: absent
        path: /var/log/old_logs
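If you want a quick sanity check of the dynamic group before running real tasks against the containers, a small extra play could be appended (a sketch, reusing the group name above):
- name: Sanity-check connectivity to the containers
  hosts: dockercontainers
  gather_facts: false
  tasks:
    - ping: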
I am trying to add pool members to a BIG-IP pool using bigip_pool_member.
Tested on Ansible versions 2.5 and 2.6.
Result: it ALWAYS returns changed, even when it is not making any changes.
Invocation command:
ansible-playbook -i test_inventory add_pool_members.yaml --extra-vars '{"hostgroup": "test-bigip"}'
I am wondering if anyone has insights into what could be going on.
The contents of the playbook are as follows:
---
- hosts: "{{ hostgroup }}"
  gather_facts: no
  tasks:
    - name: Add servers to connection pool
      bigip_pool_member:
        user: username
        password: password
        server: "{{ inventory_hostname }}"
        validate_certs: no
        state: present
        partition: test
        pool: testpool
        host: 14.34.45.X
        name: test-server
        port: 80
        description: test
      delegate_to: localhost
Run Result
PLAY [f5-test] *****************************************************************************
TASK [Add servers to connection pool ] *****************************************************
changed: [f5-test -> localhost]
PLAY RECAP *********************************************************************************
f5-test : ok=1 changed=1 unreachable=0 failed=0
This could be related to this known bug in the module.
When running playbook with bigip_pool_member module with state: present against live device, each run results in change being made when in reality there's no need for a change.
I'm neither an F5 nor a network expert, but from what I understand this happens if you set a monitor on your pool.
There is already a pull request with fixes related to correctly reporting the state of a down machine. Check if it applies to you; otherwise I would suggest adding a detailed comment on the bug.
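To narrow this down without touching the device, a dry run with diff output shows what the module believes it is changing on each pass (the same invocation as in the question, plus the check and diff flags; this assumes the module supports check mode):
ansible-playbook -i test_inventory add_pool_members.yaml --check --diff --extra-vars '{"hostgroup": "test-bigip"}'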
I am getting an error while trying to retrieve key values from the Consul KV store.
We have key values stored under the config/app-name/ folder; there are many keys. I want to retrieve all of them from Consul using Ansible.
But I am getting the following error:
PLAY [Adding host to inventory] **********************************************************************************************************************************************************
TASK [Adding new host to inventory] ******************************************************************************************************************************************************
changed: [localhost]
PLAY [Testing consul kv] *****************************************************************************************************************************************************************
TASK [show the lookups] ******************************************************************************************************************************************************************
fatal: [server1]: FAILED! => {"failed": true, "msg": "{{lookup('consul_kv','config/app-name/')}}: An unhandled exception occurred while running the lookup plugin 'consul_kv'. Error was a <class 'ansible.errors.AnsibleError'>, original message: Error locating 'config/app-name/' in kv store. Error was 500 No known Consul servers"}
PLAY RECAP *******************************************************************************************************************************************************************************
server1 : ok=0 changed=0 unreachable=0 failed=1
localhost : ok=1 changed=1 unreachable=0 failed=0
Here is the code I am trying:
---
- name: Adding host to inventory
  hosts: localhost
  tasks:
    - name: Adding new host to inventory
      add_host:
        name: "{{ target }}"

- name: Testing consul kv
  hosts: "{{ target }}"
  vars:
    kv_info: "{{ lookup('consul_kv', 'config/app-name/') }}"
  become: yes
  tasks:
    - name: show the lookups
      debug: msg="{{ kv_info }}"
Removing and adding folders work well, but getting the key values from the Consul cluster throws this error. Please suggest a better way here.
- name: remove folder from the store
  consul_kv:
    key: 'config/app-name/'
    value: 'removing'
    recurse: true
    state: absent

- name: add folder to the store
  consul_kv:
    key: 'config/app-name/'
    value: 'adding'
I tried this but still got the same error:
---
- name: Adding host to inventory
  hosts: localhost
  environment:
    ANSIBLE_CONSUL_URL: "http://consul-1.abcaa.com"
  tasks:
    - name: Adding new host to inventory
      add_host:
        name: "{{ target }}"
    - name: show the lookups
      debug: msg="{{ lookup('consul_kv', 'config/app-name/') }}"
All lookup plugins in Ansible are always evaluated on localhost; see the docs:
Note:
Lookups occur on the local computer, not on the remote computer.
I guess you expect kv_info to be populated by fetching from Consul on the {{ target }} server.
But the lookup is actually executed on your Ansible control host (localhost), and if no ANSIBLE_CONSUL_URL is set there, you get the No known Consul servers error.
When you use the consul_kv module (to create/delete folders), it is executed on the {{ target }} host, in contrast to the consul_kv lookup plugin.
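Note also that the environment: keyword in your play only applies to tasks executed on the target, not to lookups evaluated on the controller. One sketch that should work is exporting the variable in the controller's shell for the run (pb.yml stands for your playbook file, and the port shown is Consul's default, so adjust both as needed):
shell> ANSIBLE_CONSUL_URL="http://consul-1.abcaa.com:8500" ansible-playbook pb.yml -e target=server1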