Ansible, module_defaults and package module

So I'm trying to write roles and plays for our environment that are as OS agnostic as possible.
We use RHEL, Debian, Gentoo, and FreeBSD.
Gentoo needs getbinpkg, which I can make the default for all calls to community.general.portage with module_defaults.
But I can, and do, install some packages on the other 3 systems by setting variables and just calling package.
Both of the following plays work on Gentoo, but both fail on Debian and the others: the first because of the unknown option getbinpkg, the second because it explicitly calls portage on systems that don't have it.
- hosts: localhost
  module_defaults:
    package:
      getbinpkg: yes
  tasks:
    - package:
        name: "{{ packages }}"
- hosts: localhost
  module_defaults:
    community.general.portage:
      getbinpkg: yes
  tasks:
    - community.general.portage:
        name: "{{ packages }}"
Is there any way to pass module_defaults "through" the package module to the portage module?
I did try a when on module_defaults to restrict setting the defaults; interestingly, it was completely ignored.
- hosts: localhost
  module_defaults:
    package:
      getbinpkg: yes
    when: ansible_facts['distribution'] == 'Gentoo'
  tasks:
    - package:
        name: "{{ packages }}"
I really don't want to mess with EMERGE_DEFAULT_OPTS.

Q: "How to load distribution-specific module defaults?"
Update
Unfortunately, the solution described below doesn't work anymore. For some reason, the keys of the dictionary module_defaults must be static. The play below fails with the error
ERROR! The field 'module_defaults' is supposed to be a dictionary or list of dictionaries, the keys of which must be static action, module, or group names. Only the values may contain templates. For example: {'ping': "{{ ping_defaults }}"}
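Per that error message, only the values may be templated, so a form that keeps the module name static still validates. A sketch (portage_defaults is an illustrative variable name, not part of the original question):

```yaml
- hosts: localhost
  vars:
    portage_defaults:
      getbinpkg: yes
  module_defaults:
    # Static key, templated value -- the only form the error message allows.
    community.general.portage: "{{ portage_defaults }}"
  tasks:
    - community.general.portage:
        name: "{{ packages }}"
```

This only sidesteps the static-key restriction; any distribution-specific selection still has to happen inside the templated value.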
See details, workaround, and solution.
A: The variable ansible_distribution, like other setup facts, is not available when a playbook starts. When fact gathering is not disabled, you can see "Gathering Facts" as the first task of a play. The solution is to collect the facts and declare a dictionary with module defaults in the first play, then use them in the rest of the playbook. For example
shell> cat pb.yml
- hosts: test_01
  gather_subset: min
  tasks:
    - debug:
        var: ansible_distribution
    - set_fact:
        my_module_defaults:
          Ubuntu:
            debug:
              msg: "Ubuntu"
          FreeBSD:
            debug:
              msg: "FreeBSD"

- hosts: test_01
  gather_facts: false
  module_defaults: "{{ my_module_defaults[ansible_distribution] }}"
  tasks:
    - debug:
gives
shell> ansible-playbook pb.yml

PLAY [test_01] ****

TASK [Gathering Facts] ****
ok: [test_01]

TASK [debug] ****
ok: [test_01] =>
  ansible_distribution: FreeBSD

TASK [set_fact] ****
ok: [test_01]

PLAY [test_01] ****

TASK [debug] ****
ok: [test_01] =>
  msg: FreeBSD

PLAY RECAP ****
test_01: ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

Related

Is it possible to run a tasklist from within an Ansible module?

I am developing a collection containing a number of playbooks for complex system configurations. However, I'd like to simplify their usage to a single module plugin, named 'install'. The Python code of the module will decide which playbook to call to perform the actual task. Say
- name: Some playbook written by the end user
  hosts: all
  tasks:
    ...
    - install:
        name: docker
        version: latest
        state: present
    ...
Based on the specified name, version, and state, the Python code of the install module will programmatically invoke the appropriate playbook(s) to install the latest release of Docker. These playbooks are also included in the collection.
If task lists are a better fit than playbooks, then task lists it is.
In fact, I don't mind the precise implementation, as long as:
Everything is packed into a collection
The end user does it all with one 'install' task as depicted above
Is that even possible?
Yes. It's possible. For example, given the variable in a file
shell> cat foo_bar_install.yml
foo_bar_install:
  name: docker
  version: latest
  state: present
Create the playbooks in the collection foo.bar
shell> tree collections/ansible_collections/foo/bar/
collections/ansible_collections/foo/bar/
├── galaxy.yml
└── playbooks
    ├── install_docker.yml
    └── install.yml

1 directory, 3 files
shell> cat collections/ansible_collections/foo/bar/playbooks/install_docker.yml
- name: Install docker
  hosts: all
  gather_facts: false
  tasks:
    - debug:
        msg: |
          Playbook foo.bar.install_docker.yml
          version: {{ version|d('UNDEF') }}
          state: {{ state|d('UNDEF') }}
shell> cat collections/ansible_collections/foo/bar/playbooks/install.yml
- import_playbook: "foo.bar.install_{{ foo_bar_install.name }}.yml"
  vars:
    version: "{{ foo_bar_install.version }}"
    state: "{{ foo_bar_install.state }}"
Given the inventory
shell> cat hosts
host1
host2
Run the playbook foo.bar.install.yml and provide the extra variables from the file foo_bar_install.yml
shell> ansible-playbook foo.bar.install.yml -e @foo_bar_install.yml
PLAY [Install docker] ************************************************************************

TASK [debug] *********************************************************************************
ok: [host1] =>
  msg: |-
    Playbook foo.bar.install_docker.yml
    version: latest
    state: present
ok: [host2] =>
  msg: |-
    Playbook foo.bar.install_docker.yml
    version: latest
    state: present

PLAY RECAP ***********************************************************************************
host1: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host2: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

Ansible playbook run with a play tag gathers facts even though gather_facts is set to 'no'

The playbook below is run using ansible-playbook playbook.yml --tags=rancher
- name: install docker
  hosts: rancher-server
  become: yes
  gather_facts: yes
  roles:
    - role: some_galaxy_role

- name: install rancher
  hosts: rancher-server
  become: yes
  gather_facts: no
  tasks:
    - name: install rancher
      debug:
      tags:
        - rancher
Only the install rancher play is selected by the rancher tag and runs as expected. However, fact gathering for the first play, install docker, still runs and takes time. Why, and is there a way to avoid it?
Below is the output of the playbook run:
PLAY [install docker] *********************************************************************************************************************************************************
TASK [Gathering Facts] *******************************************************************************************************************************************************
ok: [rancher-server1]
ok: [rancher-server2]
PLAY [install rancher]
You can put a tag at the play level so the whole install docker play is skipped.
Given:
- name: Install Docker
  hosts: localhost
  gather_facts: yes
  tags:
    - docker
  tasks:
    - debug:

- name: Install rancher
  hosts: localhost
  gather_facts: yes
  tags:
    - rancher
  tasks:
    - debug:
When run with --tags rancher, this yields:
PLAY [Install Docker] *********************************************************************************************
PLAY [Install rancher] ********************************************************************************************
TASK [Gathering Facts] ********************************************************************************************
ok: [localhost]
TASK [debug] ******************************************************************************************************
ok: [localhost] =>
  msg: Hello world!
PLAY RECAP ********************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
On the other hand, please mind that you are not forced to gather all the facts; you can also gather subsets to speed up plays.
For example, you can use a minimal subset of the facts only:
- name: Install Docker
  hosts: localhost
  gather_subset:
    - min
  tasks:
    - debug:
Of course, it all depends on what the some_galaxy_role needs from the gathered facts.
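Subsets can also be negated, per the setup module's gather_subset documentation, for plays where even the minimal facts are not needed. A sketch under that assumption:

```yaml
- name: Install Docker
  hosts: localhost
  gather_facts: yes
  gather_subset:
    - "!all"
    - "!min"   # combined with !all, skips even the minimal subset
  tasks:
    - debug:
```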

Can I run a task on a specific host or group of hosts in an Ansible play?

---
- hosts: all
  become: yes
  tasks:
    - name: Disable tuned
      hosts: client1.local
      service:
        name: tuned
        enabled: false
        state: stopped
It does not work anyway. Here is the error:
[root@centos7 ansible]# ansible-playbook playbook/demo.yaml
ERROR! conflicting action statements: hosts, service
The error appears to be in '/root/ansible/playbook/demo.yaml': line 24, column 7, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: Disable tuned
^ here
For example
- hosts: test_11,test_12,test_13
  vars:
    client1:
      showroom: [test_11, test_12]
      local: [test_13]
  tasks:
    - debug:
        var: inventory_hostname
    - debug:
        msg: "Local client: {{ inventory_hostname }}"
      when: inventory_hostname in client1.local
gives
TASK [debug] ****************************************************************
ok: [test_11] =>
  inventory_hostname: test_11
ok: [test_12] =>
  inventory_hostname: test_12
ok: [test_13] =>
  inventory_hostname: test_13

TASK [debug] ****************************************************************
skipping: [test_11]
skipping: [test_12]
ok: [test_13] =>
  msg: 'Local client: test_13'
Since I had a similar requirement to structure hosts into groups, I found the following approach works for me.
First, I structured my inventory according to my environment, administrative groups, and tasks.
inventory
[infrastructure:children]
prod
qa
dev
[prod:children]
tuned_hosts
[tuned_hosts]
client1.local
Then I can use it in the
playbook
---
- hosts: all
  become: yes
  tasks:
    - name: Disable tuned
      service:
        name: tuned
        enabled: false
        state: stopped
      when: ('tuned_hosts' in group_names) # or ('prod' in group_names)
as well as something like
when: ("dev" not in group_names)
depending on what I try to achieve.
Documentation
How to build your inventory
Special Variables
Playbook Conditionals

Ansible not gathering facts on tags

I'm used to passing --tags when running ansible-playbook to filter which tasks will be executed.
I recently switched from Ansible 2.7 to 2.9 (huge gap, eh?).
I was surprised that Ansible did not gather facts when I used --tags. I saw multiple similar cases closed on GitHub, like this one or this one. It seems to affect Ansible since version 2.8, but those issues are shown as resolved. Can anybody confirm this behavior? It seems to happen from 2.8 onward.
ANSIBLE VERSION :
ansible --version
ansible 2.9.9.post0
config file = None
configured module search path = [u'/opt/ansible/ansible/library']
ansible python module location = /opt/ansible/ansible/lib/ansible
executable location = /opt/ansible/ansible/bin/ansible
python version = 2.7.6 (default, Nov 13 2018, 12:45:42) [GCC 4.8.4]
ANSIBLE CONFIG :
ansible-config dump --only-changed
DEFAULT_MODULE_PATH(env: ANSIBLE_LIBRARY) = [u'/opt/ansible/ansible/library']
STEPS TO REPRODUCE :
playbook test.yml:
- name: First test
  hosts: localhost
  connection: local
  gather_facts: yes
  roles:
    - { role: test, tags: test }
  tags: first

- name: Second test
  hosts: localhost
  connection: local
  gather_facts: yes
  roles:
    - { role: test, tags: test }
  tags: second
role: roles/test/tasks/main.yml
- debug:
    msg: System {{ inventory_hostname }} has uuid {{ ansible_product_uuid }}
Results :
ansible-playbook test.yml --check
= No errors.
ansible-playbook test.yml --check --tags "test"
= failed: 1
"The task includes an option with an undefined variable. The error was: 'ansible_product_uuid' is undefined [...]"
And I can see on the output that facts are not gathered.
Well, it seems to be intended behaviour when you have tags at the play level:
This is intended behavior. Tagging a play with tags applies those tags to the gather_facts step and removes the always tag which is applied by default. If the goal is to tag the play, you can add a setup task with tags in order to gather facts.
samdoran commented on 11 Jun 2019
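Following that suggestion, a play-level sketch: keep the play tag, but gather facts explicitly with a setup task tagged always, so it runs whichever --tags are selected (an illustration of the quoted advice, not the original reporter's playbook):

```yaml
- name: First test
  hosts: all
  gather_facts: no    # skip the implicit (play-tagged) gathering
  tags:
    - first
  tasks:
    - setup:          # explicit fact gathering
      tags: always    # runs regardless of --tags
    - debug:
        msg: System {{ inventory_hostname }} has uuid {{ ansible_product_uuid }}
      tags: test
```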
Please note that this is, then, not linked to the usage of roles, as it can be reproduced by simply doing:
- name: First test
  hosts: all
  tags:
    - first
  tasks:
    - debug:
        msg: System {{ inventory_hostname }} has uuid {{ ansible_product_uuid }}
      tags: test
Which yields the failing recap:
$ ANSIBLE_STDOUT_CALLBACK=debug ansible-playbook play.yml --tags "test"
PLAY [First test] *************************************************************************************************
TASK [debug] ******************************************************************************************************
fatal: [localhost]: FAILED! => {}
MSG:
The task includes an option with an undefined variable. The error was: 'ansible_product_uuid' is undefined
The error appears to be in '/ansible/play.yml': line 7, column 7, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
tasks:
- debug:
^ here
PLAY RECAP ********************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
So you'll either have to remove the tag at your play level, or use the setup module, as prompted.
This can be done inside your role, so your role stops relying on variables that could possibly not be set.
Given the role roles/test/tasks/main.yml
- setup:
- debug:
    msg: System {{ inventory_hostname }} has uuid {{ ansible_product_uuid }}
And the playbook:
- name: First test
  hosts: all
  tags:
    - first
  roles:
    - role: test
      tags:
        - test

- name: Second test
  hosts: all
  tags:
    - second
  roles:
    - role: test
      tags:
        - test
Here would be the run and recap for it:
$ ansible-playbook play.yml --tags "test"
PLAY [First test] *************************************************************************************************
TASK [test : setup] ***********************************************************************************************
ok: [localhost]
TASK [test : debug] ***********************************************************************************************
ok: [localhost] => {
"msg": "System localhost has uuid 3fc44bc9-0000-0000-b25d-bf9e26ce0762"
}
PLAY [Second test] ************************************************************************************************
TASK [test : setup] ***********************************************************************************************
ok: [localhost]
TASK [test : debug] ***********************************************************************************************
ok: [localhost] => {
"msg": "System localhost has uuid 3fc44bc9-0000-0000-b25d-bf9e26ce0762"
}
PLAY RECAP ********************************************************************************************************
localhost : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
All this run on:
ansible 2.9.9
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.3 (default, May 15 2020, 01:53:50) [GCC 9.3.0]

Using Netbox Ansible Modules

I've been wanting to try out Ansible modules available for Netbox [1].
However, I find myself stuck right in the beginning.
Here's what I've tried:
Add prefix/VLAN to netbox [2]:
cat setup-vlans.yml
---
- hosts: netbox
  tasks:
    - name: Create prefix 192.168.10.0/24 in Netbox
      netbox_prefix:
        netbox_token: "{{ netbox_token }}"
        netbox_url: "{{ netbox_url }}"
        data:
          prefix: 192.168.10.0/24
        state: present
That gives me the following error:
ansible-playbook setup-vlans.yml
PLAY [netbox] *********************************************************************************************************************************
TASK [Gathering Facts] ************************************************************************************************************************
ok: [NETBOX]
TASK [Create prefix 192.168.10.0/24 in Netbox] ************************************************************************************************
fatal: [NETBOX]: FAILED! => {"changed": false, "msg": "Failed to establish connection to Netbox API"}
PLAY RECAP ************************************************************************************************************************************
NETBOX : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Can someone please point me where I am going wrong?
Note: The NetBox URL is an https://url setup with nginx and netbox-docker [3].
Thanks & Regards,
Sana
[1] https://github.com/netbox-community/ansible_modules
[2] https://docs.ansible.com/ansible/latest/modules/netbox_prefix_module.html
[3] https://github.com/netbox-community/netbox-docker
I had the same issue. Apparently the pynetbox API changed its instantiation (ssl_verify has been replaced by requests session parameters).
I had to force ansible-galaxy to update to the latest netbox collection with:
ansible-galaxy collection install netbox.netbox -f
The force option did the trick for me.
All playbooks using API modules like Netbox's (the same holds for GCP or AWS) must target not the managed device but the host that will execute the module and call the API. Most of the time this is localhost, but it can also be a dedicated node like a bastion.
You can see in the example on the documentation you linked that it uses hosts: localhost.
Hence I think your playbook should be
---
- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: Create prefix 192.168.10.0/24 in Netbox
      netbox_prefix:
        netbox_token: "{{ netbox_token }}"
        netbox_url: "{{ netbox_url }}"
        data:
          prefix: 192.168.10.0/24
        state: present
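Since the question mentions an https URL behind nginx, another frequent cause of "Failed to establish connection" is an unverifiable (e.g. self-signed) certificate. The collection's modules expose a validate_certs option; a sketch for testing only, assuming a self-signed certificate (do not disable verification in production):

```yaml
---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Create prefix 192.168.10.0/24 in Netbox
      netbox.netbox.netbox_prefix:
        netbox_url: "{{ netbox_url }}"
        netbox_token: "{{ netbox_token }}"
        validate_certs: false   # assumption: self-signed cert; testing only
        data:
          prefix: 192.168.10.0/24
        state: present
```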
