I am new to Ansible and couldn't figure out why my playbook is not picking up the group_vars/ and host_vars/ I have defined. According to the documentation:
You can also add group_vars/ and host_vars/ directories to your playbook directory. The ansible-playbook command looks for these directories in the current working directory by default.
My playbook, inventory, and other files are laid out quite simply; they should match the default.
Inventory file:
dummy
[spider]
s0ra
s0ra_slave
The playbook:
- name: base mix release upgrade Prod.
  hosts: spider
  gather_facts: false
  # vars_files:
  #   - vars/s0ra_sup.yaml
  tasks:
    - name: check release bin
      stat:
        path: "{{ sh_lastrel }}"
      register: rel_bin
When I run the playbook with ansible-playbook -i inventory.ini mix_upgrade.yaml, it complains:
PLAY [base mix release upgrade Prod.] **********************************************************************************
TASK [check release bin] ***********************************************************************************************
fatal: [s0ra]: FAILED! => {"msg": "The task includes an option with an undefined variable.
The error was: 'sh_lastrel' is undefined\n\n
The error appears to be in 'xxx/ansible/mix_upgrade.yaml': line 19, column 7, but may\n
be elsewhere in the file depending on the exact syntax problem.\n\n
The offending line appears to be:\n\n\n
- name: check release bin\n ^ here\n"}
PLAY RECAP *************************************************************************************************************
s0ra : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
But sh_lastrel is actually defined in spider.yaml, so I don't know why it is not loaded. I tried turning on -v mode, but it does not seem to give any more debugging info. Any hint about the cause, or how to debug this further, is greatly appreciated.
My ansible version is as below:
ansible --version
ansible 2.9.9
config file = None
configured module search path = ['/Users/kenchen/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/kenchen/.pyenv/versions/3.8.2/lib/python3.8/site-packages/ansible
executable location = /Users/kenchen/.pyenv/versions/3.8.2/bin/ansible
python version = 3.8.2 (default, May 18 2020, 00:02:00) [Clang 10.0.1 (clang-1001.0.46.4)]
Make sure group_vars/spider.yml is available in the playbook directory (a group_vars/ directory next to the inventory also works). For example,
shell> cat group_vars/spider.yml
sh_lastrel: value of sh_lastrel defined in group_vars/spider.yml
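For reference, the layout of this example simply collects the files shown in this answer into one tree:
shell> tree .
.
├── group_vars
│   └── spider.yml
├── inventory.ini
└── mix_upgrade.yaml
1 directory, 3 files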
shell> cat inventory.ini
dummy
[spider]
s0ra
s0ra_slave
shell> cat mix_upgrade.yaml
- hosts: spider
  gather_facts: false
  tasks:
    - debug:
        var: sh_lastrel
shell> ansible-playbook -i inventory.ini mix_upgrade.yaml
ok: [s0ra] =>
  sh_lastrel: value of sh_lastrel defined in group_vars/spider.yml
ok: [s0ra_slave] =>
  sh_lastrel: value of sh_lastrel defined in group_vars/spider.yml
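To check which variables a host resolves without running the playbook, ansible-inventory can help (as far as I know it only reads group_vars/ relative to the inventory source, which in this layout is the same directory). The output should look roughly like:
shell> ansible-inventory -i inventory.ini --host s0ra
{
    "sh_lastrel": "value of sh_lastrel defined in group_vars/spider.yml"
}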
Related
I am developing a collection containing a number of playbooks for complex system configurations. However, I'd like to simplify their usage down to a single module plugin named 'install'. The Python code of the module will decide which playbook to call to perform the actual task. Say
- name: Some playbook written by the end user
  hosts: all
  tasks:
    ...
    - install:
        name: docker
        version: latest
        state: present
    ...
Based on the specified name, version, and state, the Python code of the install module will programmatically invoke the adequate playbook(s) to perform the installation of the latest release of Docker. These playbooks are also included in the collection.
If task lists are a better fit than playbooks, then task lists it is.
In fact, I don't mind the precise implementation, as long as:
Everything is packed into a collection
The end user does it all with one 'install' task as depicted above
Is that even possible?
Yes. It's possible. For example, given the variable in a file
shell> cat foo_bar_install.yml
foo_bar_install:
  name: docker
  version: latest
  state: present
Create the playbooks in the collection foo.bar
shell> tree collections/ansible_collections/foo/bar/
collections/ansible_collections/foo/bar/
├── galaxy.yml
└── playbooks
    ├── install_docker.yml
    └── install.yml
1 directory, 3 files
shell> cat collections/ansible_collections/foo/bar/playbooks/install_docker.yml
- name: Install docker
  hosts: all
  gather_facts: false
  tasks:
    - debug:
        msg: |
          Playbook foo.bar.install_docker.yml
          version: {{ version|d('UNDEF') }}
          state: {{ state|d('UNDEF') }}
shell> cat collections/ansible_collections/foo/bar/playbooks/install.yml
- import_playbook: "foo.bar.install_{{ foo_bar_install.name }}.yml"
  vars:
    version: "{{ foo_bar_install.version }}"
    state: "{{ foo_bar_install.state }}"
Given the inventory
shell> cat hosts
host1
host2
Run the playbook foo.bar.install.yml and provide the extra variables in the file foo_bar_install.yml
shell> ansible-playbook foo.bar.install.yml -e @foo_bar_install.yml
PLAY [Install docker] ************************************************************************
TASK [debug] *********************************************************************************
ok: [host1] =>
  msg: |-
    Playbook foo.bar.install_docker.yml
    version: latest
    state: present
ok: [host2] =>
  msg: |-
    Playbook foo.bar.install_docker.yml
    version: latest
    state: present
PLAY RECAP ***********************************************************************************
host1: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host2: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
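If task lists (or a role) turn out to be a better fit than playbooks, a similar dispatch can be done with include_role/include_tasks. A minimal sketch, assuming a role named install is added to the collection; the role name and the install_* variable names are illustrative, and install_docker.yml would sit next to main.yml in the role's tasks directory and do the actual work:
shell> cat collections/ansible_collections/foo/bar/roles/install/tasks/main.yml
# Dispatch to the task file that matches the requested package name
- ansible.builtin.include_tasks: "install_{{ install_name }}.yml"

shell> cat pb.yml
- hosts: all
  gather_facts: false
  tasks:
    - name: Install docker through the collection role
      ansible.builtin.include_role:
        name: foo.bar.install
      vars:
        install_name: docker
        install_version: latest
        install_state: present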
So I'm trying to write roles and plays for our environment that are as OS agnostic as possible.
We use RHEL, Debian, Gentoo, and FreeBSD.
Gentoo needs getbinpkg, which I can make the default for all calls to community.general.portage with module_defaults.
But I can, and do, install some packages on the other 3 systems by setting variables and just calling package.
Both of the following plays work for Gentoo, but both fail on Debian etc.: the first because of the unknown option getbinpkg, and the second because it explicitly uses portage on systems that don't have portage.
- hosts: localhost
  module_defaults:
    package:
      getbinpkg: yes
  tasks:
    - package:
        name: "{{ packages }}"
- hosts: localhost
  module_defaults:
    community.general.portage:
      getbinpkg: yes
  tasks:
    - community.general.portage:
        name: "{{ packages }}"
Is there any way to pass module_defaults "through" the package module to the portage module?
I did try a when condition on module_defaults to restrict setting the defaults; interestingly, it was completely ignored.
- hosts: localhost
  module_defaults:
    package:
      getbinpkg: yes
    when: ansible_facts['distribution'] == 'Gentoo'
  tasks:
    - package:
        name: "{{ packages }}"
I really don't want to mess with EMERGE_DEFAULT_OPTS.
Q: "How to load distribution-specific module defaults?"
Update
Unfortunately, the solution described below doesn't work anymore. For some reason, the keys of the dictionary module_defaults must be static. The play below fails with the error
ERROR! The field 'module_defaults' is supposed to be a dictionary or list of dictionaries, the keys of which must be static action, module, or group names. Only the values may contain templates. For example: {'ping': "{{ ping_defaults }}"}
See details, workaround, and solution.
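A possible workaround, following the hint in the error message itself (static keys, templated values), is sketched below. Treat it as a sketch only: the dictionary name my_package_defaults is illustrative, and it relies on facts being gathered before the tasks run so that ansible_distribution is defined when the value is templated.
- hosts: localhost
  gather_facts: true
  vars:
    my_package_defaults:
      Gentoo:
        getbinpkg: yes
  module_defaults:
    # static key, templated value; falls back to empty defaults on non-Gentoo systems
    package: "{{ my_package_defaults[ansible_distribution] | d({}) }}"
  tasks:
    - package:
        name: "{{ packages }}"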
A: The variable ansible_distribution, like other setup facts, is not available when a playbook starts. You can see "Gathering Facts" as the first task in a playbook when fact gathering is not disabled. The solution is to collect the facts and declare a dictionary with the module defaults in the first play, then use them in the rest of the playbook. For example
shell> cat pb.yml
- hosts: test_01
  gather_subset: min
  tasks:
    - debug:
        var: ansible_distribution
    - set_fact:
        my_module_defaults:
          Ubuntu:
            debug:
              msg: "Ubuntu"
          FreeBSD:
            debug:
              msg: "FreeBSD"

- hosts: test_01
  gather_facts: false
  module_defaults: "{{ my_module_defaults[ansible_distribution] }}"
  tasks:
    - debug:
gives
shell> ansible-playbook pb.yml
PLAY [test_01] ****
TASK [Gathering Facts] ****
ok: [test_01]
TASK [debug] ****
ok: [test_01] =>
ansible_distribution: FreeBSD
TASK [set_fact] ****
ok: [test_01]
PLAY [test_01] ****
TASK [debug] ****
ok: [test_01] =>
msg: FreeBSD
PLAY RECAP ****
test_01: ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
I'm used to using --tags when running ansible-playbook to filter which tasks will be executed.
I recently switched from Ansible 2.7 to 2.9 (huge gap, eh?).
I was surprised that Ansible did not gather facts when I used --tags. I saw multiple similar cases closed on GitHub, like this one or this one. It seems to affect Ansible since version 2.8, but those issues are shown as resolved. Can anybody confirm this behavior? It seems to happen from 2.8 on.
ANSIBLE VERSION:
ansible --version
ansible 2.9.9.post0
config file = None
configured module search path = [u'/opt/ansible/ansible/library']
ansible python module location = /opt/ansible/ansible/lib/ansible
executable location = /opt/ansible/ansible/bin/ansible
python version = 2.7.6 (default, Nov 13 2018, 12:45:42) [GCC 4.8.4]
ANSIBLE CONFIG:
ansible-config dump --only-changed
DEFAULT_MODULE_PATH(env: ANSIBLE_LIBRARY) = [u'/opt/ansible/ansible/library']
STEPS TO REPRODUCE:
Playbook test.yml:
- name: First test
  hosts: localhost
  connection: local
  gather_facts: yes
  roles:
    - { role: test, tags: test }
  tags: first

- name: Second test
  hosts: localhost
  connection: local
  gather_facts: yes
  roles:
    - { role: test, tags: test }
  tags: second
Role: roles/test/tasks/main.yml
- debug:
    msg: System {{ inventory_hostname }} has uuid {{ ansible_product_uuid }}
Results:
ansible-playbook test.yml --check
= No errors.
ansible-playbook test.yml --check --tags "test"
= failed: 1
"The task includes an option with an undefined variable. The error was: 'ansible_product_uuid' is undefined [...]"
And I can see on the output that facts are not gathered.
Well, it seems to be intended behaviour when you have tags at the play level:
This is intended behavior. Tagging a play with tags applies those tags to the gather_facts step and removes the always tag which is applied by default. If the goal is to tag the play, you can add a setup task with tags in order to gather facts.
samdoran commented on 11 Jun 2019
Please note that this is, then, not linked to the usage of roles, as it can be reproduced by simply doing:
- name: First test
  hosts: all
  tags:
    - first
  tasks:
    - debug:
        msg: System {{ inventory_hostname }} has uuid {{ ansible_product_uuid }}
      tags: test
Which yields the failing recap:
$ ANSIBLE_STDOUT_CALLBACK=debug ansible-playbook play.yml --tags "test"
PLAY [First test] *************************************************************************************************
TASK [debug] ******************************************************************************************************
fatal: [localhost]: FAILED! => {}
MSG:
The task includes an option with an undefined variable. The error was: 'ansible_product_uuid' is undefined
The error appears to be in '/ansible/play.yml': line 7, column 7, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
tasks:
- debug:
^ here
PLAY RECAP ********************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
So you'll either have to remove the tags at your play level, or use the setup module, as prompted.
This can be done inside your role, so your role stops relying on facts that might not have been gathered.
Given the role roles/test/tasks/main.yml
- setup:

- debug:
    msg: System {{ inventory_hostname }} has uuid {{ ansible_product_uuid }}
And the playbook:
- name: First test
  hosts: all
  tags:
    - first
  roles:
    - role: test
      tags:
        - test

- name: Second test
  hosts: all
  tags:
    - second
  roles:
    - role: test
      tags:
        - test
Here would be the run and recap for it:
$ ansible-playbook play.yml --tags "test"
PLAY [First test] *************************************************************************************************
TASK [test : setup] ***********************************************************************************************
ok: [localhost]
TASK [test : debug] ***********************************************************************************************
ok: [localhost] => {
    "msg": "System localhost has uuid 3fc44bc9-0000-0000-b25d-bf9e26ce0762"
}
PLAY [Second test] ************************************************************************************************
TASK [test : setup] ***********************************************************************************************
ok: [localhost]
TASK [test : debug] ***********************************************************************************************
ok: [localhost] => {
    "msg": "System localhost has uuid 3fc44bc9-0000-0000-b25d-bf9e26ce0762"
}
PLAY RECAP ********************************************************************************************************
localhost : ok=4 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
All this was run on:
ansible 2.9.9
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.3 (default, May 15 2020, 01:53:50) [GCC 9.3.0]
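As another option (a sketch, not something from the linked comment), the explicit setup task can be tagged always, so facts are gathered no matter which --tags selection is used:
- name: First test
  hosts: all
  tags:
    - first
  pre_tasks:
    - name: Gather facts regardless of the selected tags
      setup:
      tags: always
  roles:
    - role: test
      tags:
        - test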
When I list the tasks of the playbook with ansible-playbook --list-tasks, only one task is displayed:
playbook: test.yaml

  play #1 (lab): lab    TAGS: []
    tasks:
      Install pip    TAGS: []
But when I actually execute the playbook, the output is indeed normal:
PLAY [lab] *****************************************************************************************************************************
TASK [Gathering Facts] *****************************************************************************************************************
ok: [my_ipaddress]
TASK [Install pip] *********************************************************************************************************************
ok: [my_ipaddress]
PLAY RECAP *****************************************************************************************************************************
my_ipaddress : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
and /var/log/ansible.log also looks normal, the same as the execution output.
So the question is: am I missing some setting? Why is there a task that is not in the task list, and is there other debug output that can display more detailed information?
Here is my Ansible configuration.
OS version: Ubuntu 18.04.5 LTS
Ansible version:
ansible 2.9.12
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/primula/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/primula/.local/lib/python3.6/site-packages/ansible
executable location = /home/primula/.local/bin/ansible
python version = 3.6.9 (default, Jul 17 2020, 12:50:27) [GCC 8.4.0]
My playbook:
---
- hosts: lab
  roles:
    - { role: apache2, become: yes }
    - { role: pip, become: yes }
apache2 role configuration
path: /etc/ansible/roles/apache2/tasks/maim.yaml
---
- name: Install apache2
  apt:
    name: apache2
    update_cache: yes
pip role configuration
path: /etc/ansible/roles/pip/tasks/main.yaml
---
- name: Install pip
  apt:
    name: python-pip
    update_cache: yes
Here are my Ansible inventory & ansible.cfg.
Inventory:
[lab]
<ipaddress> ansible_ssh_user=<user_name> ansible_ssh_pass='<ssh_pass>' ansible_become_user=<root_user> ansible_become=true ansible_become_pass='<root_pass>'
ansible.cfg
[defaults]
private_key_file = /root/.ssh/id_rsa
roles_path = /etc/ansible/roles
inventory = /etc/ansible/hosts
timeout = 10
log_path = /var/log/ansible.log
deprecation_warnings = False
strategy = debug
any_errors_fatal = True
The task that is not in your list when using ansible-playbook --list-tasks your_playbook.yml is the fact-gathering task performed by the setup module.
It is an implicit, automatic task that is turned on by default for all hosts in your play. Because it is implicit, it is not reported by the above command.
You can control fact gathering at the play level with the gather_facts play keyword, e.g.
---
- name: Some play without facts gathering
  hosts: my_group
  gather_facts: false
  tasks:
    - name: dummy demo task
      debug:
        msg: I am dummy task
Regarding your question about more detailed output, you can turn on ansible(-playbook) verbose mode with the -v(vv) switch (the more v's, the more detail).
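For example, using the playbook name from the question:
ansible-playbook test.yaml -vvv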
I've been wanting to try out the Ansible modules available for Netbox [1].
However, I find myself stuck right at the beginning.
Here's what I've tried:
Add prefix/VLAN to netbox [2]:
cat setup-vlans.yml
---
- hosts: netbox
  tasks:
    - name: Create prefix 192.168.10.0/24 in Netbox
      netbox_prefix:
        netbox_token: "{{ netbox_token }}"
        netbox_url: "{{ netbox_url }}"
        data:
          prefix: 192.168.10.0/24
        state: present
That gives me the following error:
ansible-playbook setup-vlans.yml
PLAY [netbox] *********************************************************************************************************************************
TASK [Gathering Facts] ************************************************************************************************************************
ok: [NETBOX]
TASK [Create prefix 192.168.10.0/24 in Netbox] ************************************************************************************************
fatal: [NETBOX]: FAILED! => {"changed": false, "msg": "Failed to establish connection to Netbox API"}
PLAY RECAP ************************************************************************************************************************************
NETBOX : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Can someone please point out where I am going wrong?
Note: The NetBox URL is an https://url setup with nginx and netbox-docker [3].
Thanks & Regards,
Sana
[1] https://github.com/netbox-community/ansible_modules
[2] https://docs.ansible.com/ansible/latest/modules/netbox_prefix_module.html
[3] https://github.com/netbox-community/netbox-docker
I had the same issue. Apparently the pynetbox API has changed its instantiation (ssl_verify is now replaced by requests session parameters).
I had to force ansible-galaxy to update to the latest netbox collection with:
ansible-galaxy collection install netbox.netbox -f
The force option did the trick for me.
All playbooks using API modules like the netbox ones (and the same goes for gcp or aws) must use as host not the target device but the host that will execute the playbook and call the API. Most of the time this is localhost, but it can also be a dedicated node like a bastion.
You can see in the example on the documentation you linked that it uses hosts: localhost.
Hence I think your playbook should be
---
- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: Create prefix 192.168.10.0/24 in Netbox
      netbox_prefix:
        netbox_token: "{{ netbox_token }}"
        netbox_url: "{{ netbox_url }}"
        data:
          prefix: 192.168.10.0/24
        state: present
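The netbox_url and netbox_token variables still have to be defined somewhere (group_vars, host_vars, or extra vars). A sketch with placeholder values passed on the command line:
# placeholder URL and token, substitute your own
ansible-playbook setup-vlans.yml \
  -e netbox_url=https://netbox.example.com \
  -e netbox_token=0123456789abcdef0123456789abcdef01234567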