I'm having a very difficult time understanding how to organize large playbooks with many roles, using an inventory with multiple "environments" and sub-plays to organize things, while keeping common variables in the parent playbook and sharing them with the sub-plays. I use Ansible in a very limited way, so I'm trying to expand my knowledge of it with this exercise.
Directory structure (simplified for testing)
├── inventory
│ ├── dev
│ │ ├── group_vars
│ │ │ └── all.yml
│ │ └── hosts
│ └── prod
│ ├── group_vars
│ │ └── all.yml
│ └── hosts
├── playbooks
│ └── infra
│ └── site.yml
├── site.yml
└── vars
└── secrets.yml
Various secrets are in the secrets.yml file, including vault_ansible_ssh_user and vault_ansible_become_pass.
Contents of all.yml
---
ansible_ssh_user: "{{ vault_ansible_ssh_user }}"
ansible_become_pass: "{{ vault_ansible_become_pass }}"
Contents of site.yml
---
- name: test plays
hosts: all
vars_files:
- vars/secrets.yml
become: true
gather_facts: true
pre_tasks:
- include_vars: secrets.yml
tasks:
- debug:
var: ansible_ssh_user
- import_playbook: playbooks/infra/site.yml
Content of playbooks/infra/site.yml
---
- name: test sub-play
hosts: all
become: true
gather_facts: true
tasks:
- debug:
var: ansible_ssh_user
The main parent playbook is being called with ansible-playbook -i inventory/dev site.yml. The problem is that I can't access vault_ansible_ssh_user or vault_ansible_become_pass (or any secrets in the vault) from within the sub-play unless I include both vars_files AND pre_tasks: - include_vars.
If I remove vars_files, I can't access the secrets in the parent playbook. If I remove pre_tasks: - include_vars, I can't access any secrets in the imported sub-play. Any idea why I need both of these variable include statements for this to work? Also, is this just a terrible design, and am I doing things completely wrong? I'm having a hard time wrapping my head around the best way to organize huge playbooks with a lot of required variables, so I ended up with a directory structure like this to compartmentalize the variables and avoid very large variable files and duplicated variable files all over the place. This probably just boils down to me wanting to fit a round peg into a square hole, but I can't find a good best-practices example for something like this.
This issue might also have to do with me trying to put Ansible Vault variables in an inventory vars file. If so, is that something I should or shouldn't be doing? As I was writing this, I may have had a "light bulb" moment and finally understand how I should handle this, but I need to test some things to understand it fully. Regardless, I'm still very interested in what the Stack Overflow community has to say about how I'm currently doing this.
EDIT: It turns out my "light bulb" idea is just what I already have here, rearranged in a different way, with the same issues.
Q: "If I remove ... include_vars, I can't access any secrets in the imported sub-play."
A: To share variables among the plays use include_vars or set_fact. Quoting from Variable scope: how long is a value available?
Variable values associated directly with a host or group, including variables defined in inventory, by vars plugins, or using modules like set_fact and include_vars, are available to all plays. These ‘host scope’ variables are also available via the hostvars[] dictionary.
Given the files below
shell> cat inventory/prod/hosts
test_01
test_02
shell> cat inventory/prod/group_vars/all.yml
ansible_ssh_user: "{{ vault_ansible_ssh_user }}"
ansible_become_pass: "{{ vault_ansible_become_pass }}"
shell> cat vars/secrets.yml
vault_ansible_ssh_user: ansible-ssh-user
vault_ansible_become_pass: ansible-become-pass
shell> cat site.yml
- name: test plays
hosts: all
gather_facts: false
vars_files: vars/secrets.yml
tasks:
- debug:
var: ansible_ssh_user
- debug:
var: ansible_become_pass
- import_playbook: playbooks/infra/site.yml
shell> cat playbooks/infra/site.yml
- name: test sub-plays
hosts: all
gather_facts: false
tasks:
- debug:
var: ansible_ssh_user
The variables declared by vars_files are not shared among the plays and the second play will fail. The abridged result is
shell> ANSIBLE_INVENTORY=$PWD/inventory/prod/hosts ansible-playbook site.yml
PLAY [test plays] ****
TASK [debug] ****
ok: [test_01] => {
"ansible_ssh_user": "ansible-ssh-user"
}
ok: [test_02] => {
"ansible_ssh_user": "ansible-ssh-user"
}
TASK [debug] ****
ok: [test_01] => {
"ansible_become_pass": "ansible-become-pass"
}
ok: [test_02] => {
"ansible_become_pass": "ansible-become-pass"
}
PLAY [test sub-plays] ****
TASK [debug] ****
fatal: [test_01]: FAILED! => {"msg": "The field 'become_pass' has an invalid value, which includes an undefined variable. The error was: 'vault_ansible_become_pass' is undefined"}
fatal: [test_02]: FAILED! => {"msg": "The field 'become_pass' has an invalid value, which includes an undefined variable. The error was: 'vault_ansible_become_pass' is undefined"}
The problem disappears if you use include_vars or set_fact, i.e. if you "instantiate" the variables. Commenting out set_fact and uncommenting include_vars, or leaving both uncommented, gives the same result
- name: test plays
hosts: all
gather_facts: false
vars_files: vars/secrets.yml
tasks:
- debug:
var: ansible_ssh_user
- debug:
var: ansible_become_pass
# - include_vars: secrets.yml
- set_fact:
ansible_ssh_user: "{{ ansible_ssh_user }}"
ansible_become_pass: "{{ ansible_become_pass }}"
- import_playbook: playbooks/infra/site.yml
Then the abridged result is
shell> ANSIBLE_INVENTORY=$PWD/inventory/prod/hosts ansible-playbook site.yml
PLAY [test plays] ****
TASK [debug] ****
ok: [test_01] => {
"ansible_ssh_user": "ansible-ssh-user"
}
ok: [test_02] => {
"ansible_ssh_user": "ansible-ssh-user"
}
TASK [debug] ****
ok: [test_01] => {
"ansible_become_pass": "ansible-become-pass"
}
ok: [test_02] => {
"ansible_become_pass": "ansible-become-pass"
}
TASK [set_fact] ****
ok: [test_01]
ok: [test_02]
PLAY [test sub-plays] ****
TASK [debug] ****
ok: [test_02] => {
"ansible_ssh_user": "ansible-ssh-user"
}
ok: [test_01] => {
"ansible_ssh_user": "ansible-ssh-user"
}
Notes
In this example, it's not important whether the variables are encrypted or not.
become and gather_facts don't influence this problem.
There might be other issues. It's a good idea to review include and import issues.
Q: "Why is the vars_files needed?"
A: The variable ansible_become_pass is needed to escalate privileges when a task is sent to the remote host. As a result, if vault_ansible_become_pass is declared only in the include_vars task, it isn't available before the tasks are executed, and the play fails with the error
fatal: [test_01]: FAILED! => {"msg": "The field 'become_pass' has an invalid value, which includes an undefined variable. The error was: 'vault_ansible_become_pass' is undefined"}
See
Understanding variable precedence
Understanding privilege escalation: become
No vars_files is needed if only user-defined variables are involved. For example, the playbook below works as expected
shell> cat inventory/prod/group_vars/all.yml
var1: "{{ vault_var1 }}"
var2: "{{ vault_var2 }}"
shell> cat vars/secrets2.yml
vault_var1: test-var1
vault_var2: test-var2
shell> cat site2.yml
- name: test plays
hosts: all
gather_facts: false
tasks:
- include_vars: secrets2.yml
- debug:
var: var1
- debug:
var: var2
- import_playbook: playbooks/infra/site2.yml
shell> cat playbooks/infra/site2.yml
- name: test sub-plays
hosts: all
gather_facts: false
tasks:
- debug:
var: var1
- debug:
var: var2
I have the current node role:
$ tree roles/node
roles/node
├── defaults
│ └── main.yaml
└── tasks
├── main.yaml
├── reset.yaml
└── unmount.yaml
The current provisioning.yaml playbook uses the main tasks:
- name: Node Provisioning
hosts: node
become: true
gather_facts: true
roles:
- role: node
I would like to create a separate reset.yaml playbook which uses the reset tasks:
- name: Node Reset
hosts: node
become: true
gather_facts: true
roles:
- role: node
I understand I could create a separate role or use tags, but my goal is to keep the same role name and specify in the playbook which tasks file to run (reset instead of main).
Is there a proper solution allowing me to use a specific tasks_from in my playbook scenario? The example above is a simplified playbook, for proof of concept.
There are three ways to include a role in your playbook:
The classic roles: directive in the play;
The dynamic include_role task;
The static import_role task.
While the roles: directive doesn't support a tasks_from argument, the other two options do. You could write:
- name: Node Reset
hosts: node
become: true
gather_facts: true
tasks:
- import_role:
name: node
tasks_from: reset.yaml
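Or, with the dynamic variant (a minimal sketch; include_role accepts the same tasks_from argument, but is processed when the task runs rather than at parse time):
- name: Node Reset
  hosts: node
  become: true
  gather_facts: true
  tasks:
    # dynamic include: the role's reset.yaml is resolved at runtime
    - include_role:
        name: node
        tasks_from: reset.yaml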
Here's a complete test walk-through. I used the following layout:
.
├── playbook.yaml
└── roles
└── node
└── tasks
├── main.yaml
├── reset.yaml
└── umount.yaml
Where roles/node/tasks/reset.yaml contains:
- debug:
msg: "This is reset.yaml"
- name: Umount filesystem
ansible.builtin.include_tasks:
file: umount.yaml
with_items:
- /run/netns
- /var/lib/kubelet
loop_control:
loop_var: mounted_fs
And roles/node/tasks/umount.yaml contains:
- debug:
msg: "This is umount.yaml; fs: {{ mounted_fs }}"
If I run this playbook.yaml:
- hosts: localhost
gather_facts: false
tasks:
- import_role:
name: node
tasks_from: reset
I get as output:
PLAY [localhost] ***************************************************************
TASK [node : debug] ************************************************************
ok: [localhost] => {
"msg": "This is reset.yaml"
}
TASK [node : Umount filesystem] ************************************************
included: /home/lars/tmp/ansible/roles/node/tasks/umount.yaml for localhost => (item=/run/netns)
included: /home/lars/tmp/ansible/roles/node/tasks/umount.yaml for localhost => (item=/var/lib/kubelet)
TASK [node : debug] ************************************************************
ok: [localhost] => {
"msg": "This is umount.yaml; fs: /run/netns"
}
TASK [node : debug] ************************************************************
ok: [localhost] => {
"msg": "This is umount.yaml; fs: /var/lib/kubelet"
}
PLAY RECAP *********************************************************************
localhost : ok=5 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
You can find my complete test setup here.
I'd like to load a zshenv file (using the source command) and then use the environment variables in another task.
This is what I have; I'm hoping there's a better solution.
files
directory structure
.
├── ansible.cfg
├── hosts.yaml
├── profiles
│ └── macos.yaml
├── roles
│ └── base
│ ├── tasks
│ │ ├── git.yaml
│ │ └── main.yaml
│ └── vars
└── tools
└── zsh
└── .zshenv
./ansible.cfg
[defaults]
inventory = ./hosts.yaml
roles_path = ./roles/
stdout_callback = yaml
./hosts.yaml
---
all:
hosts:
localhost
./profiles/macos.yaml
---
# run MacOS configs
# - hosts: localhost
# connection: local
# tags: macos
# roles:
# - macos
# # when: ansible_distribution == "MacOSX"
- hosts: localhost
connection: local
tags: base
roles:
- base
./roles/base/tasks/main.yaml
---
- import_tasks: tasks/git.yaml
./roles/base/tasks/git.yaml
---
- name: source zshenv
shell:
cmd: source ../tools/zsh/.zshenv; echo $GIT_CONFIG_PATH
register: gitConfigPath
- name: Link gitconfig file
file:
# PWD: ./profiles
src: "{{ ansible_env.PWD }}/../tools/git/.gitconfig"
dest: "{{ gitConfigPath.stdout }}"
state: link
# - name: print ansible_env
# debug:
# msg: "{{ ansible_env }}"
#
# - name: print gitConfigPath
# debug:
# msg: "{{ gitConfigPath.stdout }}"
#
./tools/zsh/.zshenv
export XDG_CONFIG_HOME="$HOME/.config"
export GIT_CONFIG_PATH="$XDG_CONFIG_HOME/git/config"
command to run
ansible-playbook profiles/macos.yaml -v
PS: It'd be easier to do something like this in ansible
source tools/zsh/.zshenv && ansible-playbook profiles/macos.yaml -v
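For what it's worth, if the file is sourced before ansible-playbook starts, as above, the exported variables are visible to the controller process, so a task could read them with the env lookup. A minimal, untested sketch:
- name: Read an exported variable on the controller
  debug:
    msg: "{{ lookup('env', 'GIT_CONFIG_PATH') }}"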
Given the simplified project without a role
shell> tree -a .
.
├── ansible.cfg
├── hosts
├── pb.yml
└── tools
├── git
└── zsh
└── .zshenv
shell> cat hosts
localhost
shell> cat tools/zsh/.zshenv
export GIT_CONFIG_PATH=/home/admin/git/.gitconfig
export ENV1=env1
export ENV2=env2
export ENV3=env3
eval "$(direnv hook zsh)"
Parse the environment on your own. For example
zshenv: "{{ dict(lookup('file', 'tools/zsh/.zshenv').splitlines()|
select('match', '^\\s*export .*$')|
map('regex_replace', '^\\s*export\\s+', '')|
map('split', '=')) }}"
gives
zshenv:
ENV1: env1
ENV2: env2
ENV3: env3
GIT_CONFIG_PATH: /home/admin/git/.gitconfig
Then, use the dictionary zshenv
- name: Link gitconfig file
file:
dest: "{{ playbook_dir }}/tools/git/.gitconfig"
src: "{{ zshenv.GIT_CONFIG_PATH }}"
state: link
gives, running with the --check --diff options
TASK [Link gitconfig file] *******************************************************************
--- before
+++ after
@@ -1,2 +1,2 @@
path: /export/scratch/tmp7/test-116/tools/git/.gitconfig
-state: absent
+state: link
changed: [localhost]
Notes
Example of a complete playbook for testing
shell> cat pb.yml
- hosts: localhost
vars:
zshenv: "{{ dict(lookup('file', 'tools/zsh/.zshenv').splitlines()|
select('match', '^\\s*export .*$')|
map('regex_replace', '^\\s*export\\s+', '')|
map('split', '=')) }}"
tasks:
- debug:
var: zshenv
- name: Link gitconfig file
file:
dest: "{{ playbook_dir }}/tools/git/.gitconfig"
src: "{{ zshenv.GIT_CONFIG_PATH }}"
state: link
gives
shell> ansible-playbook pb.yml
PLAY [localhost] *****************************************************************************
TASK [debug] *********************************************************************************
ok: [localhost] =>
zshenv:
ENV1: env1
ENV2: env2
ENV3: env3
GIT_CONFIG_PATH: /home/admin/git/.gitconfig
TASK [Link gitconfig file] *******************************************************************
changed: [localhost]
PLAY RECAP ***********************************************************************************
localhost: ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
The link tools/git/.gitconfig -> /home/admin/git/.gitconfig was created
shell> tree -a .
.
├── ansible.cfg
├── hosts
├── pb.yml
└── tools
├── git
│ └── .gitconfig -> /home/admin/git/.gitconfig
└── zsh
└── .zshenv
You can use the dictionary zshenv to set the environment. For example,
- command: echo $GIT_CONFIG_PATH
environment: "{{ zshenv }}"
register: out
- debug:
var: out.stdout
gives
out.stdout: /home/admin/git/.gitconfig
Cache the dictionary if you want to use this environment globally in the whole play. For example,
shell> grep fact_caching ansible.cfg
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible_cache
fact_caching_prefix = ansible_facts_
fact_caching_timeout = 86400
- set_fact:
zshenv: "{{ dict(lookup('file', 'tools/zsh/.zshenv').splitlines()|
select('match', '^\\s*export .*$')|
map('regex_replace', '^\\s*export\\s+', '')|
map('split', '=')) }}"
cacheable: true
Then,
- hosts: localhost
environment: "{{ zshenv }}"
tasks:
- command: echo $GIT_CONFIG_PATH
register: out
- debug:
var: out.stdout
gives
PLAY [localhost] *****************************************************************************
TASK [command] *******************************************************************************
changed: [localhost]
TASK [debug] *********************************************************************************
ok: [localhost] =>
out.stdout: /home/admin/git/.gitconfig
PLAY RECAP ***********************************************************************************
localhost: ok=7 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
With help from Vladimir Botka, using the answer from https://stackoverflow.com/a/74924664/3053548,
I modified his code a little bit.
TLDR
source the zshenv file
print out all the environment variables in the shell session
store the output as Ansible facts
access the environment variables in a task
Files
./profiles/macos.yaml
---
# run MacOS configs
- hosts: localhost
connection: local
tags: always
tasks:
- name: source zshenv
shell:
cmd: source ../tools/zsh/.zshenv; env
register: out
changed_when: false
- name: store zshenv as fact
set_fact:
zshenv: "{{ dict(out.stdout.splitlines() | map('split', '=')) }}"
changed_when: false
# - hosts: localhost
# connection: local
# tags: macos
# roles:
# - macos
# # when: ansible_distribution == "MacOSX"
- hosts: localhost
connection: local
tags: base
roles:
- base
./roles/base/tasks/git.yaml
---
- name: Link gitconfig file
file:
src: "{{ ansible_env.PWD }}/../tools/git/.gitconfig"
dest: "{{ zshenv.GIT_CONFIG_PATH }}"
state: link
command to run
ansible-playbook profiles/macos.yaml
I have got a simple playbook:
- hosts: all
serial: 1
order: "{{ run_order }}"
gather_facts: false
tasks:
- ping:
register: resultt
My inventory tree:
.
├── group_vars
│ └── g1
│ └── values.yml
├── host_vars
└── inv
inv file:
[g1:children]
m1
m2
[g2:children]
m3
m4
[m1]
192.168.0.60
[m2]
192.168.0.61
[m3]
192.168.0.62
[m4]
192.168.0.63
group_vars/g1/values.yml:
---
run_order: 'reverse_sorted'
When I try to run the playbook:
ansible-playbook -i inv-test/inv --limit='g1' my-playbook.yml
I get the error:
ERROR! The field 'order' has an invalid value, which includes an undefined variable. The error was: 'run_order' is undefined
The error appears to be in '/home/mk/throttle': line 1, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- hosts: all
^ here
However, the output from ansible-inventory -i inv --graph --vars is correct.
Why doesn't Ansible parse vars from the inventory group_vars?
Ansible version: 2.10.15
The problem is that you're trying to use a host variable in the scope of a play. See Scoping variables. It's possible if you create such a host variable first; then you can reference it via hostvars. For example, given the simplified inventory
shell> cat hosts
[g1]
host_01
host_02
and the group_vars
shell> cat group_vars/g1.yml
run_order: 'reverse_sorted'
The playbook
shell> cat pb.yml
- hosts: all
gather_facts: false
tasks:
- debug:
var: run_order
gives (abridged)
shell> ansible-playbook -i hosts pb.yml
ok: [host_01] =>
run_order: reverse_sorted
ok: [host_02] =>
run_order: reverse_sorted
But, the playbook below
shell> cat pb.yml
- hosts: all
gather_facts: false
order: "{{ run_order }}"
tasks:
- debug:
var: run_order
fails because the variable run_order is not defined in the scope of the play
ERROR! The field 'order' has an invalid value, which includes an undefined variable. The error was: 'run_order' is undefined
The (awkward) solution is to create the host variable in a first play and use it in the second play. For example, the second play in the playbook below is executed in reverse order
shell> cat pb.yml
- hosts: all
gather_facts: false
tasks:
- debug:
var: run_order
- hosts: all
gather_facts: false
serial: 1
order: "{{ hostvars.host_01.run_order }}"
tasks:
- debug:
var: run_order
gives
shell> ansible-playbook -i hosts pb.yml
PLAY [all] ***********************************************************************************
TASK [debug] *********************************************************************************
ok: [host_01] =>
run_order: reverse_sorted
ok: [host_02] =>
run_order: reverse_sorted
PLAY [all] ***********************************************************************************
TASK [debug] *********************************************************************************
ok: [host_02] =>
run_order: reverse_sorted
PLAY [all] ***********************************************************************************
TASK [debug] *********************************************************************************
ok: [host_01] =>
run_order: reverse_sorted
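Alternatively (a sketch, not part of the original answer), a variable that already has global scope, such as an extra var, can be used in the play keyword directly, though the value is then no longer tied to the g1 group:
shell> ansible-playbook -i inv-test/inv --limit='g1' -e run_order=reverse_sorted my-playbook.yml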
I have a playbook laid out like this:
test_playbook/
├── dep_test.yaml
├── my_hosts_file
└── roles
├── common
│ └── vars
│ └── main.yaml
├── dep_test
│ ├── meta
│ │ └── main.yaml
│ └── tasks
│ └── main.yaml
├── dep_test_a
│ └── tasks
│ └── main.yaml
└── dep_test_b
├── meta
│ └── main.yaml
└── tasks
└── main.yaml
The file contents are as below.
dep_test.yaml
- hosts: my_host
gather_facts: no
roles:
- common
- dep_test
my_hosts_file
[my_host]
localhost
roles/common/vars/main.yaml
python_version: "3"
roles/dep_test/tasks/main.yaml
- name: debug test
debug:
msg: test debug
roles/dep_test/meta/main.yaml
dependencies:
- role: dep_test_a
# pyenv_versions: ["{{ python_version }}"]
pyenv_versions: ["3"]
- role: dep_test_b
# python_versions: ["{{ python_version }}"]
python_versions: ["3"]
roles/dep_test_a/tasks/main.yaml
- name: Dep test a
debug:
msg: "Dependency test a called with {{ pyenv_versions }}"
roles/dep_test_b/tasks/main.yaml
- name: Dep test b
debug:
msg: "Dependency test b called with {{ python_versions }}"
roles/dep_test_b/meta/main.yaml
dependencies:
- role: dep_test_a
# pyenv_versions: "{{ python_versions }}"
pyenv_versions: ["3"]
When I pass the parameter as ["3"], it works fine and the Role Duplication and Execution rule is applied:
ansible-playbook dep_test.yaml -i my_hosts_file -u root --ask-pass
SSH password:
PLAY [my_host] ****************************************************************************************************
TASK [dep_test_a : Dep test a] ************************************************************************************
ok: [localhost] => {
"msg": "Dependency test a called with [u'3']"
}
TASK [dep_test_b : Dep test b] ************************************************************************************
ok: [localhost] => {
"msg": "Dependency test b called with [u'3']"
}
TASK [dep_test : debug test] **************************************************************************************
ok: [localhost] => {
"msg": "test debug"
}
PLAY RECAP ********************************************************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0
When I change the parameter from ["3"] to use the variable python_version from common/vars/main.yaml, it breaks the duplication rule and executes the same role with duplicate arguments.
After the change, the code would be
roles/dep_test/meta/main.yaml
dependencies:
- role: dep_test_a
pyenv_versions: ["{{ python_version }}"]
# pyenv_versions: ["3"]
- role: dep_test_b
python_versions: ["{{ python_version }}"]
# python_versions: ["3"]
roles/dep_test_b/meta/main.yaml
dependencies:
- role: dep_test_a
pyenv_versions: "{{ python_versions }}"
# pyenv_versions: ["3"]
Playbook execution output.
ansible-playbook dep_test.yaml -i my_hosts_file -u root --ask-pass
SSH password:
PLAY [my_host] ****************************************************************************************************
TASK [dep_test_a : Dep test a] ***********************************************************************************
ok: [localhost] => {
"msg": "Dependency test a called with [u'3']"
}
TASK [dep_test_a : Dep test a] ************************************************************************************
ok: [localhost] => {
"msg": "Dependency test a called with [u'3']"
}
TASK [dep_test_b : Dep test b] ************************************************************************************
ok: [localhost] => {
"msg": "Dependency test b called with [u'3']"
}
TASK [dep_test : debug test] **************************************************************************************
ok: [localhost] => {
"msg": "test debug"
}
PLAY RECAP ********************************************************************************************************
localhost : ok=4 changed=0 unreachable=0 failed=0
Role dep_test_a is called twice with the same arguments [u'3']:
TASK [dep_test_a : Dep test a] ***********************************************************************************
ok: [localhost] => {
"msg": "Dependency test a called with [u'3']"
}
TASK [dep_test_a : Dep test a] ************************************************************************************
ok: [localhost] => {
"msg": "Dependency test a called with [u'3']"
}
Once for the dependency listed in the dep_test role's meta, and once for the one in dep_test_b's meta.
As per the dependency rules, it should be called only once.
Question: Why does the dependent role run twice when passing the parameter?
Question: Why does the dependent role run twice when passing the parameter?
Answer: Because Ansible uses lazy evaluation, Jinja2 templating is not triggered until the variable is used.
Passing a variable to a role is not considered a usage, so Ansible passes and compares the templates, not the values.
You call the dep_test_a role twice:
- role: dep_test_a
pyenv_versions: ["{{ python_version }}"]
and:
- role: dep_test_a
pyenv_versions: "{{ python_versions }}"
["{{ python_version }}"] is not equal to "{{ python_versions }}", thus Ansible executes the role twice.
And btw, the code to illustrate the behaviour in the question can be shortened to:
- hosts: localhost
connection: local
gather_facts: no
vars:
my_var1: 1
my_var2: 1
roles:
- role: my_role
role_param: "{{ my_var1 }}"
- role: my_role
role_param: "{{ my_var2 }}"
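Following that logic, a possible workaround (an untested sketch, not from the original answer) is to make the templates textually identical at both call sites, so that the duplicate-role check compares equal strings and runs dep_test_a only once:
roles/dep_test/meta/main.yaml
dependencies:
  - role: dep_test_a
    pyenv_versions: ["{{ python_version }}"]
  - role: dep_test_b
    python_versions: ["{{ python_version }}"]
roles/dep_test_b/meta/main.yaml
dependencies:
  - role: dep_test_a
    # same template string as in roles/dep_test/meta/main.yaml,
    # so both invocations are treated as duplicates
    pyenv_versions: ["{{ python_version }}"]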
I want to make the most of variable precedence with ansible.
So let’s have a look at this simplified project:
├── group_vars
│ └── all
│ └── web
├── hosts
│ └── local
└── site.yml
The inventory file hosts/local:
[local_web]
192.168.1.20
[local_db]
192.168.1.20
[web:children]
local_web
[db:children]
local_db
The group_vars/all file:
test: ALL
The group_vars/web file:
test: WEB
The site.yml file:
---
- name: Test
hosts: db
tasks:
- debug: var=test
Alright, so this is just to test variable precedence. As I run Ansible against the db group, the test variable should display "ALL", since Ansible will only look into group_vars/all, right?
Wrong:
TASK: [debug var=test] ********************************************************
ok: [192.168.1.20] => {
"var": {
"test": "WEB"
}
}
Actually, if the local_web and local_db hosts are different, then it works.
Why does Ansible still look into an unrelated config file when the hosts are the same? Is that a bug or just me?
You're stating that 192.168.1.20 is a member of all 4 of your defined groups. No matter how you reference the host in your playbook, Ansible is going to evaluate all the groups that host is in and import variables based on those groups.
Here's a handy test to demonstrate this:
- name: Test
hosts: db
tasks:
- debug: msg="{{ inventory_hostname }} is in group {{ item }}"
when: inventory_hostname in groups[item]
with_items: group_names
The output of this is:
TASK: [debug msg="host is in group {{ item }}"] *******************************
ok: [192.168.1.20] => (item=db) => {
"item": "db",
"msg": "192.168.1.20 is in group db"
}
ok: [192.168.1.20] => (item=local_db) => {
"item": "local_db",
"msg": "192.168.1.20 is in group local_db"
}
ok: [192.168.1.20] => (item=local_web) => {
"item": "local_web",
"msg": "192.168.1.20 is in group local_web"
}
ok: [192.168.1.20] => (item=web) => {
"item": "web",
"msg": "192.168.1.20 is in group web"
}
Since the host is in the web group, the web group_vars file was included.
Bruce P's answer is right.
However, this Ansible behavior is not satisfying for me, because it changes variable precedence depending on the hosts. Instead of group_vars, I use the vars_files list.
I moved group_vars into a directory named vars.
The updated site.yml:
---
- name: Test
hosts: db
tasks:
- debug: var=test
vars_files:
- vars/all
- vars/db
Now, test displays "ALL", as I wanted. It first reads vars/all, then vars/db (which is empty).
(Note: variable precedence seems to be a bit buggy at the moment, in v1.9.2. This means that if you use variables as vars file names, Ansible will not load the files in the expected order.)
Another solution that keeps the variables in group_vars is to replace the inventory file hosts/local with two files, hosts/inv_db and hosts/inv_web, and to use the -i option to specify which inventory file to use.
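For example (a sketch based on the original hosts/local; the host and group names are unchanged):
The hosts/inv_web file:
[local_web]
192.168.1.20
[web:children]
local_web
The hosts/inv_db file:
[local_db]
192.168.1.20
[db:children]
local_db
Running ansible-playbook -i hosts/inv_db site.yml then only evaluates the groups defined in that inventory, so group_vars/web is never loaded and test displays "ALL".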