Use a different task name for Ansible roles execution?

I have the current node role:
$ tree roles/node
roles/node
├── defaults
│   └── main.yaml
└── tasks
    ├── main.yaml
    ├── reset.yaml
    └── unmount.yaml
The current provisioning.yaml playbook uses the main tasks:
- name: Node Provisioning
  hosts: node
  become: true
  gather_facts: true
  roles:
    - role: node
I would like to create a separate reset.yaml playbook which uses the reset tasks:
- name: Node Reset
  hosts: node
  become: true
  gather_facts: true
  roles:
    - role: node
I understand I could create a separate role or use tags, but my goal is to keep the same role name and specify the reset tasks file in the playbook, instead of main.
Is there a proper solution allowing me to use a specific tasks_from in my playbook scenario? The example above is a simplified playbook, for proof of concept.

There are three ways to include a role in your playbook:
Use the classic roles: directive in the play;
Use the dynamic include_role task;
Use the static import_role task.
While the roles: directive doesn't support a tasks_from argument, the other two options do. You could write:
- name: Node Reset
  hosts: node
  become: true
  gather_facts: true
  tasks:
    - import_role:
        name: node
        tasks_from: reset.yaml
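The dynamic variant is nearly identical; a minimal sketch using include_role instead, with the same role and file names as above:

- name: Node Reset
  hosts: node
  become: true
  gather_facts: true
  tasks:
    - include_role:
        name: node
        tasks_from: reset.yaml

The practical difference is that import_role is processed statically at parse time (so tags and conditionals propagate to the imported tasks), while include_role is evaluated dynamically at runtime.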
Here's a complete test walk-through. I used the following layout:
.
├── playbook.yaml
└── roles
    └── node
        └── tasks
            ├── main.yaml
            ├── reset.yaml
            └── umount.yaml
Where roles/node/tasks/reset.yaml contains:
- debug:
    msg: "This is reset.yaml"

- name: Umount filesystem
  ansible.builtin.include_tasks:
    file: umount.yaml
  with_items:
    - /run/netns
    - /var/lib/kubelet
  loop_control:
    loop_var: mounted_fs
And roles/node/tasks/umount.yaml contains:
- debug:
    msg: "This is umount.yaml; fs: {{ mounted_fs }}"
If I run this playbook.yaml:
- hosts: localhost
  gather_facts: false
  tasks:
    - import_role:
        name: node
        tasks_from: reset
I get as output:
PLAY [localhost] ***************************************************************

TASK [node : debug] ************************************************************
ok: [localhost] => {
    "msg": "This is reset.yaml"
}

TASK [node : Umount filesystem] ************************************************
included: /home/lars/tmp/ansible/roles/node/tasks/umount.yaml for localhost => (item=/run/netns)
included: /home/lars/tmp/ansible/roles/node/tasks/umount.yaml for localhost => (item=/var/lib/kubelet)

TASK [node : debug] ************************************************************
ok: [localhost] => {
    "msg": "This is umount.yaml; fs: /run/netns"
}

TASK [node : debug] ************************************************************
ok: [localhost] => {
    "msg": "This is umount.yaml; fs: /var/lib/kubelet"
}

PLAY RECAP *********************************************************************
localhost : ok=5 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
You can find my complete test setup here.

Related

Can I use ansible-playbook "--extra-vars" to execute roles conditionally

I have an Ansible playbook I am working on and I am trying to make the execution a little more dynamic. I have two roles that don't always need to be run in conjunction, sometimes only one of them needs to be run. I have been digging into the Ansible docs, and I am wondering if I can pass --extra-vars parameters to only run a specific role.
Currently, my playbook looks like this:
---
- hosts: default
  become: true
  roles:
    - role: upgrades
      when: {{ upgrades_role }}
    - role: custom-packages
      when: {{ custom_packages_role }}
So, the goal is to be able to run:
ansible-playbook playbook.yml -e "upgrades_role=upgrades"
And this would only run the upgrades role and skip the custom_packages role.
Similarly, if I want to run both roles on the same hosts/system:
ansible-playbook playbook.yml -e "upgrades_role=upgrades custom_packages_role=custom-packages"
This would run both roles.
Based on my understanding of Ansible syntax and the --extra-vars, -e parameter, this seems like it should work. I just want to be sure I am doing this the proper way and avoiding anti-patterns.
Ansible Version: 2.14
Let me provide you with a framework to automate the use case that you described:
Pass --extra-vars parameters to only run a specific role
Create the project
shell> ls -1
ansible.cfg
hosts
playbook.yml.j2
roles
setup.yml
shell> cat ansible.cfg
[defaults]
gathering = explicit
collections_path = $HOME/.local/lib/python3.9/site-packages/
inventory = $PWD/hosts
roles_path = $PWD/roles
retry_files_enabled = false
stdout_callback = yaml
shell> cat hosts
localhost
The playbook setup.yml:
- Gets the list of the roles. Fit the variable my_roles_dir to your needs.
- Creates the file my_roles_order.yml with the list my_roles_order. The purpose of this file is to fix the order of the roles.
- Creates the file my_roles_enable.yml with the dictionary my_roles_enable. The purpose of this file is to provide the defaults.
- Creates the file playbook.yml from the template. Fit the template to your needs.
shell> cat setup.yml
- hosts: localhost
  vars:
    my_roles_dir: "{{ lookup('config', 'DEFAULT_ROLES_PATH') }}"
  tasks:
    - set_fact:
        my_roles: "{{ my_roles|default([]) +
                      lookup('pipe', 'ls -1 ' ~ item).splitlines() }}"
      loop: "{{ my_roles_dir }}"
    - copy:
        dest: "{{ playbook_dir }}/my_roles_order.yml"
        content: |
          my_roles_order:
          {{ my_roles|to_nice_yaml|indent(2, true) }}
        force: "{{ my_roles_order_force|default(false) }}"
    - include_vars: my_roles_order.yml
    - copy:
        dest: "{{ playbook_dir }}/my_roles_enable.yml"
        content: |
          my_roles_enable:
          {% for role in my_roles %}
            {{ role }}: false
          {% endfor %}
        force: "{{ my_roles_enable_force|default(false) }}"
    - include_vars: my_roles_enable.yml
    - template:
        src: playbook.yml.j2
        dest: "{{ playbook_dir }}/playbook.yml"
shell> cat playbook.yml.j2
- hosts: localhost
  become: true
  vars_files:
    - my_roles_enable.yml
  roles:
{% for role in my_roles_order %}
    - role: {{ role }}
      when: {{ role }}_role|default(my_roles_enable.{{ role }})|bool
{% endfor %}
Test the trivial roles
shell> tree roles/
roles/
├── current_packages
│   └── tasks
│       └── main.yml
├── custom_packages
│   └── tasks
│       └── main.yml
├── stable_packages
│   └── tasks
│       └── main.yml
└── upgrades
    └── tasks
        └── main.yml

8 directories, 4 files
shell> cat roles/*/tasks/main.yml
- debug:
    msg: Role current_packages running ...
- debug:
    msg: Role custom_packages running ...
- debug:
    msg: Role stable_packages running ...
- debug:
    msg: Role upgrades running ...
Run the playbook setup.yml
shell> ansible-playbook setup.yml
PLAY [localhost] ****************************************************************************************
TASK [set_fact] *****************************************************************************************
ok: [localhost] => (item=/scratch/tmp7/test-204/roles)
TASK [copy] *********************************************************************************************
changed: [localhost]
TASK [include_vars] *************************************************************************************
ok: [localhost]
TASK [copy] *********************************************************************************************
changed: [localhost]
TASK [include_vars] *************************************************************************************
ok: [localhost]
TASK [template] *****************************************************************************************
changed: [localhost]
PLAY RECAP **********************************************************************************************
localhost: ok=6 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
This creates playbook.yml
shell> cat playbook.yml
- hosts: localhost
  become: true
  vars_files:
    - my_roles_enable.yml
  roles:
    - role: current_packages
      when: current_packages_role|default(my_roles_enable.current_packages)|bool
    - role: custom_packages
      when: custom_packages_role|default(my_roles_enable.custom_packages)|bool
    - role: stable_packages
      when: stable_packages_role|default(my_roles_enable.stable_packages)|bool
    - role: upgrades
      when: upgrades_role|default(my_roles_enable.upgrades)|bool
and the files
shell> cat my_roles_order.yml
my_roles_order:
  - current_packages
  - custom_packages
  - stable_packages
  - upgrades
shell> cat my_roles_enable.yml
my_roles_enable:
  current_packages: false
  custom_packages: false
  stable_packages: false
  upgrades: false
By default, the playbook runs nothing. Here you can "pass --extra-vars parameters to only run a specific role". For example, enable the role upgrades
shell> ansible-playbook playbook.yml -e upgrades_role=true
PLAY [localhost] *****************************************************************************
TASK [current_packages : debug] **************************************************************
skipping: [localhost]
TASK [custom_packages : debug] ***************************************************************
skipping: [localhost]
TASK [stable_packages : debug] ***************************************************************
skipping: [localhost]
TASK [upgrades : debug] **********************************************************************
ok: [localhost] =>
  msg: Role upgrades running ...
PLAY RECAP ***********************************************************************************
localhost: ok=1 changed=0 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
If you want to change the order and/or the enablement defaults of the roles, edit the files my_roles_order.yml and my_roles_enable.yml. For example, put the role upgrades first and enable it by default
shell> cat my_roles_order.yml
my_roles_order:
  - upgrades
  - current_packages
  - custom_packages
  - stable_packages
shell> cat my_roles_enable.yml
my_roles_enable:
  current_packages: false
  custom_packages: false
  stable_packages: false
  upgrades: true
Update the playbook
shell> ansible-playbook setup.yml
Test it
shell> ansible-playbook playbook.yml
PLAY [localhost] *****************************************************************************
TASK [upgrades : debug] **********************************************************************
ok: [localhost] =>
  msg: Role upgrades running ...
TASK [current_packages : debug] **************************************************************
skipping: [localhost]
TASK [custom_packages : debug] ***************************************************************
skipping: [localhost]
TASK [stable_packages : debug] ***************************************************************
skipping: [localhost]
PLAY RECAP ***********************************************************************************
localhost: ok=1 changed=0 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
It's practical to create a file if you want to enable/disable multiple roles on the command line. For example,
shell> cat myroles.yml
upgrades_role: false
stable_packages_role: true
custom_packages_role: true
Use it in the command line
shell> ansible-playbook playbook.yml -e @myroles.yml
PLAY [localhost] *****************************************************************************
TASK [upgrades : debug] **********************************************************************
skipping: [localhost]
TASK [current_packages : debug] **************************************************************
skipping: [localhost]
TASK [custom_packages : debug] ***************************************************************
ok: [localhost] =>
  msg: Role custom_packages running ...
TASK [stable_packages : debug] ***************************************************
ok: [localhost] =>
  msg: Role stable_packages running ...
PLAY RECAP ***********************************************************************************
localhost: ok=2 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
Don't display skipped hosts
shell> ANSIBLE_DISPLAY_SKIPPED_HOSTS=false ansible-playbook playbook.yml -e @myroles.yml
PLAY [localhost] *****************************************************************************
TASK [custom_packages : debug] ***************************************************************
ok: [localhost] =>
  msg: Role custom_packages running ...
TASK [stable_packages : debug] ***************************************************
ok: [localhost] =>
  msg: Role stable_packages running ...
PLAY RECAP ***********************************************************************************
localhost: ok=2 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
The most common way is to use tags; updating the example from the question:
---
- hosts: default
  become: true
  roles:
    - role: upgrades
      tags: upgrades
    - role: custom-packages
      tags: packages
...
So, the goal is to be able to run:
# Execute all the roles
ansible-playbook playbook.yml
# Execute only the upgrade
ansible-playbook playbook.yml -t upgrades
# Execute only the setup of custom packages
ansible-playbook playbook.yml -t packages
Here is the documentation for tags.
Using variables with when is possible, but you'll need to make sure to cast the value to bool.
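A minimal sketch of that when variant, reusing the two roles from the question and defaulting both to skipped:

---
- hosts: default
  become: true
  roles:
    - role: upgrades
      # extra vars arrive as strings, so normalize with |bool
      when: upgrades_role | default(false) | bool
    - role: custom-packages
      when: custom_packages_role | default(false) | bool

Then ansible-playbook playbook.yml -e upgrades_role=true runs only the upgrades role, and passing both variables runs both.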

ansible load zshenv and use ENV in another task

I'd like to load a zshenv file (using the source command) and then use the ENVs in another task.
This is what I have; I'm hoping there's a better solution.
files
directory structure
.
├── ansible.cfg
├── hosts.yaml
├── profiles
│   └── macos.yaml
├── roles
│   └── base
│       ├── tasks
│       │   ├── git.yaml
│       │   └── main.yaml
│       └── vars
└── tools
    └── zsh
        └── .zshenv
./ansible.cfg
[defaults]
inventory = ./hosts.yaml
roles_path = ./roles/
stdout_callback = yaml
./hosts.yaml
---
all:
  hosts:
    localhost:
./profiles/macos.yaml
---
# run MacOS configs
# - hosts: localhost
#   connection: local
#   tags: macos
#   roles:
#     - macos
#   # when: ansible_distribution == "MacOSX"

- hosts: localhost
  connection: local
  tags: base
  roles:
    - base
./roles/base/tasks/main.yaml
---
- import_tasks: git.yaml
./roles/base/tasks/git.yaml
---
- name: source zshenv
  shell:
    cmd: source ../tools/zsh/.zshenv; echo $GIT_CONFIG_PATH
  register: gitConfigPath

- name: Link gitconfig file
  file:
    # PWD: ./profiles
    src: "{{ ansible_env.PWD }}/../tools/git/.gitconfig"
    dest: "{{ gitConfigPath.stdout }}"
    state: link
# - name: print ansible_env
#   debug:
#     msg: "{{ ansible_env }}"
#
# - name: print gitConfigPath
#   debug:
#     msg: "{{ gitConfigPath.stdout }}"
#
./tools/zsh/.zshenv
export XDG_CONFIG_HOME="$HOME/.config"
export GIT_CONFIG_PATH="$XDG_CONFIG_HOME/git/config"
command to run
ansible-playbook profiles/macos.yaml -v
PS: It'd be easier to do something like this in ansible
source tools/zsh/.zshenv && ansible-playbook profiles/macos.yaml -v
Given the simplified project without a role
shell> tree -a .
.
├── ansible.cfg
├── hosts
├── pb.yml
└── tools
    ├── git
    └── zsh
        └── .zshenv
shell> cat hosts
localhost
shell> cat tools/zsh/.zshenv
export GIT_CONFIG_PATH=/home/admin/git/.gitconfig
export ENV1=env1
export ENV2=env2
export ENV3=env3
eval "$(direnv hook zsh)"
Parse the environment on your own. For example
zshenv: "{{ dict(lookup('file', 'tools/zsh/.zshenv').splitlines()|
            select('match', '^\\s*export .*$')|
            map('regex_replace', '^\\s*export\\s+', '')|
            map('split', '=')) }}"
gives
zshenv:
  ENV1: env1
  ENV2: env2
  ENV3: env3
  GIT_CONFIG_PATH: /home/admin/git/.gitconfig
Then, use the dictionary zshenv
- name: Link gitconfig file
  file:
    dest: "{{ playbook_dir }}/tools/git/.gitconfig"
    src: "{{ zshenv.GIT_CONFIG_PATH }}"
    state: link
gives, running with the --check --diff options
TASK [Link gitconfig file] *******************************************************************
--- before
+++ after
@@ -1,2 +1,2 @@
 path: /export/scratch/tmp7/test-116/tools/git/.gitconfig
-state: absent
+state: link

changed: [localhost]
Notes
Example of a complete playbook for testing
shell> cat pb.yml
- hosts: localhost
  vars:
    zshenv: "{{ dict(lookup('file', 'tools/zsh/.zshenv').splitlines()|
                select('match', '^\\s*export .*$')|
                map('regex_replace', '^\\s*export\\s+', '')|
                map('split', '=')) }}"
  tasks:
    - debug:
        var: zshenv
    - name: Link gitconfig file
      file:
        dest: "{{ playbook_dir }}/tools/git/.gitconfig"
        src: "{{ zshenv.GIT_CONFIG_PATH }}"
        state: link
gives
shell> ansible-playbook pb.yml
PLAY [localhost] *****************************************************************************
TASK [debug] *********************************************************************************
ok: [localhost] =>
  zshenv:
    ENV1: env1
    ENV2: env2
    ENV3: env3
    GIT_CONFIG_PATH: /home/admin/git/.gitconfig
TASK [Link gitconfig file] *******************************************************************
changed: [localhost]
PLAY RECAP ***********************************************************************************
localhost: ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
The link tools/git/.gitconfig -> /home/admin/git/.gitconfig was created
shell> tree -a .
.
├── ansible.cfg
├── hosts
├── pb.yml
└── tools
    ├── git
    │   └── .gitconfig -> /home/admin/git/.gitconfig
    └── zsh
        └── .zshenv
You can use the dictionary zshenv to set the environment. For example,
- command: echo $GIT_CONFIG_PATH
  environment: "{{ zshenv }}"
  register: out

- debug:
    var: out.stdout
gives
out.stdout: /home/admin/git/.gitconfig
Cache the dictionary if you want to use this environment globally in the whole play. For example,
shell> grep fact_caching ansible.cfg
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible_cache
fact_caching_prefix = ansible_facts_
fact_caching_timeout = 86400
- set_fact:
    zshenv: "{{ dict(lookup('file', 'tools/zsh/.zshenv').splitlines()|
                select('match', '^\\s*export .*$')|
                map('regex_replace', '^\\s*export\\s+', '')|
                map('split', '=')) }}"
    cacheable: true
Then,
- hosts: localhost
  environment: "{{ zshenv }}"
  tasks:
    - command: echo $GIT_CONFIG_PATH
      register: out
    - debug:
        var: out.stdout
gives
PLAY [localhost] *****************************************************************************
TASK [command] *******************************************************************************
changed: [localhost]
TASK [debug] *********************************************************************************
ok: [localhost] =>
  out.stdout: /home/admin/git/.gitconfig
PLAY RECAP ***********************************************************************************
localhost: ok=7 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
With help from Vladimir Botka, using the answer from https://stackoverflow.com/a/74924664/3053548
I modified his code a little bit.
TL;DR:
- source the zshenv file
- print all the ENVs in the shell session
- store the output as Ansible facts
- access the ENVs in a task
Files
./profiles/macos.yaml
---
# run MacOS configs
- hosts: localhost
  connection: local
  tags: always
  tasks:
    - name: source zshenv
      shell:
        cmd: source ../tools/zsh/.zshenv; env
      register: out
      changed_when: false

    - name: store zshenv as fact
      set_fact:
        zshenv: "{{ dict(out.stdout.splitlines() | map('split', '=')) }}"
      changed_when: false

# - hosts: localhost
#   connection: local
#   tags: macos
#   roles:
#     - macos
#   # when: ansible_distribution == "MacOSX"

- hosts: localhost
  connection: local
  tags: base
  roles:
    - base
./roles/base/tasks/git.yaml
---
- name: Link gitconfig file
  file:
    src: "{{ ansible_env.PWD }}/../tools/git/.gitconfig"
    dest: "{{ zshenv.GIT_CONFIG_PATH }}"
    state: link
command to run
ansible-playbook profiles/macos.yaml

Trying to understand Ansible variables and vault with sub-plays

I'm having a very difficult time understanding how to organize large playbooks with many roles, using an inventory with multiple "environments", and using sub-plays to try and organize things, all the while keeping common variables in the parent playbook and sharing them with the sub-plays. I use Ansible in a very limited way, so I'm trying to expand my knowledge of it by doing this exercise.
Directory structure (simplified for testing)
├── inventory
│   ├── dev
│   │   ├── group_vars
│   │   │   └── all.yml
│   │   └── hosts
│   └── prod
│       ├── group_vars
│       │   └── all.yml
│       └── hosts
├── playbooks
│   └── infra
│       └── site.yml
├── site.yml
└── vars
    └── secrets.yml
Various secrets are in the secrets.yml file, including the ansible_ssh_user and ansible_become_pass.
Contents of all.yml
---
ansible_ssh_user: "{{ vault_ansible_ssh_user }}"
ansible_become_pass: "{{ vault_ansible_become_pass }}"
Contents of site.yml
---
- name: test plays
  hosts: all
  vars_files:
    - vars/secrets.yml
  become: true
  gather_facts: true
  pre_tasks:
    - include_vars: secrets.yml
  tasks:
    - debug:
        var: ansible_ssh_user

- import_playbook: playbooks/infra/site.yml
Content of playbooks/infra/site.yml
---
- name: test sub-play
  hosts: all
  become: true
  gather_facts: true
  tasks:
    - debug:
        var: ansible_ssh_user
The main parent playbook is being called with ansible-playbook -i inventory/dev site.yml. The problem is I can't access vault_ansible_ssh_user or vault_ansible_become_pass (or any secrets in the vault) from within the sub-play unless I include both vars_files AND pre_tasks: - include_vars.
If I remove vars_files, I can't access the secrets in the parent playbook. If I remove pre_tasks: - include_vars, I can't access any secrets in the imported sub-play. Any idea why I need both of these variable include statements for this to work? Also, is this just a terrible design and am I doing things completely wrong? I'm having a hard time wrapping my head around the best way to organize huge playbooks with a lot of required variables, so I ended up with a directory structure like this to try and compartmentalize the variables, avoiding very large variable files and the need to duplicate variable files all over the place. This probably just boils down to me wanting to fit a round peg in a square hole, but I can't find a great best-practices example for something like this.
This issue might also have to do with me putting Ansible vault variables in an inventory vars file. If so, is that something I should or shouldn't be doing? As I was writing this, I may have had a "light bulb" moment and finally understand how I should handle this, but I need to test some things to understand it fully. Regardless, I'm still very interested in what the Stack Overflow community has to say about how I'm currently doing this.
EDIT: turns out my "light bulb" idea is just the same as I have here, just moved around in a different way, with the same issues
Q: "If I remove ... include_vars, I can't access any secrets in the imported sub-play."
A: To share variables among the plays use include_vars or set_fact. Quoting from Variable scope: how long is a value available?
Variable values associated directly with a host or group, including variables defined in inventory, by vars plugins, or using modules like set_fact and include_vars, are available to all plays. These ‘host scope’ variables are also available via the hostvars[] dictionary.
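As a minimal illustration of that host scope (hypothetical fact name my_shared_var, using the two inventory hosts from the example below): a fact set on one host in an early play stays attached to that host and can be read in a later play, even by another host, via hostvars:

- hosts: test_01
  gather_facts: false
  tasks:
    - set_fact:
        my_shared_var: from-test-01    # host-scope fact: survives beyond this play

- hosts: test_02
  gather_facts: false
  tasks:
    - debug:
        msg: "{{ hostvars['test_01'].my_shared_var }}"    # another host reads it via hostvars[]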
Given the files below
shell> cat inventory/prod/hosts
test_01
test_02
shell> cat inventory/prod/group_vars/all.yml
ansible_ssh_user: "{{ vault_ansible_ssh_user }}"
ansible_become_pass: "{{ vault_ansible_become_pass }}"
shell> cat vars/secrets.yml
vault_ansible_ssh_user: ansible-ssh-user
vault_ansible_become_pass: ansible-become-pass
shell> cat site.yml
- name: test plays
  hosts: all
  gather_facts: false
  vars_files: vars/secrets.yml
  tasks:
    - debug:
        var: ansible_ssh_user
    - debug:
        var: ansible_become_pass

- import_playbook: playbooks/infra/site.yml
- import_playbook: playbooks/infra/site.yml
shell> cat playbooks/infra/site.yml
- name: test sub-plays
  hosts: all
  gather_facts: false
  tasks:
    - debug:
        var: ansible_ssh_user
The variables declared by vars_files are not shared among the plays and the second play will fail. The abridged result is
shell> ANSIBLE_INVENTORY=$PWD/inventory/prod/hosts ansible-playbook site.yml
PLAY [test plays] ****
TASK [debug] ****
ok: [test_01] => {
    "ansible_ssh_user": "ansible-ssh-user"
}
ok: [test_02] => {
    "ansible_ssh_user": "ansible-ssh-user"
}

TASK [debug] ****
ok: [test_01] => {
    "ansible_become_pass": "ansible-become-pass"
}
ok: [test_02] => {
    "ansible_become_pass": "ansible-become-pass"
}
PLAY [test sub-plays] ****
TASK [debug] ****
fatal: [test_01]: FAILED! => {"msg": "The field 'become_pass' has an invalid value, which includes an undefined variable. The error was: 'vault_ansible_become_pass' is undefined"}
fatal: [test_02]: FAILED! => {"msg": "The field 'become_pass' has an invalid value, which includes an undefined variable. The error was: 'vault_ansible_become_pass' is undefined"}
The problem will disappear if you use include_vars or set_fact, i.e. "instantiate" the variables. Commenting set_fact and uncommenting include_vars, or uncommenting both, will give the same result
- name: test plays
  hosts: all
  gather_facts: false
  vars_files: vars/secrets.yml
  tasks:
    - debug:
        var: ansible_ssh_user
    - debug:
        var: ansible_become_pass
    # - include_vars: secrets.yml
    - set_fact:
        ansible_ssh_user: "{{ ansible_ssh_user }}"
        ansible_become_pass: "{{ ansible_become_pass }}"

- import_playbook: playbooks/infra/site.yml
Then the abridged result is
shell> ANSIBLE_INVENTORY=$PWD/inventory/prod/hosts ansible-playbook site.yml
PLAY [test plays] ****
TASK [debug] ****
ok: [test_01] => {
    "ansible_ssh_user": "ansible-ssh-user"
}
ok: [test_02] => {
    "ansible_ssh_user": "ansible-ssh-user"
}

TASK [debug] ****
ok: [test_01] => {
    "ansible_become_pass": "ansible-become-pass"
}
ok: [test_02] => {
    "ansible_become_pass": "ansible-become-pass"
}

TASK [set_fact] ****
ok: [test_01]
ok: [test_02]

PLAY [test sub-plays] ****

TASK [debug] ****
ok: [test_02] => {
    "ansible_ssh_user": "ansible-ssh-user"
}
ok: [test_01] => {
    "ansible_ssh_user": "ansible-ssh-user"
}
Notes:
- In this example, it's not important whether the variables are encrypted or not.
- become and gather_facts don't influence this problem.
- There might be other issues. It's a good idea to review include and import issues.
Q: "Why is the vars_files needed?"
A: The variable ansible_become_pass is needed to escalate the user's privilege when a task is sent to the remote host. As a result, when the variable vault_ansible_become_pass is declared only in the include_vars task, it won't be available before the tasks are executed, and the play will fail with the error
fatal: [test_01]: FAILED! => {"msg": "The field 'become_pass' has an invalid value, which includes an undefined variable. The error was: 'vault_ansible_become_pass' is undefined"}
See:
- Understanding variable precedence
- Understanding privilege escalation: become
No vars_files is needed if there are only user-defined variables. For example, the playbook below works as expected
shell> cat inventory/prod/group_vars/all.yml
var1: "{{ vault_var1 }}"
var2: "{{ vault_var2 }}"
shell> cat vars/secrets2.yml
vault_var1: test-var1
vault_var2: test-var2
shell> cat site2.yml
- name: test plays
  hosts: all
  gather_facts: false
  tasks:
    - include_vars: secrets2.yml
    - debug:
        var: var1
    - debug:
        var: var2

- import_playbook: playbooks/infra/site2.yml
shell> cat playbooks/infra/site2.yml
- name: test sub-plays
  hosts: all
  gather_facts: false
  tasks:
    - debug:
        var: var1
    - debug:
        var: var2

Ansible role deduplication works for absolute parameters only

I have a playbook like this
test_playbook/
├── dep_test.yaml
├── my_hosts_file
└── roles
    ├── common
    │   └── vars
    │       └── main.yaml
    ├── dep_test
    │   ├── meta
    │   │   └── main.yaml
    │   └── tasks
    │       └── main.yaml
    ├── dep_test_a
    │   └── tasks
    │       └── main.yaml
    └── dep_test_b
        ├── meta
        │   └── main.yaml
        └── tasks
            └── main.yaml
File contents are as below.
dep_test.yaml
- hosts: my_host
  gather_facts: no
  roles:
    - common
    - dep_test
my_hosts_file
[my_host]
localhost
roles/common/vars/main.yaml
python_version: "3"
roles/dep_test/tasks/main.yaml
- name: debug test
  debug:
    msg: test debug
roles/dep_test/meta/main.yaml
dependencies:
  - role: dep_test_a
    # pyenv_versions: ["{{ python_version }}"]
    pyenv_versions: ["3"]
  - role: dep_test_b
    # python_versions: ["{{ python_version }}"]
    python_versions: ["3"]
roles/dep_test_a/tasks/main.yaml
- name: Dep test a
  debug:
    msg: "Dependency test a called with {{ pyenv_versions }}"
roles/dep_test_b/tasks/main.yaml
- name: Dep test b
  debug:
    msg: "Dependency test b called with {{ python_versions }}"
roles/dep_test_b/meta/main.yaml
dependencies:
  - role: dep_test_a
    # pyenv_versions: "{{ python_versions }}"
    pyenv_versions: ["3"]
When I pass the parameter as ["3"], it works fine and applies the Role Duplication and Execution rule
ansible-playbook dep_test.yaml -i my_hosts_file -u root --ask-pass
SSH password:
PLAY [my_host] ****************************************************************************************************
TASK [dep_test_a : Dep test a] ************************************************************************************
ok: [localhost] => {
    "msg": "Dependency test a called with [u'3']"
}

TASK [dep_test_b : Dep test b] ************************************************************************
ok: [localhost] => {
    "msg": "Dependency test b called with [u'3']"
}

TASK [dep_test : debug test] **************************************************************************
ok: [localhost] => {
    "msg": "test debug"
}
PLAY RECAP ********************************************************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0
When I change the parameter from ["3"] to the variable python_version from common/vars/main.yaml, it breaks the deduplication rule and executes the same role twice with the same arguments.
After the change, the code would be
roles/dep_test/meta/main.yaml
dependencies:
  - role: dep_test_a
    pyenv_versions: ["{{ python_version }}"]
    # pyenv_versions: ["3"]
  - role: dep_test_b
    python_versions: ["{{ python_version }}"]
    # python_versions: ["3"]
roles/dep_test_b/meta/main.yaml
dependencies:
  - role: dep_test_a
    pyenv_versions: "{{ python_versions }}"
    # pyenv_versions: ["3"]
Playbook execution output.
ansible-playbook dep_test.yaml -i my_hosts_file -u root --ask-pass
SSH password:
PLAY [my_host] ****************************************************************************************************
TASK [dep_test_a : Dep test a] ***********************************************************************************
ok: [localhost] => {
    "msg": "Dependency test a called with [u'3']"
}

TASK [dep_test_a : Dep test a] ************************************************************************
ok: [localhost] => {
    "msg": "Dependency test a called with [u'3']"
}

TASK [dep_test_b : Dep test b] ************************************************************************
ok: [localhost] => {
    "msg": "Dependency test b called with [u'3']"
}

TASK [dep_test : debug test] **************************************************************************
ok: [localhost] => {
    "msg": "test debug"
}
PLAY RECAP ********************************************************************************************************
localhost : ok=4 changed=0 unreachable=0 failed=0
Role dep_test_a is called twice with the same arguments [u'3']
TASK [dep_test_a : Dep test a] ***********************************************************************************
ok: [localhost] => {
    "msg": "Dependency test a called with [u'3']"
}
TASK [dep_test_a : Dep test a] ************************************************************************
ok: [localhost] => {
    "msg": "Dependency test a called with [u'3']"
}
Once for the dependency declared in the dep_test role, and once for the dependency declared in dep_test_b.
As per the deduplication rule for role dependencies, it should be called only once.
Question: Why does the dependent role run twice when a variable is passed as the parameter?
Answer: Because Ansible uses lazy evaluation, Jinja2 templating is not triggered until the variable is used.
Passing a variable to a role is not considered a usage, so Ansible passes and compares the template strings, not the resolved values.
You call the dep_test_a role twice:
- role: dep_test_a
  pyenv_versions: ["{{ python_version }}"]
and:
- role: dep_test_a
  pyenv_versions: "{{ python_versions }}"
["{{ python_version }}"] is not equal to "{{ python_versions }}", thus Ansible executes the role twice.
And btw, the code to illustrate the behaviour in the question can be shortened to:
- hosts: localhost
  connection: local
  gather_facts: no
  vars:
    my_var1: 1
    my_var2: 1
  roles:
    - role: my_role
      role_param: "{{ my_var1 }}"
    - role: my_role
      role_param: "{{ my_var2 }}"
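Conversely, if both invocations pass the byte-identical template string, the deduplication kicks in and the role runs only once; a sketch under the same assumptions (hypothetical my_role):

- hosts: localhost
  connection: local
  gather_facts: no
  vars:
    my_var1: 1
  roles:
    - role: my_role
      role_param: "{{ my_var1 }}"
    - role: my_role
      role_param: "{{ my_var1 }}"    # same template text as above, so the role is deduplicated and runs once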

Ansible: playbook calling Role in a directory that is in the roles directory

I would like to shape the directory structure of my Ansible roles and playbooks.
Currently I have a directory structure like this:
group_vars/
    all
    group-one/
        group-vars.yml
        group-vault.yml
    ...
host_vars/
    server1.yml
plays/
    java_plays/
        deploy_fun_java_stuff.yml
        deploy_playbook.yml
roles/
    role1/
        tasks/
            main.yml
        handlers/
        (the rest of the needed directories)
    role2/
    java/
        java_role1/
            tasks/
                main.yml
            handlers/
            (the rest of the needed directories)
I would like to be able to call upon the role java_role1 in the play deploy_fun_java_stuff.yml
I can call
---
- name: deploy fun java stuff
  hosts: java
  roles:
    - { role: role1 }
but I cannot call (I've tried multiple ways). Is this possible?
- name: deploy fun java stuff
  hosts: java
  roles:
    - { role: java/java_role1 }
What I really want to accomplish is to structure my plays in an orderly fashion along with my roles.
I will end up with a large number of both roles and plays, and I would like to organize them.
I can handle this with a separate ansible.cfg file for each play directory, but I cannot add those cfg files to Ansible Tower (so I'm looking for an alternative solution).
I think the problem is that you need to set the relative path properly. Ansible first applies the given path relative to the calling playbook's directory, then looks in the current working directory (from which you are executing the ansible-playbook command), and finally checks /etc/ansible/roles. So instead of { role: java/java_role1 }, in your directory structure you could use { role: ../../roles/java/java_role1 } or { role: roles/java/java_role1 }. Yet another option is to configure the paths in which Ansible looks for roles: set roles_path inside your project's ansible.cfg as described in the Ansible docs.
Based on your example:
Dir tree:
ansible/
├── hosts
│   └── dev
├── plays
│   └── java_plays
│       └── java.yml
└── roles
    ├── java
    │   └── java_role1
    │       └── tasks
    │           └── main.yml
    └── role1
        └── tasks
            └── main.yml
To test it, the play would include java_role1 and role1.
plays/java_plays/java.yml:
---
- name: deploy java stuff
  hosts: java
  roles:
    - { role: roles/role1 }
    - { role: roles/java/java_role1 }
For testing purposes these roles simply print a debug msg.
role1/tasks/main.yml:
---
- debug: msg="Inside role1"
The dev hosts file simply sets localhost to the java group. Now I can use the playbook:
fishi@zeus:~/workspace/ansible$ ansible-playbook -i hosts/dev plays/java_plays/java.yml
PLAY [deploy java stuff] *******************************************************
TASK [setup] *******************************************************************
ok: [localhost]
TASK [role1 : debug] ***********************************************
ok: [localhost] => {
    "msg": "Inside role1"
}

TASK [java_role1 : debug] *************************************
ok: [localhost] => {
    "msg": "Inside java_role1"
}
PLAY RECAP *********************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0
Now doing the same when you use { role: ../../roles/java/java_role1 } and { role: ../../roles/role1 } your log output inside the TASK brackets would show the whole relative path instead of just the role name:
fishi@zeus:~/workspace/ansible$ ansible-playbook -i hosts/dev plays/java_plays/java.yml
PLAY [deploy java stuff] *******************************************************
TASK [setup] *******************************************************************
ok: [localhost]
TASK [../../roles/role1 : debug] ***********************************************
ok: [localhost] => {
    "msg": "Inside role1"
}

TASK [../../roles/java/java_role1 : debug] *************************************
ok: [localhost] => {
    "msg": "Inside java_role1"
}
PLAY RECAP *********************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0
Another option and one that I use is to create an ansible.cfg file in your playbook directory and place the following in it:
[defaults]
roles_path = /etc/ansible/roles
or in your case:
[defaults]
roles_path = /etc/ansible/roles:/etc/ansible/roles/java
Then don't use any relative paths.
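With that roles_path in place, the play from the question can reference both roles by bare name; a sketch assuming the roles live under the configured /etc/ansible/roles tree:

---
- name: deploy fun java stuff
  hosts: java
  roles:
    - { role: role1 }        # found via /etc/ansible/roles
    - { role: java_role1 }   # found via /etc/ansible/roles/java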
A more elegant solution (imo) is symlinking your roles directory into the playbooks directory.
My directory structure is as follows:
inventory/
playbooks/
    |-> roles -> ../roles
    |-> group_vars -> ../group_vars
    |-> host_vars -> ../host_vars
roles/
group_vars/
host_vars/
In my case, I created the symlink by running ln -s ../roles playbooks/roles
