Ansible - Variable precedence fails when hosts are the same

I want to make the most of variable precedence with ansible.
So let’s have a look at this simplified project:
├── group_vars
│   ├── all
│   └── web
├── hosts
│   └── local
└── site.yml
The inventory file hosts/local:
[local_web]
192.168.1.20
[local_db]
192.168.1.20
[web:children]
local_web
[db:children]
local_db
The group_vars/all file:
test: ALL
The group_vars/web file:
test: WEB
The site.yml file:
---
- name: Test
  hosts: db
  tasks:
    - debug: var=test
Alright, so this is just to test variable precedence. Since I run the play against the db group, the test variable should display “ALL”, as Ansible will only look into group_vars/all, right?
Wrong:
TASK: [debug var=test] ********************************************************
ok: [192.168.1.20] => {
"var": {
"test": "WEB"
}
}
Actually, if the local_web and local_db hosts are different, then it works as expected.
Why does Ansible still look into an unrelated config file when the hosts are the same? Is that a bug or just me?

You're stating that 192.168.1.20 is a member of all 4 of your defined groups, and that's independent of how you reference the host in your playbook. No matter how you reference the host in your playbook Ansible is going to evaluate all the groups that host is in and import variables based on those groups.
Here's a handy test to demonstrate this:
- name: Test
  hosts: db
  tasks:
    - debug: msg="{{ inventory_hostname }} is in group {{ item }}"
      when: inventory_hostname in groups[item]
      with_items: group_names
The output of this is:
TASK: [debug msg="host is in group {{ item }}"] *******************************
ok: [192.168.1.20] => (item=db) => {
"item": "db",
"msg": "192.168.1.20 is in group db"
}
ok: [192.168.1.20] => (item=local_db) => {
"item": "local_db",
"msg": "192.168.1.20 is in group local_db"
}
ok: [192.168.1.20] => (item=local_web) => {
"item": "local_web",
"msg": "192.168.1.20 is in group local_web"
}
ok: [192.168.1.20] => (item=web) => {
"item": "web",
"msg": "192.168.1.20 is in group web"
}
Since the host is in the web group then the web group_vars file was included.
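As an aside (not part of the original answer), on reasonably recent Ansible versions you can double-check which variables a host ends up with after all its groups are merged by using the ansible-inventory command:

ansible-inventory -i hosts/local --host 192.168.1.20

It prints the merged variables for that host; with the inventory above it should show test: WEB, because group_vars/web applies to the host.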

Bruce P's answer is right.
However, this Ansible behavior is not satisfactory for me, because it changes variable resolution depending on the hosts. Instead of group_vars, I use the vars_files directive.
I moved group_vars into a directory named vars.
The updated site.yml:
---
- name: Test
  hosts: db
  tasks:
    - debug: var=test
  vars_files:
    - vars/all
    - vars/db
Now, test displays “ALL”, as I wanted. It first reads vars/all, then vars/db (which is empty).
(Note: variable precedence seems to be a bit buggy at the moment - v1.9.2 -. This means that if you use variables in the vars file names, Ansible will not load the files in the expected order.)
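For example (a hypothetical change, not part of the setup above), if vars/db defined its own value:

# vars/db
test: DB

the play would print “DB” instead, because when several vars_files entries define the same variable, the file listed last wins.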

Another solution that keeps the variables in group_vars is to replace the inventory file hosts/local with two files, hosts/inv_db and hosts/inv_web, and to use the -i option to specify which inventory file to use.
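A sketch of what that split might look like, with contents derived from the original hosts/local (the file names are the hypothetical ones suggested above):

# hosts/inv_web
[local_web]
192.168.1.20

[web:children]
local_web

# hosts/inv_db
[local_db]
192.168.1.20

[db:children]
local_db

Running ansible-playbook -i hosts/inv_db site.yml then only puts the host into the db-related groups, so group_vars/web is never loaded.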

Related

What is the best way to dynamically add variables as Ansible Facts?

I have an Ansible task where I navigate to a YAML variable file in GitHub, download the file, and add the variables as Ansible Facts where they're later used.
My YAML file looks like:
---
foo: bar
hello: world
I have a method where I loop over this file, and individually add the key/value pairs as the facts:
- name: Grab contents of variable file
  win_shell: cat '{{ playbook_dir }}/DEV1.yml'
  register: raw_config

- name: Add variables to workspace
  vars:
    config: "{{ raw_config.stdout | from_yaml }}"
  set_fact:
    "{{ item.key }}": "{{ item.value }}"
  loop: "{{ config | dict2items }}"
This works but generates much larger log outputs that look like:
ok: [localhost] => (item={u'key': u'foo', u'value': u'bar'}) => {
"ansible_facts": {
"foo": "bar"
},
"ansible_loop_var": "item",
"changed": false,
"item": {
"key": "foo",
"value": "bar"
}
}
ok: [localhost] => (item={u'key': u'hello', u'value': u'world'}) => {
"ansible_facts": {
"hello": "world"
},
"ansible_loop_var": "item",
"changed": false,
"item": {
"key": "hello",
"value": "world"
}
}
I was wondering if it was possible to add the entire variable file as Ansible Facts instead of needing to loop through it. The way I tried was like:
- name: Grab contents of variable file
  win_shell: cat '{{ playbook_dir }}/DEV1.yml'
  register: raw_config

- name: Add variables to workspace
  vars:
    config: '{{ raw_config.stdout | from_yaml }}'
  set_fact: '{{ config }}'
This almost works, but it looks like this:
ok: [msf1vpom04d.corp.tjxcorp.net] => {
"ansible_facts": {
"_raw_params": {
"foo": "bar",
"hello": "world"
…
Can I add the entire object as Ansible Facts without generating this _raw_params object?
... where I navigate to a YAML variable file in GitHub, download the file, and add the variables ... I was wondering if it was possible to add the entire variable file ...
There are several possibilities.
One option (like in Ansible Tower, for example) is to check out, download, or sync the variable file before executing the playbook. To do so:
curl --silent --user "${ACCOUNT}:${PASSWORD}" -X GET "https://${REPOSITORY_URL}/raw/group_vars/test?at=refs%2Fheads%2Fmaster" -o group_vars/test && \
sshpass -p ${PASSWORD} ansible-playbook --user ${ACCOUNT} --ask-pass test.yml
(A Bitbucket URL is used here for demonstration and testing.)
The advantage of this approach is that no implementation or logic is necessary within the playbooks at all. The only requirement is to Organize host and group variables. Furthermore, it is the approach recommended there:
Keeping your inventory file and variables in a git repo (or other version control) is an excellent way to track changes to your inventory and host variables.
Another option would be to use the include_vars module to
Loads YAML/JSON variables dynamically from a file or directory, recursively, during task runtime.
With respect to simplicity, it is still recommended to sync before executing.
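A minimal sketch of that option, assuming the variable file was fetched beforehand to a hypothetical local path vars/external.yml:

- name: Load variables from the downloaded file
  include_vars:
    file: vars/external.yml

- name: Use one of the loaded variables
  debug:
    var: foo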
Further Q&A
... and as already mentioned within the comments
Getting variable values from URL
You can take advantage of the play-level vars_files parameter and the fact that it will be loaded and expanded for each running task. We just need a fallback file for when your file does not yet exist locally, so that we don't get an error (during fact gathering, for example).
Here is an example with a URI of mine containing default values for an Ansible role.
First, create an empty.yml file, which will be empty as its name suggests. (It's adjacent to my playbook for this example, but you can put it wherever you want; just adjust the path in your playbook accordingly.)
Then the following playbook:
---
- hosts: localhost
  gather_facts: false

  vars:
    external_vars_uri: https://raw.githubusercontent.com/ansible-ThoTeam/nexus3-oss/main/defaults/main.yml
    external_vars_file: /tmp/external_vars.yml

  vars_files:
    - "{{ lookup('first_found', [external_vars_file, 'empty.yml']) }}"

  tasks:
    - name: make sure we have our external file
      get_url:
        url: "{{ external_vars_uri }}"
        dest: "{{ external_vars_file }}"
      # Note: we're only using localhost here so the below
      # parameters are useless. But they will be necessary
      # if you target other (groups of) hosts.
      run_once: true
      delegate_to: localhost

    - name: debug a var we know is in the external file
      debug:
        var: nexus_repos_maven_proxy
Gives:
$ ansible-playbook play.yml
PLAY [localhost] **************************************************************************************************************************************************************************************************
TASK [make sure we have our external file] ************************************************************************************************************************************************************************
changed: [localhost]
TASK [debug a var we know is in the external file] ****************************************************************************************************************************************************************************
ok: [localhost] => {
"nexus_repos_maven_proxy": [
{
"layout_policy": "permissive",
"name": "central",
"remote_url": "https://repo1.maven.org/maven2/"
},
{
"name": "jboss",
"remote_url": "https://repository.jboss.org/nexus/content/groups/public-jboss/"
}
]
}
PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

How to run playbook across multiple environments with their own group_vars?

My current ansible structure looks something like this:
- inventory
  - prod
    - prod1
      - hosts.yml
      - group_vars
        - all.yml
    - prod2
      - hosts.yml
      - group_vars
        - all.yml
    - prod3
      - hosts.yml
      - group_vars
        - all.yml
  - nonprod
    - dev
      - hosts.yml
      - group_vars
        - all.yml
    - qa
      - hosts.yml
      - group_vars
        - all.yml
    - uat
      - hosts.yml
      - group_vars
        - all.yml
- roles
- main.yml (this isn't accurate just a sample playbook for the question)
I'd like to be able to run something like this: ansible-playbook main.yml -i inventory/prod and have it automatically cycle through the environments (each with distinct group_vars values).
Currently the command finds the hosts in each environment but not the vars, which stops the playbook from running; if I run ansible-playbook main.yml -i inventory/prod/prod1 it works fine.
I would suggest restructuring your inventory. Ansible looks for the group_vars directory adjacent to your inventory files. If you run with ansible -i inventory ..., it won't find the group_vars file (it will only find it when running e.g. ansible -i inventory/prod/prod1).
Consider a layout like this:
inventory/
├── group_vars
│   ├── prod1.yaml
│   ├── prod2.yaml
│   └── prod3.yaml
└── prod
    ├── prod1.yaml
    ├── prod2.yaml
    └── prod3.yaml
Where each inventory file places hosts into a similarly named hostgroup. E.g., inventory/prod/prod1.yaml contains:
all:
  children:
    prod1:
      hosts:
        prod1-node0:
        prod1-node1:
        prod1-node2:
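The other environment files follow the same pattern; for example, inventory/prod/prod2.yaml (inferred from the output shown further down) would contain:

all:
  children:
    prod2:
      hosts:
        prod2-node0:
        prod2-node1:
        prod2-node2: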
If we have a variable defined with a different value for each group:
$ grep . inventory/group_vars/*
inventory/group_vars/prod1.yaml:location: datacenter1
inventory/group_vars/prod2.yaml:location: datacenter2
inventory/group_vars/prod3.yaml:location: datacenter3
And a playbook like this:
- hosts: all
  gather_facts: false
  tasks:
    - debug:
        var: location
We can run it against all the hosts (I'm only using two groups here, prod1 and prod2, in order to keep the output shorter):
$ ansible-playbook playbook.yaml -i inventory
TASK [debug] ********************************************************************************************
ok: [prod1-node0] => {
"location": "datacenter1"
}
ok: [prod1-node1] => {
"location": "datacenter1"
}
ok: [prod1-node2] => {
"location": "datacenter1"
}
ok: [prod2-node0] => {
"location": "datacenter2"
}
ok: [prod2-node1] => {
"location": "datacenter2"
}
ok: [prod2-node2] => {
"location": "datacenter2"
}
Or you can run the playbook against a specific group:
$ ansible-playbook playbook.yaml -i inventory -l prod2
TASK [debug] ********************************************************************************************
ok: [prod2-node0] => {
"location": "datacenter2"
}
ok: [prod2-node1] => {
"location": "datacenter2"
}
ok: [prod2-node2] => {
"location": "datacenter2"
}
In each case, hosts will use the values from the appropriate group_vars file based on their hostgroup.

Trying to understand Ansible variables and vault with sub-plays

I'm having a very difficult time understanding how to organize large playbooks with many roles, using inventory with multiple "environments" and using sub-plays to try and organize things. All the while having common variables at the parent playbook, sharing those with sub-plays. I use ansible but in a very limited way so I'm trying to expand my knowledge of it by doing this exercise.
Directory structure (simplified for testing)
├── inventory
│   ├── dev
│   │   ├── group_vars
│   │   │   └── all.yml
│   │   └── hosts
│   └── prod
│       ├── group_vars
│       │   └── all.yml
│       └── hosts
├── playbooks
│   └── infra
│       └── site.yml
├── site.yml
└── vars
    └── secrets.yml
Various secrets are in the secrets.yml file, including the ansible_ssh_user and ansible_become_pass.
Contents of all.yml
---
ansible_ssh_user: "{{ vault_ansible_ssh_user }}"
ansible_become_pass: "{{ vault_ansible_become_pass }}"
Contents of site.yml
---
- name: test plays
  hosts: all
  vars_files:
    - vars/secrets.yml
  become: true
  gather_facts: true
  pre_tasks:
    - include_vars: secrets.yml
  tasks:
    - debug:
        var: ansible_ssh_user

- import_playbook: playbooks/infra/site.yml
Content of playbooks/infra/site.yml
---
- name: test sub-play
  hosts: all
  become: true
  gather_facts: true
  tasks:
    - debug:
        var: ansible_ssh_user
The main parent playbook is being called with ansible-playbook -i inventory/dev site.yml. The problem is I can't access vault_ansible_ssh_user or vault_ansible_become_pass (or any secrets in the vault) from within the sub-play unless I include both vars_files AND pre_tasks: - include_vars.
If I remove vars_files, I can't access the secrets in the parent playbook. If I remove pre_tasks: - include_vars, I can't access any secrets in the imported sub-play. Any idea why I need both of these variable include statements for this to work? Also, is this just a terrible design and am I doing things completely wrong? I'm having a hard time wrapping my head around the best way to organize huge playbooks with many required variables, so I ended up with a directory structure like this to try to compartmentalize the variables and avoid very large variable files and the need to duplicate variable files all over the place. This probably just boils down to me wanting to fit a round peg in a square hole, but I can't find a great best-practices example for something like this.
This issue might also have to do with me trying to put Ansible Vault variables in an inventory var file. If so, is that something I should or shouldn't be doing? As I was writing this, I may have had a "light bulb" moment and finally understand how I should handle this, but I need to test some things to understand it fully. Regardless, I'm still very interested in what the Stack Overflow community has to say about how I'm currently doing this.
EDIT: it turns out my "light bulb" idea is just what I have here, moved around in a different way, with the same issues.
Q: "If I remove ... include_vars, I can't access any secrets in the imported sub-play."
A: To share variables among the plays use include_vars or set_fact. Quoting from Variable scope: how long is a value available?
Variable values associated directly with a host or group, including variables defined in inventory, by vars plugins, or using modules like set_fact and include_vars, are available to all plays. These ‘host scope’ variables are also available via the hostvars[] dictionary.
Given the files below
shell> cat inventory/prod/hosts
test_01
test_02
shell> cat inventory/prod/group_vars/all.yml
ansible_ssh_user: "{{ vault_ansible_ssh_user }}"
ansible_become_pass: "{{ vault_ansible_become_pass }}"
shell> cat vars/secrets.yml
vault_ansible_ssh_user: ansible-ssh-user
vault_ansible_become_pass: ansible-become-pass
shell> cat site.yml
- name: test plays
  hosts: all
  gather_facts: false
  vars_files: vars/secrets.yml
  tasks:
    - debug:
        var: ansible_ssh_user
    - debug:
        var: ansible_become_pass

- import_playbook: playbooks/infra/site.yml
shell> cat playbooks/infra/site.yml
- name: test sub-plays
  hosts: all
  gather_facts: false
  tasks:
    - debug:
        var: ansible_ssh_user
The variables declared by vars_files are not shared among the plays and the second play will fail. The abridged result is
shell> ANSIBLE_INVENTORY=$PWD/inventory/prod/hosts ansible-playbook site.yml
PLAY [test plays] ****
TASK [debug] ****
ok: [test_01] => {
"ansible_ssh_user": "ansible-ssh-user"
}
ok: [test_02] => {
"ansible_ssh_user": "ansible-ssh-user"
}
TASK [debug] ****
ok: [test_01] => {
"ansible_become_pass": "ansible-become-pass"
}
ok: [test_02] => {
"ansible_become_pass": "ansible-become-pass"
}
PLAY [test sub-plays] ****
TASK [debug] ****
fatal: [test_01]: FAILED! => {"msg": "The field 'become_pass' has an invalid value, which includes an undefined variable. The error was: 'vault_ansible_become_pass' is undefined"}
fatal: [test_02]: FAILED! => {"msg": "The field 'become_pass' has an invalid value, which includes an undefined variable. The error was: 'vault_ansible_become_pass' is undefined"}
The problem will disappear if you use include_vars or set_fact, i.e. "instantiate" the variables. Commenting out set_fact and uncommenting include_vars, or uncommenting both, will give the same result:
- name: test plays
  hosts: all
  gather_facts: false
  vars_files: vars/secrets.yml
  tasks:
    - debug:
        var: ansible_ssh_user
    - debug:
        var: ansible_become_pass
    # - include_vars: secrets.yml
    - set_fact:
        ansible_ssh_user: "{{ ansible_ssh_user }}"
        ansible_become_pass: "{{ ansible_become_pass }}"

- import_playbook: playbooks/infra/site.yml
Then the abridged result is
shell> ANSIBLE_INVENTORY=$PWD/inventory/prod/hosts ansible-playbook site.yml
PLAY [test plays] ****
TASK [debug] ****
ok: [test_01] => {
"ansible_ssh_user": "ansible-ssh-user"
}
ok: [test_02] => {
"ansible_ssh_user": "ansible-ssh-user"
}
TASK [debug] ****
ok: [test_01] => {
"ansible_become_pass": "ansible-become-pass"
}
ok: [test_02] => {
"ansible_become_pass": "ansible-become-pass"
}
TASK [set_fact] ****
ok: [test_01]
ok: [test_02]
PLAY [test sub-plays] ****
TASK [debug] ****
ok: [test_02] => {
"ansible_ssh_user": "ansible-ssh-user"
}
ok: [test_01] => {
"ansible_ssh_user": "ansible-ssh-user"
}
Notes
In this example, it's not important whether the variables are encrypted or not.
become and gather_facts don't influence this problem.
There might be other issues. It's a good idea to review include and import issues.
Q: "Why is the vars_files needed?"
A: The variable ansible_become_pass is needed to escalate the user's privilege when a task is sent to the remote host. As a result, when the variable vault_ansible_become_pass is declared in the task include_vars only, this variable won't be available before the tasks are executed, and the play will fail with the error
fatal: [test_01]: FAILED! => {"msg": "The field 'become_pass' has an invalid value, which includes an undefined variable. The error was: 'vault_ansible_become_pass' is undefined"}
See
Understanding variable precedence
Understanding privilege escalation: become
No vars_files is needed if there are user-defined variables only. For example, the playbook below works as expected
shell> cat inventory/prod/group_vars/all.yml
var1: "{{ vault_var1 }}"
var2: "{{ vault_var2 }}"
shell> cat vars/secrets2.yml
vault_var1: test-var1
vault_var2: test-var2
shell> cat site2.yml
- name: test plays
  hosts: all
  gather_facts: false
  tasks:
    - include_vars: secrets2.yml
    - debug:
        var: var1
    - debug:
        var: var2

- import_playbook: playbooks/infra/site2.yml
shell> cat playbooks/infra/site2.yml
- name: test sub-plays
  hosts: all
  gather_facts: false
  tasks:
    - debug:
        var: var1
    - debug:
        var: var2

How to loop through inventory and assign value in Ansible

I have a task in my Ansible playbook where I want to iterate over each host in my group and, for each host, assign a name from the hostname list I've created in the vars folder.
I'm already familiar with looping through the inventory by writing loop: "{{ groups['mygroup'] }}", and I have a list of hostnames I would like to assign to each IP in 'mygroup' within the hosts file.
# In tasks file - roles/company/tasks/main.yml
- name: change hostname
  win_hostname:
    name: "{{ item }}"
  loop: "{{ hostname }}"
  register: res
# In the Inventory file
[company]
10.0.10.128
10.0.10.166
10.0.10.200
# In vars - roles/company/vars/main.yml
hostname:
  - GL-WKS-18
  - GL-WKS-19
  - GL-WKS-20
# site.yml file located under /etc/ansible
- hosts: company
  roles:
    - common
    - company   # This is where the loop mentioned above exists.
# Command to run playbook
ansible-playbook -i hosts company.yml
I seem to have the individual pieces down, but how can I combine iterating over hosts from an inventory group with assigning the names I already have in a list (in the role's vars folder)?
UPDATE
The task mentioned above has been updated to reflect the changes suggested in the answer:
- name: change hostname
  win_hostname:
    name: "{{ item.1 }}"
  loop: "{{ groups.company|zip(hostname)|list }}"
  register: res
However, the output I'm getting is incorrect: this should not run 9 times but only three times, once per IP in the [company] group in the inventory. Also, there are only three hostnames in the list, one to be assigned to each host in the inventory.
changed: [10.0.10.128] => (item=[u'10.0.10.128', u'GL-WKS-18'])
changed: [10.0.10.166] => (item=[u'10.0.10.128', u'GL-WKS-18'])
changed: [10.0.10.200] => (item=[u'10.0.10.128', u'GL-WKS-18'])
changed: [10.0.10.128] => (item=[u'10.0.10.166', u'GL-WKS-19'])
changed: [10.0.10.166] => (item=[u'10.0.10.166', u'GL-WKS-19'])
changed: [10.0.10.200] => (item=[u'10.0.10.166', u'GL-WKS-19'])
ok: [10.0.10.128] => (item=[u'10.0.10.200', u'GL-WKS-20'])
ok: [10.0.10.166] => (item=[u'10.0.10.200', u'GL-WKS-20'])
ok: [10.0.10.200] => (item=[u'10.0.10.200', u'GL-WKS-20'])
Whenever I have a question about looping in Ansible I also go visit the Loops documentation. It sounds like you want to iterate over two lists in parallel, pairing an item from the list of hosts in your inventory with an item from the list of hostnames. In previous versions of Ansible that would suggest using the with_together loop, while with more recent versions of Ansible that suggests the zip filter (there's an example in the docs here).
To demonstrate this for your use case, I started with an inventory file that has three hosts:
[mygroup]
hostA ansible_host=localhost
hostB ansible_host=localhost
hostC ansible_host=localhost
And the following playbook:
---
- hosts: localhost
  gather_facts: false
  vars:
    hostnames:
      - hostname01
      - hostname02
      - hostname03
  tasks:
    - name: change hostname
      debug:
        msg:
          win_hostname:
            name: "{{ item }}"
      loop: "{{ groups.mygroup|zip(hostnames)|list }}"
Here I'm using a debug task instead of actually running the win_hostname task. The output of running:
ansible-playbook -i hosts playbook.yml
Looks like:
TASK [change hostname] ********************************************************************************************************************************
ok: [localhost] => (item=[u'hostA', u'hostname01']) => {
"msg": {
"win_hostname": {
"name": [
"hostA",
"hostname01"
]
}
}
}
ok: [localhost] => (item=[u'hostB', u'hostname02']) => {
"msg": {
"win_hostname": {
"name": [
"hostB",
"hostname02"
]
}
}
}
ok: [localhost] => (item=[u'hostC', u'hostname03']) => {
"msg": {
"win_hostname": {
"name": [
"hostC",
"hostname03"
]
}
}
}
As you can see, it's paired each host from the inventory with a hostname from the hostnames list.
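On Ansible versions that predate the zip filter, the same pairing could be written with with_together (a sketch using the same inventory group and hostnames list as above):

- name: change hostname
  win_hostname:
    name: "{{ item.1 }}"
  with_together:
    - "{{ groups.mygroup }}"
    - "{{ hostnames }}"

Here item.0 is the inventory host and item.1 is the hostname paired with it.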
Update
Based on the additional information you've provided, I think what you actually want is this:
- name: change hostname
  win_hostname:
    name: "{{ hostnames[groups.company.index(inventory_hostname)] }}"
This will assign one value from hostnames to each host in your inventory. We're looking up the position of the current inventory_hostname in your group, and then using that to index into the hostnames list.
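A quick way to sanity-check that mapping before actually renaming anything (a hypothetical debug task, not part of the original answer):

- name: show which hostname each inventory host would get
  debug:
    msg: "{{ inventory_hostname }} -> {{ hostnames[groups.company.index(inventory_hostname)] }}"

With the inventory and vars above, 10.0.10.128 maps to GL-WKS-18, 10.0.10.166 to GL-WKS-19, and 10.0.10.200 to GL-WKS-20.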

Ansible - read inventory hosts and variables to group_vars/all file

I have a simple question that has kept me stuck for a long time. I have a very basic inventory file with hosts and variables:
[lb]
10.112.84.122
[tomcat]
10.112.84.124
[jboss5]
10.112.84.122
...
[tests:children]
lb
tomcat
jboss5
[default:children]
tests
[tests:vars]
data_base_user=NETWIN-4.3
data_base_password=NETWIN
data_base_encrypted_password=
data_base_host=10.112.69.48
data_base_port=1521
data_base_service=ssdenwdb
data_base_url=jdbc:oracle:thin:#10.112.69.48:1521/ssdenwdb
The problem is that I need to access all these hosts and variables, in the inventory file, from the group_vars/all file.
I've tried the following ways to access the host IP:
{{ lb }}
"{{ hostvars[lb] }}"
"{{ hostvars['lb'] }}"
{{ hostvars[lb] }}
To access a host variable I tried:
"{{ hostvars[tests].['data_base_host'] }}"
All of them are wrong!!! Can anyone help me find out the best practice to access hosts and variables, not from a playbook but from a variables file?
EDIT:
Ok. Let's clarify.
Problem: Use a host declared in the inventory file in a variable file, let's say: group_vars/all.
Example: I have a DB host with IP:10.112.83.37.
Inventory file:
[db]
10.112.83.37
In the group_vars/all file I want to use that IP to build a variable.
group_vars/all file:
data_base_url=jdbc:oracle:thin:#{{ db }}:1521/ssdenwdb
In a template I use the variable built in the group_vars/all file.
Template file:
oracle_url = {{ data_base_url }}
The problem is that the {{ db }} variable in the group_vars/all file is not replaced by the DB host IP. The user can only edit the inventory file.
- name: host
  debug: msg="{{ item }}"
  with_items:
    - "{{ groups['tests'] }}"
This piece of code will give the message:
'10.112.84.122'
'10.112.84.124'
as groups['tests'] basically returns a list of unique IP addresses, ['10.112.84.122', '10.112.84.124'], whereas groups['tomcat'][0] returns 10.112.84.124.
If you want to programmatically access the inventory entries, to include them in a task for example, you can refer to them like this:
{{ hostvars.tomcat }}
This returns a structure with all variables related to that host. If you want just an IP address (or hostname), you can refer to it like this:
{{ hostvars.jboss5.ansible_ssh_host }}
There is a list of such variables you can refer to in the Ansible documentation. Moreover, you can declare a variable and set it with, for example, the result of some step in a playbook.
- name: Change owner and group of some file
  file: path=/tmp/my-file owner=new-owner group=new-group
  register: chown_result
Then if you play this step on tomcat, you can access it from jboss5 like this:
- name: Print out the result of chown
  debug: msg="{{ hostvars.tomcat.chown_result }}"
Just in case the problem is still there:
You can refer to the Ansible inventory through the ‘hostvars’, ‘group_names’, and ‘groups’ variables.
Example:
To get the IP addresses of all servers within the group "mygroup", use the construction below:
- debug: msg="{{ hostvars[item]['ansible_eth0']['ipv4']['address'] }}"
  with_items:
    - "{{ groups['mygroup'] }}"
Considering your previous example:
inventory file:
[db]
10.112.83.37
group_vars/all
data_base_url=jdbc:oracle:thin:#{{ db }}:1521/ssdenwdb
template file:
oracle_url = {{ data_base_url }}
You might want to replace your group_vars/all with
data_base_url="jdbc:oracle:thin:#{{ groups['db'][0] }}:1521/ssdenwdb"
Yes the example by nixlike works very well.
Inventory:
[docker-host]
myhost1 user=barbara
myhost2 user=heather
playbook:
---
- hosts: localhost
  connection: local
  tasks:
    - name: loop debug inventory hostnames
      debug:
        msg: "the docker host is {{ item }}"
      with_inventory_hostnames: docker-host

    - name: loop debug items
      debug:
        msg: "the docker host is {{ hostvars[item]['user'] }}"
      with_items: "{{ groups['docker-host'] }}"
output:
ansible-playbook ansible/tests/vars-test-local.yml

PLAY [localhost] ***************************************************************

TASK [setup] *******************************************************************
ok: [localhost]

TASK [loop debug inventory hostnames] ******************************************
ok: [localhost] => (item=myhost2) => {
    "item": "myhost2",
    "msg": "the docker host is myhost2"
}
ok: [localhost] => (item=myhost1) => {
    "item": "myhost1",
    "msg": "the docker host is myhost1"
}

TASK [loop debug items] ********************************************************
ok: [localhost] => (item=myhost1) => {
    "item": "myhost1",
    "msg": "the docker host is barbara"
}
ok: [localhost] => (item=myhost2) => {
    "item": "myhost2",
    "msg": "the docker host is heather"
}

PLAY RECAP *********************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0
thanks!
If you want to refer to a host defined under /etc/ansible/hosts in a task or role, the link below might help:
https://www.middlewareinventory.com/blog/ansible-get-ip-address/
If you want to have your vars in files under group_vars, just move them there. Vars can be in the inventory ([group:vars] section) but also (and foremost) in files under group_vars or host_vars.
For instance, with your example above, you can move your vars for group tests into the file group_vars/tests:
Inventory file :
[lb]
10.112.84.122
[tomcat]
10.112.84.124
[jboss5]
10.112.84.122
...
[tests:children]
lb
tomcat
jboss5
[default:children]
tests
group_vars/tests file (vars files are YAML, so use colon syntax here):
data_base_user: NETWIN-4.3
data_base_password: NETWIN
data_base_encrypted_password: ""
data_base_host: 10.112.69.48
data_base_port: 1521
data_base_service: ssdenwdb
data_base_url: "jdbc:oracle:thin:#10.112.69.48:1521/ssdenwdb"
