How to iterate over children groups in Ansible? [closed]

I have an inventory file as below:
[ParentGroup:children]
ChildrenGroup1
ChildrenGroup2
[ChildrenGroup1]
host1
host2
host3
[ChildrenGroup2]
host4
host5
host6
Now I want to iterate children-group-wise, i.e. perform my task in parallel on host1, host2, host3 (only the hosts in ChildrenGroup1), and once this succeeds, move on to ChildrenGroup2, i.e. host4, host5, host6.
Points to be taken care of:
If there is any failure on any one of the children-group hosts, then we need to wait/pause before proceeding with the next children group.
I shall have many children groups in my inventory.
I need to run my task on only one children group at a time.
I shall make sure all the children groups are addressed in one shot, too.
Can you suggest how to take this forward?

The critical limitation here is that a playbook can't start another playbook. The only option is import_playbook, and imported files must already exist when the playbook starts. As a result, the solution is a two-step process: create the playbooks in the first step, then run them. For example, given the inventory
shell> cat hosts
[ParentGroup:children]
ChildrenGroup1
ChildrenGroup2
[ChildrenGroup1]
host1
host2
host3
[ChildrenGroup2]
host4
host5
host6
you want to run the playbook pb.yml as described in the question. Take the playbook and create a template from it by putting {{ item }} into hosts:
shell> cat pb.yml.j2
- hosts: "{{ item }}"
  gather_facts: false
  tasks:
    - debug:
        msg: "{{ inventory_hostname }}: Playbook started."
1. Create playbooks
The playbook below creates the list of groups my_groups in the first task. The first template task then iterates over this list and creates a playbook for each group. The second template task imports these playbooks into the playbook pb-groups.yml
shell> cat pb-init.yml
- hosts: localhost
  vars:
    groups_other: [ParentGroup, all, ungrouped]
  tasks:
    - set_fact:
        my_groups: "{{ groups.keys()|difference(groups_other) }}"
    - template:
        src: pb.yml.j2
        dest: "pb-{{ item }}.yml"
      loop: "{{ my_groups }}"
    - template:
        src: pb-groups.yml.j2
        dest: pb-groups.yml
shell> cat pb-groups.yml.j2
- hosts: localhost
  gather_facts: false
{% for group in my_groups %}
- import_playbook: pb-{{ group }}.yml
{% endfor %}
See the created files
shell> cat pb-ChildrenGroup1.yml
- hosts: "ChildrenGroup1"
  gather_facts: false
  tasks:
    - debug:
        msg: "localhost: Playbook started."
shell> cat pb-ChildrenGroup2.yml
- hosts: "ChildrenGroup2"
  gather_facts: false
  tasks:
    - debug:
        msg: "localhost: Playbook started."
shell> cat pb-groups.yml
- hosts: localhost
  gather_facts: false
- import_playbook: pb-ChildrenGroup1.yml
- import_playbook: pb-ChildrenGroup2.yml
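Note that the rendered msg reads "localhost: Playbook started." because {{ inventory_hostname }} was expanded by the template task itself, which ran on localhost. If each target host should report its own name at run time, one way (a sketch) is to protect the expression in the template with {% raw %} so it survives rendering:
shell> cat pb.yml.j2
- hosts: "{{ item }}"
  gather_facts: false
  tasks:
    - debug:
        msg: "{% raw %}{{ inventory_hostname }}{% endraw %}: Playbook started."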
2. Run created playbooks
shell> ansible-playbook pb-groups.yml
PLAY [localhost] ****
PLAY [ChildrenGroup1] ****
TASK [debug] ****
ok: [host1] =>
msg: 'localhost: Playbook started.'
ok: [host2] =>
msg: 'localhost: Playbook started.'
ok: [host3] =>
msg: 'localhost: Playbook started.'
PLAY [ChildrenGroup2] ****
TASK [debug] ****
ok: [host4] =>
msg: 'localhost: Playbook started.'
ok: [host5] =>
msg: 'localhost: Playbook started.'
ok: [host6] =>
msg: 'localhost: Playbook started.'
PLAY RECAP ****
...
Many children groups in the inventory
Change the inventory. For example
shell> cat hosts
[ParentGroup:children]
ChildrenGroup1
ChildrenGroup2
ChildrenGroup3
[ChildrenGroup1]
host1
host2
[ChildrenGroup2]
host4
host5
[ChildrenGroup3]
host3
host6
The commands below work as expected
shell> ansible-playbook pb-init.yml
...
shell> ansible-playbook pb-groups.yml
PLAY [localhost] ****
PLAY [ChildrenGroup1] ****
TASK [debug] ****
ok: [host1] =>
msg: 'localhost: Playbook started.'
ok: [host2] =>
msg: 'localhost: Playbook started.'
PLAY [ChildrenGroup2] ****
TASK [debug] ****
ok: [host4] =>
msg: 'localhost: Playbook started.'
ok: [host5] =>
msg: 'localhost: Playbook started.'
PLAY [ChildrenGroup3] ****
TASK [debug] ****
ok: [host3] =>
msg: 'localhost: Playbook started.'
ok: [host6] =>
msg: 'localhost: Playbook started.'
PLAY RECAP ****
...
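Regarding the question's point about pausing on failure: a hedged way to stop between groups is to make each generated play fail fast and ask for confirmation before it starts. A sketch of such a template (when every host of a play has failed, ansible-playbook aborts the run, so the remaining imported playbooks should not start; verify on your version):
shell> cat pb.yml.j2
- hosts: "{{ item }}"
  gather_facts: false
  any_errors_fatal: true   # a single failed host fails the whole group
  tasks:
    - pause:
        prompt: "Continue with group {{ item }}? Press Enter"
      run_once: true
    - debug:
        msg: "{% raw %}{{ inventory_hostname }}{% endraw %}: Playbook started."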

Related

Ansible: How to check multiple servers for a text file value, to decide which servers to run the script on?

I am trying to get Ansible to check whether a server is passive or active based on the value of a specific file on each server; Ansible will then decide which server to run the next script on.
For example with 2 servers:
Server1
cat /tmp/currentstate
PASSIVE
Server2
cat /tmp/currentstate
ACTIVE
In Ansible:
Trigger the next set of jobs on the server where the output was ACTIVE.
Once those jobs complete, trigger the next set of jobs on the server where the output was PASSIVE.
What I have done so far to grab the state and output the value to Ansible is
- hosts: "{{ hostname1 | mandatory }}"
gather_facts: no
tasks:
- name: Grab state of first server
shell: |
cat {{ ans_script_path }}currentstate.log
register: state_server1
- debug:
msg: "{{ state_server1.stdout }}"
- hosts: "{{ hostname2 | mandatory }}"
gather_facts: no
tasks:
- name: Grab state of second server
shell: |
cat {{ ans_script_path }}currentstate.log
register: state_server2
- debug:
msg: "{{ state_server2.stdout }}"
What I have done so far to trigger the script
- hosts: "{{ active_hostname | mandatory }}"
tasks:
- name: Run the shutdown on active server first
shell: sh {{ ans_script_path }}stopstart_terracotta_main.sh shutdown
register: run_result
- debug:
msg: "{{ run_result.stdout }}"
- hosts: "{{ passive_hostname | mandatory }}"
tasks:
- name: Run the shutdown on passive server first
shell: sh {{ ans_script_path }}stopstart_terracotta_main.sh shutdown
register: run_result
- debug:
msg: "{{ run_result.stdout }}"
but I don't know how to set the value of active_hostname & passive_hostname based on the value from the script above.
How can I set the Ansible variable of active_hostname & passive_hostname based on the output of the first section?
A better solution that came to my mind is to add the hosts to new groups according to their state. This is more scalable in case there are more than two hosts.
- hosts: all
  gather_facts: no
  vars:
    ans_script_path: /tmp/
  tasks:
    - name: Grab state of server
      shell: |
        cat {{ ans_script_path }}currentstate.log
      register: server_state
    - add_host:
        hostname: "{{ item }}"
        # every host will be added to a new group according to its state
        groups: "{{ 'active' if hostvars[item].server_state.stdout == 'ACTIVE' else 'passive' }}"
        # Shorter, but the new groups will be in capital letters
        # groups: "{{ hostvars[item].server_state.stdout }}"
      loop: "{{ ansible_play_hosts }}"
      changed_when: false
    - name: show the groups the host(s) are in
      debug:
        msg: "{{ group_names }}"
- hosts: active
  gather_facts: no
  tasks:
    - name: Run the shutdown on active server first
      shell: hostname -f # changed that for debugging
      register: run_result
    - debug:
        msg: "{{ run_result.stdout }}"
- hosts: passive
  gather_facts: no
  tasks:
    - name: Run the shutdown on passive server first
      shell: hostname -f
      register: run_result
    - debug:
        msg: "{{ run_result.stdout }}"
test-001 is PASSIVE
test-002 is ACTIVE
PLAY [all] ***************************************************************
TASK [Grab state of server] **********************************************
ok: [test-002]
ok: [test-001]
TASK [add_host] **********************************************************
ok: [test-001] => (item=test-001)
ok: [test-001] => (item=test-002)
TASK [show the groups the host(s) are in] ********************************
ok: [test-001] => {
"msg": [
"passive"
]
}
ok: [test-002] => {
"msg": [
"active"
]
}
PLAY [active] *************************************************************
TASK [Run the shutdown on active server first] ****************************
changed: [test-002]
TASK [debug] **************************************************************
ok: [test-002] => {
"msg": "test-002"
}
PLAY [passive] ************************************************************
TASK [Run the shutdown on passive server first] ****************************
changed: [test-001]
TASK [debug] **************************************************************
ok: [test-001] => {
"msg": "test-001"
}
PLAY RECAP ****************************************************************
test-001 : ok=5 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
test-002 : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
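A hedged alternative to add_host is the group_by module, which builds the groups from each host's own registered state without looping over hostvars. A minimal sketch, assuming the same /tmp/currentstate.log contents as above:
- hosts: all
  gather_facts: false
  tasks:
    - command: cat /tmp/currentstate.log
      register: server_state
      changed_when: false
    # each host puts itself into state_active or state_passive
    - group_by:
        key: "state_{{ server_state.stdout | trim | lower }}"
- hosts: state_active
  gather_facts: false
  tasks:
    - debug:
        msg: "{{ inventory_hostname }} runs first"
- hosts: state_passive
  gather_facts: false
  tasks:
    - debug:
        msg: "{{ inventory_hostname }} runs second"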
For example, given two remote hosts
shell> ssh admin@test_11 cat /tmp/currentstate.log
ACTIVE
shell> ssh admin@test_13 cat /tmp/currentstate.log
PASSIVE
The playbook below reads the files and runs the commands on active and passive servers
shell> cat pb.yml
- hosts: "{{ host1 }},{{ host2 }}"
  gather_facts: false
  vars:
    server_states: "{{ dict(ansible_play_hosts|
                            zip(ansible_play_hosts|
                                map('extract', hostvars, ['server_state', 'stdout'])|
                                list)) }}"
    server_active: "{{ server_states|dict2items|
                       selectattr('value', 'eq', 'ACTIVE')|
                       map(attribute='key')|list }}"
    server_pasive: "{{ server_states|dict2items|
                       selectattr('value', 'eq', 'PASSIVE')|
                       map(attribute='key')|list }}"
  tasks:
    - command: cat /tmp/currentstate.log
      register: server_state
    - debug:
        var: server_state.stdout
    - block:
        - debug:
            var: server_states
        - debug:
            var: server_active
        - debug:
            var: server_pasive
      run_once: true
    - command: echo 'Shutdown active server'
      register: out_active
      delegate_to: "{{ server_active.0 }}"
    - command: echo 'Shutdown passive server'
      register: out_pasive
      delegate_to: "{{ server_pasive.0 }}"
    - debug:
        msg: |
          {{ server_active.0 }}: [{{ out_active.stdout }}] {{ out_active.start }}
          {{ server_pasive.0 }}: [{{ out_pasive.stdout }}] {{ out_pasive.start }}
      run_once: true
shell> ansible-playbook pb.yml -e host1=test_11 -e host2=test_13
PLAY [test_11,test_13] ***********************************************************************
TASK [command] *******************************************************************************
changed: [test_13]
changed: [test_11]
TASK [debug] *********************************************************************************
ok: [test_11] =>
server_state.stdout: ACTIVE
ok: [test_13] =>
server_state.stdout: PASSIVE
TASK [debug] *********************************************************************************
ok: [test_11] =>
  server_states:
    test_11: ACTIVE
    test_13: PASSIVE
TASK [debug] *********************************************************************************
ok: [test_11] =>
  server_active:
  - test_11
TASK [debug] *********************************************************************************
ok: [test_11] =>
  server_pasive:
  - test_13
TASK [command] *******************************************************************************
changed: [test_11]
changed: [test_13 -> test_11]
TASK [command] *******************************************************************************
changed: [test_11 -> test_13]
changed: [test_13]
TASK [debug] *********************************************************************************
ok: [test_11] =>
msg: |-
test_11: [Shutdown active server] 2022-10-27 11:16:00.766309
test_13: [Shutdown passive server] 2022-10-27 11:16:02.501907
PLAY RECAP ***********************************************************************************
test_11: ok=8 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
test_13: ok=4 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
From the description of your use case I understand that you'd like to perform tasks on certain servers which have a service role installed (annot.: Terracotta Server), based on a certain service state.
Therefore, I'd like to recommend an approach with custom facts.
Depending on whether you have control over where currentstate.log is placed or how it is structured, you could use, for example, something like
cat /tmp/ansible/service/terracotta.fact
[currentstate]
ACTIVE = true
PASSIVE = false
or add dynamic facts by adding executable scripts to facts.d. In other words, you can add the current service state to your host facts by creating and running a script in facts.d which just reads the content of /tmp/currentstate.log.
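A minimal sketch of such an executable fact (hypothetical; the file must be executable and must print JSON or INI, and it would replace the static terracotta.fact above, making the state reachable as ansible_local.terracotta.currentstate):
shell> cat /tmp/ansible/service/terracotta.fact
#!/bin/sh
# emit the current state as JSON, e.g. {"currentstate": "ACTIVE"}
printf '{"currentstate": "%s"}' "$(cat /tmp/currentstate.log)"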
Then, a sample playbook like
---
- hosts: localhost
  become: false
  gather_facts: true
  fact_path: /tmp/ansible/service
  gather_subset:
    - "!all"
    - "!min"
    - "local"
  tasks:
    - name: Show Gathered Facts
      debug:
        msg: "{{ ansible_facts }}"
      when: ansible_local.terracotta.currentstate.active | bool
will result in output like
TASK [Show Gathered Facts] ******
ok: [localhost] =>
  msg:
    ansible_local:
      terracotta:
        currentstate:
          active: 'true'
          passive: 'false'
    gather_subset:
    - '!all'
    - '!min'
    - local
    module_setup: true
Another approach is to address how the inventory is built and group the hosts
[terracotta:children]
terracotta_active
terracotta_passive
[terracotta_active]
terracotta1.example.com
[terracotta_passive]
terracotta2.example.com
You can then easily define where a playbook or task should run, simply by targeting hosts and groups
ansible-inventory -i hosts --graph
#all:
|--#terracotta:
| |--#terracotta_active:
| | |--terracotta1.example.com
| |--#terracotta_passive:
| | |--terracotta2.example.com
|--#ungrouped:
ansible-inventory -i hosts terracotta_active --graph
#terracotta_active:
|--terracotta1.example.com
or use conditionals based on ansible_facts, for example
when: "'terracotta_active' in group_names"
From my understanding, both would be minimal and simple solutions without re-implementing functionality that already seems to be there.
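For illustration, targeting only the active group then becomes a one-line play header (a sketch using the group names from the inventory above; the command is a placeholder):
- hosts: terracotta_active
  gather_facts: false
  tasks:
    - name: Run the shutdown on the active server(s) first
      command: echo 'shutdown'   # placeholder for the real shutdown script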

Running ansible playbook in serial over hostgroups

I'm trying to run a playbook over n host groups serially (but 100% parallel within each host group). How do I achieve this?
I've tried things like:
- name: Test
  hosts: group1:group2
  serial: 100%
and even
- name: Test
  hosts: group1:group2
  serial: 1
thinking it would go group by group; however, these do not work.
How do I get it to run over all of group1, and then, after that, all of group2 (but fail if anything in group1 fails)?
Also, how do I get it to run over n groups? (There are many host groups, which might be tough to define in the hosts key.)
You can't control a playbook from another playbook. You'll have to control the playbook from outside, for example by a script. Given the inventory
shell> cat hosts-497
[group1]
srv1
[group2]
srv2
srv3
[group3]
srv4
srv5
srv6
and the playbook
shell> cat test-497.yml
- name: Test
  hosts: all
  gather_facts: false
  tasks:
    - debug:
        msg: "{{ '%H:%M:%S'|strftime }}: {{ inventory_hostname }}"
the debug task is executed in parallel by all hosts
shell> ansible-playbook -i hosts-497 test-497.yml
PLAY [Test] ***************************************************************
TASK [debug] **************************************************************
ok: [srv3] =>
msg: '20:51:30: srv3'
ok: [srv1] =>
msg: '20:51:30: srv1'
ok: [srv4] =>
msg: '20:51:30: srv4'
ok: [srv2] =>
msg: '20:51:30: srv2'
ok: [srv5] =>
msg: '20:51:30: srv5'
ok: [srv6] =>
msg: '20:51:30: srv6'
If you want to control the hosts, create a script and iterate over the groups, e.g.
shell> cat test-497.sh
#!/usr/bin/sh
for i in group1 group2 group3; do
    ansible-playbook -i hosts-497 --limit $i test-497.yml
done
gives (abridged)
shell> ./test-497.sh
PLAY [Test] *************************************************************
TASK [debug] ************************************************************
ok: [srv1] =>
msg: '20:56:41: srv1'
PLAY [Test] *************************************************************
TASK [debug] ************************************************************
ok: [srv3] =>
msg: '20:56:45: srv3'
ok: [srv2] =>
msg: '20:56:45: srv2'
PLAY [Test] *************************************************************
TASK [debug] ************************************************************
ok: [srv5] =>
msg: '20:56:52: srv5'
ok: [srv6] =>
msg: '20:56:52: srv6'
ok: [srv4] =>
msg: '20:56:53: srv4'
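The question also asks how to fail if anything in group1 fails, and how to cover n groups without hard-coding them. A hedged extension of the script (a sketch; it assumes jq is available and treats every top-level key of ansible-inventory --list except all, ungrouped and _meta as a group to serve):
shell> cat test-497-groups.sh
#!/usr/bin/sh
set -e   # abort at the first group whose playbook run fails
for i in $(ansible-inventory -i hosts-497 --list |
           jq -r 'keys[] | select(. != "all" and . != "ungrouped" and . != "_meta")'); do
    ansible-playbook -i hosts-497 --limit "$i" test-497.yml
done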

How to create a one-time user prompt input in ansible?

I have the following oversimplified Ansible playbook:
- name: Prepare worker nodes
  hosts: "{{ hosts }}"
  serial:
    - 1
    - 3
  remote_user: root
  any_errors_fatal: true
  vars:
    hosts: nodes
    reboot: false
  tasks:
    - pause:
        prompt: "Reboot server(s) to make sure things are working during setup? (Y/n)"
        echo: true
      register: confirm_reboot
      tags: [ untagged, hostname, netplan, firewalld ]
    - set_fact:
        reboot: "{{ (confirm_reboot.user_input == '' or confirm_reboot.user_input == 'Y' or confirm_reboot.user_input == 'y' ) | ternary('True', 'False') }}"
      tags: [ untagged, hostname, netplan, firewalld, firewalld-install, firewalld-config ]
    - debug:
        msg: "{{ reboot }}"
It asks for the user's input so it can decide on some reboot policies.
This works just fine when you have only one node, but when you have multiple nodes it will prompt for each one. Suppose you have 42 nodes: it will ask you 42 times.
I'm trying to figure out whether there is an easy way to make the prompt appear just once and share the result among the nodes. Maybe I have missed something in the docs?
Given the inventory
shell> cat hosts
[test]
host1
host2
host3
host4
host5
the playbook
shell> cat playbook.yml
---
- hosts: test
  serial:
    - 1
    - 3
  gather_facts: false
  tasks:
    - pause:
        prompt: "Reboot? (Y/n)"
        echo: true
      register: confirm_reboot
      run_once: true
    - debug:
        msg: "Reboot {{ inventory_hostname }}"
      when: confirm_reboot.user_input|lower == 'y'
works as expected
shell> ansible-playbook -i hosts playbook.yml
PLAY [test] *********************************
TASK [pause] ********************************
[pause]
Reboot? (Y/n):
ok: [host1]
TASK [debug] ********************************
ok: [host1] =>
msg: Reboot host1
PLAY [test] *********************************
TASK [pause] ********************************
[pause]
Reboot? (Y/n):
ok: [host2]
TASK [debug] ********************************
ok: [host2] =>
msg: Reboot host2
ok: [host3] =>
msg: Reboot host3
ok: [host4] =>
msg: Reboot host4
PLAY [test] *********************************
TASK [pause] ********************************
[pause]
Reboot? (Y/n):
ok: [host5]
TASK [debug] ********************************
ok: [host5] =>
msg: Reboot host5
Q: "Require the input just once for the entire playbook and be propagated to all hosts."
A: Split the playbook, e.g.
shell> cat playbook.yml
---
- hosts: test
  gather_facts: false
  tasks:
    - pause:
        prompt: "Reboot? (Y/n)"
        echo: true
      register: confirm_reboot
      run_once: true
- hosts: test
  serial:
    - 1
    - 3
  gather_facts: false
  tasks:
    - debug:
        msg: "Reboot {{ inventory_hostname }}"
      when: confirm_reboot.user_input|lower == 'y'
the variable from the first play will be shared among all hosts in the second play
shell> ansible-playbook -i hosts playbook.yml
PLAY [test] *********************************
TASK [pause] ********************************
[pause]
Reboot? (Y/n):
ok: [host1]
PLAY [test] *********************************
TASK [debug] ********************************
ok: [host1] =>
msg: Reboot host1
PLAY [test] *********************************
TASK [debug] ********************************
ok: [host3] =>
msg: Reboot host3
ok: [host2] =>
msg: Reboot host2
ok: [host4] =>
msg: Reboot host4
PLAY [test] *********************************
TASK [debug] ********************************
ok: [host5] =>
msg: Reboot host5
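Another hedged option is vars_prompt, which is evaluated once when the play starts rather than once per serial batch, so no extra play is needed (a sketch; worth verifying on your Ansible version):
- hosts: test
  serial:
    - 1
    - 3
  gather_facts: false
  vars_prompt:
    - name: confirm_reboot
      prompt: "Reboot? (Y/n)"
      private: false
  tasks:
    - debug:
        msg: "Reboot {{ inventory_hostname }}"
      when: confirm_reboot|lower == 'y'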
It looks like the only way this will work is by using delegate_to and delegate_facts. I came up with something like this:
- name: Prepare worker nodes
  hosts: "{{ hosts }}"
  serial:
    - 1
    - 3
  remote_user: root
  any_errors_fatal: true
  vars:
    hosts: nodes
    reboot: true
  pre_tasks:
    - pause:
        prompt: "Reboot server(s) to make sure things are working during setup? (Y/n)"
        echo: true
      register: confirm_reboot
      run_once: true
      delegate_to: localhost
      delegate_facts: true
      tags: [ untagged, hostname, netplan, firewalld, firewalld-install, firewalld-config ]
      when: "'reboot' not in hostvars['localhost']"
    - set_fact:
        reboot: "{{ (confirm_reboot.user_input == '' or confirm_reboot.user_input == 'Y' or confirm_reboot.user_input == 'y' ) | ternary('True', 'False') }}"
      run_once: true
      delegate_to: localhost
      delegate_facts: true
      tags: [ untagged, hostname, netplan, firewalld, firewalld-install, firewalld-config ]
      when: "'reboot' not in hostvars['localhost']"
    - set_fact:
        reboot: "{{ hostvars['localhost']['reboot'] }}"
      run_once: true
  tasks:
    - debug:
        msg: "{{ hostvars['localhost'] }}"
      tags: [ untagged, hostname, netplan, firewalld, firewalld-install, firewalld-config ]
    - debug:
        msg: "{{ reboot }}"
      tags: [ untagged, hostname, netplan, firewalld, firewalld-install, firewalld-config ]
This works by delegating the fact to localhost (the control node) and then using it by reference; the reference seems to be kept between the different batches. It is a hackish workaround to me, but since I don't have that much time to dig deeper into the "why", it'll have to do for now.
If anybody figures out a better way, feel free to post your answer.

Access variable from one role in another role in an Ansible playbook with multiple hosts

I'm using the latest version of Ansible, and I am trying to use a default variable from role-one (used on host one) in role-two (used on host two), but I can't get it to work.
Nothing I have found in the documentation or on Stack Overflow has really helped. I'm not sure what I am doing wrong. Ideally I want to set the value of the variable once and be able to use it in another role for any host in my playbook.
I've broken it down below.
In my inventory I have a hosts group called [test] which has two hosts aliased as one and two.
[test]
one ansible_host=10.0.1.10 ansible_connection=ssh ansible_user=centos ansible_ssh_private_key_file=<path_to_key>
two ansible_host=10.0.1.20 ansible_connection=ssh ansible_user=centos ansible_ssh_private_key_file=<path_to_key>
I have a single playbook with a play for each of these hosts and I supply the hosts: value as "{{ host_group }}[0]" for host one and "{{ host_group }}[1]" for host two.
The play for host one uses a role called role-one and the play for host two uses a role called role-two.
- name: Test Sharing Role Variables
  hosts: "{{ host_group }}[0]"
  roles:
    - ../../ansible-roles/role-one
- name: Test Sharing Role Variables
  hosts: "{{ host_group }}[1]"
  roles:
    - ../../ansible-roles/role-two
In role-one I have set a variable variable_one.
---
# defaults file for role-one
variable_one: Role One Variable
I want to use the value of variable_one in a template in role-two, but I haven't had any luck. I'm using the task below in role-two to test whether the variable is getting picked up.
---
# tasks file for role-two
- debug:
    msg: "{{ variable_one }}"
When I run the playbook with ansible-playbook test.yml --extra-vars "host_group=test" I get the below failure.
TASK [../../ansible-roles/role-two : debug] ***********************************************************************************************************************************************************************************************
fatal: [two]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: \"hostvars['test']\" is undefined\n\nThe error appears to be in 'ansible-roles/role-two/tasks/main.yml': line 3, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n# tasks file for role-two\n- debug:\n ^ here\n"}
Variables declared in roles are scoped to the play. If you want to access a variable from role-one in role-two, they would both need to be in the same play. For example, you could write:
- name: Test Sharing Role Variables
  hosts: "{{ host_group }}"
  tasks:
    - import_role:
        name: role-one
      when: inventory_hostname == "one"
    - import_role:
        name: role-two
      when: inventory_hostname == "two"
Alternatively, you could restructure your roles so that the variables can be imported separately from your actions. That is, have a role-one-vars role that does nothing but define variables, and then import that in both role-one and role-two. You would have a structure something like:
playbook.yml
hosts
roles/
  role-one/
    tasks/
      main.yml
  role-one-vars/
    vars/
      main.yml
  role-two/
    tasks/
      main.yml
And role-one/tasks/main.yml would look like:
- import_role:
    name: role-one-vars
- debug:
    msg: "in role-one: {{ variable_one }}"
role-two/tasks/main.yml would look like:
---
- import_role:
    name: role-one-vars
- debug:
    msg: "in role-two: {{ variable_one }}"
And role-one-vars/vars/main.yml would look like:
---
variable_one: role one variable
Putting this all together, the output looks like:
PLAY [Test Sharing Role Variables] *****************************************************************************************************************************************
TASK [Gathering Facts] *****************************************************************************************************************************************************
ok: [one]
TASK [role-one : debug] ****************************************************************************************************************************************************
ok: [one] => {
"msg": "in role-one: role one variable"
}
PLAY [Test Sharing Role Variables] *****************************************************************************************************************************************
TASK [Gathering Facts] *****************************************************************************************************************************************************
ok: [two]
TASK [role-two : debug] ****************************************************************************************************************************************************
ok: [two] => {
"msg": "in role-two: role one variable"
}
PLAY RECAP *****************************************************************************************************************************************************************
one : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
two : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Q: "Access variable from one role in another role in an Ansible playbook with multiple hosts"
A: Short answer: Use set_fact and put the variable into the hostvars.
Details: Given the roles
shell> cat roles/role-one/defaults/main.yml
variable_one: Role One Variable
shell> cat roles/role-one/tasks/main.yml
- debug:
    var: variable_one
shell> cat roles/role-two/tasks/main.yml
- debug:
    var: variable_one
The playbook
- hosts: one
  roles:
    - role-one
  tasks:
    - debug:
        var: variable_one
- hosts: two
  roles:
    - role-two
- hosts: one
  tasks:
    - debug:
        var: variable_one
gives (abridged)
PLAY [one] ************************************************
TASK [role-one : debug] ****
ok: [one] =>
variable_one: Role One Variable
TASK [debug] ****
ok: [one] =>
variable_one: Role One Variable
PLAY [two] ************************************************
TASK [role-two : debug] ****
ok: [two] =>
variable_one: VARIABLE IS NOT DEFINED!
PLAY [one] ************************************************
TASK [debug] ****
ok: [one] =>
variable_one: VARIABLE IS NOT DEFINED!
As expected, the variable variable_one is visible to the tasks in the first play. But there is no reason the variable should be visible to host two in the second play. The variable is not visible to the same host in the third play either, because it hasn't been stored in the hostvars, aka "instantiated". The playbook below
- hosts: one
  roles:
    - role-one
  tasks:
    - debug:
        var: variable_one
    - set_fact:
        variable_one: "{{ variable_one }}"
- hosts: two
  roles:
    - role-two
- hosts: one
  tasks:
    - debug:
        var: variable_one
gives (abridged)
PLAY [one] ************************************************
TASK [role-one : debug] ****
ok: [one] =>
variable_one: Role One Variable
TASK [debug] ****
ok: [one] =>
variable_one: Role One Variable
TASK [set_fact] ****
ok: [one]
PLAY [two] ************************************************
TASK [role-two : debug] ****
ok: [two] =>
variable_one: VARIABLE IS NOT DEFINED!
PLAY [one] ************************************************
TASK [debug] ****
ok: [one] =>
variable_one: Role One Variable
Now the variable is visible to host one in the whole playbook, and it can be made visible to other hosts via hostvars as well. For example, the playbook below
- hosts: one
  roles:
    - role-one
  tasks:
    - debug:
        var: variable_one
    - set_fact:
        variable_one: "{{ variable_one }}"
- hosts: two
  tasks:
    - set_fact:
        variable_one: "{{ hostvars.one.variable_one }}"
    - include_role:
        name: role-two
gives (abridged)
PLAY [one] ************************************************
TASK [role-one : debug] ****
ok: [one] =>
variable_one: Role One Variable
TASK [debug] ****
ok: [one] =>
variable_one: Role One Variable
TASK [set_fact] ****
ok: [one]
PLAY [two] ************************************************
TASK [set_fact] ****
ok: [two]
TASK [include_role : role-two] ****
TASK [role-two : debug] ****
ok: [two] =>
variable_one: Role One Variable
The problem with the above setting is that the host referencing hostvars is hardcoded. A better approach is to "instantiate" the variable in the first play for all hosts. For example, add a dummy task to the role
shell> cat roles/role-one/tasks/noop.yml
- meta: noop
Then, in the first play, include all hosts, import the role with run_once (running the dummy task only), and "instantiate" the variable for all hosts. For example
- hosts: all
  tasks:
    - import_role:
        name: role-one
        tasks_from: noop.yml
      run_once: true
    - set_fact:
        variable_one: "{{ variable_one }}"
      run_once: true
- hosts: two
  roles:
    - role-two
- hosts: one
  roles:
    - role-two
gives (abridged)
PLAY [all] ************************************************
TASK [set_fact] ****
ok: [one]
PLAY [two] ************************************************
TASK [role-two : debug] ****
ok: [two] =>
variable_one: Role One Variable
PLAY [one] ************************************************
TASK [role-two : debug] ****
ok: [one] =>
variable_one: Role One Variable
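A hedged variation that avoids the noop import: keep the first play on host one and push the fact to every inventory host explicitly with delegate_to and delegate_facts (a sketch):
- hosts: one
  roles:
    - role-one
  tasks:
    # set the fact on each inventory host, not only on the current one
    - set_fact:
        variable_one: "{{ variable_one }}"
      delegate_to: "{{ item }}"
      delegate_facts: true
      loop: "{{ groups.all }}"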

Ansible run task multiple times based on groups

How do you run a task on each group, not just once across the groups?
I was expecting the "Run this on each host" tasks to run once for each group_var value. Instead it seems to just pick one and run with it.
I plan on breaking these across multiple servers later, but for now it should be able to run on one autoscale group and then easily break up into multiple autoscale groups later as demand increases.
playbook.yml:
---
# Run with: ansible-playbook -i localhost, playbook.yml
- name: Register Groups
  hosts: localhost
  connection: local
  tasks:
    - name: Add the groups
      add_host:
        name: localhost
        ansible_connection: local
        groups: rest-api, msg-consumer
- name: Run this on each host
  hosts:
    - rest-api
    - msg-consumer
  tasks:
    - name: Say type
      debug: var=item
      with_items: run_type
group_vars/rest-api:
---
run_type: web
group_vars/msg-consumer:
---
run_type: consumer
Output Ansible 1.8.2:
$ ansible-playbook -i localhost, playbook.yml
PLAY [Register Groups] ********************************************************
GATHERING FACTS ***************************************************************
ok: [localhost]
TASK: [Add the groups] ********************************************************
ok: [localhost]
PLAY [Run this on each host] **************************************************
GATHERING FACTS ***************************************************************
ok: [localhost]
TASK: [Say type] **************************************************************
ok: [localhost] => (item=consumer) => {
"item": "consumer"
}
PLAY RECAP ********************************************************************
localhost : ok=4 changed=0 unreachable=0 failed=0
Note: It may be something else. I thought I could also clutter my playbook by breaking up the tasks as follows:
---
- name: Register Groups
  hosts: localhost
  connection: local
  tasks:
    - name: Add the groups
      add_host:
        name: localhost
        ansible_connection: local
        groups: rest-api, msg-consumer
- name: Run this on each host
  hosts:
    - msg-consumer
  tasks:
    - name: Say type
      debug: var=item
      with_items: run_type
- name: Run this on each host
  hosts:
    - rest-api
  tasks:
    - name: Say type
      debug: var=item
      with_items: run_type
But the output for the 2nd playbook is:
$ ansible-playbook -i localhost, playbook2.yml
PLAY [Register Groups] ********************************************************
GATHERING FACTS ***************************************************************
ok: [localhost]
TASK: [Add the groups] ********************************************************
ok: [localhost]
PLAY [Run this on each host] **************************************************
GATHERING FACTS ***************************************************************
ok: [localhost]
TASK: [Say type] **************************************************************
ok: [localhost] => (item=consumer) => {
"item": "consumer"
}
PLAY [Run this on each host] **************************************************
GATHERING FACTS ***************************************************************
ok: [localhost]
TASK: [Say type] **************************************************************
ok: [localhost] => (item=consumer) => {
"item": "consumer"
}
PLAY RECAP ********************************************************************
localhost : ok=6 changed=0 unreachable=0 failed=0
Edit: Yet another attempt to access the data; it looks like group_vars isn't behaving like I expect. The following also outputs consumer twice.
---
# Run with: ansible-playbook -i localhost, playbook.yml
- name: Register Groups
  hosts: localhost
  connection: local
  tasks:
    - name: Add the groups
      add_host:
        name: localhost
        ansible_connection: local
        groups: rest-api, msg-consumer
- name: Run this on each host
  hosts:
    - msg-consumer
    - rest-api
  tasks:
    - name: What's your run type
      debug: var=hostvars[groups[item][0]]['run_type']
      with_items: group_names
The easiest way to do this is to use aliases for the hostnames instead of the real hosts:
---
- name: Register Groups
  hosts: localhost
  connection: local
  tasks:
    - name: Add the rest-api alias for my app
      add_host:
        name: my-app-rest-api
        ansible_ssh_host: 127.0.0.1
        groups: rest-api
    - name: Add the msg-consumer alias for my app
      add_host:
        name: my-app-msg-consumer
        ansible_ssh_host: 127.0.0.1
        groups: msg-consumer
- name: Test Run Types
  hosts:
    - msg-consumer
    - rest-api
  tasks:
    - name: What's your run type
      debug: msg="Run Type of {{ ansible_ssh_host }} is {{ run_type }}"
now you can use your group_vars again:
group_vars/rest-api:
---
run_type: web
group_vars/msg-consumer:
---
run_type: consumer
and the output will be:
PLAY [Register Groups] ********************************************************
TASK: [Add the rest-api alias for my app] *************************************
ok: [localhost]
TASK: [Add the msg-consumer alias for my app] *********************************
ok: [localhost]
PLAY [Test Run Types] *********************************************************
TASK: [What's your run type] **************************************************
ok: [my-app-msg-consumer] => {
"msg": "Run Type of 127.0.0.1 is consumer"
}
ok: [my-app-rest-api] => {
"msg": "Run Type of 127.0.0.1 is web"
}
For now this is the best I can come up with:
---
- name: Register Groups
  hosts: localhost
  connection: local
  tasks:
    - name: Add new host group
      add_host:
        name: 127.0.0.1
        ansible_connection: local
        groups: new-server
        run_types:
          - rest-api
          - msg-consumer
    - name: Add another new host group
      add_host:
        name: 127.0.0.2
        ansible_connection: local
        groups: new-server
        run_types:
          - nothing
- name: Test Run Types Server 1
  hosts:
    - new-server
  tasks:
    - name: What's your run type
      debug: var=item
      with_items: run_types
Note: The hosts must be different for this to work; otherwise add_host will override and use the last variable value it was given.
See my answer under Ansible run task once per database-name.
Basically, there is no run_once_per_group; the closest method I'm aware of is a true run_once that loops over the groups. To make matters more cluttered, there is no group_vars dictionary variable.
---
- hosts: all
  tasks:
    - name: "do this once per group"
      delegate_to: localhost
      debug:
        msg: "do something on {{ hostvars[groups[item.key].0]['somevar'] }} for group named {{ item.key }}"
      run_once: yes
      with_dict: groups
      when: item.key not in ['all', 'ungrouped']
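On current Ansible, the same once-per-group loop can be written with loop and the dict2items filter (a hedged sketch of the same idea):
- hosts: all
  gather_facts: false
  tasks:
    - name: do this once per group
      debug:
        msg: "first host of {{ item.key }} is {{ item.value.0 }}"
      run_once: true
      delegate_to: localhost
      # groups is a dict of group name -> list of hosts
      loop: "{{ groups | dict2items }}"
      when: item.key not in ['all', 'ungrouped']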
