Ansible: can we use an if condition at hosts?

How to run the playbook on a specific set of hosts with a conditional variable
vars_file.yml
deployment: blue
hosts_file.yml
[east1]
127.0.0.1
127.0.0.2
[west2]
127.0.0.3
127.0.0.4
playbook.yml
---
- hosts: all
  vars_files:
    - 'vars_file.yml'
  tasks:
    - copy: src=config dest=/tmp/

- hosts: {{ east1[0] if deployment == "blue" else west2[0] }}
  vars_files:
    - 'vars_file.yml'
  tasks:
    - shell: "./startup_script restart"
Note: I can't pass variables through the command line and I can't move the task into a separate playbook.

You can access variables defined on another host by targeting the hostvars dictionary key of that host.
In order to do that, though, you need to register the variable on the host with set_fact; importing it via vars_files won't be enough.
Here is an example, given the inventory:
all:
  children:
    east1:
      hosts:
        east_node_1:
          ansible_host: node1
        east_node_2:
          ansible_host: node4
    west2:
      hosts:
        west_node_1:
          ansible_host: node2
        west_node_2:
          ansible_host: node3
And the playbook:
- hosts: localhost
  gather_facts: no
  vars_files:
    - vars_file.yml
  tasks:
    - set_fact:
        deployment: "{{ deployment }}"

- hosts: >-
    {{ groups['east1'][0]
       if hostvars['localhost'].deployment == 'blue'
       else groups['west2'][0]
    }}
  gather_facts: no
  tasks:
    - debug:
This would yield the recaps:
PLAY [localhost] *******************************************************************************************************************
TASK [set_fact] ********************************************************************************************************************
ok: [localhost]
PLAY [east_node_1] *****************************************************************************************************************
TASK [debug] ***********************************************************************************************************************
ok: [east_node_1] => {
"msg": "Hello world!"
}
PLAY RECAP *************************************************************************************************************************
east_node_1 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
when vars_file.yml contains
deployment: blue
And
PLAY [localhost] *******************************************************************************************************************
TASK [set_fact] ********************************************************************************************************************
ok: [localhost]
PLAY [west_node_1] *****************************************************************************************************************
TASK [debug] ***********************************************************************************************************************
ok: [west_node_1] => {
"msg": "Hello world!"
}
PLAY RECAP *************************************************************************************************************************
west_node_1 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
when vars_file.yml contains
deployment: red
Another equivalent construction, using patterns to target the group, would just see the host target changed to:
- hosts: >-
    {{ 'east1[0]'
       if hostvars['localhost'].deployment == 'blue'
       else 'west2[0]'
    }}
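For reference, running either variant could look like the command below, assuming the YAML inventory above is saved as inventory.yml next to the playbook (the file name is my assumption):
shell> ansible-playbook -i inventory.yml playbook.yml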

Related

Ansible playbook target patterns

I would like to ask whether it is possible to filter the inventory, using patterns based on variables defined inside the inventory itself.
For example:
I have this inventory
groupA:
  hosts:
    host1A:
      env: pre
    host2A:
      env: prod
groupB:
  hosts:
    host1B:
      env: pre
    host2B:
      env: prod
and this is the playbook:
- name: test
  hosts: **only hosts that contain env:pre??**
  role: backend
Is it possible to achieve that? Maybe using a regular expression?
Please help me out.
Thanks in advance.
Given the inventory
shell> cat hosts
groupA:
  hosts:
    host1A:
      env: pre
    host1B:
      env: prod
groupB:
  hosts:
    host1B:
      env: pre
    host1C:
      env: prod
The simplest option is to use the group_by module. For example,
- hosts: all
  gather_facts: false
  tasks:
    - group_by:
        key: group_{{ env }}
    - debug:
        var: groups
      run_once: true
gives
PLAY [all] ************************************************************************************
TASK [group_by] *******************************************************************************
changed: [host1A]
changed: [host1B]
changed: [host1C]
TASK [debug] **********************************************************************************
ok: [host1A] =>
  groups:
    all:
    - host1A
    - host1B
    - host1C
    groupA:
    - host1A
    - host1B
    groupB:
    - host1B
    - host1C
    group_pre:
    - host1A
    - host1B
    group_prod:
    - host1C
    ungrouped: []
PLAY RECAP ************************************************************************************
host1A: ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host1B: ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host1C: ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
This is probably not what you want. The host host1B is not a member of the prod group because the variable env evaluates to 'pre'.
You can easily fix it by putting all options into a comma-separated list. For example,
shell> cat hosts
groupA:
  hosts:
    host1A:
      env: pre
    host1B:
      env: prod,pre
groupB:
  hosts:
    host1B:
    host1C:
      env: prod
and the playbook
- hosts: all
  gather_facts: false
  tasks:
    - group_by:
        key: "group_{{ item }}"
      loop: "{{ env.split(',') }}"
    - debug:
        var: groups
      run_once: true
gives what you want
PLAY [all] ************************************************************************************
TASK [group_by] *******************************************************************************
ok: [host1A] => (item=pre)
ok: [host1B] => (item=prod)
ok: [host1C] => (item=prod)
ok: [host1B] => (item=pre)
TASK [debug] **********************************************************************************
ok: [host1A] =>
  groups:
    all:
    - host1A
    - host1B
    - host1C
    groupA:
    - host1A
    - host1B
    groupB:
    - host1B
    - host1C
    group_pre:
    - host1A
    - host1B
    group_prod:
    - host1C
    - host1B
    ungrouped: []
PLAY RECAP ************************************************************************************
host1A: ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host1B: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host1C: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
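With the groups in place, a later play in the same playbook run can target the dynamic group directly, which is what the question asked for. A minimal sketch, assuming the group_by play above has already run and that the backend role from the question exists:
# Target only the hosts that group_by put into group_pre (env contains 'pre')
- name: test
  hosts: group_pre
  gather_facts: false
  roles:
    - backend   # role name taken from the question; assumed to exist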

Ansible stop the whole playbook if all hosts in a single play fail

I'm struggling to understand the intended behavior of Ansible when all hosts fail in a single play but there are other plays targeting other hosts in the playbook.
For example consider the following playbook:
---
- name: P1
  hosts: a,b
  tasks:
    - name: Assert 1
      ansible.builtin.assert:
        that: 1==2
      when: inventory_hostname != "c"

- name: P2
  hosts: y,z
  tasks:
    - name: Debug 2
      ansible.builtin.debug:
        msg: 'YZ'
All 4 hosts a,b,y,z point to localhost for the sake of clarity.
What happens is that the assert fails and the whole playbook stops. However, this seems to contradict the documentation, which says that in case of an error Ansible stops executing on the failed host but continues on the other hosts; see Error handling.
If I change the condition to when: inventory_hostname != 'b', so that b does not fail, then the playbook continues and executes the second play on hosts y,z.
To me the initial failure does not seem reasonable, because the hosts y,z have not experienced any errors, so execution on them should not be prevented by the error on the other hosts.
Is this a bug or am I missing something?
It's not a bug. It's by design (see the Notes below). As discussed in the comments to the other answer, the decision whether or not to terminate the whole playbook when all hosts in a play fail seems to be a trade-off. Either the user has to handle how to proceed to the next play when necessary, or how to stop the whole playbook when necessary. You can see in the examples below that both options require handling errors in a block to approximately the same extent.
The first case was implemented by Ansible: A playbook will terminate when all hosts in a play fail. For example,
- hosts: host01,host02
  tasks:
    - assert:
        that: false

- hosts: host03
  tasks:
    - debug:
        msg: Hello
PLAY [host01,host02] *************************************************************************
TASK [assert] ********************************************************************************
fatal: [host01]: FAILED! => changed=false
assertion: false
evaluated_to: false
msg: Assertion failed
fatal: [host02]: FAILED! => changed=false
assertion: false
evaluated_to: false
msg: Assertion failed
PLAY RECAP ***********************************************************************************
host01 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
host02 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
The playbook will proceed to the next play when not all hosts in a play fail. For example,
- hosts: host01,host02
  tasks:
    - assert:
        that: false
      when: inventory_hostname == 'host01'

- hosts: host03
  tasks:
    - debug:
        msg: Hello
PLAY [host01,host02] *************************************************************************
TASK [assert] ********************************************************************************
fatal: [host01]: FAILED! => changed=false
assertion: false
evaluated_to: false
msg: Assertion failed
skipping: [host02]
PLAY [host03] ********************************************************************************
TASK [debug] *********************************************************************************
ok: [host03] =>
msg: Hello
PLAY RECAP ***********************************************************************************
host01 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
host02 : ok=0 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
host03 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
To proceed to the next play when all hosts in a play fail, a user has to clear the errors and, optionally, end the hosts in the play as well. For example,
- hosts: host01,host02
  tasks:
    - block:
        - assert:
            that: false
      rescue:
        - meta: clear_host_errors
        - meta: end_host

- hosts: host03
  tasks:
    - debug:
        msg: Hello
PLAY [host01,host02] *************************************************************************
TASK [assert] ********************************************************************************
fatal: [host01]: FAILED! => changed=false
assertion: false
evaluated_to: false
msg: Assertion failed
fatal: [host02]: FAILED! => changed=false
assertion: false
evaluated_to: false
msg: Assertion failed
TASK [meta] **********************************************************************************
TASK [meta] **********************************************************************************
TASK [meta] **********************************************************************************
PLAY [host03] ********************************************************************************
TASK [debug] *********************************************************************************
ok: [host03] =>
msg: Hello
PLAY RECAP ***********************************************************************************
host01 : ok=0 changed=0 unreachable=0 failed=0 skipped=0 rescued=1 ignored=0
host02 : ok=0 changed=0 unreachable=0 failed=0 skipped=0 rescued=1 ignored=0
host03 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Update: The playbook can't be stopped by meta end_play after this was 'fixed' in 2.12.2.
It was possible to end the whole playbook with meta end_play in Ansible 2.12.1. Imagine that failing all hosts in a play didn't terminate the whole playbook, i.e. imagine it's not implemented that way. Then a user might want to terminate the playbook on their own. For example,
- hosts: host01,host02
  tasks:
    - block:
        - assert:
            that: false
      rescue:
        - meta: clear_host_errors
        - set_fact:
            host_failed: true
        - meta: end_play
          when: ansible_play_hosts_all|map('extract', hostvars, 'host_failed') is all
          run_once: true

- hosts: host03
  tasks:
    - debug:
        msg: Hello
PLAY [host01,host02] *************************************************************************
TASK [assert] ********************************************************************************
fatal: [host01]: FAILED! => changed=false
assertion: false
evaluated_to: false
msg: Assertion failed
fatal: [host02]: FAILED! => changed=false
assertion: false
evaluated_to: false
msg: Assertion failed
TASK [meta] **********************************************************************************
TASK [set_fact] ******************************************************************************
ok: [host01]
ok: [host02]
TASK [meta] **********************************************************************************
PLAY RECAP ***********************************************************************************
host01 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=1 ignored=0
host02 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=1 ignored=0
Notes
meta end_host means: 'end the play for this host'
- hosts: host01
  tasks:
    - meta: end_host

- hosts: host01,host02
  tasks:
    - debug:
        msg: Hello
PLAY [host01] ********************************************************************************
TASK [meta] **********************************************************************************
PLAY [host01,host02] *************************************************************************
TASK [debug] *********************************************************************************
ok: [host01] =>
msg: Hello
ok: [host02] =>
msg: Hello
PLAY RECAP ***********************************************************************************
host01 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host02 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
meta end_play means: 'end the playbook' (this was 'fixed' in 2.12.2. See #76672)
- hosts: host01
  tasks:
    - meta: end_play

- hosts: host01,host02
  tasks:
    - debug:
        msg: Hello
PLAY [host01] ********************************************************************************
TASK [meta] **********************************************************************************
PLAY RECAP ***********************************************************************************
Quoting from #37309
If all hosts in the current play batch (fail) the play ends, this is 'as designed' behavior ... 'play batch' is 'serial size' or all hosts in play if serial is not set.
Quoting from the source
# check the number of failures here, to see if they're above the maximum
# failure percentage allowed, or if any errors are fatal. If either of those
# conditions are met, we break out, otherwise, we only break out if the entire
# batch failed
failed_hosts_count = len(self._tqm._failed_hosts) + len(self._tqm._unreachable_hosts) - \
                     (previously_failed + previously_unreachable)

if len(batch) == failed_hosts_count:
    break_play = True
    break
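To illustrate the 'play batch' point, a minimal sketch, assuming two hosts and serial: 1; the first batch then contains only host01, so its failure alone should be enough to end the whole playbook, and host02 would never be attempted:
# With serial: 1 each batch is a single host; if that host fails, the whole batch has failed
- hosts: host01,host02
  serial: 1
  tasks:
    - assert:
        that: false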
A playbook with multiple plays is just sequential; it cannot know up front that you are going to have other hosts in a later play.
Because your assert task in the first play has exhausted all hosts of the play, it makes sense that the playbook stops there, as it won't have anything to do in any further tasks of P1. And remember, it doesn't know anything about P2 yet, so it just ends there.

Ansible how to select hosts based on certain attributes and use their ip addresses to create a list at runtime

I have a specific question about data manipulation in ansible.
In my inventory file, I have a group called postgresql as below:
[postgresql]
host1 ansible_host=1.1.1.1 postgresql_cluster_port=5432 postgresql_harole=master
host2 ansible_host=2.2.2.2 postgresql_cluster_port=5432 postgresql_harole=slave postgresql_master_ip=1.1.1.1
host3 ansible_host=3.3.3.3 postgresql_cluster_port=5432 postgresql_harole=slave postgresql_master_ip=1.1.1.1
host4 ansible_host=4.4.4.4 postgresql_cluster_port=5432 postgresql_harole=slave postgresql_master_ip=1.1.1.1
Somewhere in my playbook I need to use filters to build a list of the IP addresses of all hosts whose postgresql_harole=slave, as below:
- hosts: postgresql
  gather_facts: True
  remote_user: root
  tasks:
    - set_fact:
        slave_ip_list: "{{ expressions }}"
I am pulling my hair out trying to get the correct expression... any help is highly appreciated!
Q: "List of ip addresses of all hosts whose postgresql_harole=slave"
A: Select the attribute from hostvars, e.g.
- set_fact:
    slave_ip_list: "{{ hostvars|dict2items|
                       selectattr('value.postgresql_harole', 'eq', 'slave')|
                       map(attribute='value.ansible_host')|
                       list }}"
  run_once: true
gives
slave_ip_list:
- 2.2.2.2
- 3.3.3.3
- 4.4.4.4
Select the hostvars for the group postgresql first if there are hostvars for other hosts, e.g. as a result of - hosts: all. The task below gives the same result
- set_fact:
    slave_ip_list: "{{ groups.postgresql|
                       map('extract', hostvars)|
                       selectattr('postgresql_harole', 'eq', 'slave')|
                       map(attribute='ansible_host')|
                       list }}"
  run_once: true
Update
You can simplify both the code and inventory. Put the declarations into the group_vars. For example,
shell> cat group_vars/postgresql
postgresql_cluster_port: 5432
postgresql_master_ip: "{{ groups.postgresql|
                          map('extract', hostvars)|
                          selectattr('postgresql_harole', 'eq', 'master')|
                          map(attribute='ansible_host')|first }}"
postgresql_slave_ip: "{{ groups.postgresql|
                         map('extract', hostvars)|
                         selectattr('postgresql_harole', 'eq', 'slave')|
                         map(attribute='ansible_host')|list }}"
Then, you can remove postgresql_master_ip and postgresql_cluster_port from the inventory
shell> cat hosts
[postgresql]
host1 ansible_host=1.1.1.1 postgresql_harole=master
host2 ansible_host=2.2.2.2 postgresql_harole=slave
host3 ansible_host=3.3.3.3 postgresql_harole=slave
host4 ansible_host=4.4.4.4 postgresql_harole=slave
The playbook
- hosts: postgresql
  gather_facts: false
  tasks:
    - debug:
        msg: |
          ansible_host: {{ ansible_host }}
          postgresql_harole: {{ postgresql_harole }}
          postgresql_master_ip: {{ postgresql_master_ip }}
          postgresql_cluster_port: {{ postgresql_cluster_port }}
          postgresql_slave_ip: {{ postgresql_slave_ip }}
gives
shell> ansible-playbook pb.yml
PLAY [postgresql] ****************************************************************************
TASK [debug] *********************************************************************************
ok: [host1] =>
msg: |-
ansible_host: 1.1.1.1
postgresql_harole: master
postgresql_master_ip: 1.1.1.1
postgresql_cluster_port: 5432
postgresql_slave_ip: ['2.2.2.2', '3.3.3.3', '4.4.4.4']
ok: [host2] =>
msg: |-
ansible_host: 2.2.2.2
postgresql_harole: slave
postgresql_master_ip: 1.1.1.1
postgresql_cluster_port: 5432
postgresql_slave_ip: ['2.2.2.2', '3.3.3.3', '4.4.4.4']
ok: [host4] =>
msg: |-
ansible_host: 4.4.4.4
postgresql_harole: slave
postgresql_master_ip: 1.1.1.1
postgresql_cluster_port: 5432
postgresql_slave_ip: ['2.2.2.2', '3.3.3.3', '4.4.4.4']
ok: [host3] =>
msg: |-
ansible_host: 3.3.3.3
postgresql_harole: slave
postgresql_master_ip: 1.1.1.1
postgresql_cluster_port: 5432
postgresql_slave_ip: ['2.2.2.2', '3.3.3.3', '4.4.4.4']
PLAY RECAP ***********************************************************************************
host1: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host2: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host3: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host4: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
The easiest solution is probably to use Ansible's group_by module, which allows you to create groups dynamically based on the values of host variables. For example, given this inventory:
[postgresql]
host1 ansible_host=1.1.1.1 postgresql_cluster_port=5432 postgresql_harole=master
host2 ansible_host=2.2.2.2 postgresql_cluster_port=5432 postgresql_harole=worker postgresql_master_ip=1.1.1.1
host3 ansible_host=3.3.3.3 postgresql_cluster_port=5432 postgresql_harole=worker postgresql_master_ip=1.1.1.1
host4 ansible_host=4.4.4.4 postgresql_cluster_port=5432 postgresql_harole=worker postgresql_master_ip=1.1.1.1
We can group hosts using the postgresql_harole variable like this:
- hosts: all
  gather_facts: false
  tasks:
    - name: create group of postgresql workers
      group_by:
        key: "pg_role_{{ postgresql_harole|default('none') }}"

- hosts: pg_role_worker
  gather_facts: false
  tasks:
    - run_once: true
      debug:
        var: groups.pg_role_worker

    - debug:
        msg: "Hello world"
Running the above playbook would generate output similar to:
PLAY [all] ***********************************************************************************
TASK [create group of postgresql workers] ****************************************************
changed: [host1]
changed: [host2]
changed: [host3]
changed: [host4]
PLAY [pg_role_worker] ************************************************************************
TASK [debug] *********************************************************************************
ok: [host2] => {
"groups.pg_role_worker": [
"host2",
"host3",
"host4"
]
}
TASK [debug] *********************************************************************************
ok: [host2] => {
"msg": "Hello world"
}
ok: [host3] => {
"msg": "Hello world"
}
ok: [host4] => {
"msg": "Hello world"
}
PLAY RECAP ***********************************************************************************
host1 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host2 : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host3 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host4 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
You can see that (a) there now exists a group pg_role_worker containing hosts host2, host3, and host4 and that (b) the final debug task runs only on those hosts.
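If the goal is the same slave_ip_list as in the first answer, a hedged sketch building it from the dynamic group, reusing the extract filter shown earlier:
- hosts: pg_role_worker
  gather_facts: false
  tasks:
    # look up each group member in hostvars and pick its ansible_host
    - set_fact:
        slave_ip_list: "{{ groups.pg_role_worker | map('extract', hostvars, 'ansible_host') | list }}"
      run_once: true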

Ansible-playbook: list all servers as comma separated and exclude own server

I have three servers
[servers]
server1
server2
server3
I would like to create, for each server, a list of the other servers without including the server itself. For example:
for server1: it should be server2,server3;
for server2: it should be server1,server3;
for server3: it should be server1,server2;
I can create a list of all servers, but I don't know how to exclude the current server:
- hosts: servers
  vars:
    network_check_list: "{{ groups['servers'] | join(',') }}"
You can use the difference filter with a single element list containing the current targeted server as argument:
---
- hosts: servers
  gather_facts: false
  vars:
    network_check_list: "{{ groups['servers'] | difference([inventory_hostname]) | join(',') }}"
  tasks:
    - debug:
        var: network_check_list
Since the Jinja2 expression is interpreted on the spot, for each run on a particular server, you can keep this definition in your playbook vars and it will be adapted to each host's context in the task. Here is the result (using your example inventory):
$ ansible-playbook -i inventory play.yml
PLAY [servers] ****************************************************************************************************************************************************************************************************
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [server1] => {
"network_check_list": "server2,server3"
}
ok: [server2] => {
"network_check_list": "server1,server3"
}
ok: [server3] => {
"network_check_list": "server1,server2"
}
PLAY RECAP ********************************************************************************************************************************************************************************************************
server1 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
server2 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
server3 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Ref: https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#set-theory-filters
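As a variation (my assumption, not part of the original answer), the ansible_play_hosts_all magic variable can replace the hard-coded group name, so the expression keeps working if the play targets a different group:
- hosts: servers
  gather_facts: false
  vars:
    # all hosts targeted by this play, minus the current one
    network_check_list: "{{ ansible_play_hosts_all | difference([inventory_hostname]) | join(',') }}"
  tasks:
    - debug:
        var: network_check_list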

Issue adding duplicate name with different ansible_user to add_host dynamic inventory

Here is my playbook that builds a dynamic inventory using add_host:
---
- name: "Play 1"
  hosts: localhost
  gather_facts: no
  tasks:
    - name: "Search database"
      command: >
        mysql --user=root --password=p#ssword deployment
        --host=localhost -Ns -e "SELECT dest_ip,username FROM deploy_dets"
      register: command_result

    - name: Add hosts
      add_host:
        name: "{{ item.split('\t')[0] }}"
        ansible_user: "{{ item.split('\t')[1] }}"
        groups: dest_nodes
      with_items: "{{ command_result.stdout_lines }}"

- hosts: dest_nodes
  gather_facts: false
  tasks:
    - debug:
        msg: Run the shell script with the arguments `{{ ansible_user }}` here"
The output is good and as expected when the 'name:' attributes passed to add_host have different values, i.e. the IPs '10.9.0.100' and '10.8.2.144':
$ ansible-playbook duplicate_hosts.yml
PLAY [Play 1] ***********************************************************************************************************************************************
TASK [Search database] **************************************************************************************************************************************
changed: [localhost]
TASK [Add hosts] ********************************************************************************************************************************************
changed: [localhost] => (item=10.9.0.100 user1)
changed: [localhost] => (item=10.8.2.144 user2)
PLAY [dest_nodes] *******************************************************************************************************************************************
TASK [debug] ************************************************************************************************************************************************
ok: [10.9.0.100] => {
"msg": "Run the shell script with the arguments `user1` here\""
}
ok: [10.8.2.144] => {
"msg": "Run the shell script with the arguments `user2` here\""
}
PLAY RECAP **************************************************************************************************************************************************
10.8.2.144 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
10.9.0.100 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
The problem is that when the 'name:' attribute for add_host gets a duplicate entry, say 10.8.2.144, the play ignores the first name/ansible_user pair, despite the unique 'ansible_user' values, and runs only once with the latest entry.
$ ansible-playbook duplicate_hosts.yml
PLAY [Play 1] ***********************************************************************************************************************************************
TASK [Search database] **************************************************************************************************************************************
changed: [localhost]
TASK [Add hosts] ********************************************************************************************************************************************
changed: [localhost] => (item=10.8.2.144 user1)
changed: [localhost] => (item=10.8.2.144 user2)
PLAY [dest_nodes] *******************************************************************************************************************************************
TASK [debug] ************************************************************************************************************************************************
ok: [10.8.2.144] => {
"msg": "Run the shell script with the arguments `user2` here\""
}
PLAY RECAP **************************************************************************************************************************************************
10.8.2.144 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Interestingly, the Add hosts task shows two items for the name 10.8.2.144 with different ansible_user values, i.e. 'user1' and 'user2', but when the group is run, only the single, latest entry is used, as seen in the output above.
I'm on the latest version of Ansible.
Can you please provide a solution where I can run the play for every unique 'ansible_user' on the same host?
In summary: I wish to run multiple tasks on the same host, first with 'user1' and then with 'user2'.
You can add an alias as the inventory hostname. Here I have used the username as the hostname (alias).
Please try this; I have not tested it.
- name: Add hosts
  add_host:
    hostname: "{{ item.split('\t')[1] }}"
    ansible_host: "{{ item.split('\t')[0] }}"
    ansible_user: "{{ item.split('\t')[1] }}"
    groups: dest_nodes
  with_items: "{{ command_result.stdout_lines }}"
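If that works, the second play could then reference both values per alias; a sketch along the same untested lines, using the ansible_host and ansible_user set above:
- hosts: dest_nodes
  gather_facts: false
  tasks:
    - debug:
        # ansible_host is the real IP; inventory_hostname and ansible_user carry the alias/user
        msg: "Run the shell script on {{ ansible_host }} as `{{ ansible_user }}` here"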
