Select one usable server from inventory - ansible

I have a group of servers, which may or may not be available at the moment. I want to execute queries against a usable server. I don't know which one to use, though. So I thought I could use an include...until construct, but that doesn't work as expected.
Example:
main.yml
---
- hosts: localhost
  tasks:
    - include: get_available_node.yml
      loop:
        - localhost
        - example.com
        - localhost
      loop_control:
        loop_var: domain
      until: available_domain is defined and available_domain|length > 0
    - debug:
        var: available_domain
get_available_node.yml
---
- get_url:
    url: "http://{{ domain }}"
    dest: /dev/null
  register: url_attempt
  ignore_errors: yes
- set_fact:
    available_domain: "{{ domain }}"
  when: url_attempt.failed == false
I thought my loop would stop as soon as my available_domain variable got set. However, it doesn't.
PLAY [localhost] ******************************************************************************************************************************************************************
TASK [Gathering Facts] ************************************************************************************************************************************************************
ok: [localhost]
TASK [include] ********************************************************************************************************************************************************************
included: /Volumes/Source Code/playground/stackoverflow/get_available_node.yml for localhost => (item=localhost)
included: /Volumes/Source Code/playground/stackoverflow/get_available_node.yml for localhost => (item=example.com)
included: /Volumes/Source Code/playground/stackoverflow/get_available_node.yml for localhost => (item=localhost)
TASK [Try to curl localhost] ******************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "dest": "/dev/null", "elapsed": 0, "gid": 0, "group": "wheel", "mode": "0666", "msg": "Request failed: <urlopen error [Errno 61] Connection refused>", "owner": "root", "size": 0, "state": "file", "uid": 0, "url": "http://localhost"}
...ignoring
TASK [set_fact] *******************************************************************************************************************************************************************
skipping: [localhost]
TASK [Try to curl example.com] ****************************************************************************************************************************************************
ok: [localhost]
TASK [set_fact] *******************************************************************************************************************************************************************
ok: [localhost]
TASK [Try to curl localhost] ******************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "dest": "/dev/null", "elapsed": 0, "gid": 0, "group": "wheel", "mode": "0666", "msg": "Request failed: <urlopen error [Errno 61] Connection refused>", "owner": "root", "size": 0, "state": "file", "uid": 0, "url": "http://localhost"}
...ignoring
TASK [set_fact] *******************************************************************************************************************************************************************
skipping: [localhost]
TASK [debug] **********************************************************************************************************************************************************************
ok: [localhost] => {
    "available_domain": "example.com"
}
PLAY RECAP ************************************************************************************************************************************************************************
localhost : ok=9 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=2
At the end, a usable server is in the variable. However, if there are 20 hosts in my original list, it takes a lot of time to loop through all of them.
Where is my misunderstanding of how the until parameter works? Is there a way to achieve what I'm trying to do?

Using run_once should work, as long as setup is run (i.e. gather_facts: true), because Ansible removes hosts that can't be reached during fact gathering from the candidate pool for run_once, and thus only runs on the first "up" host. For example:
---
- hosts: all
  become: false
  run_once: true
  gather_facts: true
  tasks:
    - ping:
$ ansible-playbook -i does,not,exist,localhost, test.yml
will run ping on only one of the 4 hosts: localhost, the only one that's up
PLAY RECAP ********************************************************************
does : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
exist : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
not : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
(you might have to add ansible_connection: local to host_vars/localhost to simulate). If there are multiple hosts up, run_once will only run on the first one from the inventory that's actually up.
Note: if you're doing fact caching, I'm not sure this will still hold true, because setup might not get run and remove unreachable nodes from the candidate pool. If you need this, an initial ping task across all the servers, followed by the "work" task with run_once, might work (untested) with the default linear strategy.
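A minimal sketch of that untested idea (the task names and the placeholder command are illustrative, not from the original answer):

```yaml
- hosts: all
  gather_facts: false
  tasks:
    # Probe every host first; unreachable hosts are dropped from the
    # active host pool for the remainder of the play.
    - name: Probe connectivity
      ping:

    # With the default linear strategy, run_once now executes on the
    # first host that survived the probe above.
    - name: Do the work on one reachable host
      command: /bin/true  # placeholder for the real task
      run_once: true
```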

Related

Ansible stop the whole playbook if all hosts in a single play fail

I'm struggling to understand the intended behavior of Ansible when all hosts fail in a single play but there are other plays on other hosts in the playbook.
For example consider the following playbook:
---
- name: P1
  hosts: a,b
  tasks:
    - name: Assert 1
      ansible.builtin.assert:
        that: 1==2
      when: inventory_hostname != "c"

- name: P2
  hosts: y,z
  tasks:
    - name: Debug 2
      ansible.builtin.debug:
        msg: 'YZ'
All 4 hosts a,b,y,z point to localhost for the sake of clarity.
What happens is that the assert fails and the whole playbook stops. However, this seems to contradict the documentation, which says that in case of an error Ansible stops executing on the failed host but continues on the other hosts; see Error handling.
If I change the condition to when: inventory_hostname != 'b', so that b does not fail, then the playbook continues to execute the second play on hosts y,z.
To me the initial failure does not seem reasonable, because the hosts y,z have not experienced any errors, and therefore execution on them should not be prevented by the error on the other hosts.
Is this a bug or am I missing something?
It's not a bug. It's by design (see Notes 3,4 below). As discussed in the comments to the other answer, the decision whether or not to terminate the whole playbook when all hosts in a play fail seems to be a trade-off. Either way, a user has to handle one of the two cases: how to proceed to the next play if necessary, or how to stop the whole playbook if necessary. You can see in the examples below that both options require handling errors in a block to approximately the same extent.
The first case was implemented by Ansible: A playbook will terminate when all hosts in a play fail. For example,
- hosts: host01,host02
  tasks:
    - assert:
        that: false

- hosts: host03
  tasks:
    - debug:
        msg: Hello
PLAY [host01,host02] *************************************************************************
TASK [assert] ********************************************************************************
fatal: [host01]: FAILED! => changed=false
  assertion: false
  evaluated_to: false
  msg: Assertion failed
fatal: [host02]: FAILED! => changed=false
  assertion: false
  evaluated_to: false
  msg: Assertion failed
PLAY RECAP ***********************************************************************************
host01 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
host02 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
The playbook will proceed to the next play when not all hosts in a play fail. For example,
- hosts: host01,host02
  tasks:
    - assert:
        that: false
      when: inventory_hostname == 'host01'

- hosts: host03
  tasks:
    - debug:
        msg: Hello
PLAY [host01,host02] *************************************************************************
TASK [assert] ********************************************************************************
fatal: [host01]: FAILED! => changed=false
  assertion: false
  evaluated_to: false
  msg: Assertion failed
skipping: [host02]
PLAY [host03] ********************************************************************************
TASK [debug] *********************************************************************************
ok: [host03] =>
msg: Hello
PLAY RECAP ***********************************************************************************
host01 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
host02 : ok=0 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
host03 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
To proceed to the next play when all hosts in a play fail, a user has to clear the errors and, optionally, end the hosts in the play as well. For example,
- hosts: host01,host02
  tasks:
    - block:
        - assert:
            that: false
      rescue:
        - meta: clear_host_errors
        - meta: end_host

- hosts: host03
  tasks:
    - debug:
        msg: Hello
PLAY [host01,host02] *************************************************************************
TASK [assert] ********************************************************************************
fatal: [host01]: FAILED! => changed=false
  assertion: false
  evaluated_to: false
  msg: Assertion failed
fatal: [host02]: FAILED! => changed=false
  assertion: false
  evaluated_to: false
  msg: Assertion failed
TASK [meta] **********************************************************************************
TASK [meta] **********************************************************************************
TASK [meta] **********************************************************************************
PLAY [host03] ********************************************************************************
TASK [debug] *********************************************************************************
ok: [host03] =>
msg: Hello
PLAY RECAP ***********************************************************************************
host01 : ok=0 changed=0 unreachable=0 failed=0 skipped=0 rescued=1 ignored=0
host02 : ok=0 changed=0 unreachable=0 failed=0 skipped=0 rescued=1 ignored=0
host03 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Update: The playbook can no longer be stopped by meta end_play after this was 'fixed' in 2.12.2.
It was possible to end the whole playbook by meta end_play in Ansible 2.12.1. Imagine that failing all hosts in a play didn't terminate the whole playbook; in other words, imagine it weren't implemented that way. Then a user might want to terminate the playbook on their own. For example,
- hosts: host01,host02
  tasks:
    - block:
        - assert:
            that: false
      rescue:
        - meta: clear_host_errors
        - set_fact:
            host_failed: true
        - meta: end_play
          when: ansible_play_hosts_all|map('extract', hostvars, 'host_failed') is all
          run_once: true

- hosts: host03
  tasks:
    - debug:
        msg: Hello
PLAY [host01,host02] *************************************************************************
TASK [assert] ********************************************************************************
fatal: [host01]: FAILED! => changed=false
  assertion: false
  evaluated_to: false
  msg: Assertion failed
fatal: [host02]: FAILED! => changed=false
  assertion: false
  evaluated_to: false
  msg: Assertion failed
TASK [meta] **********************************************************************************
TASK [set_fact] ******************************************************************************
ok: [host01]
ok: [host02]
TASK [meta] **********************************************************************************
PLAY RECAP ***********************************************************************************
host01 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=1 ignored=0
host02 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=1 ignored=0
Notes
meta end_host means: 'end the play for this host'
- hosts: host01
  tasks:
    - meta: end_host

- hosts: host01,host02
  tasks:
    - debug:
        msg: Hello
PLAY [host01] ********************************************************************************
TASK [meta] **********************************************************************************
PLAY [host01,host02] *************************************************************************
TASK [debug] *********************************************************************************
ok: [host01] =>
msg: Hello
ok: [host02] =>
msg: Hello
PLAY RECAP ***********************************************************************************
host01 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host02 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
meta end_play means: 'end the playbook' (this was 'fixed' in 2.12.2. See #76672)
- hosts: host01
  tasks:
    - meta: end_play

- hosts: host01,host02
  tasks:
    - debug:
        msg: Hello
PLAY [host01] ********************************************************************************
TASK [meta] **********************************************************************************
PLAY RECAP ***********************************************************************************
Quoting from #37309
If all hosts in the current play batch (fail) the play ends, this is 'as designed' behavior ... 'play batch' is 'serial size' or all hosts in play if serial is not set.
Quoting from the source
# check the number of failures here, to see if they're above the maximum
# failure percentage allowed, or if any errors are fatal. If either of those
# conditions are met, we break out, otherwise, we only break out if the entire
# batch failed
failed_hosts_count = len(self._tqm._failed_hosts) + len(self._tqm._unreachable_hosts) - \
                     (previously_failed + previously_unreachable)
if len(batch) == failed_hosts_count:
    break_play = True
    break
A playbook with multiple plays is just sequential; it cannot know in advance that you are going to have other hosts in a later play.
Because your assert task, in the first play, has exhausted all hosts of the play, it makes sense that the playbook stops there, as it won't have anything to do in any further tasks of P1. And remember, it doesn't know anything about P2 yet, so it just ends there.

Ansible - Start at specific role

I have a playbook that only calls roles. This is what it looks like: (there are about 20 roles in it)
---
- hosts: prod1234
  roles:
    - role1
    - role2
    - role3
Sometimes a role fails, and I don't want to start over, as each role is huge; I would just like to start at the failed role or the next one.
With tasks, I know there's a flag for --start-at-task="task-name". Is there something similar I can do with roles?
My current solution is to comment out all the lines I don't need and run it again.
Thanks ahead!
Quick n dirty solution. The following roleimport.yml playbook
# Note you will have to implement error management (e.g. you give a role that does not exist).
- name: quickNdirty roles start demo
  hosts: localhost
  gather_facts: false
  vars:
    full_role_list:
      - myfirstrole
      - mysecondrole
      - thirdone
      - next
      - last
    # We calculate a start index for the above list. By default it will be 0.
    # Unless we pass `start_role` var in extra_vars: the index will be set
    # to that element
    start_index: "{{ full_role_list.index(start_role|default('myfirstrole')) }}"
    # We slice the list from start_index to end
    current_role_list: "{{ full_role_list[start_index|int:] }}"
  tasks:
    # Real task commented out for this example
    # - name: Import selected roles in order
    #   import_role:
    #     name: "{{ item }}"
    #   loop: "{{ current_role_list }}"
    - name: Debug roles that would be used in above commented task
      debug:
        msg: "I would import role {{ item }}"
      loop: "{{ current_role_list }}"
gives:
$ ansible-playbook roleimport.yml
PLAY [quickNdirty roles start demo] *******************************************************************************************************************************************************************************************
TASK [Debug roles that would be used in above commented task] *****************************************************************************************************************************************************************
ok: [localhost] => (item=myfirstrole) => {
    "msg": "I would import role myfirstrole"
}
ok: [localhost] => (item=mysecondrole) => {
    "msg": "I would import role mysecondrole"
}
ok: [localhost] => (item=thirdone) => {
    "msg": "I would import role thirdone"
}
ok: [localhost] => (item=next) => {
    "msg": "I would import role next"
}
ok: [localhost] => (item=last) => {
    "msg": "I would import role last"
}
PLAY RECAP ********************************************************************************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
$ ansible-playbook roleimport.yml -e start_role=thirdone
PLAY [quickNdirty roles start demo] *******************************************************************************************************************************************************************************************
TASK [Debug roles that would be used in above commented task] *****************************************************************************************************************************************************************
ok: [localhost] => (item=thirdone) => {
    "msg": "I would import role thirdone"
}
ok: [localhost] => (item=next) => {
    "msg": "I would import role next"
}
ok: [localhost] => (item=last) => {
    "msg": "I would import role last"
}
PLAY RECAP ********************************************************************************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
$ ansible-playbook roleimport.yml -e start_role=last
PLAY [quickNdirty roles start demo] *******************************************************************************************************************************************************************************************
TASK [Debug roles that would be used in above commented task] *****************************************************************************************************************************************************************
ok: [localhost] => (item=last) => {
    "msg": "I would import role last"
}
PLAY RECAP ********************************************************************************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
# Implement error management yourself if you need it.
$ ansible-playbook roleimport.yml -e start_role=thisIsAnError
PLAY [quickNdirty roles start demo] *******************************************************************************************************************************************************************************************
TASK [Debug roles that would be used in above commented task] *****************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"msg": "An unhandled exception occurred while templating '{{ full_role_list[start_index|int:] }}'. Error was a <class 'ansible.errors.AnsibleError'>, original message: An unhandled exception occurred while templating '{{ full_role_list.index(start_role|default('myfirstrole')) }}'. Error was a <class 'ValueError'>, original message: 'thisIsAnError' is not in list"}
PLAY RECAP ********************************************************************************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
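The start_index / current_role_list vars above lean on Python's list.index and slice semantics, exposed through Jinja2. For clarity, here is the same computation written out in plain Python (a standalone illustration, not part of the playbook):

```python
# Mirror of the playbook's vars logic:
#   start_index: "{{ full_role_list.index(start_role|default('myfirstrole')) }}"
#   current_role_list: "{{ full_role_list[start_index|int:] }}"
full_role_list = ["myfirstrole", "mysecondrole", "thirdone", "next", "last"]

start_role = "thirdone"  # what `-e start_role=thirdone` would pass
start_index = full_role_list.index(start_role)    # raises ValueError if the role is absent
current_role_list = full_role_list[start_index:]  # slice from start_index to the end

print(current_role_list)  # ['thirdone', 'next', 'last']
```

The ValueError raised for an unknown role is exactly what surfaces as the templating error in the last transcript above.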

Abort further task execution, with multiple hosts

Using Ansible v2.9.12
Question: I'd like Ansible to fail/stop the play when a task fails, when multiple hosts execute the task. In that sense, Ansible should abort the task from further execution. The configuration should work in a role, so using serial, or using different plays, is not possible.
Example:
- hosts:
    - host1
    - host2
    - host3
  any_errors_fatal: true
  tasks:
    - name: always fail
      shell: /bin/false
      throttle: 1
Provides:
===== task | always fail =======
host1: fail
host2: fail
host3: fail
Meaning, the task is still executed on the second and third host. I'd like the whole play to fail/stop once a task fails on any host. When the task fails on the last host, Ansible should abort as well.
Desired outcome:
===== task | always fail =======
host1: fail
host2: not executed/skipped, cause host1 failed
host3: not executed/skipped, cause host1 failed
As you can see, I've fiddled around with error handling, but to no avail.
Background info: I've been developing an idempotent Ansible role for mysql. It is possible to set up a cluster with multiple hosts. The role also supports adding an arbiter.
The arbiter does not have the mysql application installed, but the host is still required in the play.
Now, imagine three hosts. Host1 is the arbiter; host2 and host3 have mysql installed, set up in a cluster. The applications are set up by the Ansible role.
Now, Ansible executes the role a second/third/fourth/whatever time, and changes a config setting of mysql. Mysql needs a rolling restart. Usually, one writes something along the lines of:
- template:
    src: mysql.j2
    dest: /etc/mysql
  register: mysql_config
  when: mysql.role != 'arbiter'
- service:
    name: mysql
    state: restarted
  throttle: 1
  when:
    - mysql_config.changed
    - mysql.role != 'arbiter'
The downside of this Ansible configuration is that if mysql fails to start on host2 for whatever reason, Ansible will still restart mysql on host3. And that is undesired, because if mysql fails on host3 as well, the cluster is lost. So, for this specific task, I'd like Ansible to stop/abort/skip the remaining tasks if mysql has failed to start on a single host in the play.
Ok, this works:
# note that test-multi-01 sets host_which_is_skipped: true
---
- hosts:
    - test-multi-01
    - test-multi-02
    - test-multi-03
  tasks:
    - set_fact:
        host_which_is_skipped: "{{ inventory_hostname }}"
      when: host_which_is_skipped
    - shell: /bin/false
      run_once: yes
      delegate_to: "{{ item }}"
      loop: "{{ ansible_play_hosts }}"
      when:
        - item != host_which_is_skipped
        - result is undefined or result is not failed
      register: result
    - meta: end_play
      when: result is failed
    - debug:
        msg: Will not happen
When the shell command is set to /bin/true, the command is executed on host2 and host3.
One way to solve this would be to run the playbook with serial: 1. That way, the tasks are executed serially on the hosts and as soon as one task fails, the playbook terminates:
- name: My playbook
  hosts: all
  serial: 1
  any_errors_fatal: true
  tasks:
    - name: Always fail
      shell: /bin/false
In this case, this results in the task only being executed on the first host. Note that there is also the order clause, with which you can also control the order in which hosts are run: https://docs.ansible.com/ansible/latest/user_guide/playbooks_intro.html#hosts-and-users
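As a sketch of that order clause (the valid values per the linked documentation are inventory, reverse_inventory, sorted, reverse_sorted, and shuffle), combined with the playbook above:

```yaml
- name: My playbook
  hosts: all
  serial: 1
  order: sorted  # process hosts one at a time, in alphabetically sorted order
  any_errors_fatal: true
  tasks:
    - name: Always fail
      shell: /bin/false
```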
Disclaimer: most of the credit for the fully working part of this answer goes to Tomasz Klosinski's answer on Server Fault.
First, here is a partially working idea that only falls short by one host.
For the demo, I purposely increased the number of hosts to 5.
The idea is based on the special variables ansible_play_batch and ansible_play_hosts_all, which are described in the above-mentioned documentation page as:
ansible_play_hosts_all
List of all the hosts that were targeted by the play
ansible_play_batch
List of active hosts in the current play run limited by the serial, aka ‘batch’. Failed/Unreachable hosts are not considered ‘active’.
This idea, coupled with your trial at using throttle: 1, should work, but falls short by one host, executing on host2 when it should skip it.
Given the playbook:
- hosts: all
  gather_facts: no
  tasks:
    - shell: /bin/false
      when: "ansible_play_batch | length == ansible_play_hosts_all | length"
      throttle: 1
This yields the recap:
PLAY [all] ***********************************************************************************************************
TASK [shell] *********************************************************************************************************
fatal: [host1]: FAILED! => {"changed": true, "cmd": "/bin/false", "delta": "0:00:00.003915", "end": "2020-09-06 22:09:16.550406", "msg": "non-zero return code", "rc": 1, "start": "2020-09-06 22:09:16.546491", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
fatal: [host2]: FAILED! => {"changed": true, "cmd": "/bin/false", "delta": "0:00:00.004736", "end": "2020-09-06 22:09:16.844296", "msg": "non-zero return code", "rc": 1, "start": "2020-09-06 22:09:16.839560", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
skipping: [host3]
skipping: [host4]
skipping: [host5]
PLAY RECAP ***********************************************************************************************************
host1 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
host2 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
host3 : ok=0 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
host4 : ok=0 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
host5 : ok=0 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
Looking further on that, I landed on this answer of Server Fault, and this looks to be the right idea to craft your solution.
Instead of going the normal way, the idea is to delegate everything from the first host, with a loop over all targeted hosts of the play, because, in a loop, you are then able to access the registered fact of the previous host in an easy manner, as long as you register it.
So here is the playbook:
- hosts: all
  gather_facts: no
  tasks:
    - shell: /bin/false
      loop: "{{ ansible_play_hosts }}"
      register: failing_task
      when: "failing_task | default({}) is not failed"
      delegate_to: "{{ item }}"
      run_once: true
This would yield the recap:
PLAY [all] ***********************************************************************************************************
TASK [shell] *********************************************************************************************************
failed: [host1 -> host1] (item=host1) => {"ansible_loop_var": "item", "changed": true, "cmd": "/bin/false", "delta": "0:00:00.003706", "end": "2020-09-06 22:18:23.822608", "item": "host1", "msg": "non-zero return code", "rc": 1, "start": "2020-09-06 22:18:23.818902", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
skipping: [host1] => (item=host2)
skipping: [host1] => (item=host3)
skipping: [host1] => (item=host4)
skipping: [host1] => (item=host5)
NO MORE HOSTS LEFT ***************************************************************************************************
PLAY RECAP ***********************************************************************************************************
host1 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
And just for the sake of proving it works as intended, altering it to make host2 fail specifically, with the help of failed_when:
- hosts: all
  gather_facts: no
  tasks:
    - shell: /bin/false
      loop: "{{ ansible_play_hosts }}"
      register: failing_task
      when: "failing_task | default({}) is not failed"
      delegate_to: "{{ item }}"
      run_once: true
      failed_when: "item == 'host2'"
Yields the recap:
PLAY [all] ***********************************************************************************************************
TASK [shell] *********************************************************************************************************
changed: [host1 -> host1] => (item=host1)
failed: [host1 -> host2] (item=host2) => {"ansible_loop_var": "item", "changed": true, "cmd": "/bin/false", "delta": "0:00:00.004226", "end": "2020-09-06 22:20:38.038546", "failed_when_result": true, "item": "host2", "msg": "non-zero return code", "rc": 1, "start": "2020-09-06 22:20:38.034320", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
skipping: [host1] => (item=host3)
skipping: [host1] => (item=host4)
skipping: [host1] => (item=host5)
NO MORE HOSTS LEFT ***************************************************************************************************
PLAY RECAP ***********************************************************************************************************
host1 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0

Select host to execute task based on multiple conditions

Using Ansible v2.9.12
Imagine 4 hosts in a play, with the following vars:
# host1
my_var:
  one: true
  two: master

# host2
my_var:
  one: false
  two: master

# host3
my_var:
  one: false
  two: master

# host4
my_var:
  one: false
  two: arbiter
I want to execute a task on a single host which has my_var.two == 'master' and my_var.one set to false. In this case that would be either host2 or host3.
Now, this won't work:
- shell: echo
  when:
    - my_var.two == 'master'
    - not my_var.one
  run_once: true
Because Ansible executes the task only on the first host in the group, which most likely is host1, and that is undesired. I'm also fiddling around with the answer described in my previous question, which is somewhat related, but to no avail.
Nothing stops you, in a playbook, from having multiple plays:
- hosts: all
  gather_facts: no
  tasks:
    - debug:
        msg: "{{ my_var }}"

- hosts: host2
  gather_facts: no
  tasks:
    - debug:
        msg: "{{ my_var }}"
      # ^--- Of course, now the when becomes superfluous,
      # since we target a host that we know has those conditions

- hosts: all
  gather_facts: no
  tasks:
    - debug:
        msg: "Keep on going with all hosts"
Would give the recap:
PLAY [all] **********************************************************************************************************
TASK [debug] ********************************************************************************************************
ok: [host1] => {
    "msg": {
        "one": true,
        "two": "master"
    }
}
ok: [host2] => {
    "msg": {
        "one": false,
        "two": "master"
    }
}
ok: [host3] => {
    "msg": {
        "one": false,
        "two": "master"
    }
}
ok: [host4] => {
    "msg": {
        "one": false,
        "two": "arbiter"
    }
}
PLAY [host2] ********************************************************************************************************
TASK [debug] ********************************************************************************************************
ok: [host2] => {
    "msg": {
        "one": false,
        "two": "master"
    }
}
PLAY [all] **********************************************************************************************************
TASK [debug] ********************************************************************************************************
ok: [host1] => {
    "msg": "Keep on going with all hosts"
}
ok: [host2] => {
    "msg": "Keep on going with all hosts"
}
ok: [host3] => {
    "msg": "Keep on going with all hosts"
}
ok: [host4] => {
    "msg": "Keep on going with all hosts"
}
PLAY RECAP **********************************************************************************************************
host1 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host2 : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host3 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host4 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Alternatively, for a more dynamic solution, you could select the correct host by looping through your hosts' variables via the magic variable hostvars, and then delegate the task to that host with delegate_to.
Given the playbook:
- hosts: all
  gather_facts: no
  tasks:
    - set_fact:
        host_master_not_one: "{{ item }}"
      when:
        - host_master_not_one is not defined
        - not hostvars[item].my_var.one
        - "'master' == hostvars[item].my_var.two"
      run_once: true
      loop: "{{ ansible_play_hosts }}"

    - debug:
        msg: "Running only once, on host {{ host_master_not_one }}"
      delegate_to: "{{ host_master_not_one }}"
      run_once: true
It gives the recap:
PLAY [all] **********************************************************************************************************

TASK [set_fact] *****************************************************************************************************
skipping: [host1] => (item=host1)
ok: [host1] => (item=host2)
skipping: [host1] => (item=host3)
skipping: [host1] => (item=host4)

TASK [debug] ********************************************************************************************************
ok: [host1 -> host2] => {
    "msg": "Running only once, on host host2"
}

PLAY RECAP **********************************************************************************************************
host1                      : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
The important part of this recap is the [host1 -> host2], which indicates that the task was delegated to, and therefore ran on, host2.
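As a side note, the looping set_fact task could be collapsed into a single Jinja expression that extracts the hostvars of every play host and filters on the two conditions. This is an untested sketch, assuming every host defines the same my_var structure:

```yaml
- hosts: all
  gather_facts: no
  tasks:
    - debug:
        msg: "Running only once, on host {{ host_master_not_one }}"
      vars:
        # Pick the first host whose my_var.one is false and my_var.two is 'master'
        host_master_not_one: >-
          {{ ansible_play_hosts
             | map('extract', hostvars)
             | selectattr('my_var.one', 'eq', false)
             | selectattr('my_var.two', 'eq', 'master')
             | map(attribute='inventory_hostname')
             | first }}
      delegate_to: "{{ host_master_not_one }}"
      run_once: true
```

Note that `first` will raise an error if no host matches, whereas the set_fact loop simply leaves the variable undefined, so the loop version is easier to guard with a `when`.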
Or, following the same train of thought, skip the hosts that do not match the one meeting those conditions:
- hosts: all
  gather_facts: no
  tasks:
    - set_fact:
        host_master_not_one: "{{ item }}"
      when:
        - host_master_not_one is not defined
        - not hostvars[item].my_var.one
        - "'master' == hostvars[item].my_var.two"
      run_once: true
      loop: "{{ ansible_play_hosts }}"

    - debug:
        msg: "Running only once, on host {{ host_master_not_one }}"
      when: inventory_hostname == host_master_not_one
This gives the recap:
PLAY [all] **********************************************************************************************************

TASK [set_fact] *****************************************************************************************************
skipping: [host1] => (item=host1)
ok: [host1] => (item=host2)
skipping: [host1] => (item=host3)
skipping: [host1] => (item=host4)

TASK [debug] ********************************************************************************************************
skipping: [host1]
ok: [host2] => {
    "msg": "Running only once, on host host2"
}
skipping: [host3]
skipping: [host4]

PLAY RECAP **********************************************************************************************************
host1                      : ok=1    changed=0    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0
host2                      : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
host3                      : ok=0    changed=0    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0
host4                      : ok=0    changed=0    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0

Issue adding duplicate name with different ansible_user to add_host dynamic inventory

Here is my playbook that builds a dynamic inventory using add_host:
---
- name: "Play 1"
  hosts: localhost
  gather_facts: no
  tasks:
    - name: "Search database"
      command: >
        mysql --user=root --password=p#ssword deployment
        --host=localhost -Ns -e "SELECT dest_ip,username FROM deploy_dets"
      register: command_result

    - name: Add hosts
      add_host:
        name: "{{ item.split('\t')[0] }}"
        ansible_user: "{{ item.split('\t')[1] }}"
        groups: dest_nodes
      with_items: "{{ command_result.stdout_lines }}"

- hosts: dest_nodes
  gather_facts: false
  tasks:
    - debug:
        msg: Run the shell script with the arguments `{{ ansible_user }}` here"
The output is good and as expected when the 'name:' attributes passed to add_host have different values, i.e. the IPs '10.9.0.100' and '10.8.2.144':
$ ansible-playbook duplicate_hosts.yml

PLAY [Play 1] ***********************************************************************************************************************************************

TASK [Search database] **************************************************************************************************************************************
changed: [localhost]

TASK [Add hosts] ********************************************************************************************************************************************
changed: [localhost] => (item=10.9.0.100 user1)
changed: [localhost] => (item=10.8.2.144 user2)

PLAY [dest_nodes] *******************************************************************************************************************************************

TASK [debug] ************************************************************************************************************************************************
ok: [10.9.0.100] => {
    "msg": "Run the shell script with the arguments `user1` here\""
}
ok: [10.8.2.144] => {
    "msg": "Run the shell script with the arguments `user2` here\""
}

PLAY RECAP **************************************************************************************************************************************************
10.8.2.144                 : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
10.9.0.100                 : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
localhost                  : ok=2    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
The problem arises when the 'name:' attribute passed to add_host gets a duplicate entry, say 10.8.2.144: despite each entry having a unique 'ansible_user' value, the play ignores the first name/ansible_user pair and runs only once, with the latest entry.
$ ansible-playbook duplicate_hosts.yml

PLAY [Play 1] ***********************************************************************************************************************************************

TASK [Search database] **************************************************************************************************************************************
changed: [localhost]

TASK [Add hosts] ********************************************************************************************************************************************
changed: [localhost] => (item=10.8.2.144 user1)
changed: [localhost] => (item=10.8.2.144 user2)

PLAY [dest_nodes] *******************************************************************************************************************************************

TASK [debug] ************************************************************************************************************************************************
ok: [10.8.2.144] => {
    "msg": "Run the shell script with the arguments `user2` here\""
}

PLAY RECAP **************************************************************************************************************************************************
10.8.2.144                 : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
localhost                  : ok=2    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
Interestingly, the Add hosts task shows two entries for the add_host name 10.8.2.144 with different ansible_users, i.e. 'user1' and 'user2', but when we run against the group, only a single entry, the latest one, is executed, as seen in the output above.
I'm on the latest version of Ansible.
Can you please provide a solution where I can run the play for every unique 'ansible_user' on the same host?
In summary: I wish to run multiple tasks on the same host, first with 'user1' and then with 'user2'.
You can add an alias as the inventory hostname. Here I have used the username as the hostname (alias).
Please try this; I have not tested it.
- name: Add hosts
  add_host:
    hostname: "{{ item.split('\t')[1] }}"
    ansible_host: "{{ item.split('\t')[0] }}"
    ansible_user: "{{ item.split('\t')[1] }}"
    groups: dest_nodes
  with_items: "{{ command_result.stdout_lines }}"
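One caveat with this approach: if the same username can appear on more than one IP, using the username alone as the alias reintroduces the duplicate-name problem. An untested variation that combines the IP and the username into an alias unique per (host, user) pair:

```yaml
- name: Add hosts
  add_host:
    # Alias such as "10.8.2.144_user1", unique even when IP or user repeats
    name: "{{ item.split('\t')[0] }}_{{ item.split('\t')[1] }}"
    ansible_host: "{{ item.split('\t')[0] }}"
    ansible_user: "{{ item.split('\t')[1] }}"
    groups: dest_nodes
  with_items: "{{ command_result.stdout_lines }}"
```

Since ansible_host carries the real address, the alias itself never needs to resolve; it only has to be unique within the inventory.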
