If we change a file under host_vars during the execution of a playbook (e.g. add a new variable), how can we get that variable into hostvars in the current execution of the playbook? When the playbook is run again, the variable appears in hostvars.
UPDATE 01:
Here's an example; it doesn't work.
The task Debug 3 should display test_1 instead of VARIABLE IS NOT DEFINED!
- name: Test
  hosts: mon
  tasks:
    - name: Debug 1
      debug:
        var: hostvars.mon.test_1
    - name: Add vars for host_vars
      delegate_to: 127.0.0.1
      blockinfile:
        path: "{{ inventory_dir }}/host_vars/{{ inventory_hostname }}.yml"
        marker: "# {mark}: {{ item.key }}"
        block: |
          {{ item.key }}: {{ item.value }}
      with_dict:
        - {test_1: "test_1"}
    - name: Debug 2
      debug:
        var: hostvars.mon.test_1
    - name: Clear facts
      meta: clear_facts
    - name: Refresh inventory
      meta: refresh_inventory
    - name: Setup
      setup:
    - name: Debug 3
      debug:
        var: hostvars.mon.test_1
Result:
PLAY [Test] ********************************************************************
TASK [Gathering Facts] *********************************************************
ok: [mon]
TASK [Debug 1] *****************************************************************
ok: [mon] => {
"hostvars.mon.test_1": "VARIABLE IS NOT DEFINED!"
}
TASK [Add vars for host_vars] **************************************************
changed: [mon -> 127.0.0.1] => (item={'key': 'test_1', 'value': 'test_1'})
TASK [Debug 2] *****************************************************************
ok: [mon] => {
"hostvars.mon.test_1": "VARIABLE IS NOT DEFINED!"
}
TASK [Setup] *******************************************************************
ok: [mon]
TASK [Debug 3] *****************************************************************
ok: [mon] => {
"hostvars.mon.test_1": "VARIABLE IS NOT DEFINED!"
}
PLAY RECAP *********************************************************************
mon : ok=6 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
On a second run:
PLAY [Test] ********************************************************************
TASK [Gathering Facts] *********************************************************
ok: [mon]
TASK [Debug 1] *****************************************************************
ok: [mon] => {
"hostvars.mon.test_1": "test_1"
}
TASK [Add vars for host_vars] **************************************************
ok: [mon -> 127.0.0.1] => (item={'key': 'test_1', 'value': 'test_1'})
TASK [Debug 2] *****************************************************************
ok: [mon] => {
"hostvars.mon.test_1": "test_1"
}
TASK [Setup] *******************************************************************
ok: [mon]
TASK [Debug 3] *****************************************************************
ok: [mon] => {
"hostvars.mon.test_1": "test_1"
}
PLAY RECAP *********************************************************************
mon : ok=6 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Maybe there is a way to change hostvars manually during the run?
You can ask Ansible to re-read the inventory (including host_vars). Generally I'd say that changing the inventory on the fly is a code smell, but there are a few valid cases.
- name: Refreshing inventory, SO copypaste
  meta: refresh_inventory
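As a side note (not part of the original answer), if the goal is just to make the new value visible to later tasks in the same run, one possible approach is to also set it in memory with set_fact, since set_fact variables show up in hostvars for that host immediately. A minimal sketch, reusing the names from the question:
- name: Make the new variable available in the current run as well
  set_fact:
    test_1: "test_1"   # same value that was written to host_vars above
- name: Debug after set_fact
  debug:
    var: hostvars.mon.test_1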
Related
I have an Ansible playbook that calls a role containing many tasks and handlers. The handlers are triggered if my desired service configuration changes.
What I am looking for is a way to trigger multiple handlers on the hosts sequentially. For example, if I have two targets, target1 and target2, and also two handlers, handler1 and handler2, what I want the handler execution on the targets to look like is something like this:
RUNNING HANDLER [myrole : handler 1] ********************************************
changed: [target1]
RUNNING HANDLER [myrole : handler2] *************************************************
changed: [target1]
RUNNING HANDLER [myrole : handler 1] ********************************************
changed: [target2]
RUNNING HANDLER [myrole : handler2] *************************************************
changed: [target2]
But as is known, the normal execution of handlers on targets are as below:
RUNNING HANDLER [myrole : handler 1] ********************************************
changed: [target1]
changed: [target2]
RUNNING HANDLER [myrole : handler 2] ********************************************
changed: [target1]
changed: [target2]
That is not what I want.
I know that by using the serial option at the playbook level we can restrict parallelism, but that would come at a huge cost in time, because all of my tasks would be executed serially as well.
I have tried both the throttle option and the block directive on handlers, but neither was useful.
Flush the handlers on each host separately: dynamically create and include a file of per-host flush_handlers tasks. For example, the playbook
shell> cat pb.yml
- hosts: target1,target2
  tasks:
    - debug:
        msg: Notify handler1
      changed_when: true
      notify: handler1
    - debug:
        msg: Notify handler2
      changed_when: true
      notify: handler2
    - block:
        - copy:
            dest: "{{ playbook_dir }}/flush_handlers_serial.yml"
            content: |
              {% for host in ansible_play_hosts_all %}
              - meta: flush_handlers
                when: inventory_hostname == '{{ host }}'
              {% endfor %}
          delegate_to: localhost
        - include_tasks: flush_handlers_serial.yml
      run_once: true
      when: flush_handlers_serial|d(false)|bool
  handlers:
    - name: handler1
      debug:
        msg: Run handler1
    - name: handler2
      debug:
        msg: Run handler2
by default runs the handlers in parallel (see linear strategy)
shell> ansible-playbook pb.yml
PLAY [target1,target2] ***************************************************************************************
TASK [debug] *************************************************************************************************
changed: [target1] =>
msg: Notify handler1
changed: [target2] =>
msg: Notify handler1
TASK [debug] *************************************************************************************************
changed: [target2] =>
msg: Notify handler2
changed: [target1] =>
msg: Notify handler2
TASK [copy] **************************************************************************************************
skipping: [target1]
TASK [include_tasks] *****************************************************************************************
skipping: [target1]
RUNNING HANDLER [handler1] ***********************************************************************************
ok: [target1] =>
msg: Run handler1
ok: [target2] =>
msg: Run handler1
RUNNING HANDLER [handler2] ***********************************************************************************
ok: [target1] =>
msg: Run handler2
ok: [target2] =>
msg: Run handler2
PLAY RECAP ***************************************************************************************************
target1: ok=4 changed=2 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
target2: ok=4 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
When you enable flush_handlers_serial=true the file below will be created and included
shell> cat flush_handlers_serial.yml
- meta: flush_handlers
  when: inventory_hostname == 'target1'
- meta: flush_handlers
  when: inventory_hostname == 'target2'
This will run the handlers serially, similar to the host_pinned strategy
shell> ansible-playbook pb.yml -e flush_handlers_serial=true
PLAY [target1,target2] ***************************************************************************************
TASK [debug] *************************************************************************************************
changed: [target1] =>
msg: Notify handler1
changed: [target2] =>
msg: Notify handler1
TASK [debug] *************************************************************************************************
changed: [target1] =>
msg: Notify handler2
changed: [target2] =>
msg: Notify handler2
TASK [copy] **************************************************************************************************
changed: [target1 -> localhost]
TASK [include_tasks] *****************************************************************************************
included: /export/scratch/tmp7/test-172/flush_handlers_serial.yml for target1
TASK [meta] **************************************************************************************************
RUNNING HANDLER [handler1] ***********************************************************************************
ok: [target1] =>
msg: Run handler1
RUNNING HANDLER [handler2] ***********************************************************************************
ok: [target1] =>
msg: Run handler2
RUNNING HANDLER [handler1] ***********************************************************************************
ok: [target2] =>
msg: Run handler1
TASK [meta] **************************************************************************************************
skipping: [target1]
RUNNING HANDLER [handler2] ***********************************************************************************
ok: [target2] =>
msg: Run handler2
PLAY RECAP ***************************************************************************************************
target1: ok=6 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
target2: ok=4 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
I've found a way to solve my problem as a workaround.
The file handlers/main.yml would be as below:
---
- name: Trigger some handlers sequentially.
  include_tasks: trigger_handlers_sequentialy.yml
  run_once: true
  loop: "{{ ansible_play_hosts_all }}"
  listen: restart_my_service
So, the above handler connects to just one target (because of the run_once option) and loops over the targets once.
Then, all the handlers that are expected to run on the targets serially should be placed in the tasks/trigger_handlers_sequentialy.yml file:
---
- name: handler1
  debug:
    msg: run handler1
  delegate_to: "{{ item }}"
- name: handler2
  debug:
    msg: run handler2
  delegate_to: "{{ item }}"
So, what happens during the playbook execution will be something like this:
RUNNING HANDLER [myrole : Trigger some handlers sequentially.] ***********************************************************************************************************************************************
included: ~/ansible/roles/myrole/tasks/trigger_handlers_sequentialy.yml for target1 => (item=target1)
RUNNING HANDLER [myrole : run handler1] ***********************************************************************************************************************************************
ok: [target1]
RUNNING HANDLER [myrole : run handler2] **********************************************************************************************************************************************
ok: [target1]
included: ~/ansible/roles/myrole/tasks/trigger_handlers_sequentialy.yml for target1 => (item=target2)
RUNNING HANDLER [myrole : run handler1] ***********************************************************************************************************************************************
ok: [target1 -> target2(192.168.4.10)]
RUNNING HANDLER [myrole : run handler2] **********************************************************************************************************************************************
ok: [target1 -> target2(192.168.4.10)]
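For completeness, here is a minimal usage sketch (not from the original answer) of how a task in the role could trigger this sequential handler chain via the listen topic; the task name and file paths are hypothetical:
- name: Deploy my_service configuration    # hypothetical task
  template:
    src: my_service.conf.j2                # hypothetical template
    dest: /etc/my_service/my_service.conf  # hypothetical path
  notify: restart_my_service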
I'm trying to copy an Nginx config file only if the destination file does not contain a particular string.
I thought this would work:
- name: Copy nginx config file
  template:
    src: templates/nginx.conf
    dest: /etc/nginx/sites-enabled/default
    validate: grep -l 'managed by Certbot' %s
But this task fails if "managed by Certbot" isn't in the file and stops the playbook run.
How can I just skip the template copy if the destination file already has that pattern? Maybe there's a better way to get the same result?
Inspired by this other answer.
You can check for the presence of content in a file using the lineinfile module in check mode, then use the result as a condition on your template task. The default in the condition copes with the case where the file does not exist and the found attribute is not in the registered result.
---
- name: Check for presence of "managed by Certbot" in file
  lineinfile:
    path: /etc/nginx/sites-enabled/default
    regexp: ".*# managed by Certbot.*"
    state: absent
  check_mode: yes
  changed_when: false
  register: certbot_managed

- name: Copy nginx config file when not certbot managed
  template:
    src: templates/nginx.conf
    dest: /etc/nginx/sites-enabled/default
  when: certbot_managed.found | default(0) == 0
You could use the failed_when condition and base it on the failure message that validate generates ('failed to validate'):
- name: Copy nginx config file
  template:
    src: templates/nginx.conf
    dest: /etc/nginx/sites-enabled/default
    validate: grep -l 'managed by Certbot' %s
  failed_when:
    - copy_config_file.failed
    - copy_config_file.msg != 'failed to validate'
  register: copy_config_file
Note: in when and *_when, giving a list of conditions is equivalent to list.0 and list.1 and ...
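For illustration, the two-item list used above is therefore equivalent to this single expression:
failed_when: copy_config_file.failed and copy_config_file.msg != 'failed to validate'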
Given the playbook:
- hosts: all
  gather_facts: no
  tasks:
    - copy:
        dest: templates/nginx.conf
        content: "{{ content | default('some random content') }}"
    - copy:
        dest: /etc/nginx/sites-enabled/default
        content: "blank"
    - template:
        src: templates/nginx.conf
        dest: /etc/nginx/sites-enabled/default
        validate: grep -l 'managed by Certbot' %s
      failed_when:
        - copy_config_file.failed
        - copy_config_file.msg != 'failed to validate'
      register: copy_config_file
    - shell: cat templates/nginx.conf
      register: template_content
      failed_when: false
    - shell: cat /etc/nginx/sites-enabled/default
      register: file_content
      failed_when: false
    - debug:
        var: template_content.stdout
    - debug:
        var: file_content.stdout
When run via:
ansible-playbook play.yml
It gives:
PLAY [all] *******************************************************************************************************
TASK [copy] ******************************************************************************************************
changed: [localhost]
TASK [copy] ******************************************************************************************************
changed: [localhost]
TASK [template] **************************************************************************************************
ok: [localhost]
TASK [shell] *****************************************************************************************************
changed: [localhost]
TASK [shell] *****************************************************************************************************
changed: [localhost]
TASK [debug] *****************************************************************************************************
ok: [localhost] => {
"template_content.stdout": "some random content"
}
TASK [debug] *****************************************************************************************************
ok: [localhost] => {
"file_content.stdout": "blank"
}
PLAY RECAP *******************************************************************************************************
localhost : ok=7 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Now, when run with
ansible-playbook play.yml -e "content='managed by Certbot\nsome other content'"
with an extra parameter to modify the content of the template, it gives:
PLAY [all] *******************************************************************************************************
TASK [copy] ******************************************************************************************************
ok: [localhost]
TASK [copy] ******************************************************************************************************
changed: [localhost]
TASK [template] **************************************************************************************************
changed: [localhost]
TASK [shell] *****************************************************************************************************
changed: [localhost]
TASK [shell] *****************************************************************************************************
changed: [localhost]
TASK [debug] *****************************************************************************************************
ok: [localhost] => {
"template_content.stdout": "managed by Certbot\nsome other content"
}
TASK [debug] *****************************************************************************************************
ok: [localhost] => {
"file_content.stdout": "managed by Certbot\nsome other content"
}
PLAY RECAP *******************************************************************************************************
localhost : ok=7 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
I want to run test.yaml, which contains a lot of tasks. I expect the first host in my inventory to run all the tasks in test.yaml before moving on to the next one. But the actual result is that each task in test.yaml runs on all three hosts at a time. How can I implement this?
This is my inventory
[host1]
192.168.1.1
192.168.1.2
192.168.1.3
This is my task file
---
# test.yaml
- name: task1
  shell: echo task1
- name: task2
  shell: echo task2
- name: task3
  shell: echo task3
And this is how I include the task file in my playbook
- name: Multiple machine loops include
  include: test.yaml
  delegate_to: "{{ item }}"
  loop: "{{ groups['host1'] }}"
The actual result is
TASK [Multiple machine loops include] **********************************************************************************************************************************************************
included: /home/learn/main.yml for 192.168.1.1, 192.168.1.2, 192.168.1.3 => (item=192.168.1.1)
included: /home/learn/main.yml for 192.168.1.1, 192.168.1.2, 192.168.1.3 => (item=192.168.1.2)
included: /home/learn/main.yml for 192.168.1.1, 192.168.1.2, 192.168.1.3 => (item=192.168.1.3)
TASK [task1] *********************************************************************************************************************************************************
ok: [192.168.11.1]
ok: [192.168.11.2]
ok: [192.168.11.3]
TASK [task2] *********************************************************************************************************************************************************
changed: [192.168.11.1]
changed: [192.168.11.2]
changed: [192.168.11.3]
TASK [task3] ******************************************************************************************************************************************************
changed: [192.168.11.1]
changed: [192.168.11.2]
changed: [192.168.11.3]
TASK [task1] *******************************************************************************************************************************************************
ok: [192.168.11.1]
ok: [192.168.11.2]
ok: [192.168.11.3]
TASK [task2] ********************************************************************************************************************************************************
changed: [192.168.11.1]
changed: [192.168.11.2]
changed: [192.168.11.3]
TASK [task3] *********************************************************************************************************************************************************
changed: [192.168.11.1]
changed: [192.168.11.2]
changed: [192.168.11.3]
TASK [task1] *********************************************************************************************************************************************************
ok: [192.168.11.1]
ok: [192.168.11.2]
ok: [192.168.11.3]
TASK [task2] ********************************************************************************************************************************************************
changed: [192.168.11.1]
changed: [192.168.11.2]
changed: [192.168.11.3]
TASK [task3] *********************************************************************************************************************************************************
changed: [192.168.11.1]
changed: [192.168.11.2]
changed: [192.168.11.3]
PLAY RECAP ***********************************************************************************************************************************************************
192.168.11.1 : ok=12 changed=6 unreachable=0 failed=0
192.168.11.2 : ok=12 changed=6 unreachable=0 failed=0
192.168.11.3 : ok=12 changed=6 unreachable=0 failed=0
What I expect is:
TASK [task1] ***********************************************************************************************************************************************************
changed: [192.168.1.1]
TASK [task2] ***********************************************************************************************************************************************************
changed: [192.168.1.1]
TASK [task3] ***********************************************************************************************************************************************************
changed: [192.168.1.1]
TASK [task1] ***********************************************************************************************************************************************************
changed: [192.168.1.2]
TASK [task2] ***********************************************************************************************************************************************************
changed: [192.168.1.2]
TASK [task3] ***********************************************************************************************************************************************************
changed: [192.168.1.2]
TASK [task1] ***********************************************************************************************************************************************************
changed: [192.168.1.3]
TASK [task2] ***********************************************************************************************************************************************************
changed: [192.168.1.3]
TASK [task3] ***********************************************************************************************************************************************************
changed: [192.168.1.3]
What you are trying to do is run a set of tasks serially on a set of hosts. Vladimir's answer points out why your current implementation cannot achieve that.
You could do it with an include and a loop (see below if you really need that for a particular reason), but the best way IMO is to use serial in your play, as described in the documentation for rolling upgrades.
For both examples below, I created a "fake" inventory file with 3 declared hosts, all using the local connection type:
[my_group]
host1 ansible_connection=local
host2 ansible_connection=local
host3 ansible_connection=local
Serial run (preferred)
This is the test.yml playbook for the serial run
---
- name: Serial run demo
  hosts: my_group
  serial: 1
  tasks:
    - name: task1
      shell: echo task 1
    - name: task2
      shell: echo task 2
    - name: task3
      shell: echo task 3
And the result
$ ansible-playbook -i inventory test.yml
PLAY [Serial run demo] ******************************************************************************************************
TASK [Gathering Facts] ******************************************************************************************************
ok: [host1]
TASK [task1] ****************************************************************************************************************
changed: [host1]
TASK [task2] ****************************************************************************************************************
changed: [host1]
TASK [task3] ****************************************************************************************************************
changed: [host1]
PLAY [Serial run demo] ******************************************************************************************************
TASK [Gathering Facts] ******************************************************************************************************
ok: [host2]
TASK [task1] ****************************************************************************************************************
changed: [host2]
TASK [task2] ****************************************************************************************************************
changed: [host2]
TASK [task3] ****************************************************************************************************************
changed: [host2]
PLAY [Serial run demo] ******************************************************************************************************
TASK [Gathering Facts] ******************************************************************************************************
ok: [host3]
TASK [task1] ****************************************************************************************************************
changed: [host3]
TASK [task2] ****************************************************************************************************************
changed: [host3]
TASK [task3] ****************************************************************************************************************
changed: [host3]
PLAY RECAP ******************************************************************************************************************
host1 : ok=4 changed=3 unreachable=0 failed=0
host2 : ok=4 changed=3 unreachable=0 failed=0
host3 : ok=4 changed=3 unreachable=0 failed=0
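As a side note (not in the original answer), serial is not limited to one host at a time; it also accepts larger batch sizes or percentages if strict host-by-host execution is not required. A minimal sketch of the same play processing two hosts per batch:
- name: Serial run demo, two hosts per batch
  hosts: my_group
  serial: 2          # or e.g. "50%" to size the batch relative to the group
  tasks:
    - name: task1
      shell: echo task 1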
Include run (alternative if really needed)
If you really need to use include and delegate to a set of hosts, this is still possible, but you need:
To change your included file to add delegate_to to each task, using a variable.
To target a single host in the play running your include, not your group of hosts as demonstrated in your question.
Note that in your question you are using include, which is already announced for future deprecation (see the notes in the module documentation). You should prefer the include_* and import_* replacement modules, in your case include_tasks.
This is the test_include.yml file
---
- name: task1
  shell: echo task 1
  delegate_to: "{{ delegate_host }}"
- name: task2
  shell: echo task 2
  delegate_to: "{{ delegate_host }}"
- name: task3
  shell: echo task 3
  delegate_to: "{{ delegate_host }}"
This is the test.yml playbook:
---
- name: Include loop demo
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Include needed tasks for each hosts
      include_tasks: test_include.yml
      loop: "{{ groups['my_group'] }}"
      loop_control:
        loop_var: delegate_host
And the result
$ ansible-playbook -i inventory test.yml
PLAY [Include loop demo] *********************************************************************
TASK [Include needed tasks for each hosts] ***************************************************
included: /tmp/testso/test_include.yml for localhost
included: /tmp/testso/test_include.yml for localhost
included: /tmp/testso/test_include.yml for localhost
TASK [task1] *********************************************************************************
changed: [localhost -> host1]
TASK [task2] *********************************************************************************
changed: [localhost -> host1]
TASK [task3] *********************************************************************************
changed: [localhost -> host1]
TASK [task1] *********************************************************************************
changed: [localhost -> host2]
TASK [task2] *********************************************************************************
changed: [localhost -> host2]
TASK [task3] *********************************************************************************
changed: [localhost -> host2]
TASK [task1] *********************************************************************************
changed: [localhost -> host3]
TASK [task2] *********************************************************************************
changed: [localhost -> host3]
TASK [task3] *********************************************************************************
changed: [localhost -> host3]
PLAY RECAP ***********************************************************************************
localhost : ok=12 changed=9 unreachable=0 failed=0
'include' is not delegated. Quoting from Delegation:
Be aware that it does not make sense to delegate all tasks, debug, add_host, include, etc always get executed on the controller.
To see what's going on, run the simplified playbook below
- hosts: localhost
  tasks:
    - name: Multiple machine loops include
      include: test.yaml
      delegate_to: "{{ item }}"
      loop: "{{ groups['host1'] }}"
with 'test.yaml'
- debug:
    msg: "{{ inventory_hostname }} {{ item }}"
For example, with the inventory
host1:
  hosts:
    test_01:
    test_02:
    test_03:
the abridged output below shows that the included tasks run on localhost, despite the fact that the include was delegated to another host.
TASK [debug]
ok: [localhost] => {
"msg": "localhost test_01"
}
TASK [debug]
ok: [localhost] => {
"msg": "localhost test_02"
}
TASK [debug]
ok: [localhost] => {
"msg": "localhost test_03"
}
PLAY RECAP
localhost : ok=6 changed=0 unreachable=0 failed=0
The solution is to move 'delegate_to' to the included task. The play below
- hosts: localhost
  tasks:
    - name: Multiple machine loops include
      include: test.yaml
      loop: "{{ groups['host1'] }}"
with the included task
- command: hostname
  register: result
  delegate_to: "{{ item }}"
- debug: var=result.stdout
gives (abridged):
TASK [command]
changed: [localhost -> test_01]
TASK [debug]
ok: [localhost] => {
"result.stdout": "test_01.example.com"
}
TASK [command]
changed: [localhost -> test_02]
TASK [debug]
ok: [localhost] => {
"result.stdout": "test_02.example.com"
}
TASK [command]
changed: [localhost -> test_03]
TASK [debug]
ok: [localhost] => {
"result.stdout": "test_03.example.com"
}
PLAY RECAP
localhost : ok=9 changed=3 unreachable=0 failed=0
I have two otherwise identical tasks in the same playbook, differing only in their condition:
when: var == "true"
when: var == "false"
Both tasks use register: result, but the first one fails and the second one succeeds.
I tried using block: instead of just when:, and the behaviour is the same.
bug-when.yml
---
- hosts: localhost
  tasks:
    - name: when true
      debug:
        msg: "this is true"
      register: result
      when: var == "true"

    - name: when false
      debug:
        msg: "this is false"
      register: result
      when: var == "false"

    - name: print result
      debug:
        msg: "{{ result }}"
Example running it:
ansible-playbook bug-when.yml -e var=true
PLAY [localhost] ***************************************************************
TASK [setup] *******************************************************************
Thursday 09 May 2019 18:51:35 +0000 (0:00:02.018) 0:00:02.018 **********
ok: [localhost]
TASK [when true] ***************************************************************
Thursday 09 May 2019 18:51:35 +0000 (0:00:00.437) 0:00:02.456 **********
ok: [localhost] => {
"msg": "this is true"
}
TASK [when false] **************************************************************
Thursday 09 May 2019 18:51:35 +0000 (0:00:00.027) 0:00:02.483 **********
skipping: [localhost]
TASK [print result] ************************************************************
Thursday 09 May 2019 18:51:36 +0000 (0:00:00.023) 0:00:02.506 **********
ok: [localhost] => {
"msg": {
"changed": false,
"skip_reason": "Conditional check failed",
"skipped": true
}
}
PLAY RECAP *********************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0
Second example running it:
ansible-playbook bug-when.yml -e var=false
PLAY [localhost] ***************************************************************
TASK [setup] *******************************************************************
Thursday 09 May 2019 18:52:01 +0000 (0:00:02.019) 0:00:02.019 **********
ok: [localhost]
TASK [when true] ***************************************************************
Thursday 09 May 2019 18:52:01 +0000 (0:00:00.453) 0:00:02.472 **********
skipping: [localhost]
TASK [when false] **************************************************************
Thursday 09 May 2019 18:52:01 +0000 (0:00:00.024) 0:00:02.497 **********
ok: [localhost] => {
"msg": "this is false"
}
TASK [print result] ************************************************************
Thursday 09 May 2019 18:52:02 +0000 (0:00:00.028) 0:00:02.525 **********
ok: [localhost] => {
"msg": {
"changed": false,
"msg": "this is false"
}
}
PLAY RECAP *********************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=0
To expand upon what @b.enoit.be said:
When you have a task like this:
- name: some task
  debug:
    msg: this is an example
  when: false
  register: result
This will update result even if the task is skipped. This is what permits you, in a subsequent task, to see if this task was skipped or not:
- name: check if task was skipped
  debug:
    msg: previous task was skipped
  when: result is skipped
Consider registering a different variable in each task, and then:
- name: when true
  debug:
    msg: "this is true"
  register: result1
  when: var == "true"

- name: when false
  debug:
    msg: "this is false"
  register: result2
  when: var == "false"

- name: print result
  debug:
    msg: "{{ result1.msg if result2 is skipped else result2.msg }}"
The behaviour you see here happens because the result is always registered, even for a skipped task, mostly because you can reference a task's own register inside that task, and this would fail if the task did not always register itself.
So what you need to do is use two different register variables and act upon their result and skipped attributes to display your message properly.
Here is the playbook:
---
- hosts: localhost
  tasks:
    - name: when true
      debug:
        msg: "this is true"
      register: result_is_true
      when: var == "true"

    - name: when false
      debug:
        msg: "this is false"
      register: result_is_false
      when: var == "false"

    - name: print result
      debug:
        msg: "{{ result_is_true if result_is_false is skipped else result_is_false }}"
Here is the run when var is true
$ ansible-playbook so.yml -e var=true
PLAY [localhost] *******************************************************************************************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************************************************************************
ok: [host1]
TASK [when true] *******************************************************************************************************************************************************************************************
ok: [host1] => {
"msg": "this is true"
}
TASK [when false] ******************************************************************************************************************************************************************************************
skipping: [host1]
TASK [print result] ****************************************************************************************************************************************************************************************
ok: [host1] => {
"msg": {
"changed": false,
"failed": false,
"msg": "this is true"
}
}
PLAY RECAP *************************************************************************************************************************************************************************************************
host1 : ok=3 changed=0 unreachable=0 failed=0
And here is the result when var is false
$ ansible-playbook so.yml -e var=false
PLAY [localhost] *******************************************************************************************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************************************************************************
ok: [host1]
TASK [when true] *******************************************************************************************************************************************************************************************
skipping: [host1]
TASK [when false] ******************************************************************************************************************************************************************************************
ok: [host1] => {
"msg": "this is false"
}
TASK [print result] ****************************************************************************************************************************************************************************************
ok: [host1] => {
"msg": {
"changed": false,
"failed": false,
"msg": "this is false"
}
}
PLAY RECAP *************************************************************************************************************************************************************************************************
host1 : ok=3 changed=0 unreachable=0 failed=0
Needless to say, I would guess you simplified your issue for an MCVE, but your play can actually be as simple as:
---
- hosts: localhost
  tasks:
    - name: print result
      debug:
        msg: "{{ 'this is true' if var == true else 'this is false' }}"
Which runs:
$ ansible-playbook so.yml -e var=false
PLAY [localhost] ***********************************************************************************************************************************************************************************
TASK [Gathering Facts] *****************************************************************************************************************************************************************************
ok: [host1]
TASK [print result] ********************************************************************************************************************************************************************************
ok: [host1] => {
"msg": "this is false"
}
PLAY RECAP *****************************************************************************************************************************************************************************************
host1 : ok=2 changed=0 unreachable=0 failed=0
$ ansible-playbook so.yml -e var=true
PLAY [localhost] ***********************************************************************************************************************************************************************************
TASK [Gathering Facts] *****************************************************************************************************************************************************************************
ok: [host1]
TASK [print result] ********************************************************************************************************************************************************************************
ok: [host1] => {
"msg": "this is true"
}
PLAY RECAP *****************************************************************************************************************************************************************************************
host1 : ok=2 changed=0 unreachable=0 failed=0
And for reference, here is a Jinja inline if-expression question: https://stackoverflow.com/a/14215034/2123530
Ansible v2.6.3
I have a simple task which gets the AWS ARNs in my Jenkins ECS cluster
tasks:
  - command: aws ecs list-container-instances --cluster jenkins
    register: jenkins_ecs_containers
  - debug: var=jenkins_ecs_containers.stdout
and it has the following output
TASK [debug] *******************************************************************
ok: [localhost] => {
"jenkins_ecs_containers.stdout": {
"containerInstanceArns": [
"arn:aws:ecs:us-east-1:arn0",
"arn:aws:ecs:us-east-1:arn1"
]
}
}
How can I iterate over the ARNs? I tried
- debug: var=item
  with_items: jenkins_ecs_containers.stdout.containerInstanceArns
gives
TASK [debug] *******************************************************************
ok: [localhost] => (item=jenkins_ecs_containers.stdout.containerInstanceArns) => {
"item": "jenkins_ecs_containers.stdout.containerInstanceArns"
}
or
- debug: var=item
  with_items: "{{ jenkins_ecs_containers.stdout.containerInstanceArns }}"
gives
TASK [debug] *******************************************************************
fatal: [localhost]: FAILED! => {"msg": "'ansible.utils.unsafe_proxy.AnsibleUnsafeText object' has no attribute 'containerInstanceArns'"}
to retry, use: --limit #/Users/cfouts/git-repos/ansible/playbooks/loop.retry
Thanks!
I created a file with your output and read it back, so I used set_fact; otherwise it's just a string, not a JSON object:
tasks:
  - command: cat files/stdout.txt
    register: result
  - debug: var=result.stdout
  - set_fact:
      jenkins_ecs_containers: "{{ result.stdout }}"
  - debug:
      msg: "{{ item }}"
    with_items: "{{ jenkins_ecs_containers.containerInstanceArns }}"
This gave me the following output:
PLAY [localhost] ***************************************************************
TASK [Gathering Facts] *********************************************************
ok: [localhost]
TASK [command] *****************************************************************
changed: [localhost]
TASK [debug] *******************************************************************
ok: [localhost] => {
"result.stdout": {
"containerInstanceArns": [
"arn:aws:ecs:us-east-1:arn0",
"arn:aws:ecs:us-east-1:arn1"
]
}
}
TASK [set_fact] ****************************************************************
ok: [localhost]
TASK [debug] *******************************************************************
ok: [localhost] => (item=None) => {
"msg": "arn:aws:ecs:us-east-1:arn0"
}
ok: [localhost] => (item=None) => {
"msg": "arn:aws:ecs:us-east-1:arn1"
}
PLAY RECAP *********************************************************************
localhost : ok=5 changed=1 unreachable=0 failed=0
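As a side note (not in the original answer), the registered stdout is just a string, and the set_fact step above ends up templating it into a structure here. An alternative that makes the parsing explicit is the from_json filter, applied directly to the register name from the question; a minimal sketch:
- debug:
    var: item
  with_items: "{{ (jenkins_ecs_containers.stdout | from_json).containerInstanceArns }}"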
You can iterate over it like this:
- debug:
    msg: "{{ item[1] }}"
  with_subelements:
    - "{{ jenkins_ecs_containers }}"
    - containerInstanceArns
Go through this link; it will make it clearer.