Ansible playbook skip task on last node in serial execution - ansible

I want to run some Ansible tasks on 4 servers one by one, i.e. in a serial manner, with a pause in between. I have added the pause at the end of the playbook, but I want it to be skipped on the last server; otherwise it waits for no reason. Please let me know how to implement this.
---
- hosts: server1,server2,server3,server4
  serial: 1
  vars_files:
    - ./vars.yml
  tasks:
    - name: Variable test
      pause:
        minutes: 1

A really interesting problem which forced me to look for an actual solution. Here is the quickest one I came up with.
The Ansible special variables documentation defines the ansible_play_hosts_all variable as follows:
List of all the hosts that were targeted by the play
The hosts in that variable are listed in the order they were found in the inventory.
Provided you use the default inventory order for your play, you can set a test that will trigger the task unless the current host is the last one in that list:
when: inventory_hostname != ansible_play_hosts_all[-1]
As reported by @Vladimir in the comments below, this approach will break if you change the play's order parameter from the default.
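Applied to the original playbook, the fix could look like this (a sketch using the hostnames from the question):

```yaml
- hosts: server1,server2,server3,server4
  serial: 1
  tasks:
    # ... the actual per-host tasks go here ...
    - name: Pause before moving to the next host
      pause:
        minutes: 1
      # Skip the pause when this host is the last one in the play
      when: inventory_hostname != ansible_play_hosts_all[-1]
```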

The playbook below does the job
- hosts: all
  serial: 1
  vars:
    completed: false
  tasks:
    - set_fact:
        completed: true

    - block:
        - debug:
            msg: All completed. End of play.
        - meta: end_play
      when: "groups['all']|
             map('extract', hostvars, 'completed')|
             list is all"

    - name: Variable test
      pause:
        minutes: 1
Notes:
- see any/all
- see Extracting values from containers
- see hostvars
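A minimal illustration of the `is all` test used above (a sketch, runnable against localhost): the test returns true only when every element of the list is truthy.

```yaml
- hosts: localhost
  gather_facts: false
  tasks:
    - debug:
        msg: "{{ [true, true] is all }} / {{ [true, false] is all }}"
```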

Related

Ansible Run second playbook only if the first one is done correctly

I'll tell you about my scenario. I have a hosts file with two servers and two playbooks.
I need to do the following: first run a playbook on server number 1 and, if there are no errors, run the second playbook on server number 2.
What condition should I use?
- name: Execute the First Play
import_playbook: first-playbook-to-run.yml
- name: Run the Second Playbook
import_playbook: second-playbook-to-run.yml
Any help?
Regards,
Try the below. Note that import_playbook is a playbook-level statement and does not accept task keywords such as delegate_to or run_once; instead, set the target hosts inside each imported playbook (hosts: server1 in the first, hosts: server2 in the second). Imported playbooks run in order, and when the first play fails on all of its hosts, Ansible aborts the run before the second playbook starts:
- name: Execute the First Play
  import_playbook: first-playbook-to-run.yml

- name: Run the Second Playbook
  import_playbook: second-playbook-to-run.yml
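The imported playbooks could look like this (a sketch; the host name comes from the question, any_errors_fatal is an assumption added so a single failure stops the whole run, and the ping task is only a placeholder):

```yaml
# first-playbook-to-run.yml (hypothetical contents)
- hosts: server1
  any_errors_fatal: true
  tasks:
    - name: Do the first server's work (placeholder task)
      ansible.builtin.ping:
```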

How to end a whole play while using serial (Ansible 2.9)

Using Ansible 2.9, how do you end a playbook early while also using serial?
When I run the following rolling-upgrade code, the prompt executes for each target system. When using serial, run_once only seems to apply to the current batch. I want it to execute only once.
- hosts: all
  become: yes
  serial: 1
  handlers:
    - include: handlers/main.yml
  pre_tasks:
    - name: Populate service facts
      service_facts:

    - name: Prompt
      pause:
        prompt: "NOTE: You are running a dangerous playbook. Do you want to continue? (yes/no)"
      register: confirm_input
      run_once: True

    - name: end play if user didn't enter yes
      meta: end_play
      when: confirm_input.user_input | default("yes") == "no"
      run_once: True
  tasks:
    # (other stuff)
run_once runs the task once per serial batch. That means with serial: 1 you will be asked to confirm as many times as there are targets in the play.
Check the Ansible docs: https://docs.ansible.com/ansible/latest/user_guide/playbooks_strategies.html#running-on-a-single-machine-with-run-once
When used together with serial, tasks marked as run_once will be run on one host in each serial batch. If the task must run only once regardless of serial mode, use the when: inventory_hostname == ansible_play_hosts_all[0] construct.
To avoid the default Ansible behaviour and do what you want, follow the docs and use when: inventory_hostname == ansible_play_hosts_all[0] on the prompt task.
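Applied to the playbook above, the two pre_tasks could become something like this (a sketch; since the prompt is skipped on later batches, those batches read the first host's answer through hostvars):

```yaml
pre_tasks:
  - name: Prompt
    pause:
      prompt: "NOTE: You are running a dangerous playbook. Do you want to continue? (yes/no)"
    register: confirm_input
    # Only the very first host of the play asks, regardless of serial batches
    when: inventory_hostname == ansible_play_hosts_all[0]

  - name: end play if user didn't enter yes
    meta: end_play
    # Look up the first host's registered answer; default to "yes" if undefined
    when: hostvars[ansible_play_hosts_all[0]].confirm_input.user_input | default("yes") == "no"
```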

Run tasks on different hosts within an imported task

The calling playbook has:
- hosts: ssh_servers
  tasks:
    - import_tasks: create_files.yml
Then, in create_files.yml, I'd like to run some tasks on hosts other than ssh_servers, such as:
- hosts: other_servers
  tasks:
    - file:
I get: ERROR! conflicting action statements: hosts, tasks.
Is this because I'm trying to run against hosts that were never included in the calling play?
Is there a way to accomplish this other than having the calling playbook use:
- hosts:
    - ssh_servers
    - other_servers
  tasks:
    - import_tasks: create_files.yml
Thank you.
Is this because I'm trying to run against hosts that were never included in the calling task? Is there a way to accomplish this other than in the calling playbook?
I believe the answer is yes, although it'll be weird and could cause some confusion for subsequent folks who interact with your playbook.
Given a hypothetical create_files.yml of:
- name: create /tmp/hello_world on hosts "not_known_at_launch_time"
  file:
    path: /tmp/hello_world
    state: touch  # 'touch' creates the file; the file module has no 'present' state
  delegate_to: '{{ item }}'
  with_items: '{{ groups["not_known_at_launch_time"] }}'
then the glue needed to bridge them together is the dynamic creation of a group plus that delegate_to: keyword:
- hosts: ssh_hosts
  tasks:
    - add_host:
        groups: not_known_at_launch_time
        name: secret-host-0
        ansible_host: 192.168.1.1 # or whatever
        # ... other hostvars here ...

    - include_tasks: create_files.yml
It may be possible to combine those inside create_files.yml via some shared vars: that say which host-and-IP should be added to the magic group name, which also has the benefit of keeping the magic group name localized to the file that consumes it.
BE AWARE: I did actually test this, but not extensively, so there may be some weird things, such as needing run_once: yes on those tasks to keep them from running groups.ssh_hosts|length times, or similar.
As Vladimir correctly pointed out, what you actually want is to make that relationship formal:
- hosts: ssh_hosts
  tasks:
    # ... whatever tasks you had before
    - add_host: # ... as before ...

- hosts: anonymous_group_name_goes_here
  tasks:
    - include_tasks: create_files.yml

- hosts: ssh_hosts
  tasks:
    - debug:
        msg: and now you are back to the ssh_hosts to pick up what they were supposed to be doing when you stopped to post on SO

How to execute task on all hosts from group when playbook is executed with limited hosts?

Scenario
I have a group A in my inventory, where A contains hosts a1, a2 and a3. This means that I can write in my playbook X.yml:
- hosts: A
  roles:
    - role: r
The problem is that playbook X is started with a limited set of hosts: the ansible-playbook run of X is limited to host a1. This playbook invokes role r (which is executed on host a1). I would not like to change this behaviour (in other words, I would like to preserve this limitation, don't ask why please).
Question
Is it possible to write task in role r in such way that it will be executed on all hosts from group A even if playbook is limited to host a1? Please keep in mind that my inventory contains group A.
If not, could you suggest me another approach?
The one approach I can think of is:
- hosts: A
  tasks:
    - name: "This task"
I do not know for certain, but this might work:
- name: Run task on hosts in group A
  some_random_module:
    var1: value1
    var2: value2
  delegate_to: "{{ item }}"
  with_items: "{{ groups['A'] }}"
No promises.
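As a more concrete sketch (the module and file path are illustrative; run_once keeps the loop from repeating on every host the play actually runs on):

```yaml
- name: Touch a marker file on every host in group A, even under --limit a1
  ansible.builtin.file:
    path: /tmp/marker  # hypothetical path, just for illustration
    state: touch
  delegate_to: "{{ item }}"
  run_once: true
  loop: "{{ groups['A'] }}"
```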

How to avoid a playbook to run when no host matches?

Use case: users can provide a host name which will trigger a playbook run. In case the hostname has a typo, I want the complete playbook run to fail when "no hosts matched". I want it to fail so I can detect the failure in Bamboo (which I use for CD/CI to run the playbook).
I have done quite extensive research. It seems it is intended behavior that the playbook exits with exit code 0 when no host matches. Here is one indication I found. I agree that the general behavior should be like this.
So I need for my use case an extra check. I tried the following:
- name: Deploy product
  hosts: "{{ target_hosts }}"
  gather_facts: no
  any_errors_fatal: true
  pre_tasks:
    - name: Check for a valid target host
      fail:
        msg: "The provided host is not known"
      when: target_hosts not in groups.tomcat_servers
But since there is no host match, the play does not run at all. That is OK, but it still ends with exit code 0, so I cannot fail the run in my automation system (Bamboo).
Because of this, I am looking for a way to produce an exit code != 0 when no host matches.
Add a play which would set a fact if a host matched, then check that fact in a second play:
- name: Check hosts
  hosts: "{{ target_hosts }}"
  gather_facts: no
  tasks:
    - set_fact:
        hosts_confirmed: true
      delegate_to: localhost
      delegate_facts: true

- name: Verify hosts
  hosts: localhost
  gather_facts: no
  tasks:
    - assert:
        that: hosts_confirmed | default(false)

- name: The real play
  hosts: "{{ target_hosts }}"
  # ...
