How to stop the playbook if one play fails - ansible

I have the following playbook with 3 plays. When one of the plays fails, the next plays still execute. I think this is because each play runs against a different host target.
I would like to avoid this and have the playbook stop as soon as one play fails. Is that possible?
---
- name: create the EC2 instances
  hosts: localhost
  any_errors_fatal: yes
  connection: local
  tasks:
    - ...

- name: configure instances
  hosts: appserver
  any_errors_fatal: yes
  gather_facts: true
  tasks:
    - ...

- name: Add to load balancer
  hosts: localhost
  any_errors_fatal: yes
  vars:
    component: travelmatrix
  tasks:
    - ...

You can use any_errors_fatal, which stops the play as soon as any error occurs:
- name: create the EC2 instances
  hosts: localhost
  connection: local
  any_errors_fatal: true
  tasks:
    - ...

This is an Ansible bug. As a workaround, set max_fail_percentage: 0 on each play.
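A minimal sketch of that workaround, applied to the first play from the question (repeat the setting on every play); since a failure rate must exceed max_fail_percentage to be tolerated, 0 means any failed host aborts the run:
- name: create the EC2 instances
  hosts: localhost
  connection: local
  max_fail_percentage: 0  # any failure exceeds 0%, so the run aborts
  tasks:
    - ...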

Related

Launch a play in a playbook based on gathered facts

I have this Ansible playbook with three different plays. What I want to do is launch the last two plays based on a condition. How can I do this directly at the playbook level (not using a when clause in each role)?
- name: Base setup
  hosts: all
  roles:
    - apt
    - pip

# !!!!! SHUT DOWN IF NOT DEBIAN DIST

- name: dbserver setup
  hosts: dbserver
  remote_user: "{{ user }}"
  become: true
  roles:
    - mariadb

- name: webserver and application setup
  hosts: webserver
  remote_user: "{{ user }}"
  become: true
  roles:
    - php
    - users
    - openssh
    - sshkey
    - gitclone
    - symfony
You can end the play early for the hosts you do not wish to continue with, using the meta task in pre_tasks:
- name: dbserver setup
  hosts: dbserver
  remote_user: "{{ user }}"
  become: true
  pre_tasks:
    - meta: end_host
      when: ansible_facts.os_family != 'Debian'
  roles:
    - mariadb
And do the same for the web servers.
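Applied to the webserver play from the question, the same guard would look like this sketch (it assumes facts have been gathered, so ansible_facts.os_family is defined):
- name: webserver and application setup
  hosts: webserver
  remote_user: "{{ user }}"
  become: true
  pre_tasks:
    - meta: end_host  # drop non-Debian hosts from this play
      when: ansible_facts.os_family != 'Debian'
  roles:
    - php
    - users
    - openssh
    - sshkey
    - gitclone
    - symfony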

How to wait for ssh to become available on a host before installing a role?

Is there a way to wait for ssh to become available on a host before installing a role? There's wait_for_connection but I only figured out how to use it with tasks.
This particular playbook spins up servers on a cloud provider before attempting to install roles, but it fails because the ssh service on the new hosts isn't available yet.
How should I fix this?
---
- hosts: localhost
  connection: local
  tasks:
    - name: Deploy vultr servers
      include_tasks: create_vultr_server.yml
      loop: "{{ groups['vultr_servers'] }}"

- hosts: all
  gather_facts: no
  become: true
  tasks:
    - name: wait_for_connection # This one works
      wait_for_connection:
        delay: 5
        timeout: 600
    - name: Gather facts for first time
      setup:
    - name: Install curl
      package:
        name: "curl"
        state: present
  roles: # How to NOT install roles UNLESS the current host is available ?
    - role: apache2
      vars:
        doc_root: /var/www/example
        message: 'Hello world!'
    - common-tools
Ansible play actions start with pre_tasks, then roles, followed by tasks and finally post_tasks. Move your wait_for_connection task into pre_tasks as the first task, and it will block everything else until a connection is available:
- hosts: all
  gather_facts: no
  become: true
  pre_tasks:
    - name: wait_for_connection # This one works
      wait_for_connection:
        delay: 5
        timeout: 600
  roles: ...
  tasks: ...
For more info on the execution order, see the roles section of the playbook documentation (the paragraph just above the notes).
Note: you probably want to move all your current example tasks into pre_tasks too, so that facts are gathered and curl is installed before anything else runs.
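Putting that together with the tasks from the question, the second play might look like this sketch:
- hosts: all
  gather_facts: no
  become: true
  pre_tasks:
    - name: Wait for ssh to become available
      wait_for_connection:
        delay: 5
        timeout: 600
    - name: Gather facts for first time
      setup:
    - name: Install curl
      package:
        name: curl
        state: present
  roles:
    - role: apache2
      vars:
        doc_root: /var/www/example
        message: 'Hello world!'
    - common-tools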

How do I tell Ansible to include localhost on the task?

I have this task:
- name: Install OpenJDK
  become: true
  apt:
    name: openjdk-8-jre-headless
    cache_valid_time: 60
    state: latest
I want to run it on all hosts, including localhost. How can I tell Ansible to include localhost in the hosts for just one play?
You just add localhost to the pattern of targeted hosts in your play. Note that, unless you re-define it in your inventory, localhost is implicit and does not match the all special group.
The global idea
---
- name: This play will target all hosts in inventory
  hosts: all
  tasks:
    - debug:
        msg: I'm a dummy task

- name: This play will target all inventory hosts AND implicit localhost
  hosts: all:localhost
  tasks:
    - debug:
        msg: Yet another dummy task

Ansible playbook skips some plays in a multi-play playbook file

I have an Ansible playbook YAML file which contains 3 plays.
The first play and the third play run on localhost, but the second play runs on a remote machine, as in the example below:
- name: Play1
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - ... task here

- name: Play2
  hosts: remote_host
  tasks:
    - ... task here

- name: Play3
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - ... task here
On the first run, the playbook executes Play1 and Play3 and skips Play2. When I run it again, it executes all of them correctly.
What is wrong here?
The problem is that Play2 targets an EC2 dynamic-inventory group like tag_Name_my_machine, but that instance does not exist when the inventory is loaded, because it is only created by Play1's tasks.
Once Play1 finishes, Ansible moves on to Play2, finds no matching host, and silently skips the play.
The solution is to build the group dynamically, registering the new host from Play1's tasks with add_host.
The playbook may look like this:
- name: Play1
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Launch new ec2 instance
      register: ec2
      ec2: ...

    - name: create dynamic group
      add_host:
        name: "{{ ec2.instances[0].private_ip }}"
        group: host_dynamic_lastec2_created

- name: Play2
  user: ...
  hosts: host_dynamic_lastec2_created
  become: yes
  become_method: sudo
  become_user: root
  tasks:
    - name: do something
      shell: ...

- name: Play3
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - ... task here

Ansible: How can I deploy the roles in parallel to different hosts

I have the following main.yml, and I would like to run the roles one at a time, but each role in parallel across its hosts:
For example, first I would like to run the cluster_prerequisites role on all the hosts in parallel; when it finishes, run the docker role, and so on.
- hosts: masters:private_agent:public_agent
  remote_user: "{{user}}"
  become: True
  serial: 1
  roles:
    - role: cluster_prerequisites

- hosts: bootstrap:masters:private_agent:public_agent
  remote_user: "{{user}}"
  become: True
  serial: 1
  roles:
    - role: docker

- hosts: bootstrap
  remote_user: "{{user}}"
  become: True
  serial: 1
  roles:
    - role: prepare_bootstrap

- hosts: masters
  remote_user: "{{user}}"
  become: True
  serial: 1
  roles:
    - role: run_masters

- hosts: private_agent
  remote_user: "{{user}}"
  become: True
  serial: 1
  roles:
    - role: run_private_agents

- hosts: public_agent
  remote_user: "{{user}}"
  become: True
  serial: 1
  roles:
    - role: run_public_agents
From the Rolling Update Batch Size chapter:
By default, Ansible will try to manage all of the machines referenced in a play in parallel. For a rolling updates use case, you can define how many hosts Ansible should manage at a single time by using the serial keyword...
So, if you remove serial: 1 from your plays, Ansible will run the tasks on all hosts in each play in parallel.
By setting serial: 1 you tell Ansible to take hosts one by one, moving to the next host only when all tasks have completed on the previous one.
Serial runs are usually used on a pool of backend servers to update them in batches and avoid service downtime, because the remaining servers can still serve clients' requests. A sketch of the change is below.
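For example, the first play from the question with serial removed lets the cluster_prerequisites role run on all matching hosts at once:
- hosts: masters:private_agent:public_agent
  remote_user: "{{user}}"
  become: True
  # no serial keyword: all hosts in the play are managed in parallel
  roles:
    - role: cluster_prerequisites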
