Ansible playbook skips a play in a multi-play playbook file - ansible

I have an Ansible playbook YAML file which contains 3 plays.
The first and third plays run on localhost, but the second play runs on a remote machine, as in the example below:
- name: Play1
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - ... task here

- name: Play2
  hosts: remote_host
  tasks:
    - ... task here

- name: Play3
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - ... task here
I found that, on the first run, Ansible executes Play1 and Play3 but skips Play2. When I run the playbook again, it executes all of them correctly.
What is wrong here?

The problem is that, in Play2, I use an EC2 dynamic inventory group like tag_Name_my_machine, but that instance does not exist yet when the playbook starts, because it is only created by a task in Play1. The inventory is built before Play1 runs, so once Play1 finishes and Play2 starts, no host matches the pattern and Ansible silently skips the play.
The solution is to create an in-memory inventory group and register the new host manually in Play1's tasks.
The playbook may look like this:
- name: Play1
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Launch new ec2 instance
      ec2: ...
      register: ec2

    - name: create dynamic group
      add_host:
        name: "{{ ec2.instances[0].private_ip }}"
        group: host_dynamic_lastec2_created

- name: Play2
  hosts: host_dynamic_lastec2_created
  user: ...
  become: yes
  become_method: sudo
  become_user: root
  tasks:
    - name: do something
      shell: ...

- name: Play3
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - ... task here
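One caveat worth adding (my note, not part of the original answer): add_host only puts the address into the in-memory inventory; it does not guarantee the instance is reachable yet. A short wait task at the end of Play1, reusing the registered ec2 variable from above, can keep Play2 from failing against a machine whose SSH daemon is still booting:

    - name: Wait for SSH on the new instance before Play2 connects
      wait_for:
        host: "{{ ec2.instances[0].private_ip }}"
        port: 22
        delay: 10
        timeout: 320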

Related

Launch a play in a playbook based on gathered facts

I have this Ansible playbook with three different plays. What I want to do is to launch the last two plays based on a condition. How can I do this directly at the playbook level (not using a when clause in each role)?
- name: Base setup
  hosts: all
  roles:
    - apt
    - pip

# !!!!! SHUT DOWN IF NOT DEBIAN DIST

- name: dbserver setup
  hosts: dbserver
  remote_user: "{{ user }}"
  become: true
  roles:
    - mariadb

- name: webserver and application setup
  hosts: webserver
  remote_user: "{{ user }}"
  become: true
  roles:
    - php
    - users
    - openssh
    - sshkey
    - gitclone
    - symfony
You could just end the play for the hosts you do not wish to continue with, using the meta task in pre_tasks:
- name: dbserver setup
  hosts: dbserver
  remote_user: "{{ user }}"
  become: true
  pre_tasks:
    - meta: end_host
      when: ansible_facts.os_family != 'Debian'
  roles:
    - mariadb
And do the same for the web servers.
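For example (my sketch, mirroring the pattern above; note that meta: end_host requires Ansible 2.8 or later, and facts must be gathered for ansible_facts.os_family to exist):

- name: webserver and application setup
  hosts: webserver
  remote_user: "{{ user }}"
  become: true
  pre_tasks:
    - meta: end_host
      when: ansible_facts.os_family != 'Debian'
  roles:
    - php
    - users
    - openssh
    - sshkey
    - gitclone
    - symfony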

How to run some tasks locally in Ansible

I have a playbook with some roles/tasks to be executed on the remote host. In one scenario I want some tasks to execute locally, like downloading artifacts from SVN/Nexus to the local server.
Here is my main playbook, where I pass target_env from the command line and dynamically load the variables using the group_vars directory:
---
- name: Starting Deployment of Application to tomcat nodes
  hosts: '{{ target_env }}'
  become: yes
  become_user: tomcat
  become_method: sudo
  gather_facts: yes
  roles:
    - role: repodownload
      tags:
        - repodownload
    - role: stoptomcat
      tags:
        - stoptomcat
The first role, repodownload, actually downloads the artifacts from SVN/Nexus to the local server/controller. Here is the main.yml of this role:
- name: Downloading MyVM Artifacts on the local server
  delegate_to: localhost
  get_url:
    url: http://nexus.com/myrelease.war
    dest: /tmp/releasename/

- name: Checkout latest application configuration templates from SVN repo to local server
  delegate_to: localhost
  subversion:
    repo: svn://12.57.98.90/release-management/config
    dest: ../templates
    in_place: yes
But it's not working. Could it be because, in my main YAML file, I become the user that I want to use to execute the commands on the remote host?
Let me know if someone can help. It will be appreciated.
ERROR -
"changed": false,
"module_stderr": "sudo: a password is required\n",
"module_stdout": "",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
Given the scenario, you can do it multiple ways. One is to add another play for the repodownload role to your main playbook that runs only on localhost, then remove delegate_to: localhost from the role tasks and move the variables accordingly.
---
- name: Download repo
  hosts: localhost
  gather_facts: yes
  roles:
    - role: repodownload
      tags:
        - repodownload

- name: Starting Deployment of Application to tomcat nodes
  hosts: '{{ target_env }}'
  become: yes
  become_user: tomcat
  become_method: sudo
  gather_facts: yes
  roles:
    - role: stoptomcat
      tags:
        - stoptomcat
Another way is to remove become from the play level and add it to the stoptomcat role. Something like the below should work:
---
- name: Starting Deployment of Application to tomcat nodes
  hosts: '{{ target_env }}'
  gather_facts: yes
  roles:
    - role: repodownload
      tags:
        - repodownload
    - role: stoptomcat
      become: yes
      become_user: tomcat
      become_method: sudo
      tags:
        - stoptomcat
I haven't tested the code, so apologies for any formatting issues.
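A third option, not from the original answer but consistent with the error shown: keep delegate_to: localhost in the role and disable privilege escalation on the delegated tasks, since the "sudo: a password is required" failure comes from the play-level become being applied on the controller. A minimal sketch:

- name: Downloading MyVM Artifacts on the local server
  delegate_to: localhost
  become: false        # do not sudo on the controller
  get_url:
    url: http://nexus.com/myrelease.war
    dest: /tmp/releasename/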

Pass a host as a variable from one role to the next role

I have a long playbook with a number of roles defined. Now I have a requirement where, for one role, I need to pass the host as a variable that is defined in an earlier role.
e.g. playbook:
---
- name: task1
  hosts: app1
  gather_facts: no
  any_errors_fatal: true
  roles:
    - role-1

- name: task2
  hosts: "{{ host }}"
  any_errors_fatal: true
  gather_facts: no
  roles:
    - role-2
My role-1
---
- name: setting the var
  set_fact:
    host: "app2"

- debug:
    var: host
My role-2
---
- debug:
    var: host

- name: do something
  file:
    path: /home/ec2-user/dir1
    state: directory
    mode: '0755'
However, when I try to run my playbook, role-2 gets skipped because no hosts matched. Can someone point me to how to get this setup working?
The thing you want is add_host:; then set the newly created or assigned group as the hosts: of the 2nd play:
- hosts: app1
  tasks:
    - add_host:
        name: app2
        groups:
          - my-group

- hosts: my-group
  tasks:
    - debug:
        var: ansible_host
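If the target name is computed inside role-1 via set_fact, as in the question, the same pattern still applies. A sketch under that assumption, reusing the question's host variable:

- name: task1
  hosts: app1
  gather_facts: no
  roles:
    - role-1          # sets host: "app2" via set_fact
  post_tasks:
    - add_host:       # publish the computed host to an in-memory group
        name: "{{ host }}"
        groups:
          - my-group

- name: task2
  hosts: my-group
  gather_facts: no
  roles:
    - role-2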

Running a play on host A to get a value and passing the value to a task that executes locally

I execute a task to get the hostname of a Windows machine and store it in a variable called Hstname. Then I have to use that variable to create a folder in GCP dynamically. My GCP module works only on localhost.
Ansible version: 2.7
Task 1: target the Windows host to get the hostname.
Task 2: the "gc_storage" module uses localhost for creating an object inside a storage bucket.
- hosts: win
  gather_facts: no
  serial: 1
  connection: winrm
  become_method: runas
  become_user: admin
  vars:
    ansible_become_password: '}C7I]8zH#5cPDE8'
    gcp_object: "gcp-shareddb-bucket/emeraldpos/stage/"
  tasks:
    - name: hostname
      win_command: hostname
      register: Hstname
    - set_fact:
        Htname: "{{ Hstname.stdout | trim }}"
    - debug:
        msg: "{{ Htname }}"

- hosts: localhost
  gather_facts: no
  tasks:
    - name: Create folder under EMERALDPOS
      gc_storage:
        bucket: gcp-shareddb-bucket
        object: "{{ gcp_object }}{{ Htname }}"
        mode: create
        region: us-west2
        project: sharedBucket-228905
        gs_access_key: "NAKJSAIUS231219237LOKM"
        gs_secret_key: "XABSAHSKASAJS:OAKSAJNDADIAJAJD:N<MNMN"
Ansible should create gcp-shareddb-bucket/emeraldpos/stage/HOSTNAME as a GCP storage object.
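No answer is recorded here, but the snippet has a visible cross-play problem: Htname and gcp_object are set in the win play's scope, so the localhost play cannot reference them as bare variables. A minimal sketch of the second play reading the fact through hostvars; the group name win and the first-host lookup are my assumptions:

- hosts: localhost
  gather_facts: no
  vars:
    gcp_object: "gcp-shareddb-bucket/emeraldpos/stage/"
  tasks:
    - name: Create folder under EMERALDPOS
      gc_storage:
        bucket: gcp-shareddb-bucket
        # pull the fact set on the first Windows host in the previous play
        object: "{{ gcp_object }}{{ hostvars[groups['win'][0]].Htname }}"
        mode: create
        region: us-west2
        project: sharedBucket-228905
        gs_access_key: "NAKJSAIUS231219237LOKM"
        gs_secret_key: "XABSAHSKASAJS:OAKSAJNDADIAJAJD:N<MNMN"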

How to stop the playbook if one play fails

I have the following playbook with 3 plays. When one of the plays fails, the next plays still execute. I think it is because I run those plays against different host targets.
I would like to avoid this and have the playbook stop when one play fails. Is that possible?
---
- name: create the EC2 instances
  hosts: localhost
  any_errors_fatal: yes
  connection: local
  tasks:
    - ...

- name: configure instances
  hosts: appserver
  any_errors_fatal: yes
  gather_facts: true
  tasks:
    - ...

- name: Add to load balancer
  hosts: localhost
  any_errors_fatal: yes
  vars:
    component: travelmatrix
  tasks:
    - ...
You can use any_errors_fatal, which stops the play if there are any errors:
- name: create the EC2 instances
  hosts: localhost
  connection: local
  any_errors_fatal: true
  tasks:
    - ...
It is an Ansible bug. To work around it, use max_fail_percentage: 0 on each play.
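A minimal sketch of that workaround (my illustration, applied to the first play from the question; repeat the setting on every play whose failure should abort the run):

- name: create the EC2 instances
  hosts: localhost
  connection: local
  max_fail_percentage: 0
  tasks:
    - ...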
