Run ansible playbook in role with specific playbook - ansible

I need to run one playbook from within my main playbook with another hosts file, e.g.:
- hosts: starter
  become: yes
  tasks:
    - name: start playbook2
      shell: |
        ansible-playbook -b -i host-cluster.txt install-cluster.yaml
      args:
        executable: /bin/bash
This works and runs my playbook, but I know this structure is not correct! Also, when playbook2 starts I need to see playbook2's results in the terminal, but I only see
TASK [install-cluster : Run task1 ] *******************************
I want to see the result of task1 in the terminal.
Update:
I need to run one role with a specific file (install-cluster.yaml) and a specific inventory hosts file (host-cluster.txt).
Something like this:
- name: start kuber cluster
  include_role:
    name: kuber
    tasks_from: cluster.yml
  hosts: kuber-hosts.txt

You can load several inventories at once and then target different group(s)/host(s) in different plays. Just as an idea, you could have the following pseudo playbook your_playbook.yml:
- name: Play1 does some stuff on starter hosts
  hosts: starter
  become: true
  tasks:
    - name: do stuff on starter servers
      debug:
        msg: "I did a thing"

- name: Play2 starts cluster on kuber hosts
  hosts: kuber
  tasks:
    - name: start kuber cluster
      include_role:
        name: kuber
        tasks_from: cluster.yml

- name: Play3 does more stuff on starter hosts
  hosts: starter
  tasks:
    - name: do more stuff on starter servers
      debug:
        msg: "I did more things"
You can then use this playbook with two different inventories at once, if this is how you architected it:
ansible-playbook -i inventories/starter -i inventories/kuber your_playbook.yml
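For reference, a minimal sketch of what those two inventory sources could contain (the host names here are assumptions, not taken from the question):

# inventories/starter
[starter]
starter-node-1
starter-node-2

# inventories/kuber
[kuber]
kuber-master-1
kuber-worker-1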

Related

Ansible condition to run specific tasks in a playbook only if host belongs to specific group in inventory file

I created a playbook with 6 different tasks and want to run tasks 1, 2, 3 only when the host is a child of the was855node group (e.g. b, c), and tasks 4, 5 only when the host is a child of the was855dmgr group (e.g. a).
My inventory file:
[local-linux:children]
was855
[was855:children]
was855dmgr
was855node
[was855dmgr:children]
hppidc-pps-dmgr
hppndc-pps-dmgr
hppb-pps-dmgr
**hreg1-mnp-dmgr**
[was855node:children]
wppdev3-pps-node
hint1-pps-node
**hreg1-mnp-node**
[hreg1-mnp-dmgr]
a
[hreg1-mnp-node]
b
c
Is this the correct way to do it?
- name: Run create-app-logs.py on the Nodes ***(I want to run it on a ONLY)***
  shell: "{{ was_profile_path }}/bin/wsadmin.sh -lang jython -f {{ build_scripts_dir }}/create-app-logs.py"
  register: r
  when: "'was855dmgr' in group_names"
  tags: create_app_logs

- name: Create shared libraries directory ***(I want to run it on b and c ONLY)***
  file: path={{ was_shared_lib_location }} state=directory owner=wasadm group=apps mode=0755
  when: "'was855node' in group_names"
You would typically split this out into separate plays within the same playbook, and use the hosts directive to control where each play is executed.
For example:
- name: Play 1
  hosts: was855dmgr
  tasks:
    - task1
    - task2
    - task3

- name: Play 2
  hosts: was855node
  tasks:
    - task4
    - task5

- name: Play 3
  hosts: all
  tasks:
    - task6
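Applied to the tasks from the question, that split could look roughly like this (a sketch; the module arguments are copied from the question, the play names are assumed). With hosts: doing the targeting, the group_names conditions are no longer needed:

- name: Tasks for the dmgr host
  hosts: was855dmgr
  tasks:
    - name: Run create-app-logs.py on the dmgr
      shell: "{{ was_profile_path }}/bin/wsadmin.sh -lang jython -f {{ build_scripts_dir }}/create-app-logs.py"
      register: r
      tags: create_app_logs

- name: Tasks for the node hosts
  hosts: was855node
  tasks:
    - name: Create shared libraries directory
      file: path={{ was_shared_lib_location }} state=directory owner=wasadm group=apps mode=0755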

How to Execute Role Inside the Ansible Blocks

I have written an ansible playbook which does a deployment on the remote machines:
- name: Deployment part of the script
  vars:
    hostName:
    build_num:
  hosts: "{{ hostName }}"
  become: true
  serial: 1
  tasks: # this does the deployment
After this I want to execute the util, which is on the localhost from where this playbook is executed.
I have written a role which does this for me if I execute it separately as a playbook:
- name: Roles Demo
  hosts: 127.0.0.1
  connection: local
  vars:
    var1: "sometextvalue"
    var2: "sometextvalue"
    var3: "someurl"
  roles:
    - demorole # role which I created
Now I want to integrate the role into my main playbook mentioned at the top, but I am getting
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
although it is the same snippet that works fine when run individually.
I also wanted to execute this using Ansible blocks, i.e. when a certain condition is matched, execute a certain role, but for that I get the same error. To summarize, what I want to achieve using blocks is shown below:
- name: Deployment part of the script
  vars:
    hostName:
    build_num:
  hosts: "{{ hostName }}"
  become: true
  serial: 1
  tasks: # this does the complete deployment

- name: Task for Doing some check
  hosts: 127.0.0.1
  connection: local
  vars:
    var1: "dakdkadadakdhkahdkahkdh"
    var2: "jdjaldjlaj"
    var3: "djasjdlajdlajdljadljaldjlaj"
  block:
    - name: Doing Check for some1
      roles:
        - role1
      when: x == "somevalue1"
  - block:
      - name: Doing check for some2
        roles:
          - role2
        when: x == "somevalue2"
.
.
.
assuming the vars values are the same.
So I am not sure whether this can be achieved.
Using a block outside of the tasks section is not valid.
You can however execute roles from within the tasks section, which will allow you to use blocks and when conditionals however you choose.
Example:
- name: Task for Doing some check
  hosts: 127.0.0.1
  connection: local
  vars:
    var1: "dakdkadadakdhkahdkahkdh"
    var2: "jdjaldjlaj"
    var3: "djasjdlajdlajdljadljaldjlaj"
  tasks:
    - name: Doing Check for some1
      import_role:
        name: role1
      when: x == "somevalue1"
You will need to decide whether to use import_role or include_role. Take a look at https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse.html#dynamic-vs-static for an explanation of the differences.
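For the block-based variant the question asked about, a minimal sketch could look like this (the role names and the x variable are assumptions carried over from the question, not a tested configuration):

- name: Task for doing some check
  hosts: 127.0.0.1
  connection: local
  tasks:
    # The whole block is skipped unless its condition matches.
    - name: Checks that only run for somevalue1
      when: x == "somevalue1"
      block:
        - import_role:
            name: role1

    - name: Checks that only run for somevalue2
      when: x == "somevalue2"
      block:
        - import_role:
            name: role2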

Stopping ansible from a playbook (site.yml)

I have re-written the question to be more specific instead of using a generic example of what I am trying to achieve, as per @Zeitounator's suggestion.
I use ansible to spin up VMs in VMware by adding a new entry in the hosts.ini file and running ansible-playbook -i inventory/dev/hosts.ini --limit SomeGroup playbooks/site.yml
The vmware role (called vmware) will:
* check to see if the VM already exists.
* If it does, then obviously it does not create the VM.
* If it does not exist, then it will create the VM from a template.
To destroy a VM, I run this: ansible-playbook -i inventory/dev/hosts.ini --limit SomeGroup playbooks/site.yml -e 'vmware_destroy=true'
That works as intended. Now for my issue.
When this variable is set (vmware_destroy=true), it will destroy the VM successfully, BUT ansible will attempt to carry on with the rest of the playbook on the host that has just been destroyed. Obviously it fails. The playbook does actually stop due to the failure. But not gracefully.
Here is an example of the flow of logic:
$ cat playbooks/site.yml
---
- hosts: all
  gather_facts: no
  roles:
    - { role: vmware, tags: vmware }

- hosts: all
  gather_facts: yes
  roles:
    - { role: bootstrap, tags: bootstrap }
    - { role: common, tags: common }

- hosts: AppServers
  gather_facts: no
  roles:
    - { role: application }
# and so on.
$ cat playbooks/roles/vmware/main.yml
---
# Checks to see if the VM exists already.
# A variable `found_vm` is registered in this task.
- import_tasks: find.yml
# Only import this task when all of the `when` conditions are met.
- import_tasks: destroy.yml
  when:
    - vmware_destroy is defined
    - vmware_destroy # Meaning 'True'
    - found_vm

# If the above is true, it will not import this task.
- import_tasks: create.yml
  when:
    - found_vm.failed
    - vmware_destroy is not defined
So, the point is, when I specify -e 'vmware_destroy=true', ansible will attempt to run the rest of the playbook and fail.
I don't want ansible to fail. I want it to stop gracefully after completing the vmware role, based on the -e 'vmware_destroy=true' flag provided on the command line.
I am aware I can use a different playbook for this, something like:
ansible-playbook -i inventory/dev/hosts.ini --limit SomeGroup playbooks/VMWARE_DESTROY.yml. But I would rather use a variable as opposed to a separate playbook. If there is a strong argument to split out the playbook in this way, please provide references.
Please let me know if more clarification is needed.
Thank you in advance.
A playbook is the top Ansible abstraction layer (playbook -> role -> task -> ...). There are 2 more layers in AWX (workflow -> job template -> playbook -> ...) to control the playbooks. To follow that architecture, either AWX or any other tool that interfaces with Ansible (ansible-runner, or scripts, for example) should be used to control the playbooks.
Controlling playbooks from inside Ansible is rather awkward. Create 2 playbooks, vmware-create.yml and vmware-destroy.yml:
$ cat vmware-create.yml
- hosts: all
  gather_facts: yes # include cached variables
  tasks:
    - block:
        - debug:
            msg: Don't create VM this time. End of play.
        - meta: end_play
      when: not hostvars['localhost'].vmware_create
    - debug:
        msg: Create VM.
$ cat vmware-destroy.yml
- hosts: all
  gather_facts: yes # include cached variables
  tasks:
    - block:
        - debug:
            msg: Don't destroy VM this time. End of play.
        - meta: end_play
      when: not hostvars['localhost'].vmware_destroy
    - debug:
        msg: Destroy VM.
and import them into the playbook vmware-control.yml. See below:
$ cat vmware-control.yml
- hosts: localhost
  vars:
    vmware_create: true
    vmware_destroy: false
  tasks:
    - set_fact:
        vmware_create: "{{ vmware_create }}" # cache variable
    - set_fact:
        vmware_destroy: "{{ vmware_destroy }}" # cache variable

- import_playbook: vmware-create.yml
- import_playbook: vmware-destroy.yml
Control the flow with the variables vmware_create and vmware_destroy. Run vmware-control.yml on localhost and declare hosts: all inside vmware-create.yml and vmware-destroy.yml.
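As a usage sketch (the exact invocation is an assumption, not part of the original answer), the defaults can then be overridden from the command line. Passing the extra vars as JSON keeps them booleans rather than strings, so the not ... conditions behave as expected:

ansible-playbook vmware-control.yml -e '{"vmware_create": false, "vmware_destroy": true}'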

ansible generate *.retry file error when there are multiple plays in playbook

For example, deploy.yml is an ansible playbook. There are two plays in deploy.yml, play1 and play2.
$ cat deploy.yml
- hosts: nodes
  remote_user: cloud
  become: yes
  tasks:
    - name: play1
      copy: src=test1 dest=/root

- hosts: nodes
  remote_user: cloud
  become: yes
  tasks:
    - name: play2
      copy: src=test2 dest=/root
$ cat hosts
[nodes]
192.168.1.12
192.168.1.13
Running
ansible-playbook -i hosts deploy.yml
When play1 fails on 192.168.1.12 but succeeds on 192.168.1.13, the deploy.retry only lists 192.168.1.12 but not 192.168.1.13.
$ cat deploy.retry
192.168.1.12
Then I run
ansible-playbook -i hosts deploy.yml --limit @deploy.retry
and I get a wrong result: play2 has not been run on 192.168.1.13! Does anyone know how to solve this problem?
The problem is in the playbook file; in fact, you have two independent plays in one file. I tested your setup with ansible 2.2.1.0 and the second play ran correctly for the host without an error in play1, but there can be differences in configuration.
The correct playbook format for the expected behaviour is:
- hosts: nodes
  remote_user: cloud
  become: yes
  tasks:
    - name: play1
      copy: src=test1 dest=/root
    - name: play2
      copy: src=test2 dest=/root
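If the .retry files themselves are more of a hindrance than a help, they can also be disabled globally; this is a general Ansible setting (for versions that still generate retry files), not something specific to this answer:

# ansible.cfg
[defaults]
retry_files_enabled = False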

can ansible run multiple playbooks same time?

- name: "API"
hosts: api
vars:
platform: "{{ application.api }}"
vars_files:
- vars/application-vars.yml
tasks:
- include: tasks/application-install.yml
- name: "JOBS"
hosts: jobs 
vars:
platform: "{{ application.jobs }}"
vars_files:
 - vars/application-vars.yml
tasks:
  - include: tasks/application-install.yml
With a playbook like the one described above, can I execute these different tasks on different hosts at the same time, in a parallel way?
Not sure what you actually want, but I'd combine it into a single play:
- hosts: api:jobs
  tasks:
    - include: tasks/application-install.yml
And add group vars to inventory:
[api:vars]
platform="{{ application.api }}"
[jobs:vars]
platform="{{ application.jobs }}"
This way you can run your playbook on all hosts at once, and you can also use the --limit option to target only the api or jobs group.
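For example (the playbook file name here is just a placeholder), limiting a run to the api group only:

ansible-playbook -i hosts your_playbook.yml --limit api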
You can run multiple playbooks using the "ansible-playbook [OPTIONS] *.yml" command. This will execute all the playbooks NOT IN PARALLEL, but serially: first one playbook, and after it finishes, the next one. This command can be helpful if you have many playbooks.
I don't know of a way to execute multiple playbooks in parallel.
