I have 3 tasks in a playbook. My requirement is to complete the first task, and then the second and third tasks should run in parallel. Since, by default, these three tasks run one after the other, is there a way to run the second and third tasks in parallel once the first one is done?
- hosts: conv4
  remote_user: username
  tasks:
    - name: Start Services
      script: app-stop.sh
      register: console

- hosts: patchapp
  remote_user: username
  become_user: username
  become_method: su
  tasks:
    - name: Stop APP Services
      script: stopapp.sh
      register: console
    - debug: var=console.stdout_lines

- hosts: patchdb
  remote_user: username
  become_user: username
  become_method: su
  tasks:
    - name: Stop DB Services
      script: stopdb.sh
      register: console
    - debug: var=console.stdout_lines
I need to run the Start Services task first and then, once it is complete, run the Stop APP Services and Stop DB Services tasks in parallel.
1. Import playbook works fine.
play.yml
- import_playbook: play1.yml
- import_playbook: play2.yml
play1.yml
- hosts: localhost
  tasks:
    - debug:
        msg: 'play1: {{ ansible_host }}'
play2.yml
- hosts:
    - vm2
    - vm3
  tasks:
    - debug:
        msg: 'play2: {{ ansible_host }}'
2. Loop over include_tasks and delegate_to is a no-go.
An option would be to loop over include_tasks and use delegate_to inside the included task (see below), but there is an unsolved bug with delegate_to and remote_user on Ansible 2.0.
3. Workflows in Ansible Tower require a license.
Workflows are only available to those with Enterprise-level licenses.
play.yml
- name: 'Test loop delegate_to'
  hosts: localhost
  gather_facts: no
  vars:
    my_servers:
      - vm2
      - vm3
  tasks:
    - name: "task 1"
      debug:
        msg: 'Task1: Running at {{ ansible_host }}'
    - include_tasks: task2.yml
      loop: "{{ my_servers }}"
task2.yml
- debug:
    msg: 'Task2: Running at {{ ansible_host }}'
  delegate_to: "{{ item }}"
ansible-playbook play.yml
PLAY [Test loop delegate_to]
TASK [task 1]
ok: [localhost] =>
msg: 'Task1: Running at 127.0.0.1'
TASK [include_tasks]
included: ansible-examples/examples/example-014/task2.yml for localhost
included: ansible-examples/examples/example-014/task2.yml for localhost
TASK [debug]
ok: [localhost -> vm2] =>
msg: 'Task2: Running at vm2'
TASK [debug]
ok: [localhost -> vm3] =>
msg: 'Task2: Running at vm3'
PLAY RECAP
localhost : ok=5 changed=0 unreachable=0 failed=0
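Coming back to the original requirement (start services first, then run the two stop scripts at the same time), here is a minimal sketch of one more way to do it: merge the two stop plays into a single play over both groups and use the free strategy so the hosts do not wait for each other. The host groups and scripts are taken from the question; conditioning the script tasks on group membership and using strategy: free are my own assumptions, not part of the answers above.
- hosts: conv4
  remote_user: username
  tasks:
    - name: Start Services
      script: app-stop.sh

# One play over both groups; with strategy: free each host works through
# its task list independently, so the app and db stop scripts can overlap.
- hosts: patchapp:patchdb
  strategy: free
  remote_user: username
  become_user: username
  become_method: su
  tasks:
    - name: Stop APP Services
      script: stopapp.sh
      when: inventory_hostname in groups['patchapp']
    - name: Stop DB Services
      script: stopdb.sh
      when: inventory_hostname in groups['patchdb']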
I have distilled a playbook that has three plays. The goal is to collect the database password from a prompt in one play and then use the same password in the other two plays.
---
- name: database password
  hosts:
    - webservers
    - dbservers
  vars_prompt:
    - name: "db_password"
      prompt: "Enter Database Password for database user root"
      default: "root"

- hosts: dbservers
  tasks:
    - command: echo {{ db_password | mandatory }}

- hosts: webservers
  tasks:
    - command: echo {{ db_password | mandatory }}
It fails as shown below.
Enter Database Password for database user root [root]:
PLAY [database password] ******************************************************
GATHERING FACTS ***************************************************************
ok: [vc-dev-1]
PLAY [dbservers] **************************************************************
GATHERING FACTS ***************************************************************
ok: [vc-dev-1]
TASK: [command echo {{db_password | mandatory}}] ***************************
fatal: [vc-dev-1] => One or more undefined variables: 'db_password' is undefined
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit #.../playbook2.retry
vc-dev-1 : ok=3 changed=0 unreachable=1 failed=0
I have found the following workaround, using set_fact to copy the variable entered by the user into a variable with playbook scope. It seems that vars_prompt variables are not like facts and other variables: their scope is restricted to the play that prompts for them, not the entire playbook. I am not sure if this is a feature or a bug.
- name: database password
  hosts:
    - webservers
    - dbservers
  vars_prompt:
    - name: "db_password"
      prompt: "Enter Database Password for database user root"
      default: "root"
  tasks:
    - set_fact:
        db_root_password: "{{ db_password }}"

- hosts: dbservers
  tasks:
    - command: echo {{ db_root_password | mandatory }}

- hosts: webservers
  tasks:
    - command: echo {{ db_root_password | mandatory }}
Improvising on gae123's answer: if your hosts are added dynamically, it will not be possible to set the fact on an existing group of servers; in that case, localhost can be used to set and read the fact.
---
- name: database password
  hosts: localhost
  vars_prompt:
    - name: "db_password"
      prompt: "Enter Database Password for database user root"
      default: "root"
  tasks:
    - set_fact:
        db_root_password: "{{ db_password }}"

- hosts: dbservers
  vars:
    db_root_password: "{{ hostvars['localhost']['db_root_password'] }}"
  tasks:
    - command: echo {{ db_root_password | mandatory }}

- hosts: webservers
  vars:
    db_root_password: "{{ hostvars['localhost']['db_root_password'] }}"
  tasks:
    - command: echo {{ db_root_password | mandatory }}
I ended up using user3037143's answer for my localhost sudo password problem:
---
- hosts: localhost
  tags:
    - always
  vars_prompt:
    - name: sudo_password
      prompt: "Sudo password for localhost ONLY"
  tasks:
    - set_fact:
        ansible_become_password: "{{ sudo_password }}"
      no_log: true
Now it's shared across every playbook I include.
I'm on Ansible > 2.
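For illustration, a minimal sketch of how a later play or included playbook on localhost could then rely on that fact (the package task is my own placeholder, not from the answer):
- hosts: localhost
  become: yes    # picks up ansible_become_password set by set_fact earlier in the run
  tasks:
    - name: Example privileged task (placeholder)
      package:
        name: htop
        state: present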
I am tasked with creating a playbook in which I perform the following operations:
- Fetch information from a YAML file (the file contains details on VLANs)
- Cycle through the YAML objects and verify which subnet contains the IP, then return the object
- The object also contains a definition of the inventory_hostname where the Ansible playbook should run
At the moment, I have the following (snippet):
Playbook:
- name: "Playbook"
  gather_facts: false
  hosts: "localhost"
  tasks:
    - name: "add host"
      add_host:
        name: "{{ vlan_target }}"
    - name: "debug"
      debug:
        msg: "{{ inventory_hostname }}"
The defaults file defines the target host as an empty string "", which is then evaluated within another role task, like so (snippet):
Role:
- set_fact:
    vlan_object: "{{ vlan | trim | from_json | first }}"

- name: "set facts based on IP address"
  set_fact:
    vlan_name: "{{ vlan_object.name }}"
    vlan_target: "{{ vlan_object.target }}"
  delegate_to: localhost
What I am trying to achieve is to change the hosts: value so that I can use the right target, since the IP/VLAN should reside on a specific device.
I have tried putting the aforementioned task above the add_host, and even putting it in the same playbook, like so:
- name: "Set_variables"
hosts: "localhost"
tasks:
- name: "Set target host"
import_role:
name: test
tasks_from: target_selector
- name: "Playbook"
gather_facts: false
hosts: "localhost"
Adding a debug task to the above playbook shows the right target being set, but it is not reused below, which makes me think the variable is not set globally but only within that play.
I am looking for a way to set the target, based on a variable that I am passing to the playbook.
Does anyone have any experience with this?
Global facts are not a thing; at best you can assign a fact to all hosts in the play, but since you are looking to use said fact to add a host, this won't be a solution for your use case.
You can access the facts of other hosts via the hostvars special variable, though. It is a dictionary whose keys are the names of the hosts.
The usage of the role is not relevant to the issue at hand, so let's put it aside in the demonstration below.
Given the playbook:
- hosts: localhost
  gather_facts: no
  tasks:
    - set_fact:
        ## fake object, since we don't have the structure of your JSON
        vlan_object:
          name: foo
          target: bar
    - set_fact:
        vlan_name: "{{ vlan_object.name }}"
        vlan_target: "{{ vlan_object.target }}"
      run_once: true
    - add_host:
        name: "{{ vlan_target }}"

- hosts: "{{ hostvars.localhost.vlan_target }}"
  gather_facts: no
  tasks:
    - debug:
        var: ansible_play_hosts
This would yield
PLAY [localhost] *************************************************************
TASK [set_fact] **************************************************************
ok: [localhost]
TASK [set_fact] **************************************************************
ok: [localhost]
TASK [add_host] **************************************************************
changed: [localhost]
PLAY [bar] *******************************************************************
TASK [debug] *****************************************************************
ok: [bar] =>
ansible_play_hosts:
- bar
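A variation on the same idea (my own sketch, not from the answer above): give the added host a dynamic group and target that group in the second play, which avoids reaching into hostvars.localhost:
- hosts: localhost
  gather_facts: no
  tasks:
    # assumes vlan_target has been set as in the playbook above
    - add_host:
        name: "{{ vlan_target }}"
        groups: vlan_targets

- hosts: vlan_targets    # plain group name, no hostvars lookup needed
  gather_facts: no
  tasks:
    - debug:
        var: inventory_hostname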
I have a little problem I can't solve. I am writing an update/reboot playbook for my Linux servers, and I want to make sure that a task is executed only if another host is part of the same playbook run.
For example: stop the app on the app server when the database server is going to be rebooted.
- hosts: project-live-app01
  tasks:
    - name: stop app before rebooting db servers
      systemd:
        name: app
        state: stopped
      when: <<< project-live-db01 is in this ansible run >>>

- hosts: dbservers
  serial: 1
  tasks:
    - name: Unconditionally reboot the machine with all defaults
      reboot:
        post_reboot_delay: 20
        msg: "Reboot initiated by Ansible"

- hosts: important_servers:!dbservers
  serial: 1
  tasks:
    - name: Unconditionally reboot the machine with all defaults
      reboot:
        post_reboot_delay: 20
        msg: "Reboot initiated by Ansible"
I want to use the same playbook to reboot hosts, and if I --limit the execution to only some hosts, and in particular not the db server, then I don't want the app to be stopped. I am trying to create a generic playbook for all my projects which only executes tasks if certain servers are affected by the playbook run.
Is there any way to do that?
Thank you and have a great day!
Cheers, Ringo
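As an aside, a minimal sketch of one generic way to test whether a particular host is part of the current run (my own suggestion, using group_by; the answer below takes a data-driven route instead):
# The first play runs on everything --limit leaves in scope and tags those
# hosts with a dynamic group; later plays can then test group membership.
- hosts: all
  gather_facts: false
  tasks:
    - group_by:
        key: in_this_run

- hosts: project-live-app01
  tasks:
    - name: stop app before rebooting db servers
      debug:
        msg: "Would stop the app now"
      when: "'project-live-db01' in (groups['in_this_run'] | default([]))"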
It would be possible to create a dictionary with the structure of the project, e.g. in group_vars
shell> cat group_vars/all
project_live:
  app01:
    dbs: [db01, db09]
  app02:
    dbs: [db02, db09]
  app03:
    dbs: [db03, db09]
Put all groups of the DB servers you want to use into the inventory, e.g.
shell> cat hosts
[dbserversA]
db01
db02
[dbserversB]
db02
db03
[dbserversC]
db09
Then the playbook below demonstrates the scenario
shell> cat playbook.yml
---
- name: Stop app before rebooting db servers
  hosts: localhost
  gather_facts: false
  tasks:
    - debug:
        msg: "Stop app on {{ item.key }}"
      loop: "{{ project_live|dict2items }}"
      when: item.value.dbs|intersect(groups[dbservers])|length > 0

- name: Reboot db servers
  hosts: "{{ dbservers }}"
  gather_facts: false
  tasks:
    - debug:
        msg: "Reboot {{ inventory_hostname }}"
For example
shell> ansible-playbook -i hosts playbook.yml -e dbservers=dbserversA
PLAY [Stop app before rebooting db servers] ***********************
msg: Stop app on app01
msg: Stop app on app02
PLAY [Reboot db servers] *******************************************
msg: Reboot db01
msg: Reboot db02
How can I stop services on app* when the play is running on localhost? Either use delegate_to, or create a dynamic group with add_host and use it in the next play, e.g.
shell> cat playbook.yml
---
- name: Create group appX
  hosts: localhost
  gather_facts: false
  tasks:
    - add_host:
        name: "{{ item.key }}"
        groups: appX
      loop: "{{ project_live|dict2items }}"
      loop_control:
        label: "{{ item.key }}"
      when: item.value.dbs|intersect(groups[dbservers])|length > 0

- name: Stop app before rebooting db servers
  hosts: appX
  gather_facts: false
  tasks:
    - debug:
        msg: "Stop app on {{ inventory_hostname }}"

- name: Reboot db servers
  hosts: "{{ dbservers }}"
  gather_facts: false
  tasks:
    - debug:
        msg: "Reboot {{ inventory_hostname }}"
gives
shell> ansible-playbook -i hosts playbook.yml -e dbservers=dbserversA
PLAY [Create group appX] ******************************************
ok: [localhost] => (item=app01)
ok: [localhost] => (item=app02)
skipping: [localhost] => (item=app03)
PLAY [Stop app before rebooting db servers] ************************
msg: Stop app on app01
msg: Stop app on app02
PLAY [Reboot db servers] *******************************************
msg: Reboot db01
msg: Reboot db02
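For completeness, a minimal sketch of the delegate_to variant mentioned above (my own adaptation of the same data, not part of the original answer):
- name: Stop app before rebooting db servers
  hosts: localhost
  gather_facts: false
  tasks:
    - debug:
        msg: "Stop app on {{ item.key }}"
      # in a real playbook this would be e.g. a systemd task; delegation
      # would run it on the affected app server instead of on localhost
      delegate_to: "{{ item.key }}"
      loop: "{{ project_live|dict2items }}"
      loop_control:
        label: "{{ item.key }}"
      when: item.value.dbs|intersect(groups[dbservers])|length > 0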
My question is somewhat similar to the one posted here, but that one doesn't quite answer it.
In my case I have an array containing multiple vars: entries, which I loop over when calling a certain role. The following example shows the idea:
some_vars_file.yml:
redis_config:
  - vars:
      redis_version: 6.0.6
      redis_port: 6379
      redis_bind: 127.0.0.1
      redis_databases: 1
  - vars:
      redis_version: 6.0.6
      redis_port: 6380
      redis_bind: 127.0.0.1
      redis_databases: 1
playbook.yml:
...
- name: Install and setup redis
  include_role:
    name: davidwittman.redis
  with_dict: "{{ dictionary }}"
  loop: "{{ redis_config }}"
  loop_control:
    loop_var: dictionary
...
As far as I understand, this should set the dictionary beginning with the vars node on every iteration, but somehow it doesn't. Is there any way to get something like this to work, or do I really have to redefine all the properties at the role call, populating them using with_items?
Given the role
shell> cat roles/davidwittman_redis/tasks/main.yml
- debug:
    var: dictionary
Remove with_dict. The playbook
shell> cat playbook.yml
- hosts: localhost
  vars_files:
    - some_vars_file.yml
  tasks:
    - name: Install and setup redis
      include_role:
        name: davidwittman_redis
      loop: "{{ redis_config }}"
      loop_control:
        loop_var: dictionary
gives
shell> ansible-playbook playbook.yml
PLAY [localhost] **********************************************
TASK [Install and setup redis] ********************************
TASK [davidwittman_redis : debug] *****************************
ok: [localhost] =>
dictionary:
vars:
redis_bind: 127.0.0.1
redis_databases: 1
redis_port: 6379
redis_version: 6.0.6
TASK [davidwittman_redis : debug] ******************************
ok: [localhost] =>
dictionary:
vars:
redis_bind: 127.0.0.1
redis_databases: 1
redis_port: 6380
redis_version: 6.0.6
Q: "Might there be an issue related to the variable population on role call?"
A: Yes. It can. See Variable precedence. vars_files is precedence 14. Any higher precedence will override it. Decide how to structure the data and optionally use include_vars (precedence 18). For example
shell> cat playbook.yml
- hosts: localhost
  tasks:
    - include_vars: some_vars_file.yml
    - name: Install and setup redis
      include_role:
        name: davidwittman_redis
      loop: "{{ redis_config }}"
      loop_control:
        loop_var: dictionary
Ultimately, command-line --extra-vars would override all previous settings:
shell> ansible-playbook playbook.yml --extra-vars "@some_vars_file.yml"
Q: "Maybe it is not possible to set the vars section directly via an external dictionary?"
A: It is possible, of course. The example in this answer clearly proves it.
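If the goal is for the role to actually pick up those values as its own variables (redis_port and friends), one option is to map the entries of each vars node onto the role's variables at the call site. A minimal sketch (the variable names are taken from the question; whether the role expects exactly these names is an assumption):
- name: Install and setup redis
  include_role:
    name: davidwittman_redis
  vars:
    redis_version: "{{ dictionary.vars.redis_version }}"
    redis_port: "{{ dictionary.vars.redis_port }}"
    redis_bind: "{{ dictionary.vars.redis_bind }}"
    redis_databases: "{{ dictionary.vars.redis_databases }}"
  loop: "{{ redis_config }}"
  loop_control:
    loop_var: dictionary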