I have an inventory as follows:
[central]
central_1
central_2
[peripheral]
peripheral_1
peripheral_2
...
central_1 and central_2 share the same database and each peripheral_* has its own local database.
Connection parameters are all stored in group_vars/central.yml and multiple host_vars/peripheral_* files.
Then I have a bunch of roles that access the database and perform specific operations based on values in the peripheral_conf hostvars.
My goal here is to run a subset of the roles on one central host, using the right DB access parameters.
First attempt
My first attempt was to have two different plays in the playbook, targeting different hosts:
- hosts: peripheral
  pre_tasks:
    - name: Load Configuration
      set_fact:
        peripheral_conf: "{{ hostvars[inventory_hostname] }}"
  roles:
    - role1
    - role2
    - role3

- hosts: central
  roles:
    - role1
    - role2
but, since I normally run the playbook limited to a specific peripheral host, I'm not able to run it against the central hosts.
Second attempt
I then tried to include the roles using delegate_to and delegate_facts:
- hosts: peripheral
  gather_facts: False
  pre_tasks:
    - name: Load Configuration
      set_fact:
        peripheral_conf: "{{ hostvars[inventory_hostname] }}"
  tasks:
    - name: Insert values into central DB based on peripheral configuration
      include_role:
        name: '{{ item }}'
        apply:
          delegate_to: central
          delegate_facts: yes
          run_once: yes
      loop:
        - role1
        - role2
  roles:
    # insert values into peripheral DB
    - role1
    - role2
    - role3
but it keeps using the peripheral's DB connection parameters.
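Delegation only changes where a task runs; templating still resolves variables for the current peripheral host, which explains this behaviour. A quick way to see it is something like the following sketch (not part of the original playbook; it assumes the roles read a db_host variable, as in the working attempt below, and uses central_1 from the inventory as the delegate):

- name: Show which db_host a delegated task actually sees
  debug:
    msg: "db_host resolves to {{ db_host | default('undefined') }}"  # prints the peripheral's value, not central_1's
  delegate_to: central_1
  run_once: yes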
Working attempt
I was finally able to make it work by explicitly overriding the connection parameters:
- hosts: peripheral
  gather_facts: False
  pre_tasks:
    - name: Load Configuration
      set_fact:
        peripheral_conf: "{{ hostvars[inventory_hostname] }}"
  tasks:
    - name: Insert values into central DB based on peripheral configuration
      include_role:
        name: '{{ item }}'
        apply:
          delegate_to: central
          run_once: yes
      vars:
        db_host: "{{ hostvars['central_app1']['db_host'] }}"
        db_database: "{{ hostvars['central_app1']['db_database'] }}"
        db_username: "{{ hostvars['central_app1']['db_username'] }}"
        db_password: "{{ hostvars['central_app1']['db_password'] }}"
      loop:
        - role1
        - role2
  roles:
    # insert values into peripheral DB
    - role1
    - role2
    - role3
And finally, the question
The playbook does the job, but I'd like to know whether this is the only way to do this, or whether there is a better and/or more efficient way.
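For reference, one possible variation on the working attempt (only a sketch, not necessarily better): pick the delegate host dynamically from the central group and read its connection parameters from hostvars, so the central host name is not hard-coded. The pre_tasks and the peripheral roles stay exactly as in the working attempt and are omitted here:

- hosts: peripheral
  gather_facts: False
  vars:
    central_host: "{{ groups['central'] | first }}"  # assumes any central host will do, since they share the same DB
  tasks:
    - name: Insert values into central DB based on peripheral configuration
      include_role:
        name: '{{ item }}'
        apply:
          delegate_to: "{{ central_host }}"
          run_once: yes
      vars:
        db_host: "{{ hostvars[central_host]['db_host'] }}"
        db_database: "{{ hostvars[central_host]['db_database'] }}"
        db_username: "{{ hostvars[central_host]['db_username'] }}"
        db_password: "{{ hostvars[central_host]['db_password'] }}"
      loop:
        - role1
        - role2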
Related
I have this Ansible playbook with three different plays. What I want to do is launch the last two plays based on a condition. How can I do this directly at the playbook level (not using a when clause in each role)?
- name: Base setup
  hosts: all
  roles:
    - apt
    - pip

# !!!!! SHUT DOWN IF NOT DEBIAN DIST

- name: dbserver setup
  hosts: dbserver
  remote_user: "{{ user }}"
  become: true
  roles:
    - mariadb

- name: webserver and application setup
  hosts: webserver
  remote_user: "{{ user }}"
  become: true
  roles:
    - php
    - users
    - openssh
    - sshkey
    - gitclone
    - symfony
You could just end the play for the hosts you do not wish to continue with, with the help of a meta task in pre_tasks:
- name: dbserver setup
  hosts: dbserver
  remote_user: "{{ user }}"
  become: true
  pre_tasks:
    - meta: end_host
      when: ansible_facts.os_family != 'Debian'
  roles:
    - mariadb
And do the same for the web servers.
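Applied to the webserver play from the question, the same pattern would look like this:

- name: webserver and application setup
  hosts: webserver
  remote_user: "{{ user }}"
  become: true
  pre_tasks:
    - meta: end_host
      when: ansible_facts.os_family != 'Debian'
  roles:
    - php
    - users
    - openssh
    - sshkey
    - gitclone
    - symfony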
I have a playbook defined as below:
- name: install percona rpms
  hosts: imdp
  roles:
    - role1
    - role2
    - role3
    - role4
I just want the tasks defined in role3 to be executed serially. If I define serial: 1 in the role3 tasks, it doesn't work; all tasks are executed in parallel. But if I define serial: 1 in the main YAML (the above YAML), then all the roles are executed serially, which is also not what I need.
How can I get just role3 to be executed serially?
"serial" is available in a play only. See Playbook Keywords. The solution is to split the roles among more plays. For example
- name: Play 1. install percona rpms
  hosts: imdp
  roles:
    - role1
    - role2

- name: Play 2. install percona rpms
  hosts: imdp
  serial: 1
  roles:
    - role3

- name: Play 3. install percona rpms
  hosts: imdp
  roles:
    - role4
As an example, serial execution can also be emulated within a single play: each host matches only its own entry in the loop over ansible_play_hosts, so the included tasks are processed one host at a time.
my-tasks.yml:
---
- include_tasks: custom-tasks.yml
  when: inventory_hostname == item
  with_items: "{{ ansible_play_hosts }}"
custom-tasks.yml:
---
- debug:
    var: inventory_hostname
I'm still somewhat new to Ansible so I'm sure this isn't the proper way of doing this, but it's what I've come up with considering the requirements I was given.
I have to perform tasks on a server for which I do not readily have credentials, since they are locked in a vault. My way of working around this is to get the credentials from the vault and then delegate tasks to that server. I've accomplished this, but I'm wondering if there is a cleaner or more adequate way of doing it. So, here's my setup:
I have a playbook that just has:
---
- hosts: localhost
  roles:
    - role: get_credentials  # <-- not the real role names
    - role: use_credentials
Basically, get_credentials gets some credentials from a vault and then use_credentials performs tasks, but each task has
delegate_to: protected_server
vars:
  ansible_ssh_user: "{{ user }}"
  ansible_ssh_pass: "{{ password }}"
at the end of it
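For illustration, a task inside use_credentials currently ends up looking something like this (the module call is made up for the example; only the delegation block comes from the actual role):

- name: Example task inside use_credentials (hypothetical command)
  command: whoami  # placeholder for a real task of the role
  delegate_to: protected_server
  vars:
    ansible_ssh_user: "{{ user }}"
    ansible_ssh_pass: "{{ password }}"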
Is there a way I can delegate all the tasks in use_credentials without having to delegate each task individually?
I'd move both your roles from the roles: section to tasks:, using include_role. Something like this:
tasks:
  - name: Get credentials
    include_role:
      name: get_credentials # I expect this one to set_fact user_from_get_credential and password_from_get_credential

  - name: Use credentials
    include_role:
      name: use_credentials
      apply:
        delegate_to: protected_server
    vars:
      ansible_ssh_user: "{{ user_from_get_credential }}"
      ansible_ssh_pass: "{{ password_from_get_credential }}"
The apply: block is what propagates delegate_to to the tasks inside the dynamically included role.
I have a long playbook with a number of roles defined. Now I have a requirement: for one role, I need to pass the host as a variable, which is set in an earlier role.
e.g. playbook:
---
- name: task1
  hosts: app1
  gather_facts: no
  any_errors_fatal: true
  roles:
    - role-1

- name: task2
  hosts: "{{ host }}"
  any_errors_fatal: true
  gather_facts: no
  roles:
    - role-2
My role-1
---
- name: setting the var
  set_fact:
    host: "app2"

- debug:
    var: host
My role-2
---
- debug:
    var: host

- name: do something
  file:
    path: /home/ec2-user/dir1
    state: directory
    mode: '0755'
However, when I try to run my playbook, the play with role-2 gets skipped because no hosts matched. Can someone point me to how to get this setup working?
The thing you want is add_host, and then set the newly created or assigned group as the hosts: of the 2nd play:
- hosts: app1
  tasks:
    - add_host:
        name: app2
        groups:
          - my-group

- hosts: my-group
  tasks:
    - debug: var=ansible_host
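Mapped onto the playbook from the question, that could look something like the sketch below (the group name my-group is arbitrary; role-1's set_fact is replaced by add_host, either inside the role or as a play task):

---
- name: task1
  hosts: app1
  gather_facts: no
  any_errors_fatal: true
  tasks:
    - name: Register the target host for the next play
      add_host:
        name: app2
        groups:
          - my-group

- name: task2
  hosts: my-group
  any_errors_fatal: true
  gather_facts: no
  roles:
    - role-2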
I have an Ansible playbook which does multiple things, as below:
1. Download artifacts from Nexus onto the local server (Ansible master).
2. Copy those artifacts onto multiple remote machines, let's say server1/2/3, etc.
I have used roles in my playbook, and I want the role that downloads the artifacts (repodownload) to run only once, because why would I want to download the same thing again? I have tried to use run_once: true, but I guess that won't work, because it only applies within one play run and my playbook is running multiple times for multiple hosts.
---
- name: Deploy my Application to tomcat nodes
  hosts: '{{ target_env }}'
  serial: 1
  roles:
    - role: repodownload
      tags:
        - repodownload
    - role: copyrepo
      tags:
        - copyrepo
    - role: stoptomcat
      tags:
        - stoptomcat
    - role: deploy
      tags:
        - deploy
Here target_env is being passed from the command line and it's the remote host group.
Any help is appreciated.
Below is the code from main.yml of the repodownload role:
- connection: local
  name: Downloading files from Nexus to local server
  get_url: url="{{ nexus_url }}/{{item}}/{{ myvm_release_version }}/{{item}}-{{ release_ver }}.war" dest={{ local_server_location }}
  with_items:
    - "{{ temps }}"
This is a really simple one that I battled with too.
Try this:
- connection: local
  name: Downloading files from Nexus to local server
  get_url:
    url: "{{ nexus_url }}/{{item}}/{{ myvm_release_version }}/{{item}}-{{ release_ver }}.war"
    dest: "{{ local_server_location }}"
  with_items:
    - "{{ temps }}"
  run_once: true
Just something else, unrelated to your main question: when you run a module that has really long args, like in your example above, rather break the params into their own lines nested under the module. It makes for easier reading, and it makes it easier to spot any potential typos or syntax errors early.
Okay, extending from your conversation with Zeitounator: the following workaround will work without changing your vars files. Just remember that this is a workaround and might not be the most efficient way to do the job.
---
- name: Download my repo to localhost
  # Executes only for first host in target_env and has visibility to group vars of target_env
  hosts: '{{ target_env }}[0]'
  serial: 1
  roles:
    - role: repodownload
      tags:
        - repodownload

- name: Deploy my Application to tomcat nodes
  # Executes for all hosts in target_env
  hosts: '{{ target_env }}'
  serial: 1
  roles:
    - role: copyrepo
      tags:
        - copyrepo
    - role: stoptomcat
      tags:
        - stoptomcat
    - role: deploy
      tags:
        - deploy