How to modularize Ansible playbook with multiple virtual hosts?

(EDIT: the question was thoroughly rewritten based on feedback in comments, aiming to follow a suggestion to use "more code, less talk")
I've accumulated some Ansible playbooks for setting up our host(s), and I want to start modularizing them. The current situation is roughly like below:
docker-registry-playbook.yml:
- hosts: foo
  tasks:
    - name: install docker
      # ...
    - name: copy config files
      # ...
    - name: start docker registry
      # ...etc...
issue-tracker-playbook.yml:
- hosts: foo
  tasks:
    - name: install issue tracker
      # ...etc...
tools-playbook.yml:
- hosts: foo
  tasks:
    - name: configure nginx virtual hosts
      copy:
        src: "{{ item }}"
        dest: /etc/nginx/sites-available/
        # ...
      with_items:
        - docker.example.com
        - issues.example.com
    - name: start nginx
      # ...
    - name: configure letsencrypt
      # ...
My first idea was to split it into roles like below:
- hosts: foo
  roles: [webserver, docker_registry, issue_tracker]
However, in that case I don't know how to make the webserver role "auto-detect" the host names (virtual hosts) it should expose. I would like to be able to easily move e.g. the docker_registry role to a different host (bar), and have the webserver auto-detect the change and update the virtual hosts accordingly when I change the playbook to:
- hosts: foo
  roles: [webserver, docker_registry]

- hosts: bar
  roles: [webserver, issue_tracker]
I would strongly prefer not having to explicitly list virtual hosts as parameters to webserver on both foo and bar; that would be too much duplication for me. Is it possible to have webserver autodetect the names of the "virtual hosts" from the other roles configured on the same server?
Or am I approaching the modularization in the wrong way, and the idiomatic approach in Ansible is to split along some other axis?
If it's not possible to do fully automatically, I'd like it done with minimal wiring; I could maybe grudgingly accept something like the below, though it already looks much too long and ugly to me (and I still don't know how to do this in Ansible):
- hosts: foo
  roles:
    - docker_registry
    - role: webserver
      vars:
        vhosts: [ "docker_registry" ]

I'm not sure if I understand your problem correctly. Why is it necessary to have a common variable virtual_hosts in all roles?
... Initially, I thought I could set a variable in each of service1 and service2, named e.g. virtual_hosts: and Ansible would merge the duplicated variables into one...
For example:
- hosts: foo
  vars:
    virtual_host_service1: docker.example.com
    virtual_host_service2: issues.example.com
    webserver_vm:
      - "{{ virtual_host_service1 }}"
      - "{{ virtual_host_service2 }}"
  roles:
    - webserver
    - service1
    - service2
You might want to set the variables in the playbook, in host_vars, or in any of the other places allowed by Ansible's variable precedence rules.
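For instance, a minimal sketch of the same variables moved out of the play and into a host_vars file (the path follows Ansible's convention for host foo):
# host_vars/foo.yml
virtual_host_service1: docker.example.com
virtual_host_service2: issues.example.com
webserver_vm:
  - "{{ virtual_host_service1 }}"
  - "{{ virtual_host_service2 }}"
The play then only needs to list the roles; the vars: block disappears entirely.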

Related

Run tasks on different hosts within an imported task

The calling playbook has:
- hosts: ssh_servers
  tasks:
    - import_tasks: create_files.yml
Then, in create_files.yml, I'd like to run some tasks on hosts other than ssh_servers, such as:
- hosts: other_servers
  tasks:
    - file: ...
I get: ERROR! conflicting action statements: hosts, tasks.
Is this because I'm trying to run against hosts that were never included in the calling task?
Is there a way to accomplish this other than in the calling playbook have:
- hosts:
    - ssh_servers
    - other_servers
  tasks:
    - import_tasks: create_files.yml
Thank you.
Is this because I'm trying to run against hosts that were never included in the calling task? Is there a way to accomplish this other than in the calling playbook
I believe the answer is yes, although it'll be weird and could cause some confusion for subsequent folks who interact with your playbook.
Given a hypothetical create_files.yml of:
- name: create /tmp/hello_world on hosts "not_known_at_launch_time"
  file:
    path: /tmp/hello_world
    state: present
  delegate_to: '{{ item }}'
  with_items: '{{ groups["not_known_at_launch_time"] }}'
then the glue needed to bridge them together is the dynamic creation of a group, plus that delegate_to: keyword:
- hosts: ssh_hosts
  tasks:
    - add_host:
        groups: not_known_at_launch_time
        name: secret-host-0
        ansible_host: 192.168.1.1 # or whatever
        # ... other hostvars here ...
    - include_tasks: create_files.yml
It may be possible to combine those inside create_files.yml, via some shared vars: that say which host-and-IP should be added to the magic group name; that also has the benefit of keeping the magic group name localized to the file that consumes it.
BE AWARE, I did actually test this, but not extensively, so there may be some weird things, such as needing run_once: yes on the tasks to keep them from being run groups.ssh_hosts|length times, or similar stuff.
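For illustration, a sketch of that self-contained variant (the extra_host_name and extra_host_ip variables are hypothetical, and run_once: yes guards against the repeated-execution issue just mentioned):
# create_files.yml, combined variant (sketch)
- add_host:
    name: '{{ extra_host_name }}'     # hypothetical shared var
    ansible_host: '{{ extra_host_ip }}' # hypothetical shared var
    groups: not_known_at_launch_time
  run_once: yes
- name: create /tmp/hello_world on the dynamically added hosts
  file:
    path: /tmp/hello_world
    state: present
  delegate_to: '{{ item }}'
  with_items: '{{ groups["not_known_at_launch_time"] }}'
  run_once: yes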
As Vladimir correctly pointed out, what you actually want to happen is to make that relationship formal:
- hosts: ssh_hosts
  tasks:
    # ... whatever tasks you had before
    - add_host: # ... as before ...

- hosts: anonymous_group_name_goes_here
  tasks:
    - include_tasks: create_files.yml

- hosts: ssh_hosts
  tasks:
    - debug:
        msg: and now you are back to the ssh_hosts to pick up what they were supposed to be doing when you stopped to post on SO

Specifying multiple default groups as hosts in an Ansible playbook

I'm looking for a way to specify multiple default groups as hosts in an Ansible playbook. I've been using this method:
- name: Do things on hosts
  hosts: "{{ specific_hosts | default('development') }}"
  tasks: # do things on hosts
However, I'm trying to avoid specifying hosts manually (it is error-prone), and the default hosts are inadequate if I want to run the same tasks against multiple groups of servers (for instance, development and QA).
I don't know if this is possible in a playbook:
- name: Do things on hosts
  hosts: "{{ specific_hosts | default('development') && default('qa') }}"
I don't know if this is possible in an inventory:
[development]
1.2.3.4
[qa]
2.3.4.5
[dev_plus_qa]
development
qa
Creating multiple redundant tasks is undesirable as well. I would like to avoid forcing users to specify specific_qa_hosts (for instance), and I would like to avoid code repetition:
- name: Do things on DEV
  hosts: "{{ specific_hosts | default('development') }}"

- name: Do things on QA
  hosts: "{{ specific_hosts | default('qa') }}"
Is there any elegant way of accomplishing this?
This is indeed possible and is described in the common patterns targeting hosts and groups documentation of Ansible.
Targeting two groups is as simple as hosts: group1:group2, so your playbook becomes:
- name: Do things on hosts
  hosts: "{{ specific_hosts | default('development:qa') }}"
And if you would rather achieve this via the inventory, that is also possible, as described in the documentation examples:
So in your case, it would be:
[development]
1.2.3.4
[qa]
2.3.4.5
[dev_plus_qa:children]
development
qa
And then, as a playbook:
- name: Do things on hosts
  hosts: "{{ specific_hosts | default('dev_plus_qa') }}"
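Either way, the default can still be overridden at run time by passing specific_hosts as an extra variable (the playbook filename here is illustrative):
ansible-playbook site.yml -e specific_hosts=development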

Filter hosts using a variable from with_items in Ansible

I have the following setup for Ansible, and I would like to parameterize a filter that will loop and filter out specific hosts.
- name: run on hosts
  hosts: "{{ item }}"
  roles:
    - directory/role-name
  with_items:
    - us-east-1a
    - us-east-1b
    - us-east-1c
The result would be that the role called role-name would be first run on us-east-1a hosts, then us-east-1b... etc.
The above simply errors out with:
ERROR! 'with_items' is not a valid attribute for a Play
Is there a way to accomplish what I am trying to do, which is chunking my host list into groups, and running the same role against them, one at a time?
The following achieves the result I am looking for, but is clunky, and not dynamic in length.
- name: run on us-east-1a
  hosts: "us-east-1a"
  roles:
    - my-role

- name: run on us-east-1b
  hosts: "us-east-1b"
  roles:
    - my-role

- name: run on us-east-1c
  hosts: "us-east-1c"
  roles:
    - my-role
I think the only way to (1) have common code and (2) serialise play execution per group of hosts (with the targets inside a group running in parallel) would be to split your playbook into two:
playbook-main.yml
---
- import_playbook: playbook-sub.yml
  vars:
    host_group_to_run: us-east-1a

- import_playbook: playbook-sub.yml
  vars:
    host_group_to_run: us-east-1b

- import_playbook: playbook-sub.yml
  vars:
    host_group_to_run: us-east-1c
playbook-sub.yml
- hosts: "{{ host_group_to_run }}"
  roles:
    - my-role
  # other common code
If you wanted to serialise per host, there is a serial declaration that might be used in conjunction with this suggestion; but despite your comments and edit it remains unclear, because you refer to us-east-1a sometimes as a "host" in singular form, and other times as a "group of hosts" or an "availability zone".
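For illustration, a minimal sketch of per-host serialisation with serial, assuming the three names are inventory groups:
- hosts: us-east-1a:us-east-1b:us-east-1c
  serial: 1 # one host at a time across the combined pattern
  roles:
    - my-role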
Will host patterns do the job?
- name: run on us-east-1a
  hosts: us-east-1a,us-east-1b,us-east-1c
  roles:
    - my-role
Update: techraf has opened my eyes with his comment: a host pattern alone will not do the job. It will just concatenate all hosts from all groups, albeit in a predictable way, which in some cases can be used to iterate the hosts in every group separately. See this answer for details.

Ansible run the same role multiple times with different vars file included

I have a role which I would like to run multiple times with different vars files. I am currently doing the following:
- hosts: localhost
  pre_tasks:
    - include_vars: "vars/vars1.yml"
  roles:
    - my_role

- hosts: localhost
  pre_tasks:
    - include_vars: "vars/vars2.yml"
  roles:
    - my_role
Is there a less boilerplate way to do this? I know it is possible to parameterise roles, but I can't find anything in the Ansible documentation about running a role multiple times and calling a different include_vars each time.
I wanted to do something similar a while back and ended up having two groups in my inventory:
[group1]
localhost1
[group2]
localhost2
and then in group_vars I had different values. In your case that would be:
# file: group_vars/group1/main.yml
include_file: vars/vars1.yml
and
# file: group_vars/group2/main.yml
include_file: vars/vars2.yml
Then you can modify your playbook to something like this:
- hosts: all
  pre_tasks:
    - include_vars: "{{ include_file }}"
  roles:
    - my_role
and finally, execute your playbook for both groups:
ansible-playbook pb.yml -l group1,group2
and it should take care of both installations.
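For completeness, a less boilerplate alternative (my sketch, not part of the original answer) is to loop include_role over the vars files in a single play; this assumes the role itself consumes include_file via an include_vars task:
- hosts: localhost
  tasks:
    - include_role:
        name: my_role
      vars:
        include_file: "{{ item }}" # read inside the role via include_vars
      loop:
        - vars/vars1.yml
        - vars/vars2.yml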

Ansible: removing hosts

I know that one can add a host with the following task:
- name: Add new instance to host group
  add_host:
    hostname: '{{ item.public_ip }}'
    groupname: "tag_Name_api_production"
  with_items: ec2.instances
But I can't seem to find a way to remove a host from inventory. Is there any way to do this?
Unfortunately, it seems that you can't do this using Ansible 2. There is no remove_host module or anything similar.
However, using Ansible 2 you can refresh your inventory mid-play:
- meta: refresh_inventory
Have a look at this question.
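A minimal sketch of how that can be used (assuming a file-based inventory; the path and hostname are illustrative): remove the host from the inventory source, then refresh, and later plays no longer see it:
- hosts: localhost
  tasks:
    - name: drop a host from the inventory file
      lineinfile:
        path: ./inventory # illustrative path
        line: decommissioned-host.example.com
        state: absent
    - meta: refresh_inventory # subsequent plays use the pruned inventory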
Another idea might be to filter the hosts beforehand: try adding them to a group, and then excluding this group in a later play, e.g.:
- hosts: '!databases'
You can just stop the play for those hosts:
- name: Remove unwanted hosts from play_hosts
  meta: end_host
  when: unwanted
This assumes, of course, that the unwanted variable exists on all the hosts and is set properly.
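If the variable might be missing on some hosts, a guard with the default filter (my addition to the snippet above) avoids undefined-variable errors:
- name: Remove unwanted hosts from play_hosts
  meta: end_host
  when: unwanted | default(false)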
You can't remove hosts, but you can choose to run only on a newly created group.
- name: kubectl
  hosts: localhost
  gather_facts: false
  tasks:
    - add_host:
        name: nextcloud
        ansible_connection: kubectl
        ansible_kubectl_context: cluster
        ansible_kubectl_namespace: default
        ansible_kubectl_pod: nextcloud-75fc7f5c6f-hxrq6
        groupname: "pods"

# only run on pods
- hosts: pods
  gather_facts: false
  tasks:
    - raw: pwd
      register: raw_result
    - debug:
        msg: "{{ raw_result.stdout_lines[0] }}"
We typically do it like this in a playbook using multiple hosts: sections.
- hosts: auth:!ocp
  roles:
    - ntp-server

- hosts: all:!auth:!ocp
  roles:
    - ntp-client
This removes groups of hosts from consideration via the !group mechanism. Specifically, in the first block we're excluding the ocp group, and in the second we're excluding both the auth and ocp groups.
References
https://docs.ansible.com/ansible/latest/user_guide/intro_patterns.html
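The same exclusion syntax also works ad hoc on the command line via --limit (my example; the playbook name is illustrative, and the pattern needs quoting so the shell doesn't interpret the !):
ansible-playbook site.yml --limit 'all:!ocp'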
I just hit the same problem. I have tests which run converge for a given playbook (which I can't modify), and then I need to run the same playbook with a smaller set of hosts in the group.
My solution is:
Let's say you have a group target that you want to make smaller.
Put the minimal set of hosts in the target group.
Put the additional hosts into a target_addon group.
- hosts: target_addon
  tasks:
    - add_host:
        name: '{{ inventory_hostname }}'
        groups: [target]

- import_playbook: converge.yaml # uses group `target` with the added hosts

# Remove the hosts added from target_addon by reloading the inventory
- hosts: localhost
  tasks:
    - meta: refresh_inventory

- import_playbook: converge.yaml # uses group `target` without the added hosts
