Issue passing variable to Ansible playbook for hosts: with double quotes

I'm trying to write a playbook that kicks off role playbooks and passes a list of hosts to each of them. The "master" playbook has some load balancing logic in it that I don't want to repeat in every role playbook and can't put into site.yml.
inventory.yml
[webservers]
Web1
Web2
Web3
Web4
master.yml
---
- name: Split Inventory into Odd/Even
  hosts: all
  gather_facts: false
  tasks:
    - name: Set Group Odd
      set_fact:
        group_type: "odd"
      when: (inventory_hostname.split(".")[0])[-1] | int is odd
    - name: Set Group Even
      set_fact:
        group_type: "even"
      when: (inventory_hostname.split(".")[0])[-1] | int is even
    - name: Make new groups "odd" or "even"
      group_by:
        key: "{{ group_type }}"

- name: Perform Roles on Odd
  include: webservers.yml hosts={{ groups['odd'] | join(' ') }}

- name: Perform Roles on Even
  include: webservers.yml hosts={{ groups['even'] | join(' ') }}
webservers.yml
- name: Perform Tasks on Webservers
  hosts: webservers:&"{{ hosts | replace('\"','') }}"
  roles:
    - pause
The join(' ') flattens the list of hosts into a string with a space separating each one. When I run the playbook it passes the list of hosts to webservers.yml; however, it adds double quotes to the beginning and end, causing webservers.yml to do nothing since no hosts match. I would assume the replace('\"','') would remove the quotes around the string, but that doesn't seem to be the case. Here's example output from webservers.yml:
[WARNING]: Could not match supplied host pattern, ignoring: Web4"
[WARNING]: Could not match supplied host pattern, ignoring: "Web2
Any ideas? Does hosts: handle filtering differently?

I feel that you are using roles and plays in the wrong way. Tasks should not change the list of hosts that the task or role is being executed upon. Basically, only a play (a thing with 'hosts: ..., tasks: ..., roles: ...') can control where to run.
There are a few exceptions, e.g. you can play with delegation and so on. But for your case, any attempt to use tasks or roles to control the list of hosts will only bring misery and hate (toward yourself, toward Ansible, etc.).
To do it right, just add yet another play to your playbook (a playbook is a list of plays).
Here is your code, slightly modified.
---
- name: Split Inventory into Odd/Even
  hosts: all
  gather_facts: false
  tasks:
    - name: Set Group Odd
      set_fact:
        group_type: "odd"
      when: (inventory_hostname.split(".")[0])[-1] | int is odd
    - name: Set Group Even
      set_fact:
        group_type: "even"
      when: (inventory_hostname.split(".")[0])[-1] | int is even
    - name: Make new groups "odd" or "even"
      group_by:
        key: "{{ group_type }}"

- name: Doing odd things
  hosts: odd
  gather_facts: false
  tasks:
    - name: Perform Roles
      include: webservers.yml

- name: Doing even things
  hosts: even
  gather_facts: false
  tasks:
    - name: Perform Roles
      include: webservers.yml
You can see I've just assigned plays to the two groups ('odd' and 'even'). Dynamic groups are preserved between plays in a playbook, and they are no different from any other group in this matter.
P.S. Do not use 'include'; use 'import_tasks' (includes are dangerous in newer versions of Ansible, avoid them if you can).
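For completeness, a minimal sketch of the import_tasks variant, assuming webservers.yml is reworked into a plain task file (it no longer needs its own hosts: line, since the play above already controls targeting):

- name: Doing odd things
  hosts: odd
  gather_facts: false
  tasks:
    - name: Perform Roles
      import_tasks: webservers.yml

where webservers.yml then contains only tasks, e.g.:

# webservers.yml - now a task file, not a play
- name: Perform Tasks on Webservers
  import_role:
    name: pause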

Related

Specifying multiple default groups as hosts in an Ansible playbook

I'm looking for a way to specify multiple default groups as hosts in an Ansible playbook. I've been using this method:
- name: Do things on hosts
  hosts: "{{ specific_hosts | default('development') }}"
  tasks: # do things on hosts
However I'm trying to avoid specifying hosts manually (it is error-prone), and the default hosts are inadequate if I want to run the same tasks against multiple groups of servers (for instance, development and QA).
I don't know if this is possible in a playbook:
- name: Do things on hosts
  hosts: "{{ specific_hosts | default('development') && default('qa') }}"
I don't know if this is possible in an inventory:
[development]
1.2.3.4
[qa]
2.3.4.5
[dev_plus_qa]
development
qa
Creating multiple redundant tasks is undesirable as well - I would like to avoid forcing users to specify specific_qa_hosts (for instance) and I would like to avoid code repetition:
- name: Do things on DEV
  hosts: "{{ specific_hosts | default('development') }}"

- name: Do things on QA
  hosts: "{{ specific_hosts | default('qa') }}"
Is there any elegant way of accomplishing this?
This is indeed possible and is described in the common patterns targeting hosts and groups documentation of Ansible.
So targeting two groups is as simple as hosts: group1:group2, and your playbook becomes:
- name: Do things on hosts
  hosts: "{{ specific_hosts | default('development:qa') }}"
And if you would rather achieve this via the inventory, that is also possible, as described in the documentation examples:
So in your case, it would be:
[development]
1.2.3.4
[qa]
2.3.4.5
[dev_plus_qa:children]
development
qa
And then, as a playbook:
- name: Do things on hosts
  hosts: "{{ specific_hosts | default('dev_plus_qa') }}"

Filter hosts using a variable from with_items in ansible

I have the following set up for Ansible, and I would like to parameterize a filter that will loop, and filter out specific hosts.
- name: run on hosts
  hosts: "{{ item }}"
  roles:
    - directory/role-name
  with_items:
    - us-east-1a
    - us-east-1b
    - us-east-1c
The result would be that the role called role-name would be first run on us-east-1a hosts, then us-east-1b... etc.
The above simply errors out with
ERROR! 'with_items' is not a valid attribute for a Play
Is there a way to accomplish what I am trying to do, which is chunking my host list into groups, and running the same role against them, one at a time?
The following achieves the result I am looking for, but is clunky, and not dynamic in length.
- name: run on us-east-1a
  hosts: "us-east-1a"
  roles:
    - my-role

- name: run on us-east-1b
  hosts: "us-east-1b"
  roles:
    - my-role

- name: run on us-east-1c
  hosts: "us-east-1c"
  roles:
    - my-role
I think the only way to (1) have common code and (2) serialise play execution per group of hosts (with targets inside a group running in parallel) would be to split your playbook into two:
playbook-main.yml
---
- import_playbook: playbook-sub.yml
  vars:
    host_group_to_run: us-east-1a

- import_playbook: playbook-sub.yml
  vars:
    host_group_to_run: us-east-1b

- import_playbook: playbook-sub.yml
  vars:
    host_group_to_run: us-east-1c
playbook-sub.yml
- hosts: "{{ host_group_to_run }}"
roles:
- my-role
# other common code
If you wanted to serialise per host, then there is a serial declaration that might be used in conjunction with this suggestion, but despite your comments and edit it's unclear, because at times you refer to us-east-1a as a "host" in singular form, and at other times as a "group of hosts" or an "availability zone".
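For reference, a minimal sketch of serial, assuming per-host serialisation across the combined groups is what you want (this runs the role on one host at a time, regardless of which group the host came from):

- hosts: us-east-1a:us-east-1b:us-east-1c
  serial: 1
  roles:
    - my-role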
Will host patterns do the job?

- name: run on all zones
  hosts: us-east-1a,us-east-1b,us-east-1c
  roles:
    - my-role
Update: techraf has opened my eyes with his comment: a host pattern alone will not do the job.
It will just concatenate all hosts from all groups.
But in a predictable way, which in some cases can be used to iterate hosts in every group separately.
See this answer for details.

Ansible - task serial 1 reverse order

I'd like to create two playbooks, one to stop an environment, another to start it.
Part of the environment is a RabbitMQ cluster, for which stop/start order is quite important, specifically the last node stopped needs to be the first node started.
I was wondering if there is a way to specify a reverse order for running a task against a group.
That way I could apply the stop with serial 1, and the start with serial 1 and reverse group order.
I haven't found a way to do that other than defining the rabbitmq host group twice (under different names), in inverted order, which seems a bit distasteful.
I also attempted the following:
- hosts: "{ myhostsgroup | sort(reverse=False) }"
serial: 1
And
- hosts: "{ myhostsgroup | reverse }"
serial: 1
But the result stays the same, whichever variation (reverse=True, reverse | list) is attempted.
Any help would be greatly appreciated.
You can create dynamic groups in runtime:
---
- hosts: localhost
  gather_facts: no
  tasks:
    - add_host:
        name: "{{ item }}"
        group: forward
      with_items: "{{ groups['mygroup'] }}"

    - add_host:
        name: "{{ item }}"
        group: backward
      with_items: "{{ groups['mygroup'] | reverse | list }}"

- hosts: forward
  gather_facts: no
  serial: 1
  tasks:
    - debug:

- hosts: backward
  gather_facts: no
  serial: 1
  tasks:
    - debug:
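Applied to the stop/start scenario from the question, a sketch of the two playbooks might look like this (each would also need the group-building play above; the rabbitmq-server service name is an assumption):

stop.yml

- hosts: forward
  gather_facts: no
  serial: 1
  tasks:
    - service:
        name: rabbitmq-server  # assumed service name
        state: stopped

start.yml

- hosts: backward
  gather_facts: no
  serial: 1
  tasks:
    - service:
        name: rabbitmq-server  # assumed service name
        state: started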

If set_fact is scoped to a host, can I use 'dummy' host as a global variable map?

I have defined two groups of hosts: wmaster and wnodes. Each group runs in its own play:
- hosts: wmaster
  roles:
    - all
    - swarm-mode
  vars:
    - swarm_master: true

- hosts: wnodes
  roles:
    - all
    - swarm-mode
I use host variables (swarm_master) to define different behavior of some role.
Now, my first playbook performs some initialization and I need to share data with the nodes. What I did was use set_fact in the first play, and then look it up in the second play:
- set_fact:
    docker_worker_token: "{{ hostvars[swarm_master_ip].foo }}"
I don't like using the swarm_master_ip. How about adding a dummy host global, with e.g. address 1.1.1.1, that does not get any role and serves just to hold the global facts/variables?
If you're using Ansible 2 then you can make use of delegate_facts during your first play:
- name: set fact on swarm nodes
  set_fact: docker_worker_token="{{ some_var }}"
  delegate_to: "{{ item }}"
  delegate_facts: True
  with_items: "{{ groups['wnodes'] }}"
This should delegate the set_fact task to every host in the wnodes group, and will also delegate the resulting fact to those hosts, instead of setting the fact on the inventory host currently being targeted by the first play.
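The second play can then read the fact directly on each node, e.g.:

- hosts: wnodes
  tasks:
    - debug:
        var: docker_worker_token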
How about to add a dummy host: global
I have actually found this suggestion to be quite useful in certain circumstances.
---
- hosts: my_server
  tasks:
    # create server_fact somehow
    - add_host:
        name: global
        my_server_fact: "{{ server_fact }}"

- hosts: host_group
  tasks:
    - debug: var=hostvars['global']['my_server_fact']

Ansible: removing hosts

I know that one can add a host with the following task:
- name: Add new instance to host group
  add_host:
    hostname: '{{ item.public_ip }}'
    groupname: "tag_Name_api_production"
  with_items: ec2.instances
But I can't seem to find a way to remove a host from inventory. Is there any way to do this?
Unfortunately, it seems that you can't do this in Ansible 2. There is no remove_host module or anything similar.
However, using Ansible 2 you can refresh your inventory mid-play:
- meta: refresh_inventory
Have a look at this question
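A minimal sketch of this mid-play refresh, assuming the unwanted host has already been removed from the inventory source by this point:

- hosts: localhost
  gather_facts: no
  tasks:
    - meta: refresh_inventory  # re-reads the inventory; removed hosts disappear from groups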
Another idea might be to filter hosts beforehand. Try adding them to a group, and then excluding this group in a later play, e.g.:
- hosts: '!databases'
You can just stop the play for those hosts:
- name: Remove unwanted hosts from play_hosts
  meta: end_host
  when: unwanted
This assumes, of course, that the unwanted variable exists on all the hosts and is set properly.
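A short sketch of this in context (meta: end_host requires Ansible 2.8 or later; the default(false) guards against hosts where the variable is undefined):

- hosts: all
  gather_facts: false
  tasks:
    - name: Remove unwanted hosts from play_hosts
      meta: end_host
      when: unwanted | default(false)

    - name: Runs only on the remaining hosts
      debug:
        msg: "still in the play"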
You cannot remove hosts, but you can choose to run only on a newly created group.
- name: kubectl
  hosts: localhost
  gather_facts: false
  tasks:
    - add_host:
        name: nextcloud
        ansible_connection: kubectl
        ansible_kubectl_context: cluster
        ansible_kubectl_namespace: default
        ansible_kubectl_pod: nextcloud-75fc7f5c6f-hxrq6
        groupname: "pods"

# only run on pods
- hosts: pods
  gather_facts: false
  tasks:
    - raw: pwd
      register: raw_result
    - debug:
        msg: "{{ raw_result.stdout_lines[0] }}"
We typically do it like this in a playbook using multiple hosts: sections.
- hosts: auth:!ocp
  roles:
    - ntp-server

- hosts: all:!auth:!ocp
  roles:
    - ntp-client
This removes groups of hosts from consideration via the !group mechanism. Specifically, in the 1st block we're excluding the ocp group, and in the 2nd we're excluding both the auth and ocp groups.
References
https://docs.ansible.com/ansible/latest/user_guide/intro_patterns.html
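The same pattern syntax also works ad hoc from the command line via --limit, e.g. (the playbook name is hypothetical):

ansible-playbook site.yml --limit 'all:!ocp'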
I just ran into the same problem. I have tests which run converge for a given playbook (which I can't modify), and then I need to run the same playbook with a smaller set of hosts in the group.
My solution is:
Let's say you have a group target you want to make smaller.
Put the smallest number of hosts in the target group.
Put the additional hosts into a target_addon group.
- hosts: target_addon
  tasks:
    - add_host:
        name: '{{ inventory_hostname }}'
        groups: [target]

- import_playbook: converge.yaml  # uses group `target` with added hosts

# Remove the hosts added from the target_addon group by reloading the inventory
- hosts: localhost
  tasks:
    - meta: refresh_inventory

- import_playbook: converge.yaml  # uses group `target` without added hosts
