I am not sure how to find the first Ansible hostname from group_names. Could you kindly advise me on how to do it?
hosts
[webservers]
server1
server2
server3
[webservers-el7]
server4
server5
server6
And I have two different playbooks, one for each host group:
playbook1.yml
- name: deploy app
  hosts: webservers
  serial: 8
  roles:
    - roles1
playbook2.yml
- name: deploy app
  hosts: webservers-el7
  serial: 8
  roles:
    - roles1
The problem is that I have to delegate a task to the first host of each group. Previously I only used the webservers group, so it was much easier, using the task below:
- name: syncing web files to {{ version_dir }}
  synchronize:
    src: "{{ build_dir }}"
    dest: "{{ version_dir }}"
    rsync_timeout: 60
  delegate_to: "{{ groups.webservers | first }}"
If I have two different group names, how can I select the first host of each group, so it can be more dynamic?
If you want the first host of the current play to act as a kind of master host to sync from, I'd recommend another approach: use the play_hosts or ansible_play_hosts magic variable (depending on your Ansible version). See magic variables.
Like delegate_to: "{{ play_hosts | first }}".
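Applied to the synchronize task from the question, that would look something like the following (a minimal sketch; ansible_play_hosts is available in newer Ansible versions, older ones use play_hosts):

- name: syncing web files to {{ version_dir }}
  synchronize:
    src: "{{ build_dir }}"
    dest: "{{ version_dir }}"
    rsync_timeout: 60
  # delegate to the first host of whatever group the current play targets
  delegate_to: "{{ ansible_play_hosts | first }}"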
The thing is, when you say hosts: webservers-el7 to Ansible, webservers-el7 is a pattern here. Ansible searches for hosts that match this pattern and feeds them into the play. You may have written webservers-el* as well. So inside the play you don't have any variable that will tell you "I'm running this play on hosts from group webservers-el7...". You can only make some guesses by analyzing the group_names and groups magic variables. But this becomes clumsy when you have one host in several groups.
For hosts that are in a single group only, you may try: groups[group_names | first] | first
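For example (a sketch, assuming the current host belongs to exactly one group):

- debug:
    # group_names lists the current host's groups; take the first group's first host
    msg: "First host of my group: {{ groups[group_names | first] | first }}"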
To get any element from a group, use groups['group_name'][0..n].
This will get the first element from the group.
- debug: msg="{{ groups['group_name'][0] }}"
Related
I'm looking for a way to specify multiple default groups as hosts in an Ansible playbook. I've been using this method:
- name: Do things on hosts
  hosts: "{{ specific_hosts | default('development') }}"
  tasks: # do things on hosts
However I'm trying to avoid specifying hosts manually (it is error-prone), and the default hosts are inadequate if I want to run the same tasks against multiple groups of servers (for instance, development and QA).
I don't know if this is possible in a playbook:
- name: Do things on hosts
  hosts: "{{ specific_hosts | default('development') && default('qa') }}"
I don't know if this is possible in an inventory:
[development]
1.2.3.4
[qa]
2.3.4.5
[dev_plus_qa]
development
qa
Creating multiple redundant tasks is undesirable as well - I would like to avoid forcing users to specify specific_qa_hosts (for instance) and I would like to avoid code repetition:
- name: Do things on DEV
  hosts: "{{ specific_hosts | default('development') }}"

- name: Do things on QA
  hosts: "{{ specific_hosts | default('qa') }}"
Is there any elegant way of accomplishing this?
This is indeed possible and is described in the common patterns targeting hosts and groups documentation of Ansible.
So targeting two groups is as simple as hosts: group1:group2, and your playbook becomes:
- name: Do things on hosts
  hosts: "{{ specific_hosts | default('development:qa') }}"
And if you would rather achieve this via the inventory, that is also possible, as described in the documentation examples:
So in your case, it would be:
[development]
1.2.3.4
[qa]
2.3.4.5
[dev_plus_qa:children]
development
qa
And then, as a playbook:
- name: Do things on hosts
  hosts: "{{ specific_hosts | default('dev_plus_qa') }}"
I'm completely new to Ansible so I'm still struggling with its way of working... I have an inventory file with several hosts sorted by environment and function:
[PRO-OSB]
host-1
host-2
[PRO-WL]
host-3
host-4
[PRO:children]
PRO-OSB
PRO-WL
But I think sometimes I might need to run playbooks specifying even more, i.e. targeting by environment, function, cluster of hosts and the application running on the host. In summary, every host must have 4 "categories": environment, function, cluster and app.
How could I achieve this without having to constantly repeat entries?
How could I achieve this without having to constantly repeat entries?
You can't. You have to declare, in each group that needs it, the machines that belong to it. So if a machine belongs to 4 distinct groups (not taking parent groups into account), you'll have to declare that host in the 4 relevant groups.
Ansible could have chosen the other way around (i.e. list for each host the groups it belongs to), but that is not the approach that was chosen, and it would be just as verbose.
To make things easier and, IMO, a bit more secure, you can split your inventory into several environment inventories (prod, dev, ...) so that you remove one level of complexity inside each inventory. The downside is that you cannot target all your envs at once with such a setup.
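For example, with one inventory file per environment (the paths below are just an illustration):

ansible-playbook -i inventories/prod/hosts site.yml
ansible-playbook -i inventories/dev/hosts site.yml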
If your inventory is big and targeting some sort of cluster/cloud environment (vsphere, aws...), dynamic inventories can help.
Q: "Every host must have 4 "categories": environment, function, cluster and app. How could I achieve this without having to constantly repeat entries?"
A: It's possible to declare default options in a group's vars section ([PRO:vars] below) and override them with host-specific options. See How variables are merged. For example, the inventory
$ cat hosts
[PRO_OSB]
test_01 my_cluster='cluster_A'
test_02
[PRO_WL]
test_03 my_cluster='cluster_A'
test_04
[PRO:children]
PRO_OSB
PRO_WL
[PRO:vars]
my_environment='default_env'
my_function='default_fnc'
my_cluster='default_cluster'
my_app='default_app'
with the playbook
- hosts: PRO
  gather_facts: false
  tasks:
    - debug:
        msg: "{{ inventory_hostname }}
              {{ my_environment }}
              {{ my_function }}
              {{ my_cluster }}
              {{ my_app }}"
gives the following (the my_cluster variable of test_01 and test_03 was overridden by the host's own value):
"msg": "test_01 default_env 'default_fnc cluster_A default_app"
"msg": "test_02 default_env 'default_fnc default_cluster default_app"
"msg": "test_04 default_env 'default_fnc default_cluster default_app"
"msg": "test_03 default_env 'default_fnc cluster_A default_app"
Q: "Run playbooks specifying even more, i.e attending to the environment, its function, cluster of hosts and application."
It's possible to create dynamic groups with the add_host module and select the hosts as needed. For example, create a new group cluster_A in the first play and use it in the next one:
- hosts: all
  tasks:
    - add_host:
        name: "{{ item }}"
        group: cluster_A
      loop: "{{ hostvars|
                dict2items|
                json_query('[?value.my_cluster == `cluster_A`].key') }}"
      delegate_to: localhost
      run_once: true

- hosts: cluster_A
  tasks:
    - debug:
        var: inventory_hostname
gives
ok: [test_01] => {
"inventory_hostname": "test_01"
}
ok: [test_03] => {
"inventory_hostname": "test_03"
}
For folks who might need something like this:
I ended up using this format for my inventory file when I was trying to establish multiple variables for each host for delivering files based on role or team.
inventory/support/hosts
support:
  hosts:
    qahost:
      host_role:
        - waiter
        - dishwasher
      host_teams:
        - ops
        - sales
    awshost:
      host_role:
        - waiter
      host_teams:
        - dev
    testhost1:
      host_role:
        - dishwasher
      host_teams:
        - ops
        - dev
    testhost2:
      host_role:
        - dishwasher
        - boss
      host_teams:
        - ops
        - dev
        - sales
and referenced them in this play:
- name: parse inventory file host variables
  debug:
    msg: |
      - "role attribute of host: {{ item }} is {{ hostvars[item]['host_role'] }}"
      - "team attribute of host: {{ item }} is {{ hostvars[item]['host_teams'] }}"
  with_items: "{{ inventory_hostname }}"
  when: hostvars[item]['host_teams'] is contains(team_name)
The ansible-playbook command with a custom extra var that is passed to the playbook and matched in the inventory via the when conditional:
ansible-playbook -i inventory/support/hosts -e 'team_name="sales"' playbook.yml
I have a dynamic inventory that assigns a "fact" to each host, called a 'cluster_number'.
The cluster numbers are not known in advance, but there is one or more hosts that are assigned the same number. The inventory has hundreds of hosts and 2-3 dozen unique cluster numbers.
I want to run a task for all hosts in the inventory, however I want to execute it only once per each group of hosts sharing the same 'cluster_number' value. It does not matter which specific host is selected for each group.
I feel like there should be a relatively straightforward way to do this with Ansible, but I can't figure out how. I've looked at group_by, when, loop, delegate_to, etc., but with no success yet.
An option would be to
group_by the cluster_number
run_once a loop over cluster numbers
and pick the first host from each group.
For example given the hosts
[test]
test01 cluster_number='1'
test02 cluster_number='1'
test03 cluster_number='1'
test04 cluster_number='1'
test05 cluster_number='1'
test06 cluster_number='2'
test07 cluster_number='2'
test08 cluster_number='2'
test09 cluster_number='3'
test10 cluster_number='3'
[test:vars]
cluster_numbers=['1','2','3']
the following playbook
- hosts: all
  gather_facts: no
  tasks:
    - group_by: key=cluster_{{ cluster_number }}
    - debug: var=groups['cluster_{{ item }}'][0]
      loop: "{{ cluster_numbers }}"
      run_once: true
gives
> ansible-playbook test.yml | grep groups
"groups['cluster_1'][0]": "test01",
"groups['cluster_2'][0]": "test06",
"groups['cluster_3'][0]": "test09",
To execute tasks on the targets, use include_tasks (instead of debug in the loop above) and delegate_to the target:
- set_fact:
    my_group: "cluster_{{ item }}"
- command: hostname
  delegate_to: "{{ groups[my_group][0] }}"
Note: Collect the list cluster_numbers from the inventory
cluster_numbers: "{{ hostvars|json_query('*.cluster_number')|unique }}"
If you don't mind cluttering the play logs, here's a way:
- hosts: all
  gather_facts: no
  serial: 1
  tasks:
    - group_by:
        key: "single_{{ cluster_number }}"
      when: groups['single_'+cluster_number] | default([]) | count == 0

- hosts: single_*
  gather_facts: no
  tasks:
    - debug:
        msg: "{{ inventory_hostname }}"
serial: 1 is crucial in the first play so that the when statement is re-evaluated for every host.
After the first play you'll have N groups, one per cluster, each containing only a single host.
Here's the basic use case:
I have an NGINX reverse proxy I want to set up, so I specify a play that only runs on the "nginx" group.
However, in order to know the IPs to reverse proxy to, I need to gather facts from the "upstreams" group. This doesn't happen, since the play does not run setup on the "upstreams" hosts.
This answer contains a solution I've used before, but I'd like to be able to have it all self contained in a single hosts play that I can run independently of others.
Use Delegated Facts, pre_tasks, and delegate the facts to the hosts they belong to.
- hosts: nginx
  become: yes
  tags:
    - nginx
  vars:
    listen_address: "x.x.x.x"
  pre_tasks:
    - name: 'gather upstream facts.'
      setup:
      delegate_to: "{{ item }}"
      delegate_facts: True
      with_items: "{{ groups['upstreams'] }}"
  roles:
    - role: nginx
      upstreams: "{{ groups['upstreams'] | map('extract', hostvars, ['ansible_all_ipv4_addresses']) | ipaddr('x.x.x.x') | first | list }}"
I know that one can add a host with the following task:
- name: Add new instance to host group
  add_host:
    hostname: '{{ item.public_ip }}'
    groupname: "tag_Name_api_production"
  with_items: "{{ ec2.instances }}"
But I can't seem to find a way to remove a host from inventory. Is there any way to do this?
Unfortunately, it seems that you can't do this using Ansible 2. There is no module called remove_host or anything similar.
However, using Ansible 2 you can refresh your inventory mid-play:
- meta: refresh_inventory
Have a look at this question
Another idea might be to filter hosts beforehand. Try adding them to a group, and then excluding this group in a later play, e.g.:
- hosts: '!databases'
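A minimal sketch of that idea (the hosts_to_remove variable is just a hypothetical list of hosts you want to drop):

- hosts: localhost
  gather_facts: false
  tasks:
    # collect the hosts we no longer want to target into the databases group
    - add_host:
        name: "{{ item }}"
        groups: databases
      loop: "{{ hosts_to_remove | default([]) }}"

# subsequent plays skip everything in that group
- hosts: all:!databases
  tasks:
    - debug:
        msg: "{{ inventory_hostname }} is still in the play"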
You can just stop the play for those hosts:
- name: Remove unwanted hosts from play_hosts
  meta: end_host
  when: unwanted
This assumes, of course, that the unwanted variable exists on all the hosts and is set properly.
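For example, group membership could serve as the condition instead of a dedicated variable (a sketch; databases is just an example group name):

- name: Remove database hosts from the rest of the play
  meta: end_host
  when: "'databases' in group_names"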
You cannot remove hosts, but you can choose to run only on a newly created group.
- name: kubectl
  hosts: localhost
  gather_facts: false
  tasks:
    - add_host:
        name: nextcloud
        ansible_connection: kubectl
        ansible_kubectl_context: cluster
        ansible_kubectl_namespace: default
        ansible_kubectl_pod: nextcloud-75fc7f5c6f-hxrq6
        groupname: "pods"

# only run on pods
- hosts: pods
  gather_facts: false
  tasks:
    - raw: pwd
      register: raw_result
    - debug:
        msg: "{{ raw_result.stdout_lines[0] }}"
We typically do it like this in a playbook using multiple hosts: sections.
- hosts: auth:!ocp
  roles:
    - ntp-server

- hosts: all:!auth:!ocp
  roles:
    - ntp-client
This removes those groups of hosts from consideration via the !group mechanism. Specifically, in the first play we're excluding the ocp group and in the second we're excluding both the auth and ocp groups.
References
https://docs.ansible.com/ansible/latest/user_guide/intro_patterns.html
I just ran into the same problem. I have tests which run converge for a given playbook (which I can't modify), and then I need to run the same playbook with a smaller set of hosts in the group.
My solution is:
Let's say you have a group target that you want to make smaller.
Put the smallest number of hosts in the target group.
Put the additional hosts into a target_addon group.
- hosts: target_addon
  tasks:
    - add_host:
        name: '{{ inventory_hostname }}'
        groups: [target]

- import_playbook: converge.yaml # uses group `target` with the added hosts

# remove the hosts added via target_addon from group `target` by reloading the inventory
- hosts: localhost
  tasks:
    - meta: refresh_inventory

- import_playbook: converge.yaml # uses group `target` without the added hosts