I know that one can add a host with the following task:
- name: Add new instance to host group
  add_host:
    hostname: '{{ item.public_ip }}'
    groupname: "tag_Name_api_production"
  with_items: ec2.instances
But I can't seem to find a way to remove a host from inventory. Is there any way to do this?
Unfortunately, it seems that you can't do this with Ansible 2. There is no remove_host module or anything like it.
However, using Ansible 2 you can refresh your inventory mid-play:
- meta: refresh_inventory
Have a look at this question
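For example, a rough sketch of using it (this assumes a static inventory file, here ./hosts, that the play is allowed to edit, and a hypothetical host entry old-web-01):
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Drop the host from the inventory source
      lineinfile:
        path: ./hosts
        line: old-web-01          # hypothetical entry in the inventory file
        state: absent

    - name: Re-read the inventory so later plays no longer see the host
      meta: refresh_inventory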
Another idea might be to filter hosts beforehand. Try adding them to a group, and then excluding that group later in the play, e.g.:
- hosts: '!databases'
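A fuller sketch of that idea might look like this (how hosts end up in the databases group is an assumption; here they are tagged by name from localhost):
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Tag hosts that should be excluded from later plays
      add_host:
        name: "{{ item }}"
        groups: databases
      loop: "{{ groups['all'] }}"
      when: "'db' in item"        # hypothetical selection criterion

- hosts: '!databases'
  tasks:
    - debug:
        msg: "This host was not filtered out"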
You can just stop the play for those hosts:
- name: Remove unwanted hosts from play_hosts
  meta: end_host
  when: unwanted
This assumes, of course, that the unwanted variable exists on all the hosts and is set properly.
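For example, a minimal sketch where unwanted is derived from a fact (the criterion is purely an assumption; meta: end_host needs Ansible 2.8+):
- hosts: all
  tasks:
    - name: Decide which hosts should leave the play
      set_fact:
        unwanted: "{{ ansible_distribution != 'CentOS' }}"   # hypothetical criterion

    - name: Remove unwanted hosts from play_hosts
      meta: end_host
      when: unwanted | bool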
You can't remove hosts, but you can choose to run only on a newly created group.
- name: kubectl
  hosts: localhost
  gather_facts: false
  tasks:
    - add_host:
        name: nextcloud
        ansible_connection: kubectl
        ansible_kubectl_context: cluster
        ansible_kubectl_namespace: default
        ansible_kubectl_pod: nextcloud-75fc7f5c6f-hxrq6
        groupname: "pods"

# only run on pods
- hosts: pods
  gather_facts: false
  tasks:
    - raw: pwd
      register: raw_result
    - debug:
        msg: "{{ raw_result.stdout_lines[0] }}"
We typically do it like this in a playbook using multiple hosts: sections.
- hosts: auth:!ocp
  roles:
    - ntp-server

- hosts: all:!auth:!ocp
  roles:
    - ntp-client
This removes groups of hosts from consideration via the !group mechanism. Specifically, in the first play we exclude the ocp group, and in the second we exclude both the auth and ocp groups.
References
https://docs.ansible.com/ansible/latest/user_guide/intro_patterns.html
I just ran into the same problem. I have tests which run converge for a given playbook (which I can't modify), and then I need to run the same playbook with a smaller set of hosts in the group.
My solution is:
Let's say you have a group target that you want to make smaller.
Put the smallest set of hosts in the target group.
Put the additional hosts into a target_addon group.
- hosts: target_addon
  tasks:
    - add_host:
        name: '{{ inventory_hostname }}'
        groups: [target]

- import_playbook: converge.yaml # uses group `target` with added hosts

# Removing hosts added from target_addon group from group target by reloading inventory
- hosts: localhost
  tasks:
    - meta: refresh_inventory

- import_playbook: converge.yaml # uses group `target` without added hosts
The calling playbook has:
- hosts: ssh_servers
  tasks:
    - import_tasks: create_files.yml
Then, in create_files.yml, I'd like to run some tasks on hosts other than ssh_servers, such as:
- hosts: other_servers
  tasks:
    - file:
I get: ERROR! conflicting action statements: hosts, tasks.
Is this because I'm trying to run against hosts that were never included in the calling task? Is there a way to accomplish this other than having the calling playbook contain:
- hosts:
    - ssh_servers
    - other_servers
  tasks:
    - import_tasks: create_files.yml
Thank you.
Is this because I'm trying to run against hosts that were never included in the calling task? Is there a way to accomplish this other than in the calling playbook?
I believe the answer is yes, although it'll be weird and could cause some confusion for folks who interact with your playbook later.
Given a hypothetical create_files.yml of:
- name: create /tmp/hello_world on hosts "not_known_at_launch_time"
  file:
    path: /tmp/hello_world
    state: touch
  delegate_to: '{{ item }}'
  with_items: '{{ groups["not_known_at_launch_time"] }}'
Then the glue needed to bridge them together is the dynamic creation of a group plus that delegate_to: keyword:
- hosts: ssh_hosts
  tasks:
    - add_host:
        groups: not_known_at_launch_time
        name: secret-host-0
        ansible_host: 192.168.1.1 # or whatever
        # ... other hostvars here ...
    - include_tasks: create_files.yml
It may be possible to combine those inside create_files.yml, via some shared vars: that say which host and IP should be added to the magic group name, which also has the benefit of keeping the magic group name localized to the file that consumes it.
BE AWARE: I did actually test this, but not extensively, so there may be some weird things, such as needing run_once: yes on the tasks to keep them from running groups.ssh_hosts|length times, or similar.
As Vladimir correctly pointed out, what you actually want to happen is to make that relationship formal:
- hosts: ssh_hosts
  tasks:
    # ... whatever tasks you had before
    - add_host: ... as before ...

- hosts: anonymous_group_name_goes_here
  tasks:
    - include_tasks: create_files.yml

- hosts: ssh_hosts
  tasks:
    - debug:
        msg: and now you are back to the ssh_hosts to pick up what they were supposed to be doing when you stopped to post on SO
I have the following setup for Ansible, and I would like to parameterize a filter that will loop and filter out specific hosts.
- name: run on hosts
  hosts: "{{ item }}"
  roles:
    - directory/role-name
  with_items:
    - us-east-1a
    - us-east-1b
    - us-east-1c
The result would be that the role called role-name would be first run on us-east-1a hosts, then us-east-1b... etc.
The above simply errors out with
ERROR! 'with_items' is not a valid attribute for a Play
Is there a way to accomplish what I am trying to do, which is chunking my host list into groups, and running the same role against them, one at a time?
The following achieves the result I am looking for, but is clunky, and not dynamic in length.
- name: run on us-east-1a
  hosts: "us-east-1a"
  roles:
    - my-role

- name: run on us-east-1b
  hosts: "us-east-1b"
  roles:
    - my-role

- name: run on us-east-1c
  hosts: "us-east-1c"
  roles:
    - my-role
I think the only way to (1) have common code and (2) serialise play execution per group of hosts (with targets inside a group running in parallel) would be to split your playbook in two:
playbook-main.yml
---
- import_playbook: playbook-sub.yml
  vars:
    host_group_to_run: us-east-1a

- import_playbook: playbook-sub.yml
  vars:
    host_group_to_run: us-east-1b

- import_playbook: playbook-sub.yml
  vars:
    host_group_to_run: us-east-1c
playbook-sub.yml
- hosts: "{{ host_group_to_run }}"
roles:
- my-role
# other common code
If you wanted to serialise per host, there is a serial declaration that could be used in conjunction with this suggestion, but despite your comments and edit it's unclear what you need, because at times you refer to us-east-1a as a "host" in singular form, and at other times as a "group of hosts" or an "availability zone".
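If per-host serialisation is what you need, a sketch of playbook-sub.yml with serial added might look like this (one host of the current group at a time):
- hosts: "{{ host_group_to_run }}"
  serial: 1          # run the play on one host of the group at a time
  roles:
    - my-role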
Will host patterns do the job?
- name: run on us-east-1a
  hosts: us-east-1a,us-east-1b,us-east-1c
  roles:
    - my-role
Update: #techraf has opened my eyes with his comment – host pattern alone will not do the job.
It will just concatenate all hosts from all groups.
But in a predictable way, which in some cases can be used to iterate hosts in every group separately.
See this answer for details.
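As a rough illustration of that batching idea, assuming every zone group happens to contain the same number of hosts (say two), serial can make each batch line up with one group:
- name: run role zone by zone
  hosts: us-east-1a,us-east-1b,us-east-1c
  serial: 2                     # assumed size of each zone group
  roles:
    - my-role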
I'm trying to look for a text pattern in a load balancer host from a worker host, using the following:
- name: A play
  hosts: workers
  tasks:
    - name: Look for text pattern in delegated host
      delegate_to: load-balancer-host
      find:
        paths: "$ENVIRONMENT_VARIABLE/subdir"
        file_type: file
        patterns: file.pattern
        contains: 'text pattern'
      register: aVariable
The problem is that I can't find any way to make $ENVIRONMENT_VARIABLE (this variable exists on the load-balancer-host) available to the play (it contains the directory, on load-balancer-host, in which I want to search). ansible_env is only available for the workers, not for the load-balancer-host.
I have tried...
- name: A play
  hosts: workers
  tasks:
    - name: set fact
      set_fact:
        env_var: "{{ lookup('env', 'ENVIRONMENT_VARIABLE') }}"
      delegate_to: load-balancer-host

    - name: debug
      debug:
        msg: "{{ env_var }}"
... too, but it prints an empty string.
For users running Ansible 1.x, see kfreezy's answer.
For users running Ansible 2.x, I have found the following solution:
- hosts: workers
  tasks:
    - name: gather facts from lb
      setup:
      delegate_to: load-balancer-host
      delegate_facts: false
This task makes $ENVIRONMENT_VARIABLE available in every worker's ansible_env. If you want to make $ENVIRONMENT_VARIABLE available in the load-balancer-host's own ansible_env instead, set delegate_facts to True.
More info in the Ansible docs.
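A usage sketch under this approach: after the delegated setup, ansible_env on each worker holds the load balancer's environment, so the original find task can reference it directly.
- hosts: workers
  tasks:
    - name: gather facts from lb
      setup:
      delegate_to: load-balancer-host
      delegate_facts: false

    - name: Look for text pattern on the load balancer
      delegate_to: load-balancer-host
      find:
        paths: "{{ ansible_env.ENVIRONMENT_VARIABLE }}/subdir"
        file_type: file
        patterns: file.pattern
        contains: 'text pattern'
      register: aVariable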
Personally, I would simplify your playbook by either adding the $ENVIRONMENT_VARIABLE as a variable in Ansible (probably in the host_vars for load-balancer-host) or by running a play against load-balancer-host rather than using delegate_to. It might not make sense, though, depending on what the other tasks are.
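A sketch of the host_vars variant (the variable name lb_search_root and its value are assumptions standing in for $ENVIRONMENT_VARIABLE):
# host_vars/load-balancer-host.yml
lb_search_root: /opt/app/data     # hypothetical value of $ENVIRONMENT_VARIABLE

# then, in the play against workers:
#   paths: "{{ hostvars['load-balancer-host'].lb_search_root }}/subdir"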
Here's a direct answer to your question though.
load-balancer-host's ansible_env will only be defined when the host is included in the playbook. You can add another play against load-balancer-host that just gathers facts. Then you can reference the facts from load-balancer-host using hostvars in your subsequent plays against workers. Here's what it would look like:
- hosts: load-balancer-host
  tasks:
    - name: print debug message
      debug:
        msg: "this play is for gathering facts on the LB"

- name: A play
  hosts: workers
  tasks:
    - name: Look for text pattern in delegated host
      delegate_to: load-balancer-host
      find:
        paths: "{{ hostvars['load-balancer-host'].ansible_env.ENVIRONMENT_VARIABLE }}/subdir"
        file_type: file
        patterns: file.pattern
        contains: 'text pattern'
      register: aVariable
I am not sure how to find the first Ansible hostname from group_names. Could you kindly advise me how to do it?
hosts
[webservers]
server1
server2
server3
[webservers-el7]
server4
server5
server6
And I have 2 different playbooks, one for each host group:
playbook1.yml
- name: deploy app
  hosts: webservers
  serial: 8
  roles:
    - roles1
playbook2.yml
- name: deploy app
  hosts: webservers-el7
  serial: 8
  roles:
    - roles1
The problem is that I have a task delegated to the first host of each group. Previously I only used the webservers group, so it was much easier, using the task below:
- name: syncing web files to {{ version_dir }}
  synchronize:
    src: "{{ build_dir }}"
    dest: "{{ version_dir }}"
    rsync_timeout: 60
  delegate_to: "{{ groups.webservers | first }}"
If I have 2 different group names, how can I select the first host of each group, so it can be more dynamic?
If you want the first host of the current play to act as a kind of master host to sync from, I'd recommend another approach: use one of the play_hosts or ansible_play_hosts variables (depending on your Ansible version). See magic variables.
Like delegate_to: "{{ play_hosts | first }}".
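Applied to the sync task from the question, that would look roughly like this (on newer Ansible versions use ansible_play_hosts instead of play_hosts):
- name: syncing web files to {{ version_dir }}
  synchronize:
    src: "{{ build_dir }}"
    dest: "{{ version_dir }}"
    rsync_timeout: 60
  delegate_to: "{{ play_hosts | first }}"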
The thing is, when you say hosts: webservers-el7 to Ansible, webservers-el7 is a pattern. Ansible searches for hosts that match this pattern and feeds them into the play. You could have written webservers-el* as well. So inside the play you don't have any variable that tells you "I'm running this play on hosts from group webservers-el7...". You can only make some guesses by analyzing the group_names and groups magic variables. But this becomes clumsy when a host is in several groups.
For hosts that are in a single group only, you may try: groups[group_names | first] | first
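For example, a quick sketch to see what that expression resolves to on each host:
- name: Show the first host of the (single) group this host belongs to
  debug:
    msg: "{{ groups[group_names | first] | first }}"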
To get any element from a group, use groups['group_name'][0...n].
This will get the first element of the group:
- debug: msg="{{ groups['group_name'][0] }}"
I have defined two groups of hosts: wmaster and wnodes. Each group runs in its own play:
- hosts: wmaster
  roles:
    - all
    - swarm-mode
  vars:
    - swarm_master: true

- hosts: wnodes
  roles:
    - all
    - swarm-mode
I use host variables (swarm_master) to define different behavior of some role.
Now, my first play performs some initialization and I need to share data with the nodes. What I did was use set_fact in the first play, and then look it up in the second play:
- set_fact:
    docker_worker_token: "{{ hostvars[swarm_master_ip].foo }}"
I don't like using the swarm_master_ip. How about adding a dummy host global with e.g. address 1.1.1.1 that does not get any role, and serves just for holding the global facts/variables?
If you're using Ansible 2 then you can make use of delegate_facts during your first play:
- name: set fact on swarm nodes
  set_fact: docker_worker_token="{{ some_var }}"
  delegate_to: "{{ item }}"
  delegate_facts: True
  with_items: "{{ groups['wnodes'] }}"
This should delegate the set_fact task to every host in the wnodes group and will also delegate the resulting fact to those hosts, instead of setting the fact on the inventory host currently being targeted by the first play.
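The second play can then read the fact directly on each node, e.g. this minimal sketch:
- hosts: wnodes
  tasks:
    - debug:
        var: docker_worker_token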
How about to add a dummy host: global
I have actually found this suggestion to be quite useful in certain circumstances.
---
- hosts: my_server
  tasks:
    # create server_fact somehow
    - add_host:
        name: global
        my_server_fact: "{{ server_fact }}"

- hosts: host_group
  tasks:
    - debug: var=hostvars['global']['my_server_fact']